* [PATCH v6 0/6] Boot time cpupools
@ 2022-04-08  8:45 Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
                   ` (5 more replies)
  0 siblings, 6 replies; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Wei Liu, Anthony PERARD,
	Juergen Gross, Dario Faggioli, George Dunlap, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Volodymyr Babchuk

This series introduces a feature for Xen to create cpupools at boot time; the
feature is enabled through a Kconfig option that is disabled by default.
The boot time cpupool feature relies on the device tree to describe the
cpupools.
The series also introduces the possibility to assign a dom0less guest to a
cpupool at boot time.

Here follows an example where Xen is built with CONFIG_BOOT_TIME_CPUPOOLS=y.

From the DT:

  [...]

  a72_0: cpu@0 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x0>;
    device_type = "cpu";
    [...]
  };

  a72_1: cpu@1 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x1>;
    device_type = "cpu";
    [...]
  };

  a53_0: cpu@100 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x100>;
    device_type = "cpu";
    [...]
  };

  a53_1: cpu@101 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x101>;
    device_type = "cpu";
    [...]
  };

  a53_2: cpu@102 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x102>;
    device_type = "cpu";
    [...]
  };

  a53_3: cpu@103 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x103>;
    device_type = "cpu";
    [...]
  };

  chosen {
    #size-cells = <0x1>;
    #address-cells = <0x1>;
    xen,dom0-bootargs = "...";
    xen,xen-bootargs = "...";

    cpupool0 {
      compatible = "xen,cpupool";
      cpupool-cpus = <&a72_0 &a72_1>;
      cpupool-sched = "credit2";
    };

    cp1: cpupool1 {
      compatible = "xen,cpupool";
      cpupool-cpus = <&a53_0 &a53_1 &a53_2 &a53_3>;
    };

    module@0 {
      reg = <0x80080000 0x1300000>;
      compatible = "multiboot,module";
    };

    domU1 {
      #size-cells = <0x1>;
      #address-cells = <0x1>;
      compatible = "xen,domain";
      cpus = <1>;
      memory = <0 0xC0000>;
      vpl011;
      domain-cpupool = <&cp1>;

      module@92000000 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <0x92000000 0x1ffffff>;
        bootargs = "...";
      };
    };
  };

  [...]

The example DT instructs Xen to create two cpupools: the one with id 0 has two
physical cpus and the one with id 1 has four physical cpus. The first cpupool
uses the credit2 scheduler while the second uses the default Xen scheduler, and
from the /chosen node we can see that a dom0less guest will be started on the
second cpupool.

In this particular case Xen must boot with different types of cpus, so the
boot argument hmp-unsafe=1 must be passed.

Luca Fancellu (6):
  tools/cpupools: Give a name to unnamed cpupools
  xen/sched: create public function for cpupools creation
  xen/sched: retrieve scheduler id by name
  xen/cpupool: Create different cpupools at boot time
  arm/dom0less: assign dom0less guests to cpupools
  xen/cpupool: Allow cpupool0 to use different scheduler

 docs/misc/arm/device-tree/booting.txt  |   5 +
 docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++
 tools/helpers/xen-init-dom0.c          |  37 +++-
 tools/libs/light/libxl_utils.c         |   3 +-
 xen/arch/arm/domain_build.c            |  14 +-
 xen/arch/arm/include/asm/smp.h         |   3 +
 xen/common/Kconfig                     |   7 +
 xen/common/Makefile                    |   1 +
 xen/common/boot_cpupools.c             | 234 +++++++++++++++++++++++++
 xen/common/domain.c                    |   2 +-
 xen/common/sched/core.c                |  40 +++--
 xen/common/sched/cpupool.c             |  35 +++-
 xen/include/public/domctl.h            |   4 +-
 xen/include/xen/sched.h                |  53 ++++++
 14 files changed, 550 insertions(+), 28 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/common/boot_cpupools.c

-- 
2.17.1



^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  2022-04-08  9:54   ` Anthony PERARD
  2022-04-08  8:45 ` [PATCH v6 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Wei Liu, Anthony PERARD, Juergen Gross

With the introduction of boot time cpupools, Xen can create many
different cpupools at boot time other than the cpupool with id 0.

Since these newly created cpupools don't have an entry in Xenstore,
create the entry using the xen-init-dom0 helper with the usual naming
convention: Pool-<cpupool id>.

Given the change, remove the check for poolid == 0 from
libxl_cpupoolid_to_name(...).

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v6:
- Reworked loop to have only one error path (Anthony)
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- no changes, add R-by
Changes in v2:
 - Remove unused variable, moved xc_cpupool_infofree
   ahead to simplify the code, use asprintf (Juergen)
---
 tools/helpers/xen-init-dom0.c  | 37 +++++++++++++++++++++++++++++++++-
 tools/libs/light/libxl_utils.c |  3 +--
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index c99224a4b607..37eff8868f25 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -43,7 +43,10 @@ int main(int argc, char **argv)
     int rc;
     struct xs_handle *xsh = NULL;
     xc_interface *xch = NULL;
-    char *domname_string = NULL, *domid_string = NULL;
+    char *domname_string = NULL, *domid_string = NULL,
+         *pool_path = NULL, *pool_name = NULL;
+    xc_cpupoolinfo_t *xcinfo;
+    unsigned int pool_id = 0;
     libxl_uuid uuid;
 
     /* Accept 0 or 1 argument */
@@ -114,9 +117,41 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    /* Create an entry in xenstore for each cpupool on the system */
+    do {
+        xcinfo = xc_cpupool_getinfo(xch, pool_id);
+        if (xcinfo != NULL) {
+            if (xcinfo->cpupool_id != pool_id)
+                pool_id = xcinfo->cpupool_id;
+            xc_cpupool_infofree(xch, xcinfo);
+            if (asprintf(&pool_path, "/local/pool/%d/name", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool path\n");
+                rc = 1;
+                goto out;
+            }
+            if (asprintf(&pool_name, "Pool-%d", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool name\n");
+                rc = 1;
+                goto out;
+            }
+            pool_id++;
+            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
+                          strlen(pool_name))) {
+                fprintf(stderr, "cannot set pool name\n");
+                rc = 1;
+                goto out;
+            }
+            free(pool_name);
+            free(pool_path);
+            pool_path = pool_name = NULL;
+        }
+    } while(xcinfo != NULL);
+
     printf("Done setting up Dom0\n");
 
 out:
+    free(pool_path);
+    free(pool_name);
     free(domid_string);
     free(domname_string);
     xs_close(xsh);
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index b91c2cafa223..81780da3ff40 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
 
     snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
     s = xs_read(ctx->xsh, XBT_NULL, path, &len);
-    if (!s && (poolid == 0))
-        return strdup("Pool-0");
+
     return s;
 }
 
-- 
2.17.1




* [PATCH v6 2/6] xen/sched: create public function for cpupools creation
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Juergen Gross, Dario Faggioli,
	George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall,
	Stefano Stabellini, Wei Liu

Create a new public function to create cpupools. It can take as a
parameter the scheduler id, or a negative value meaning that the
default Xen scheduler will be used.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Changes in v6:
- add R-by
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- Fixed comment (Andrew)
Changes in v2:
- cpupool_create_pool doesn't check anymore for pool id uniqueness
  before calling cpupool_create. Modified commit message accordingly
---
 xen/common/sched/cpupool.c | 15 +++++++++++++++
 xen/include/xen/sched.h    | 16 ++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a6da4970506a..89a891af7076 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1219,6 +1219,21 @@ static void cpupool_hypfs_init(void)
 
 #endif /* CONFIG_HYPFS */
 
+struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
+{
+    struct cpupool *pool;
+
+    if ( sched_id < 0 )
+        sched_id = scheduler_get_default()->sched_id;
+
+    pool = cpupool_create(pool_id, sched_id);
+
+    BUG_ON(IS_ERR(pool));
+    cpupool_put(pool);
+
+    return pool;
+}
+
 static int __init cf_check cpupool_init(void)
 {
     unsigned int cpu;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 406d9bc610a4..b07717987434 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1147,6 +1147,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
+
+/*
+ * cpupool_create_pool - Creates a cpupool
+ * @pool_id: id of the pool to be created
+ * @sched_id: id of the scheduler to be used for the pool
+ *
+ * Creates a cpupool with pool_id id.
+ * The sched_id parameter identifies the scheduler to be used; if it is
+ * negative, the default scheduler of Xen will be used.
+ *
+ * returns:
+ *     pointer to the struct cpupool just created, or Xen will panic in case of
+ *     error
+ */
+struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id);
+
 extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.17.1




* [PATCH v6 3/6] xen/sched: retrieve scheduler id by name
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  2022-04-08 10:29   ` Dario Faggioli
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, George Dunlap, Dario Faggioli,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu

Add a static function to retrieve the scheduler pointer using the
scheduler name.

Add a public function to retrieve the scheduler id by the scheduler
name that makes use of the new static function.

Take the occasion to replace the open coded scheduler search in
scheduler_init with the new static function.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Changes in v6:
- no changes
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- add R-by
Changes in v2:
- replace open coded scheduler search in scheduler_init (Juergen)
---
 xen/common/sched/core.c | 40 ++++++++++++++++++++++++++--------------
 xen/include/xen/sched.h | 11 +++++++++++
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 19ab67818106..48ee01420fb8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2947,10 +2947,30 @@ void scheduler_enable(void)
     scheduler_active = true;
 }
 
+static inline
+const struct scheduler *__init sched_get_by_name(const char *sched_name)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NUM_SCHEDULERS; i++ )
+        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
+            return schedulers[i];
+
+    return NULL;
+}
+
+int __init sched_get_id_by_name(const char *sched_name)
+{
+    const struct scheduler *scheduler = sched_get_by_name(sched_name);
+
+    return scheduler ? scheduler->sched_id : -1;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
+    const struct scheduler *scheduler;
     int i;
 
     scheduler_enable();
@@ -2981,25 +3001,17 @@ void __init scheduler_init(void)
                    schedulers[i]->opt_name);
             schedulers[i] = NULL;
         }
-
-        if ( schedulers[i] && !ops.name &&
-             !strcmp(schedulers[i]->opt_name, opt_sched) )
-            ops = *schedulers[i];
     }
 
-    if ( !ops.name )
+    scheduler = sched_get_by_name(opt_sched);
+    if ( !scheduler )
     {
         printk("Could not find scheduler: %s\n", opt_sched);
-        for ( i = 0; i < NUM_SCHEDULERS; i++ )
-            if ( schedulers[i] &&
-                 !strcmp(schedulers[i]->opt_name, CONFIG_SCHED_DEFAULT) )
-            {
-                ops = *schedulers[i];
-                break;
-            }
-        BUG_ON(!ops.name);
-        printk("Using '%s' (%s)\n", ops.name, ops.opt_name);
+        scheduler = sched_get_by_name(CONFIG_SCHED_DEFAULT);
+        BUG_ON(!scheduler);
+        printk("Using '%s' (%s)\n", scheduler->name, scheduler->opt_name);
     }
+    ops = *scheduler;
 
     if ( cpu_schedule_up(0) )
         BUG();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b07717987434..b527f141a1d3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -758,6 +758,17 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
+
+/*
+ * sched_get_id_by_name - retrieves a scheduler id given a scheduler name
+ * @sched_name: scheduler name as a string
+ *
+ * returns:
+ *     positive value being the scheduler id, on success
+ *     negative value if the scheduler name is not found.
+ */
+int sched_get_id_by_name(const char *sched_name);
+
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.17.1




* [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
                   ` (2 preceding siblings ...)
  2022-04-08  8:45 ` [PATCH v6 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  2022-04-08  8:56   ` Jan Beulich
                     ` (4 more replies)
  2022-04-08  8:45 ` [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
  2022-04-08  8:45 ` [PATCH v6 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
  5 siblings, 5 replies; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Introduce a way to create different cpupools at boot time. This is
particularly useful on Arm big.LITTLE systems, where there might be the
need to have different cpupools for each type of core; systems using
NUMA can likewise have a different cpupool for each node.

On Arm, the feature relies on the device tree specification of the
cpupools to build the pools and assign cpus to them.

ACPI is not supported for this feature.

Documentation is created to explain the feature.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Changes in v6:
- Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
  panic if /chosen does not exists. Changed commit message (Julien)
- Add Juergen R-by for the xen/common/sched part that didn't change
Changes in v5:
- Fixed wrong variable name, swapped schedulers, add scheduler info
  in the printk (Stefano)
- introduce assert in cpupool_init and btcpupools_get_cpupool_id to
  harden the code
Changes in v4:
- modify Makefile to put in *.init.o, fixed stubs and macro (Jan)
- fixed docs, fix brakets (Stefano)
- keep cpu0 in Pool-0 (Julien)
- moved printk from btcpupools_allocate_pools to
  btcpupools_get_cpupool_id
- Add to docs constraint about cpu0 and Pool-0
Changes in v3:
- Add newline to cpupools.txt and removed "default n" from Kconfig (Jan)
- Fixed comment, moved defines, used global cpu_online_map, use
  HAS_DEVICE_TREE instead of ARM and place arch specific code in header
  (Juergen)
- Fix brakets, x86 code only panic, get rid of scheduler dt node, don't
  save pool pointer and look for it from the pool list (Stefano)
- Changed data structures to allow modification to the code.
Changes in v2:
- Move feature to common code (Juergen)
- Try to decouple dtb parse and cpupool creation to allow
  more way to specify cpupools (for example command line)
- Created standalone dt node for the scheduler so it can
  be used in future work to set scheduler specific
  parameters
- Use only auto generated ids for cpupools
---
 docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
 xen/arch/arm/include/asm/smp.h         |   3 +
 xen/common/Kconfig                     |   7 +
 xen/common/Makefile                    |   1 +
 xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
 xen/common/sched/cpupool.c             |  12 +-
 xen/include/xen/sched.h                |  14 ++
 7 files changed, 383 insertions(+), 1 deletion(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/common/boot_cpupools.c

diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
new file mode 100644
index 000000000000..40cc8135c66f
--- /dev/null
+++ b/docs/misc/arm/device-tree/cpupools.txt
@@ -0,0 +1,140 @@
+Boot time cpupools
+==================
+
+When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
+create cpupools during boot phase by specifying them in the device tree.
+ACPI is not supported for this feature.
+
+Cpupool specification nodes shall be direct children of the /chosen node.
+Each cpupool node contains the following properties:
+
+- compatible (mandatory)
+
+    Must always include the compatibility string: "xen,cpupool".
+
+- cpupool-cpus (mandatory)
+
+    Must be a list of device tree phandles to nodes describing cpus (i.e.
+    having device_type = "cpu"); it can't be empty.
+
+- cpupool-sched (optional)
+
+    Must be a string having the name of a Xen scheduler. Check the sched=<...>
+    boot argument for allowed values [1]. When this property is omitted, the Xen
+    default scheduler will be used.
+
+
+Constraints
+===========
+
+If no cpupools are specified, all cpus will be assigned to one implicitly
+created cpupool (Pool-0).
+
+If cpupool nodes are specified, but not every cpu brought up by Xen is
+assigned, all the unassigned cpus will be assigned to an additional cpupool.
+
+If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
+stop.
+
+The boot cpu must be assigned to Pool-0, so the cpupool containing that core
+will become Pool-0 automatically.
+
+
+Examples
+========
+
+On a system having two types of core, the following device tree specification
+will instruct Xen to create two cpupools:
+
+- The cpupool with id 0 will have 4 cpus assigned.
+- The cpupool with id 1 will have 2 cpus assigned.
+
+The following example can work only if hmp-unsafe=1 is passed among the Xen
+boot arguments; otherwise not all cores will be brought up by Xen and the
+cpupool creation process will stop Xen.
+
+
+a72_1: cpu@0 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x0>;
+        device_type = "cpu";
+        [...]
+};
+
+a72_2: cpu@1 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x1>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_1: cpu@100 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x100>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_2: cpu@101 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x101>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_3: cpu@102 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x102>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_4: cpu@103 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x103>;
+        device_type = "cpu";
+        [...]
+};
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "credit2";
+    };
+
+    [...]
+
+};
+
+
+A system having the cpupools specification below will instruct Xen to have three
+cpupools:
+
+- The cpupool Pool-0 will have 2 cpus assigned.
+- The cpupool Pool-1 will have 2 cpus assigned.
+- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
+  unassigned cpus, a53_3 and a53_4).
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "null";
+    };
+
+    [...]
+
+};
+
+[1] docs/misc/xen-command-line.pandoc
diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index af5a2fe65266..83c0cd69767b 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -34,6 +34,9 @@ extern void init_secondary(void);
 extern void smp_init_cpus(void);
 extern void smp_clear_cpu_maps (void);
 extern int smp_get_max_cpus (void);
+
+#define cpu_physical_id(cpu) cpu_logical_map(cpu)
+
 #endif
 
 /*
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index d921c74d615e..70aac5220e75 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -22,6 +22,13 @@ config GRANT_TABLE
 
 	  If unsure, say Y.
 
+config BOOT_TIME_CPUPOOLS
+	bool "Create cpupools at boot time"
+	depends on HAS_DEVICE_TREE
+	help
+	  Creates cpupools during boot time and assigns cpus to them. Cpupools
+	  options can be specified in the device tree.
+
 config ALTERNATIVE_CALL
 	bool
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index b1e076c30b81..218174ca8b6b 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
+obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.init.o
 obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
new file mode 100644
index 000000000000..9429a5025fc4
--- /dev/null
+++ b/xen/common/boot_cpupools.c
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/boot_cpupools.c
+ *
+ * Code to create cpupools at boot time.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ */
+
+#include <xen/sched.h>
+#include <asm/acpi.h>
+
+/*
+ * pool_cpu_map:   Index is logical cpu number, content is cpupool id, (-1) for
+ *                 unassigned.
+ * pool_sched_map: Index is cpupool id, content is scheduler id, (-1) for
+ *                 unassigned.
+ */
+static int __initdata pool_cpu_map[NR_CPUS]   = { [0 ... NR_CPUS-1] = -1 };
+static int __initdata pool_sched_map[NR_CPUS] = { [0 ... NR_CPUS-1] = -1 };
+static unsigned int __initdata next_pool_id;
+
+#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
+#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+
+static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_cpu_ids; i++ )
+    {
+        if ( cpu_physical_id(i) == hwid )
+            return i;
+    }
+
+    return -1;
+}
+
+static int __init
+get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
+{
+    int cpu_num;
+    const __be32 *prop;
+    unsigned int cpu_reg;
+
+    prop = dt_get_property(cpu_node, "reg", NULL);
+    if ( !prop )
+        return BTCPUPOOLS_DT_NODE_NO_REG;
+
+    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
+
+    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
+    if ( cpu_num < 0 )
+        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
+
+    return cpu_num;
+}
+
+static int __init check_and_get_sched_id(const char* scheduler_name)
+{
+    int sched_id = sched_get_id_by_name(scheduler_name);
+
+    if ( sched_id < 0 )
+        panic("Scheduler %s does not exist!\n", scheduler_name);
+
+    return sched_id;
+}
+
+void __init btcpupools_dtb_parse(void)
+{
+    const struct dt_device_node *chosen, *node;
+
+    if ( !acpi_disabled )
+        return;
+
+    chosen = dt_find_node_by_path("/chosen");
+    if ( !chosen )
+        panic("/chosen missing. Boot time cpupools can't be parsed from DT.\n");
+
+    dt_for_each_child_node(chosen, node)
+    {
+        const struct dt_device_node *phandle_node;
+        int sched_id = -1;
+        const char* scheduler_name;
+        unsigned int i = 0;
+
+        if ( !dt_device_is_compatible(node, "xen,cpupool") )
+            continue;
+
+        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
+            sched_id = check_and_get_sched_id(scheduler_name);
+
+        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        if ( !phandle_node )
+            panic("Missing or empty cpupool-cpus property!\n");
+
+        while ( phandle_node )
+        {
+            int cpu_num;
+
+            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+
+            if ( cpu_num < 0 )
+                panic("Error retrieving logical cpu from node %s (%d)\n",
+                      dt_node_name(node), cpu_num);
+
+            if ( pool_cpu_map[cpu_num] != -1 )
+                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
+
+            pool_cpu_map[cpu_num] = next_pool_id;
+
+            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        }
+
+        /* Save scheduler choice for this cpupool id */
+        pool_sched_map[next_pool_id] = sched_id;
+
+        /* Let Xen generate pool ids */
+        next_pool_id++;
+    }
+}
+
+void __init btcpupools_allocate_pools(void)
+{
+    unsigned int i;
+    bool add_extra_cpupool = false;
+    int swap_id = -1;
+
+    /*
+     * If there are no cpupools, the value of next_pool_id is zero, so the code
+     * below will assign every cpu to cpupool0 as the default behavior.
+     * When there are cpupools, the code below assigns all the unassigned
+     * cpus to a new pool (next_pool_id value is the last id + 1).
+     * In the same loop we check if there is any assigned cpu that is not
+     * online.
+     */
+    for ( i = 0; i < nr_cpu_ids; i++ )
+    {
+        if ( cpumask_test_cpu(i, &cpu_online_map) )
+        {
+            /* Unassigned cpu gets next_pool_id pool id value */
+            if ( pool_cpu_map[i] < 0 )
+            {
+                pool_cpu_map[i] = next_pool_id;
+                add_extra_cpupool = true;
+            }
+
+            /*
+             * Cpu0 must be in cpupool0, otherwise some operations like moving
+             * cpus between cpupools, cpu hotplug, destroying cpupools, shutdown
+             * of the host, might not work in a sane way.
+             */
+            if ( !i && (pool_cpu_map[0] != 0) )
+                swap_id = pool_cpu_map[0];
+
+            if ( swap_id != -1 )
+            {
+                if ( pool_cpu_map[i] == swap_id )
+                    pool_cpu_map[i] = 0;
+                else if ( pool_cpu_map[i] == 0 )
+                    pool_cpu_map[i] = swap_id;
+            }
+        }
+        else
+        {
+            if ( pool_cpu_map[i] >= 0 )
+                panic("Pool-%d contains cpu%u that is not online!\n",
+                      pool_cpu_map[i], i);
+        }
+    }
+
+    /* A swap happened, swap schedulers between cpupool id 0 and the other */
+    if ( swap_id != -1 )
+    {
+        int swap_sched = pool_sched_map[swap_id];
+
+        pool_sched_map[swap_id] = pool_sched_map[0];
+        pool_sched_map[0] = swap_sched;
+    }
+
+    if ( add_extra_cpupool )
+        next_pool_id++;
+
+    /* Create cpupools with selected schedulers */
+    for ( i = 0; i < next_pool_id; i++ )
+        cpupool_create_pool(i, pool_sched_map[i]);
+}
+
+unsigned int __init btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    ASSERT((cpu < NR_CPUS) && (pool_cpu_map[cpu] >= 0));
+
+    printk(XENLOG_INFO "Logical CPU %u in Pool-%d (Scheduler id: %d).\n",
+           cpu, pool_cpu_map[cpu], pool_sched_map[pool_cpu_map[cpu]]);
+
+    return pool_cpu_map[cpu];
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 89a891af7076..86a175f99cd5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1247,12 +1247,22 @@ static int __init cf_check cpupool_init(void)
     cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
+    btcpupools_dtb_parse();
+
+    btcpupools_allocate_pools();
+
     spin_lock(&cpupool_lock);
 
     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
 
     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+    {
+        unsigned int pool_id = btcpupools_get_cpupool_id(cpu);
+        struct cpupool *pool = cpupool_find_by_id(pool_id);
+
+        ASSERT(pool);
+        cpupool_assign_cpu_locked(pool, cpu);
+    }
 
     spin_unlock(&cpupool_lock);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b527f141a1d3..453e98f1cba8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1178,6 +1178,20 @@ extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void btcpupools_allocate_pools(void);
+unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
+void btcpupools_dtb_parse(void);
+
+#else /* !CONFIG_BOOT_TIME_CPUPOOLS */
+static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_dtb_parse(void) {}
+static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    return 0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
 
 /*
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
                   ` (3 preceding siblings ...)
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  2022-04-08  9:10   ` Jan Beulich
  2022-04-08  8:45 ` [PATCH v6 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
  5 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu

Introduce the domain-cpupool property of a xen,domain device tree
node. It specifies the device tree handle of a xen,cpupool node that
identifies a cpupool created at boot time, to which the guest will
be assigned on creation.

Add a member to the xen_domctl_createdomain public interface;
consequently, the XEN_DOMCTL_INTERFACE_VERSION is bumped.

Add a public function to retrieve a pool id from the device tree
cpupool node.

Update documentation about the property.
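
For illustration, a dom0less domain node using the new property could
look like the fragment below (under /chosen; the cp1 label refers to a
xen,cpupool node like the one in the cover letter, and the addresses,
sizes and labels here are made up for the example):

```dts
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        cpus = <2>;
        memory = <0x0 0x20000>;
        /* Start this guest in the boot time cpupool labelled cp1 */
        domain-cpupool = <&cp1>;

        module@4a000000 {
            compatible = "multiboot,kernel";
            reg = <0x4a000000 0xffffff>;
            bootargs = "console=ttyAMA0";
        };
    };
```

If the phandle does not point to a valid xen,cpupool node,
btcpupools_get_domain_pool_id() returns a negative value and domain
creation panics, per the hunk in create_domUs().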

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v6:
- no changes
Changes in v5:
- no changes
Changes in v4:
- no changes
- add R-by
Changes in v3:
- Use explicitly sized integer for the struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Changed code due to previous commit code changes
Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the
  cpupool dtb node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
 xen/common/domain.c                   |  2 +-
 xen/include/public/domctl.h           |  4 +++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9c67a483d4a4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3241,6 +3242,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index 9429a5025fc4..240bae4cebb8 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -22,6 +22,8 @@ static unsigned int __initdata next_pool_id;
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -56,6 +58,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get the first cpu listed in the cpupool; from its reg it's possible
+     * to retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num];
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 351029f8b239..0827400f4f49 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
         if ( !d->pbuf )
             goto fail;
 
-        if ( (err = sched_init_domain(d, 0)) != 0 )
+        if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
             goto fail;
 
         if ( (err = late_hwdom_init(d)) != 0 )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..2f4cf56f438d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes.  0 to disable. */
     uint32_t vmtrace_size;
 
+    uint32_t cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 453e98f1cba8..b62315ad5e5d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1182,6 +1182,7 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 void btcpupools_allocate_pools(void);
 unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 
 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
 static inline void btcpupools_allocate_pools(void) {}
@@ -1190,6 +1191,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif
 
 #endif /* __SCHED_H__ */
-- 
2.17.1




* [PATCH v6 6/6] xen/cpupool: Allow cpupool0 to use different scheduler
  2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
                   ` (4 preceding siblings ...)
  2022-04-08  8:45 ` [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
@ 2022-04-08  8:45 ` Luca Fancellu
  5 siblings, 0 replies; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  8:45 UTC (permalink / raw)
  To: xen-devel
  Cc: bertrand.marquis, wei.chen, Andrew Cooper, George Dunlap,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Juergen Gross, Dario Faggioli

Currently cpupool0 can use only the default scheduler:
cpupool_create has hardcoded behavior when creating pool 0, which
doesn't allocate new memory for the scheduler but instead uses the
default scheduler structure already in memory.

With this commit it is possible to allocate a different scheduler for
cpupool0 when using boot time cpupools.
To achieve this, the hardcoded behavior in cpupool_create is removed
and the creation of cpupool0 is moved to btcpupools_allocate_pools().

When compiling without boot time cpupools enabled, the current
behavior is maintained (except that cpupool0 scheduler memory will now
be allocated).
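
As an illustrative device tree fragment (under /chosen, following the
cover letter's syntax), the pool that ends up with id 0 can now request
a non-default scheduler; the scheduler name and CPU labels below are
just examples:

```dts
    cpupool0 {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a72_0 &a72_1>;
        /* With this patch, cpupool0 gets its own scheduler instance */
        cpupool-sched = "null";
    };
```

Without CONFIG_BOOT_TIME_CPUPOOLS, cpupool0 keeps the default
scheduler, now held in a freshly allocated structure rather than the
static default one.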

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Changes in v6:
- Add R-by
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- fix typo in commit message (Juergen)
- rebase changes
Changes in v2:
- new patch
---
 xen/common/boot_cpupools.c | 5 ++++-
 xen/common/sched/cpupool.c | 8 +-------
 xen/include/xen/sched.h    | 5 ++++-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index 240bae4cebb8..5955e6f9a98b 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -205,8 +205,11 @@ void __init btcpupools_allocate_pools(void)
     if ( add_extra_cpupool )
         next_pool_id++;
 
+    /* Keep track of cpupool id 0 with the global cpupool0 */
+    cpupool0 = cpupool_create_pool(0, pool_sched_map[0]);
+
     /* Create cpupools with selected schedulers */
-    for ( i = 0; i < next_pool_id; i++ )
+    for ( i = 1; i < next_pool_id; i++ )
         cpupool_create_pool(i, pool_sched_map[i]);
 }
 
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 86a175f99cd5..83112f5f04d3 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -312,10 +312,7 @@ static struct cpupool *cpupool_create(unsigned int poolid,
         c->cpupool_id = q->cpupool_id + 1;
     }
 
-    if ( poolid == 0 )
-        c->sched = scheduler_get_default();
-    else
-        c->sched = scheduler_alloc(sched_id);
+    c->sched = scheduler_alloc(sched_id);
     if ( IS_ERR(c->sched) )
     {
         ret = PTR_ERR(c->sched);
@@ -1242,9 +1239,6 @@ static int __init cf_check cpupool_init(void)
 
     cpupool_hypfs_init();
 
-    cpupool0 = cpupool_create(0, 0);
-    BUG_ON(IS_ERR(cpupool0));
-    cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
     btcpupools_dtb_parse();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b62315ad5e5d..e8f31758c058 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1185,7 +1185,10 @@ void btcpupools_dtb_parse(void);
 int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 
 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
-static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_allocate_pools(void)
+{
+    cpupool0 = cpupool_create_pool(0, -1);
+}
 static inline void btcpupools_dtb_parse(void) {}
 static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
-- 
2.17.1




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-04-08  8:56   ` Jan Beulich
  2022-04-08  9:06     ` Luca Fancellu
  2022-04-08  9:01   ` Jan Beulich
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-08  8:56 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel

On 08.04.2022 10:45, Luca Fancellu wrote:
> Introduce a way to create different cpupools at boot time. This is
> particularly useful on ARM big.LITTLE systems, where there might be the
> need to have different cpupools for each type of core, but also on
> systems using NUMA, which can have different cpupools for each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> ACPI is not supported for this feature.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

This looks to be not in line with ...

> ---
> Changes in v6:
> - Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
>   panic if /chosen does not exist. Changed commit message (Julien)
> - Add Juergen R-by for the xen/common/sched part that didn't change

... what you say here. What's the scope of Jürgen's R-b? If it has
restricted scope, you need to retain that restriction for committers
to know.

Jan




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-04-08  8:56   ` Jan Beulich
@ 2022-04-08  9:01   ` Jan Beulich
  2022-04-08 11:37     ` Luca Fancellu
  2022-04-08 17:41   ` Julien Grall
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-08  9:01 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel

On 08.04.2022 10:45, Luca Fancellu wrote:
> ---
>  docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>  xen/arch/arm/include/asm/smp.h         |   3 +
>  xen/common/Kconfig                     |   7 +
>  xen/common/Makefile                    |   1 +
>  xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>  xen/common/sched/cpupool.c             |  12 +-
>  xen/include/xen/sched.h                |  14 ++
>  7 files changed, 383 insertions(+), 1 deletion(-)
>  create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>  create mode 100644 xen/common/boot_cpupools.c

Under whose maintainership is the new file to fall? Without an
addition to ./MAINTAINERS and without the file being placed in
xen/common/sched/, it'll be REST maintainers, which I think would
better be avoided. Would it perhaps make sense to have this as
xen/common/sched/boot.c, allowing other boot-only code to
potentially be moved there over time? This would then also avoid
me asking about the underscore in the file name: Underscores are
a somewhat artificial thing for use in places where dashes can't
be used. Yet in the file system dashes are fine, and dashes are
(slightly) easier to type.

Jan




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:56   ` Jan Beulich
@ 2022-04-08  9:06     ` Luca Fancellu
  0 siblings, 0 replies; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  9:06 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel



> On 8 Apr 2022, at 09:56, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 10:45, Luca Fancellu wrote:
>> Introduce a way to create different cpupools at boot time. This is
>> particularly useful on ARM big.LITTLE systems, where there might be the
>> need to have different cpupools for each type of core, but also on
>> systems using NUMA, which can have different cpupools for each node.
>> 
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> 
>> ACPI is not supported for this feature.
>> 
>> Documentation is created to explain the feature.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> This looks to be not in line with ...
> 
>> ---
>> Changes in v6:
>> - Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
>> panic if /chosen does not exist. Changed commit message (Julien)
>> - Add Juergen R-by for the xen/common/sched part that didn't change
> 
> ... what you say here. What's the scope of Jürgen's R-b? If it has
> restricted scope, you need to retain that restriction for committers
> to know.

Hi Jan,

Sorry about that, I've just refreshed my memory with sending-patches.pandoc and I see
I should have added Juergen's R-by with a "# area" annotation.

It's the first time I've retained an R-by for just a part of a commit; I will remember it
for the next time.

Cheers,
Luca 

> 
> Jan



* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08  8:45 ` [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
@ 2022-04-08  9:10   ` Jan Beulich
  2022-04-08  9:39     ` Luca Fancellu
  2022-04-08 10:37     ` Juergen Gross
  0 siblings, 2 replies; 34+ messages in thread
From: Jan Beulich @ 2022-04-08  9:10 UTC (permalink / raw)
  To: Luca Fancellu, Juergen Gross
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	xen-devel

On 08.04.2022 10:45, Luca Fancellu wrote:
> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>      /* Per-vCPU buffer size in bytes.  0 to disable. */
>      uint32_t vmtrace_size;
>  
> +    uint32_t cpupool_id;

This could do with a comment explaining default behavior. In particular
I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
would be impossible to delete pool 0 (but there may of course be
reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
Yet if pool 0 can be removed, zero being passed in here should imo not
lead to failure of VM creation. Otoh I understand that this would
already happen ahead of your change, preventing of which would
apparently be possible only via passing CPUPOOLID_NONE here.

Jan




* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08  9:10   ` Jan Beulich
@ 2022-04-08  9:39     ` Luca Fancellu
  2022-04-08 10:24       ` Jan Beulich
  2022-04-08 10:37     ` Juergen Gross
  1 sibling, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08  9:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel



> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 10:45, Luca Fancellu wrote:
>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>     /* Per-vCPU buffer size in bytes.  0 to disable. */
>>     uint32_t vmtrace_size;
>> 
>> +    uint32_t cpupool_id;
> 
> This could do with a comment explaining default behavior. In particular
> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
> would be impossible to delete pool 0 (but there may of course be
> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
> Yet if pool 0 can be removed, zero being passed in here should imo not
> lead to failure of VM creation. Otoh I understand that this would
> already happen ahead of your change, preventing of which would
> apparently be possible only via passing CPUPOOLID_NONE here.

Hi Jan,

Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
cpupool_id only for DomUs).

I thought the name was self-explanatory, but if I have to put a comment, would
it work to say something like this:

/* Cpupool id where the domain will be assigned on creation */


> 
> Jan
> 



* Re: [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools
  2022-04-08  8:45 ` [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-04-08  9:54   ` Anthony PERARD
  0 siblings, 0 replies; 34+ messages in thread
From: Anthony PERARD @ 2022-04-08  9:54 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, bertrand.marquis, wei.chen, Wei Liu, Juergen Gross

On Fri, Apr 08, 2022 at 09:45:12AM +0100, Luca Fancellu wrote:
> With the introduction of boot time cpupools, Xen can create many
> different cpupools at boot time other than the cpupool with id 0.
> 
> Since these newly created cpupools can't have an
> entry in Xenstore, create the entry using the xen-init-dom0
> helper with the usual convention: Pool-<cpupool id>.
> 
> Given the change, remove the check for poolid == 0 from
> libxl_cpupoolid_to_name(...).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v6:
> - Reworked loop to have only one error path (Anthony)

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD



* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08  9:39     ` Luca Fancellu
@ 2022-04-08 10:24       ` Jan Beulich
  2022-04-08 11:15         ` Luca Fancellu
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-08 10:24 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel

On 08.04.2022 11:39, Luca Fancellu wrote:
> 
> 
>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>     /* Per-vCPU buffer size in bytes.  0 to disable. */
>>>     uint32_t vmtrace_size;
>>>
>>> +    uint32_t cpupool_id;
>>
>> This could do with a comment explaining default behavior. In particular
>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>> would be impossible to delete pool 0 (but there may of course be
>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>> Yet if pool 0 can be removed, zero being passed in here should imo not
>> lead to failure of VM creation. Otoh I understand that this would
>> already happen ahead of your change, preventing of which would
>> apparently be possible only via passing CPUPOOLID_NONE here.
> 
> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
> cpupool_id only for DomUs).

But we're talking about dom0less as per the subject of the patch here.

>> I thought the name was self-explanatory, but if I have to put a comment, would
>> it work to say something like this:
> 
> /* Cpupool id where the domain will be assigned on creation */

I don't view this kind of comment as necessary. I was really after
calling out default behavior, along the lines of "0 to disable" that
you can see in patch context.

Jan




* Re: [PATCH v6 3/6] xen/sched: retrieve scheduler id by name
  2022-04-08  8:45 ` [PATCH v6 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-04-08 10:29   ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2022-04-08 10:29 UTC (permalink / raw)
  To: luca.fancellu, xen-devel
  Cc: julien, Jan Beulich, bertrand.marquis, wl, sstabellini, wei.chen,
	george.dunlap, andrew.cooper3


On Fri, 2022-04-08 at 09:45 +0100, Luca Fancellu wrote:
> Add a static function to retrieve the scheduler pointer using the
> scheduler name.
> 
> Add a public function to retrieve the scheduler id by the scheduler
> name that makes use of the new static function.
> 
> Take the occasion to replace open coded scheduler search with the
> new static function in scheduler_init.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08  9:10   ` Jan Beulich
  2022-04-08  9:39     ` Luca Fancellu
@ 2022-04-08 10:37     ` Juergen Gross
  1 sibling, 0 replies; 34+ messages in thread
From: Juergen Gross @ 2022-04-08 10:37 UTC (permalink / raw)
  To: Jan Beulich, Luca Fancellu
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	xen-devel



On 08.04.22 11:10, Jan Beulich wrote:
> On 08.04.2022 10:45, Luca Fancellu wrote:
>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>       /* Per-vCPU buffer size in bytes.  0 to disable. */
>>       uint32_t vmtrace_size;
>>   
>> +    uint32_t cpupool_id;
> 
> This could do with a comment explaining default behavior. In particular
> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
> would be impossible to delete pool 0 (but there may of course be
> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?

Yes, I think destroying cpupool 0 in a dom0less system should be
prohibited, assuming there is a control domain able to destroy
a cpupool in a dom0less system.

Main reason is that cpupool 0 has a special role e.g. during domain
destruction (see domain_kill()) and for cpu hotplug operations.


Juergen




* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08 10:24       ` Jan Beulich
@ 2022-04-08 11:15         ` Luca Fancellu
  2022-04-08 12:10           ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08 11:15 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel



> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 11:39, Luca Fancellu wrote:
>> 
>> 
>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>> /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>> uint32_t vmtrace_size;
>>>> 
>>>> + uint32_t cpupool_id;
>>> 
>>> This could do with a comment explaining default behavior. In particular
>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>> would be impossible to delete pool 0 (but there may of course be
>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>> lead to failure of VM creation. Otoh I understand that this would
>>> already happen ahead of your change, preventing of which would
>>> apparently be possible only via passing CPUPOOLID_NONE here.
>> 
>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>> cpupool_id only for DomUs).
> 
> But we're talking about dom0less as per the subject of the patch here.

Domains started using the dom0less feature are not privileged and can't perform any
operation on cpupools; that's why I thought about Dom0.

> 
>> I thought the name was self-explanatory, but if I have to put a comment, would
>> it work to say something like this:
>> 
>> /* Cpupool id where the domain will be assigned on creation */
> 
> I don't view this kind of comment as necessary. I was really after
> calling out default behavior, along the lines of "0 to disable" that
> you can see in patch context.

Ok, could this work?

/* Domain cpupool id on creation. Default 0 as Pool-0 is always present. */

> 
> Jan



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  9:01   ` Jan Beulich
@ 2022-04-08 11:37     ` Luca Fancellu
  2022-04-08 11:58       ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-08 11:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel



> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 10:45, Luca Fancellu wrote:
>> ---
>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>> xen/arch/arm/include/asm/smp.h         |   3 +
>> xen/common/Kconfig                     |   7 +
>> xen/common/Makefile                    |   1 +
>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>> xen/common/sched/cpupool.c             |  12 +-
>> xen/include/xen/sched.h                |  14 ++
>> 7 files changed, 383 insertions(+), 1 deletion(-)
>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>> create mode 100644 xen/common/boot_cpupools.c
> 
> Under whose maintainership is the new file to fall? Without an
> addition to ./MAINTAINERS and without the file being placed in
> xen/common/sched/, it'll be REST maintainers, which I think would
> better be avoided. Would it perhaps make sense to have this as
> xen/common/sched/boot.c, allowing other boot-only code to
> potentially be moved there over time? This would then also avoid
> me asking about the underscore in the file name: Underscores are
> a somewhat artificial thing for use in places where dashes can't
> be used. Yet in the file system dashes are fine, and dashes are
> (slightly) easier to type.
> 

Ok, I can put the new file under xen/common/sched/ as boot.c. Should this new
file go under this section?

CPU POOLS
M:  Juergen Gross <jgross@suse.com>
M:  Dario Faggioli <dfaggioli@suse.com>
S:  Supported
F:  xen/common/sched/cpupool.c
+ F:  xen/common/sched/boot.c


> Jan
> 




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08 11:37     ` Luca Fancellu
@ 2022-04-08 11:58       ` Jan Beulich
  2022-04-08 20:25         ` Stefano Stabellini
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-08 11:58 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Bertrand Marquis, Wei Chen, Stefano Stabellini, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel

On 08.04.2022 13:37, Luca Fancellu wrote:
> 
> 
>> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>> ---
>>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>>> xen/arch/arm/include/asm/smp.h         |   3 +
>>> xen/common/Kconfig                     |   7 +
>>> xen/common/Makefile                    |   1 +
>>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>>> xen/common/sched/cpupool.c             |  12 +-
>>> xen/include/xen/sched.h                |  14 ++
>>> 7 files changed, 383 insertions(+), 1 deletion(-)
>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>> create mode 100644 xen/common/boot_cpupools.c
>>
>> Under whose maintainership is the new file to fall? Without an
>> addition to ./MAINTAINERS and without the file being placed in
>> xen/common/sched/, it'll be REST maintainers, which I think would
>> better be avoided. Would it perhaps make sense to have this as
>> xen/common/sched/boot.c, allowing other boot-only code to
>> potentially be moved there over time? This would then also avoid
>> me asking about the underscore in the file name: Underscores are
>> a somewhat artificial thing for use in places where dashes can't
>> be used. Yet in the file system dashes are fine, and dashes are
>> (slightly) easier to type.
>>
> 
> Ok I can put the new file under xen/common/sched/ as boot.c, should this new
> file be under this section?
> 
> CPU POOLS
> M:  Juergen Gross <jgross@suse.com>
> M:  Dario Faggioli <dfaggioli@suse.com>
> S:  Supported
> F:  xen/common/sched/cpupool.c
> + F:  xen/common/sched/boot.c

If it's to hold general scheduler code (which this shorter name would
suggest), it shouldn't need any change to ./MAINTAINERS as the
scheduler section would already cover it then. If it was to remain
CPU-pools-specific, then you'd need to stick to the longer name and
put it in the section you have reproduced above.

Jan




* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08 11:15         ` Luca Fancellu
@ 2022-04-08 12:10           ` Jan Beulich
  2022-04-11  8:54             ` Luca Fancellu
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-08 12:10 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel

On 08.04.2022 13:15, Luca Fancellu wrote:
> 
> 
>> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 08.04.2022 11:39, Luca Fancellu wrote:
>>>
>>>
>>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>>> /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>>> uint32_t vmtrace_size;
>>>>>
>>>>> + uint32_t cpupool_id;
>>>>
>>>> This could do with a comment explaining default behavior. In particular
>>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>>> would be impossible to delete pool 0 (but there may of course be
>>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>>> lead to failure of VM creation. Otoh I understand that this would
>>>> already happen ahead of your change, preventing of which would
>>>> apparently possible only via passing CPUPOOLID_NONE here.
>>>
>>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>>> cpupool_id only for DomUs).
>>
>> But we're talking about dom0less as per the subject of the patch here.
> 
Domains started using the dom0less feature are not privileged and can’t perform any
operation on cpu pools; that’s why I thought about Dom0.

It's all a matter of XSM policy what a domain may or may not be able
to carry out.

>>> I thought the name was self-explanatory, but if I have to put a comment, would
>>> it work with something like this:
>>>
>>> /* Cpupool id where the domain will be assigned on creation */
>>
>> I don't view this kind of comment as necessary. I was really after
>> calling out default behavior, along the lines of "0 to disable" that
>> you can see in patch context.
> 
> Ok, could this work?
> 
> /* Domain cpupool id on creation. Default 0 as Pool-0 is always present. */

Hmm, I may have misguided you by talking about "default". There's no
default here, as it's the caller's responsibility to set the field,
and what's there will be used. Maybe "CPU pool to use; specify 0
unless a specific existing pool is to be used".

Jan




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-04-08  8:56   ` Jan Beulich
  2022-04-08  9:01   ` Jan Beulich
@ 2022-04-08 17:41   ` Julien Grall
  2022-04-08 20:18   ` Stefano Stabellini
  2022-04-11 10:58   ` Julien Grall
  4 siblings, 0 replies; 34+ messages in thread
From: Julien Grall @ 2022-04-08 17:41 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Hi Luca,

On 08/04/2022 09:45, Luca Fancellu wrote:
> Introduce a way to create different cpupools at boot time. This is
> particularly useful on ARM big.LITTLE systems, where there might be the
> need to have different cpupools for each type of core; systems using
> NUMA can likewise have different cpu pools for each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> ACPI is not supported for this feature.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> ---
> Changes in v6:
> - Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
>    panic if /chosen does not exist. Changed commit message (Julien)

I went through the changes and they LGTM. Stefano has paid closer 
attention to this series, so I will leave him to do the full review.

Cheers,

-- 
Julien Grall



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
                     ` (2 preceding siblings ...)
  2022-04-08 17:41   ` Julien Grall
@ 2022-04-08 20:18   ` Stefano Stabellini
  2022-04-11 10:58   ` Julien Grall
  4 siblings, 0 replies; 34+ messages in thread
From: Stefano Stabellini @ 2022-04-08 20:18 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, bertrand.marquis, wei.chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Jan Beulich, Wei Liu, Juergen Gross, Dario Faggioli

On Fri, 8 Apr 2022, Luca Fancellu wrote:
> Introduce a way to create different cpupools at boot time. This is
> particularly useful on ARM big.LITTLE systems, where there might be the
> need to have different cpupools for each type of core; systems using
> NUMA can likewise have different cpu pools for each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> ACPI is not supported for this feature.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v6:
> - Changed docs, return if booted with ACPI in btcpupools_dtb_parse,
>   panic if /chosen does not exist. Changed commit message (Julien)
> - Add Juergen R-by for the xen/common/sched part that didn't change
> Changes in v5:
> - Fixed wrong variable name, swapped schedulers, add scheduler info
>   in the printk (Stefano)
> - introduce assert in cpupool_init and btcpupools_get_cpupool_id to
>   harden the code
> Changes in v4:
> - modify Makefile to put in *.init.o, fixed stubs and macro (Jan)
> - fixed docs, fix brakets (Stefano)
> - keep cpu0 in Pool-0 (Julien)
> - moved printk from btcpupools_allocate_pools to
>   btcpupools_get_cpupool_id
> - Add to docs constraint about cpu0 and Pool-0
> Changes in v3:
> - Add newline to cpupools.txt and removed "default n" from Kconfig (Jan)
> - Fixed comment, moved defines, used global cpu_online_map, use
>   HAS_DEVICE_TREE instead of ARM and place arch specific code in header
>   (Juergen)
> - Fix brakets, x86 code only panic, get rid of scheduler dt node, don't
>   save pool pointer and look for it from the pool list (Stefano)
> - Changed data structures to allow modification to the code.
> Changes in v2:
> - Move feature to common code (Juergen)
> - Try to decouple dtb parse and cpupool creation to allow
>   more way to specify cpupools (for example command line)
> - Created standalone dt node for the scheduler so it can
>   be used in future work to set scheduler specific
>   parameters
> - Use only auto generated ids for cpupools
> ---
>  docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>  xen/arch/arm/include/asm/smp.h         |   3 +
>  xen/common/Kconfig                     |   7 +
>  xen/common/Makefile                    |   1 +
>  xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>  xen/common/sched/cpupool.c             |  12 +-
>  xen/include/xen/sched.h                |  14 ++
>  7 files changed, 383 insertions(+), 1 deletion(-)
>  create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>  create mode 100644 xen/common/boot_cpupools.c
> 
> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> new file mode 100644
> index 000000000000..40cc8135c66f
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/cpupools.txt
> @@ -0,0 +1,140 @@
> +Boot time cpupools
> +==================
> +
> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
> +create cpupools during boot phase by specifying them in the device tree.
> +ACPI is not supported for this feature.
> +
> +Cpupool specification nodes shall be direct children of the /chosen node.
> +Each cpupool node contains the following properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,cpupool".
> +
> +- cpupool-cpus (mandatory)
> +
> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
> +    device_type = "cpu"); it can't be empty.
> +
> +- cpupool-sched (optional)
> +
> +    Must be a string having the name of a Xen scheduler. Check the sched=<...>
> +    boot argument for allowed values [1]. When this property is omitted, the Xen
> +    default scheduler will be used.
> +
> +
> +Constraints
> +===========
> +
> +If no cpupools are specified, all cpus will be assigned to one implicitly
> +created cpupool (Pool-0).
> +
> +If cpupool nodes are specified, but not every cpu brought up by Xen is assigned,
> +all the unassigned cpus will be assigned to an additional cpupool.
> +
> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> +stop.
> +
> +The boot cpu must be assigned to Pool-0, so the cpupool containing that core
> +will become Pool-0 automatically.
> +
> +
> +Examples
> +========
> +
> +On a system having two types of cores, the following device tree specification
> +will instruct Xen to create two cpupools:
> +
> +- The cpupool with id 0 will have 4 cpus assigned.
> +- The cpupool with id 1 will have 2 cpus assigned.
> +
> +The following example can work only if hmp-unsafe=1 is passed to Xen boot
> +arguments, otherwise not all cores will be brought up by Xen and the cpupool
> +creation process will stop Xen.
> +
> +
> +a72_1: cpu@0 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x0>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a72_2: cpu@1 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x1>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_1: cpu@100 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x100>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_2: cpu@101 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x101>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_3: cpu@102 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x102>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_4: cpu@103 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x103>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +chosen {
> +
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = "credit2";
> +    };
> +
> +    [...]
> +
> +};
> +
> +
> +A system with the cpupool specification below will instruct Xen to create three
> +cpupools:
> +
> +- The cpupool Pool-0 will have 2 cpus assigned.
> +- The cpupool Pool-1 will have 2 cpus assigned.
> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
> +  unassigned cpus, a53_3 and a53_4).
> +
> +chosen {
> +
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = "null";
> +    };
> +
> +    [...]
> +
> +};
> +
> +[1] docs/misc/xen-command-line.pandoc
> diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
> index af5a2fe65266..83c0cd69767b 100644
> --- a/xen/arch/arm/include/asm/smp.h
> +++ b/xen/arch/arm/include/asm/smp.h
> @@ -34,6 +34,9 @@ extern void init_secondary(void);
>  extern void smp_init_cpus(void);
>  extern void smp_clear_cpu_maps (void);
>  extern int smp_get_max_cpus (void);
> +
> +#define cpu_physical_id(cpu) cpu_logical_map(cpu)
> +
>  #endif
>  
>  /*
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index d921c74d615e..70aac5220e75 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -22,6 +22,13 @@ config GRANT_TABLE
>  
>  	  If unsure, say Y.
>  
> +config BOOT_TIME_CPUPOOLS
> +	bool "Create cpupools at boot time"
> +	depends on HAS_DEVICE_TREE
> +	help
> +	  Creates cpupools during boot time and assigns cpus to them. Cpupools
> +	  options can be specified in the device tree.
> +
>  config ALTERNATIVE_CALL
>  	bool
>  
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index b1e076c30b81..218174ca8b6b 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_ARGO) += argo.o
>  obj-y += bitmap.o
> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.init.o
>  obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>  obj-$(CONFIG_CORE_PARKING) += core_parking.o
>  obj-y += cpu.o
> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
> new file mode 100644
> index 000000000000..9429a5025fc4
> --- /dev/null
> +++ b/xen/common/boot_cpupools.c
> @@ -0,0 +1,207 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * xen/common/boot_cpupools.c
> + *
> + * Code to create cpupools at boot time.
> + *
> + * Copyright (C) 2022 Arm Ltd.
> + */
> +
> +#include <xen/sched.h>
> +#include <asm/acpi.h>
> +
> +/*
> + * pool_cpu_map:   Index is logical cpu number, content is cpupool id, (-1) for
> + *                 unassigned.
> + * pool_sched_map: Index is cpupool id, content is scheduler id, (-1) for
> + *                 unassigned.
> + */
> +static int __initdata pool_cpu_map[NR_CPUS]   = { [0 ... NR_CPUS-1] = -1 };
> +static int __initdata pool_sched_map[NR_CPUS] = { [0 ... NR_CPUS-1] = -1 };
> +static unsigned int __initdata next_pool_id;
> +
> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
> +
> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < nr_cpu_ids; i++ )
> +    {
> +        if ( cpu_physical_id(i) == hwid )
> +            return i;
> +    }
> +
> +    return -1;
> +}
> +
> +static int __init
> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
> +{
> +    int cpu_num;
> +    const __be32 *prop;
> +    unsigned int cpu_reg;
> +
> +    prop = dt_get_property(cpu_node, "reg", NULL);
> +    if ( !prop )
> +        return BTCPUPOOLS_DT_NODE_NO_REG;
> +
> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
> +
> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
> +    if ( cpu_num < 0 )
> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
> +
> +    return cpu_num;
> +}
> +
> +static int __init check_and_get_sched_id(const char* scheduler_name)
> +{
> +    int sched_id = sched_get_id_by_name(scheduler_name);
> +
> +    if ( sched_id < 0 )
> +        panic("Scheduler %s does not exists!\n", scheduler_name);
> +
> +    return sched_id;
> +}
> +
> +void __init btcpupools_dtb_parse(void)
> +{
> +    const struct dt_device_node *chosen, *node;
> +
> +    if ( !acpi_disabled )
> +        return;
> +
> +    chosen = dt_find_node_by_path("/chosen");
> +    if ( !chosen )
> +        panic("/chosen missing. Boot time cpupools can't be parsed from DT.\n");
> +
> +    dt_for_each_child_node(chosen, node)
> +    {
> +        const struct dt_device_node *phandle_node;
> +        int sched_id = -1;
> +        const char* scheduler_name;
> +        unsigned int i = 0;
> +
> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
> +            continue;
> +
> +        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
> +            sched_id = check_and_get_sched_id(scheduler_name);
> +
> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        if ( !phandle_node )
> +            panic("Missing or empty cpupool-cpus property!\n");
> +
> +        while ( phandle_node )
> +        {
> +            int cpu_num;
> +
> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
> +
> +            if ( cpu_num < 0 )
> +                panic("Error retrieving logical cpu from node %s (%d)\n",
> +                      dt_node_name(node), cpu_num);
> +
> +            if ( pool_cpu_map[cpu_num] != -1 )
> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
> +
> +            pool_cpu_map[cpu_num] = next_pool_id;
> +
> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        }
> +
> +        /* Save scheduler choice for this cpupool id */
> +        pool_sched_map[next_pool_id] = sched_id;
> +
> +        /* Let Xen generate pool ids */
> +        next_pool_id++;
> +    }
> +}
> +
> +void __init btcpupools_allocate_pools(void)
> +{
> +    unsigned int i;
> +    bool add_extra_cpupool = false;
> +    int swap_id = -1;
> +
> +    /*
> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
> +     * below will assign every cpu to cpupool0 as the default behavior.
> +     * When there are cpupools, the code below is assigning all the not
> +     * assigned cpu to a new pool (next_pool_id value is the last id + 1).
> +     * In the same loop we check if there is any assigned cpu that is not
> +     * online.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; i++ )
> +    {
> +        if ( cpumask_test_cpu(i, &cpu_online_map) )
> +        {
> +            /* Unassigned cpu gets next_pool_id pool id value */
> +            if ( pool_cpu_map[i] < 0 )
> +            {
> +                pool_cpu_map[i] = next_pool_id;
> +                add_extra_cpupool = true;
> +            }
> +
> +            /*
> +             * Cpu0 must be in cpupool0, otherwise some operations like moving
> +             * cpus between cpupools, cpu hotplug, destroying cpupools, shutdown
> +             * of the host, might not work in a sane way.
> +             */
> +            if ( !i && (pool_cpu_map[0] != 0) )
> +                swap_id = pool_cpu_map[0];
> +
> +            if ( swap_id != -1 )
> +            {
> +                if ( pool_cpu_map[i] == swap_id )
> +                    pool_cpu_map[i] = 0;
> +                else if ( pool_cpu_map[i] == 0 )
> +                    pool_cpu_map[i] = swap_id;
> +            }
> +        }
> +        else
> +        {
> +            if ( pool_cpu_map[i] >= 0 )
> +                panic("Pool-%d contains cpu%u that is not online!\n",
> +                      pool_cpu_map[i], i);
> +        }
> +    }
> +
> +    /* A swap happened, swap schedulers between cpupool id 0 and the other */
> +    if ( swap_id != -1 )
> +    {
> +        int swap_sched = pool_sched_map[swap_id];
> +
> +        pool_sched_map[swap_id] = pool_sched_map[0];
> +        pool_sched_map[0] = swap_sched;
> +    }
> +
> +    if ( add_extra_cpupool )
> +        next_pool_id++;
> +
> +    /* Create cpupools with selected schedulers */
> +    for ( i = 0; i < next_pool_id; i++ )
> +        cpupool_create_pool(i, pool_sched_map[i]);
> +}
> +
> +unsigned int __init btcpupools_get_cpupool_id(unsigned int cpu)
> +{
> +    ASSERT((cpu < NR_CPUS) && (pool_cpu_map[cpu] >= 0));
> +
> +    printk(XENLOG_INFO "Logical CPU %u in Pool-%d (Scheduler id: %d).\n",
> +           cpu, pool_cpu_map[cpu], pool_sched_map[pool_cpu_map[cpu]]);
> +
> +    return pool_cpu_map[cpu];
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index 89a891af7076..86a175f99cd5 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -1247,12 +1247,22 @@ static int __init cf_check cpupool_init(void)
>      cpupool_put(cpupool0);
>      register_cpu_notifier(&cpu_nfb);
>  
> +    btcpupools_dtb_parse();
> +
> +    btcpupools_allocate_pools();
> +
>      spin_lock(&cpupool_lock);
>  
>      cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>  
>      for_each_cpu ( cpu, &cpupool_free_cpus )
> -        cpupool_assign_cpu_locked(cpupool0, cpu);
> +    {
> +        unsigned int pool_id = btcpupools_get_cpupool_id(cpu);
> +        struct cpupool *pool = cpupool_find_by_id(pool_id);
> +
> +        ASSERT(pool);
> +        cpupool_assign_cpu_locked(pool, cpu);
> +    }
>  
>      spin_unlock(&cpupool_lock);
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index b527f141a1d3..453e98f1cba8 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1178,6 +1178,20 @@ extern void cf_check dump_runq(unsigned char key);
>  
>  void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>  
> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
> +void btcpupools_allocate_pools(void);
> +unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
> +void btcpupools_dtb_parse(void);
> +
> +#else /* !CONFIG_BOOT_TIME_CPUPOOLS */
> +static inline void btcpupools_allocate_pools(void) {}
> +static inline void btcpupools_dtb_parse(void) {}
> +static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
> +{
> +    return 0;
> +}
> +#endif
> +
>  #endif /* __SCHED_H__ */
>  
>  /*
> -- 
> 2.17.1
> 



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08 11:58       ` Jan Beulich
@ 2022-04-08 20:25         ` Stefano Stabellini
  2022-04-09  9:14           ` Juergen Gross
  2022-04-11  6:15           ` Jan Beulich
  0 siblings, 2 replies; 34+ messages in thread
From: Stefano Stabellini @ 2022-04-08 20:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Luca Fancellu, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, Juergen Gross, Dario Faggioli, xen-devel

On Fri, 8 Apr 2022, Jan Beulich wrote:
> On 08.04.2022 13:37, Luca Fancellu wrote:
> > 
> > 
> >> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 08.04.2022 10:45, Luca Fancellu wrote:
> >>> ---
> >>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
> >>> xen/arch/arm/include/asm/smp.h         |   3 +
> >>> xen/common/Kconfig                     |   7 +
> >>> xen/common/Makefile                    |   1 +
> >>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
> >>> xen/common/sched/cpupool.c             |  12 +-
> >>> xen/include/xen/sched.h                |  14 ++
> >>> 7 files changed, 383 insertions(+), 1 deletion(-)
> >>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
> >>> create mode 100644 xen/common/boot_cpupools.c
> >>
> >> Under whose maintainership is the new file to fall? Without an
> >> addition to ./MAINTAINERS and without the file being placed in
> >> xen/common/sched/, it'll be REST maintainers, which I think would
> >> better be avoided. Would it perhaps make sense to have this as
> >> xen/common/sched/boot.c, allowing other boot-only code to
> >> potentially be moved there over time? This would then also avoid
> >> me asking about the underscore in the file name: Underscores are
> >> a somewhat artificial thing for use in places where dashes can't
> >> be used. Yet in the file system dashes are fine, and dashes are
> >> (slightly) easier to type.
> >>
> > 
> > Ok I can put the new file under xen/common/sched/ as boot.c, should this new
> > file be under this section?
> > 
> > CPU POOLS
> > M:  Juergen Gross <jgross@suse.com>
> > M:  Dario Faggioli <dfaggioli@suse.com>
> > S:  Supported
> > F:  xen/common/sched/cpupool.c
> > + F:  xen/common/sched/boot.c
> 
> If it's to hold general scheduler code (which this shorter name would
> suggest), it shouldn't need any change to ./MAINTAINERS as the
> scheduler section would already cover it then. If it was to remain
> CPU-pools-specific, then you'd need to stick to the longer name and
> put it in the section you have reproduced above.

In my opinion it is best if the maintenance of boot_cpupools.c falls
under "CPU POOLS". Luca, you can retain my reviewed-by when you add
the change to MAINTAINERS or rename the file.

I don't have an opinion if it should be called
xen/common/boot_cpupools.c or xen/common/boot-cpupools.c



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08 20:25         ` Stefano Stabellini
@ 2022-04-09  9:14           ` Juergen Gross
  2022-04-11  6:15           ` Jan Beulich
  1 sibling, 0 replies; 34+ messages in thread
From: Juergen Gross @ 2022-04-09  9:14 UTC (permalink / raw)
  To: Stefano Stabellini, Jan Beulich
  Cc: Luca Fancellu, Bertrand Marquis, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Dario Faggioli, xen-devel



On 08.04.22 22:25, Stefano Stabellini wrote:
> On Fri, 8 Apr 2022, Jan Beulich wrote:
>> On 08.04.2022 13:37, Luca Fancellu wrote:
>>>
>>>
>>>> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>> ---
>>>>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>>>>> xen/arch/arm/include/asm/smp.h         |   3 +
>>>>> xen/common/Kconfig                     |   7 +
>>>>> xen/common/Makefile                    |   1 +
>>>>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>>>>> xen/common/sched/cpupool.c             |  12 +-
>>>>> xen/include/xen/sched.h                |  14 ++
>>>>> 7 files changed, 383 insertions(+), 1 deletion(-)
>>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>>> create mode 100644 xen/common/boot_cpupools.c
>>>>
>>>> Under whose maintainership is the new file to fall? Without an
>>>> addition to ./MAINTAINERS and without the file being placed in
>>>> xen/common/sched/, it'll be REST maintainers, which I think would
>>>> better be avoided. Would it perhaps make sense to have this as
>>>> xen/common/sched/boot.c, allowing other boot-only code to
>>>> potentially be moved there over time? This would then also avoid
>>>> me asking about the underscore in the file name: Underscores are
>>>> a somewhat artificial thing for use in places where dashes can't
>>>> be used. Yet in the file system dashes are fine, and dashes are
>>>> (slightly) easier to type.
>>>>
>>>
>>> Ok I can put the new file under xen/common/sched/ as boot.c, should this new
>>> file be under this section?
>>>
>>> CPU POOLS
>>> M:  Juergen Gross <jgross@suse.com>
>>> M:  Dario Faggioli <dfaggioli@suse.com>
>>> S:  Supported
>>> F:  xen/common/sched/cpupool.c
>>> + F:  xen/common/sched/boot.c
>>
>> If it's to hold general scheduler code (which this shorter name would
>> suggest), it shouldn't need any change to ./MAINTAINERS as the
>> scheduler section would already cover it then. If it was to remain
>> CPU-pools-specific, then you'd need to stick to the longer name and
>> put it in the section you have reproduced above.
> 
> In my opinion it is best if the maintenance of boot_cpupools.c falls
> under "CPU POOLS". Luca, you can retain my reviewed-by when you add
> the change to MAINTAINERS or rename the file.
> 
> I don't have an opinion if it should be called
> xen/common/boot_cpupools.c or xen/common/boot-cpupools.c
> 

I'd go with xen/common/sched/boot-cpupool.c


Juergen



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08 20:25         ` Stefano Stabellini
  2022-04-09  9:14           ` Juergen Gross
@ 2022-04-11  6:15           ` Jan Beulich
  2022-04-11  8:29             ` Luca Fancellu
  1 sibling, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-11  6:15 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Luca Fancellu, Bertrand Marquis, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel

On 08.04.2022 22:25, Stefano Stabellini wrote:
> On Fri, 8 Apr 2022, Jan Beulich wrote:
>> On 08.04.2022 13:37, Luca Fancellu wrote:
>>>
>>>
>>>> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>> ---
>>>>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>>>>> xen/arch/arm/include/asm/smp.h         |   3 +
>>>>> xen/common/Kconfig                     |   7 +
>>>>> xen/common/Makefile                    |   1 +
>>>>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>>>>> xen/common/sched/cpupool.c             |  12 +-
>>>>> xen/include/xen/sched.h                |  14 ++
>>>>> 7 files changed, 383 insertions(+), 1 deletion(-)
>>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>>> create mode 100644 xen/common/boot_cpupools.c
>>>>
>>>> Under whose maintainership is the new file to fall? Without an
>>>> addition to ./MAINTAINERS and without the file being placed in
>>>> xen/common/sched/, it'll be REST maintainers, which I think would
>>>> better be avoided. Would it perhaps make sense to have this as
>>>> xen/common/sched/boot.c, allowing other boot-only code to
>>>> potentially be moved there over time? This would then also avoid
>>>> me asking about the underscore in the file name: Underscores are
>>>> a somewhat artificial thing for use in places where dashes can't
>>>> be used. Yet in the file system dashes are fine, and dashes are
>>>> (slightly) easier to type.
>>>>
>>>
>>> Ok I can put the new file under xen/common/sched/ as boot.c, should this new
>>> file be under this section?
>>>
>>> CPU POOLS
>>> M:  Juergen Gross <jgross@suse.com>
>>> M:  Dario Faggioli <dfaggioli@suse.com>
>>> S:  Supported
>>> F:  xen/common/sched/cpupool.c
>>> + F:  xen/common/sched/boot.c
>>
>> If it's to hold general scheduler code (which this shorter name would
>> suggest), it shouldn't need any change to ./MAINTAINERS as the
>> scheduler section would already cover it then. If it was to remain
>> CPU-pools-specific, then you'd need to stick to the longer name and
>> put it in the section you have reproduced above.
> 
> In my opinion it is best if the maintenance of boot_cpupools.c falls
> under "CPU POOLS". Luca, you can retain my reviewed-by when you add
> the change to MAINTAINERS or rename the file.

Yet even then, with cpupools.c living in sched/, ...

> I don't have an opinion if it should be called
> xen/common/boot_cpupools.c or xen/common/boot-cpupools.c
> 

... this one may want living there as well.

Jan



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-11  6:15           ` Jan Beulich
@ 2022-04-11  8:29             ` Luca Fancellu
  2022-04-11 10:29               ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-11  8:29 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Bertrand Marquis, Wei Chen, Julien Grall,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel



> On 11 Apr 2022, at 07:15, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 22:25, Stefano Stabellini wrote:
>> On Fri, 8 Apr 2022, Jan Beulich wrote:
>>> On 08.04.2022 13:37, Luca Fancellu wrote:
>>>> 
>>>> 
>>>>> On 8 Apr 2022, at 10:01, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> 
>>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>>> ---
>>>>>> docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
>>>>>> xen/arch/arm/include/asm/smp.h         |   3 +
>>>>>> xen/common/Kconfig                     |   7 +
>>>>>> xen/common/Makefile                    |   1 +
>>>>>> xen/common/boot_cpupools.c             | 207 +++++++++++++++++++++++++
>>>>>> xen/common/sched/cpupool.c             |  12 +-
>>>>>> xen/include/xen/sched.h                |  14 ++
>>>>>> 7 files changed, 383 insertions(+), 1 deletion(-)
>>>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>>>> create mode 100644 xen/common/boot_cpupools.c
>>>>> 
>>>>> Under whose maintainership is the new file to fall? Without an
>>>>> addition to ./MAINTAINERS and without the file being placed in
>>>>> xen/common/sched/, it'll be REST maintainers, which I think would
>>>>> better be avoided. Would it perhaps make sense to have this as
>>>>> xen/common/sched/boot.c, allowing other boot-only code to
>>>>> potentially be moved there over time? This would then also avoid
>>>>> me asking about the underscore in the file name: Underscores are
>>>>> a somewhat artificial thing for use in places where dashes can't
>>>>> be used. Yet in the file system dashes are fine, and dashes are
>>>>> (slightly) easier to type.
>>>>> 
>>>> 
>>>> Ok, I can put the new file under xen/common/sched/ as boot.c; should this new
>>>> file be under this section?
>>>> 
>>>> CPU POOLS
>>>> M:  Juergen Gross <jgross@suse.com>
>>>> M:  Dario Faggioli <dfaggioli@suse.com>
>>>> S:  Supported
>>>> F:  xen/common/sched/cpupool.c
>>>> + F:  xen/common/sched/boot.c
>>> 
>>> If it's to hold general scheduler code (which this shorter name would
>>> suggest), it shouldn't need any change to ./MAINTAINERS as the
>>> scheduler section would already cover it then. If it was to remain
>>> CPU-pools-specific, then you'd need to stick to the longer name and
>>> put it in the section you have reproduced above.
>> 
>> In my opinion it is best if the maintenance of boot_cpupools.c falls
>> under "CPU POOLS". Luca, you can retain my reviewed-by when you add
>> the change to MAINTAINERS or rename the file.
> 
> Yet even then, with cpupools.c living in sched/, ...
> 
>> I don't have an opinion if it should be called
>> xen/common/boot_cpupools.c or xen/common/boot-cpupools.c
>> 
> 
> ... this one may want living there as well.

Yes, I agree with you all; I will rename it to xen/common/sched/boot-cpupool.c
and add it to MAINTAINERS.

> 
> Jan




* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-08 12:10           ` Jan Beulich
@ 2022-04-11  8:54             ` Luca Fancellu
  2022-04-11  9:08               ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-11  8:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel



> On 8 Apr 2022, at 13:10, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 08.04.2022 13:15, Luca Fancellu wrote:
>> 
>> 
>>> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 08.04.2022 11:39, Luca Fancellu wrote:
>>>> 
>>>> 
>>>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> 
>>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>>>>     /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>>>>     uint32_t vmtrace_size;
>>>>>>
>>>>>> +    uint32_t cpupool_id;
>>>>> 
>>>>> This could do with a comment explaining default behavior. In particular
>>>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>>>> would be impossible to delete pool 0 (but there may of course be
>>>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>>>> lead to failure of VM creation. Otoh I understand that this would
>>>>> already happen ahead of your change, preventing of which would
>>>>> apparently be possible only via passing CPUPOOLID_NONE here.
>>>> 
>>>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>>>> cpupool_id only for DomUs).
>>> 
>>> But we're talking about dom0less as per the subject of the patch here.
>> 
>> Domains started using dom0less feature are not privileged and can’t do any operation
>> on cpu pools, that’s why I thought about Dom0.
> 
> It's all a matter of XSM policy what a domain may or may not be able
> to carry out.

Yes, you are right; however, so far I haven't seen this use case with a domU and the toolstack,
probably because it would also need xenstore etc. I'm aware that there is some work going
on to enable that for dom0less domUs as well, so my question is:

Do you see this as a blocker for this patch? Are you ok if I send it with just the comment
below, or do you think it requires some other work?

> 
>>>> I thought the name was self explanatory, but if I have to put a comment, would
>>>> It work something like that:
>>>> 
>>>> /* Cpupool id where the domain will be assigned on creation */
>>> 
>>> I don't view this kind of comment as necessary. I was really after
>>> calling out default behavior, along the lines of "0 to disable" that
>>> you can see in patch context.
>> 
>> Ok, could this work?
>> 
>> /* Domain cpupool id on creation. Default 0 as Pool-0 is always present. */
> 
> Hmm, I may have misguided you by talking about "default". There's no
> default here, as it's the caller's responsibility to set the field,
> and what's there will be used. Maybe "CPU pool to use; specify 0
> unless a specific existing pool is to be used".

Thank you, I will use it and update the patch.

> 
> Jan



* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-11  8:54             ` Luca Fancellu
@ 2022-04-11  9:08               ` Jan Beulich
  2022-04-11 10:20                 ` Luca Fancellu
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2022-04-11  9:08 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel

On 11.04.2022 10:54, Luca Fancellu wrote:
>> On 8 Apr 2022, at 13:10, Jan Beulich <jbeulich@suse.com> wrote:
>> On 08.04.2022 13:15, Luca Fancellu wrote:
>>>> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 08.04.2022 11:39, Luca Fancellu wrote:
>>>>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>>>>>     /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>>>>>     uint32_t vmtrace_size;
>>>>>>>
>>>>>>> +    uint32_t cpupool_id;
>>>>>>
>>>>>> This could do with a comment explaining default behavior. In particular
>>>>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>>>>> would be impossible to delete pool 0 (but there may of course be
>>>>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>>>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>>>>> lead to failure of VM creation. Otoh I understand that this would
>>>>>> already happen ahead of your change, preventing of which would
>>>>>> apparently be possible only via passing CPUPOOLID_NONE here.
>>>>>
>>>>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>>>>> cpupool_id only for DomUs).
>>>>
>>>> But we're talking about dom0less as per the subject of the patch here.
>>>
>>> Domains started using dom0less feature are not privileged and can’t do any operation
>>> on cpu pools, that’s why I thought about Dom0.
>>
>> It's all a matter of XSM policy what a domain may or may not be able
>> to carry out.
> 
> Yes, you are right; however, so far I haven't seen this use case with a domU and the toolstack,
> probably because it would also need xenstore etc. I'm aware that there is some work going
> on to enable that for dom0less domUs as well, so my question is:
> 
> Do you see this as a blocker for this patch? Are you ok if I send it with just the comment
> below, or do you think it requires some other work?

Agreement looks to be that there should be precautionary code added
to prevent the deleting of pool 0. This imo wants to be a prereq
change to the one here.

Jan




* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-11  9:08               ` Jan Beulich
@ 2022-04-11 10:20                 ` Luca Fancellu
  2022-04-11 10:23                   ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-11 10:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel



> On 11 Apr 2022, at 10:08, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 11.04.2022 10:54, Luca Fancellu wrote:
>>> On 8 Apr 2022, at 13:10, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 08.04.2022 13:15, Luca Fancellu wrote:
>>>>> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 08.04.2022 11:39, Luca Fancellu wrote:
>>>>>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>>>>>>     /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>>>>>>     uint32_t vmtrace_size;
>>>>>>>>
>>>>>>>> +    uint32_t cpupool_id;
>>>>>>> 
>>>>>>> This could do with a comment explaining default behavior. In particular
>>>>>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>>>>>> would be impossible to delete pool 0 (but there may of course be
>>>>>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>>>>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>>>>>> lead to failure of VM creation. Otoh I understand that this would
>>>>>>> already happen ahead of your change, preventing of which would
>>>>>>> apparently be possible only via passing CPUPOOLID_NONE here.
>>>>>> 
>>>>>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>>>>>> cpupool_id only for DomUs).
>>>>> 
>>>>> But we're talking about dom0less as per the subject of the patch here.
>>>> 
>>>> Domains started using dom0less feature are not privileged and can’t do any operation
>>>> on cpu pools, that’s why I thought about Dom0.
>>> 
>>> It's all a matter of XSM policy what a domain may or may not be able
>>> to carry out.
>> 
>> Yes, you are right; however, so far I haven't seen this use case with a domU and the toolstack,
>> probably because it would also need xenstore etc. I'm aware that there is some work going
>> on to enable that for dom0less domUs as well, so my question is:
>> 
>> Do you see this as a blocker for this patch? Are you ok if I send it with just the comment
>> below, or do you think it requires some other work?
> 
> Agreement looks to be that there should be precautionary code added
> to prevent the deleting of pool 0. This imo wants to be a prereq
> change to the one here.

Since we have the requirement of having cpu0 in Pool-0, I'm thinking about a check that doesn't
allow cpu0 to be removed from Pool-0; that will also cover the destroy case, because we can't
destroy a cpupool that is not empty.

In your opinion, is it ok to proceed with this change as a separate prereq patch?

> 
> Jan



* Re: [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-04-11 10:20                 ` Luca Fancellu
@ 2022-04-11 10:23                   ` Jan Beulich
  0 siblings, 0 replies; 34+ messages in thread
From: Jan Beulich @ 2022-04-11 10:23 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Juergen Gross, Bertrand Marquis, Wei Chen, Stefano Stabellini,
	Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Wei Liu, xen-devel

On 11.04.2022 12:20, Luca Fancellu wrote:
> 
> 
>> On 11 Apr 2022, at 10:08, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 11.04.2022 10:54, Luca Fancellu wrote:
>>>> On 8 Apr 2022, at 13:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 08.04.2022 13:15, Luca Fancellu wrote:
>>>>>> On 8 Apr 2022, at 11:24, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 08.04.2022 11:39, Luca Fancellu wrote:
>>>>>>>> On 8 Apr 2022, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>> On 08.04.2022 10:45, Luca Fancellu wrote:
>>>>>>>>> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>>>>>>>>>     /* Per-vCPU buffer size in bytes. 0 to disable. */
>>>>>>>>>     uint32_t vmtrace_size;
>>>>>>>>>
>>>>>>>>> +    uint32_t cpupool_id;
>>>>>>>>
>>>>>>>> This could do with a comment explaining default behavior. In particular
>>>>>>>> I wonder what 0 means: Looking at cpupool_destroy() I can't see that it
>>>>>>>> would be impossible to delete pool 0 (but there may of course be
>>>>>>>> reasons elsewhere, e.g. preventing pool 0 to ever go empty) - Jürgen?
>>>>>>>> Yet if pool 0 can be removed, zero being passed in here should imo not
>>>>>>>> lead to failure of VM creation. Otoh I understand that this would
>>>>>>>> already happen ahead of your change, preventing of which would
>>>>>>>> apparently be possible only via passing CPUPOOLID_NONE here.
>>>>>>>
>>>>>>> Pool-0 can’t be emptied because Dom0 is sitting there (the patch is modifying
>>>>>>> cpupool_id only for DomUs).
>>>>>>
>>>>>> But we're talking about dom0less as per the subject of the patch here.
>>>>>
>>>>> Domains started using dom0less feature are not privileged and can’t do any operation
>>>>> on cpu pools, that’s why I thought about Dom0.
>>>>
>>>> It's all a matter of XSM policy what a domain may or may not be able
>>>> to carry out.
>>>
>>> Yes, you are right; however, so far I haven't seen this use case with a domU and the toolstack,
>>> probably because it would also need xenstore etc. I'm aware that there is some work going
>>> on to enable that for dom0less domUs as well, so my question is:
>>>
>>> Do you see this as a blocker for this patch? Are you ok if I send it with just the comment
>>> below, or do you think it requires some other work?
>>
>> Agreement looks to be that there should be precautionary code added
>> to prevent the deleting of pool 0. This imo wants to be a prereq
>> change to the one here.
> 
> Since we have the requirement of having cpu0 in Pool-0, I'm thinking about a check that doesn't
> allow cpu0 to be removed from Pool-0; that will also cover the destroy case, because we can't
> destroy a cpupool that is not empty.
> 
> In your opinion, is it ok to proceed with this change as a separate prereq patch?

Well, I did already say so (see context above).

Jan




* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-11  8:29             ` Luca Fancellu
@ 2022-04-11 10:29               ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2022-04-11 10:29 UTC (permalink / raw)
  To: Luca.Fancellu, Jan Beulich
  Cc: julien, Juergen Gross, Bertrand.Marquis, wl, sstabellini,
	Volodymyr_Babchuk, Wei.Chen, george.dunlap, xen-devel,
	andrew.cooper3


On Mon, 2022-04-11 at 08:29 +0000, Luca Fancellu wrote:
> > On 11 Apr 2022, at 07:15, Jan Beulich <jbeulich@suse.com> wrote:
> > On 08.04.2022 22:25, Stefano Stabellini wrote:
> > > In my opinion it is best if the maintenance of boot_cpupools.c
> > > falls
> > > under "CPU POOLS". Luca, you can retain my reviewed-by when you
> > > add
> > > the change to MAINTAINERS or rename the file.
> > 
> > Yet even then, with cpupools.c living in sched/, ...
> > 
> > > I don't have an opinion if it should be called
> > > xen/common/boot_cpupools.c or xen/common/boot-cpupools.c
> > > 
> > 
> > ... this one may want living there are well.
> 
> Yes, I agree with you all; I will rename it to
> xen/common/sched/boot-cpupool.c and add it to MAINTAINERS.
> 
FWIW, I agree as well. With something like this, IMO:

CPU POOLS
M:      Juergen Gross <jgross@suse.com>
M:      Dario Faggioli <dfaggioli@suse.com>
S:      Supported
F:      xen/common/sched/*cpupool.c

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
                     ` (3 preceding siblings ...)
  2022-04-08 20:18   ` Stefano Stabellini
@ 2022-04-11 10:58   ` Julien Grall
  2022-04-11 11:30     ` Luca Fancellu
  4 siblings, 1 reply; 34+ messages in thread
From: Julien Grall @ 2022-04-11 10:58 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: bertrand.marquis, wei.chen, Stefano Stabellini,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Hi Luca,

On 08/04/2022 09:45, Luca Fancellu wrote:
> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> new file mode 100644
> index 000000000000..40cc8135c66f
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/cpupools.txt
> @@ -0,0 +1,140 @@
> +Boot time cpupools
> +==================
> +
> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
> +create cpupools during boot phase by specifying them in the device tree.
> +ACPI is not supported for this feature.
> +
> +Cpupool specification nodes shall be direct children of the /chosen node.
> +Each cpupool node contains the following properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,cpupool".
> +
> +- cpupool-cpus (mandatory)
> +
> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
> +    device_type = "cpu"), it can't be empty.
> +
> +- cpupool-sched (optional)
> +
> +    Must be a string having the name of a Xen scheduler. Check the sched=<...>
> +    boot argument for allowed values [1]. When this property is omitted, the Xen
> +    default scheduler will be used.
> +
> +
> +Constraints
> +===========
> +
> +If no cpupools are specified, all cpus will be assigned to one cpupool
> +implicitly created (Pool-0).
> +
> +If cpupool nodes are specified, but not every cpu brought up by Xen is assigned,
> +all the unassigned cpus will be assigned to an additional cpupool.
> +
> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> +stop.
> +
> +The boot cpu must be assigned to Pool-0, so the cpupool containing that core
> +will become Pool-0 automatically.
> +
> +
> +Examples
> +========
> +
> +On a system having two types of cores, the following device tree specification
> +will instruct Xen to create two cpupools:
> +
> +- The cpupool with id 0 will have 4 cpus assigned.
> +- The cpupool with id 1 will have 2 cpus assigned.

AFAIK, there is no guarantee that Xen will parse cpupool_a first. So it
would be possible that the IDs are inverted here.

This could happen if you want to keep the boot CPU in pool 0 and it is 
not cpu@0 (some bootloaders allows you to change the boot CPU).

Also, here you write "The cpupool with id X" but ...

> +A system having the cpupools specification below will instruct Xen to have three
> +cpupools:
> +
> +- The cpupool Pool-0 will have 2 cpus assigned.
> +- The cpupool Pool-1 will have 2 cpus assigned.
> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
> +  assigned cpus a53_3 and a53_4).

here you write "The cpupool Pool-X". Can you be consistent?
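For reference, a node using the bindings quoted above would look roughly like this (a sketch; the label, the node name and the cpu phandles are illustrative):

```dts
/* Hypothetical child of /chosen; &a72_0 and &a72_1 are phandles to cpu nodes. */
cp_a: cpupool-a {
    compatible = "xen,cpupool";
    cpupool-cpus = <&a72_0 &a72_1>;
    cpupool-sched = "credit2"; /* optional; omitted means the default scheduler */
};
```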

On a separate topic, I think dom0_max_vcpus() needs to be updated so that,
by default (i.e. when opt_dom0_max_vcpus == 0), the number of vCPUs matches
the number of vCPUs in the cpupool (I think 0) used to create dom0.

Cheers,

-- 
Julien Grall



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-11 10:58   ` Julien Grall
@ 2022-04-11 11:30     ` Luca Fancellu
  2022-04-11 11:42       ` Julien Grall
  0 siblings, 1 reply; 34+ messages in thread
From: Luca Fancellu @ 2022-04-11 11:30 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen developer discussion, Bertrand Marquis, Wei Chen,
	Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli



> On 11 Apr 2022, at 11:58, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 08/04/2022 09:45, Luca Fancellu wrote:
>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>> new file mode 100644
>> index 000000000000..40cc8135c66f
>> --- /dev/null
>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>> @@ -0,0 +1,140 @@
>> +Boot time cpupools
>> +==================
>> +
>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>> +create cpupools during boot phase by specifying them in the device tree.
>> +ACPI is not supported for this feature.
>> +
>> +Cpupool specification nodes shall be direct children of the /chosen node.
>> +Each cpupool node contains the following properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatibility string: "xen,cpupool".
>> +
>> +- cpupool-cpus (mandatory)
>> +
>> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
>> +    device_type = "cpu"), it can't be empty.
>> +
>> +- cpupool-sched (optional)
>> +
>> +    Must be a string having the name of a Xen scheduler. Check the sched=<...>
>> +    boot argument for allowed values [1]. When this property is omitted, the Xen
>> +    default scheduler will be used.
>> +
>> +
>> +Constraints
>> +===========
>> +
>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>> +implicitly created (Pool-0).
>> +
>> +If cpupool nodes are specified, but not every cpu brought up by Xen is assigned,
>> +all the unassigned cpus will be assigned to an additional cpupool.
>> +
>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>> +stop.
>> +
>> +The boot cpu must be assigned to Pool-0, so the cpupool containing that core
>> +will become Pool-0 automatically.
>> +
>> +
>> +Examples
>> +========
>> +
>> +On a system having two types of cores, the following device tree specification
>> +will instruct Xen to create two cpupools:
>> +
>> +- The cpupool with id 0 will have 4 cpus assigned.
>> +- The cpupool with id 1 will have 2 cpus assigned.
> 
> AFAIK, there is no guarantee that Xen will parse cpupool_a first. So it would be possible that the IDs are inverted here.
> 
> This could happen if you want to keep the boot CPU in pool 0 and it is not cpu@0 (some bootloaders allows you to change the boot CPU).

Hi Julien,

Yes I will specify that the boot cpu is listed in cpupool_a, so that cpupool will have id 0 regardless of the parsing order.

> 
> Also, here you write "The cpupool with id X" but ...
> 
>> +A system having the cpupools specification below will instruct Xen to have three
>> +cpupools:
>> +
>> +- The cpupool Pool-0 will have 2 cpus assigned.
>> +- The cpupool Pool-1 will have 2 cpus assigned.
>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
>> +  assigned cpus a53_3 and a53_4).
> 
> here you write "The cpupool Pool-X". Can you be consistent?

Sure, do you have a preference between “The cpupool with id X” and “Pool-X”? Otherwise I would go for Pool-X everywhere.

> 
> On a separate topic, I think dom0_max_vcpus() needs to be updated so that, by default (i.e. when opt_dom0_max_vcpus == 0), the number of vCPUs matches the number of vCPUs in the cpupool (I think 0) used to create dom0.

Yes, right, I didn't think about that; I think the change could be something like this:

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9c67a483d4a4..9787104c3d31 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -73,7 +73,10 @@ custom_param("dom0_mem", parse_dom0_mem);
 unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
-        opt_dom0_max_vcpus = num_online_cpus();
+    {
+        ASSERT(cpupool0);
+        opt_dom0_max_vcpus = cpumask_weight(cpupool_valid_cpus(cpupool0));
+    }
     if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
         opt_dom0_max_vcpus = MAX_VIRT_CPUS;

And if you agree I will include the changes for the v7.

Cheers,
Luca

> 
> Cheers,
> 
> -- 
> Julien Grall



* Re: [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time
  2022-04-11 11:30     ` Luca Fancellu
@ 2022-04-11 11:42       ` Julien Grall
  0 siblings, 0 replies; 34+ messages in thread
From: Julien Grall @ 2022-04-11 11:42 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen developer discussion, Bertrand Marquis, Wei Chen,
	Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli

Hi Luca,

On 11/04/2022 12:30, Luca Fancellu wrote:
>> On 11 Apr 2022, at 11:58, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 08/04/2022 09:45, Luca Fancellu wrote:
>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>> new file mode 100644
>>> index 000000000000..40cc8135c66f
>>> --- /dev/null
>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>> @@ -0,0 +1,140 @@
>>> +Boot time cpupools
>>> +==================
>>> +
>>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>>> +create cpupools during boot phase by specifying them in the device tree.
>>> +ACPI is not supported for this feature.
>>> +
>>> +Cpupool specification nodes shall be direct children of the /chosen node.
>>> +Each cpupool node contains the following properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatibility string: "xen,cpupool".
>>> +
>>> +- cpupool-cpus (mandatory)
>>> +
>>> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
>>> +    device_type = "cpu"), it can't be empty.
>>> +
>>> +- cpupool-sched (optional)
>>> +
>>> +    Must be a string having the name of a Xen scheduler. Check the sched=<...>
>>> +    boot argument for allowed values [1]. When this property is omitted, the Xen
>>> +    default scheduler will be used.
>>> +
>>> +
>>> +Constraints
>>> +===========
>>> +
>>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>>> +implicitly created (Pool-0).
>>> +
>>> +If cpupool nodes are specified, but not every cpu brought up by Xen is assigned,
>>> +all the unassigned cpus will be assigned to an additional cpupool.
>>> +
>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>> +stop.
>>> +
>>> +The boot cpu must be assigned to Pool-0, so the cpupool containing that core
>>> +will become Pool-0 automatically.
>>> +
>>> +
>>> +Examples
>>> +========
>>> +
>>> +On a system having two types of cores, the following device tree specification
>>> +will instruct Xen to create two cpupools:
>>> +
>>> +- The cpupool with id 0 will have 4 cpus assigned.
>>> +- The cpupool with id 1 will have 2 cpus assigned.
>>
>> AFAIK, there is no guarantee that Xen will parse cpupool_a first. So it would be possible that the IDs are inverted here.
>>
>> This could happen if you want to keep the boot CPU in pool 0 and it is not cpu@0 (some bootloaders allows you to change the boot CPU).
> Yes I will specify that the boot cpu is listed in cpupool_a, so that cpupool will have id 0 regardless of the parsing order.

This only covers the case where there are two cpupools.

AFAIK, there is no guarantee that Xen will parse the DT, or that the compiler
will generate the DT, the way you want. So for three cpupools, we still
don't know which pool will get ID 1 or 2.

See more below.

> 
>>
>> Also, here you write "The cpupool with id X" but ...
>>
>>> +A system having the cpupools specification below will instruct Xen to have three
>>> +cpupools:
>>> +
>>> +- The cpupool Pool-0 will have 2 cpus assigned.
>>> +- The cpupool Pool-1 will have 2 cpus assigned.
>>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
>>> +  assigned cpus a53_3 and a53_4).
>>
>> here you write "The cpupool Pool-X". Can you be consistent?
> 
> Sure, do you have a preference between “The cpupool with id X” and “Pool-X”? Otherwise I would go for Pool-X everywhere.

Using "cpupool with ID 0" is definitely wrong. Pool-X is marginally 
better because an admin may think that this name will match what we have 
in Xen.

So I think it would be better to use the node name and mention that
there is no guarantee which ID will be used by Xen.

> 
>>
>> On a separate topic, I think dom0_max_vcpus() needs to be updated so that, by default (i.e. when opt_dom0_max_vcpus == 0), the number of vCPUs matches the number of vCPUs in the cpupool (I think 0) used to create dom0.
> 
> Yes, right, I didn't think about that; I think the change could be something like this:
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9c67a483d4a4..9787104c3d31 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -73,7 +73,10 @@ custom_param("dom0_mem", parse_dom0_mem);
>   unsigned int __init dom0_max_vcpus(void)
>   {
>       if ( opt_dom0_max_vcpus == 0 )
> -        opt_dom0_max_vcpus = num_online_cpus();
> +    {
> +        ASSERT(cpupool0);
> +        opt_dom0_max_vcpus = cpumask_weight(cpupool_valid_cpus(cpupool0));
> +    }
>       if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
>           opt_dom0_max_vcpus = MAX_VIRT_CPUS;
> 
> And if you agree I will include the changes for the v7.

This should work.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2022-04-11 11:42 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-08  8:45 [PATCH v6 0/6] Boot time cpupools Luca Fancellu
2022-04-08  8:45 ` [PATCH v6 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
2022-04-08  9:54   ` Anthony PERARD
2022-04-08  8:45 ` [PATCH v6 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
2022-04-08  8:45 ` [PATCH v6 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
2022-04-08 10:29   ` Dario Faggioli
2022-04-08  8:45 ` [PATCH v6 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
2022-04-08  8:56   ` Jan Beulich
2022-04-08  9:06     ` Luca Fancellu
2022-04-08  9:01   ` Jan Beulich
2022-04-08 11:37     ` Luca Fancellu
2022-04-08 11:58       ` Jan Beulich
2022-04-08 20:25         ` Stefano Stabellini
2022-04-09  9:14           ` Juergen Gross
2022-04-11  6:15           ` Jan Beulich
2022-04-11  8:29             ` Luca Fancellu
2022-04-11 10:29               ` Dario Faggioli
2022-04-08 17:41   ` Julien Grall
2022-04-08 20:18   ` Stefano Stabellini
2022-04-11 10:58   ` Julien Grall
2022-04-11 11:30     ` Luca Fancellu
2022-04-11 11:42       ` Julien Grall
2022-04-08  8:45 ` [PATCH v6 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
2022-04-08  9:10   ` Jan Beulich
2022-04-08  9:39     ` Luca Fancellu
2022-04-08 10:24       ` Jan Beulich
2022-04-08 11:15         ` Luca Fancellu
2022-04-08 12:10           ` Jan Beulich
2022-04-11  8:54             ` Luca Fancellu
2022-04-11  9:08               ` Jan Beulich
2022-04-11 10:20                 ` Luca Fancellu
2022-04-11 10:23                   ` Jan Beulich
2022-04-08 10:37     ` Juergen Gross
2022-04-08  8:45 ` [PATCH v6 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
