* [PATCH v2 0/6] Boot time cpupools
@ 2022-03-10 17:10 Luca Fancellu
  2022-03-10 17:10 ` [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
                   ` (5 more replies)
  0 siblings, 6 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Wei Liu, Anthony PERARD, Juergen Gross, Dario Faggioli,
	George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall,
	Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk

This series introduces a feature for Xen to create cpupools at boot time. The
feature is enabled by a Kconfig option that is disabled by default.
The boot time cpupool feature relies on the device tree to describe the
cpupools.
The series also introduces the possibility to assign a dom0less guest to a
cpupool at boot time.

Here follows an example; Xen is built with CONFIG_BOOT_TIME_CPUPOOLS=y.

From the DT:

  [...]

  a72_0: cpu@0 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x0>;
    device_type = "cpu";
    [...]
  };

  a72_1: cpu@1 {
    compatible = "arm,cortex-a72";
    reg = <0x0 0x1>;
    device_type = "cpu";
    [...]
  };

  a53_0: cpu@100 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x100>;
    device_type = "cpu";
    [...]
  };

  a53_1: cpu@101 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x101>;
    device_type = "cpu";
    [...]
  };

  a53_2: cpu@102 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x102>;
    device_type = "cpu";
    [...]
  };

  a53_3: cpu@103 {
    compatible = "arm,cortex-a53";
    reg = <0x0 0x103>;
    device_type = "cpu";
    [...]
  };

  chosen {
    #size-cells = <0x1>;
    #address-cells = <0x1>;
    xen,dom0-bootargs = "...";
    xen,xen-bootargs = "...";

    s1: sched_a {
        compatible = "xen,scheduler";
        sched-name = "credit2";
    };
    s2: sched_b {
        compatible = "xen,scheduler";
        sched-name = "null";
    };

    cpupool0 {
      compatible = "xen,cpupool";
      cpupool-cpus = <&a72_0 &a72_1>;
      cpupool-sched = <&s1>;
    };

    cp1: cpupool1 {
      compatible = "xen,cpupool";
      cpupool-cpus = <&a53_0 &a53_1 &a53_2 &a53_3>;
      cpupool-sched = <&s2>;
    };

    module@0 {
      reg = <0x80080000 0x1300000>;
      compatible = "multiboot,module";
    };

    domU1 {
      #size-cells = <0x1>;
      #address-cells = <0x1>;
      compatible = "xen,domain";
      cpus = <1>;
      memory = <0 0xC0000>;
      vpl011;
      domain-cpupool = <&cp1>;

      module@92000000 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <0x92000000 0x1ffffff>;
        bootargs = "...";
      };
    };
  };

  [...]

The example DT instructs Xen to create two cpupools: the one with id 0 has two
physical cpus and the one with id 1 has 4 physical cpus. The second cpupool
uses the null scheduler, and from the /chosen node we can see that a dom0less
guest will be started on that cpupool.

In this particular case Xen must boot with different types of cpus, so the
boot argument hmp-unsafe must be enabled.
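The cpu-to-pool assignment the example DT above describes can be sketched as a
small standalone C table (illustrative only, not Xen code; the hardware ids
are the "reg" values of the cpu nodes, and pool_of() is a hypothetical helper):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative mapping from the example DT: the A72 cores (reg 0x0/0x1)
 * go to the cpupool with id 0 (credit2), the A53 cores (reg 0x100-0x103)
 * go to the cpupool with id 1 (null scheduler). */
struct cpu_assignment {
    unsigned int hwid;   /* value of the cpu node's "reg" property */
    int pool_id;
};

static const struct cpu_assignment map[] = {
    { 0x000, 0 }, { 0x001, 0 },          /* a72_0, a72_1 */
    { 0x100, 1 }, { 0x101, 1 },          /* a53_0, a53_1 */
    { 0x102, 1 }, { 0x103, 1 },          /* a53_2, a53_3 */
};

/* Return the pool id for a hardware cpu id, or -1 if the DT does not
 * describe that cpu. */
static int pool_of(unsigned int hwid)
{
    size_t i;

    for ( i = 0; i < sizeof(map) / sizeof(map[0]); i++ )
        if ( map[i].hwid == hwid )
            return map[i].pool_id;

    return -1;
}
```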


Luca Fancellu (6):
  tools/cpupools: Give a name to unnamed cpupools
  xen/sched: create public function for cpupools creation
  xen/sched: retrieve scheduler id by name
  xen/cpupool: Create different cpupools at boot time
  arm/dom0less: assign dom0less guests to cpupools
  xen/cpupool: Allow cpupool0 to use different scheduler

 docs/misc/arm/device-tree/booting.txt  |   5 +
 docs/misc/arm/device-tree/cpupools.txt | 156 +++++++++++++++++
 tools/helpers/xen-init-dom0.c          |  35 +++-
 tools/libs/light/libxl_utils.c         |   3 +-
 xen/arch/arm/domain_build.c            |  14 +-
 xen/common/Kconfig                     |   8 +
 xen/common/Makefile                    |   1 +
 xen/common/boot_cpupools.c             | 233 +++++++++++++++++++++++++
 xen/common/domain.c                    |   2 +-
 xen/common/sched/core.c                |  40 +++--
 xen/common/sched/cpupool.c             |  29 ++-
 xen/include/public/domctl.h            |   4 +-
 xen/include/xen/sched.h                |  58 ++++++
 13 files changed, 560 insertions(+), 28 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/common/boot_cpupools.c

-- 
2.17.1




* [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  8:17   ` Juergen Gross
  2022-03-10 17:10 ` [PATCH v2 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.chen, Wei Liu, Anthony PERARD, Juergen Gross

With the introduction of boot time cpupools, Xen can create many
different cpupools at boot time other than the cpupool with id 0.

Since these newly created cpupools can't have an entry in Xenstore at
creation time, create the entry using the xen-init-dom0 helper with
the usual convention: Pool-<cpupool id>.

Given the change, remove the check for poolid == 0 from
libxl_cpupoolid_to_name(...).
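The naming convention can be sketched as a small standalone helper (the
function name is illustrative, not the one used by xen-init-dom0; only the
"/local/pool/<id>/name" path and "Pool-<id>" name come from the patch):

```c
#define _GNU_SOURCE          /* for asprintf() */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the xenstore path and default name for a cpupool id, following
 * the convention described above. Returns 0 on success, -1 on allocation
 * failure; on success the caller owns *path and *name. */
static int make_pool_entry(unsigned int pool_id, char **path, char **name)
{
    if ( asprintf(path, "/local/pool/%u/name", pool_id) < 0 )
        return -1;

    if ( asprintf(name, "Pool-%u", pool_id) < 0 )
    {
        free(*path);
        return -1;
    }

    return 0;
}
```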

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
 - Remove unused variable, moved xc_cpupool_infofree
   ahead to simplify the code, use asprintf (Juergen)
---
 tools/helpers/xen-init-dom0.c  | 35 +++++++++++++++++++++++++++++++++-
 tools/libs/light/libxl_utils.c |  3 +--
 2 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index c99224a4b607..84286617790f 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -43,7 +43,9 @@ int main(int argc, char **argv)
     int rc;
     struct xs_handle *xsh = NULL;
     xc_interface *xch = NULL;
-    char *domname_string = NULL, *domid_string = NULL;
+    char *domname_string = NULL, *domid_string = NULL, *pool_path, *pool_name;
+    xc_cpupoolinfo_t *xcinfo;
+    unsigned int pool_id = 0;
     libxl_uuid uuid;
 
     /* Accept 0 or 1 argument */
@@ -114,6 +116,37 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    /* Create an entry in xenstore for each cpupool on the system */
+    do {
+        xcinfo = xc_cpupool_getinfo(xch, pool_id);
+        if (xcinfo != NULL) {
+            if (xcinfo->cpupool_id != pool_id)
+                pool_id = xcinfo->cpupool_id;
+            xc_cpupool_infofree(xch, xcinfo);
+            if (asprintf(&pool_path, "/local/pool/%d/name", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool path\n");
+                rc = 1;
+                goto out;
+            }
+            if (asprintf(&pool_name, "Pool-%d", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool name\n");
+                rc = 1;
+                goto out_err;
+            }
+            pool_id++;
+            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
+                          strlen(pool_name))) {
+                fprintf(stderr, "cannot set pool name\n");
+                rc = 1;
+            }
+            free(pool_name);
+out_err:
+            free(pool_path);
+            if (rc)
+                goto out;
+        }
+    } while(xcinfo != NULL);
+
     printf("Done setting up Dom0\n");
 
 out:
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index b91c2cafa223..81780da3ff40 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
 
     snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
     s = xs_read(ctx->xsh, XBT_NULL, path, &len);
-    if (!s && (poolid == 0))
-        return strdup("Pool-0");
+
     return s;
 }
 
-- 
2.17.1




* [PATCH v2 2/6] xen/sched: create public function for cpupools creation
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
  2022-03-10 17:10 ` [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  8:21   ` Juergen Gross
  2022-03-15 17:45   ` Andrew Cooper
  2022-03-10 17:10 ` [PATCH v2 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Juergen Gross, Dario Faggioli, George Dunlap,
	Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu

Create a new public function to create cpupools. It takes the scheduler
id as a parameter; a negative value means that the default Xen
scheduler will be used.
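The "negative means default" calling convention can be sketched standalone
(the id values and the default are made up here; in Xen they come from the
scheduler array and CONFIG_SCHED_DEFAULT):

```c
#include <assert.h>

/* Stand-in for the id of the configured default scheduler. */
#define DEFAULT_SCHED_ID 6

/* Resolve the scheduler id passed by a caller: any negative value
 * selects the default scheduler, non-negative values are used as-is. */
static int resolve_sched_id(int sched_id)
{
    return (sched_id < 0) ? DEFAULT_SCHED_ID : sched_id;
}
```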

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- cpupool_create_pool no longer checks for pool id uniqueness
  before calling cpupool_create. Modified commit message accordingly
---
 xen/common/sched/cpupool.c | 15 +++++++++++++++
 xen/include/xen/sched.h    | 16 ++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a6da4970506a..89a891af7076 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1219,6 +1219,21 @@ static void cpupool_hypfs_init(void)
 
 #endif /* CONFIG_HYPFS */
 
+struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
+{
+    struct cpupool *pool;
+
+    if ( sched_id < 0 )
+        sched_id = scheduler_get_default()->sched_id;
+
+    pool = cpupool_create(pool_id, sched_id);
+
+    BUG_ON(IS_ERR(pool));
+    cpupool_put(pool);
+
+    return pool;
+}
+
 static int __init cf_check cpupool_init(void)
 {
     unsigned int cpu;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 10ea969c7af9..47fc856e0fe0 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1145,6 +1145,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
+
+/*
+ * cpupool_create_pool - Creates a cpupool
+ * @pool_id: id of the pool to be created
+ * @sched_id: id of the scheduler to be used for the pool
+ *
+ * Creates a cpupool with pool_id id.
+ * The sched_id parameter identifies the scheduler to be used, if it is
+ * negative, the default scheduler of Xen will be used.
+ *
+ * returns:
+ *     pointer to the struct cpupool just created, on success
+ *     NULL, on cpupool creation error
+ */
+struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id);
+
 extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.17.1




* [PATCH v2 3/6] xen/sched: retrieve scheduler id by name
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
  2022-03-10 17:10 ` [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
  2022-03-10 17:10 ` [PATCH v2 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  8:13   ` Juergen Gross
  2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, George Dunlap, Dario Faggioli, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Add a static function to retrieve the scheduler pointer using the
scheduler name.

Add a public function to retrieve the scheduler id by the scheduler
name that makes use of the new static function.

Take the occasion to replace the open-coded scheduler search in
scheduler_init with the new static function.
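The lookup pattern the patch introduces can be sketched standalone: walk a
table of schedulers, match on opt_name, return the id or -1. The table
contents below are illustrative placeholders, not Xen's actual scheduler
array:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct sched_entry {
    const char *opt_name;
    int sched_id;
};

/* Illustrative scheduler table; ids are placeholders. */
static const struct sched_entry schedulers[] = {
    { "credit",  5 },
    { "credit2", 6 },
    { "null",    9 },
};

/* Return the id of the scheduler with the given opt_name, or -1 if no
 * scheduler by that name is registered. */
static int sched_id_by_name(const char *name)
{
    size_t i;

    for ( i = 0; i < sizeof(schedulers) / sizeof(schedulers[0]); i++ )
        if ( !strcmp(schedulers[i].opt_name, name) )
            return schedulers[i].sched_id;

    return -1;
}
```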

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- replace open coded scheduler search in scheduler_init (Juergen)
---
 xen/common/sched/core.c | 40 ++++++++++++++++++++++++++--------------
 xen/include/xen/sched.h | 11 +++++++++++
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 19ab67818106..48ee01420fb8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2947,10 +2947,30 @@ void scheduler_enable(void)
     scheduler_active = true;
 }
 
+static inline
+const struct scheduler *__init sched_get_by_name(const char *sched_name)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NUM_SCHEDULERS; i++ )
+        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
+            return schedulers[i];
+
+    return NULL;
+}
+
+int __init sched_get_id_by_name(const char *sched_name)
+{
+    const struct scheduler *scheduler = sched_get_by_name(sched_name);
+
+    return scheduler ? scheduler->sched_id : -1;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
+    const struct scheduler *scheduler;
     int i;
 
     scheduler_enable();
@@ -2981,25 +3001,17 @@ void __init scheduler_init(void)
                    schedulers[i]->opt_name);
             schedulers[i] = NULL;
         }
-
-        if ( schedulers[i] && !ops.name &&
-             !strcmp(schedulers[i]->opt_name, opt_sched) )
-            ops = *schedulers[i];
     }
 
-    if ( !ops.name )
+    scheduler = sched_get_by_name(opt_sched);
+    if ( !scheduler )
     {
         printk("Could not find scheduler: %s\n", opt_sched);
-        for ( i = 0; i < NUM_SCHEDULERS; i++ )
-            if ( schedulers[i] &&
-                 !strcmp(schedulers[i]->opt_name, CONFIG_SCHED_DEFAULT) )
-            {
-                ops = *schedulers[i];
-                break;
-            }
-        BUG_ON(!ops.name);
-        printk("Using '%s' (%s)\n", ops.name, ops.opt_name);
+        scheduler = sched_get_by_name(CONFIG_SCHED_DEFAULT);
+        BUG_ON(!scheduler);
+        printk("Using '%s' (%s)\n", scheduler->name, scheduler->opt_name);
     }
+    ops = *scheduler;
 
     if ( cpu_schedule_up(0) )
         BUG();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 47fc856e0fe0..2c10303f0187 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -756,6 +756,17 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
+
+/*
+ * sched_get_id_by_name - retrieves a scheduler id given a scheduler name
+ * @sched_name: scheduler name as a string
+ *
+ * returns:
+ *     positive value being the scheduler id, on success
+ *     negative value if the scheduler name is not found.
+ */
+int sched_get_id_by_name(const char *sched_name);
+
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.17.1




* [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
                   ` (2 preceding siblings ...)
  2022-03-10 17:10 ` [PATCH v2 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  3:57   ` Stefano Stabellini
                     ` (2 more replies)
  2022-03-10 17:10 ` [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
  2022-03-10 17:10 ` [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
  5 siblings, 3 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Introduce a way to create different cpupools at boot time. This is
particularly useful on ARM big.LITTLE systems, where there might be the
need to have different cpupools for each type of core; systems using
NUMA can likewise have a different cpupool for each node.

On arm, the feature relies on a device tree specification of the
cpupools to build the pools and assign cpus to them.

Documentation is created to explain the feature.
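The fallback behaviour described below (any cpu not claimed by a cpupool node
ends up in one extra pool) can be sketched with plain arrays standing in for
Xen's per-cpu map; the helper name and the fixed cpu count are illustrative:

```c
#include <assert.h>

#define N_CPUS 6

/* After DT parsing, pool_id[cpu] holds the pool a cpu was assigned to,
 * or -1 if no cpupool node claimed it. Place every unclaimed cpu in one
 * additional pool whose id follows the last one created. */
static void assign_leftover_cpus(int pool_id[N_CPUS], int next_pool_id)
{
    int cpu;

    for ( cpu = 0; cpu < N_CPUS; cpu++ )
        if ( pool_id[cpu] < 0 )
            pool_id[cpu] = next_pool_id;
}
```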

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- Move feature to common code (Juergen)
- Try to decouple dtb parse and cpupool creation to allow
  more way to specify cpupools (for example command line)
- Created standalone dt node for the scheduler so it can
  be used in future work to set scheduler specific
  parameters
- Use only auto generated ids for cpupools
---
 docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
 xen/common/Kconfig                     |   8 +
 xen/common/Makefile                    |   1 +
 xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
 xen/common/sched/cpupool.c             |   6 +-
 xen/include/xen/sched.h                |  19 +++
 6 files changed, 401 insertions(+), 1 deletion(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/common/boot_cpupools.c

diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
new file mode 100644
index 000000000000..d5a82ed0d45a
--- /dev/null
+++ b/docs/misc/arm/device-tree/cpupools.txt
@@ -0,0 +1,156 @@
+Boot time cpupools
+==================
+
+When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
+create cpupools during boot phase by specifying them in the device tree.
+
+Cpupool specification nodes shall be direct children of the /chosen node.
+Each cpupool node contains the following properties:
+
+- compatible (mandatory)
+
+    Must always include the compatibility string: "xen,cpupool".
+
+- cpupool-cpus (mandatory)
+
+    Must be a list of device tree phandles to nodes describing cpus (e.g.
+    having device_type = "cpu"); it can't be empty.
+
+- cpupool-sched (optional)
+
+    Must be a device tree phandle to a node with the "xen,scheduler"
+    compatible (description below). It has no effect when the cpupool refers
+    to cpupool number zero; in that case the default Xen scheduler is
+    selected (sched=<...> boot argument).
+
+
+A scheduler specification node is a device tree node that contains the following
+properties:
+
+- compatible (mandatory)
+
+    Must always include the compatibility string: "xen,scheduler".
+
+- sched-name (mandatory)
+
+    Must be a string with the name of a Xen scheduler; check the sched=<...>
+    boot argument for allowed values.
+
+
+Constraints
+===========
+
+If no cpupools are specified, all cpus will be assigned to one implicitly
+created cpupool (Pool-0).
+
+If cpupool nodes are specified, but not every cpu brought up by Xen is
+assigned, all the unassigned cpus will be assigned to an additional cpupool.
+
+If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
+stop.
+
+
+Examples
+========
+
+For a system having two types of cores, the following device tree
+specification will instruct Xen to create two cpupools:
+
+- The cpupool with id 0 will have 4 cpus assigned.
+- The cpupool with id 1 will have 2 cpus assigned.
+
+The following example can work only if hmp-unsafe=1 is passed to the Xen boot
+arguments; otherwise not all cores will be brought up by Xen and the cpupool
+creation process will stop Xen.
+
+
+a72_1: cpu@0 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x0>;
+        device_type = "cpu";
+        [...]
+};
+
+a72_2: cpu@1 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x1>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_1: cpu@100 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x100>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_2: cpu@101 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x101>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_3: cpu@102 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x102>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_4: cpu@103 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x103>;
+        device_type = "cpu";
+        [...]
+};
+
+chosen {
+
+    sched: sched_a {
+        compatible = "xen,scheduler";
+        sched-name = "credit2";
+    };
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = <&sched>;
+    };
+
+    [...]
+
+};
+
+
+A system with the cpupool specification below will instruct Xen to create
+three cpupools:
+
+- The cpupool Pool-0 will have 2 cpus assigned.
+- The cpupool Pool-1 will have 2 cpus assigned.
+- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with the
+  unassigned cpus a53_3 and a53_4).
+
+chosen {
+
+    sched: sched_a {
+        compatible = "xen,scheduler";
+        sched-name = "null";
+    };
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = <&sched>;
+    };
+
+    [...]
+
+};
\ No newline at end of file
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 64439438891c..dc9eed31682f 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -22,6 +22,14 @@ config GRANT_TABLE
 
 	  If unsure, say Y.
 
+config BOOT_TIME_CPUPOOLS
+	bool "Create cpupools at boot time"
+	depends on HAS_DEVICE_TREE
+	default n
+	help
+	  Creates cpupools during boot time and assigns cpus to them. Cpupools
+	  options can be specified in the device tree.
+
 config ALTERNATIVE_CALL
 	bool
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index dc8d3a13f5b8..c5949785ab28 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
+obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
 obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
new file mode 100644
index 000000000000..e8529a902d21
--- /dev/null
+++ b/xen/common/boot_cpupools.c
@@ -0,0 +1,212 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/boot_cpupools.c
+ *
+ * Code to create cpupools at boot time.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ */
+
+#include <xen/sched.h>
+
+#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
+#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+
+struct pool_map {
+    int pool_id;
+    int sched_id;
+    struct cpupool *pool;
+};
+
+static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
+    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
+static unsigned int __initdata next_pool_id;
+
+#ifdef CONFIG_ARM
+static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_cpu_ids; i++ )
+        if ( cpu_logical_map(i) == hwid )
+            return i;
+
+    return -1;
+}
+
+static int __init
+get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
+{
+    unsigned int cpu_reg;
+    int cpu_num;
+    const __be32 *prop;
+
+    prop = dt_get_property(cpu_node, "reg", NULL);
+    if ( !prop )
+        return BTCPUPOOLS_DT_NODE_NO_REG;
+
+    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
+
+    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
+    if ( cpu_num < 0 )
+        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
+
+    return cpu_num;
+}
+
+static int __init check_and_get_sched_id(const char* scheduler_name)
+{
+    int sched_id = sched_get_id_by_name(scheduler_name);
+
+    if ( sched_id < 0 )
+        panic("Scheduler %s does not exist!\n", scheduler_name);
+
+    return sched_id;
+}
+
+void __init btcpupools_dtb_parse(void)
+{
+    const struct dt_device_node *chosen, *node;
+
+    chosen = dt_find_node_by_path("/chosen");
+    if ( !chosen )
+        return;
+
+    dt_for_each_child_node(chosen, node)
+    {
+        const struct dt_device_node *phandle_node;
+        int sched_id = -1;
+        const char* scheduler_name;
+        unsigned int i = 0;
+
+        if ( !dt_device_is_compatible(node, "xen,cpupool") )
+            continue;
+
+        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
+        if ( phandle_node )
+        {
+            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
+                panic("cpupool-sched must be a xen,scheduler compatible "
+                      "node!\n");
+            if ( !dt_property_read_string(phandle_node, "sched-name",
+                                          &scheduler_name) )
+                sched_id = check_and_get_sched_id(scheduler_name);
+            else
+                panic("Error trying to read sched-name in %s!\n",
+                      dt_node_name(phandle_node));
+        }
+
+        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        if ( !phandle_node )
+            panic("Missing or empty cpupool-cpus property!\n");
+
+        while ( phandle_node )
+        {
+            int cpu_num;
+
+            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+
+            if ( cpu_num < 0 )
+                panic("Error retrieving logical cpu from node %s (%d)\n",
+                      dt_node_name(node), cpu_num);
+
+            if ( pool_cpu_map[cpu_num].pool_id != -1 )
+                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
+
+            pool_cpu_map[cpu_num].pool_id = next_pool_id;
+            pool_cpu_map[cpu_num].sched_id = sched_id;
+
+            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        }
+
+        /* Let Xen generate pool ids */
+        next_pool_id++;
+    }
+}
+#endif
+
+void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
+{
+    unsigned int cpu_num;
+
+    /*
+     * If there are no cpupools, the value of next_pool_id is zero, so the code
+     * below will assign every cpu to cpupool0 as the default behavior.
+     * When there are cpupools, the code below is assigning all the not
+     * assigned cpu to a new pool (next_pool_id value is the last id + 1).
+     * In the same loop we check if there is any assigned cpu that is not
+     * online.
+     */
+    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
+        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
+        {
+            if ( pool_cpu_map[cpu_num].pool_id < 0 )
+                pool_cpu_map[cpu_num].pool_id = next_pool_id;
+        }
+        else
+            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
+                panic("Pool-%d contains cpu%u that is not online!\n",
+                      pool_cpu_map[cpu_num].pool_id, cpu_num);
+
+#ifdef CONFIG_X86
+    /* Cpu0 must be in cpupool0 for x86 */
+    if ( pool_cpu_map[0].pool_id != 0 )
+    {
+        /* The cpupool containing cpu0 will become cpupool0 */
+        unsigned int swap_id = pool_cpu_map[0].pool_id;
+        for_each_cpu ( cpu_num, cpu_online_map )
+            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
+                pool_cpu_map[cpu_num].pool_id = 0;
+            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
+                pool_cpu_map[cpu_num].pool_id = swap_id;
+    }
+#endif
+
+    for_each_cpu ( cpu_num, cpu_online_map )
+    {
+        struct cpupool *pool = NULL;
+        int pool_id, sched_id;
+
+        pool_id = pool_cpu_map[cpu_num].pool_id;
+        sched_id = pool_cpu_map[cpu_num].sched_id;
+
+        if ( pool_id )
+        {
+            unsigned int i;
+
+            /* Look for previously created pool with id pool_id */
+            for ( i = 0; i < cpu_num; i++ )
+                if ( (pool_cpu_map[i].pool_id == pool_id) &&
+                     pool_cpu_map[i].pool )
+                {
+                    pool = pool_cpu_map[i].pool;
+                    break;
+                }
+
+            /* If no pool was created before, create it */
+            if ( !pool )
+                pool = cpupool_create_pool(pool_id, sched_id);
+            if ( !pool )
+                panic("Error creating pool id %u!\n", pool_id);
+        }
+        else
+            pool = cpupool0;
+
+        pool_cpu_map[cpu_num].pool = pool;
+        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
+    }
+}
+
+struct cpupool *__init btcpupools_get_cpupool(unsigned int cpu)
+{
+    return pool_cpu_map[cpu].pool;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 89a891af7076..b2495ad6d03e 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1247,12 +1247,16 @@ static int __init cf_check cpupool_init(void)
     cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
+    btcpupools_dtb_parse();
+
+    btcpupools_allocate_pools(&cpu_online_map);
+
     spin_lock(&cpupool_lock);
 
     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
 
     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+        cpupool_assign_cpu_locked(btcpupools_get_cpupool(cpu), cpu);
 
     spin_unlock(&cpupool_lock);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2c10303f0187..de4e8feea399 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1176,6 +1176,25 @@ extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void btcpupools_allocate_pools(const cpumask_t *cpu_online_map);
+struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
+
+#ifdef CONFIG_ARM
+void btcpupools_dtb_parse(void);
+#else
+static inline void btcpupools_dtb_parse(void) {}
+#endif
+
+#else
+static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map) {}
+static inline void btcpupools_dtb_parse(void) {}
+static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
+{
+    return cpupool0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
 
 /*
-- 
2.17.1




* [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
                   ` (3 preceding siblings ...)
  2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  3:58   ` Stefano Stabellini
  2022-03-10 17:10 ` [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
  5 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu

Introduce the domain-cpupool property of a xen,domain device tree node.
It specifies the device tree phandle of a xen,cpupool node that
identifies a cpupool, created at boot time, to which the guest will be
assigned on creation.

Add a member to the xen_domctl_createdomain public interface, so
XEN_DOMCTL_INTERFACE_VERSION is bumped.

Add a public function to retrieve a pool id from the device tree
cpupool node.

Update the documentation about the property.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the
  cpupool dtb node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
 xen/common/domain.c                   |  2 +-
 xen/include/public/domctl.h           |  4 +++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9c67a483d4a4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3241,6 +3242,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index e8529a902d21..01a69f894f14 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -11,6 +11,8 @@
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 struct pool_map {
     int pool_id;
@@ -53,6 +55,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get the first cpu listed in the cpupool; from its reg it's possible to
+     * retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num].pool_id;
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 351029f8b239..0827400f4f49 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
         if ( !d->pbuf )
             goto fail;
 
-        if ( (err = sched_init_domain(d, 0)) != 0 )
+        if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
             goto fail;
 
         if ( (err = late_hwdom_init(d)) != 0 )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..3d431a8031fd 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes.  0 to disable. */
     uint32_t vmtrace_size;
 
+    unsigned int cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index de4e8feea399..30a6538452bc 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1182,6 +1182,7 @@ struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
 
 #ifdef CONFIG_ARM
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 #else
 static inline void btcpupools_dtb_parse(void) {}
 #endif
@@ -1193,6 +1194,14 @@ static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
 {
     return cpupool0;
 }
+#ifdef CONFIG_ARM
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif
 
 #endif /* __SCHED_H__ */
-- 
2.17.1




* [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler
  2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
                   ` (4 preceding siblings ...)
  2022-03-10 17:10 ` [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
@ 2022-03-10 17:10 ` Luca Fancellu
  2022-03-11  8:25   ` Juergen Gross
  5 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-10 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.chen, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu, Juergen Gross,
	Dario Faggioli

Currently cpupool0 can only use the default scheduler: cpupool_create
has hardcoded behavior when creating pool 0 that doesn't allocate new
memory for the scheduler, but reuses the default scheduler structure
already in memory.

With this commit it is possible to allocate a different scheduler for
cpupool0 when using boot time cpupools. To achieve this, the hardcoded
behavior in cpupool_create is removed and the creation of cpupool0 is
moved out of cpupool_init.

When compiling without boot time cpupools enabled, the current
behavior is maintained (except that cpupool0 scheduler memory will be
allocated).

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- new patch
---
 xen/common/boot_cpupools.c | 39 ++++++++++++++++++--------------------
 xen/common/sched/cpupool.c |  8 +-------
 xen/include/xen/sched.h    |  5 ++++-
 3 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index 01a69f894f14..a8ae8c5b7852 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -189,31 +189,28 @@ void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
     {
         struct cpupool *pool = NULL;
         int pool_id, sched_id;
+        unsigned int i;
 
         pool_id = pool_cpu_map[cpu_num].pool_id;
         sched_id = pool_cpu_map[cpu_num].sched_id;
 
-        if ( pool_id )
-        {
-            unsigned int i;
-
-            /* Look for previously created pool with id pool_id */
-            for ( i = 0; i < cpu_num; i++ )
-                if ( (pool_cpu_map[i].pool_id == pool_id) &&
-                     pool_cpu_map[i].pool )
-                {
-                    pool = pool_cpu_map[i].pool;
-                    break;
-                }
-
-            /* If no pool was created before, create it */
-            if ( !pool )
-                pool = cpupool_create_pool(pool_id, sched_id);
-            if ( !pool )
-                panic("Error creating pool id %u!\n", pool_id);
-        }
-        else
-            pool = cpupool0;
+        /* Look for previously created pool with id pool_id */
+        for ( i = 0; i < cpu_num; i++ )
+            if ( (pool_cpu_map[i].pool_id == pool_id) && pool_cpu_map[i].pool )
+            {
+                pool = pool_cpu_map[i].pool;
+                break;
+            }
+
+        /* If no pool was created before, create it */
+        if ( !pool )
+            pool = cpupool_create_pool(pool_id, sched_id);
+        if ( !pool )
+            panic("Error creating pool id %u!\n", pool_id);
+
+        /* Keep track of cpupool id 0 with the global cpupool0 */
+        if ( !pool_id )
+            cpupool0 = pool;
 
         pool_cpu_map[cpu_num].pool = pool;
         printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index b2495ad6d03e..3d458a4932b2 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -312,10 +312,7 @@ static struct cpupool *cpupool_create(unsigned int poolid,
         c->cpupool_id = q->cpupool_id + 1;
     }
 
-    if ( poolid == 0 )
-        c->sched = scheduler_get_default();
-    else
-        c->sched = scheduler_alloc(sched_id);
+    c->sched = scheduler_alloc(sched_id);
     if ( IS_ERR(c->sched) )
     {
         ret = PTR_ERR(c->sched);
@@ -1242,9 +1239,6 @@ static int __init cf_check cpupool_init(void)
 
     cpupool_hypfs_init();
 
-    cpupool0 = cpupool_create(0, 0);
-    BUG_ON(IS_ERR(cpupool0));
-    cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
     btcpupools_dtb_parse();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 30a6538452bc..4007a3df4c1c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1188,7 +1188,10 @@ static inline void btcpupools_dtb_parse(void) {}
 #endif
 
 #else
-static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map) {}
+static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
+{
+    cpupool0 = cpupool_create_pool(0, -1);
+}
 static inline void btcpupools_dtb_parse(void) {}
 static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
 {
-- 
2.17.1




* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
@ 2022-03-11  3:57   ` Stefano Stabellini
  2022-03-11 14:11     ` Luca Fancellu
  2022-03-11  7:41   ` Jan Beulich
  2022-03-11  8:09   ` Juergen Gross
  2 siblings, 1 reply; 30+ messages in thread
From: Stefano Stabellini @ 2022-03-11  3:57 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, wei.chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli

On Thu, 10 Mar 2022, Luca Fancellu wrote:
> Introduce a way to create different cpupools at boot time. This is
> particularly useful on ARM big.LITTLE systems, where there might be the
> need to have different cpupools for each type of core; systems using
> NUMA can likewise have a different cpupool for each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
> - Move feature to common code (Juergen)
> - Try to decouple dtb parse and cpupool creation to allow
>   more way to specify cpupools (for example command line)
> - Created standalone dt node for the scheduler so it can
>   be used in future work to set scheduler specific
>   parameters
> - Use only auto generated ids for cpupools
> ---
>  docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>  xen/common/Kconfig                     |   8 +
>  xen/common/Makefile                    |   1 +
>  xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>  xen/common/sched/cpupool.c             |   6 +-
>  xen/include/xen/sched.h                |  19 +++
>  6 files changed, 401 insertions(+), 1 deletion(-)
>  create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>  create mode 100644 xen/common/boot_cpupools.c
> 
> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> new file mode 100644
> index 000000000000..d5a82ed0d45a
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/cpupools.txt
> @@ -0,0 +1,156 @@
> +Boot time cpupools
> +==================
> +
> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
> +create cpupools during boot phase by specifying them in the device tree.
> +
> +Cpupool specification nodes shall be direct children of the /chosen node.
> +Each cpupool node contains the following properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,cpupool".
> +
> +- cpupool-cpus (mandatory)
> +
> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
> +    device_type = "cpu"); it can't be empty.
> +
> +- cpupool-sched (optional)
> +
> +    Must be a device tree phandle to a node having "xen,scheduler" compatible
> +    (description below); it has no effect when the cpupool refers to cpupool
> +    number zero, in which case the default Xen scheduler is selected (sched=<...>
> +    boot argument).

This is *a lot* better.

The device tree part is nice. I have only one question left on it: why
do we need a separate scheduler node? Could the "cpupool-sched" property
be a simple string with the scheduler name?

E.g.:

    cpupool_a {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a53_1 &a53_2>;
    };
    cpupool_b {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a72_1 &a72_2>;
        cpupool-sched = "null";
    };


To me, it doesn't look like these new "scheduler specification nodes"
bring any benefits. I would just get rid of them.


> +A scheduler specification node is a device tree node that contains the following
> +properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,scheduler".
> +
> +- sched-name (mandatory)
> +
> +    Must be a string with the name of a Xen scheduler; check the sched=<...>
> +    boot argument for allowed values.
> +
> +
> +Constraints
> +===========
> +
> +If no cpupools are specified, all cpus will be assigned to one implicitly
> +created cpupool (Pool-0).
> +
> +If cpupool nodes are specified, but not every cpu brought up by Xen is
> +assigned, all the unassigned cpus will be assigned to an additional cpupool.
> +
> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> +stop.
> +
> +
> +Examples
> +========
> +
> +On a system having two types of cores, the following device tree specification
> +will instruct Xen to create two cpupools:
> +
> +- The cpupool with id 0 will have 4 cpus assigned.
> +- The cpupool with id 1 will have 2 cpus assigned.
> +
> +The following example can work only if hmp-unsafe=1 is passed in the Xen boot
> +arguments; otherwise not all cores will be brought up by Xen and the cpupool
> +creation process will stop Xen.
> +
> +
> +a72_1: cpu@0 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x0>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a72_2: cpu@1 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x1>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_1: cpu@100 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x100>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_2: cpu@101 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x101>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_3: cpu@102 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x102>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_4: cpu@103 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x103>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +chosen {
> +
> +    sched: sched_a {
> +        compatible = "xen,scheduler";
> +        sched-name = "credit2";
> +    };
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = <&sched>;
> +    };
> +
> +    [...]
> +
> +};
> +
> +
> +The cpupool specification below will instruct Xen to create three
> +cpupools:
> +
> +- The cpupool Pool-0 will have 2 cpus assigned.
> +- The cpupool Pool-1 will have 2 cpus assigned.
> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
> +  unassigned cpus, a53_3 and a53_4).
> +
> +chosen {
> +
> +    sched: sched_a {
> +        compatible = "xen,scheduler";
> +        sched-name = "null";
> +    };
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = <&sched>;
> +    };
> +
> +    [...]
> +
> +};
> \ No newline at end of file
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 64439438891c..dc9eed31682f 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -22,6 +22,14 @@ config GRANT_TABLE
>  
>  	  If unsure, say Y.
>  
> +config BOOT_TIME_CPUPOOLS
> +	bool "Create cpupools at boot time"
> +	depends on HAS_DEVICE_TREE
> +	default n
> +	help
> +	  Creates cpupools during boot time and assigns cpus to them. Cpupool
> +	  options can be specified in the device tree.
> +
>  config ALTERNATIVE_CALL
>  	bool
>  
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index dc8d3a13f5b8..c5949785ab28 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_ARGO) += argo.o
>  obj-y += bitmap.o
> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>  obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>  obj-$(CONFIG_CORE_PARKING) += core_parking.o
>  obj-y += cpu.o
> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
> new file mode 100644
> index 000000000000..e8529a902d21
> --- /dev/null
> +++ b/xen/common/boot_cpupools.c
> @@ -0,0 +1,212 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * xen/common/boot_cpupools.c
> + *
> + * Code to create cpupools at boot time for arm architecture.
> + *
> + * Copyright (C) 2022 Arm Ltd.
> + */
> +
> +#include <xen/sched.h>
> +
> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
> +
> +struct pool_map {
> +    int pool_id;
> +    int sched_id;
> +    struct cpupool *pool;
> +};
> +
> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
> +static unsigned int __initdata next_pool_id;
> +
> +#ifdef CONFIG_ARM
> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < nr_cpu_ids; i++ )
> +        if ( cpu_logical_map(i) == hwid )
> +            return i;
> +
> +    return -1;
> +}
> +
> +static int __init
> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
> +{
> +    unsigned int cpu_reg;
> +    int cpu_num;
> +    const __be32 *prop;
> +
> +    prop = dt_get_property(cpu_node, "reg", NULL);
> +    if ( !prop )
> +        return BTCPUPOOLS_DT_NODE_NO_REG;
> +
> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
> +
> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
> +    if ( cpu_num < 0 )
> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
> +
> +    return cpu_num;
> +}
> +
> +static int __init check_and_get_sched_id(const char* scheduler_name)
> +{
> +    int sched_id = sched_get_id_by_name(scheduler_name);
> +
> +    if ( sched_id < 0 )
> +        panic("Scheduler %s does not exist!\n", scheduler_name);
> +
> +    return sched_id;
> +}
> +
> +void __init btcpupools_dtb_parse(void)
> +{
> +    const struct dt_device_node *chosen, *node;
> +
> +    chosen = dt_find_node_by_path("/chosen");
> +    if ( !chosen )
> +        return;
> +
> +    dt_for_each_child_node(chosen, node)
> +    {
> +        const struct dt_device_node *phandle_node;
> +        int sched_id = -1;
> +        const char* scheduler_name;
> +        unsigned int i = 0;
> +
> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
> +            continue;
> +
> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
> +        if ( phandle_node )
> +        {
> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
> +                panic("cpupool-sched must be a xen,scheduler compatible "
> +                      "node!\n");
> +            if ( !dt_property_read_string(phandle_node, "sched-name",
> +                                          &scheduler_name) )
> +                sched_id = check_and_get_sched_id(scheduler_name);
> +            else
> +                panic("Error trying to read sched-name in %s!\n",
> +                      dt_node_name(phandle_node));
> +        }

it doesn't look like the "xen,scheduler" nodes are very useful from a dt
parsing perspective either


> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        if ( !phandle_node )
> +            panic("Missing or empty cpupool-cpus property!\n");
> +
> +        while ( phandle_node )
> +        {
> +            int cpu_num;
> +
> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
> +
> +            if ( cpu_num < 0 )
> +                panic("Error retrieving logical cpu from node %s (%d)\n",
> +                      dt_node_name(node), cpu_num);
> +
> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
> +
> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
> +            pool_cpu_map[cpu_num].sched_id = sched_id;
> +
> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        }
> +
> +        /* Let Xen generate pool ids */
> +        next_pool_id++;
> +    }
> +}
> +#endif
> +
> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
> +{
> +    unsigned int cpu_num;
> +
> +    /*
> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
> +     * below will assign every cpu to cpupool0 as the default behavior.
> +     * When there are cpupools, the code below assigns all the unassigned
> +     * cpus to a new pool (next_pool_id is the last id + 1).
> +     * In the same loop we check if there is any assigned cpu that is not
> +     * online.
> +     */
> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
> +        {
> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
> +        }
> +        else

Please add { }


> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
> +                panic("Pool-%d contains cpu%u that is not online!\n",
> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);



> +#ifdef CONFIG_X86
> +    /* Cpu0 must be in cpupool0 for x86 */
> +    if ( pool_cpu_map[0].pool_id != 0 )

Is that even possible on x86 given that btcpupools_dtb_parse cannot even
run on x86?

If it is not possible, I would remove the code below and simply panic
instead.


> +    {
> +        /* The cpupool containing cpu0 will become cpupool0 */
> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
> +        for_each_cpu ( cpu_num, cpu_online_map )
> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
> +                pool_cpu_map[cpu_num].pool_id = 0;
> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
> +                pool_cpu_map[cpu_num].pool_id = swap_id;
> +    }
> +#endif
> +
> +    for_each_cpu ( cpu_num, cpu_online_map )
> +    {
> +        struct cpupool *pool = NULL;
> +        int pool_id, sched_id;
> +
> +        pool_id = pool_cpu_map[cpu_num].pool_id;
> +        sched_id = pool_cpu_map[cpu_num].sched_id;
> +
> +        if ( pool_id )
> +        {
> +            unsigned int i;
> +
> +            /* Look for previously created pool with id pool_id */
> +            for ( i = 0; i < cpu_num; i++ )

Please add { }

But actually, the double loop seems a bit excessive for this. Could we
just have a single loop to cpupool_create_pool from 0 to next_pool_id?

We could get rid of pool_cpu_map[i].pool and just rely on
pool_cpu_map[i].pool_id. No need to update pool_cpu_map[i].pool if we
get rid of it: it doesn't look like it is very useful anyway?


> +                if ( (pool_cpu_map[i].pool_id == pool_id) &&
> +                     pool_cpu_map[i].pool )
> +                {
> +                    pool = pool_cpu_map[i].pool;
> +                    break;
> +                }
> +
> +            /* If no pool was created before, create it */
> +            if ( !pool )
> +                pool = cpupool_create_pool(pool_id, sched_id);
> +            if ( !pool )
> +                panic("Error creating pool id %u!\n", pool_id);
> +        }
> +        else
> +            pool = cpupool0;
> +
> +        pool_cpu_map[cpu_num].pool = pool;
> +        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
> +    }
> +}
> +
> +struct cpupool *__init btcpupools_get_cpupool(unsigned int cpu)
> +{
> +    return pool_cpu_map[cpu].pool;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index 89a891af7076..b2495ad6d03e 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -1247,12 +1247,16 @@ static int __init cf_check cpupool_init(void)
>      cpupool_put(cpupool0);
>      register_cpu_notifier(&cpu_nfb);
>  
> +    btcpupools_dtb_parse();
> +
> +    btcpupools_allocate_pools(&cpu_online_map);
> +
>      spin_lock(&cpupool_lock);
>  
>      cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>  
>      for_each_cpu ( cpu, &cpupool_free_cpus )
> -        cpupool_assign_cpu_locked(cpupool0, cpu);
> +        cpupool_assign_cpu_locked(btcpupools_get_cpupool(cpu), cpu);
>  
>      spin_unlock(&cpupool_lock);
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 2c10303f0187..de4e8feea399 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1176,6 +1176,25 @@ extern void cf_check dump_runq(unsigned char key);
>  
>  void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>  
> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
> +void btcpupools_allocate_pools(const cpumask_t *cpu_online_map);
> +struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
> +
> +#ifdef CONFIG_ARM
> +void btcpupools_dtb_parse(void);
> +#else
> +static inline void btcpupools_dtb_parse(void) {}
> +#endif
> +
> +#else
> +static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map) {}
> +static inline void btcpupools_dtb_parse(void) {}
> +static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
> +{
> +    return cpupool0;
> +}
> +#endif
> +
>  #endif /* __SCHED_H__ */
>  
>  /*
> -- 
> 2.17.1
> 



* Re: [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools
  2022-03-10 17:10 ` [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
@ 2022-03-11  3:58   ` Stefano Stabellini
  0 siblings, 0 replies; 30+ messages in thread
From: Stefano Stabellini @ 2022-03-11  3:58 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: xen-devel, wei.chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu

On Thu, 10 Mar 2022, Luca Fancellu wrote:
> Introduce the domain-cpupool property of a xen,domain device tree node.
> It holds the phandle of a xen,cpupool node that identifies a cpupool
> created at boot time, to which the guest will be assigned on creation.
> 
> Add a member to the public xen_domctl_createdomain interface, bumping
> XEN_DOMCTL_INTERFACE_VERSION accordingly.
> 
> Add a public function to retrieve the pool id from a device tree
> cpupool node.
> 
> Update documentation about the property.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
> - Moved cpupool_id from arch specific to common part (Juergen)
> - Implemented functions to retrieve the cpupool id from the
>   cpupool dtb node.
> ---
>  docs/misc/arm/device-tree/booting.txt |  5 +++++
>  xen/arch/arm/domain_build.c           | 14 +++++++++++++-
>  xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
>  xen/common/domain.c                   |  2 +-
>  xen/include/public/domctl.h           |  4 +++-
>  xen/include/xen/sched.h               |  9 +++++++++
>  6 files changed, 55 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index a94125394e35..7b4a29a2c293 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -188,6 +188,11 @@ with the following properties:
>      An empty property to request the memory of the domain to be
>      direct-map (guest physical address == physical address).
>  
> +- domain-cpupool
> +
> +    Optional. Handle to a xen,cpupool device tree node that identifies the
> +    cpupool where the guest will be started at boot.
> +
>  Under the "xen,domain" compatible node, one or more sub-nodes are present
>  for the DomU kernel and ramdisk.
>  
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 8be01678de05..9c67a483d4a4 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
>  void __init create_domUs(void)
>  {
>      struct dt_device_node *node;
> -    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
> +    const struct dt_device_node *cpupool_node,
> +                                *chosen = dt_find_node_by_path("/chosen");
>  
>      BUG_ON(chosen == NULL);
>      dt_for_each_child_node(chosen, node)
> @@ -3241,6 +3242,17 @@ void __init create_domUs(void)
>                                           vpl011_virq - 32 + 1);
>          }
>  
> +        /* Get the optional property domain-cpupool */
> +        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
> +        if ( cpupool_node )
> +        {
> +            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
> +            if ( pool_id < 0 )
> +                panic("Error getting cpupool id from domain-cpupool (%d)\n",
> +                      pool_id);
> +            d_cfg.cpupool_id = pool_id;
> +        }
> +
>          /*
>           * The variable max_init_domid is initialized with zero, so here it's
>           * very important to use the pre-increment operator to call
> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
> index e8529a902d21..01a69f894f14 100644
> --- a/xen/common/boot_cpupools.c
> +++ b/xen/common/boot_cpupools.c
> @@ -11,6 +11,8 @@
>  
>  #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>  #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
> +#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
> +#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
>  
>  struct pool_map {
>      int pool_id;
> @@ -53,6 +55,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
>      return cpu_num;
>  }
>  
> +int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
> +{
> +    const struct dt_device_node *phandle_node;
> +    int cpu_num;
> +
> +    if ( !dt_device_is_compatible(node, "xen,cpupool") )
> +        return BTCPUPOOLS_DT_WRONG_NODE;
> +    /*
> +     * Get the first cpu listed in the cpupool; from its reg it's possible
> +     * to retrieve the cpupool id.
> +     */
> +    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
> +    if ( !phandle_node )
> +        return BTCPUPOOLS_DT_CORRUPTED_NODE;
> +
> +    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
> +    if ( cpu_num < 0 )
> +        return cpu_num;
> +
> +    return pool_cpu_map[cpu_num].pool_id;
> +}

Nice!
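For reference, putting the two bindings together, a dom0less guest could be placed into a boot time cpupool roughly like this (a sketch only: the labels, memory size and vcpu count are made-up placeholders, not taken from the patch):

```dts
chosen {
    cpupool_a: cpupool_a {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a53_1 &a53_2>;
    };

    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        memory = <0x0 0x20000>;
        cpus = <1>;
        /* Start this guest in the cpupool described by cpupool_a */
        domain-cpupool = <&cpupool_a>;
    };
};
```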


>  static int __init check_and_get_sched_id(const char* scheduler_name)
>  {
>      int sched_id = sched_get_id_by_name(scheduler_name);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 351029f8b239..0827400f4f49 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
>          if ( !d->pbuf )
>              goto fail;
>  
> -        if ( (err = sched_init_domain(d, 0)) != 0 )
> +        if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
>              goto fail;
>  
>          if ( (err = late_hwdom_init(d)) != 0 )
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index b85e6170b0aa..3d431a8031fd 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -38,7 +38,7 @@
>  #include "hvm/save.h"
>  #include "memory.h"
>  
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
>  
>  /*
>   * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
> @@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
>      /* Per-vCPU buffer size in bytes.  0 to disable. */
>      uint32_t vmtrace_size;
>  
> +    unsigned int cpupool_id;

Please use an explicitly sized integer


>      struct xen_arch_domainconfig arch;
>  };
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index de4e8feea399..30a6538452bc 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1182,6 +1182,7 @@ struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
>  
>  #ifdef CONFIG_ARM
>  void btcpupools_dtb_parse(void);
> +int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
>  #else
>  static inline void btcpupools_dtb_parse(void) {}
>  #endif
> @@ -1193,6 +1194,14 @@ static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
>  {
>      return cpupool0;
>  }
> +#ifdef CONFIG_ARM
> +static inline int
> +btcpupools_get_domain_pool_id(const struct dt_device_node *node)
> +{
> +    return 0;
> +}
> +#endif
> +
>  #endif
>  
>  #endif /* __SCHED_H__ */
> -- 
> 2.17.1
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-03-11  3:57   ` Stefano Stabellini
@ 2022-03-11  7:41   ` Jan Beulich
  2022-03-11  8:09   ` Juergen Gross
  2 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2022-03-11  7:41 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Juergen Gross, Dario Faggioli, xen-devel

On 10.03.2022 18:10, Luca Fancellu wrote:
> +chosen {
> +
> +    sched: sched_a {
> +        compatible = "xen,scheduler";
> +        sched-name = "null";
> +    };
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = <&sched>;
> +    };
> +
> +    [...]
> +
> +};
> \ No newline at end of file

Only seeing this in the context of where I actually wanted to comment.
Please fix.

> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -22,6 +22,14 @@ config GRANT_TABLE
>  
>  	  If unsure, say Y.
>  
> +config BOOT_TIME_CPUPOOLS
> +	bool "Create cpupools at boot time"
> +	depends on HAS_DEVICE_TREE
> +	default n

Nit: Please omit this line - the default is N anyway unless explicitly
specified otherwise.

Jan




* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
  2022-03-11  3:57   ` Stefano Stabellini
  2022-03-11  7:41   ` Jan Beulich
@ 2022-03-11  8:09   ` Juergen Gross
  2022-03-11  8:56     ` Luca Fancellu
  2 siblings, 1 reply; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  8:09 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Dario Faggioli


[-- Attachment #1.1.1: Type: text/plain, Size: 14607 bytes --]

On 10.03.22 18:10, Luca Fancellu wrote:
> Introduce a way to create different cpupools at boot time; this is
> particularly useful on ARM big.LITTLE systems where there might be the
> need to have different cpupools for each type of core, but systems
> using NUMA can also have different cpu pools for each node.
> 
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
> 
> Documentation is created to explain the feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
> - Move feature to common code (Juergen)
> - Try to decouple dtb parse and cpupool creation to allow
>    more way to specify cpupools (for example command line)
> - Created standalone dt node for the scheduler so it can
>    be used in future work to set scheduler specific
>    parameters
> - Use only auto generated ids for cpupools
> ---
>   docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>   xen/common/Kconfig                     |   8 +
>   xen/common/Makefile                    |   1 +
>   xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>   xen/common/sched/cpupool.c             |   6 +-
>   xen/include/xen/sched.h                |  19 +++
>   6 files changed, 401 insertions(+), 1 deletion(-)
>   create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>   create mode 100644 xen/common/boot_cpupools.c
> 
> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> new file mode 100644
> index 000000000000..d5a82ed0d45a
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/cpupools.txt
> @@ -0,0 +1,156 @@
> +Boot time cpupools
> +==================
> +
> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
> +create cpupools during boot phase by specifying them in the device tree.
> +
> +Cpupool specification nodes shall be direct children of the /chosen node.
> +Each cpupool node contains the following properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,cpupool".
> +
> +- cpupool-cpus (mandatory)
> +
> +    Must be a list of device tree phandles to nodes describing cpus (e.g.
> +    having device_type = "cpu"); it can't be empty.
> +
> +- cpupool-sched (optional)
> +
> +    Must be a device tree phandle to a node having "xen,scheduler" compatible
> +    (description below). It has no effect when the cpupool refers to cpupool
> +    number zero; in that case the default Xen scheduler is selected (sched=<...>
> +    boot argument).
> +
> +
> +A scheduler specification node is a device tree node that contains the following
> +properties:
> +
> +- compatible (mandatory)
> +
> +    Must always include the compatibility string: "xen,scheduler".
> +
> +- sched-name (mandatory)
> +
> +    Must be a string having the name of a Xen scheduler; check the sched=<...>
> +    boot argument for allowed values.
> +
> +
> +Constraints
> +===========
> +
> +If no cpupools are specified, all cpus will be assigned to one cpupool
> +implicitly created (Pool-0).
> +
> +If cpupool nodes are specified, but not every cpu brought up by Xen is assigned,
> +all the unassigned cpus will be assigned to an additional cpupool.
> +
> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> +stop.
> +
> +
> +Examples
> +========
> +
> +On a system having two types of cores, the following device tree specification
> +will instruct Xen to create two cpupools:
> +
> +- The cpupool with id 0 will have 4 cpus assigned.
> +- The cpupool with id 1 will have 2 cpus assigned.
> +
> +The following example can work only if hmp-unsafe=1 is passed to the Xen boot
> +arguments; otherwise not all cores will be brought up by Xen and the cpupool
> +creation process will stop Xen.
> +
> +
> +a72_1: cpu@0 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x0>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a72_2: cpu@1 {
> +        compatible = "arm,cortex-a72";
> +        reg = <0x0 0x1>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_1: cpu@100 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x100>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_2: cpu@101 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x101>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_3: cpu@102 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x102>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +a53_4: cpu@103 {
> +        compatible = "arm,cortex-a53";
> +        reg = <0x0 0x103>;
> +        device_type = "cpu";
> +        [...]
> +};
> +
> +chosen {
> +
> +    sched: sched_a {
> +        compatible = "xen,scheduler";
> +        sched-name = "credit2";
> +    };
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = <&sched>;
> +    };
> +
> +    [...]
> +
> +};
> +
> +
> +A system having the cpupools specification below will instruct Xen to have three
> +cpupools:
> +
> +- The cpupool Pool-0 will have 2 cpus assigned.
> +- The cpupool Pool-1 will have 2 cpus assigned.
> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with the
> +  unassigned cpus a53_3 and a53_4).
> +
> +chosen {
> +
> +    sched: sched_a {
> +        compatible = "xen,scheduler";
> +        sched-name = "null";
> +    };
> +    cpupool_a {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a53_1 &a53_2>;
> +    };
> +    cpupool_b {
> +        compatible = "xen,cpupool";
> +        cpupool-cpus = <&a72_1 &a72_2>;
> +        cpupool-sched = <&sched>;
> +    };
> +
> +    [...]
> +
> +};
> \ No newline at end of file
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 64439438891c..dc9eed31682f 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -22,6 +22,14 @@ config GRANT_TABLE
>   
>   	  If unsure, say Y.
>   
> +config BOOT_TIME_CPUPOOLS
> +	bool "Create cpupools at boot time"
> +	depends on HAS_DEVICE_TREE
> +	default n
> +	help
> +	  Creates cpupools during boot time and assigns cpus to them. Cpupool
> +	  options can be specified in the device tree.
> +
>   config ALTERNATIVE_CALL
>   	bool
>   
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index dc8d3a13f5b8..c5949785ab28 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -1,5 +1,6 @@
>   obj-$(CONFIG_ARGO) += argo.o
>   obj-y += bitmap.o
> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>   obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>   obj-$(CONFIG_CORE_PARKING) += core_parking.o
>   obj-y += cpu.o
> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
> new file mode 100644
> index 000000000000..e8529a902d21
> --- /dev/null
> +++ b/xen/common/boot_cpupools.c
> @@ -0,0 +1,212 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * xen/common/boot_cpupools.c
> + *
> + * Code to create cpupools at boot time for arm architecture.

Please drop the arm reference here.

> + *
> + * Copyright (C) 2022 Arm Ltd.
> + */
> +
> +#include <xen/sched.h>
> +
> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)

Move those inside the #ifdef below, please

> +
> +struct pool_map {
> +    int pool_id;
> +    int sched_id;
> +    struct cpupool *pool;
> +};
> +
> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
> +static unsigned int __initdata next_pool_id;
> +
> +#ifdef CONFIG_ARM

Shouldn't this be CONFIG_HAS_DEVICE_TREE?

> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < nr_cpu_ids; i++ )
> +        if ( cpu_logical_map(i) == hwid )
> +            return i;
> +
> +    return -1;
> +}
> +
> +static int __init
> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
> +{
> +    unsigned int cpu_reg;
> +    int cpu_num;
> +    const __be32 *prop;
> +
> +    prop = dt_get_property(cpu_node, "reg", NULL);
> +    if ( !prop )
> +        return BTCPUPOOLS_DT_NODE_NO_REG;
> +
> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
> +
> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
> +    if ( cpu_num < 0 )
> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
> +
> +    return cpu_num;
> +}
> +
> +static int __init check_and_get_sched_id(const char* scheduler_name)
> +{
> +    int sched_id = sched_get_id_by_name(scheduler_name);
> +
> +    if ( sched_id < 0 )
> +        panic("Scheduler %s does not exists!\n", scheduler_name);
> +
> +    return sched_id;
> +}
> +
> +void __init btcpupools_dtb_parse(void)
> +{
> +    const struct dt_device_node *chosen, *node;
> +
> +    chosen = dt_find_node_by_path("/chosen");
> +    if ( !chosen )
> +        return;
> +
> +    dt_for_each_child_node(chosen, node)
> +    {
> +        const struct dt_device_node *phandle_node;
> +        int sched_id = -1;
> +        const char* scheduler_name;
> +        unsigned int i = 0;
> +
> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
> +            continue;
> +
> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
> +        if ( phandle_node )
> +        {
> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
> +                panic("cpupool-sched must be a xen,scheduler compatible "
> +                      "node!\n");
> +            if ( !dt_property_read_string(phandle_node, "sched-name",
> +                                          &scheduler_name) )
> +                sched_id = check_and_get_sched_id(scheduler_name);
> +            else
> +                panic("Error trying to read sched-name in %s!\n",
> +                      dt_node_name(phandle_node));
> +        }
> +
> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        if ( !phandle_node )
> +            panic("Missing or empty cpupool-cpus property!\n");
> +
> +        while ( phandle_node )
> +        {
> +            int cpu_num;
> +
> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
> +
> +            if ( cpu_num < 0 )
> +                panic("Error retrieving logical cpu from node %s (%d)\n",
> +                      dt_node_name(node), cpu_num);
> +
> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
> +
> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
> +            pool_cpu_map[cpu_num].sched_id = sched_id;
> +
> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> +        }
> +
> +        /* Let Xen generate pool ids */
> +        next_pool_id++;
> +    }
> +}
> +#endif
> +
> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)

Either rename the parameter or drop it completely.

Right now shadowing cpu_online_map is no real problem, because the only
caller is passing the global cpu_online_map, but if another caller with
different needs came along, this would be rather confusing.

With the x86 specific loop in this function I don't see how a different
map than the global cpu_online_map could work, so I think dropping the
parameter is the best move.

> +{
> +    unsigned int cpu_num;
> +
> +    /*
> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
> +     * below will assign every cpu to cpupool0 as the default behavior.
> +     * When there are cpupools, the code below assigns every unassigned
> +     * cpu to a new pool (next_pool_id value is the last id + 1).
> +     * In the same loop we check if there is any assigned cpu that is not
> +     * online.
> +     */
> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
> +        {
> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
> +        }
> +        else
> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
> +                panic("Pool-%d contains cpu%u that is not online!\n",
> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
> +
> +#ifdef CONFIG_X86
> +    /* Cpu0 must be in cpupool0 for x86 */
> +    if ( pool_cpu_map[0].pool_id != 0 )
> +    {
> +        /* The cpupool containing cpu0 will become cpupool0 */
> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
> +        for_each_cpu ( cpu_num, cpu_online_map )
> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
> +                pool_cpu_map[cpu_num].pool_id = 0;
> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
> +                pool_cpu_map[cpu_num].pool_id = swap_id;
> +    }
> +#endif
> +
> +    for_each_cpu ( cpu_num, cpu_online_map )
> +    {
> +        struct cpupool *pool = NULL;
> +        int pool_id, sched_id;
> +
> +        pool_id = pool_cpu_map[cpu_num].pool_id;
> +        sched_id = pool_cpu_map[cpu_num].sched_id;
> +
> +        if ( pool_id )
> +        {
> +            unsigned int i;
> +
> +            /* Look for previously created pool with id pool_id */
> +            for ( i = 0; i < cpu_num; i++ )
> +                if ( (pool_cpu_map[i].pool_id == pool_id) &&
> +                     pool_cpu_map[i].pool )
> +                {
> +                    pool = pool_cpu_map[i].pool;
> +                    break;
> +                }
> +
> +            /* If no pool was created before, create it */
> +            if ( !pool )
> +                pool = cpupool_create_pool(pool_id, sched_id);
> +            if ( !pool )
> +                panic("Error creating pool id %u!\n", pool_id);
> +        }
> +        else
> +            pool = cpupool0;
> +
> +        pool_cpu_map[cpu_num].pool = pool;
> +        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
> +    }
> +}
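The cpu0-in-Pool-0 renumbering above is self-contained enough to model outside Xen. A minimal user-space sketch of the same swap (a plain int array stands in for pool_cpu_map, and the helper name is invented for this example):

```c
#include <assert.h>

/*
 * Renumber pool ids so that the pool containing cpu0 becomes pool 0,
 * mirroring the x86-specific swap in btcpupools_allocate_pools().
 * Sketch only: operates on a plain array instead of Xen's pool_cpu_map.
 */
void normalize_pool0(int *pool_id, unsigned int nr_cpus)
{
    unsigned int cpu;
    int swap_id = pool_id[0];

    if ( swap_id == 0 )
        return;

    /* Exchange ids 0 and swap_id; every other pool id is untouched. */
    for ( cpu = 0; cpu < nr_cpus; cpu++ )
    {
        if ( pool_id[cpu] == swap_id )
            pool_id[cpu] = 0;
        else if ( pool_id[cpu] == 0 )
            pool_id[cpu] = swap_id;
    }
}
```

Note that pool membership is unchanged by the swap; only the two ids are exchanged, so cpu0 always ends up in pool 0.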


Juergen

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 3149 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]


* Re: [PATCH v2 3/6] xen/sched: retrieve scheduler id by name
  2022-03-10 17:10 ` [PATCH v2 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
@ 2022-03-11  8:13   ` Juergen Gross
  0 siblings, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  8:13 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, George Dunlap, Dario Faggioli, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu



On 10.03.22 18:10, Luca Fancellu wrote:
> Add a static function to retrieve the scheduler pointer using the
> scheduler name.
> 
> Add a public function to retrieve the scheduler id by the scheduler
> name that makes use of the new static function.
> 
> Take the occasion to replace open coded scheduler search with the
> new static function in scheduler_init.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



* Re: [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools
  2022-03-10 17:10 ` [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
@ 2022-03-11  8:17   ` Juergen Gross
  0 siblings, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  8:17 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel; +Cc: wei.chen, Wei Liu, Anthony PERARD



On 10.03.22 18:10, Luca Fancellu wrote:
> With the introduction of boot time cpupools, Xen can create many
> different cpupools at boot time other than cpupool with id 0.
> 
> Since these newly created cpupools can't have an
> entry in Xenstore, create the entry using xen-init-dom0
> helper with the usual convention: Pool-<cpupool id>.
> 
> Given the change, remove the check for poolid == 0 from
> libxl_cpupoolid_to_name(...).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
>   - Remove unused variable, moved xc_cpupool_infofree
>     ahead to simplify the code, use asprintf (Juergen)
> ---
>   tools/helpers/xen-init-dom0.c  | 35 +++++++++++++++++++++++++++++++++-
>   tools/libs/light/libxl_utils.c |  3 +--
>   2 files changed, 35 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
> index c99224a4b607..84286617790f 100644
> --- a/tools/helpers/xen-init-dom0.c
> +++ b/tools/helpers/xen-init-dom0.c
> @@ -43,7 +43,9 @@ int main(int argc, char **argv)
>       int rc;
>       struct xs_handle *xsh = NULL;
>       xc_interface *xch = NULL;
> -    char *domname_string = NULL, *domid_string = NULL;
> +    char *domname_string = NULL, *domid_string = NULL, *pool_path, *pool_name;
> +    xc_cpupoolinfo_t *xcinfo;
> +    unsigned int pool_id = 0;
>       libxl_uuid uuid;
>   
>       /* Accept 0 or 1 argument */
> @@ -114,6 +116,37 @@ int main(int argc, char **argv)
>           goto out;
>       }
>   
> +    /* Create an entry in xenstore for each cpupool on the system */
> +    do {
> +        xcinfo = xc_cpupool_getinfo(xch, pool_id);
> +        if (xcinfo != NULL) {
> +            if (xcinfo->cpupool_id != pool_id)

I think you can just drop this if.

With or without this,

Reviewed-by: Juergen Gross <jgross@suse.com>

> +                pool_id = xcinfo->cpupool_id;
> +            xc_cpupool_infofree(xch, xcinfo);
> +            if (asprintf(&pool_path, "/local/pool/%d/name", pool_id) <= 0) {
> +                fprintf(stderr, "cannot allocate memory for pool path\n");
> +                rc = 1;
> +                goto out;
> +            }
> +            if (asprintf(&pool_name, "Pool-%d", pool_id) <= 0) {
> +                fprintf(stderr, "cannot allocate memory for pool name\n");
> +                rc = 1;
> +                goto out_err;
> +            }
> +            pool_id++;
> +            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
> +                          strlen(pool_name))) {
> +                fprintf(stderr, "cannot set pool name\n");
> +                rc = 1;
> +            }
> +            free(pool_name);
> +out_err:
> +            free(pool_path);
> +            if ( rc )
> +                goto out;
> +        }
> +    } while(xcinfo != NULL);
> +
>       printf("Done setting up Dom0\n");
>   
>   out:
> diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
> index b91c2cafa223..81780da3ff40 100644
> --- a/tools/libs/light/libxl_utils.c
> +++ b/tools/libs/light/libxl_utils.c
> @@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
>   
>       snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
>       s = xs_read(ctx->xsh, XBT_NULL, path, &len);
> -    if (!s && (poolid == 0))
> -        return strdup("Pool-0");
> +
>       return s;
>   }
>   
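The xenstore naming convention used above ("/local/pool/<id>/name" holding "Pool-<id>") is easy to exercise in isolation. A hedged sketch of just the string construction (no xenstore involved; the helper name make_pool_entry is invented for this example):

```c
#define _GNU_SOURCE   /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/*
 * Build the xenstore path and default name for a cpupool id, following
 * the convention used by xen-init-dom0 above.  Sketch only: the real
 * helper then writes the pair to xenstore via xs_write().
 */
int make_pool_entry(unsigned int pool_id, char **path, char **name)
{
    if ( asprintf(path, "/local/pool/%u/name", pool_id) < 0 )
        return -1;
    if ( asprintf(name, "Pool-%u", pool_id) < 0 )
    {
        free(*path);
        return -1;
    }
    return 0;
}
```

The caller owns both strings and must free them after use, as the loop in the patch does with pool_path and pool_name.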


Juergen



* Re: [PATCH v2 2/6] xen/sched: create public function for cpupools creation
  2022-03-10 17:10 ` [PATCH v2 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
@ 2022-03-11  8:21   ` Juergen Gross
  2022-03-15 17:45   ` Andrew Cooper
  1 sibling, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  8:21 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Dario Faggioli, George Dunlap, Andrew Cooper,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu



On 10.03.22 18:10, Luca Fancellu wrote:
> Create new public function to create cpupools, can take as parameter
> the scheduler id or a negative value that means the default Xen
> scheduler will be used.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
> - cpupool_create_pool doesn't check anymore for pool id uniqueness
>    before calling cpupool_create. Modified commit message accordingly
> ---
>   xen/common/sched/cpupool.c | 15 +++++++++++++++
>   xen/include/xen/sched.h    | 16 ++++++++++++++++
>   2 files changed, 31 insertions(+)
> 
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index a6da4970506a..89a891af7076 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -1219,6 +1219,21 @@ static void cpupool_hypfs_init(void)
>   
>   #endif /* CONFIG_HYPFS */
>   
> +struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
> +{
> +    struct cpupool *pool;
> +
> +    if ( sched_id < 0 )
> +        sched_id = scheduler_get_default()->sched_id;
> +
> +    pool = cpupool_create(pool_id, sched_id);
> +
> +    BUG_ON(IS_ERR(pool));
> +    cpupool_put(pool);
> +
> +    return pool;
> +}
> +
>   static int __init cf_check cpupool_init(void)
>   {
>       unsigned int cpu;
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 10ea969c7af9..47fc856e0fe0 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1145,6 +1145,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
>   int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
>   unsigned int cpupool_get_id(const struct domain *d);
>   const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
> +
> +/*
> + * cpupool_create_pool - Creates a cpupool
> + * @pool_id: id of the pool to be created
> + * @sched_id: id of the scheduler to be used for the pool
> + *
> + * Creates a cpupool with pool_id id.
> + * The sched_id parameter identifies the scheduler to be used, if it is
> + * negative, the default scheduler of Xen will be used.
> + *
> + * returns:
> + *     pointer to the struct cpupool just created, on success
> + *     NULL, on cpupool creation error

Either add a "." in the previous line, or rephrase (e.g.:
"pointer to the struct cpupool just created, or NULL in case of error").

I happened to read it as "pointer to the struct cpupool just created,
on success NULL, on cpupool creation error" first, which was weird.

With that fixed:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



* Re: [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler
  2022-03-10 17:10 ` [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
@ 2022-03-11  8:25   ` Juergen Gross
  0 siblings, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  8:25 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli



On 10.03.22 18:10, Luca Fancellu wrote:
> Currently cpupool0 can use only the default scheduler, and
> cpupool_create has an harcoded behavior when creating the pool 0

Nit: s/harcoded/hardcoded/

> that doesn't allocate new memory for the scheduler, but uses the
> default scheduler structure in memory.
> 
> With this commit it is possible to allocate a different scheduler for
> the cpupool0 when using the boot time cpupool.
> To achieve this the hardcoded behavior in cpupool_create is removed
> and the cpupool0 creation is moved.
> 
> When compiling without boot time cpupools enabled, the current
> behavior is maintained (except that cpupool0 scheduler memory will be
> allocated).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11  8:09   ` Juergen Gross
@ 2022-03-11  8:56     ` Luca Fancellu
  2022-03-11  9:29       ` Juergen Gross
  0 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-11  8:56 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli

Hi Juergen,

Thanks for your review

> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
> 
> On 10.03.22 18:10, Luca Fancellu wrote:
>> Introduce a way to create different cpupools at boot time; this is
>> particularly useful on ARM big.LITTLE systems where there might be the
>> need to have different cpupools for each type of core, but systems
>> using NUMA can also have different cpu pools for each node.
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> Documentation is created to explain the feature.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes in v2:
>> - Move feature to common code (Juergen)
>> - Try to decouple dtb parse and cpupool creation to allow
>>   more way to specify cpupools (for example command line)
>> - Created standalone dt node for the scheduler so it can
>>   be used in future work to set scheduler specific
>>   parameters
>> - Use only auto generated ids for cpupools
>> ---
>>  docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>>  xen/common/Kconfig                     |   8 +
>>  xen/common/Makefile                    |   1 +
>>  xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>>  xen/common/sched/cpupool.c             |   6 +-
>>  xen/include/xen/sched.h                |  19 +++
>>  6 files changed, 401 insertions(+), 1 deletion(-)
>>  create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>  create mode 100644 xen/common/boot_cpupools.c
>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>> new file mode 100644
>> index 000000000000..d5a82ed0d45a
>> --- /dev/null
>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>> @@ -0,0 +1,156 @@
>> +Boot time cpupools
>> +==================
>> +
>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>> +create cpupools during boot phase by specifying them in the device tree.
>> +
>> +Cpupools specification nodes shall be direct childs of /chosen node.
>> +Each cpupool node contains the following properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatiblity string: "xen,cpupool".
>> +
>> +- cpupool-cpus (mandatory)
>> +
>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>> +    device_type = "cpu"), it can't be empty.
>> +
>> +- cpupool-sched (optional)
>> +
>> +    Must be a device tree phandle to a node having "xen,scheduler" compatible
>> +    (description below), it has no effect when the cpupool refers to the cpupool
>> +    number zero, in that case the default Xen scheduler is selected (sched=<...>
>> +    boot argument).
>> +
>> +
>> +A scheduler specification node is a device tree node that contains the following
>> +properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatiblity string: "xen,scheduler".
>> +
>> +- sched-name (mandatory)
>> +
>> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
>> +    boot argument for allowed values.
>> +
>> +
>> +Constraints
>> +===========
>> +
>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>> +implicitly created (Pool-0).
>> +
>> +If cpupools node are specified, but not every cpu brought up by Xen is assigned,
>> +all the not assigned cpu will be assigned to an additional cpupool.
>> +
>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>> +stop.
>> +
>> +
>> +Examples
>> +========
>> +
>> +A system having two types of core, the following device tree specification will
>> +instruct Xen to have two cpupools:
>> +
>> +- The cpupool with id 0 will have 4 cpus assigned.
>> +- The cpupool with id 1 will have 2 cpus assigned.
>> +
>> +The following example can work only if hmp-unsafe=1 is passed to Xen boot
>> +arguments, otherwise not all cores will be brought up by Xen and the cpupool
>> +creation process will stop Xen.
>> +
>> +
>> +a72_1: cpu@0 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x0>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a72_2: cpu@1 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x1>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_1: cpu@100 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x100>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_2: cpu@101 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x101>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_3: cpu@102 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x102>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_4: cpu@103 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x103>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +chosen {
>> +
>> +    sched: sched_a {
>> +        compatible = "xen,scheduler";
>> +        sched-name = "credit2";
>> +    };
>> +    cpupool_a {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
>> +    };
>> +    cpupool_b {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a72_1 &a72_2>;
>> +        cpupool-sched = <&sched>;
>> +    };
>> +
>> +    [...]
>> +
>> +};
>> +
>> +
>> +A system having the cpupools specification below will instruct Xen to have three
>> +cpupools:
>> +
>> +- The cpupool Pool-0 will have 2 cpus assigned.
>> +- The cpupool Pool-1 will have 2 cpus assigned.
>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
>> +  assigned cpus a53_3 and a53_4).
>> +
>> +chosen {
>> +
>> +    sched: sched_a {
>> +        compatible = "xen,scheduler";
>> +        sched-name = "null";
>> +    };
>> +    cpupool_a {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a53_1 &a53_2>;
>> +    };
>> +    cpupool_b {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a72_1 &a72_2>;
>> +        cpupool-sched = <&sched>;
>> +    };
>> +
>> +    [...]
>> +
>> +};
>> \ No newline at end of file
>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>> index 64439438891c..dc9eed31682f 100644
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -22,6 +22,14 @@ config GRANT_TABLE
>>    	  If unsure, say Y.
>>  +config BOOT_TIME_CPUPOOLS
>> +	bool "Create cpupools at boot time"
>> +	depends on HAS_DEVICE_TREE
>> +	default n
>> +	help
>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupools
>> +	  options can be specified in the device tree.
>> +
>>  config ALTERNATIVE_CALL
>>  	bool
>>  diff --git a/xen/common/Makefile b/xen/common/Makefile
>> index dc8d3a13f5b8..c5949785ab28 100644
>> --- a/xen/common/Makefile
>> +++ b/xen/common/Makefile
>> @@ -1,5 +1,6 @@
>>  obj-$(CONFIG_ARGO) += argo.o
>>  obj-y += bitmap.o
>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>>  obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>>  obj-$(CONFIG_CORE_PARKING) += core_parking.o
>>  obj-y += cpu.o
>> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
>> new file mode 100644
>> index 000000000000..e8529a902d21
>> --- /dev/null
>> +++ b/xen/common/boot_cpupools.c
>> @@ -0,0 +1,212 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * xen/common/boot_cpupools.c
>> + *
>> + * Code to create cpupools at boot time for arm architecture.
> 
> Please drop the arm reference here.
> 
>> + *
>> + * Copyright (C) 2022 Arm Ltd.
>> + */
>> +
>> +#include <xen/sched.h>
>> +
>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
> 
> Move those inside the #ifdef below, please
> 
>> +
>> +struct pool_map {
>> +    int pool_id;
>> +    int sched_id;
>> +    struct cpupool *pool;
>> +};
>> +
>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>> +static unsigned int __initdata next_pool_id;
>> +
>> +#ifdef CONFIG_ARM
> 
> Shouldn't this be CONFIG_HAS_DEVICE_TREE?

Yes, the only problem is that get_logical_cpu_from_hw_id uses the Arm-specific
cpu_logical_map(…), so what do you think is the better way here?
Should I keep everything under CONFIG_HAS_DEVICE_TREE and put only
get_logical_cpu_from_hw_id under CONFIG_ARM, as in the code below?

#ifdef CONFIG_ARM
static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
{
    unsigned int i;

    for ( i = 0; i < nr_cpu_ids; i++ )
        if ( cpu_logical_map(i) == hwid )
            return i;

    return -1;
}
#else
static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
{
    /* not implemented */
    return -1;
}
#endif


> 
>> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
>> +{
>> +    unsigned int i;
>> +
>> +    for ( i = 0; i < nr_cpu_ids; i++ )
>> +        if ( cpu_logical_map(i) == hwid )
>> +            return i;
>> +
>> +    return -1;
>> +}
>> +
>> +static int __init
>> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
>> +{
>> +    unsigned int cpu_reg, cpu_num;
>> +    const __be32 *prop;
>> +
>> +    prop = dt_get_property(cpu_node, "reg", NULL);
>> +    if ( !prop )
>> +        return BTCPUPOOLS_DT_NODE_NO_REG;
>> +
>> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
>> +
>> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
>> +    if ( cpu_num < 0 )
>> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
>> +
>> +    return cpu_num;
>> +}
>> +
>> +static int __init check_and_get_sched_id(const char* scheduler_name)
>> +{
>> +    int sched_id = sched_get_id_by_name(scheduler_name);
>> +
>> +    if ( sched_id < 0 )
>> +        panic("Scheduler %s does not exists!\n", scheduler_name);
>> +
>> +    return sched_id;
>> +}
>> +
>> +void __init btcpupools_dtb_parse(void)
>> +{
>> +    const struct dt_device_node *chosen, *node;
>> +
>> +    chosen = dt_find_node_by_path("/chosen");
>> +    if ( !chosen )
>> +        return;
>> +
>> +    dt_for_each_child_node(chosen, node)
>> +    {
>> +        const struct dt_device_node *phandle_node;
>> +        int sched_id = -1;
>> +        const char* scheduler_name;
>> +        unsigned int i = 0;
>> +
>> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
>> +            continue;
>> +
>> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
>> +        if ( phandle_node )
>> +        {
>> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
>> +                panic("cpupool-sched must be a xen,scheduler compatible"
>> +                      "node!\n");
>> +            if ( !dt_property_read_string(phandle_node, "sched-name",
>> +                                          &scheduler_name) )
>> +                sched_id = check_and_get_sched_id(scheduler_name);
>> +            else
>> +                panic("Error trying to read sched-name in %s!\n",
>> +                      dt_node_name(phandle_node));
>> +        }
>> +
>> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>> +        if ( !phandle_node )
>> +            panic("Missing or empty cpupool-cpus property!\n");
>> +
>> +        while ( phandle_node )
>> +        {
>> +            int cpu_num;
>> +
>> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
>> +
>> +            if ( cpu_num < 0 )
>> +                panic("Error retrieving logical cpu from node %s (%d)\n",
>> +                      dt_node_name(node), cpu_num);
>> +
>> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
>> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
>> +
>> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
>> +            pool_cpu_map[cpu_num].sched_id = sched_id;
>> +
>> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>> +        }
>> +
>> +        /* Let Xen generate pool ids */
>> +        next_pool_id++;
>> +    }
>> +}
>> +#endif
>> +
>> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
> 
> Either rename the parameter or drop it completely.
> 
> Right now shadowing cpu_online_map is no real problem, because the only
> caller is passing the global cpu_online_map, but in case another caller
> with different needs would come up, this would be rather confusing.
> 
> With the x86 specific loop in this function I don't see how a different
> map than the global cpu_online_map could work, so I think dropping the
> parameter is the best move.
> 
>> +{
>> +    unsigned int cpu_num;
>> +
>> +    /*
>> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
>> +     * below will assign every cpu to cpupool0 as the default behavior.
>> +     * When there are cpupools, the code below is assigning all the not
>> +     * assigned cpu to a new pool (next_pool_id value is the last id + 1).
>> +     * In the same loop we check if there is any assigned cpu that is not
>> +     * online.
>> +     */
>> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
>> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
>> +        {
>> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
>> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
>> +        }
>> +        else
>> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
>> +                panic("Pool-%d contains cpu%u that is not online!\n",
>> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
>> +
>> +#ifdef CONFIG_X86
>> +    /* Cpu0 must be in cpupool0 for x86 */
>> +    if ( pool_cpu_map[0].pool_id != 0 )
>> +    {
>> +        /* The cpupool containing cpu0 will become cpupool0 */
>> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
>> +        for_each_cpu ( cpu_num, cpu_online_map )
>> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
>> +                pool_cpu_map[cpu_num].pool_id = 0;
>> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
>> +                pool_cpu_map[cpu_num].pool_id = swap_id;
>> +    }
>> +#endif
>> +
>> +    for_each_cpu ( cpu_num, cpu_online_map )
>> +    {
>> +        struct cpupool *pool = NULL;
>> +        int pool_id, sched_id;
>> +
>> +        pool_id = pool_cpu_map[cpu_num].pool_id;
>> +        sched_id = pool_cpu_map[cpu_num].sched_id;
>> +
>> +        if ( pool_id )
>> +        {
>> +            unsigned int i;
>> +
>> +            /* Look for previously created pool with id pool_id */
>> +            for ( i = 0; i < cpu_num; i++ )
>> +                if ( (pool_cpu_map[i].pool_id == pool_id) &&
>> +                     pool_cpu_map[i].pool )
>> +                {
>> +                    pool = pool_cpu_map[i].pool;
>> +                    break;
>> +                }
>> +
>> +            /* If no pool was created before, create it */
>> +            if ( !pool )
>> +                pool = cpupool_create_pool(pool_id, sched_id);
>> +            if ( !pool )
>> +                panic("Error creating pool id %u!\n", pool_id);
>> +        }
>> +        else
>> +            pool = cpupool0;
>> +
>> +        pool_cpu_map[cpu_num].pool = pool;
>> +        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
>> +    }
>> +}
> 

Will fix your other findings in the next series.

Cheers,
Luca

> 
> Juergen



* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11  8:56     ` Luca Fancellu
@ 2022-03-11  9:29       ` Juergen Gross
  2022-03-11  9:46         ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Juergen Gross @ 2022-03-11  9:29 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli


On 11.03.22 09:56, Luca Fancellu wrote:
> Hi Juergen,
> 
> Thanks for your review
> 
>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 10.03.22 18:10, Luca Fancellu wrote:
>>> Introduce a way to create different cpupools at boot time, this is
>>> particularly useful on ARM big.LITTLE system where there might be the
>>> need to have different cpupools for each type of core, but also
>>> systems using NUMA can have different cpu pools for each node.
>>> The feature on arm relies on a specification of the cpupools from the
>>> device tree to build pools and assign cpus to them.
>>> Documentation is created to explain the feature.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes in v2:
>>> - Move feature to common code (Juergen)
>>> - Try to decouple dtb parse and cpupool creation to allow
>>>    more way to specify cpupools (for example command line)
>>> - Created standalone dt node for the scheduler so it can
>>>    be used in future work to set scheduler specific
>>>    parameters
>>> - Use only auto generated ids for cpupools
>>> ---
>>>   docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>>>   xen/common/Kconfig                     |   8 +
>>>   xen/common/Makefile                    |   1 +
>>>   xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>>>   xen/common/sched/cpupool.c             |   6 +-
>>>   xen/include/xen/sched.h                |  19 +++
>>>   6 files changed, 401 insertions(+), 1 deletion(-)
>>>   create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>   create mode 100644 xen/common/boot_cpupools.c
>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>> new file mode 100644
>>> index 000000000000..d5a82ed0d45a
>>> --- /dev/null
>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>> @@ -0,0 +1,156 @@
>>> +Boot time cpupools
>>> +==================
>>> +
>>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>>> +create cpupools during boot phase by specifying them in the device tree.
>>> +
>>> +Cpupools specification nodes shall be direct childs of /chosen node.
>>> +Each cpupool node contains the following properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatiblity string: "xen,cpupool".
>>> +
>>> +- cpupool-cpus (mandatory)
>>> +
>>> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
>>> +    device_type = "cpu"), it can't be empty.
>>> +
>>> +- cpupool-sched (optional)
>>> +
>>> +    Must be a device tree phandle to a node having "xen,scheduler" compatible
>>> +    (description below), it has no effect when the cpupool refers to the cpupool
>>> +    number zero, in that case the default Xen scheduler is selected (sched=<...>
>>> +    boot argument).
>>> +
>>> +
>>> +A scheduler specification node is a device tree node that contains the following
>>> +properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatiblity string: "xen,scheduler".
>>> +
>>> +- sched-name (mandatory)
>>> +
>>> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
>>> +    boot argument for allowed values.
>>> +
>>> +
>>> +Constraints
>>> +===========
>>> +
>>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>>> +implicitly created (Pool-0).
>>> +
>>> +If cpupools node are specified, but not every cpu brought up by Xen is assigned,
>>> +all the not assigned cpu will be assigned to an additional cpupool.
>>> +
>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>> +stop.
>>> +
>>> +
>>> +Examples
>>> +========
>>> +
>>> +A system having two types of core, the following device tree specification will
>>> +instruct Xen to have two cpupools:
>>> +
>>> +- The cpupool with id 0 will have 4 cpus assigned.
>>> +- The cpupool with id 1 will have 2 cpus assigned.
>>> +
>>> +The following example can work only if hmp-unsafe=1 is passed to Xen boot
>>> +arguments, otherwise not all cores will be brought up by Xen and the cpupool
>>> +creation process will stop Xen.
>>> +
>>> +
>>> +a72_1: cpu@0 {
>>> +        compatible = "arm,cortex-a72";
>>> +        reg = <0x0 0x0>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a72_2: cpu@1 {
>>> +        compatible = "arm,cortex-a72";
>>> +        reg = <0x0 0x1>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_1: cpu@100 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x100>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_2: cpu@101 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x101>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_3: cpu@102 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x102>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_4: cpu@103 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x103>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +chosen {
>>> +
>>> +    sched: sched_a {
>>> +        compatible = "xen,scheduler";
>>> +        sched-name = "credit2";
>>> +    };
>>> +    cpupool_a {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
>>> +    };
>>> +    cpupool_b {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>> +        cpupool-sched = <&sched>;
>>> +    };
>>> +
>>> +    [...]
>>> +
>>> +};
>>> +
>>> +
>>> +A system having the cpupools specification below will instruct Xen to have three
>>> +cpupools:
>>> +
>>> +- The cpupool Pool-0 will have 2 cpus assigned.
>>> +- The cpupool Pool-1 will have 2 cpus assigned.
>>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
>>> +  assigned cpus a53_3 and a53_4).
>>> +
>>> +chosen {
>>> +
>>> +    sched: sched_a {
>>> +        compatible = "xen,scheduler";
>>> +        sched-name = "null";
>>> +    };
>>> +    cpupool_a {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a53_1 &a53_2>;
>>> +    };
>>> +    cpupool_b {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>> +        cpupool-sched = <&sched>;
>>> +    };
>>> +
>>> +    [...]
>>> +
>>> +};
>>> \ No newline at end of file
>>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>>> index 64439438891c..dc9eed31682f 100644
>>> --- a/xen/common/Kconfig
>>> +++ b/xen/common/Kconfig
>>> @@ -22,6 +22,14 @@ config GRANT_TABLE
>>>     	  If unsure, say Y.
>>>   +config BOOT_TIME_CPUPOOLS
>>> +	bool "Create cpupools at boot time"
>>> +	depends on HAS_DEVICE_TREE
>>> +	default n
>>> +	help
>>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupools
>>> +	  options can be specified in the device tree.
>>> +
>>>   config ALTERNATIVE_CALL
>>>   	bool
>>>   diff --git a/xen/common/Makefile b/xen/common/Makefile
>>> index dc8d3a13f5b8..c5949785ab28 100644
>>> --- a/xen/common/Makefile
>>> +++ b/xen/common/Makefile
>>> @@ -1,5 +1,6 @@
>>>   obj-$(CONFIG_ARGO) += argo.o
>>>   obj-y += bitmap.o
>>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>>>   obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>>>   obj-$(CONFIG_CORE_PARKING) += core_parking.o
>>>   obj-y += cpu.o
>>> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
>>> new file mode 100644
>>> index 000000000000..e8529a902d21
>>> --- /dev/null
>>> +++ b/xen/common/boot_cpupools.c
>>> @@ -0,0 +1,212 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * xen/common/boot_cpupools.c
>>> + *
>>> + * Code to create cpupools at boot time for arm architecture.
>>
>> Please drop the arm reference here.
>>
>>> + *
>>> + * Copyright (C) 2022 Arm Ltd.
>>> + */
>>> +
>>> +#include <xen/sched.h>
>>> +
>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>
>> Move those inside the #ifdef below, please
>>
>>> +
>>> +struct pool_map {
>>> +    int pool_id;
>>> +    int sched_id;
>>> +    struct cpupool *pool;
>>> +};
>>> +
>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>> +static unsigned int __initdata next_pool_id;
>>> +
>>> +#ifdef CONFIG_ARM
>>
>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
> 
> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the arm specific
> cpu_logical_map(…), so what do you think it’s the better way here?
> Do you think I should have everything under CONFIG_HAS_DEVICE_TREE
> and get_logical_cpu_from_hw_id under CONFIG_ARM like in this way below?

Hmm, what is the hwid used for on Arm? I guess this could be similar
to the x86 acpi-id?

So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
and add a related x86 function to x86 code. Depending on the answer to
above question this could either be get_cpu_id(), or maybe an identity
function.


Juergen


* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11  9:29       ` Juergen Gross
@ 2022-03-11  9:46         ` Jan Beulich
  2022-03-11 10:18           ` Juergen Gross
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2022-03-11  9:46 UTC (permalink / raw)
  To: Juergen Gross, Luca Fancellu
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Wei Liu, Dario Faggioli

On 11.03.2022 10:29, Juergen Gross wrote:
> On 11.03.22 09:56, Luca Fancellu wrote:
>>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>> On 10.03.22 18:10, Luca Fancellu wrote:
>>>> --- /dev/null
>>>> +++ b/xen/common/boot_cpupools.c
>>>> @@ -0,0 +1,212 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * xen/common/boot_cpupools.c
>>>> + *
>>>> + * Code to create cpupools at boot time for arm architecture.
>>>
>>> Please drop the arm reference here.
>>>
>>>> + *
>>>> + * Copyright (C) 2022 Arm Ltd.
>>>> + */
>>>> +
>>>> +#include <xen/sched.h>
>>>> +
>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>
>>> Move those inside the #ifdef below, please
>>>
>>>> +
>>>> +struct pool_map {
>>>> +    int pool_id;
>>>> +    int sched_id;
>>>> +    struct cpupool *pool;
>>>> +};
>>>> +
>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>> +static unsigned int __initdata next_pool_id;
>>>> +
>>>> +#ifdef CONFIG_ARM
>>>
>>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
>>
>> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the arm specific
>> cpu_logical_map(…), so what do you think it’s the better way here?
>> Do you think I should have everything under CONFIG_HAS_DEVICE_TREE
>> and get_logical_cpu_from_hw_id under CONFIG_ARM like in this way below?
> 
> Hmm, what is the hwid used for on Arm? I guess this could be similar
> to the x86 acpi-id?

Since there's going to be only one of DT or ACPI, if anything this could
be the APIC ID and then ...

> So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
> and add a related x86 function to x86 code. Depending on the answer to
> above question this could either be get_cpu_id(), or maybe an identity
> function.

... a lookup loop over x86_cpu_to_apicid[] (I thought we had a function
doing so, but right now I can't find one).

Jan




* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11  9:46         ` Jan Beulich
@ 2022-03-11 10:18           ` Juergen Gross
  2022-03-11 11:29             ` Luca Fancellu
  0 siblings, 1 reply; 30+ messages in thread
From: Juergen Gross @ 2022-03-11 10:18 UTC (permalink / raw)
  To: Jan Beulich, Luca Fancellu
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Wei Liu, Dario Faggioli


On 11.03.22 10:46, Jan Beulich wrote:
> On 11.03.2022 10:29, Juergen Gross wrote:
>> On 11.03.22 09:56, Luca Fancellu wrote:
>>>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>>> On 10.03.22 18:10, Luca Fancellu wrote:
>>>>> --- /dev/null
>>>>> +++ b/xen/common/boot_cpupools.c
>>>>> @@ -0,0 +1,212 @@
>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>> +/*
>>>>> + * xen/common/boot_cpupools.c
>>>>> + *
>>>>> + * Code to create cpupools at boot time for arm architecture.
>>>>
>>>> Please drop the arm reference here.
>>>>
>>>>> + *
>>>>> + * Copyright (C) 2022 Arm Ltd.
>>>>> + */
>>>>> +
>>>>> +#include <xen/sched.h>
>>>>> +
>>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>>
>>>> Move those inside the #ifdef below, please
>>>>
>>>>> +
>>>>> +struct pool_map {
>>>>> +    int pool_id;
>>>>> +    int sched_id;
>>>>> +    struct cpupool *pool;
>>>>> +};
>>>>> +
>>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>>> +static unsigned int __initdata next_pool_id;
>>>>> +
>>>>> +#ifdef CONFIG_ARM
>>>>
>>>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
>>>
>>> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the arm specific
>>> cpu_logical_map(…), so what do you think it’s the better way here?
>>> Do you think I should have everything under CONFIG_HAS_DEVICE_TREE
>>> and get_logical_cpu_from_hw_id under CONFIG_ARM like in this way below?
>>
>> Hmm, what is the hwid used for on Arm? I guess this could be similar
>> to the x86 acpi-id?
> 
> Since there's going to be only one of DT or ACPI, if anything this could
> be the APIC ID and then ...
> 
>> So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
>> and add a related x86 function to x86 code. Depending on the answer to
>> above question this could either be get_cpu_id(), or maybe an identity
>> function.
> 
> ... a lookup loop over x86_cpu_to_apicid[] (I thought we had a function
> doing so, but right now I can't find one).

It is the second half of get_cpu_id().


Juergen


* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11 10:18           ` Juergen Gross
@ 2022-03-11 11:29             ` Luca Fancellu
  2022-03-11 12:22               ` Juergen Gross
  2022-03-11 12:28               ` Jan Beulich
  0 siblings, 2 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-11 11:29 UTC (permalink / raw)
  To: Juergen Gross, Jan Beulich
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Wei Liu, Dario Faggioli



> On 11 Mar 2022, at 10:18, Juergen Gross <jgross@suse.com> wrote:
> 
> On 11.03.22 10:46, Jan Beulich wrote:
>> On 11.03.2022 10:29, Juergen Gross wrote:
>>> On 11.03.22 09:56, Luca Fancellu wrote:
>>>>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>>>> On 10.03.22 18:10, Luca Fancellu wrote:
>>>>>> --- /dev/null
>>>>>> +++ b/xen/common/boot_cpupools.c
>>>>>> @@ -0,0 +1,212 @@
>>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>>> +/*
>>>>>> + * xen/common/boot_cpupools.c
>>>>>> + *
>>>>>> + * Code to create cpupools at boot time for arm architecture.
>>>>> 
>>>>> Please drop the arm reference here.
>>>>> 
>>>>>> + *
>>>>>> + * Copyright (C) 2022 Arm Ltd.
>>>>>> + */
>>>>>> +
>>>>>> +#include <xen/sched.h>
>>>>>> +
>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>>> 
>>>>> Move those inside the #ifdef below, please
>>>>> 
>>>>>> +
>>>>>> +struct pool_map {
>>>>>> +    int pool_id;
>>>>>> +    int sched_id;
>>>>>> +    struct cpupool *pool;
>>>>>> +};
>>>>>> +
>>>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>>>> +static unsigned int __initdata next_pool_id;
>>>>>> +
>>>>>> +#ifdef CONFIG_ARM
>>>>> 
>>>>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
>>>> 
>>>> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the arm specific
>>>> cpu_logical_map(…), so what do you think it’s the better way here?
>>>> Do you think I should have everything under CONFIG_HAS_DEVICE_TREE
>>>> and get_logical_cpu_from_hw_id under CONFIG_ARM like in this way below?
>>> 
>>> Hmm, what is the hwid used for on Arm? I guess this could be similar
>>> to the x86 acpi-id?
>> Since there's going to be only one of DT or ACPI, if anything this could
>> be the APIC ID and then ...
>>> So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
>>> and add a related x86 function to x86 code. Depending on the answer to
>>> above question this could either be get_cpu_id(), or maybe an identity
>>> function.
>> ... a lookup loop over x86_cpu_to_apicid[] (I thought we had a function
>> doing so, but right now I can't find one).
> 
> It is the second half of get_cpu_id().

I was going to say, maybe I can do something like this:

#ifdef CONFIG_ARM
#define hwid_from_logical_cpu_id(x) cpu_logical_map(x)
#elif defined(CONFIG_X86)
#define hwid_from_logical_cpu_id(x) x86_cpu_to_apicid[x]
#else
#define hwid_from_logical_cpu_id(x) (-1)
#endif

static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
{
    unsigned int i;

    for ( i = 0; i < nr_cpu_ids; i++ )
        if ( hwid_from_logical_cpu_id(i) == hwid )
            return i;

    return -1;
}

Do you think it is acceptable?

I see that the current get_cpu_id(…) in the x86 code starts from an ACPI id to
look up the APIC id, and then searches for the logical cpu number.
In the x86 context, would the reg property of a cpu node eventually hold an
ACPI id or an APIC id? I would have said the latter, but I’m not sure now.

Cheers,
Luca

> 
> 
> Juergen


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11 11:29             ` Luca Fancellu
@ 2022-03-11 12:22               ` Juergen Gross
  2022-03-11 12:28               ` Jan Beulich
  1 sibling, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-11 12:22 UTC (permalink / raw)
  To: Luca Fancellu, Jan Beulich
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Wei Liu, Dario Faggioli



On 11.03.22 12:29, Luca Fancellu wrote:
> 
> 
>> On 11 Mar 2022, at 10:18, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 11.03.22 10:46, Jan Beulich wrote:
>>> On 11.03.2022 10:29, Juergen Gross wrote:
>>>> On 11.03.22 09:56, Luca Fancellu wrote:
>>>>>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>>>>> On 10.03.22 18:10, Luca Fancellu wrote:
>>>>>>> --- /dev/null
>>>>>>> +++ b/xen/common/boot_cpupools.c
>>>>>>> @@ -0,0 +1,212 @@
>>>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>>>> +/*
>>>>>>> + * xen/common/boot_cpupools.c
>>>>>>> + *
>>>>>>> + * Code to create cpupools at boot time for arm architecture.
>>>>>>
>>>>>> Please drop the arm reference here.
>>>>>>
>>>>>>> + *
>>>>>>> + * Copyright (C) 2022 Arm Ltd.
>>>>>>> + */
>>>>>>> +
>>>>>>> +#include <xen/sched.h>
>>>>>>> +
>>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>>>>
>>>>>> Move those inside the #ifdef below, please
>>>>>>
>>>>>>> +
>>>>>>> +struct pool_map {
>>>>>>> +    int pool_id;
>>>>>>> +    int sched_id;
>>>>>>> +    struct cpupool *pool;
>>>>>>> +};
>>>>>>> +
>>>>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>>>>> +static unsigned int __initdata next_pool_id;
>>>>>>> +
>>>>>>> +#ifdef CONFIG_ARM
>>>>>>
>>>>>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
>>>>>
>>>>> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the
>>>>> Arm-specific cpu_logical_map(…); what do you think is the better way here?
>>>>> Do you think I should put everything under CONFIG_HAS_DEVICE_TREE and
>>>>> get_logical_cpu_from_hw_id under CONFIG_ARM, as in the example below?
>>>>
>>>> Hmm, what is the hwid used for on Arm? I guess this could be similar
>>>> to the x86 acpi-id?
>>> Since there's going to be only one of DT or ACPI, if anything this could
>>> be the APIC ID and then ...
>>>> So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
>>>> and add a related x86 function to x86 code. Depending on the answer to
>>>> above question this could either be get_cpu_id(), or maybe an identity
>>>> function.
>>> ... a lookup loop over x86_cpu_to_apicid[] (I thought we had a function
>>> doing so, but right now I can't find one).
>>
>> It is the second half of get_cpu_id().
> 
> I was going to say, maybe I can do something like this:
> 
> #ifdef CONFIG_ARM
> #define hwid_from_logical_cpu_id(x) cpu_logical_map(x)
> #elif defined(CONFIG_X86)
> #define hwid_from_logical_cpu_id(x) x86_cpu_to_apicid[x]
> #else
> #define hwid_from_logical_cpu_id(x) (-1)
> #endif
> 
> static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> {
>      unsigned int i;
> 
>      for ( i = 0; i < nr_cpu_ids; i++ )
>          if ( hwid_from_logical_cpu_id(i) == hwid )
>              return i;
> 
>      return -1;
> }
> 
> Do you think it is acceptable?

I'd rather have this abstraction in some header, but this is
something the related maintainers should decide.

> 
> I see that the current get_cpu_id(…) in the x86 code starts from an ACPI id to
> look up the APIC id, and then searches for the logical cpu number.
> In the x86 context, would the reg property of a cpu node eventually hold an
> ACPI id or an APIC id? I would have said the latter, but I’m not sure now.

According to Jan ACPI and device tree are mutually exclusive, so the
apicid is probably the correct answer.


Juergen


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11 11:29             ` Luca Fancellu
  2022-03-11 12:22               ` Juergen Gross
@ 2022-03-11 12:28               ` Jan Beulich
  1 sibling, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2022-03-11 12:28 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Wei Chen, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Wei Liu, Dario Faggioli, Juergen Gross

On 11.03.2022 12:29, Luca Fancellu wrote:
>> On 11 Mar 2022, at 10:18, Juergen Gross <jgross@suse.com> wrote:
>> On 11.03.22 10:46, Jan Beulich wrote:
>>> On 11.03.2022 10:29, Juergen Gross wrote:
>>>> On 11.03.22 09:56, Luca Fancellu wrote:
>>>>>> On 11 Mar 2022, at 08:09, Juergen Gross <jgross@suse.com> wrote:
>>>>>> On 10.03.22 18:10, Luca Fancellu wrote:
>>>>>>> --- /dev/null
>>>>>>> +++ b/xen/common/boot_cpupools.c
>>>>>>> @@ -0,0 +1,212 @@
>>>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>>>> +/*
>>>>>>> + * xen/common/boot_cpupools.c
>>>>>>> + *
>>>>>>> + * Code to create cpupools at boot time for arm architecture.
>>>>>>
>>>>>> Please drop the arm reference here.
>>>>>>
>>>>>>> + *
>>>>>>> + * Copyright (C) 2022 Arm Ltd.
>>>>>>> + */
>>>>>>> +
>>>>>>> +#include <xen/sched.h>
>>>>>>> +
>>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>>>>
>>>>>> Move those inside the #ifdef below, please
>>>>>>
>>>>>>> +
>>>>>>> +struct pool_map {
>>>>>>> +    int pool_id;
>>>>>>> +    int sched_id;
>>>>>>> +    struct cpupool *pool;
>>>>>>> +};
>>>>>>> +
>>>>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>>>>> +static unsigned int __initdata next_pool_id;
>>>>>>> +
>>>>>>> +#ifdef CONFIG_ARM
>>>>>>
>>>>>> Shouldn't this be CONFIG_HAS_DEVICE_TREE?
>>>>>
>>>>> Yes, the only problem is that in get_logical_cpu_from_hw_id I use the
>>>>> Arm-specific cpu_logical_map(…); what do you think is the better way here?
>>>>> Do you think I should put everything under CONFIG_HAS_DEVICE_TREE and
>>>>> get_logical_cpu_from_hw_id under CONFIG_ARM, as in the example below?
>>>>
>>>> Hmm, what is the hwid used for on Arm? I guess this could be similar
>>>> to the x86 acpi-id?
>>> Since there's going to be only one of DT or ACPI, if anything this could
>>> be the APIC ID and then ...
>>>> So I'd rather put get_logical_cpu_from_hw_id() into Arm specific code
>>>> and add a related x86 function to x86 code. Depending on the answer to
>>>> above question this could either be get_cpu_id(), or maybe an identity
>>>> function.
>>> ... a lookup loop over x86_cpu_to_apicid[] (I thought we had a function
>>> doing so, but right now I can't find one).
>>
>> It is the second half of get_cpu_id().
> 
> I was going to say, maybe I can do something like this:
> 
> #ifdef CONFIG_ARM
> #define hwid_from_logical_cpu_id(x) cpu_logical_map(x)
> #elif defined(CONFIG_X86)
> #define hwid_from_logical_cpu_id(x) x86_cpu_to_apicid[x]
> #else
> #define hwid_from_logical_cpu_id(x) (-1)
> #endif
> 
> static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> {
>     unsigned int i;
> 
>     for ( i = 0; i < nr_cpu_ids; i++ )
>         if ( hwid_from_logical_cpu_id(i) == hwid )
>             return i;
> 
>     return -1;
> }
> 
> Do you think it is acceptable?

Why not, if even on Arm you have to use a loop. As Jürgen said, this
likely wants to move to some header file. Whether the names are
suitable for an arch abstraction I'm not sure, but I also have no
immediate alternative suggestion.

> I see that the current get_cpu_id(…) in the x86 code starts from an ACPI id to
> look up the APIC id, and then searches for the logical cpu number.
> In the x86 context, would the reg property of a cpu node eventually hold an
> ACPI id or an APIC id? I would have said the latter, but I’m not sure now.

Without ACPI it can't sensibly be an ACPI ID. The most logical thing to
expect would be an APIC ID, but then it's up to whoever specifies the DT
to decide what to supply.

Jan



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11  3:57   ` Stefano Stabellini
@ 2022-03-11 14:11     ` Luca Fancellu
  2022-03-11 14:16       ` Luca Fancellu
  2022-03-12  0:10       ` Stefano Stabellini
  0 siblings, 2 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-11 14:11 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Xen-devel, Wei Chen, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli

Hi Stefano,

> On 11 Mar 2022, at 03:57, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 10 Mar 2022, Luca Fancellu wrote:
>> Introduce a way to create different cpupools at boot time, this is
>> particularly useful on ARM big.LITTLE system where there might be the
>> need to have different cpupools for each type of core, but also
>> systems using NUMA can have different cpu pools for each node.
>> 
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> 
>> Documentation is created to explain the feature.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes in v2:
>> - Move feature to common code (Juergen)
>> - Try to decouple dtb parse and cpupool creation to allow
>>  more way to specify cpupools (for example command line)
>> - Created standalone dt node for the scheduler so it can
>>  be used in future work to set scheduler specific
>>  parameters
>> - Use only auto generated ids for cpupools
>> ---
>> docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>> xen/common/Kconfig                     |   8 +
>> xen/common/Makefile                    |   1 +
>> xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>> xen/common/sched/cpupool.c             |   6 +-
>> xen/include/xen/sched.h                |  19 +++
>> 6 files changed, 401 insertions(+), 1 deletion(-)
>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>> create mode 100644 xen/common/boot_cpupools.c
>> 
>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>> new file mode 100644
>> index 000000000000..d5a82ed0d45a
>> --- /dev/null
>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>> @@ -0,0 +1,156 @@
>> +Boot time cpupools
>> +==================
>> +
>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>> +create cpupools during boot phase by specifying them in the device tree.
>> +
>> +Cpupool specification nodes shall be direct children of the /chosen node.
>> +Each cpupool node contains the following properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatibility string: "xen,cpupool".
>> +
>> +- cpupool-cpus (mandatory)
>> +
>> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
>> +    device_type = "cpu"); it can't be empty.
>> +
>> +- cpupool-sched (optional)
>> +
>> +    Must be a device tree phandle to a node having "xen,scheduler" compatible
>> +    (description below), it has no effect when the cpupool refers to the cpupool
>> +    number zero, in that case the default Xen scheduler is selected (sched=<...>
>> +    boot argument).
> 
> This is *a lot* better.
> 
> The device tree part is nice. I have only one question left on it: why
> do we need a separate scheduler node? Could the "cpupool-sched" property
> be a simple string with the scheduler name?
> 
> E.g.:
> 
>    cpupool_a {
>        compatible = "xen,cpupool";
>        cpupool-cpus = <&a53_1 &a53_2>;
>    };
>    cpupool_b {
>        compatible = "xen,cpupool";
>        cpupool-cpus = <&a72_1 &a72_2>;
>        cpupool-sched = "null";
>    };
> 
> 
> To me, it doesn't look like these new "scheduler specification nodes"
> bring any benefits. I would just get rid of them.

From a comment of Juergen's on the second patch, I thought someone saw the need
for a way to set scheduling parameters:

“you are allowing to use another scheduler,
but what if someone wants to set non-standard scheduling parameters
(e.g. another time slice)?”

So I thought I could introduce a scheduler specification node that could be
extended in the future and used to set scheduling parameters.

If it is something that is not needed, I will get rid of it.
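For illustration, an extended scheduler node might look like the fragment below
(the extra properties are purely hypothetical and not part of this series):

```dts
s1: sched_a {
    compatible = "xen,scheduler";
    sched-name = "credit2";
    /* Hypothetical future knobs, not implemented here: */
    sched-ratelimit-us = <1000>;
    sched-tslice-ms = <10>;
};
```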

> 
> 
>> +A scheduler specification node is a device tree node that contains the following
>> +properties:
>> +
>> +- compatible (mandatory)
>> +
>> +    Must always include the compatibility string: "xen,scheduler".
>> +
>> +- sched-name (mandatory)
>> +
>> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
>> +    boot argument for allowed values.
>> +
>> +
>> +Constraints
>> +===========
>> +
>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>> +implicitly created (Pool-0).
>> +
>> +If cpupool nodes are specified but not every cpu brought up by Xen is
>> +assigned, all the unassigned cpus will be assigned to an additional cpupool.
>> +
>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>> +stop.
>> +
>> +
>> +Examples
>> +========
>> +
>> +A system having two types of core, the following device tree specification will
>> +instruct Xen to have two cpupools:
>> +
>> +- The cpupool with id 0 will have 4 cpus assigned.
>> +- The cpupool with id 1 will have 2 cpus assigned.
>> +
>> +The following example can work only if hmp-unsafe=1 is passed to Xen boot
>> +arguments, otherwise not all cores will be brought up by Xen and the cpupool
>> +creation process will stop Xen.
>> +
>> +
>> +a72_1: cpu@0 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x0>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a72_2: cpu@1 {
>> +        compatible = "arm,cortex-a72";
>> +        reg = <0x0 0x1>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_1: cpu@100 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x100>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_2: cpu@101 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x101>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_3: cpu@102 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x102>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +a53_4: cpu@103 {
>> +        compatible = "arm,cortex-a53";
>> +        reg = <0x0 0x103>;
>> +        device_type = "cpu";
>> +        [...]
>> +};
>> +
>> +chosen {
>> +
>> +    sched: sched_a {
>> +        compatible = "xen,scheduler";
>> +        sched-name = "credit2";
>> +    };
>> +    cpupool_a {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
>> +    };
>> +    cpupool_b {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a72_1 &a72_2>;
>> +        cpupool-sched = <&sched>;
>> +    };
>> +
>> +    [...]
>> +
>> +};
>> +
>> +
>> +A system having the cpupools specification below will instruct Xen to have three
>> +cpupools:
>> +
>> +- The cpupool Pool-0 will have 2 cpus assigned.
>> +- The cpupool Pool-1 will have 2 cpus assigned.
>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the not
>> +  assigned cpus a53_3 and a53_4).
>> +
>> +chosen {
>> +
>> +    sched: sched_a {
>> +        compatible = "xen,scheduler";
>> +        sched-name = "null";
>> +    };
>> +    cpupool_a {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a53_1 &a53_2>;
>> +    };
>> +    cpupool_b {
>> +        compatible = "xen,cpupool";
>> +        cpupool-cpus = <&a72_1 &a72_2>;
>> +        cpupool-sched = <&sched>;
>> +    };
>> +
>> +    [...]
>> +
>> +};
>> \ No newline at end of file
>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>> index 64439438891c..dc9eed31682f 100644
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -22,6 +22,14 @@ config GRANT_TABLE
>> 
>> 	  If unsure, say Y.
>> 
>> +config BOOT_TIME_CPUPOOLS
>> +	bool "Create cpupools at boot time"
>> +	depends on HAS_DEVICE_TREE
>> +	default n
>> +	help
>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupools
>> +	  options can be specified in the device tree.
>> +
>> config ALTERNATIVE_CALL
>> 	bool
>> 
>> diff --git a/xen/common/Makefile b/xen/common/Makefile
>> index dc8d3a13f5b8..c5949785ab28 100644
>> --- a/xen/common/Makefile
>> +++ b/xen/common/Makefile
>> @@ -1,5 +1,6 @@
>> obj-$(CONFIG_ARGO) += argo.o
>> obj-y += bitmap.o
>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>> obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>> obj-$(CONFIG_CORE_PARKING) += core_parking.o
>> obj-y += cpu.o
>> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
>> new file mode 100644
>> index 000000000000..e8529a902d21
>> --- /dev/null
>> +++ b/xen/common/boot_cpupools.c
>> @@ -0,0 +1,212 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * xen/common/boot_cpupools.c
>> + *
>> + * Code to create cpupools at boot time for arm architecture.
>> + *
>> + * Copyright (C) 2022 Arm Ltd.
>> + */
>> +
>> +#include <xen/sched.h>
>> +
>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>> +
>> +struct pool_map {
>> +    int pool_id;
>> +    int sched_id;
>> +    struct cpupool *pool;
>> +};
>> +
>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>> +static unsigned int __initdata next_pool_id;
>> +
>> +#ifdef CONFIG_ARM
>> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
>> +{
>> +    unsigned int i;
>> +
>> +    for ( i = 0; i < nr_cpu_ids; i++ )
>> +        if ( cpu_logical_map(i) == hwid )
>> +            return i;
>> +
>> +    return -1;
>> +}
>> +
>> +static int __init
>> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
>> +{
>> +    unsigned int cpu_reg, cpu_num;
>> +    const __be32 *prop;
>> +
>> +    prop = dt_get_property(cpu_node, "reg", NULL);
>> +    if ( !prop )
>> +        return BTCPUPOOLS_DT_NODE_NO_REG;
>> +
>> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
>> +
>> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
>> +    if ( cpu_num < 0 )
>> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
>> +
>> +    return cpu_num;
>> +}
>> +
>> +static int __init check_and_get_sched_id(const char* scheduler_name)
>> +{
>> +    int sched_id = sched_get_id_by_name(scheduler_name);
>> +
>> +    if ( sched_id < 0 )
>> +        panic("Scheduler %s does not exist!\n", scheduler_name);
>> +
>> +    return sched_id;
>> +}
>> +
>> +void __init btcpupools_dtb_parse(void)
>> +{
>> +    const struct dt_device_node *chosen, *node;
>> +
>> +    chosen = dt_find_node_by_path("/chosen");
>> +    if ( !chosen )
>> +        return;
>> +
>> +    dt_for_each_child_node(chosen, node)
>> +    {
>> +        const struct dt_device_node *phandle_node;
>> +        int sched_id = -1;
>> +        const char* scheduler_name;
>> +        unsigned int i = 0;
>> +
>> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
>> +            continue;
>> +
>> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
>> +        if ( phandle_node )
>> +        {
>> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
>> +                panic("cpupool-sched must be a xen,scheduler compatible"
>> +                      "node!\n");
>> +            if ( !dt_property_read_string(phandle_node, "sched-name",
>> +                                          &scheduler_name) )
>> +                sched_id = check_and_get_sched_id(scheduler_name);
>> +            else
>> +                panic("Error trying to read sched-name in %s!\n",
>> +                      dt_node_name(phandle_node));
>> +        }
> 
> it doesn't look like the "xen,scheduler" nodes are very useful from a dt
> parsing perspective either
> 
> 
>> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>> +        if ( !phandle_node )
>> +            panic("Missing or empty cpupool-cpus property!\n");
>> +
>> +        while ( phandle_node )
>> +        {
>> +            int cpu_num;
>> +
>> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
>> +
>> +            if ( cpu_num < 0 )
>> +                panic("Error retrieving logical cpu from node %s (%d)\n",
>> +                      dt_node_name(node), cpu_num);
>> +
>> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
>> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
>> +
>> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
>> +            pool_cpu_map[cpu_num].sched_id = sched_id;
>> +
>> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>> +        }
>> +
>> +        /* Let Xen generate pool ids */
>> +        next_pool_id++;
>> +    }
>> +}
>> +#endif
>> +
>> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
>> +{
>> +    unsigned int cpu_num;
>> +
>> +    /*
>> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
>> +     * below will assign every cpu to cpupool0 as the default behavior.
>> +     * When there are cpupools, the code below is assigning all the not
>> +     * assigned cpu to a new pool (next_pool_id value is the last id + 1).
>> +     * In the same loop we check if there is any assigned cpu that is not
>> +     * online.
>> +     */
>> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
>> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
>> +        {
>> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
>> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
>> +        }
>> +        else
> 
> Please add { }
> 
> 
>> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
>> +                panic("Pool-%d contains cpu%u that is not online!\n",
>> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
> 
> 
> 
>> +#ifdef CONFIG_X86
>> +    /* Cpu0 must be in cpupool0 for x86 */
>> +    if ( pool_cpu_map[0].pool_id != 0 )
> 
> Is that even possible on x86 given that btcpupools_dtb_parse cannot even
> run on x86?
> 
> If it is not possible, I would remove the code below and simply panic
> instead.

Currently x86 doesn’t have a way to specify cpupools, so for now on x86 there
will be only cpupool 0 with every cpu attached. I thought I had to handle the
case where, in the future, someone adds a way to specify cpupools (cmdline?).
If you think this should be handled only by whoever implements that feature, I
will remove the block completely.

> 
> 
>> +    {
>> +        /* The cpupool containing cpu0 will become cpupool0 */
>> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
>> +        for_each_cpu ( cpu_num, cpu_online_map )
>> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
>> +                pool_cpu_map[cpu_num].pool_id = 0;
>> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
>> +                pool_cpu_map[cpu_num].pool_id = swap_id;
>> +    }
>> +#endif
>> +
>> +    for_each_cpu ( cpu_num, cpu_online_map )
>> +    {
>> +        struct cpupool *pool = NULL;
>> +        int pool_id, sched_id;
>> +
>> +        pool_id = pool_cpu_map[cpu_num].pool_id;
>> +        sched_id = pool_cpu_map[cpu_num].sched_id;
>> +
>> +        if ( pool_id )
>> +        {
>> +            unsigned int i;
>> +
>> +            /* Look for previously created pool with id pool_id */
>> +            for ( i = 0; i < cpu_num; i++ )
> 
> Please add { }
> 
> But actually, the double loop seems a bit excessive for this. Could we
> just have a single loop to cpupool_create_pool from 0 to next_pool_id?
> 
> We could get rid of pool_cpu_map[i].pool and just rely on
> pool_cpu_map[i].pool_id. No need to update pool_cpu_map[i].pool if we
> get rid of it: it doesn't look like it is very useful anyway?

Yes, we could easily create all the cpupools in a loop, but to retrieve the
pointer from the cpupool list I would need something; I could make this
function from cpupool.c public:

static struct cpupool *cpupool_find_by_id(unsigned int poolid)

to get the pointer from the pool id. Do you think that would be ok?

I will address your other findings in the next series.

Thank you for your review.

Cheers,
Luca

> 
> 
>> +                if ( (pool_cpu_map[i].pool_id == pool_id) &&
>> +                     pool_cpu_map[i].pool )
>> +                {
>> +                    pool = pool_cpu_map[i].pool;
>> +                    break;
>> +                }
>> +
>> +            /* If no pool was created before, create it */
>> +            if ( !pool )
>> +                pool = cpupool_create_pool(pool_id, sched_id);
>> +            if ( !pool )
>> +                panic("Error creating pool id %u!\n", pool_id);
>> +        }
>> +        else
>> +            pool = cpupool0;
>> +
>> +        pool_cpu_map[cpu_num].pool = pool;
>> +        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
>> +    }
>> +}
>> +
>> +struct cpupool *__init btcpupools_get_cpupool(unsigned int cpu)
>> +{
>> +    return pool_cpu_map[cpu].pool;
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
>> index 89a891af7076..b2495ad6d03e 100644
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -1247,12 +1247,16 @@ static int __init cf_check cpupool_init(void)
>>     cpupool_put(cpupool0);
>>     register_cpu_notifier(&cpu_nfb);
>> 
>> +    btcpupools_dtb_parse();
>> +
>> +    btcpupools_allocate_pools(&cpu_online_map);
>> +
>>     spin_lock(&cpupool_lock);
>> 
>>     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>> 
>>     for_each_cpu ( cpu, &cpupool_free_cpus )
>> -        cpupool_assign_cpu_locked(cpupool0, cpu);
>> +        cpupool_assign_cpu_locked(btcpupools_get_cpupool(cpu), cpu);
>> 
>>     spin_unlock(&cpupool_lock);
>> 
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index 2c10303f0187..de4e8feea399 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -1176,6 +1176,25 @@ extern void cf_check dump_runq(unsigned char key);
>> 
>> void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>> 
>> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
>> +void btcpupools_allocate_pools(const cpumask_t *cpu_online_map);
>> +struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
>> +
>> +#ifdef CONFIG_ARM
>> +void btcpupools_dtb_parse(void);
>> +#else
>> +static inline void btcpupools_dtb_parse(void) {}
>> +#endif
>> +
>> +#else
>> +static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map) {}
>> +static inline void btcpupools_dtb_parse(void) {}
>> +static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
>> +{
>> +    return cpupool0;
>> +}
>> +#endif
>> +
>> #endif /* __SCHED_H__ */
>> 
>> /*
>> -- 
>> 2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11 14:11     ` Luca Fancellu
@ 2022-03-11 14:16       ` Luca Fancellu
  2022-03-12  0:10       ` Stefano Stabellini
  1 sibling, 0 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-11 14:16 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Xen-devel, Wei Chen, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Juergen Gross, Dario Faggioli



> On 11 Mar 2022, at 14:11, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Hi Stefano,
> 
>> On 11 Mar 2022, at 03:57, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> 
>> On Thu, 10 Mar 2022, Luca Fancellu wrote:
>>> Introduce a way to create different cpupools at boot time, this is
>>> particularly useful on ARM big.LITTLE system where there might be the
>>> need to have different cpupools for each type of core, but also
>>> systems using NUMA can have different cpu pools for each node.
>>> 
>>> The feature on arm relies on a specification of the cpupools from the
>>> device tree to build pools and assign cpus to them.
>>> 
>>> Documentation is created to explain the feature.
>>> 
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes in v2:
>>> - Move feature to common code (Juergen)
>>> - Try to decouple dtb parse and cpupool creation to allow
>>> more way to specify cpupools (for example command line)
>>> - Created standalone dt node for the scheduler so it can
>>> be used in future work to set scheduler specific
>>> parameters
>>> - Use only auto generated ids for cpupools
>>> ---
>>> docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>>> xen/common/Kconfig                     |   8 +
>>> xen/common/Makefile                    |   1 +
>>> xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>>> xen/common/sched/cpupool.c             |   6 +-
>>> xen/include/xen/sched.h                |  19 +++
>>> 6 files changed, 401 insertions(+), 1 deletion(-)
>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>> create mode 100644 xen/common/boot_cpupools.c
>>> 
>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>> new file mode 100644
>>> index 000000000000..d5a82ed0d45a
>>> --- /dev/null
>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>> @@ -0,0 +1,156 @@
>>> +Boot time cpupools
>>> +==================
>>> +
>>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>>> +create cpupools during boot phase by specifying them in the device tree.
>>> +
>>> +Cpupool specification nodes shall be direct children of the /chosen node.
>>> +Each cpupool node contains the following properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatibility string: "xen,cpupool".
>>> +
>>> +- cpupool-cpus (mandatory)
>>> +
>>> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
>>> +    device_type = "cpu"); it can't be empty.
>>> +
>>> +- cpupool-sched (optional)
>>> +
>>> +    Must be a device tree phandle to a node having the "xen,scheduler"
>>> +    compatible (described below). It has no effect when the cpupool refers to
>>> +    cpupool number zero; in that case the default Xen scheduler is selected
>>> +    (sched=<...> boot argument).
>> 
>> This is *a lot* better.
>> 
>> The device tree part is nice. I have only one question left on it: why
>> do we need a separate scheduler node? Could the "cpupool-sched" property
>> be a simple string with the scheduler name?
>> 
>> E.g.:
>> 
>>   cpupool_a {
>>       compatible = "xen,cpupool";
>>       cpupool-cpus = <&a53_1 &a53_2>;
>>   };
>>   cpupool_b {
>>       compatible = "xen,cpupool";
>>       cpupool-cpus = <&a72_1 &a72_2>;
>>       cpupool-sched = "null";
>>   };
>> 
>> 
>> To me, it doesn't look like these new "scheduler specification nodes"
>> bring any benefits. I would just get rid of them.
> 
> From a comment of Juergen's on the second patch, I thought someone saw the need
> for a way to set scheduling parameters:
> 
> “you are allowing to use another scheduler,
> but what if someone wants to set non-standard scheduling parameters
> (e.g. another time slice)?”
> 
> So I thought I could introduce a scheduler specification node that could in the
> future be extended and used to set scheduling parameters.
> 
> If it is something that is not needed, I will get rid of it.
> 
>> 
>> 
>>> +A scheduler specification node is a device tree node that contains the following
>>> +properties:
>>> +
>>> +- compatible (mandatory)
>>> +
>>> +    Must always include the compatibility string: "xen,scheduler".
>>> +
>>> +- sched-name (mandatory)
>>> +
>>> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
>>> +    boot argument for allowed values.
>>> +
>>> +
>>> +Constraints
>>> +===========
>>> +
>>> +If no cpupools are specified, all cpus will be assigned to one implicitly
>>> +created cpupool (Pool-0).
>>> +
>>> +If cpupool nodes are specified, but not every cpu brought up by Xen is
>>> +assigned, all the unassigned cpus will be assigned to an additional cpupool.
>>> +
>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>> +stop.
>>> +
>>> +
>>> +Examples
>>> +========
>>> +
>>> +On a system having two types of cores, the following device tree specification
>>> +will instruct Xen to create two cpupools:
>>> +
>>> +- The cpupool with id 0 will have 4 cpus assigned.
>>> +- The cpupool with id 1 will have 2 cpus assigned.
>>> +
>>> +The following example can work only if hmp-unsafe=1 is passed in the Xen boot
>>> +arguments; otherwise, not all cores will be brought up by Xen and the cpupool
>>> +creation process will stop Xen.
>>> +
>>> +
>>> +a72_1: cpu@0 {
>>> +        compatible = "arm,cortex-a72";
>>> +        reg = <0x0 0x0>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a72_2: cpu@1 {
>>> +        compatible = "arm,cortex-a72";
>>> +        reg = <0x0 0x1>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_1: cpu@100 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x100>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_2: cpu@101 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x101>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_3: cpu@102 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x102>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +a53_4: cpu@103 {
>>> +        compatible = "arm,cortex-a53";
>>> +        reg = <0x0 0x103>;
>>> +        device_type = "cpu";
>>> +        [...]
>>> +};
>>> +
>>> +chosen {
>>> +
>>> +    sched: sched_a {
>>> +        compatible = "xen,scheduler";
>>> +        sched-name = "credit2";
>>> +    };
>>> +    cpupool_a {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
>>> +    };
>>> +    cpupool_b {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>> +        cpupool-sched = <&sched>;
>>> +    };
>>> +
>>> +    [...]
>>> +
>>> +};
>>> +
>>> +
>>> +A system having the cpupool specification below will instruct Xen to create
>>> +three cpupools:
>>> +
>>> +- The cpupool Pool-0 will have 2 cpus assigned.
>>> +- The cpupool Pool-1 will have 2 cpus assigned.
>>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
>>> +  unassigned cpus: a53_3 and a53_4).
>>> +
>>> +chosen {
>>> +
>>> +    sched: sched_a {
>>> +        compatible = "xen,scheduler";
>>> +        sched-name = "null";
>>> +    };
>>> +    cpupool_a {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a53_1 &a53_2>;
>>> +    };
>>> +    cpupool_b {
>>> +        compatible = "xen,cpupool";
>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>> +        cpupool-sched = <&sched>;
>>> +    };
>>> +
>>> +    [...]
>>> +
>>> +};
>>> \ No newline at end of file
>>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>>> index 64439438891c..dc9eed31682f 100644
>>> --- a/xen/common/Kconfig
>>> +++ b/xen/common/Kconfig
>>> @@ -22,6 +22,14 @@ config GRANT_TABLE
>>> 
>>> 	  If unsure, say Y.
>>> 
>>> +config BOOT_TIME_CPUPOOLS
>>> +	bool "Create cpupools at boot time"
>>> +	depends on HAS_DEVICE_TREE
>>> +	default n
>>> +	help
>>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupool
>>> +	  options can be specified in the device tree.
>>> +
>>> config ALTERNATIVE_CALL
>>> 	bool
>>> 
>>> diff --git a/xen/common/Makefile b/xen/common/Makefile
>>> index dc8d3a13f5b8..c5949785ab28 100644
>>> --- a/xen/common/Makefile
>>> +++ b/xen/common/Makefile
>>> @@ -1,5 +1,6 @@
>>> obj-$(CONFIG_ARGO) += argo.o
>>> obj-y += bitmap.o
>>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>>> obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>>> obj-$(CONFIG_CORE_PARKING) += core_parking.o
>>> obj-y += cpu.o
>>> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
>>> new file mode 100644
>>> index 000000000000..e8529a902d21
>>> --- /dev/null
>>> +++ b/xen/common/boot_cpupools.c
>>> @@ -0,0 +1,212 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * xen/common/boot_cpupools.c
>>> + *
>>> + * Code to create cpupools at boot time.
>>> + *
>>> + * Copyright (C) 2022 Arm Ltd.
>>> + */
>>> +
>>> +#include <xen/sched.h>
>>> +
>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>> +
>>> +struct pool_map {
>>> +    int pool_id;
>>> +    int sched_id;
>>> +    struct cpupool *pool;
>>> +};
>>> +
>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>> +static unsigned int __initdata next_pool_id;
>>> +
>>> +#ifdef CONFIG_ARM
>>> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
>>> +{
>>> +    unsigned int i;
>>> +
>>> +    for ( i = 0; i < nr_cpu_ids; i++ )
>>> +        if ( cpu_logical_map(i) == hwid )
>>> +            return i;
>>> +
>>> +    return -1;
>>> +}
>>> +
>>> +static int __init
>>> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
>>> +{
>>> +    unsigned int cpu_reg;
>>> +    int cpu_num;
>>> +    const __be32 *prop;
>>> +
>>> +    prop = dt_get_property(cpu_node, "reg", NULL);
>>> +    if ( !prop )
>>> +        return BTCPUPOOLS_DT_NODE_NO_REG;
>>> +
>>> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
>>> +
>>> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
>>> +    if ( cpu_num < 0 )
>>> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
>>> +
>>> +    return cpu_num;
>>> +}
>>> +
>>> +static int __init check_and_get_sched_id(const char* scheduler_name)
>>> +{
>>> +    int sched_id = sched_get_id_by_name(scheduler_name);
>>> +
>>> +    if ( sched_id < 0 )
>>> +        panic("Scheduler %s does not exist!\n", scheduler_name);
>>> +
>>> +    return sched_id;
>>> +}
>>> +
>>> +void __init btcpupools_dtb_parse(void)
>>> +{
>>> +    const struct dt_device_node *chosen, *node;
>>> +
>>> +    chosen = dt_find_node_by_path("/chosen");
>>> +    if ( !chosen )
>>> +        return;
>>> +
>>> +    dt_for_each_child_node(chosen, node)
>>> +    {
>>> +        const struct dt_device_node *phandle_node;
>>> +        int sched_id = -1;
>>> +        const char* scheduler_name;
>>> +        unsigned int i = 0;
>>> +
>>> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
>>> +            continue;
>>> +
>>> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
>>> +        if ( phandle_node )
>>> +        {
>>> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
>>> +                panic("cpupool-sched must be a xen,scheduler compatible "
>>> +                      "node!\n");
>>> +            if ( !dt_property_read_string(phandle_node, "sched-name",
>>> +                                          &scheduler_name) )
>>> +                sched_id = check_and_get_sched_id(scheduler_name);
>>> +            else
>>> +                panic("Error trying to read sched-name in %s!\n",
>>> +                      dt_node_name(phandle_node));
>>> +        }
>> 
>> it doesn't look like the "xen,scheduler" nodes are very useful from a dt
>> parsing perspective either
>> 
>> 
>>> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>>> +        if ( !phandle_node )
>>> +            panic("Missing or empty cpupool-cpus property!\n");
>>> +
>>> +        while ( phandle_node )
>>> +        {
>>> +            int cpu_num;
>>> +
>>> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
>>> +
>>> +            if ( cpu_num < 0 )
>>> +                panic("Error retrieving logical cpu from node %s (%d)\n",
>>> +                      dt_node_name(node), cpu_num);
>>> +
>>> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
>>> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
>>> +
>>> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
>>> +            pool_cpu_map[cpu_num].sched_id = sched_id;
>>> +
>>> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>>> +        }
>>> +
>>> +        /* Let Xen generate pool ids */
>>> +        next_pool_id++;
>>> +    }
>>> +}
>>> +#endif
>>> +
>>> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
>>> +{
>>> +    unsigned int cpu_num;
>>> +
>>> +    /*
>>> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
>>> +     * below will assign every cpu to cpupool0 as the default behavior.
>>> +     * When there are cpupools, the code below assigns all the unassigned
>>> +     * cpus to a new pool (next_pool_id value is the last id + 1).
>>> +     * In the same loop we check if there is any assigned cpu that is not
>>> +     * online.
>>> +     */
>>> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
>>> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
>>> +        {
>>> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
>>> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
>>> +        }
>>> +        else
>> 
>> Please add { }
>> 
>> 
>>> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
>>> +                panic("Pool-%d contains cpu%u that is not online!\n",
>>> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
>> 
>> 
>> 
>>> +#ifdef CONFIG_X86
>>> +    /* Cpu0 must be in cpupool0 for x86 */
>>> +    if ( pool_cpu_map[0].pool_id != 0 )
>> 
>> Is that even possible on x86 given that btcpupools_dtb_parse cannot even
>> run on x86?
>> 
>> If it is not possible, I would remove the code below and simply panic
>> instead.
> 
> Currently x86 doesn’t have a way to specify cpupools, so for now on x86 there will
> be only cpupool 0 with every cpu attached. I thought I had to handle the case where,
> in the future, someone adds a way to specify cpupools (command line?).
> If you think this should be handled only by whoever implements that feature, I will
> remove the block completely.
> 
>> 
>> 
>>> +    {
>>> +        /* The cpupool containing cpu0 will become cpupool0 */
>>> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
>>> +        for_each_cpu ( cpu_num, cpu_online_map )
>>> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
>>> +                pool_cpu_map[cpu_num].pool_id = 0;
>>> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
>>> +                pool_cpu_map[cpu_num].pool_id = swap_id;
>>> +    }
>>> +#endif
>>> +
>>> +    for_each_cpu ( cpu_num, cpu_online_map )
>>> +    {
>>> +        struct cpupool *pool = NULL;
>>> +        int pool_id, sched_id;
>>> +
>>> +        pool_id = pool_cpu_map[cpu_num].pool_id;
>>> +        sched_id = pool_cpu_map[cpu_num].sched_id;
>>> +
>>> +        if ( pool_id )
>>> +        {
>>> +            unsigned int i;
>>> +
>>> +            /* Look for previously created pool with id pool_id */
>>> +            for ( i = 0; i < cpu_num; i++ )
>> 
>> Please add { }
>> 
>> But actually, the double loop seems a bit excessive for this. Could we
>> just have a single loop to cpupool_create_pool from 0 to next_pool_id?
>> 
>> We could get rid of pool_cpu_map[i].pool and just rely on
>> pool_cpu_map[i].pool_id. No need to update pool_cpu_map[i].pool if we
>> get rid of it: it doesn't look like it is very useful anyway?
> 
> Yes, we could easily create all the cpupools in a loop, but to retrieve the pointer
> from the cpupool list I would need a helper; I can make this function public:
> 
> static struct cpupool *cpupool_find_by_id(unsigned int poolid)
> 

I meant a wrapper of this function, since this needs the lock...

> from cpupool.c to get the pointer from the pool id, do you think it can be ok?
> 
> I will address your other findings in the next series.
> 
> Thank you for your review.
> 
> Cheers,
> Luca
> 
>> 
>> 
>>> +                if ( (pool_cpu_map[i].pool_id == pool_id) &&
>>> +                     pool_cpu_map[i].pool )
>>> +                {
>>> +                    pool = pool_cpu_map[i].pool;
>>> +                    break;
>>> +                }
>>> +
>>> +            /* If no pool was created before, create it */
>>> +            if ( !pool )
>>> +                pool = cpupool_create_pool(pool_id, sched_id);
>>> +            if ( !pool )
>>> +                panic("Error creating pool id %u!\n", pool_id);
>>> +        }
>>> +        else
>>> +            pool = cpupool0;
>>> +
>>> +        pool_cpu_map[cpu_num].pool = pool;
>>> +        printk(XENLOG_INFO "Logical CPU %u in Pool-%u.\n", cpu_num, pool_id);
>>> +    }
>>> +}
>>> +
>>> +struct cpupool *__init btcpupools_get_cpupool(unsigned int cpu)
>>> +{
>>> +    return pool_cpu_map[cpu].pool;
>>> +}
>>> +
>>> +/*
>>> + * Local variables:
>>> + * mode: C
>>> + * c-file-style: "BSD"
>>> + * c-basic-offset: 4
>>> + * tab-width: 4
>>> + * indent-tabs-mode: nil
>>> + * End:
>>> + */
>>> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
>>> index 89a891af7076..b2495ad6d03e 100644
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -1247,12 +1247,16 @@ static int __init cf_check cpupool_init(void)
>>>    cpupool_put(cpupool0);
>>>    register_cpu_notifier(&cpu_nfb);
>>> 
>>> +    btcpupools_dtb_parse();
>>> +
>>> +    btcpupools_allocate_pools(&cpu_online_map);
>>> +
>>>    spin_lock(&cpupool_lock);
>>> 
>>>    cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
>>> 
>>>    for_each_cpu ( cpu, &cpupool_free_cpus )
>>> -        cpupool_assign_cpu_locked(cpupool0, cpu);
>>> +        cpupool_assign_cpu_locked(btcpupools_get_cpupool(cpu), cpu);
>>> 
>>>    spin_unlock(&cpupool_lock);
>>> 
>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>> index 2c10303f0187..de4e8feea399 100644
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -1176,6 +1176,25 @@ extern void cf_check dump_runq(unsigned char key);
>>> 
>>> void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
>>> 
>>> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS
>>> +void btcpupools_allocate_pools(const cpumask_t *cpu_online_map);
>>> +struct cpupool *btcpupools_get_cpupool(unsigned int cpu);
>>> +
>>> +#ifdef CONFIG_ARM
>>> +void btcpupools_dtb_parse(void);
>>> +#else
>>> +static inline void btcpupools_dtb_parse(void) {}
>>> +#endif
>>> +
>>> +#else
>>> +static inline void btcpupools_allocate_pools(const cpumask_t *cpu_online_map) {}
>>> +static inline void btcpupools_dtb_parse(void) {}
>>> +static inline struct cpupool *btcpupools_get_cpupool(unsigned int cpu)
>>> +{
>>> +    return cpupool0;
>>> +}
>>> +#endif
>>> +
>>> #endif /* __SCHED_H__ */
>>> 
>>> /*
>>> -- 
>>> 2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-11 14:11     ` Luca Fancellu
  2022-03-11 14:16       ` Luca Fancellu
@ 2022-03-12  0:10       ` Stefano Stabellini
  2022-03-15  6:04         ` Juergen Gross
  1 sibling, 1 reply; 30+ messages in thread
From: Stefano Stabellini @ 2022-03-12  0:10 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Juergen Gross,
	Dario Faggioli


On Fri, 11 Mar 2022, Luca Fancellu wrote:
> > On Thu, 10 Mar 2022, Luca Fancellu wrote:
> >> Introduce a way to create different cpupools at boot time, this is
> >> particularly useful on ARM big.LITTLE systems where there might be the
> >> need to have different cpupools for each type of core, but also
> >> systems using NUMA can have different cpu pools for each node.
> >> 
> >> The feature on arm relies on a specification of the cpupools from the
> >> device tree to build pools and assign cpus to them.
> >> 
> >> Documentation is created to explain the feature.
> >> 
> >> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> >> ---
> >> Changes in v2:
> >> - Move feature to common code (Juergen)
> >> - Try to decouple dtb parse and cpupool creation to allow
> >>  more way to specify cpupools (for example command line)
> >> - Created standalone dt node for the scheduler so it can
> >>  be used in future work to set scheduler specific
> >>  parameters
> >> - Use only auto generated ids for cpupools
> >> ---
> >> docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
> >> xen/common/Kconfig                     |   8 +
> >> xen/common/Makefile                    |   1 +
> >> xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
> >> xen/common/sched/cpupool.c             |   6 +-
> >> xen/include/xen/sched.h                |  19 +++
> >> 6 files changed, 401 insertions(+), 1 deletion(-)
> >> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
> >> create mode 100644 xen/common/boot_cpupools.c
> >> 
> >> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> >> new file mode 100644
> >> index 000000000000..d5a82ed0d45a
> >> --- /dev/null
> >> +++ b/docs/misc/arm/device-tree/cpupools.txt
> >> @@ -0,0 +1,156 @@
> >> +Boot time cpupools
> >> +==================
> >> +
> >> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
> >> +create cpupools during boot phase by specifying them in the device tree.
> >> +
> >> +Cpupool specification nodes shall be direct children of the /chosen node.
> >> +Each cpupool node contains the following properties:
> >> +
> >> +- compatible (mandatory)
> >> +
> >> +    Must always include the compatibility string: "xen,cpupool".
> >> +
> >> +- cpupool-cpus (mandatory)
> >> +
> >> +    Must be a list of device tree phandles to nodes describing cpus (e.g. having
> >> +    device_type = "cpu"); it can't be empty.
> >> +
> >> +- cpupool-sched (optional)
> >> +
> >> +    Must be a device tree phandle to a node having the "xen,scheduler"
> >> +    compatible (described below). It has no effect when the cpupool refers to
> >> +    cpupool number zero; in that case the default Xen scheduler is selected
> >> +    (sched=<...> boot argument).
> > 
> > This is *a lot* better.
> > 
> > The device tree part is nice. I have only one question left on it: why
> > do we need a separate scheduler node? Could the "cpupool-sched" property
> > be a simple string with the scheduler name?
> > 
> > E.g.:
> > 
> >    cpupool_a {
> >        compatible = "xen,cpupool";
> >        cpupool-cpus = <&a53_1 &a53_2>;
> >    };
> >    cpupool_b {
> >        compatible = "xen,cpupool";
> >        cpupool-cpus = <&a72_1 &a72_2>;
> >        cpupool-sched = "null";
> >    };
> > 
> > 
> > To me, it doesn't look like these new "scheduler specification nodes"
> > bring any benefits. I would just get rid of them.
> 
> From a comment of Juergen's on the second patch, I thought someone saw the need
> for a way to set scheduling parameters:
> 
> “you are allowing to use another scheduler,
> but what if someone wants to set non-standard scheduling parameters
> (e.g. another time slice)?”
> 
> So I thought I could introduce a scheduler specification node that could in the
> future be extended and used to set scheduling parameters.
> 
> If it is something that is not needed, I will get rid of it.

I think you should get rid of it because it doesn't help. For instance,
if two cpupools want to use the same scheduler but with different
parameters we would end up with two different scheduler nodes.

Instead, in the future we could have one or more sched-params properties
as needed in the cpupools node to specify scheduler parameters.

 
> >> +A scheduler specification node is a device tree node that contains the following
> >> +properties:
> >> +
> >> +- compatible (mandatory)
> >> +
> >> +    Must always include the compatibility string: "xen,scheduler".
> >> +
> >> +- sched-name (mandatory)
> >> +
> >> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
> >> +    boot argument for allowed values.
> >> +
> >> +
> >> +Constraints
> >> +===========
> >> +
> >> +If no cpupools are specified, all cpus will be assigned to one implicitly
> >> +created cpupool (Pool-0).
> >> +
> >> +If cpupool nodes are specified, but not every cpu brought up by Xen is
> >> +assigned, all the unassigned cpus will be assigned to an additional cpupool.
> >> +
> >> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
> >> +stop.
> >> +
> >> +
> >> +Examples
> >> +========
> >> +
> >> +On a system having two types of cores, the following device tree specification
> >> +will instruct Xen to create two cpupools:
> >> +
> >> +- The cpupool with id 0 will have 4 cpus assigned.
> >> +- The cpupool with id 1 will have 2 cpus assigned.
> >> +
> >> +The following example can work only if hmp-unsafe=1 is passed in the Xen boot
> >> +arguments; otherwise, not all cores will be brought up by Xen and the cpupool
> >> +creation process will stop Xen.
> >> +
> >> +
> >> +a72_1: cpu@0 {
> >> +        compatible = "arm,cortex-a72";
> >> +        reg = <0x0 0x0>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +a72_2: cpu@1 {
> >> +        compatible = "arm,cortex-a72";
> >> +        reg = <0x0 0x1>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +a53_1: cpu@100 {
> >> +        compatible = "arm,cortex-a53";
> >> +        reg = <0x0 0x100>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +a53_2: cpu@101 {
> >> +        compatible = "arm,cortex-a53";
> >> +        reg = <0x0 0x101>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +a53_3: cpu@102 {
> >> +        compatible = "arm,cortex-a53";
> >> +        reg = <0x0 0x102>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +a53_4: cpu@103 {
> >> +        compatible = "arm,cortex-a53";
> >> +        reg = <0x0 0x103>;
> >> +        device_type = "cpu";
> >> +        [...]
> >> +};
> >> +
> >> +chosen {
> >> +
> >> +    sched: sched_a {
> >> +        compatible = "xen,scheduler";
> >> +        sched-name = "credit2";
> >> +    };
> >> +    cpupool_a {
> >> +        compatible = "xen,cpupool";
> >> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
> >> +    };
> >> +    cpupool_b {
> >> +        compatible = "xen,cpupool";
> >> +        cpupool-cpus = <&a72_1 &a72_2>;
> >> +        cpupool-sched = <&sched>;
> >> +    };
> >> +
> >> +    [...]
> >> +
> >> +};
> >> +
> >> +
> >> +A system having the cpupool specification below will instruct Xen to create
> >> +three cpupools:
> >> +
> >> +- The cpupool Pool-0 will have 2 cpus assigned.
> >> +- The cpupool Pool-1 will have 2 cpus assigned.
> >> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
> >> +  unassigned cpus: a53_3 and a53_4).
> >> +
> >> +chosen {
> >> +
> >> +    sched: sched_a {
> >> +        compatible = "xen,scheduler";
> >> +        sched-name = "null";
> >> +    };
> >> +    cpupool_a {
> >> +        compatible = "xen,cpupool";
> >> +        cpupool-cpus = <&a53_1 &a53_2>;
> >> +    };
> >> +    cpupool_b {
> >> +        compatible = "xen,cpupool";
> >> +        cpupool-cpus = <&a72_1 &a72_2>;
> >> +        cpupool-sched = <&sched>;
> >> +    };
> >> +
> >> +    [...]
> >> +
> >> +};
> >> \ No newline at end of file
> >> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> >> index 64439438891c..dc9eed31682f 100644
> >> --- a/xen/common/Kconfig
> >> +++ b/xen/common/Kconfig
> >> @@ -22,6 +22,14 @@ config GRANT_TABLE
> >> 
> >> 	  If unsure, say Y.
> >> 
> >> +config BOOT_TIME_CPUPOOLS
> >> +	bool "Create cpupools at boot time"
> >> +	depends on HAS_DEVICE_TREE
> >> +	default n
> >> +	help
> >> +	  Creates cpupools during boot time and assigns cpus to them. Cpupool
> >> +	  options can be specified in the device tree.
> >> +
> >> config ALTERNATIVE_CALL
> >> 	bool
> >> 
> >> diff --git a/xen/common/Makefile b/xen/common/Makefile
> >> index dc8d3a13f5b8..c5949785ab28 100644
> >> --- a/xen/common/Makefile
> >> +++ b/xen/common/Makefile
> >> @@ -1,5 +1,6 @@
> >> obj-$(CONFIG_ARGO) += argo.o
> >> obj-y += bitmap.o
> >> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
> >> obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
> >> obj-$(CONFIG_CORE_PARKING) += core_parking.o
> >> obj-y += cpu.o
> >> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
> >> new file mode 100644
> >> index 000000000000..e8529a902d21
> >> --- /dev/null
> >> +++ b/xen/common/boot_cpupools.c
> >> @@ -0,0 +1,212 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * xen/common/boot_cpupools.c
> >> + *
> >> + * Code to create cpupools at boot time.
> >> + *
> >> + * Copyright (C) 2022 Arm Ltd.
> >> + */
> >> +
> >> +#include <xen/sched.h>
> >> +
> >> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
> >> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
> >> +
> >> +struct pool_map {
> >> +    int pool_id;
> >> +    int sched_id;
> >> +    struct cpupool *pool;
> >> +};
> >> +
> >> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
> >> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
> >> +static unsigned int __initdata next_pool_id;
> >> +
> >> +#ifdef CONFIG_ARM
> >> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
> >> +{
> >> +    unsigned int i;
> >> +
> >> +    for ( i = 0; i < nr_cpu_ids; i++ )
> >> +        if ( cpu_logical_map(i) == hwid )
> >> +            return i;
> >> +
> >> +    return -1;
> >> +}
> >> +
> >> +static int __init
> >> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
> >> +{
> >> +    unsigned int cpu_reg;
> >> +    int cpu_num;
> >> +    const __be32 *prop;
> >> +
> >> +    prop = dt_get_property(cpu_node, "reg", NULL);
> >> +    if ( !prop )
> >> +        return BTCPUPOOLS_DT_NODE_NO_REG;
> >> +
> >> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
> >> +
> >> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
> >> +    if ( cpu_num < 0 )
> >> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
> >> +
> >> +    return cpu_num;
> >> +}
> >> +
> >> +static int __init check_and_get_sched_id(const char* scheduler_name)
> >> +{
> >> +    int sched_id = sched_get_id_by_name(scheduler_name);
> >> +
> >> +    if ( sched_id < 0 )
> >> +        panic("Scheduler %s does not exist!\n", scheduler_name);
> >> +
> >> +    return sched_id;
> >> +}
> >> +
> >> +void __init btcpupools_dtb_parse(void)
> >> +{
> >> +    const struct dt_device_node *chosen, *node;
> >> +
> >> +    chosen = dt_find_node_by_path("/chosen");
> >> +    if ( !chosen )
> >> +        return;
> >> +
> >> +    dt_for_each_child_node(chosen, node)
> >> +    {
> >> +        const struct dt_device_node *phandle_node;
> >> +        int sched_id = -1;
> >> +        const char* scheduler_name;
> >> +        unsigned int i = 0;
> >> +
> >> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
> >> +            continue;
> >> +
> >> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
> >> +        if ( phandle_node )
> >> +        {
> >> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
> >> +                panic("cpupool-sched must be a xen,scheduler compatible "
> >> +                      "node!\n");
> >> +            if ( !dt_property_read_string(phandle_node, "sched-name",
> >> +                                          &scheduler_name) )
> >> +                sched_id = check_and_get_sched_id(scheduler_name);
> >> +            else
> >> +                panic("Error trying to read sched-name in %s!\n",
> >> +                      dt_node_name(phandle_node));
> >> +        }
> > 
> > it doesn't look like the "xen,scheduler" nodes are very useful from a dt
> > parsing perspective either
> > 
> > 
> >> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> >> +        if ( !phandle_node )
> >> +            panic("Missing or empty cpupool-cpus property!\n");
> >> +
> >> +        while ( phandle_node )
> >> +        {
> >> +            int cpu_num;
> >> +
> >> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
> >> +
> >> +            if ( cpu_num < 0 )
> >> +                panic("Error retrieving logical cpu from node %s (%d)\n",
> >> +                      dt_node_name(node), cpu_num);
> >> +
> >> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
> >> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
> >> +
> >> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
> >> +            pool_cpu_map[cpu_num].sched_id = sched_id;
> >> +
> >> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
> >> +        }
> >> +
> >> +        /* Let Xen generate pool ids */
> >> +        next_pool_id++;
> >> +    }
> >> +}
> >> +#endif
> >> +
> >> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
> >> +{
> >> +    unsigned int cpu_num;
> >> +
> >> +    /*
> >> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
> >> +     * below will assign every cpu to cpupool0 as the default behavior.
> >> +     * When there are cpupools, the code below assigns all the unassigned
> >> +     * cpus to a new pool (next_pool_id value is the last id + 1).
> >> +     * In the same loop we check if there is any assigned cpu that is not
> >> +     * online.
> >> +     */
> >> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
> >> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
> >> +        {
> >> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
> >> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
> >> +        }
> >> +        else
> > 
> > Please add { }
> > 
> > 
> >> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
> >> +                panic("Pool-%d contains cpu%u that is not online!\n",
> >> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
> > 
> > 
> > 
> >> +#ifdef CONFIG_X86
> >> +    /* Cpu0 must be in cpupool0 for x86 */
> >> +    if ( pool_cpu_map[0].pool_id != 0 )
> > 
> > Is that even possible on x86 given that btcpupools_dtb_parse cannot even
> > run on x86?
> > 
> > If it is not possible, I would remove the code below and simply panic
> > instead.
> 
> Currently x86 doesn’t have a way to specify cpupools, so for now on x86 there
> will be only cpupool 0 with every cpu attached. I thought I had to handle the
> case in which, in the future, someone adds a way to specify cpupools
> (cmdline?). If you think this should be handled only by whoever implements
> that feature, I will remove the block completely.
 
In general, it is good to try to make the code as generally useful as
possible. However, it needs to be testable. In this case, this code is
basically dead code because there is no way to trigger it. So I suggest
to remove it and replace it with a panic.
 
 
> >> +    {
> >> +        /* The cpupool containing cpu0 will become cpupool0 */
> >> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
> >> +        for_each_cpu ( cpu_num, cpu_online_map )
> >> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
> >> +                pool_cpu_map[cpu_num].pool_id = 0;
> >> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
> >> +                pool_cpu_map[cpu_num].pool_id = swap_id;
> >> +    }
> >> +#endif
> >> +
> >> +    for_each_cpu ( cpu_num, cpu_online_map )
> >> +    {
> >> +        struct cpupool *pool = NULL;
> >> +        int pool_id, sched_id;
> >> +
> >> +        pool_id = pool_cpu_map[cpu_num].pool_id;
> >> +        sched_id = pool_cpu_map[cpu_num].sched_id;
> >> +
> >> +        if ( pool_id )
> >> +        {
> >> +            unsigned int i;
> >> +
> >> +            /* Look for previously created pool with id pool_id */
> >> +            for ( i = 0; i < cpu_num; i++ )
> > 
> > Please add { }
> > 
> > But actually, the double loop seems a bit excessive for this. Could we
> > just have a single loop to cpupool_create_pool from 0 to next_pool_id?
> > 
> > We could get rid of pool_cpu_map[i].pool and just rely on
> > pool_cpu_map[i].pool_id. No need to update pool_cpu_map[i].pool if we
> > get rid of it: it doesn't look like it is very useful anyway?
> 
> Yes, we could easily create all the cpupools in a loop, but to retrieve the
> pointer from the cpupool list I would need something; I could make this
> function public:
> 
> static struct cpupool *cpupool_find_by_id(unsigned int poolid)
> 
> from cpupool.c to get the pointer from the pool id. Do you think that would
> be ok?

Yes I think that is a better option if Juergen agrees.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-12  0:10       ` Stefano Stabellini
@ 2022-03-15  6:04         ` Juergen Gross
  2022-03-15  8:40           ` Luca Fancellu
  0 siblings, 1 reply; 30+ messages in thread
From: Juergen Gross @ 2022-03-15  6:04 UTC (permalink / raw)
  To: Stefano Stabellini, Luca Fancellu
  Cc: Xen-devel, Wei Chen, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Dario Faggioli


On 12.03.22 01:10, Stefano Stabellini wrote:
> On Fri, 11 Mar 2022, Luca Fancellu wrote:
>>> On Thu, 10 Mar 2022, Luca Fancellu wrote:
>>>> Introduce a way to create different cpupools at boot time. This is
>>>> particularly useful on ARM big.LITTLE systems, where there might be the
>>>> need to have different cpupools for each type of core; systems using
>>>> NUMA can also have different cpupools for each node.
>>>>
>>>> The feature on arm relies on a specification of the cpupools from the
>>>> device tree to build pools and assign cpus to them.
>>>>
>>>> Documentation is created to explain the feature.
>>>>
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>> ---
>>>> Changes in v2:
>>>> - Move feature to common code (Juergen)
>>>> - Try to decouple dtb parse and cpupool creation to allow
>>>>   more way to specify cpupools (for example command line)
>>>> - Created standalone dt node for the scheduler so it can
>>>>   be used in future work to set scheduler specific
>>>>   parameters
>>>> - Use only auto generated ids for cpupools
>>>> ---
>>>> docs/misc/arm/device-tree/cpupools.txt | 156 ++++++++++++++++++
>>>> xen/common/Kconfig                     |   8 +
>>>> xen/common/Makefile                    |   1 +
>>>> xen/common/boot_cpupools.c             | 212 +++++++++++++++++++++++++
>>>> xen/common/sched/cpupool.c             |   6 +-
>>>> xen/include/xen/sched.h                |  19 +++
>>>> 6 files changed, 401 insertions(+), 1 deletion(-)
>>>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
>>>> create mode 100644 xen/common/boot_cpupools.c
>>>>
>>>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
>>>> new file mode 100644
>>>> index 000000000000..d5a82ed0d45a
>>>> --- /dev/null
>>>> +++ b/docs/misc/arm/device-tree/cpupools.txt
>>>> @@ -0,0 +1,156 @@
>>>> +Boot time cpupools
>>>> +==================
>>>> +
>>>> +When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
>>>> +create cpupools during the boot phase by specifying them in the device tree.
>>>> +
>>>> +Cpupool specification nodes shall be direct children of the /chosen node.
>>>> +Each cpupool node contains the following properties:
>>>> +
>>>> +- compatible (mandatory)
>>>> +
>>>> +    Must always include the compatibility string: "xen,cpupool".
>>>> +
>>>> +- cpupool-cpus (mandatory)
>>>> +
>>>> +    Must be a list of device tree phandles to nodes describing cpus (e.g.
>>>> +    having device_type = "cpu"); it cannot be empty.
>>>> +
>>>> +- cpupool-sched (optional)
>>>> +
>>>> +    Must be a device tree phandle to a node having the "xen,scheduler"
>>>> +    compatible (description below); it has no effect when the cpupool refers
>>>> +    to cpupool number zero, in which case the default Xen scheduler is
>>>> +    selected (sched=<...> boot argument).
>>>
>>> This is *a lot* better.
>>>
>>> The device tree part is nice. I have only one question left on it: why
>>> do we need a separate scheduler node? Could the "cpupool-sched" property
>>> be a simple string with the scheduler name?
>>>
>>> E.g.:
>>>
>>>     cpupool_a {
>>>         compatible = "xen,cpupool";
>>>         cpupool-cpus = <&a53_1 &a53_2>;
>>>     };
>>>     cpupool_b {
>>>         compatible = "xen,cpupool";
>>>         cpupool-cpus = <&a72_1 &a72_2>;
>>>         cpupool-sched = "null";
>>>     };
>>>
>>>
>>> To me, it doesn't look like these new "scheduler specification nodes"
>>> bring any benefits. I would just get rid of them.
>>
>>  From a comment of Juergen's on the second patch, I thought someone might see
>> the need for a way to set scheduling parameters:
>>
>> “you are allowing to use another scheduler,
>> but what if someone wants to set non-standard scheduling parameters
>> (e.g. another time slice)?”
>>
>> So I thought I could introduce a scheduler specification node that could, in
>> the future, be extended and used to set scheduling parameters.
>>
>> If it is something that is not needed, I will get rid of it.
> 
> I think you should get rid of it because it doesn't help. For instance,
> if two cpupools want to use the same scheduler but with different
> parameters we would end up with two different scheduler nodes.
> 
> Instead, in the future we could have one or more sched-params properties
> as needed in the cpupools node to specify scheduler parameters.
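
The simplified binding Stefano describes, with cpupool-sched as a plain string,
might look like the sketch below. The cpupool-sched-params property is purely
hypothetical, shown only to illustrate how per-pool parameters could be added
later; neither its name nor its syntax is defined by this series:

```dts
chosen {
    cpupool_a {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a53_1 &a53_2>;
    };
    cpupool_b {
        compatible = "xen,cpupool";
        cpupool-cpus = <&a72_1 &a72_2>;
        cpupool-sched = "null";
        /* Hypothetical future extension, not part of this series: */
        cpupool-sched-params = "tslice_ms=10";
    };
};
```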
> 
>   
>>>> +A scheduler specification node is a device tree node that contains the following
>>>> +properties:
>>>> +
>>>> +- compatible (mandatory)
>>>> +
>>>> +    Must always include the compatibility string: "xen,scheduler".
>>>> +
>>>> +- sched-name (mandatory)
>>>> +
>>>> +    Must be a string having the name of a Xen scheduler, check the sched=<...>
>>>> +    boot argument for allowed values.
>>>> +
>>>> +
>>>> +Constraints
>>>> +===========
>>>> +
>>>> +If no cpupools are specified, all cpus will be assigned to one cpupool
>>>> +implicitly created (Pool-0).
>>>> +
>>>> +If cpupool nodes are specified, but not every cpu brought up by Xen is
>>>> +assigned, all the unassigned cpus will be assigned to an additional cpupool.
>>>> +
>>>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
>>>> +stop.
>>>> +
>>>> +
>>>> +Examples
>>>> +========
>>>> +
>>>> +For a system having two types of cores, the following device tree
>>>> +specification will instruct Xen to create two cpupools:
>>>> +
>>>> +- The cpupool with id 0 will have 4 cpus assigned.
>>>> +- The cpupool with id 1 will have 2 cpus assigned.
>>>> +
>>>> +The following example works only if hmp-unsafe=1 is passed in the Xen boot
>>>> +arguments; otherwise, not all cores will be brought up by Xen and the
>>>> +cpupool creation process will stop Xen.
>>>> +
>>>> +
>>>> +a72_1: cpu@0 {
>>>> +        compatible = "arm,cortex-a72";
>>>> +        reg = <0x0 0x0>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +a72_2: cpu@1 {
>>>> +        compatible = "arm,cortex-a72";
>>>> +        reg = <0x0 0x1>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +a53_1: cpu@100 {
>>>> +        compatible = "arm,cortex-a53";
>>>> +        reg = <0x0 0x100>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +a53_2: cpu@101 {
>>>> +        compatible = "arm,cortex-a53";
>>>> +        reg = <0x0 0x101>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +a53_3: cpu@102 {
>>>> +        compatible = "arm,cortex-a53";
>>>> +        reg = <0x0 0x102>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +a53_4: cpu@103 {
>>>> +        compatible = "arm,cortex-a53";
>>>> +        reg = <0x0 0x103>;
>>>> +        device_type = "cpu";
>>>> +        [...]
>>>> +};
>>>> +
>>>> +chosen {
>>>> +
>>>> +    sched: sched_a {
>>>> +        compatible = "xen,scheduler";
>>>> +        sched-name = "credit2";
>>>> +    };
>>>> +    cpupool_a {
>>>> +        compatible = "xen,cpupool";
>>>> +        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
>>>> +    };
>>>> +    cpupool_b {
>>>> +        compatible = "xen,cpupool";
>>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>>> +        cpupool-sched = <&sched>;
>>>> +    };
>>>> +
>>>> +    [...]
>>>> +
>>>> +};
>>>> +
>>>> +
>>>> +A system having the cpupools specification below will instruct Xen to have three
>>>> +cpupools:
>>>> +
>>>> +- The cpupool Pool-0 will have 2 cpus assigned.
>>>> +- The cpupool Pool-1 will have 2 cpus assigned.
>>>> +- The cpupool Pool-2 will have 2 cpus assigned (created by Xen with all the
>>>> +  unassigned cpus, a53_3 and a53_4).
>>>> +
>>>> +chosen {
>>>> +
>>>> +    sched: sched_a {
>>>> +        compatible = "xen,scheduler";
>>>> +        sched-name = "null";
>>>> +    };
>>>> +    cpupool_a {
>>>> +        compatible = "xen,cpupool";
>>>> +        cpupool-cpus = <&a53_1 &a53_2>;
>>>> +    };
>>>> +    cpupool_b {
>>>> +        compatible = "xen,cpupool";
>>>> +        cpupool-cpus = <&a72_1 &a72_2>;
>>>> +        cpupool-sched = <&sched>;
>>>> +    };
>>>> +
>>>> +    [...]
>>>> +
>>>> +};
>>>> \ No newline at end of file
>>>> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
>>>> index 64439438891c..dc9eed31682f 100644
>>>> --- a/xen/common/Kconfig
>>>> +++ b/xen/common/Kconfig
>>>> @@ -22,6 +22,14 @@ config GRANT_TABLE
>>>>
>>>> 	  If unsure, say Y.
>>>>
>>>> +config BOOT_TIME_CPUPOOLS
>>>> +	bool "Create cpupools at boot time"
>>>> +	depends on HAS_DEVICE_TREE
>>>> +	default n
>>>> +	help
>>>> +	  Creates cpupools during boot time and assigns cpus to them. Cpupool
>>>> +	  options can be specified in the device tree.
>>>> +
>>>> config ALTERNATIVE_CALL
>>>> 	bool
>>>>
>>>> diff --git a/xen/common/Makefile b/xen/common/Makefile
>>>> index dc8d3a13f5b8..c5949785ab28 100644
>>>> --- a/xen/common/Makefile
>>>> +++ b/xen/common/Makefile
>>>> @@ -1,5 +1,6 @@
>>>> obj-$(CONFIG_ARGO) += argo.o
>>>> obj-y += bitmap.o
>>>> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot_cpupools.o
>>>> obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>>>> obj-$(CONFIG_CORE_PARKING) += core_parking.o
>>>> obj-y += cpu.o
>>>> diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
>>>> new file mode 100644
>>>> index 000000000000..e8529a902d21
>>>> --- /dev/null
>>>> +++ b/xen/common/boot_cpupools.c
>>>> @@ -0,0 +1,212 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * xen/common/boot_cpupools.c
>>>> + *
>>>> + * Code to create cpupools at boot time for arm architecture.
>>>> + *
>>>> + * Copyright (C) 2022 Arm Ltd.
>>>> + */
>>>> +
>>>> +#include <xen/sched.h>
>>>> +
>>>> +#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
>>>> +#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
>>>> +
>>>> +struct pool_map {
>>>> +    int pool_id;
>>>> +    int sched_id;
>>>> +    struct cpupool *pool;
>>>> +};
>>>> +
>>>> +static struct pool_map __initdata pool_cpu_map[NR_CPUS] =
>>>> +    { [0 ... NR_CPUS-1] = {.pool_id = -1, .sched_id = -1, .pool = NULL} };
>>>> +static unsigned int __initdata next_pool_id;
>>>> +
>>>> +#ifdef CONFIG_ARM
>>>> +static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
>>>> +{
>>>> +    unsigned int i;
>>>> +
>>>> +    for ( i = 0; i < nr_cpu_ids; i++ )
>>>> +        if ( cpu_logical_map(i) == hwid )
>>>> +            return i;
>>>> +
>>>> +    return -1;
>>>> +}
>>>> +
>>>> +static int __init
>>>> +get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
>>>> +{
>>>> +    unsigned int cpu_reg;
>>>> +    int cpu_num;
>>>> +    const __be32 *prop;
>>>> +
>>>> +    prop = dt_get_property(cpu_node, "reg", NULL);
>>>> +    if ( !prop )
>>>> +        return BTCPUPOOLS_DT_NODE_NO_REG;
>>>> +
>>>> +    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
>>>> +
>>>> +    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
>>>> +    if ( cpu_num < 0 )
>>>> +        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
>>>> +
>>>> +    return cpu_num;
>>>> +}
>>>> +
>>>> +static int __init check_and_get_sched_id(const char* scheduler_name)
>>>> +{
>>>> +    int sched_id = sched_get_id_by_name(scheduler_name);
>>>> +
>>>> +    if ( sched_id < 0 )
>>>> +        panic("Scheduler %s does not exist!\n", scheduler_name);
>>>> +
>>>> +    return sched_id;
>>>> +}
>>>> +
>>>> +void __init btcpupools_dtb_parse(void)
>>>> +{
>>>> +    const struct dt_device_node *chosen, *node;
>>>> +
>>>> +    chosen = dt_find_node_by_path("/chosen");
>>>> +    if ( !chosen )
>>>> +        return;
>>>> +
>>>> +    dt_for_each_child_node(chosen, node)
>>>> +    {
>>>> +        const struct dt_device_node *phandle_node;
>>>> +        int sched_id = -1;
>>>> +        const char* scheduler_name;
>>>> +        unsigned int i = 0;
>>>> +
>>>> +        if ( !dt_device_is_compatible(node, "xen,cpupool") )
>>>> +            continue;
>>>> +
>>>> +        phandle_node = dt_parse_phandle(node, "cpupool-sched", 0);
>>>> +        if ( phandle_node )
>>>> +        {
>>>> +            if ( !dt_device_is_compatible(phandle_node, "xen,scheduler") )
>>>> +                panic("cpupool-sched must be a xen,scheduler compatible "
>>>> +                      "node!\n");
>>>> +            if ( !dt_property_read_string(phandle_node, "sched-name",
>>>> +                                          &scheduler_name) )
>>>> +                sched_id = check_and_get_sched_id(scheduler_name);
>>>> +            else
>>>> +                panic("Error trying to read sched-name in %s!\n",
>>>> +                      dt_node_name(phandle_node));
>>>> +        }
>>>
>>> it doesn't look like the "xen,scheduler" nodes are very useful from a dt
>>> parsing perspective either
>>>
>>>
>>>> +        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>>>> +        if ( !phandle_node )
>>>> +            panic("Missing or empty cpupool-cpus property!\n");
>>>> +
>>>> +        while ( phandle_node )
>>>> +        {
>>>> +            int cpu_num;
>>>> +
>>>> +            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
>>>> +
>>>> +            if ( cpu_num < 0 )
>>>> +                panic("Error retrieving logical cpu from node %s (%d)\n",
>>>> +                      dt_node_name(node), cpu_num);
>>>> +
>>>> +            if ( pool_cpu_map[cpu_num].pool_id != -1 )
>>>> +                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
>>>> +
>>>> +            pool_cpu_map[cpu_num].pool_id = next_pool_id;
>>>> +            pool_cpu_map[cpu_num].sched_id = sched_id;
>>>> +
>>>> +            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
>>>> +        }
>>>> +
>>>> +        /* Let Xen generate pool ids */
>>>> +        next_pool_id++;
>>>> +    }
>>>> +}
>>>> +#endif
>>>> +
>>>> +void __init btcpupools_allocate_pools(const cpumask_t *cpu_online_map)
>>>> +{
>>>> +    unsigned int cpu_num;
>>>> +
>>>> +    /*
>>>> +     * If there are no cpupools, the value of next_pool_id is zero, so the code
>>>> +     * below will assign every cpu to cpupool0 as the default behavior.
>>>> +     * When there are cpupools, the code below assigns every unassigned
>>>> +     * cpu to a new pool (next_pool_id is the last id + 1).
>>>> +     * In the same loop we check if there is any assigned cpu that is not
>>>> +     * online.
>>>> +     */
>>>> +    for ( cpu_num = 0; cpu_num < nr_cpu_ids; cpu_num++ )
>>>> +        if ( cpumask_test_cpu(cpu_num, cpu_online_map) )
>>>> +        {
>>>> +            if ( pool_cpu_map[cpu_num].pool_id < 0 )
>>>> +                pool_cpu_map[cpu_num].pool_id = next_pool_id;
>>>> +        }
>>>> +        else
>>>
>>> Please add { }
>>>
>>>
>>>> +            if ( pool_cpu_map[cpu_num].pool_id >= 0 )
>>>> +                panic("Pool-%d contains cpu%u that is not online!\n",
>>>> +                      pool_cpu_map[cpu_num].pool_id, cpu_num);
>>>
>>>
>>>
>>>> +#ifdef CONFIG_X86
>>>> +    /* Cpu0 must be in cpupool0 for x86 */
>>>> +    if ( pool_cpu_map[0].pool_id != 0 )
>>>
>>> Is that even possible on x86 given that btcpupools_dtb_parse cannot even
>>> run on x86?
>>>
>>> If it is not possible, I would remove the code below and simply panic
>>> instead.
>>
>> Currently x86 doesn’t have a way to specify cpupools, so for now on x86 there
>> will be only cpupool 0 with every cpu attached. I thought I had to handle the
>> case in which, in the future, someone adds a way to specify cpupools
>> (cmdline?). If you think this should be handled only by whoever implements
>> that feature, I will remove the block completely.
>   
> In general, it is good to try to make the code as generally useful as
> possible. However, it needs to be testable. In this case, this code is
> basically dead code because there is no way to trigger it. So I suggest
> to remove it and replace it with a panic.
>   
>   
>>>> +    {
>>>> +        /* The cpupool containing cpu0 will become cpupool0 */
>>>> +        unsigned int swap_id = pool_cpu_map[0].pool_id;
>>>> +        for_each_cpu ( cpu_num, cpu_online_map )
>>>> +            if ( pool_cpu_map[cpu_num].pool_id == swap_id )
>>>> +                pool_cpu_map[cpu_num].pool_id = 0;
>>>> +            else if ( pool_cpu_map[cpu_num].pool_id == 0 )
>>>> +                pool_cpu_map[cpu_num].pool_id = swap_id;
>>>> +    }
>>>> +#endif
>>>> +
>>>> +    for_each_cpu ( cpu_num, cpu_online_map )
>>>> +    {
>>>> +        struct cpupool *pool = NULL;
>>>> +        int pool_id, sched_id;
>>>> +
>>>> +        pool_id = pool_cpu_map[cpu_num].pool_id;
>>>> +        sched_id = pool_cpu_map[cpu_num].sched_id;
>>>> +
>>>> +        if ( pool_id )
>>>> +        {
>>>> +            unsigned int i;
>>>> +
>>>> +            /* Look for previously created pool with id pool_id */
>>>> +            for ( i = 0; i < cpu_num; i++ )
>>>
>>> Please add { }
>>>
>>> But actually, the double loop seems a bit excessive for this. Could we
>>> just have a single loop to cpupool_create_pool from 0 to next_pool_id?
>>>
>>> We could get rid of pool_cpu_map[i].pool and just rely on
>>> pool_cpu_map[i].pool_id. No need to update pool_cpu_map[i].pool if we
>>> get rid of it: it doesn't look like it is very useful anyway?
>>
>> Yes, we could easily create all the cpupools in a loop, but to retrieve the
>> pointer from the cpupool list I would need something; I could make this
>> function public:
>>
>> static struct cpupool *cpupool_find_by_id(unsigned int poolid)
>>
>> from cpupool.c to get the pointer from the pool id. Do you think that would
>> be ok?
> 
> Yes I think that is a better option if Juergen agrees.

Hmm, this will fail the ASSERT(spin_is_locked(&cpupool_lock)) in
__cpupool_find_by_id().

I think you need to use cpupool_get_by_id() and cpupool_put() by making them
globally visible (move their prototypes to sched.h).


Juergen


* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-15  6:04         ` Juergen Gross
@ 2022-03-15  8:40           ` Luca Fancellu
  2022-03-15  9:12             ` Juergen Gross
  0 siblings, 1 reply; 30+ messages in thread
From: Luca Fancellu @ 2022-03-15  8:40 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli


> Hmm, this will fail the ASSERT(spin_is_locked(&cpupool_lock)) in
> __cpupool_find_by_id().
> 
> I think you need to use cpupool_get_by_id() and cpupool_put() by making them
> globally visible (move their prototypes to sched.h).

Hi Juergen,

Yes, I was thinking more of an __init wrapper that takes the lock and calls
__cpupool_find_by_id, to avoid exporting cpupool_put since we would be its
only user.
But I would like your opinion on that.

Cheers,
Luca

> 
> 
> Juergen




* Re: [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time
  2022-03-15  8:40           ` Luca Fancellu
@ 2022-03-15  9:12             ` Juergen Gross
  0 siblings, 0 replies; 30+ messages in thread
From: Juergen Gross @ 2022-03-15  9:12 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Stefano Stabellini, Xen-devel, Wei Chen, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Dario Faggioli


On 15.03.22 09:40, Luca Fancellu wrote:
> 
>> Hmm, this will fail the ASSERT(spin_is_locked(&cpupool_lock)) in
>> __cpupool_find_by_id().
>>
>> I think you need to use cpupool_get_by_id() and cpupool_put() by making them
>> globally visible (move their prototypes to sched.h).
> 
> Hi Juergen,
> 
> Yes, I was thinking more of an __init wrapper that takes the lock and calls
> __cpupool_find_by_id, to avoid exporting cpupool_put since we would be its
> only user.
> But I would like your opinion on that.

I'd be fine with that.


Juergen



* Re: [PATCH v2 2/6] xen/sched: create public function for cpupools creation
  2022-03-10 17:10 ` [PATCH v2 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
  2022-03-11  8:21   ` Juergen Gross
@ 2022-03-15 17:45   ` Andrew Cooper
  2022-03-15 17:50     ` Luca Fancellu
  1 sibling, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2022-03-15 17:45 UTC (permalink / raw)
  To: Luca Fancellu, xen-devel
  Cc: wei.chen, Juergen Gross, Dario Faggioli, George Dunlap,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

On 10/03/2022 17:10, Luca Fancellu wrote:
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 10ea969c7af9..47fc856e0fe0 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1145,6 +1145,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
>  int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
>  unsigned int cpupool_get_id(const struct domain *d);
>  const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
> +
> +/*
> + * cpupool_create_pool - Creates a cpupool
> + * @pool_id: id of the pool to be created
> + * @sched_id: id of the scheduler to be used for the pool
> + *
> + * Creates a cpupool with pool_id id.
> + * The sched_id parameter identifies the scheduler to be used, if it is
> + * negative, the default scheduler of Xen will be used.
> + *
> + * returns:
> + *     pointer to the struct cpupool just created, on success
> + *     NULL, on cpupool creation error

What makes you say this?  Your new function will fall over a NULL
pointer before it returns one...

~Andrew


* Re: [PATCH v2 2/6] xen/sched: create public function for cpupools creation
  2022-03-15 17:45   ` Andrew Cooper
@ 2022-03-15 17:50     ` Luca Fancellu
  0 siblings, 0 replies; 30+ messages in thread
From: Luca Fancellu @ 2022-03-15 17:50 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel, Wei Chen, Juergen Gross, Dario Faggioli,
	George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
	Wei Liu



> On 15 Mar 2022, at 17:45, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> 
> On 10/03/2022 17:10, Luca Fancellu wrote:
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index 10ea969c7af9..47fc856e0fe0 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -1145,6 +1145,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
>> int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
>> unsigned int cpupool_get_id(const struct domain *d);
>> const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
>> +
>> +/*
>> + * cpupool_create_pool - Creates a cpupool
>> + * @pool_id: id of the pool to be created
>> + * @sched_id: id of the scheduler to be used for the pool
>> + *
>> + * Creates a cpupool with pool_id id.
>> + * The sched_id parameter identifies the scheduler to be used, if it is
>> + * negative, the default scheduler of Xen will be used.
>> + *
>> + * returns:
>> + *     pointer to the struct cpupool just created, on success
>> + *     NULL, on cpupool creation error
> 
> What makes you say this?  Your new function will fall over a NULL
> pointer before it returns one...

You are right, it’s a leftover from v1; I will change it and review the code
that uses it.

Cheers,
Luca

> 
> ~Andrew



end of thread, other threads:[~2022-03-15 17:51 UTC | newest]

Thread overview: 30+ messages
2022-03-10 17:10 [PATCH v2 0/6] Boot time cpupools Luca Fancellu
2022-03-10 17:10 ` [PATCH v2 1/6] tools/cpupools: Give a name to unnamed cpupools Luca Fancellu
2022-03-11  8:17   ` Juergen Gross
2022-03-10 17:10 ` [PATCH v2 2/6] xen/sched: create public function for cpupools creation Luca Fancellu
2022-03-11  8:21   ` Juergen Gross
2022-03-15 17:45   ` Andrew Cooper
2022-03-15 17:50     ` Luca Fancellu
2022-03-10 17:10 ` [PATCH v2 3/6] xen/sched: retrieve scheduler id by name Luca Fancellu
2022-03-11  8:13   ` Juergen Gross
2022-03-10 17:10 ` [PATCH v2 4/6] xen/cpupool: Create different cpupools at boot time Luca Fancellu
2022-03-11  3:57   ` Stefano Stabellini
2022-03-11 14:11     ` Luca Fancellu
2022-03-11 14:16       ` Luca Fancellu
2022-03-12  0:10       ` Stefano Stabellini
2022-03-15  6:04         ` Juergen Gross
2022-03-15  8:40           ` Luca Fancellu
2022-03-15  9:12             ` Juergen Gross
2022-03-11  7:41   ` Jan Beulich
2022-03-11  8:09   ` Juergen Gross
2022-03-11  8:56     ` Luca Fancellu
2022-03-11  9:29       ` Juergen Gross
2022-03-11  9:46         ` Jan Beulich
2022-03-11 10:18           ` Juergen Gross
2022-03-11 11:29             ` Luca Fancellu
2022-03-11 12:22               ` Juergen Gross
2022-03-11 12:28               ` Jan Beulich
2022-03-10 17:10 ` [PATCH v2 5/6] arm/dom0less: assign dom0less guests to cpupools Luca Fancellu
2022-03-11  3:58   ` Stefano Stabellini
2022-03-10 17:10 ` [PATCH v2 6/6] xen/cpupool: Allow cpupool0 to use different scheduler Luca Fancellu
2022-03-11  8:25   ` Juergen Gross
