* [PATCH 0/3] xen/sched: fix cpu hotplug
@ 2022-08-02 13:27 Juergen Gross
  2022-08-02 13:27 ` [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity() Juergen Gross
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Juergen Gross @ 2022-08-02 13:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

A recent change in the hypervisor memory allocation framework led to
crashes when unplugging host cpus.

This was due to the (correct) assertion that allocating and freeing
memory is allowed only with interrupts enabled. As the main cpu unplug
operation is done in stop-machine context, this assertion triggers in
debug builds.

Correct that by pre-allocating all needed memory while interrupts are
still enabled, and by freeing memory only after interrupts have been
enabled again.
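
In outline, the resulting hotplug flow is (a condensed sketch of what
patches 2 and 3 introduce in the cpupool cpu notifier):

    CPU_DOWN_PREPARE: mem = schedule_cpu_rm_alloc(cpu);  /* interrupts still enabled */
    CPU_DYING:        cpupool_cpu_remove(cpu, mem);      /* stop-machine, no xmalloc()/xfree() */
    CPU_DEAD:         schedule_cpu_rm_free(mem, cpu);    /* interrupts enabled again */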

Juergen Gross (3):
  xen/sched: introduce cpupool_update_node_affinity()
  xen/sched: carve out memory allocation and freeing from
    schedule_cpu_rm()
  xen/sched: fix cpu hotplug

 xen/common/sched/core.c    | 198 +++++++++++++++++++++----------------
 xen/common/sched/cpupool.c | 119 +++++++++++++++++-----
 xen/common/sched/private.h |  21 +++-
 3 files changed, 229 insertions(+), 109 deletions(-)

-- 
2.35.3




* [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
  2022-08-02 13:27 [PATCH 0/3] xen/sched: fix cpu hotplug Juergen Gross
@ 2022-08-02 13:27 ` Juergen Gross
  2022-08-03  7:50   ` Jan Beulich
  2022-08-02 13:27 ` [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm() Juergen Gross
  2022-08-02 13:36 ` [PATCH 3/3] xen/sched: fix cpu hotplug Juergen Gross
  2 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2022-08-02 13:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

For updating the node affinities of all domains in a cpupool, add a new
function cpupool_update_node_affinity().

In order to avoid multiple allocations of cpumasks, split
domain_update_node_affinity() into a wrapper doing the needed
allocations and a work function, which can also be called by
cpupool_update_node_affinity().

This will later help to pre-allocate the cpumasks in order to avoid
allocations in stop-machine context.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c    | 61 ++++++++++++++++++++-----------------
 xen/common/sched/cpupool.c | 62 +++++++++++++++++++++++++++-----------
 xen/common/sched/private.h |  8 +++++
 3 files changed, 87 insertions(+), 44 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f689b55783..c8d1034d3d 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+void domain_update_node_affinity_noalloc(struct domain *d,
+                                         const cpumask_t *online,
+                                         struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
     cpumask_t *dom_affinity;
-    const cpumask_t *online;
     struct sched_unit *unit;
     unsigned int cpu;
 
-    /* Do we have vcpus already? If not, no need to update node-affinity. */
-    if ( !d->vcpu || !d->vcpu[0] )
-        return;
-
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-    {
-        free_cpumask_var(dom_cpumask);
-        return;
-    }
-
-    online = cpupool_domain_master_cpumask(d);
-
     spin_lock(&d->node_affinity_lock);
 
     /*
@@ -1830,22 +1816,21 @@ void domain_update_node_affinity(struct domain *d)
          */
         for_each_sched_unit ( d, unit )
         {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
+            cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+            cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
         }
         /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
+        cpumask_and(affinity->hard, affinity->hard, online);
+        ASSERT(!cpumask_empty(affinity->hard));
         /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+        cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
         /*
          * If not empty, the intersection of hard, soft and online is the
          * narrowest set we want. If empty, we fall back to hard&online.
          */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
+        dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                     : affinity->soft;
 
         nodes_clear(d->node_affinity);
         for_each_cpu ( cpu, dom_affinity )
@@ -1853,9 +1838,31 @@ void domain_update_node_affinity(struct domain *d)
     }
 
     spin_unlock(&d->node_affinity_lock);
+}
+
+void domain_update_node_affinity(struct domain *d)
+{
+    struct affinity_masks masks;
+    const cpumask_t *online;
+
+    /* Do we have vcpus already? If not, no need to update node-affinity. */
+    if ( !d->vcpu || !d->vcpu[0] )
+        return;
+
+    if ( !zalloc_cpumask_var(&masks.hard) )
+        return;
+    if ( !zalloc_cpumask_var(&masks.soft) )
+    {
+        free_cpumask_var(masks.hard);
+        return;
+    }
+
+    online = cpupool_domain_master_cpumask(d);
+
+    domain_update_node_affinity_noalloc(d, online, &masks);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    free_cpumask_var(masks.soft);
+    free_cpumask_var(masks.hard);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 2afe54f54d..1463dcd767 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
+{
+    if ( !alloc_cpumask_var(&masks->hard) )
+        return -ENOMEM;
+    if ( alloc_cpumask_var(&masks->soft) )
+        return 0;
+
+    free_cpumask_var(masks->hard);
+    return -ENOMEM;
+}
+
+static void cpupool_free_affin_masks(struct affinity_masks *masks)
+{
+    free_cpumask_var(masks->soft);
+    free_cpumask_var(masks->hard);
+}
+
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    const cpumask_t *online = c->res_valid;
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( cpupool_alloc_affin_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain_in_cpupool(d, c)
+    {
+        if ( d->vcpu && d->vcpu[0] )
+        {
+            cpumask_clear(masks.hard);
+            cpumask_clear(masks.soft);
+            domain_update_node_affinity_noalloc(d, online, &masks);
+        }
+    }
+    rcu_read_unlock(&domlist_read_lock);
+
+    cpupool_free_affin_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -417,7 +459,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -442,12 +483,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -456,18 +492,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -494,11 +526,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..de0cf63ce8 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,14 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+void domain_update_node_affinity_noalloc(struct domain *d,
+                                         const cpumask_t *online,
+                                         struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
-- 
2.35.3




* [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
  2022-08-02 13:27 [PATCH 0/3] xen/sched: fix cpu hotplug Juergen Gross
  2022-08-02 13:27 ` [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity() Juergen Gross
@ 2022-08-02 13:27 ` Juergen Gross
  2022-08-03  9:25   ` Jan Beulich
  2022-08-02 13:36 ` [PATCH 3/3] xen/sched: fix cpu hotplug Juergen Gross
  2 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2022-08-02 13:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

In order to prepare for not allocating or freeing memory from
schedule_cpu_rm(), move this functionality to dedicated functions.

For now, call those functions from schedule_cpu_rm().

No change in behavior is expected.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c    | 133 +++++++++++++++++++++----------------
 xen/common/sched/private.h |   8 +++
 2 files changed, 85 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index c8d1034d3d..d6ff4f4921 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,6 +3190,66 @@ out:
     return ret;
 }
 
+static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    struct sched_resource *sr;
+    int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            for ( idx--; idx >= 0; idx-- )
+                sched_res_free(&data->sr[idx]->rcu);
+            xfree(data);
+            data = NULL;
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3198,53 +3258,22 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = schedule_cpu_rm_alloc(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
 
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
-
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3252,10 +3281,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3269,27 +3294,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3305,16 +3330,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    schedule_cpu_rm_free(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index de0cf63ce8..c626ad4907 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -598,6 +598,14 @@ struct affinity_masks {
     cpumask_var_t soft;
 };
 
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void domain_update_node_affinity_noalloc(struct domain *d,
                                          const cpumask_t *online,
                                          struct affinity_masks *affinity);
-- 
2.35.3




* [PATCH 3/3] xen/sched: fix cpu hotplug
  2022-08-02 13:27 [PATCH 0/3] xen/sched: fix cpu hotplug Juergen Gross
  2022-08-02 13:27 ` [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity() Juergen Gross
  2022-08-02 13:27 ` [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm() Juergen Gross
@ 2022-08-02 13:36 ` Juergen Gross
  2022-08-03  9:53   ` Jan Beulich
  2 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2022-08-02 13:36 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli, Gao Ruifeng

Cpu unplugging calls schedule_cpu_rm() via stop_machine_run() with
interrupts disabled, thus any memory allocation or freeing must be
avoided.

Since commit 5047cd1d5dea ("xen/common: Use enhanced
ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
via an assertion, which will now fail.

Before that commit cpu unplugging in normal configurations was working
just by chance as only the cpu performing schedule_cpu_rm() was doing
active work. With core scheduling enabled, however, failures could
result from memory allocations not being properly propagated to other
cpus' TLBs.

Fix this mess by allocating the needed memory before entering
stop_machine_run() and freeing any memory only after stop_machine_run()
has finished.

Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c    | 14 ++++---
 xen/common/sched/cpupool.c | 77 +++++++++++++++++++++++++++++---------
 xen/common/sched/private.h |  5 ++-
 3 files changed, 72 insertions(+), 24 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d6ff4f4921..1473cef372 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,7 +3190,7 @@ out:
     return ret;
 }
 
-static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
 {
     struct cpu_rm_data *data;
     struct sched_resource *sr;
@@ -3242,7 +3242,7 @@ static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
     return data;
 }
 
-static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
+void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
@@ -3256,17 +3256,18 @@ static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool freemem = !data;
 
-    data = schedule_cpu_rm_alloc(cpu);
+    if ( !data )
+        data = schedule_cpu_rm_alloc(cpu);
     if ( !data )
         return -ENOMEM;
 
@@ -3333,7 +3334,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    schedule_cpu_rm_free(data, cpu);
+    if ( freemem )
+        schedule_cpu_rm_free(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 1463dcd767..d9dadedea3 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
         return 0;
 
     free_cpumask_var(masks->hard);
+    memset(masks, 0, sizeof(*masks));
+
     return -ENOMEM;
 }
 
@@ -428,28 +430,34 @@ static void cpupool_free_affin_masks(struct affinity_masks *masks)
     free_cpumask_var(masks->hard);
 }
 
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
     const cpumask_t *online = c->res_valid;
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( cpupool_alloc_affin_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( cpupool_alloc_affin_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
     {
         if ( d->vcpu && d->vcpu[0] )
         {
-            cpumask_clear(masks.hard);
-            cpumask_clear(masks.soft);
-            domain_update_node_affinity_noalloc(d, online, &masks);
+            cpumask_clear(masks->hard);
+            cpumask_clear(masks->soft);
+            domain_update_node_affinity_noalloc(d, online, masks);
         }
     }
     rcu_read_unlock(&domlist_read_lock);
 
-    cpupool_free_affin_masks(&masks);
+    if ( masks == &local_masks )
+        cpupool_free_affin_masks(&local_masks);
 }
 
 /*
@@ -483,15 +491,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -514,7 +524,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -526,7 +536,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -590,7 +600,7 @@ static long cf_check cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -737,7 +747,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -745,7 +755,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -811,7 +821,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
 {
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
+    static struct cpu_rm_data *mem;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
+                    cpupool_free_affin_masks(&mem->affinity);
+                schedule_cpu_rm_free(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = schedule_cpu_rm_alloc(cpu);
+                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            cpupool_free_affin_masks(&mem->affinity);
+            schedule_cpu_rm_free(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index c626ad4907..f5bf41226c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,7 @@ struct affinity_masks {
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,7 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
+struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu);
+void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
-- 
2.35.3




* Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
  2022-08-02 13:27 ` [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity() Juergen Gross
@ 2022-08-03  7:50   ` Jan Beulich
  2022-08-03  8:01     ` Juergen Gross
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-08-03  7:50 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, Dario Faggioli, xen-devel

On 02.08.2022 15:27, Juergen Gross wrote:
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>      return ret;
>  }
>  
> -void domain_update_node_affinity(struct domain *d)
> +void domain_update_node_affinity_noalloc(struct domain *d,
> +                                         const cpumask_t *online,
> +                                         struct affinity_masks *affinity)
>  {
> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>      cpumask_t *dom_affinity;
> -    const cpumask_t *online;
>      struct sched_unit *unit;
>      unsigned int cpu;
>  
> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
> -    if ( !d->vcpu || !d->vcpu[0] )
> -        return;
> -
> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
> -        return;
> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
> -    {
> -        free_cpumask_var(dom_cpumask);
> -        return;
> -    }

Instead of splitting the function, did you consider using
cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
callers to pre-allocate the masks? Would imo be quite a bit less
code churn (I think).

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>      return ret;
>  }
>  
> +/* Update affinities of all domains in a cpupool. */
> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
> +{
> +    if ( !alloc_cpumask_var(&masks->hard) )
> +        return -ENOMEM;
> +    if ( alloc_cpumask_var(&masks->soft) )
> +        return 0;
> +
> +    free_cpumask_var(masks->hard);
> +    return -ENOMEM;
> +}

Wouldn't this be a nice general helper function, also usable from
outside of this CU?

As a nit - right now the only caller treats the return value as boolean,
so perhaps the function better would return bool?
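
E.g. (a rough sketch only):

    static bool cpupool_alloc_affin_masks(struct affinity_masks *masks)
    {
        if ( !alloc_cpumask_var(&masks->hard) )
            return false;
        if ( alloc_cpumask_var(&masks->soft) )
            return true;

        free_cpumask_var(masks->hard);
        return false;
    }

with the caller then checking for "false" rather than -ENOMEM.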

Jan



* Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
  2022-08-03  7:50   ` Jan Beulich
@ 2022-08-03  8:01     ` Juergen Gross
  2022-08-03  8:30       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2022-08-03  8:01 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Dario Faggioli, xen-devel



On 03.08.22 09:50, Jan Beulich wrote:
> On 02.08.2022 15:27, Juergen Gross wrote:
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>>       return ret;
>>   }
>>   
>> -void domain_update_node_affinity(struct domain *d)
>> +void domain_update_node_affinity_noalloc(struct domain *d,
>> +                                         const cpumask_t *online,
>> +                                         struct affinity_masks *affinity)
>>   {
>> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>>       cpumask_t *dom_affinity;
>> -    const cpumask_t *online;
>>       struct sched_unit *unit;
>>       unsigned int cpu;
>>   
>> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
>> -    if ( !d->vcpu || !d->vcpu[0] )
>> -        return;
>> -
>> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
>> -        return;
>> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
>> -    {
>> -        free_cpumask_var(dom_cpumask);
>> -        return;
>> -    }
> 
> Instead of splitting the function, did you consider using
> cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
> callers to pre-allocate the masks? Would imo be quite a bit less
> code churn (I think).

This would require changing all callers of domain_update_node_affinity()
to add the additional mask parameter. The now common/sched local struct
affinity_masks would then need to be made globally visible.

I'm not sure this is a good idea.

> 
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>>       return ret;
>>   }
>>   
>> +/* Update affinities of all domains in a cpupool. */
>> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>> +{
>> +    if ( !alloc_cpumask_var(&masks->hard) )
>> +        return -ENOMEM;
>> +    if ( alloc_cpumask_var(&masks->soft) )
>> +        return 0;
>> +
>> +    free_cpumask_var(masks->hard);
>> +    return -ENOMEM;
>> +}
> 
> Wouldn't this be a nice general helper function, also usable from
> outside of this CU?

I considered that, but wasn't sure this is really helpful. The only
potential other user would be domain_update_node_affinity(), requiring
to use the zalloc variant of the allocation in the helper (not that this
would be a major problem, though).

> As a nit - right now the only caller treats the return value as boolean,
> so perhaps the function better would return bool?

I can do that.


Juergen


* Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
  2022-08-03  8:01     ` Juergen Gross
@ 2022-08-03  8:30       ` Jan Beulich
  2022-08-03  8:40         ` Juergen Gross
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-08-03  8:30 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, Dario Faggioli, xen-devel

On 03.08.2022 10:01, Juergen Gross wrote:
> On 03.08.22 09:50, Jan Beulich wrote:
>> On 02.08.2022 15:27, Juergen Gross wrote:
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>>>       return ret;
>>>   }
>>>   
>>> -void domain_update_node_affinity(struct domain *d)
>>> +void domain_update_node_affinity_noalloc(struct domain *d,
>>> +                                         const cpumask_t *online,
>>> +                                         struct affinity_masks *affinity)
>>>   {
>>> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>>>       cpumask_t *dom_affinity;
>>> -    const cpumask_t *online;
>>>       struct sched_unit *unit;
>>>       unsigned int cpu;
>>>   
>>> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
>>> -    if ( !d->vcpu || !d->vcpu[0] )
>>> -        return;
>>> -
>>> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
>>> -        return;
>>> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
>>> -    {
>>> -        free_cpumask_var(dom_cpumask);
>>> -        return;
>>> -    }
>>
>> Instead of splitting the function, did you consider using
>> cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
>> callers to pre-allocate the masks? Would imo be quite a bit less
>> code churn (I think).
> 
> This would require changing all callers of domain_update_node_affinity()
> to add the additional mask parameter. The now common/sched local struct
> affinity_masks would then need to be made globally visible.
> 
> I'm not sure this is a good idea.

Hmm, I see there are quite a few callers (so there would be code churn
elsewhere). But I don't think the struct would need making globally
visible - the majority of callers could simply pass NULL, making the
function use a local instance of the struct instead. Personally I think
that would still be neater than having a _noalloc-suffixed variant of a
function (and specifically in this case also with an already long name).
But I guess this is then up to you / the scheduler maintainers.
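
(To illustrate - a rough sketch only, assuming cond_zalloc_cpumask_var()
allocates just when the mask is still NULL and merely clears it otherwise:)

    void domain_update_node_affinity(struct domain *d,
                                     struct affinity_masks *masks)
    {
        struct affinity_masks local = { };

        if ( !masks )
            masks = &local;

        if ( !cond_zalloc_cpumask_var(&masks->hard) ||
             !cond_zalloc_cpumask_var(&masks->soft) )
            goto out;

        /* ... existing body, operating on masks->hard / masks->soft ... */

     out:
        if ( masks == &local )
        {
            free_cpumask_var(local.soft);
            free_cpumask_var(local.hard);
        }
    }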

>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>>>       return ret;
>>>   }
>>>   
>>> +/* Update affinities of all domains in a cpupool. */
>>> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>>> +{
>>> +    if ( !alloc_cpumask_var(&masks->hard) )
>>> +        return -ENOMEM;
>>> +    if ( alloc_cpumask_var(&masks->soft) )
>>> +        return 0;
>>> +
>>> +    free_cpumask_var(masks->hard);
>>> +    return -ENOMEM;
>>> +}
>>
>> Wouldn't this be a nice general helper function, also usable from
>> outside of this CU?
> 
> I considered that, but wasn't sure this is really helpful. The only
> potential other user would be domain_update_node_affinity(), requiring
> to use the zalloc variant of the allocation in the helper (not that this
> would be a major problem, though).

I was actually thinking the other way around - the clearing of the masks
might better move into what is domain_update_node_affinity_noalloc() in
this version of the patch, so the helper could continue to use the non-
clearing allocations.

Jan



* Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
  2022-08-03  8:30       ` Jan Beulich
@ 2022-08-03  8:40         ` Juergen Gross
  0 siblings, 0 replies; 13+ messages in thread
From: Juergen Gross @ 2022-08-03  8:40 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Dario Faggioli, xen-devel



On 03.08.22 10:30, Jan Beulich wrote:
> On 03.08.2022 10:01, Juergen Gross wrote:
>> On 03.08.22 09:50, Jan Beulich wrote:
>>> On 02.08.2022 15:27, Juergen Gross wrote:
>>>> --- a/xen/common/sched/core.c
>>>> +++ b/xen/common/sched/core.c
>>>> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>>>>        return ret;
>>>>    }
>>>>    
>>>> -void domain_update_node_affinity(struct domain *d)
>>>> +void domain_update_node_affinity_noalloc(struct domain *d,
>>>> +                                         const cpumask_t *online,
>>>> +                                         struct affinity_masks *affinity)
>>>>    {
>>>> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>>>>        cpumask_t *dom_affinity;
>>>> -    const cpumask_t *online;
>>>>        struct sched_unit *unit;
>>>>        unsigned int cpu;
>>>>    
>>>> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
>>>> -    if ( !d->vcpu || !d->vcpu[0] )
>>>> -        return;
>>>> -
>>>> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
>>>> -        return;
>>>> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
>>>> -    {
>>>> -        free_cpumask_var(dom_cpumask);
>>>> -        return;
>>>> -    }
>>>
>>> Instead of splitting the function, did you consider using
>>> cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
>>> callers to pre-allocate the masks? Would imo be quite a bit less
>>> code churn (I think).
>>
>> This would require changing all callers of domain_update_node_affinity()
>> to add the additional mask parameter. The now common/sched local struct
>> affinity_masks would then need to be made globally visible.
>>
>> I'm not sure this is a good idea.
> 
> Hmm, I see there are quite a few callers (so there would be code churn
> elsewhere). But I don't think the struct would need making globally
> visible - the majority of callers could simply pass NULL, making the
> function use a local instance of the struct instead. Personally I think
> that would still be neater than having a _noalloc-suffixed variant of a
> function (and specifically in this case also with an already long name).

Hmm, true.

I could even rename the real function to domain_update_node_aff() and
add an inline domain_update_node_affinity() function adding the NULL
parameter.
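
(I.e., roughly - a sketch only:)

    /* Work function; NULL means "allocate the masks locally". */
    void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);

    static inline void domain_update_node_affinity(struct domain *d)
    {
        domain_update_node_aff(d, NULL);
    }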

> But I guess this is then up to you / the scheduler maintainers.
> 
>>>> --- a/xen/common/sched/cpupool.c
>>>> +++ b/xen/common/sched/cpupool.c
>>>> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>>>>        return ret;
>>>>    }
>>>>    
>>>> +/* Update affinities of all domains in a cpupool. */
>>>> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>>>> +{
>>>> +    if ( !alloc_cpumask_var(&masks->hard) )
>>>> +        return -ENOMEM;
>>>> +    if ( alloc_cpumask_var(&masks->soft) )
>>>> +        return 0;
>>>> +
>>>> +    free_cpumask_var(masks->hard);
>>>> +    return -ENOMEM;
>>>> +}
>>>
>>> Wouldn't this be a nice general helper function, also usable from
>>> outside of this CU?
>>
>> I considered that, but wasn't sure this is really helpful. The only
>> potential other user would be domain_update_node_affinity(), requiring
>> to use the zalloc variant of the allocation in the helper (not that this
>> would be a major problem, though).
> 
> I was actually thinking the other way around - the clearing of the masks
> might better move into what is domain_update_node_affinity_noalloc() in
> this version of the patch, so the helper could continue to use the non-
> clearing allocations.

I guess with cond_zalloc_cpumask_var() this would come for free.


Juergen


* Re: [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
  2022-08-02 13:27 ` [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm() Juergen Gross
@ 2022-08-03  9:25   ` Jan Beulich
  2022-08-08 10:04     ` Juergen Gross
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-08-03  9:25 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, Dario Faggioli, xen-devel

On 02.08.2022 15:27, Juergen Gross wrote:
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -3190,6 +3190,66 @@ out:
>      return ret;
>  }
>  
> +static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
> +{
> +    struct cpu_rm_data *data;
> +    struct sched_resource *sr;

const?

> +    int idx;

While code is supposedly only being moved, I still question this not
being "unsigned int", the more that sr->granularity is "unsigned int"
> as well. (Same then for the retained instance of the variable in the
original function.) Of course the loop in the error path then needs
writing differently.

> +    rcu_read_lock(&sched_res_rculock);
> +
> +    sr = get_sched_res(cpu);
> +    data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);

Afaict xmalloc_flex_struct() would do here, as you fill all fields.

> +    if ( !data )
> +        goto out;
> +
> +    data->old_ops = sr->scheduler;
> +    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
> +    data->ppriv_old = sr->sched_priv;

At least from an abstract perspective, doesn't reading fields from
sr require the RCU lock to be held continuously (i.e. not dropping
it at the end of this function and re-acquiring it in the caller)?

> +    for ( idx = 0; idx < sr->granularity - 1; idx++ )
> +    {
> +        data->sr[idx] = sched_alloc_res();
> +        if ( data->sr[idx] )
> +        {
> +            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
> +            if ( !data->sr[idx]->sched_unit_idle )
> +            {
> +                sched_res_free(&data->sr[idx]->rcu);
> +                data->sr[idx] = NULL;
> +            }
> +        }
> +        if ( !data->sr[idx] )
> +        {
> +            for ( idx--; idx >= 0; idx-- )
> +                sched_res_free(&data->sr[idx]->rcu);
> +            xfree(data);
> +            data = NULL;

XFREE()?

> @@ -3198,53 +3258,22 @@ out:
>   */
>  int schedule_cpu_rm(unsigned int cpu)
>  {
> -    void *ppriv_old, *vpriv_old;
> -    struct sched_resource *sr, **sr_new = NULL;
> +    struct sched_resource *sr;
> +    struct cpu_rm_data *data;
>      struct sched_unit *unit;
> -    struct scheduler *old_ops;
>      spinlock_t *old_lock;
>      unsigned long flags;
> -    int idx, ret = -ENOMEM;
> +    int idx = 0;
>      unsigned int cpu_iter;
>  
> +    data = schedule_cpu_rm_alloc(cpu);
> +    if ( !data )
> +        return -ENOMEM;
> +
>      rcu_read_lock(&sched_res_rculock);
>  
>      sr = get_sched_res(cpu);
> -    old_ops = sr->scheduler;
>  
> -    if ( sr->granularity > 1 )
> -    {

This conditional is lost afaict, resulting in potentially wrong behavior
in the new helper. Considering its purpose I expect there's a guarantee
that the field's value can never be zero, but then I guess an ASSERT()
would be nice next to the potentially problematic uses in the helper.

> --- a/xen/common/sched/private.h
> +++ b/xen/common/sched/private.h
> @@ -598,6 +598,14 @@ struct affinity_masks {
>      cpumask_var_t soft;
>  };
>  
> +/* Memory allocation related data for schedule_cpu_rm(). */
> +struct cpu_rm_data {
> +    struct scheduler *old_ops;

const?

Jan



* Re: [PATCH 3/3] xen/sched: fix cpu hotplug
  2022-08-02 13:36 ` [PATCH 3/3] xen/sched: fix cpu hotplug Juergen Gross
@ 2022-08-03  9:53   ` Jan Beulich
  2022-08-08 10:21     ` Juergen Gross
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2022-08-03  9:53 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, Dario Faggioli, Gao Ruifeng, xen-devel

On 02.08.2022 15:36, Juergen Gross wrote:
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>          return 0;
>  
>      free_cpumask_var(masks->hard);
> +    memset(masks, 0, sizeof(*masks));

FREE_CPUMASK_VAR()?

> @@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
>  {
>      unsigned int cpu = (unsigned long)hcpu;
>      int rc = 0;
> +    static struct cpu_rm_data *mem;

When you mentioned your plan, I was actually envisioning a slightly
different model: Instead of doing the allocation at CPU_DOWN_PREPARE,
allocate a single instance during boot, which would never be freed.
Did you consider such, and it turned out worse? I guess the main
obstacle would be figuring an upper bound for sr->granularity, but
of course schedule_cpu_rm_alloc(), besides the allocations, also
does quite a bit of filling in values, which can't be done up front.

>      switch ( action )
>      {
>      case CPU_DOWN_FAILED:
> +        if ( system_state <= SYS_STATE_active )
> +        {
> +            if ( mem )
> +            {
> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
> +                    cpupool_free_affin_masks(&mem->affinity);

I don't think the conditional is really needed - it merely avoids two
xfree(NULL) invocations at the expense of readability here. Plus -
wouldn't this better be part of ...

> +                schedule_cpu_rm_free(mem, cpu);

... this anyway?

> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>      case CPU_DOWN_PREPARE:
>          /* Suspend/Resume don't change assignments of cpus to cpupools. */
>          if ( system_state <= SYS_STATE_active )
> +        {
>              rc = cpupool_cpu_remove_prologue(cpu);
> +            if ( !rc )
> +            {
> +                ASSERT(!mem);
> +                mem = schedule_cpu_rm_alloc(cpu);
> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;

Ah - here you actually want a non-boolean return value. No need to
change that then in the earlier patch (albeit of course a change
there could be easily accommodated here).

Along the lines of the earlier comment this 2nd allocation may also
want to move into schedule_cpu_rm_alloc(). If other users of the
function don't need the extra allocations, perhaps by adding a bool
parameter.

Jan



* Re: [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
  2022-08-03  9:25   ` Jan Beulich
@ 2022-08-08 10:04     ` Juergen Gross
  0 siblings, 0 replies; 13+ messages in thread
From: Juergen Gross @ 2022-08-08 10:04 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Dario Faggioli, xen-devel



On 03.08.22 11:25, Jan Beulich wrote:
> On 02.08.2022 15:27, Juergen Gross wrote:
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -3190,6 +3190,66 @@ out:
>>       return ret;
>>   }
>>   
>> +static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
>> +{
>> +    struct cpu_rm_data *data;
>> +    struct sched_resource *sr;
> 
> const?

Yes.

> 
>> +    int idx;
> 
> While code is supposedly only being moved, I still question this not
> being "unsigned int", the more that sr->granularity is "unsigned int"
>> as well. (Same then for the retained instance of the variable in the
> original function.) Of course the loop in the error path then needs
> writing differently.

I considered that and didn't want to change the loop. OTOH this seems
to be rather trivial, so I can do the switch.

> 
>> +    rcu_read_lock(&sched_res_rculock);
>> +
>> +    sr = get_sched_res(cpu);
>> +    data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
> 
> Afaict xmalloc_flex_struct() would do here, as you fill all fields.

Okay.

> 
>> +    if ( !data )
>> +        goto out;
>> +
>> +    data->old_ops = sr->scheduler;
>> +    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
>> +    data->ppriv_old = sr->sched_priv;
> 
> At least from an abstract perspective, doesn't reading fields from
> sr require the RCU lock to be held continuously (i.e. not dropping
> it at the end of this function and re-acquiring it in the caller)?
> 
>> +    for ( idx = 0; idx < sr->granularity - 1; idx++ )
>> +    {
>> +        data->sr[idx] = sched_alloc_res();
>> +        if ( data->sr[idx] )
>> +        {
>> +            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
>> +            if ( !data->sr[idx]->sched_unit_idle )
>> +            {
>> +                sched_res_free(&data->sr[idx]->rcu);
>> +                data->sr[idx] = NULL;
>> +            }
>> +        }
>> +        if ( !data->sr[idx] )
>> +        {
>> +            for ( idx--; idx >= 0; idx-- )
>> +                sched_res_free(&data->sr[idx]->rcu);
>> +            xfree(data);
>> +            data = NULL;
> 
> XFREE()?

Oh, right. Forgot about that possibility.

> 
>> @@ -3198,53 +3258,22 @@ out:
>>    */
>>   int schedule_cpu_rm(unsigned int cpu)
>>   {
>> -    void *ppriv_old, *vpriv_old;
>> -    struct sched_resource *sr, **sr_new = NULL;
>> +    struct sched_resource *sr;
>> +    struct cpu_rm_data *data;
>>       struct sched_unit *unit;
>> -    struct scheduler *old_ops;
>>       spinlock_t *old_lock;
>>       unsigned long flags;
>> -    int idx, ret = -ENOMEM;
>> +    int idx = 0;
>>       unsigned int cpu_iter;
>>   
>> +    data = schedule_cpu_rm_alloc(cpu);
>> +    if ( !data )
>> +        return -ENOMEM;
>> +
>>       rcu_read_lock(&sched_res_rculock);
>>   
>>       sr = get_sched_res(cpu);
>> -    old_ops = sr->scheduler;
>>   
>> -    if ( sr->granularity > 1 )
>> -    {
> 
> This conditional is lost afaict, resulting in potentially wrong behavior
> in the new helper. Considering its purpose I expect there's a guarantee
> that the field's value can never be zero, but then I guess an ASSERT()
> would be nice next to the potentially problematic uses in the helper.

I'll add the ASSERT().

> 
>> --- a/xen/common/sched/private.h
>> +++ b/xen/common/sched/private.h
>> @@ -598,6 +598,14 @@ struct affinity_masks {
>>       cpumask_var_t soft;
>>   };
>>   
>> +/* Memory allocation related data for schedule_cpu_rm(). */
>> +struct cpu_rm_data {
>> +    struct scheduler *old_ops;
> 
> const?

Yes.


Juergen



* Re: [PATCH 3/3] xen/sched: fix cpu hotplug
  2022-08-03  9:53   ` Jan Beulich
@ 2022-08-08 10:21     ` Juergen Gross
  2022-08-09  6:15       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2022-08-08 10:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Dario Faggioli, Gao Ruifeng, xen-devel



On 03.08.22 11:53, Jan Beulich wrote:
> On 02.08.2022 15:36, Juergen Gross wrote:
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>>           return 0;
>>   
>>       free_cpumask_var(masks->hard);
>> +    memset(masks, 0, sizeof(*masks));
> 
> FREE_CPUMASK_VAR()?

Oh, yes.

> 
>> @@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
>>   {
>>       unsigned int cpu = (unsigned long)hcpu;
>>       int rc = 0;
>> +    static struct cpu_rm_data *mem;
> 
> When you mentioned your plan, I was actually envisioning a slightly
> different model: Instead of doing the allocation at CPU_DOWN_PREPARE,
> allocate a single instance during boot, which would never be freed.
> Did you consider such, and it turned out worse? I guess the main
> obstacle would be figuring an upper bound for sr->granularity, but
> of course schedule_cpu_rm_alloc(), besides the allocations, also
> does quite a bit of filling in values, which can't be done up front.

With sched-gran=socket sr->granularity can grow to above 100, so I'm
not sure we'd want to do that.

> 
>>       switch ( action )
>>       {
>>       case CPU_DOWN_FAILED:
>> +        if ( system_state <= SYS_STATE_active )
>> +        {
>> +            if ( mem )
>> +            {
>> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
>> +                    cpupool_free_affin_masks(&mem->affinity);
> 
> I don't think the conditional is really needed - it merely avoids two
> xfree(NULL) invocations at the expense of readability here. Plus -

Okay.

> wouldn't this better be part of ...
> 
>> +                schedule_cpu_rm_free(mem, cpu);
> 
> ... this anyway?

This would add a layering violation IMHO.

> 
>> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>>       case CPU_DOWN_PREPARE:
>>           /* Suspend/Resume don't change assignments of cpus to cpupools. */
>>           if ( system_state <= SYS_STATE_active )
>> +        {
>>               rc = cpupool_cpu_remove_prologue(cpu);
>> +            if ( !rc )
>> +            {
>> +                ASSERT(!mem);
>> +                mem = schedule_cpu_rm_alloc(cpu);
>> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
> 
> Ah - here you actually want a non-boolean return value. No need to
> change that then in the earlier patch (albeit of course a change
> there could be easily accommodated here).
> 
> Along the lines of the earlier comment this 2nd allocation may also
> want to move into schedule_cpu_rm_alloc(). If other users of the
> function don't need the extra allocations, perhaps by adding a bool
> parameter.

I could do that, but I still think this would pull cpupool specific needs
into sched/core.c.


Juergen


* Re: [PATCH 3/3] xen/sched: fix cpu hotplug
  2022-08-08 10:21     ` Juergen Gross
@ 2022-08-09  6:15       ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2022-08-09  6:15 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, Dario Faggioli, Gao Ruifeng, xen-devel

On 08.08.2022 12:21, Juergen Gross wrote:
> On 03.08.22 11:53, Jan Beulich wrote:
>> On 02.08.2022 15:36, Juergen Gross wrote:
>>>       switch ( action )
>>>       {
>>>       case CPU_DOWN_FAILED:
>>> +        if ( system_state <= SYS_STATE_active )
>>> +        {
>>> +            if ( mem )
>>> +            {
>>> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
>>> +                    cpupool_free_affin_masks(&mem->affinity);
>>
>> I don't think the conditional is really needed - it merely avoids two
>> xfree(NULL) invocations at the expense of readability here. Plus -
> 
> Okay.
> 
>> wouldn't this better be part of ...
>>
>>> +                schedule_cpu_rm_free(mem, cpu);
>>
>> ... this anyway?
> 
> This would add a layering violation IMHO.
> 
>>
>>> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>>>       case CPU_DOWN_PREPARE:
>>>           /* Suspend/Resume don't change assignments of cpus to cpupools. */
>>>           if ( system_state <= SYS_STATE_active )
>>> +        {
>>>               rc = cpupool_cpu_remove_prologue(cpu);
>>> +            if ( !rc )
>>> +            {
>>> +                ASSERT(!mem);
>>> +                mem = schedule_cpu_rm_alloc(cpu);
>>> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
>>
>> Ah - here you actually want a non-boolean return value. No need to
>> change that then in the earlier patch (albeit of course a change
>> there could be easily accommodated here).
>>
>> Along the lines of the earlier comment this 2nd allocation may also
>> want to move into schedule_cpu_rm_alloc(). If other users of the
>> function don't need the extra allocations, perhaps by adding a bool
>> parameter.
> 
> I could do that, but I still think this would pull cpupool specific needs
> into sched/core.c.

But the struct isn't cpupool specific, and hence controlling the setting up
of the field via a function parameter doesn't really look like a layering
violation to me. While imo the end result would be more clean (as in - all
allocations / freeing in one place), I'm not going to insist (not the least
because I'm not maintainer of that code anyway).
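
(A sketch of that shape - illustration only; alloc_affinity_masks() here
stands in for a generally visible, bool-returning variant of
cpupool_alloc_affin_masks():)

    struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu, bool aff_alloc)
    {
        struct cpu_rm_data *data;
        const struct sched_resource *sr;

        rcu_read_lock(&sched_res_rculock);

        sr = get_sched_res(cpu);
        data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
        if ( !data )
            goto out;

        if ( aff_alloc && !alloc_affinity_masks(&data->affinity) )
        {
            XFREE(data);
            goto out;
        }

        /* ... remaining per-resource allocations and initialization as before ... */

     out:
        rcu_read_unlock(&sched_res_rculock);

        return data;
    }

schedule_cpu_rm_free() would then free the masks (where allocated) before
the final xfree().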

Jan

