From: Juergen Gross <jgross@suse.com> To: xen-devel@lists.xenproject.org Cc: Juergen Gross <jgross@suse.com>, Tim Deegan <tim@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, George Dunlap <george.dunlap@eu.citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, Robert VanVossen <robert.vanvossen@dornerworks.com>, Dario Faggioli <dfaggioli@suse.com>, Julien Grall <julien.grall@arm.com>, Josh Whitehead <josh.whitehead@dornerworks.com>, Meng Xu <mengxu@cis.upenn.edu>, Jan Beulich <jbeulich@suse.com> Subject: [PATCH 27/60] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers Date: Tue, 28 May 2019 12:32:40 +0200 [thread overview] Message-ID: <20190528103313.1343-28-jgross@suse.com> (raw) In-Reply-To: <20190528103313.1343-1-jgross@suse.com> Especially in the do_schedule() functions of the different schedulers using smp_processor_id() for the local cpu number is correct only if the sched_unit is a single vcpu. As soon as larger sched_units are used most uses should be replaced by the cpu number of the local sched_resource instead. Add a helper to get that sched_resource cpu and modify the schedulers to use it in a correct way. Signed-off-by: Juergen Gross <jgross@suse.com> --- xen/common/sched_arinc653.c | 2 +- xen/common/sched_credit.c | 21 +++++++++--------- xen/common/sched_credit2.c | 53 +++++++++++++++++++++++---------------------- xen/common/sched_null.c | 17 ++++++++------- xen/common/sched_rt.c | 17 ++++++++------- xen/include/xen/sched-if.h | 5 +++++ 6 files changed, 62 insertions(+), 53 deletions(-) diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c index 213bc960ef..e48f2b2eb9 100644 --- a/xen/common/sched_arinc653.c +++ b/xen/common/sched_arinc653.c @@ -513,7 +513,7 @@ a653sched_do_schedule( static unsigned int sched_index = 0; static s_time_t next_switch_time; a653sched_priv_t *sched_priv = SCHED_PRIV(ops); - const unsigned int cpu = smp_processor_id(); + const unsigned int cpu = sched_get_resource_cpu(smp_processor_id()); unsigned long flags; spin_lock_irqsave(&sched_priv->lock, flags); diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c index 05df9e54ac..e67b9bb23e 100644 --- a/xen/common/sched_credit.c +++ b/xen/common/sched_credit.c @@ -1690,7 +1690,7 @@ csched_load_balance(struct csched_private *prv, int cpu, int peer_cpu, first_cpu, peer_node, bstep; int node = cpu_to_node(cpu); - BUG_ON( cpu != sched_unit_cpu(snext->unit) ); + BUG_ON( sched_get_resource_cpu(cpu) != sched_unit_cpu(snext->unit) ); online = cpupool_online_cpumask(c); /* @@ -1831,8 +1831,9 @@ static struct task_slice csched_schedule( const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled) { - const int cpu = smp_processor_id(); - struct list_head * const runq = RUNQ(cpu); + const unsigned int cpu = smp_processor_id(); + const unsigned int sched_cpu = sched_get_resource_cpu(cpu); + struct list_head * const runq = RUNQ(sched_cpu); struct sched_unit *unit = current->sched_unit; struct csched_unit * const scurr = CSCHED_UNIT(unit); struct csched_private *prv = CSCHED_PRIV(ops); @@ -1942,7 +1943,7 @@ csched_schedule( { BUG_ON( is_idle_unit(unit) || list_empty(runq) ); /* Current has blocked. Update the runnable counter for this cpu. 
*/ - dec_nr_runnable(cpu); + dec_nr_runnable(sched_cpu); } snext = __runq_elem(runq->next); @@ -1952,7 +1953,7 @@ csched_schedule( if ( tasklet_work_scheduled ) { TRACE_0D(TRC_CSCHED_SCHED_TASKLET); - snext = CSCHED_UNIT(sched_idle_unit(cpu)); + snext = CSCHED_UNIT(sched_idle_unit(sched_cpu)); snext->pri = CSCHED_PRI_TS_BOOST; } @@ -1972,7 +1973,7 @@ csched_schedule( if ( snext->pri > CSCHED_PRI_TS_OVER ) __runq_remove(snext); else - snext = csched_load_balance(prv, cpu, snext, &ret.migrated); + snext = csched_load_balance(prv, sched_cpu, snext, &ret.migrated); /* * Update idlers mask if necessary. When we're idling, other CPUs @@ -1980,12 +1981,12 @@ csched_schedule( */ if ( !tasklet_work_scheduled && snext->pri == CSCHED_PRI_IDLE ) { - if ( !cpumask_test_cpu(cpu, prv->idlers) ) - cpumask_set_cpu(cpu, prv->idlers); + if ( !cpumask_test_cpu(sched_cpu, prv->idlers) ) + cpumask_set_cpu(sched_cpu, prv->idlers); } - else if ( cpumask_test_cpu(cpu, prv->idlers) ) + else if ( cpumask_test_cpu(sched_cpu, prv->idlers) ) { - cpumask_clear_cpu(cpu, prv->idlers); + cpumask_clear_cpu(sched_cpu, prv->idlers); } if ( !is_idle_unit(snext->unit) ) diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c index 0f57135e81..cf502b3cbc 100644 --- a/xen/common/sched_credit2.c +++ b/xen/common/sched_credit2.c @@ -3447,7 +3447,8 @@ static struct task_slice csched2_schedule( const struct scheduler *ops, s_time_t now, bool tasklet_work_scheduled) { - const int cpu = smp_processor_id(); + const unsigned int cpu = smp_processor_id(); + const unsigned int sched_cpu = sched_get_resource_cpu(cpu); struct csched2_runqueue_data *rqd; struct sched_unit *currunit = current->sched_unit; struct csched2_unit * const scurr = csched2_unit(currunit); @@ -3459,22 +3460,22 @@ csched2_schedule( SCHED_STAT_CRANK(schedule); CSCHED2_UNIT_CHECK(currunit); - BUG_ON(!cpumask_test_cpu(cpu, &csched2_priv(ops)->initialized)); + BUG_ON(!cpumask_test_cpu(sched_cpu, &csched2_priv(ops)->initialized)); - rqd = c2rqd(ops, cpu); - BUG_ON(!cpumask_test_cpu(cpu, &rqd->active)); + rqd = c2rqd(ops, sched_cpu); + BUG_ON(!cpumask_test_cpu(sched_cpu, &rqd->active)); - ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock)); + ASSERT(spin_is_locked(get_sched_res(sched_cpu)->schedule_lock)); BUG_ON(!is_idle_unit(currunit) && scurr->rqd != rqd); /* Clear "tickled" bit now that we've been scheduled */ - tickled = cpumask_test_cpu(cpu, &rqd->tickled); + tickled = cpumask_test_cpu(sched_cpu, &rqd->tickled); if ( tickled ) { - __cpumask_clear_cpu(cpu, &rqd->tickled); + __cpumask_clear_cpu(sched_cpu, &rqd->tickled); cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled); - smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle); + smt_idle_mask_set(sched_cpu, cpumask_scratch, &rqd->smt_idle); } if ( unlikely(tb_init_done) ) @@ -3484,10 +3485,10 @@ csched2_schedule( unsigned tasklet:8, idle:8, smt_idle:8, tickled:8; } d; d.cpu = cpu; - d.rq_id = c2r(cpu); + d.rq_id = c2r(sched_cpu); d.tasklet = tasklet_work_scheduled; d.idle = is_idle_unit(currunit); - d.smt_idle = cpumask_test_cpu(cpu, &rqd->smt_idle); + d.smt_idle = cpumask_test_cpu(sched_cpu, &rqd->smt_idle); d.tickled = tickled; __trace_var(TRC_CSCHED2_SCHEDULE, 1, sizeof(d), @@ -3527,10 +3528,10 @@ csched2_schedule( { __clear_bit(__CSFLAG_unit_yield, &scurr->flags); trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL); - snext = csched2_unit(sched_idle_unit(cpu)); + snext = csched2_unit(sched_idle_unit(sched_cpu)); } else - snext = runq_candidate(rqd, scurr, cpu, now, &skipped_units); + snext = 
runq_candidate(rqd, scurr, sched_cpu, now, &skipped_units); /* If switching from a non-idle runnable unit, put it * back on the runqueue. */ @@ -3555,10 +3556,10 @@ csched2_schedule( } /* Clear the idle mask if necessary */ - if ( cpumask_test_cpu(cpu, &rqd->idle) ) + if ( cpumask_test_cpu(sched_cpu, &rqd->idle) ) { - __cpumask_clear_cpu(cpu, &rqd->idle); - smt_idle_mask_clear(cpu, &rqd->smt_idle); + __cpumask_clear_cpu(sched_cpu, &rqd->idle); + smt_idle_mask_clear(sched_cpu, &rqd->smt_idle); } /* @@ -3577,18 +3578,18 @@ csched2_schedule( */ if ( skipped_units == 0 && snext->credit <= CSCHED2_CREDIT_RESET ) { - reset_credit(ops, cpu, now, snext); - balance_load(ops, cpu, now); + reset_credit(ops, sched_cpu, now, snext); + balance_load(ops, sched_cpu, now); } snext->start_time = now; snext->tickled_cpu = -1; /* Safe because lock for old processor is held */ - if ( sched_unit_cpu(snext->unit) != cpu ) + if ( sched_unit_cpu(snext->unit) != sched_cpu ) { snext->credit += CSCHED2_MIGRATE_COMPENSATION; - sched_set_res(snext->unit, get_sched_res(cpu)); + sched_set_res(snext->unit, get_sched_res(sched_cpu)); SCHED_STAT_CRANK(migrated); ret.migrated = 1; } @@ -3601,17 +3602,17 @@ csched2_schedule( */ if ( tasklet_work_scheduled ) { - if ( cpumask_test_cpu(cpu, &rqd->idle) ) + if ( cpumask_test_cpu(sched_cpu, &rqd->idle) ) { - __cpumask_clear_cpu(cpu, &rqd->idle); - smt_idle_mask_clear(cpu, &rqd->smt_idle); + __cpumask_clear_cpu(sched_cpu, &rqd->idle); + smt_idle_mask_clear(sched_cpu, &rqd->smt_idle); } } - else if ( !cpumask_test_cpu(cpu, &rqd->idle) ) + else if ( !cpumask_test_cpu(sched_cpu, &rqd->idle) ) { - __cpumask_set_cpu(cpu, &rqd->idle); + __cpumask_set_cpu(sched_cpu, &rqd->idle); cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled); - smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle); + smt_idle_mask_set(sched_cpu, cpumask_scratch, &rqd->smt_idle); } /* Make sure avgload gets updated periodically even * if there's no activity */ @@ -3621,7 +3622,7 @@ csched2_schedule( /* * Return task to run next... 
*/ - ret.time = csched2_runtime(ops, cpu, snext, now); + ret.time = csched2_runtime(ops, sched_cpu, snext, now); ret.task = snext->unit; CSCHED2_UNIT_CHECK(ret.task); diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c index 53a20d73aa..6c350a342b 100644 --- a/xen/common/sched_null.c +++ b/xen/common/sched_null.c @@ -701,6 +701,7 @@ static struct task_slice null_schedule(const struct scheduler *ops, { unsigned int bs; const unsigned int cpu = smp_processor_id(); + const unsigned int sched_cpu = sched_get_resource_cpu(cpu); struct null_private *prv = null_priv(ops); struct null_unit *wvc; struct task_slice ret; @@ -716,14 +717,14 @@ static struct task_slice null_schedule(const struct scheduler *ops, } d; d.cpu = cpu; d.tasklet = tasklet_work_scheduled; - if ( per_cpu(npc, cpu).unit == NULL ) + if ( per_cpu(npc, sched_cpu).unit == NULL ) { d.unit = d.dom = -1; } else { - d.unit = per_cpu(npc, cpu).unit->unit_id; - d.dom = per_cpu(npc, cpu).unit->domain->domain_id; + d.unit = per_cpu(npc, sched_cpu).unit->unit_id; + d.dom = per_cpu(npc, sched_cpu).unit->domain->domain_id; } __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d); } @@ -731,10 +732,10 @@ static struct task_slice null_schedule(const struct scheduler *ops, if ( tasklet_work_scheduled ) { trace_var(TRC_SNULL_TASKLET, 1, 0, NULL); - ret.task = sched_idle_unit(cpu); + ret.task = sched_idle_unit(sched_cpu); } else - ret.task = per_cpu(npc, cpu).unit; + ret.task = per_cpu(npc, sched_cpu).unit; ret.migrated = 0; ret.time = -1; @@ -765,9 +766,9 @@ static struct task_slice null_schedule(const struct scheduler *ops, !has_soft_affinity(wvc->unit) ) continue; - if ( unit_check_affinity(wvc->unit, cpu, bs) ) + if ( unit_check_affinity(wvc->unit, sched_cpu, bs) ) { - unit_assign(prv, wvc->unit, cpu); + unit_assign(prv, wvc->unit, sched_cpu); list_del_init(&wvc->waitq_elem); ret.task = wvc->unit; goto unlock; @@ -779,7 +780,7 @@ static struct task_slice null_schedule(const struct scheduler *ops, } if ( unlikely(ret.task == NULL || !unit_runnable(ret.task)) ) - ret.task = sched_idle_unit(cpu); + ret.task = sched_idle_unit(sched_cpu); NULL_UNIT_CHECK(ret.task); return ret; diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c index 57883bd0a0..ddd1badd38 100644 --- a/xen/common/sched_rt.c +++ b/xen/common/sched_rt.c @@ -1058,7 +1058,8 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask) static struct task_slice rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled) { - const int cpu = smp_processor_id(); + const unsigned int cpu = smp_processor_id(); + const unsigned int sched_cpu = sched_get_resource_cpu(cpu); struct rt_private *prv = rt_priv(ops); struct rt_unit *const scurr = rt_unit(current->sched_unit); struct rt_unit *snext = NULL; @@ -1072,7 +1073,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched } d; d.cpu = cpu; d.tasklet = tasklet_work_scheduled; - d.tickled = cpumask_test_cpu(cpu, &prv->tickled); + d.tickled = cpumask_test_cpu(sched_cpu, &prv->tickled); d.idle = is_idle_unit(currunit); trace_var(TRC_RTDS_SCHEDULE, 1, sizeof(d), @@ -1080,7 +1081,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched } /* clear ticked bit now that we've been scheduled */ - cpumask_clear_cpu(cpu, &prv->tickled); + cpumask_clear_cpu(sched_cpu, &prv->tickled); /* burn_budget would return for IDLE UNIT */ burn_budget(ops, scurr, now); @@ -1088,13 +1089,13 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched 
if ( tasklet_work_scheduled ) { trace_var(TRC_RTDS_SCHED_TASKLET, 1, 0, NULL); - snext = rt_unit(sched_idle_unit(cpu)); + snext = rt_unit(sched_idle_unit(sched_cpu)); } else { - snext = runq_pick(ops, cpumask_of(cpu)); + snext = runq_pick(ops, cpumask_of(sched_cpu)); if ( snext == NULL ) - snext = rt_unit(sched_idle_unit(cpu)); + snext = rt_unit(sched_idle_unit(sched_cpu)); /* if scurr has higher priority and budget, still pick scurr */ if ( !is_idle_unit(currunit) && @@ -1119,9 +1120,9 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched q_remove(snext); __set_bit(__RTDS_scheduled, &snext->flags); } - if ( sched_unit_cpu(snext->unit) != cpu ) + if ( sched_unit_cpu(snext->unit) != sched_cpu ) { - sched_set_res(snext->unit, get_sched_res(cpu)); + sched_set_res(snext->unit, get_sched_res(sched_cpu)); ret.migrated = 1; } ret.time = snext->cur_budget; /* invoke the scheduler next time */ diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h index 2a8789b438..8b349069db 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/include/xen/sched-if.h @@ -110,6 +110,11 @@ static inline struct sched_unit *sched_idle_unit(unsigned int cpu) return idle_vcpu[cpu]->sched_unit; } +static inline unsigned int sched_get_resource_cpu(unsigned int cpu) +{ + return get_sched_res(cpu)->processor; +} + /* * Scratch space, for avoiding having too many cpumask_t on the stack. * Within each scheduler, when using the scratch mask of one pCPU: -- 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
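For readers skimming the diff above, this is the pattern every do_schedule() implementation is converted to: keep the physical CPU number (still used for trace records) but do all runqueue, idle-mask and lock bookkeeping on the CPU of the local sched_resource. The following is a minimal sketch, not part of the patch; example_schedule() and example_pick_unit() are hypothetical stand-ins for a real scheduler's hook and runqueue selection:

static struct task_slice
example_schedule(const struct scheduler *ops, s_time_t now,
                 bool_t tasklet_work_scheduled)
{
    /* Physical CPU we are executing on; kept for tracing only. */
    const unsigned int cpu = smp_processor_id();
    /*
     * CPU identifying the local sched_resource. With core scheduling
     * enabled this may differ from 'cpu', so all scheduler state must
     * be indexed by it instead of by smp_processor_id().
     */
    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
    struct task_slice ret = { .migrated = 0, .time = -1 };

    if ( tasklet_work_scheduled )
        /* Tasklet work pending: schedule the idle unit of this resource. */
        ret.task = sched_idle_unit(sched_cpu);
    else
        /* Hypothetical per-resource runqueue lookup. */
        ret.task = example_pick_unit(ops, sched_cpu);

    return ret;
}

Here sched_get_resource_cpu() is the helper this patch adds to xen/include/xen/sched-if.h; it simply returns get_sched_res(cpu)->processor.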