From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, Tim Deegan <tim@xen.org>,
Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Robert VanVossen <robert.vanvossen@dornerworks.com>,
Dario Faggioli <dfaggioli@suse.com>,
Julien Grall <julien.grall@arm.com>,
Josh Whitehead <josh.whitehead@dornerworks.com>,
Meng Xu <mengxu@cis.upenn.edu>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 01/48] xen/sched: use new sched_unit instead of vcpu in scheduler interfaces
Date: Fri, 9 Aug 2019 16:57:46 +0200 [thread overview]
Message-ID: <20190809145833.1020-2-jgross@suse.com> (raw)
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
In order to prepare for core- and socket-scheduling, use a new struct
sched_unit instead of struct vcpu in the interfaces of the individual
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the changed parameter type.
In the schedulers, rename the local functions that were switched to
sched_unit accordingly.
For now the new struct contains only a domain pointer, a vcpu pointer
and a unit_id; it is allocated at vcpu creation time.
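The shape of the new struct and its allocation (done in sched_init_vcpu(),
see the schedule.c hunk below) can be sketched as follows. This is a
minimal userspace mock-up for illustration only: the struct vcpu here is
a stand-in with just the fields touched by this patch, and calloc() stands
in for Xen's xzalloc().

```c
#include <assert.h>
#include <stdlib.h>

struct domain;

/* Stand-in for the real struct vcpu; only the fields relevant here. */
struct vcpu {
    unsigned int vcpu_id;
    struct domain *domain;
    struct sched_unit *sched_unit;
};

/* The new wrapper: for now just a domain pointer, the vcpu and an id. */
struct sched_unit {
    struct domain *domain;
    struct vcpu *vcpu_list;
    unsigned int unit_id;
};

/* Mirrors the allocation added to sched_init_vcpu(): one unit per vcpu,
 * created at vcpu creation time, with unit_id taken from the vcpu id. */
static int sched_unit_attach(struct vcpu *v)
{
    struct sched_unit *unit = calloc(1, sizeof(*unit));

    if ( unit == NULL )
        return 1;
    v->sched_unit = unit;
    unit->vcpu_list = v;
    unit->unit_id = v->vcpu_id;
    unit->domain = v->domain;
    return 0;
}
```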
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
RFC V2:
- move definition of struct sched_unit to sched.h (Andrew Cooper)
V1:
- rename "item" to "unit" (George Dunlap)
V2:
- rename unit->vcpu to unit->vcpu_list (Jan Beulich)
- merge patch with next one in series (Dario Faggioli)
- merge patch introducing domain pointer in sched_unit into this one
(Jan Beulich)
- merge patch introducing unit_id into this one
---
xen/common/sched_arinc653.c | 30 +++++++++------
xen/common/sched_credit.c | 41 ++++++++++++--------
xen/common/sched_credit2.c | 57 ++++++++++++++++------------
xen/common/sched_null.c | 37 +++++++++++-------
xen/common/sched_rt.c | 33 +++++++++-------
xen/common/schedule.c | 54 +++++++++++++++++---------
xen/include/xen/sched-if.h | 92 ++++++++++++++++++++++++++-------------------
xen/include/xen/sched.h | 8 ++++
8 files changed, 219 insertions(+), 133 deletions(-)
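The bulk of the diff applies one mechanical pattern: each scheduler
callback's struct vcpu * parameter becomes a struct sched_unit *, and the
vcpu is recovered from unit->vcpu_list as the first statement of the body.
A minimal sketch of that pattern, with mocked-up types (the "awake" field
is a stand-in, not a real struct vcpu member):

```c
#include <assert.h>
#include <stddef.h>

/* Mock-ups standing in for the real Xen types. */
struct scheduler;
struct vcpu { int awake; };
struct sched_unit { struct vcpu *vcpu_list; };

/* Before: void sched_sleep_cb(const struct scheduler *ops, struct vcpu *vc);
 * After: the unit is passed, and the vcpu is fetched from it. */
static void sched_sleep_cb(const struct scheduler *ops,
                           struct sched_unit *unit)
{
    struct vcpu *vc = unit->vcpu_list;

    if ( vc != NULL )
        vc->awake = 0;
}
```

Callers in schedule.c correspondingly pass v->sched_unit instead of v.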
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 72b988ea5f..2059314791 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -376,13 +376,16 @@ a653sched_deinit(struct scheduler *ops)
* This function allocates scheduler-specific data for a VCPU
*
* @param ops Pointer to this instance of the scheduler structure
+ * @param unit Pointer to struct sched_unit
*
* @return Pointer to the allocated data
*/
static void *
-a653sched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+a653sched_alloc_vdata(const struct scheduler *ops, struct sched_unit *unit,
+ void *dd)
{
a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
+ struct vcpu *vc = unit->vcpu_list;
arinc653_vcpu_t *svc;
unsigned int entry;
unsigned long flags;
@@ -458,11 +461,13 @@ a653sched_free_vdata(const struct scheduler *ops, void *priv)
* Xen scheduler callback function to sleep a VCPU
*
* @param ops Pointer to this instance of the scheduler structure
- * @param vc Pointer to the VCPU structure for the current domain
+ * @param unit Pointer to struct sched_unit
*/
static void
-a653sched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
+
if ( AVCPU(vc) != NULL )
AVCPU(vc)->awake = 0;
@@ -478,11 +483,13 @@ a653sched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
* Xen scheduler callback function to wake up a VCPU
*
* @param ops Pointer to this instance of the scheduler structure
- * @param vc Pointer to the VCPU structure for the current domain
+ * @param unit Pointer to struct sched_unit
*/
static void
-a653sched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
+
if ( AVCPU(vc) != NULL )
AVCPU(vc)->awake = 1;
@@ -597,13 +604,14 @@ a653sched_do_schedule(
* Xen scheduler callback function to select a CPU for the VCPU to run on
*
* @param ops Pointer to this instance of the scheduler structure
- * @param v Pointer to the VCPU structure for the current domain
+ * @param unit Pointer to struct sched_unit
*
* @return Number of selected physical CPU
*/
static int
-a653sched_pick_cpu(const struct scheduler *ops, struct vcpu *vc)
+a653sched_pick_cpu(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
cpumask_t *online;
unsigned int cpu;
@@ -702,11 +710,11 @@ static const struct scheduler sched_arinc653_def = {
.free_vdata = a653sched_free_vdata,
.alloc_vdata = a653sched_alloc_vdata,
- .insert_vcpu = NULL,
- .remove_vcpu = NULL,
+ .insert_unit = NULL,
+ .remove_unit = NULL,
- .sleep = a653sched_vcpu_sleep,
- .wake = a653sched_vcpu_wake,
+ .sleep = a653sched_unit_sleep,
+ .wake = a653sched_unit_wake,
.yield = NULL,
.context_saved = NULL,
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 70fe718127..464194a578 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -854,15 +854,16 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
}
static int
-csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+csched_cpu_pick(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu *svc = CSCHED_VCPU(vc);
/*
* We have been called by vcpu_migrate() (in schedule.c), as part
* of the process of seeing if vc can be migrated to another pcpu.
* We make a note about this in svc->flags so that later, in
- * csched_vcpu_wake() (still called from vcpu_migrate()) we won't
+ * csched_unit_wake() (still called from vcpu_migrate()) we won't
* get boosted, which we don't deserve as we are "only" migrating.
*/
set_bit(CSCHED_FLAG_VCPU_MIGRATING, &svc->flags);
@@ -990,8 +991,10 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
}
static void *
-csched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+csched_alloc_vdata(const struct scheduler *ops, struct sched_unit *unit,
+ void *dd)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu *svc;
/* Allocate per-VCPU info */
@@ -1011,8 +1014,9 @@ csched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
}
static void
-csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu *svc = vc->sched_priv;
spinlock_t *lock;
@@ -1021,7 +1025,7 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
/* csched_cpu_pick() looks in vc->processor's runq, so we need the lock. */
lock = vcpu_schedule_lock_irq(vc);
- vc->processor = csched_cpu_pick(ops, vc);
+ vc->processor = csched_cpu_pick(ops, unit);
spin_unlock_irq(lock);
@@ -1046,9 +1050,10 @@ csched_free_vdata(const struct scheduler *ops, void *priv)
}
static void
-csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
{
struct csched_private *prv = CSCHED_PRIV(ops);
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu * const svc = CSCHED_VCPU(vc);
struct csched_dom * const sdom = svc->sdom;
@@ -1073,8 +1078,9 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
}
static void
-csched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu * const svc = CSCHED_VCPU(vc);
unsigned int cpu = vc->processor;
@@ -1097,8 +1103,9 @@ csched_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
}
static void
-csched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu * const svc = CSCHED_VCPU(vc);
bool_t migrating;
@@ -1158,8 +1165,9 @@ csched_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
}
static void
-csched_vcpu_yield(const struct scheduler *ops, struct vcpu *vc)
+csched_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched_vcpu * const svc = CSCHED_VCPU(vc);
/* Let the scheduler know that this vcpu is trying to yield */
@@ -1212,9 +1220,10 @@ csched_dom_cntl(
}
static void
-csched_aff_cntl(const struct scheduler *ops, struct vcpu *v,
+csched_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
const cpumask_t *hard, const cpumask_t *soft)
{
+ struct vcpu *v = unit->vcpu_list;
struct csched_vcpu *svc = CSCHED_VCPU(v);
if ( !hard )
@@ -1743,7 +1752,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
* - if we race with inc_nr_runnable(), we skip a pCPU that may
* have runnable vCPUs in its runqueue, but that's not a
* problem because:
- * + if racing with csched_vcpu_insert() or csched_vcpu_wake(),
+ * + if racing with csched_unit_insert() or csched_unit_wake(),
* __runq_tickle() will be called afterwords, so the vCPU
* won't get stuck in the runqueue for too long;
* + if racing with csched_runq_steal(), it may be that a
@@ -2256,12 +2265,12 @@ static const struct scheduler sched_credit_def = {
.global_init = csched_global_init,
- .insert_vcpu = csched_vcpu_insert,
- .remove_vcpu = csched_vcpu_remove,
+ .insert_unit = csched_unit_insert,
+ .remove_unit = csched_unit_remove,
- .sleep = csched_vcpu_sleep,
- .wake = csched_vcpu_wake,
- .yield = csched_vcpu_yield,
+ .sleep = csched_unit_sleep,
+ .wake = csched_unit_wake,
+ .yield = csched_unit_yield,
.adjust = csched_dom_cntl,
.adjust_affinity= csched_aff_cntl,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 6b77da7476..2120da6f98 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -273,7 +273,7 @@
* CSFLAG_delayed_runq_add: Do we need to add this to the runqueue once it'd done
* being context switched out?
* + Set when scheduling out in csched2_schedule() if prev is runnable
- * + Set in csched2_vcpu_wake if it finds CSFLAG_scheduled set
+ * + Set in csched2_unit_wake if it finds CSFLAG_scheduled set
* + Read in csched2_context_saved(). If set, it adds prev to the runqueue and
* clears the bit.
*/
@@ -624,14 +624,14 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
* This logic is entirely implemented in runq_tickle(), and that is enough.
* In fact, in this scheduler, placement of a vcpu on one of the pcpus of a
* runq, _always_ happens by means of tickling:
- * - when a vcpu wakes up, it calls csched2_vcpu_wake(), which calls
+ * - when a vcpu wakes up, it calls csched2_unit_wake(), which calls
* runq_tickle();
* - when a migration is initiated in schedule.c, we call csched2_cpu_pick(),
- * csched2_vcpu_migrate() (which calls migrate()) and csched2_vcpu_wake().
+ * csched2_unit_migrate() (which calls migrate()) and csched2_unit_wake().
* csched2_cpu_pick() looks for the least loaded runq and return just any
- * of its processors. Then, csched2_vcpu_migrate() just moves the vcpu to
+ * of its processors. Then, csched2_unit_migrate() just moves the vcpu to
* the chosen runq, and it is again runq_tickle(), called by
- * csched2_vcpu_wake() that actually decides what pcpu to use within the
+ * csched2_unit_wake() that actually decides what pcpu to use within the
* chosen runq;
* - when a migration is initiated in sched_credit2.c, by calling migrate()
* directly, that again temporarily use a random pcpu from the new runq,
@@ -2027,8 +2027,10 @@ csched2_vcpu_check(struct vcpu *vc)
#endif
static void *
-csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+csched2_alloc_vdata(const struct scheduler *ops, struct sched_unit *unit,
+ void *dd)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu *svc;
/* Allocate per-VCPU info */
@@ -2070,8 +2072,9 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
}
static void
-csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu * const svc = csched2_vcpu(vc);
ASSERT(!is_idle_vcpu(vc));
@@ -2092,8 +2095,9 @@ csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
}
static void
-csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu * const svc = csched2_vcpu(vc);
unsigned int cpu = vc->processor;
s_time_t now;
@@ -2147,16 +2151,18 @@ out:
}
static void
-csched2_vcpu_yield(const struct scheduler *ops, struct vcpu *v)
+csched2_unit_yield(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
struct csched2_vcpu * const svc = csched2_vcpu(v);
__set_bit(__CSFLAG_vcpu_yield, &svc->flags);
}
static void
-csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
+csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu * const svc = csched2_vcpu(vc);
spinlock_t *lock = vcpu_schedule_lock_irq(vc);
s_time_t now = NOW();
@@ -2197,9 +2203,10 @@ csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
#define MAX_LOAD (STIME_MAX)
static int
-csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+csched2_cpu_pick(const struct scheduler *ops, struct sched_unit *unit)
{
struct csched2_private *prv = csched2_priv(ops);
+ struct vcpu *vc = unit->vcpu_list;
int i, min_rqi = -1, min_s_rqi = -1;
unsigned int new_cpu, cpu = vc->processor;
struct csched2_vcpu *svc = csched2_vcpu(vc);
@@ -2734,9 +2741,10 @@ retry:
}
static void
-csched2_vcpu_migrate(
- const struct scheduler *ops, struct vcpu *vc, unsigned int new_cpu)
+csched2_unit_migrate(
+ const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
{
+ struct vcpu *vc = unit->vcpu_list;
struct domain *d = vc->domain;
struct csched2_vcpu * const svc = csched2_vcpu(vc);
struct csched2_runqueue_data *trqd;
@@ -2997,9 +3005,10 @@ csched2_dom_cntl(
}
static void
-csched2_aff_cntl(const struct scheduler *ops, struct vcpu *v,
+csched2_aff_cntl(const struct scheduler *ops, struct sched_unit *unit,
const cpumask_t *hard, const cpumask_t *soft)
{
+ struct vcpu *v = unit->vcpu_list;
struct csched2_vcpu *svc = csched2_vcpu(v);
if ( !hard )
@@ -3097,8 +3106,9 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
}
static void
-csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu *svc = vc->sched_priv;
struct csched2_dom * const sdom = svc->sdom;
spinlock_t *lock;
@@ -3109,7 +3119,7 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
/* csched2_cpu_pick() expects the pcpu lock to be held */
lock = vcpu_schedule_lock_irq(vc);
- vc->processor = csched2_cpu_pick(ops, vc);
+ vc->processor = csched2_cpu_pick(ops, unit);
spin_unlock_irq(lock);
@@ -3136,8 +3146,9 @@ csched2_free_vdata(const struct scheduler *ops, void *priv)
}
static void
-csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct csched2_vcpu * const svc = csched2_vcpu(vc);
spinlock_t *lock;
@@ -4083,19 +4094,19 @@ static const struct scheduler sched_credit2_def = {
.global_init = csched2_global_init,
- .insert_vcpu = csched2_vcpu_insert,
- .remove_vcpu = csched2_vcpu_remove,
+ .insert_unit = csched2_unit_insert,
+ .remove_unit = csched2_unit_remove,
- .sleep = csched2_vcpu_sleep,
- .wake = csched2_vcpu_wake,
- .yield = csched2_vcpu_yield,
+ .sleep = csched2_unit_sleep,
+ .wake = csched2_unit_wake,
+ .yield = csched2_unit_yield,
.adjust = csched2_dom_cntl,
.adjust_affinity= csched2_aff_cntl,
.adjust_global = csched2_sys_cntl,
.pick_cpu = csched2_cpu_pick,
- .migrate = csched2_vcpu_migrate,
+ .migrate = csched2_unit_migrate,
.do_schedule = csched2_schedule,
.context_saved = csched2_context_saved,
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 6782ecda5c..fd031c989b 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -186,8 +186,9 @@ static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
}
static void *null_alloc_vdata(const struct scheduler *ops,
- struct vcpu *v, void *dd)
+ struct sched_unit *unit, void *dd)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_vcpu *nvc;
nvc = xzalloc(struct null_vcpu);
@@ -435,8 +436,10 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
return &sd->_lock;
}
-static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_insert(const struct scheduler *ops,
+ struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_private *prv = null_priv(ops);
struct null_vcpu *nvc = null_vcpu(v);
unsigned int cpu;
@@ -496,8 +499,10 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
SCHED_STAT_CRANK(vcpu_insert);
}
-static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_remove(const struct scheduler *ops,
+ struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_private *prv = null_priv(ops);
struct null_vcpu *nvc = null_vcpu(v);
spinlock_t *lock;
@@ -532,8 +537,10 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
SCHED_STAT_CRANK(vcpu_remove);
}
-static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_wake(const struct scheduler *ops,
+ struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_private *prv = null_priv(ops);
struct null_vcpu *nvc = null_vcpu(v);
unsigned int cpu = v->processor;
@@ -604,8 +611,10 @@ static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
}
-static void null_vcpu_sleep(const struct scheduler *ops, struct vcpu *v)
+static void null_unit_sleep(const struct scheduler *ops,
+ struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_private *prv = null_priv(ops);
unsigned int cpu = v->processor;
bool tickled = false;
@@ -637,15 +646,17 @@ static void null_vcpu_sleep(const struct scheduler *ops, struct vcpu *v)
SCHED_STAT_CRANK(vcpu_sleep);
}
-static int null_cpu_pick(const struct scheduler *ops, struct vcpu *v)
+static int null_cpu_pick(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *v = unit->vcpu_list;
ASSERT(!is_idle_vcpu(v));
return pick_cpu(null_priv(ops), v);
}
-static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
- unsigned int new_cpu)
+static void null_unit_migrate(const struct scheduler *ops,
+ struct sched_unit *unit, unsigned int new_cpu)
{
+ struct vcpu *v = unit->vcpu_list;
struct null_private *prv = null_priv(ops);
struct null_vcpu *nvc = null_vcpu(v);
@@ -965,13 +976,13 @@ static const struct scheduler sched_null_def = {
.alloc_domdata = null_alloc_domdata,
.free_domdata = null_free_domdata,
- .insert_vcpu = null_vcpu_insert,
- .remove_vcpu = null_vcpu_remove,
+ .insert_unit = null_unit_insert,
+ .remove_unit = null_unit_remove,
- .wake = null_vcpu_wake,
- .sleep = null_vcpu_sleep,
+ .wake = null_unit_wake,
+ .sleep = null_unit_sleep,
.pick_cpu = null_cpu_pick,
- .migrate = null_vcpu_migrate,
+ .migrate = null_unit_migrate,
.do_schedule = null_schedule,
.dump_cpu_state = null_dump_pcpu,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index e0e350bdf3..da76a41436 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -136,7 +136,7 @@
* RTDS_delayed_runq_add: Do we need to add this to the RunQ/DepletedQ
* once it's done being context switching out?
* + Set when scheduling out in rt_schedule() if prev is runable
- * + Set in rt_vcpu_wake if it finds RTDS_scheduled set
+ * + Set in rt_unit_wake if it finds RTDS_scheduled set
* + Read in rt_context_saved(). If set, it adds prev to the Runqueue/DepletedQ
* and clears the bit.
*/
@@ -636,8 +636,9 @@ replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
* and available cpus
*/
static int
-rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+rt_cpu_pick(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
cpumask_t cpus;
cpumask_t *online;
int cpu;
@@ -837,8 +838,9 @@ rt_free_domdata(const struct scheduler *ops, void *data)
}
static void *
-rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
+rt_alloc_vdata(const struct scheduler *ops, struct sched_unit *unit, void *dd)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu *svc;
/* Allocate per-VCPU info */
@@ -880,8 +882,9 @@ rt_free_vdata(const struct scheduler *ops, void *priv)
* dest. cpupool.
*/
static void
-rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu *svc = rt_vcpu(vc);
s_time_t now;
spinlock_t *lock;
@@ -889,7 +892,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
BUG_ON( is_idle_vcpu(vc) );
/* This is safe because vc isn't yet being scheduled */
- vc->processor = rt_cpu_pick(ops, vc);
+ vc->processor = rt_cpu_pick(ops, unit);
lock = vcpu_schedule_lock_irq(vc);
@@ -913,8 +916,9 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
* Remove rt_vcpu svc from the old scheduler in source cpupool.
*/
static void
-rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu * const svc = rt_vcpu(vc);
struct rt_dom * const sdom = svc->sdom;
spinlock_t *lock;
@@ -1133,8 +1137,9 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
* The lock is already grabbed in schedule.c, no need to lock here
*/
static void
-rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu * const svc = rt_vcpu(vc);
BUG_ON( is_idle_vcpu(vc) );
@@ -1248,8 +1253,9 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
* TODO: what if these two vcpus belongs to the same domain?
*/
static void
-rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
+rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu * const svc = rt_vcpu(vc);
s_time_t now;
bool_t missed;
@@ -1318,8 +1324,9 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
* and then pick the highest priority vcpu from runq to run
*/
static void
-rt_context_saved(const struct scheduler *ops, struct vcpu *vc)
+rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
{
+ struct vcpu *vc = unit->vcpu_list;
struct rt_vcpu *svc = rt_vcpu(vc);
spinlock_t *lock = vcpu_schedule_lock_irq(vc);
@@ -1548,15 +1555,15 @@ static const struct scheduler sched_rtds_def = {
.free_domdata = rt_free_domdata,
.alloc_vdata = rt_alloc_vdata,
.free_vdata = rt_free_vdata,
- .insert_vcpu = rt_vcpu_insert,
- .remove_vcpu = rt_vcpu_remove,
+ .insert_unit = rt_unit_insert,
+ .remove_unit = rt_unit_remove,
.adjust = rt_dom_cntl,
.pick_cpu = rt_cpu_pick,
.do_schedule = rt_schedule,
- .sleep = rt_vcpu_sleep,
- .wake = rt_vcpu_wake,
+ .sleep = rt_unit_sleep,
+ .wake = rt_unit_wake,
.context_saved = rt_context_saved,
};
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 1106698fb4..2c1a72c3c9 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -87,13 +87,13 @@ sched_idle_switch_sched(struct scheduler *new_ops, unsigned int cpu,
}
static int
-sched_idle_cpu_pick(const struct scheduler *ops, struct vcpu *v)
+sched_idle_cpu_pick(const struct scheduler *ops, struct sched_unit *unit)
{
- return v->processor;
+ return unit->vcpu_list->processor;
}
static void *
-sched_idle_alloc_vdata(const struct scheduler *ops, struct vcpu *v,
+sched_idle_alloc_vdata(const struct scheduler *ops, struct sched_unit *unit,
void *dd)
{
/* Any non-NULL pointer is fine here. */
@@ -308,9 +308,17 @@ static void sched_spin_unlock_double(spinlock_t *lock1, spinlock_t *lock2,
int sched_init_vcpu(struct vcpu *v, unsigned int processor)
{
struct domain *d = v->domain;
+ struct sched_unit *unit;
v->processor = processor;
+ if ( (unit = xzalloc(struct sched_unit)) == NULL )
+ return 1;
+ v->sched_unit = unit;
+ unit->vcpu_list = v;
+ unit->unit_id = v->vcpu_id;
+ unit->domain = d;
+
/* Initialise the per-vcpu timers. */
init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
v, v->processor);
@@ -319,9 +327,13 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
init_timer(&v->poll_timer, poll_timer_fn,
v, v->processor);
- v->sched_priv = sched_alloc_vdata(dom_scheduler(d), v, d->sched_priv);
+ v->sched_priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
if ( v->sched_priv == NULL )
+ {
+ v->sched_unit = NULL;
+ xfree(unit);
return 1;
+ }
/*
* Initialize affinity settings. The idler, and potentially
@@ -340,7 +352,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
}
else
{
- sched_insert_vcpu(dom_scheduler(d), v);
+ sched_insert_unit(dom_scheduler(d), unit);
}
return 0;
@@ -381,7 +393,8 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
for_each_vcpu ( d, v )
{
- vcpu_priv[v->vcpu_id] = sched_alloc_vdata(c->sched, v, domdata);
+ vcpu_priv[v->vcpu_id] = sched_alloc_vdata(c->sched, v->sched_unit,
+ domdata);
if ( vcpu_priv[v->vcpu_id] == NULL )
{
for_each_vcpu ( d, v )
@@ -399,7 +412,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
for_each_vcpu ( d, v )
{
- sched_remove_vcpu(old_ops, v);
+ sched_remove_unit(old_ops, v->sched_unit);
}
d->cpupool = c;
@@ -434,7 +447,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
new_p = cpumask_cycle(new_p, c->cpu_valid);
- sched_insert_vcpu(c->sched, v);
+ sched_insert_unit(c->sched, v->sched_unit);
sched_free_vdata(old_ops, vcpudata);
}
@@ -452,13 +465,17 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
void sched_destroy_vcpu(struct vcpu *v)
{
+ struct sched_unit *unit = v->sched_unit;
+
kill_timer(&v->periodic_timer);
kill_timer(&v->singleshot_timer);
kill_timer(&v->poll_timer);
if ( test_and_clear_bool(v->is_urgent) )
atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
- sched_remove_vcpu(vcpu_scheduler(v), v);
+ sched_remove_unit(vcpu_scheduler(v), unit);
sched_free_vdata(vcpu_scheduler(v), v->sched_priv);
+ xfree(unit);
+ v->sched_unit = NULL;
}
int sched_init_domain(struct domain *d, int poolid)
@@ -509,7 +526,7 @@ void vcpu_sleep_nosync_locked(struct vcpu *v)
if ( v->runstate.state == RUNSTATE_runnable )
vcpu_runstate_change(v, RUNSTATE_offline, NOW());
- sched_sleep(vcpu_scheduler(v), v);
+ sched_sleep(vcpu_scheduler(v), v->sched_unit);
}
}
@@ -550,7 +567,7 @@ void vcpu_wake(struct vcpu *v)
{
if ( v->runstate.state >= RUNSTATE_blocked )
vcpu_runstate_change(v, RUNSTATE_runnable, NOW());
- sched_wake(vcpu_scheduler(v), v);
+ sched_wake(vcpu_scheduler(v), v->sched_unit);
}
else if ( !(v->pause_flags & VPF_blocked) )
{
@@ -605,7 +622,7 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
* Actual CPU switch to new CPU. This is safe because the lock
* pointer can't change while the current lock is held.
*/
- sched_migrate(vcpu_scheduler(v), v, new_cpu);
+ sched_migrate(vcpu_scheduler(v), v->sched_unit, new_cpu);
}
/*
@@ -683,7 +700,7 @@ static void vcpu_migrate_finish(struct vcpu *v)
break;
/* Select a new CPU. */
- new_cpu = sched_pick_cpu(vcpu_scheduler(v), v);
+ new_cpu = sched_pick_cpu(vcpu_scheduler(v), v->sched_unit);
if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
break;
@@ -793,7 +810,7 @@ void restore_vcpu_affinity(struct domain *d)
/* v->processor might have changed, so reacquire the lock. */
lock = vcpu_schedule_lock_irq(v);
- v->processor = sched_pick_cpu(vcpu_scheduler(v), v);
+ v->processor = sched_pick_cpu(vcpu_scheduler(v), v->sched_unit);
spin_unlock_irq(lock);
if ( old_cpu != v->processor )
@@ -905,7 +922,7 @@ static int cpu_disable_scheduler_check(unsigned int cpu)
void sched_set_affinity(
struct vcpu *v, const cpumask_t *hard, const cpumask_t *soft)
{
- sched_adjust_affinity(dom_scheduler(v->domain), v, hard, soft);
+ sched_adjust_affinity(dom_scheduler(v->domain), v->sched_unit, hard, soft);
if ( hard )
cpumask_copy(v->cpu_hard_affinity, hard);
@@ -1080,7 +1097,7 @@ long vcpu_yield(void)
struct vcpu * v=current;
spinlock_t *lock = vcpu_schedule_lock_irq(v);
- sched_yield(vcpu_scheduler(v), v);
+ sched_yield(vcpu_scheduler(v), v->sched_unit);
vcpu_schedule_unlock_irq(lock, v);
SCHED_STAT_CRANK(vcpu_yield);
@@ -1605,7 +1622,7 @@ void context_saved(struct vcpu *prev)
/* Check for migration request /after/ clearing running flag. */
smp_mb();
- sched_context_saved(vcpu_scheduler(prev), prev);
+ sched_context_saved(vcpu_scheduler(prev), prev->sched_unit);
vcpu_migrate_finish(prev);
}
@@ -1881,7 +1898,8 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
ppriv = sched_alloc_pdata(new_ops, cpu);
if ( IS_ERR(ppriv) )
return PTR_ERR(ppriv);
- vpriv = sched_alloc_vdata(new_ops, idle, idle->domain->sched_priv);
+ vpriv = sched_alloc_vdata(new_ops, idle->sched_unit,
+ idle->domain->sched_priv);
if ( vpriv == NULL )
{
sched_free_pdata(new_ops, ppriv, cpu);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index dc255b064b..9fd367377a 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -141,8 +141,8 @@ struct scheduler {
void (*deinit) (struct scheduler *);
void (*free_vdata) (const struct scheduler *, void *);
- void * (*alloc_vdata) (const struct scheduler *, struct vcpu *,
- void *);
+ void * (*alloc_vdata) (const struct scheduler *,
+ struct sched_unit *, void *);
void (*free_pdata) (const struct scheduler *, void *, int);
void * (*alloc_pdata) (const struct scheduler *, int);
void (*init_pdata) (const struct scheduler *, void *, int);
@@ -156,24 +156,32 @@ struct scheduler {
spinlock_t * (*switch_sched) (struct scheduler *, unsigned int,
void *, void *);
- /* Activate / deactivate vcpus in a cpu pool */
- void (*insert_vcpu) (const struct scheduler *, struct vcpu *);
- void (*remove_vcpu) (const struct scheduler *, struct vcpu *);
-
- void (*sleep) (const struct scheduler *, struct vcpu *);
- void (*wake) (const struct scheduler *, struct vcpu *);
- void (*yield) (const struct scheduler *, struct vcpu *);
- void (*context_saved) (const struct scheduler *, struct vcpu *);
+ /* Activate / deactivate units in a cpu pool */
+ void (*insert_unit) (const struct scheduler *,
+ struct sched_unit *);
+ void (*remove_unit) (const struct scheduler *,
+ struct sched_unit *);
+
+ void (*sleep) (const struct scheduler *,
+ struct sched_unit *);
+ void (*wake) (const struct scheduler *,
+ struct sched_unit *);
+ void (*yield) (const struct scheduler *,
+ struct sched_unit *);
+ void (*context_saved) (const struct scheduler *,
+ struct sched_unit *);
struct task_slice (*do_schedule) (const struct scheduler *, s_time_t,
bool_t tasklet_work_scheduled);
- int (*pick_cpu) (const struct scheduler *, struct vcpu *);
- void (*migrate) (const struct scheduler *, struct vcpu *,
- unsigned int);
+ int (*pick_cpu) (const struct scheduler *,
+ struct sched_unit *);
+ void (*migrate) (const struct scheduler *,
+ struct sched_unit *, unsigned int);
int (*adjust) (const struct scheduler *, struct domain *,
struct xen_domctl_scheduler_op *);
- void (*adjust_affinity)(const struct scheduler *, struct vcpu *,
+ void (*adjust_affinity)(const struct scheduler *,
+ struct sched_unit *,
const struct cpumask *,
const struct cpumask *);
int (*adjust_global) (const struct scheduler *,
@@ -267,10 +275,10 @@ static inline void sched_deinit_pdata(const struct scheduler *s, void *data,
s->deinit_pdata(s, data, cpu);
}
-static inline void *sched_alloc_vdata(const struct scheduler *s, struct vcpu *v,
- void *dom_data)
+static inline void *sched_alloc_vdata(const struct scheduler *s,
+ struct sched_unit *unit, void *dom_data)
{
- return s->alloc_vdata(s, v, dom_data);
+ return s->alloc_vdata(s, unit, dom_data);
}
static inline void sched_free_vdata(const struct scheduler *s, void *data)
@@ -278,64 +286,70 @@ static inline void sched_free_vdata(const struct scheduler *s, void *data)
s->free_vdata(s, data);
}
-static inline void sched_insert_vcpu(const struct scheduler *s, struct vcpu *v)
+static inline void sched_insert_unit(const struct scheduler *s,
+ struct sched_unit *unit)
{
- if ( s->insert_vcpu )
- s->insert_vcpu(s, v);
+ if ( s->insert_unit )
+ s->insert_unit(s, unit);
}
-static inline void sched_remove_vcpu(const struct scheduler *s, struct vcpu *v)
+static inline void sched_remove_unit(const struct scheduler *s,
+ struct sched_unit *unit)
{
- if ( s->remove_vcpu )
- s->remove_vcpu(s, v);
+ if ( s->remove_unit )
+ s->remove_unit(s, unit);
}
-static inline void sched_sleep(const struct scheduler *s, struct vcpu *v)
+static inline void sched_sleep(const struct scheduler *s,
+ struct sched_unit *unit)
{
if ( s->sleep )
- s->sleep(s, v);
+ s->sleep(s, unit);
}
-static inline void sched_wake(const struct scheduler *s, struct vcpu *v)
+static inline void sched_wake(const struct scheduler *s,
+ struct sched_unit *unit)
{
if ( s->wake )
- s->wake(s, v);
+ s->wake(s, unit);
}
-static inline void sched_yield(const struct scheduler *s, struct vcpu *v)
+static inline void sched_yield(const struct scheduler *s,
+ struct sched_unit *unit)
{
if ( s->yield )
- s->yield(s, v);
+ s->yield(s, unit);
}
static inline void sched_context_saved(const struct scheduler *s,
- struct vcpu *v)
+ struct sched_unit *unit)
{
if ( s->context_saved )
- s->context_saved(s, v);
+ s->context_saved(s, unit);
}
-static inline void sched_migrate(const struct scheduler *s, struct vcpu *v,
- unsigned int cpu)
+static inline void sched_migrate(const struct scheduler *s,
+ struct sched_unit *unit, unsigned int cpu)
{
if ( s->migrate )
- s->migrate(s, v, cpu);
+ s->migrate(s, unit, cpu);
else
- v->processor = cpu;
+ unit->vcpu_list->processor = cpu;
}
-static inline int sched_pick_cpu(const struct scheduler *s, struct vcpu *v)
+static inline int sched_pick_cpu(const struct scheduler *s,
+ struct sched_unit *unit)
{
- return s->pick_cpu(s, v);
+ return s->pick_cpu(s, unit);
}
static inline void sched_adjust_affinity(const struct scheduler *s,
- struct vcpu *v,
+ struct sched_unit *unit,
const cpumask_t *hard,
const cpumask_t *soft)
{
if ( s->adjust_affinity )
- s->adjust_affinity(s, v, hard, soft);
+ s->adjust_affinity(s, unit, hard, soft);
}
static inline int sched_adjust_dom(const struct scheduler *s, struct domain *d,
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2e6e0d3488..d7dd182885 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -140,6 +140,7 @@ void evtchn_destroy(struct domain *d); /* from domain_kill */
void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
struct waitqueue_vcpu;
+struct sched_unit;
struct vcpu
{
@@ -160,6 +161,7 @@ struct vcpu
struct timer poll_timer; /* timeout for SCHEDOP_poll */
+ struct sched_unit *sched_unit;
void *sched_priv; /* scheduler-specific data */
struct vcpu_runstate_info runstate;
@@ -272,6 +274,12 @@ struct vcpu
struct arch_vcpu arch;
};
+struct sched_unit {
+ struct domain *domain;
+ struct vcpu *vcpu_list;
+ int unit_id;
+};
+
/* Per-domain lock can be recursively acquired in fault handlers. */
#define domain_lock(d) spin_lock_recursive(&(d)->domain_lock)
#define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)
--
2.16.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 126+ messages
2019-08-09 14:57 [Xen-devel] [PATCH v2 00/48] xen: add core scheduling support Juergen Gross
2019-08-09 14:57 ` Juergen Gross [this message]
2019-09-02 9:07 ` [Xen-devel] [PATCH v2 01/48] xen/sched: use new sched_unit instead of vcpu in scheduler interfaces Jan Beulich
2019-09-09 5:26 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 02/48] xen/sched: move per-vcpu scheduler private data pointer to sched_unit Juergen Gross
2019-08-23 10:47 ` Dario Faggioli
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 03/48] xen/sched: build a linked list of struct sched_unit Juergen Gross
2019-08-23 10:52 ` Dario Faggioli
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 04/48] xen/sched: introduce struct sched_resource Juergen Gross
2019-08-23 10:54 ` Dario Faggioli
2019-09-04 13:10 ` Jan Beulich
2019-09-09 5:31 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 05/48] xen/sched: let pick_cpu return a scheduler resource Juergen Gross
2019-09-04 13:34 ` Jan Beulich
2019-09-09 5:43 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 06/48] xen/sched: switch schedule_data.curr to point at sched_unit Juergen Gross
2019-09-04 13:36 ` Jan Beulich
2019-09-09 5:46 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 07/48] xen/sched: move per cpu scheduler private data into struct sched_resource Juergen Gross
2019-09-04 13:48 ` Jan Beulich
2019-09-05 7:13 ` Juergen Gross
2019-09-05 7:38 ` Jan Beulich
2019-09-09 13:03 ` Dario Faggioli
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 08/48] xen/sched: switch vcpu_schedule_lock to unit_schedule_lock Juergen Gross
2019-09-04 14:02 ` Jan Beulich
2019-09-04 14:41 ` Juergen Gross
2019-09-04 14:54 ` Jan Beulich
2019-09-04 15:02 ` Juergen Gross
2019-09-11 16:02 ` Dario Faggioli
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 09/48] xen/sched: move some per-vcpu items to struct sched_unit Juergen Gross
2019-09-04 14:16 ` Jan Beulich
2019-09-09 6:39 ` Juergen Gross
2019-09-09 6:55 ` Jan Beulich
2019-09-09 7:05 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 10/48] xen/sched: add scheduler helpers hiding vcpu Juergen Gross
2019-09-04 14:49 ` Jan Beulich
2019-09-11 13:22 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 11/48] xen/sched: rename scheduler related perf counters Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 12/48] xen/sched: switch struct task_slice from vcpu to sched_unit Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 13/48] xen/sched: add is_running indicator to struct sched_unit Juergen Gross
2019-09-04 15:06 ` Jan Beulich
2019-09-11 13:44 ` Juergen Gross
2019-09-11 15:06 ` Jan Beulich
2019-09-11 15:32 ` Juergen Gross
2019-08-09 14:57 ` [Xen-devel] [PATCH v2 14/48] xen/sched: make null scheduler vcpu agnostic Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 15/48] xen/sched: make rt " Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 16/48] xen/sched: make credit " Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 17/48] xen/sched: make credit2 " Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 18/48] xen/sched: make arinc653 " Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 19/48] xen: add sched_unit_pause_nosync() and sched_unit_unpause() Juergen Gross
2019-09-09 13:34 ` Jan Beulich
2019-09-11 14:15 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 20/48] xen: let vcpu_create() select processor Juergen Gross
2019-08-23 16:42 ` Julien Grall
2019-09-09 13:38 ` Jan Beulich
2019-09-11 14:22 ` Juergen Gross
2019-09-11 17:20 ` Dario Faggioli
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 21/48] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers Juergen Gross
2019-09-09 14:17 ` Jan Beulich
2019-09-12 9:34 ` Juergen Gross
2019-09-12 10:04 ` Jan Beulich
2019-09-12 11:03 ` Juergen Gross
2019-09-12 11:17 ` Juergen Gross
2019-09-12 11:46 ` Jan Beulich
2019-09-12 11:53 ` Juergen Gross
2019-09-12 12:08 ` Jan Beulich
2019-09-12 12:13 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 22/48] xen/sched: switch schedule() from vcpus to sched_units Juergen Gross
2019-09-09 14:35 ` Jan Beulich
2019-09-12 13:44 ` Juergen Gross
2019-09-12 14:34 ` Jan Beulich
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 23/48] xen/sched: switch sched_move_irqs() to take sched_unit as parameter Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 24/48] xen: switch from for_each_vcpu() to for_each_sched_unit() Juergen Gross
2019-09-09 15:14 ` Jan Beulich
2019-09-12 14:02 ` Juergen Gross
2019-09-12 14:40 ` Jan Beulich
2019-09-12 14:47 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 25/48] xen/sched: add runstate counters to struct sched_unit Juergen Gross
2019-09-09 14:30 ` Jan Beulich
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 26/48] xen/sched: rework and rename vcpu_force_reschedule() Juergen Gross
2019-09-10 14:06 ` Jan Beulich
2019-09-13 9:33 ` Juergen Gross
2019-09-13 9:40 ` Jan Beulich
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 27/48] xen/sched: Change vcpu_migrate_*() to operate on schedule unit Juergen Gross
2019-09-10 15:11 ` Jan Beulich
2019-09-13 12:33 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 28/48] xen/sched: move struct task_slice into struct sched_unit Juergen Gross
2019-09-10 15:18 ` Jan Beulich
2019-09-13 12:56 ` Juergen Gross
2019-09-12 8:13 ` Dario Faggioli
2019-09-12 8:21 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 29/48] xen/sched: add code to sync scheduling of all vcpus of a sched unit Juergen Gross
2019-09-10 15:36 ` Jan Beulich
2019-09-13 13:12 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 30/48] xen/sched: introduce unit_runnable_state() Juergen Gross
2019-09-11 10:30 ` Jan Beulich
2019-09-12 10:22 ` Dario Faggioli
2019-09-13 14:07 ` Juergen Gross
2019-09-13 14:44 ` Jan Beulich
2019-09-13 15:23 ` Juergen Gross
2019-09-12 10:24 ` Dario Faggioli
2019-09-13 14:14 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 31/48] xen/sched: add support for multiple vcpus per sched unit where missing Juergen Gross
2019-09-11 10:43 ` Jan Beulich
2019-09-13 15:01 ` Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 32/48] xen/sched: modify cpupool_domain_cpumask() to be an unit mask Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 33/48] xen/sched: support allocating multiple vcpus into one sched unit Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 34/48] xen/sched: add a percpu resource index Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 35/48] xen/sched: add fall back to idle vcpu when scheduling unit Juergen Gross
2019-09-11 11:33 ` Julien Grall
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 36/48] xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 37/48] xen/sched: carve out freeing sched_unit memory into dedicated function Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 38/48] xen/sched: move per-cpu variable scheduler to struct sched_resource Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 39/48] xen/sched: move per-cpu variable cpupool " Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 40/48] xen/sched: reject switching smt on/off with core scheduling active Juergen Gross
2019-09-10 15:47 ` Jan Beulich
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 41/48] xen/sched: prepare per-cpupool scheduling granularity Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 42/48] xen/sched: split schedule_cpu_switch() Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 43/48] xen/sched: protect scheduling resource via rcu Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 44/48] xen/sched: support multiple cpus per scheduling resource Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 45/48] xen/sched: support differing granularity in schedule_cpu_[add/rm]() Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 46/48] xen/sched: support core scheduling for moving cpus to/from cpupools Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 47/48] xen/sched: disable scheduling when entering ACPI deep sleep states Juergen Gross
2019-08-09 14:58 ` [Xen-devel] [PATCH v2 48/48] xen/sched: add scheduling granularity enum Juergen Gross
2019-08-15 10:17 ` [Xen-devel] [PATCH v2 00/48] xen: add core scheduling support Sergey Dyasli
2019-09-05 6:22 ` Juergen Gross