From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH RFC V2 21/45] xen/sched: make credit2 scheduler vcpu agnostic.
Date: Mon,  6 May 2019 08:56:20 +0200
Message-ID: <20190506065644.7415-22-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>

Switch the credit2 scheduler completely from vcpu to sched_item usage.

As we are touching lots of lines anyway, remove some trailing white
space, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_credit2.c | 820 ++++++++++++++++++++++-----------------------
 1 file changed, 403 insertions(+), 417 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 0f0d6a3801..7918d46a23 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -45,7 +45,7 @@
 #define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
 #define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
 #define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
+#define TRC_CSCHED2_UPDATE_ITEM_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
 #define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
 #define TRC_CSCHED2_TICKLE_NEW       TRC_SCHED_CLASS_EVT(CSCHED2, 13)
 #define TRC_CSCHED2_RUNQ_MAX_WEIGHT  TRC_SCHED_CLASS_EVT(CSCHED2, 14)
@@ -74,13 +74,13 @@
  * Design:
  *
  * VMs "burn" credits based on their weight; higher weight means
- * credits burn more slowly.  The highest weight vcpu burns credits at
+ * credits burn more slowly.  The highest weight item burns credits at
  * a rate of 1 credit per nanosecond.  Others burn proportionally
  * more.
  *
- * vcpus are inserted into the runqueue by credit order.
+ * items are inserted into the runqueue by credit order.
  *
- * Credits are "reset" when the next vcpu in the runqueue is less than
+ * Credits are "reset" when the next item in the runqueue is less than
  * or equal to zero.  At that point, everyone's credits are "clipped"
  * to a small value, and a fixed credit is added to everyone.
  */
@@ -95,33 +95,33 @@
  *   be given a cap of 25%; a domain that must not use more than 1+1/2 of
  *   physical CPU time, will be given a cap of 150%;
  *
- * - caps are per-domain (not per-vCPU). If a domain has only 1 vCPU, and
- *   a 40% cap, that one vCPU will use 40% of one pCPU. If a somain has 4
- *   vCPUs, and a 200% cap, the equivalent of 100% time on 2 pCPUs will be
- *   split among the v vCPUs. How much each of the vCPUs will actually get,
+ * - caps are per-domain (not per-item). If a domain has only 1 item, and
+ *   a 40% cap, that one item will use 40% of one pCPU. If a domain has 4
+ *   items, and a 200% cap, the equivalent of 100% time on 2 pCPUs will be
+ *   split among the 4 items. How much each of the items will actually get,
  *   during any given interval of time, is unspecified (as it depends on
  *   various aspects: workload, system load, etc.). For instance, it is
- *   possible that, during a given time interval, 2 vCPUs use 100% each,
+ *   possible that, during a given time interval, 2 items use 100% each,
  *   and the other two use nothing; while during another time interval,
- *   two vCPUs use 80%, one uses 10% and the other 30%; or that each use
+ *   two items use 80%, one uses 10% and the other 30%; or that each use
  *   50% (and so on and so forth).
  *
  * For implementing this, we use the following approach:
  *
 * - each domain is given a 'budget', and each domain has a timer, which
  *   replenishes the domain's budget periodically. The budget is the amount
- *   of time the vCPUs of the domain can use every 'period';
+ *   of time the items of the domain can use every 'period';
  *
  * - the period is CSCHED2_BDGT_REPL_PERIOD, and is the same for all domains
 *   (but each domain has its own timer; so they are all periodic with the same
  *   period, but replenishment of the budgets of the various domains, at
  *   periods boundaries, are not synchronous);
  *
- * - when vCPUs run, they consume budget. When they don't run, they don't
- *   consume budget. If there is no budget left for the domain, no vCPU of
- *   that domain can run. If a vCPU tries to run and finds that there is no
+ * - when items run, they consume budget. When they don't run, they don't
+ *   consume budget. If there is no budget left for the domain, no item of
+ *   that domain can run. If an item tries to run and finds that there is no
  *   budget, it blocks.
- *   At whatever time a vCPU wants to run, it must check the domain's budget,
+ *   At whatever time an item wants to run, it must check the domain's budget,
  *   and if there is some, it can use it.
  *
  * - budget is replenished to the top of the capacity for the domain once
@@ -129,39 +129,39 @@
  *   though, the budget after a replenishment will always be at most equal
 *   to the total capacity of the domain ('tot_budget');
  *
- * - when a budget replenishment occurs, if there are vCPUs that had been
+ * - when a budget replenishment occurs, if there are items that had been
  *   blocked because of lack of budget, they'll be unblocked, and they will
  *   (potentially) be able to run again.
  *
  * Finally, some even more implementation related detail:
  *
- * - budget is stored in a domain-wide pool. vCPUs of the domain that want
+ * - budget is stored in a domain-wide pool. Items of the domain that want
 *   to run go to such pool, and grab some. When they do so, the amount
  *   they grabbed is _immediately_ removed from the pool. This happens in
- *   vcpu_grab_budget();
+ *   item_grab_budget();
  *
- * - when vCPUs stop running, if they've not consumed all the budget they
+ * - when items stop running, if they've not consumed all the budget they
  *   took, the leftover is put back in the pool. This happens in
- *   vcpu_return_budget();
+ *   item_return_budget();
  *
- * - the above means that a vCPU can find out that there is no budget and
+ * - the above means that an item can find out that there is no budget and
  *   block, not only if the cap has actually been reached (for this period),
- *   but also if some other vCPUs, in order to run, have grabbed a certain
+ *   but also if some other items, in order to run, have grabbed a certain
  *   quota of budget, no matter whether they've already used it all or not.
- *   A vCPU blocking because (any form of) lack of budget is said to be
- *   "parked", and such blocking happens in park_vcpu();
+ *   An item blocking because of (any form of) lack of budget is said to be
+ *   "parked", and such blocking happens in park_item();
  *
- * - when a vCPU stops running, and puts back some budget in the domain pool,
+ * - when an item stops running, and puts back some budget in the domain pool,
 *   we need to check whether there is someone that has been parked and that
- *   can be unparked. This happens in unpark_parked_vcpus(), called from
+ *   can be unparked. This happens in unpark_parked_items(), called from
  *   csched2_context_saved();
  *
  * - of course, unparking happens also as a consequence of the domain's budget
  *   being replenished by the periodic timer. This also occurs by means of
  *   calling csched2_context_saved() (but from replenish_domain_budget());
  *
- * - parked vCPUs of a domain are kept in a (per-domain) list, called
- *   'parked_vcpus'). Manipulation of the list and of the domain-wide budget
+ * - parked items of a domain are kept in a (per-domain) list, called
+ *   'parked_items'). Manipulation of the list and of the domain-wide budget
  *   pool, must occur only when holding the 'budget_lock'.
  */
 
@@ -174,9 +174,9 @@
  *     pcpu_schedule_lock() / item_schedule_lock() (and friends),
  *   * a cpu may (try to) take a "remote" runqueue lock, e.g., for
  *     load balancing;
- *  + serializes runqueue operations (removing and inserting vcpus);
+ *  + serializes runqueue operations (removing and inserting items);
  *  + protects runqueue-wide data in csched2_runqueue_data;
- *  + protects vcpu parameters in csched2_item for the vcpu in the
+ *  + protects item parameters in csched2_item for the item in the
  *    runqueue.
  *
  * - Private scheduler lock
@@ -190,8 +190,8 @@
  *  + it is per-domain;
 *  + protects, in domains that have a utilization cap;
  *   * manipulation of the total budget of the domain (as it is shared
- *     among all vCPUs of the domain),
- *   * manipulation of the list of vCPUs that are blocked waiting for
+ *     among all items of the domain),
+ *   * manipulation of the list of items that are blocked waiting for
  *     some budget to be available.
  *
  * - Type:
@@ -228,9 +228,9 @@
  */
 #define CSCHED2_CREDIT_INIT          MILLISECS(10)
 /*
- * Amount of credit the idle vcpus have. It never changes, as idle
- * vcpus does not consume credits, and it must be lower than whatever
- * amount of credit 'regular' vcpu would end up with.
+ * Amount of credit the idle items have. It never changes, as idle
+ * items do not consume credits, and it must be lower than whatever
+ * amount of credit a 'regular' item would end up with.
  */
 #define CSCHED2_IDLE_CREDIT          (-(1U<<30))
 /*
@@ -243,9 +243,9 @@
  * MIN_TIMER.
  */
 #define CSCHED2_MIGRATE_RESIST       ((opt_migrate_resist)*MICROSECS(1))
-/* How much to "compensate" a vcpu for L2 migration. */
+/* How much to "compensate" an item for L2 migration. */
 #define CSCHED2_MIGRATE_COMPENSATION MICROSECS(50)
-/* How tolerant we should be when peeking at runtime of vcpus on other cpus */
+/* How tolerant we should be when peeking at runtime of items on other cpus */
 #define CSCHED2_RATELIMIT_TICKLE_TOLERANCE MICROSECS(50)
 /* Reset: Value below which credit will be reset. */
 #define CSCHED2_CREDIT_RESET         0
@@ -258,7 +258,7 @@
  * Flags
  */
 /*
- * CSFLAG_scheduled: Is this vcpu either running on, or context-switching off,
+ * CSFLAG_scheduled: Is this item either running on, or context-switching off,
  * a physical cpu?
  * + Accessed only with runqueue lock held
  * + Set when chosen as next in csched2_schedule().
@@ -280,21 +280,21 @@
 #define __CSFLAG_delayed_runq_add 2
 #define CSFLAG_delayed_runq_add (1U<<__CSFLAG_delayed_runq_add)
 /*
- * CSFLAG_runq_migrate_request: This vcpu is being migrated as a result of a
+ * CSFLAG_runq_migrate_request: This item is being migrated as a result of a
  * credit2-initiated runq migrate request; migrate it to the runqueue indicated
- * in the svc struct. 
+ * in the svc struct.
  */
 #define __CSFLAG_runq_migrate_request 3
 #define CSFLAG_runq_migrate_request (1U<<__CSFLAG_runq_migrate_request)
 /*
- * CSFLAG_vcpu_yield: this vcpu was running, and has called vcpu_yield(). The
+ * CSFLAG_item_yield: this item was running, and has called vcpu_yield(). The
  * scheduler is invoked to see if we can give the cpu to someone else, and
- * get back to the yielding vcpu in a while.
+ * get back to the yielding item in a while.
  */
-#define __CSFLAG_vcpu_yield 4
-#define CSFLAG_vcpu_yield (1U<<__CSFLAG_vcpu_yield)
+#define __CSFLAG_item_yield 4
+#define CSFLAG_item_yield (1U<<__CSFLAG_item_yield)
 /*
- * CSFLAGS_pinned: this vcpu is currently 'pinned', i.e., has its hard
+ * CSFLAG_pinned: this item is currently 'pinned', i.e., has its hard
  * affinity set to one and only 1 cpu (and, hence, can only run there).
  */
 #define __CSFLAG_pinned 5
@@ -306,7 +306,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
 /*
  * Load tracking and load balancing
  *
- * Load history of runqueues and vcpus is accounted for by using an
+ * Load history of runqueues and items is accounted for by using an
  * exponential weighted moving average algorithm. However, instead of using
 * fractions, we shift everything to the left by the number of bits we want to
  * use for representing the fractional part (Q-format).
@@ -326,7 +326,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  *
 * where W is the length of the window, P the multiplier for transitioning into
  * Q-format fixed point arithmetic and load is the instantaneous load of a
- * runqueue, which basically is the number of runnable vcpus there are on the
+ * runqueue, which basically is the number of runnable items there are on the
  * runqueue (for the meaning of the other terms, look at the doc comment to
  *  update_runq_load()).
  *
@@ -338,7 +338,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  * The maximum possible value for the average load, which we want to store in
  * s_time_t type variables (i.e., we have 63 bits available) is load*P. This
  * means that, with P 18 bits wide, load can occupy 45 bits. This in turn
- * means we can have 2^45 vcpus in each runqueue, before overflow occurs!
+ * means we can have 2^45 items in each runqueue, before overflow occurs!
  *
  * However, it can happen that, at step j+1, if:
  *
@@ -354,13 +354,13 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  *
  *  2^(63 - 30 - 18) = 2^15 = 32768
  *
- * So 32768 is the maximum number of vcpus the we can have in a runqueue,
+ * So 32768 is the maximum number of items that we can have in a runqueue,
  * at any given time, and still not have problems with the load tracking
  * calculations... and this is more than fine.
  *
  * As a matter of fact, since we are using microseconds granularity, we have
  * W=2^20. So, still with 18 fractional bits and a 1 second long window, there
- * may be 2^25 = 33554432 vcpus in a runq before we have to start thinking
+ * may be 2^25 = 33554432 items in a runq before we have to start thinking
  * about overflow.
  */
 
@@ -468,7 +468,7 @@ struct csched2_runqueue_data {
     struct list_head runq;     /* Ordered list of runnable vms               */
     int id;                    /* ID of this runqueue (-1 if invalid)        */
 
-    int load;                  /* Instantaneous load (num of non-idle vcpus) */
+    int load;                  /* Instantaneous load (num of non-idle items) */
     s_time_t load_last_update; /* Last time average was updated              */
     s_time_t avgload;          /* Decaying queue load                        */
     s_time_t b_avgload;        /* Decaying queue load modified by balancing  */
@@ -478,8 +478,8 @@ struct csched2_runqueue_data {
         tickled,               /* Have been asked to go through schedule     */
         idle;                  /* Currently idle pcpus                       */
 
-    struct list_head svc;      /* List of all vcpus assigned to the runqueue */
-    unsigned int max_weight;   /* Max weight of the vcpus in this runqueue   */
+    struct list_head svc;      /* List of all items assigned to the runqueue */
+    unsigned int max_weight;   /* Max weight of the items in this runqueue   */
     unsigned int pick_bias;    /* Last picked pcpu. Start from it next time  */
 };
 
@@ -509,20 +509,20 @@ struct csched2_pcpu {
 };
 
 /*
- * Virtual CPU
+ * Schedule Item
  */
 struct csched2_item {
     struct csched2_dom *sdom;          /* Up-pointer to domain                */
-    struct vcpu *vcpu;                 /* Up-pointer, to vcpu                 */
+    struct sched_item *item;           /* Up-pointer, to schedule item        */
     struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue          */
 
     int credit;                        /* Current amount of credit            */
-    unsigned int weight;               /* Weight of this vcpu                 */
+    unsigned int weight;               /* Weight of this item                 */
     unsigned int residual;             /* Remainder of div(max_weight/weight) */
     unsigned flags;                    /* Status flags (16 bits would be ok,  */
     s_time_t budget;                   /* Current budget (if domain has cap)  */
                                        /* but clear_bit() does not like that) */
-    s_time_t budget_quota;             /* Budget to which vCPU is entitled    */
+    s_time_t budget_quota;             /* Budget to which item is entitled    */
 
     s_time_t start_time;               /* Time we were scheduled (for credit) */
 
@@ -531,7 +531,7 @@ struct csched2_item {
     s_time_t avgload;                  /* Decaying queue load                 */
 
     struct list_head runq_elem;        /* On the runqueue (rqd->runq)         */
-    struct list_head parked_elem;      /* On the parked_vcpus list            */
+    struct list_head parked_elem;      /* On the parked_items list            */
     struct list_head rqd_elem;         /* On csched2_runqueue_data's svc list */
     struct csched2_runqueue_data *migrate_rqd; /* Pre-determined migr. target */
     int tickled_cpu;                   /* Cpu that will pick us (-1 if none)  */
@@ -549,12 +549,12 @@ struct csched2_dom {
 
     struct timer repl_timer;    /* Timer for periodic replenishment of budget */
     s_time_t next_repl;         /* Time at which next replenishment occurs    */
-    struct list_head parked_vcpus; /* List of CPUs waiting for budget         */
+    struct list_head parked_items; /* List of items waiting for budget        */
 
     struct list_head sdom_elem; /* On csched2_runqueue_data's sdom list       */
     uint16_t weight;            /* User specified weight                      */
     uint16_t cap;               /* User specified cap                         */
-    uint16_t nr_vcpus;          /* Number of vcpus of this domain             */
+    uint16_t nr_items;          /* Number of items of this domain             */
 };
 
 /*
@@ -593,7 +593,7 @@ static inline struct csched2_runqueue_data *c2rqd(const struct scheduler *ops,
     return &csched2_priv(ops)->rqd[c2r(cpu)];
 }
 
-/* Does the domain of this vCPU have a cap? */
+/* Does the domain of this item have a cap? */
 static inline bool has_cap(const struct csched2_item *svc)
 {
     return svc->budget != STIME_MAX;
@@ -611,24 +611,24 @@ static inline bool has_cap(const struct csched2_item *svc)
  *    smt_idle mask.
  *
  * Once we have such a mask, it is easy to implement a policy that, either:
- *  - uses fully idle cores first: it is enough to try to schedule the vcpus
+ *  - uses fully idle cores first: it is enough to try to schedule the items
  *    on pcpus from smt_idle mask first. This is what happens if
  *    sched_smt_power_savings was not set at boot (default), and it maximizes
  *    true parallelism, and hence performance;
- *  - uses already busy cores first: it is enough to try to schedule the vcpus
+ *  - uses already busy cores first: it is enough to try to schedule the items
  *    on pcpus that are idle, but are not in smt_idle. This is what happens if
 *    sched_smt_power_savings is set at boot, and it allows as many cores as
  *    possible to stay in low power states, minimizing power consumption.
  *
  * This logic is entirely implemented in runq_tickle(), and that is enough.
- * In fact, in this scheduler, placement of a vcpu on one of the pcpus of a
+ * In fact, in this scheduler, placement of an item on one of the pcpus of a
  * runq, _always_ happens by means of tickling:
- *  - when a vcpu wakes up, it calls csched2_item_wake(), which calls
+ *  - when an item wakes up, it calls csched2_item_wake(), which calls
  *    runq_tickle();
  *  - when a migration is initiated in schedule.c, we call csched2_res_pick(),
  *    csched2_item_migrate() (which calls migrate()) and csched2_item_wake().
 *    csched2_res_pick() looks for the least loaded runq and returns just any
- *    of its processors. Then, csched2_item_migrate() just moves the vcpu to
+ *    of its processors. Then, csched2_item_migrate() just moves the item to
  *    the chosen runq, and it is again runq_tickle(), called by
  *    csched2_item_wake() that actually decides what pcpu to use within the
  *    chosen runq;
@@ -643,7 +643,7 @@ static inline bool has_cap(const struct csched2_item *svc)
  *
  * NB that rqd->smt_idle is different than rqd->idle.  rqd->idle
  * records pcpus that at are merely idle (i.e., at the moment do not
- * have a vcpu running on them).  But you have to manually filter out
+ * have an item running on them).  But you have to manually filter out
  * which pcpus have been tickled in order to find cores that are not
  * going to be busy soon.  Filtering out tickled cpus pairwise is a
  * lot of extra pain; so for rqd->smt_idle, we explicitly make so that
@@ -690,24 +690,24 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_item *svc)
 {
-    struct vcpu *v = svc->vcpu;
+    struct sched_item *item = svc->item;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
 
     for_each_affinity_balance_step( bs )
     {
-        int cpu = v->processor;
+        int cpu = sched_item_cpu(item);
 
-        if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(v->sched_item) )
+        if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(item) )
             continue;
 
-        affinity_balance_cpumask(v->sched_item, bs, cpumask_scratch_cpu(cpu));
+        affinity_balance_cpumask(item, bs, cpumask_scratch_cpu(cpu));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                    cpupool_domain_cpumask(v->domain));
+                    cpupool_domain_cpumask(item->domain));
 
         /*
-         * This is cases 1 or 3 (depending on bs): if v->processor is (still)
+         * This is cases 1 or 3 (depending on bs): if the processor is (still)
          * in our affinity, go for it, for cache betterness.
          */
         if ( likely(cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
@@ -729,7 +729,7 @@ static int get_fallback_cpu(struct csched2_item *svc)
          * We may well pick any valid pcpu from our soft-affinity, outside
          * of our current runqueue, but we decide not to. In fact, changing
          * runqueue is slow, affects load distribution, and is a source of
-         * overhead for the vcpus running on the other runqueue (we need the
+         * overhead for the items running on the other runqueue (we need the
          * lock). So, better do that as a consequence of a well informed
          * decision (or if we really don't have any other chance, as we will,
          * at step 5, if we get to there).
@@ -761,7 +761,7 @@ static int get_fallback_cpu(struct csched2_item *svc)
      * We can't be here.  But if that somehow happens (in non-debug builds),
      * at least return something which is both online and in our hard-affinity.
      */
-    return cpumask_any(cpumask_scratch_cpu(v->processor));
+    return cpumask_any(cpumask_scratch_cpu(sched_item_cpu(item)));
 }
 
 /*
@@ -790,7 +790,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int vcpu_on_runq(struct csched2_item *svc)
+static inline int item_on_runq(struct csched2_item *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -948,17 +948,17 @@ _runq_assign(struct csched2_item *svc, struct csched2_runqueue_data *rqd)
 
     update_max_weight(svc->rqd, svc->weight, 0);
 
-    /* Expected new load based on adding this vcpu */
+    /* Expected new load based on adding this item */
     rqd->b_avgload += svc->avgload;
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rqi:16;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.rqi=rqd->id;
         __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
                     sizeof(d),
@@ -968,13 +968,13 @@ _runq_assign(struct csched2_item *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct vcpu *vc)
+runq_assign(const struct scheduler *ops, struct sched_item *item)
 {
-    struct csched2_item *svc = vc->sched_item->priv;
+    struct csched2_item *svc = item->priv;
 
     ASSERT(svc->rqd == NULL);
 
-    _runq_assign(svc, c2rqd(ops, vc->processor));
+    _runq_assign(svc, c2rqd(ops, sched_item_cpu(item)));
 }
 
 static void
@@ -982,24 +982,24 @@ _runq_deassign(struct csched2_item *svc)
 {
     struct csched2_runqueue_data *rqd = svc->rqd;
 
-    ASSERT(!vcpu_on_runq(svc));
+    ASSERT(!item_on_runq(svc));
     ASSERT(!(svc->flags & CSFLAG_scheduled));
 
     list_del_init(&svc->rqd_elem);
     update_max_weight(rqd, 0, svc->weight);
 
-    /* Expected new load based on removing this vcpu */
+    /* Expected new load based on removing this item */
     rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
 
     svc->rqd = NULL;
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct vcpu *vc)
+runq_deassign(const struct scheduler *ops, struct sched_item *item)
 {
-    struct csched2_item *svc = vc->sched_item->priv;
+    struct csched2_item *svc = item->priv;
 
-    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+    ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
     _runq_deassign(svc);
 }
@@ -1202,15 +1202,15 @@ update_svc_load(const struct scheduler *ops,
                 struct csched2_item *svc, int change, s_time_t now)
 {
     struct csched2_private *prv = csched2_priv(ops);
-    s_time_t delta, vcpu_load;
+    s_time_t delta, item_load;
     unsigned int P, W;
 
     if ( change == -1 )
-        vcpu_load = 1;
+        item_load = 1;
     else if ( change == 1 )
-        vcpu_load = 0;
+        item_load = 0;
     else
-        vcpu_load = vcpu_runnable(svc->vcpu);
+        item_load = item_runnable(svc->item);
 
     W = prv->load_window_shift;
     P = prv->load_precision_shift;
@@ -1218,7 +1218,7 @@ update_svc_load(const struct scheduler *ops,
 
     if ( svc->load_last_update + (1ULL << W) < now )
     {
-        svc->avgload = vcpu_load << P;
+        svc->avgload = item_load << P;
     }
     else
     {
@@ -1231,7 +1231,7 @@ update_svc_load(const struct scheduler *ops,
         }
 
         svc->avgload = svc->avgload +
-                       ((delta * (vcpu_load << P)) >> W) -
+                       ((delta * (item_load << P)) >> W) -
                        ((delta * svc->avgload) >> W);
     }
     svc->load_last_update = now;
@@ -1243,14 +1243,14 @@ update_svc_load(const struct scheduler *ops,
     {
         struct {
             uint64_t v_avgload;
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned shift;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.v_avgload = svc->avgload;
         d.shift = P;
-        __trace_var(TRC_CSCHED2_UPDATE_VCPU_LOAD, 1,
+        __trace_var(TRC_CSCHED2_UPDATE_ITEM_LOAD, 1,
                     sizeof(d),
                     (unsigned char *)&d);
     }
@@ -1272,18 +1272,18 @@ static void
 runq_insert(const struct scheduler *ops, struct csched2_item *svc)
 {
     struct list_head *iter;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
     struct list_head * runq = &c2rqd(ops, cpu)->runq;
     int pos = 0;
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    ASSERT(!vcpu_on_runq(svc));
-    ASSERT(c2r(cpu) == c2r(svc->vcpu->processor));
+    ASSERT(!item_on_runq(svc));
+    ASSERT(c2r(cpu) == c2r(sched_item_cpu(svc->item)));
 
     ASSERT(&svc->rqd->runq == runq);
-    ASSERT(!is_idle_vcpu(svc->vcpu));
-    ASSERT(!svc->vcpu->sched_item->is_running);
+    ASSERT(!is_idle_item(svc->item));
+    ASSERT(!svc->item->is_running);
     ASSERT(!(svc->flags & CSFLAG_scheduled));
 
     list_for_each( iter, runq )
@@ -1300,11 +1300,11 @@ runq_insert(const struct scheduler *ops, struct csched2_item *svc)
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned pos;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.pos = pos;
         __trace_var(TRC_CSCHED2_RUNQ_POS, 1,
                     sizeof(d),
@@ -1314,7 +1314,7 @@ runq_insert(const struct scheduler *ops, struct csched2_item *svc)
 
 static inline void runq_remove(struct csched2_item *svc)
 {
-    ASSERT(vcpu_on_runq(svc));
+    ASSERT(item_on_runq(svc));
     list_del_init(&svc->runq_elem);
 }
 
@@ -1340,8 +1340,8 @@ static inline bool is_preemptable(const struct csched2_item *svc,
     if ( ratelimit <= CSCHED2_RATELIMIT_TICKLE_TOLERANCE )
         return true;
 
-    ASSERT(svc->vcpu->sched_item->is_running);
-    return now - svc->vcpu->sched_item->state_entry_time >
+    ASSERT(svc->item->is_running);
+    return now - svc->item->state_entry_time >
            ratelimit - CSCHED2_RATELIMIT_TICKLE_TOLERANCE;
 }
 
@@ -1369,17 +1369,17 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
 
     /*
      * We are dealing with cpus that are marked non-idle (i.e., that are not
-     * in rqd->idle). However, some of them may be running their idle vcpu,
+     * in rqd->idle). However, some of them may be running their idle item,
      * if taking care of tasklets. In that case, we want to leave it alone.
      */
-    if ( unlikely(is_idle_vcpu(cur->vcpu) ||
+    if ( unlikely(is_idle_item(cur->item) ||
          !is_preemptable(cur, now, MICROSECS(prv->ratelimit_us))) )
         return -1;
 
     burn_credits(rqd, cur, now);
 
     score = new->credit - cur->credit;
-    if ( new->vcpu->processor != cpu )
+    if ( sched_item_cpu(new->item) != cpu )
         score -= CSCHED2_MIGRATE_RESIST;
 
     /*
@@ -1390,21 +1390,21 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
      */
     if ( score > 0 )
     {
-        if ( cpumask_test_cpu(cpu, new->vcpu->sched_item->cpu_soft_affinity) )
+        if ( cpumask_test_cpu(cpu, new->item->cpu_soft_affinity) )
             score += CSCHED2_CREDIT_INIT;
 
-        if ( !cpumask_test_cpu(cpu, cur->vcpu->sched_item->cpu_soft_affinity) )
+        if ( !cpumask_test_cpu(cpu, cur->item->cpu_soft_affinity) )
             score += CSCHED2_CREDIT_INIT;
     }
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             int credit, score;
         } d;
-        d.dom = cur->vcpu->domain->domain_id;
-        d.vcpu = cur->vcpu->vcpu_id;
+        d.dom = cur->item->domain->domain_id;
+        d.item = cur->item->item_id;
         d.credit = cur->credit;
         d.score = score;
         __trace_var(TRC_CSCHED2_TICKLE_CHECK, 1,
@@ -1416,14 +1416,14 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
 }
 
 /*
- * Check what processor it is best to 'wake', for picking up a vcpu that has
+ * Check what processor it is best to 'wake', for picking up an item that has
  * just been put (back) in the runqueue. Logic is as follows:
  *  1. if there are idle processors in the runq, wake one of them;
- *  2. if there aren't idle processor, check the one were the vcpu was
+ *  2. if there aren't idle processors, check the one where the item was
  *     running before to see if we can preempt what's running there now
  *     (and hence doing just one migration);
- *  3. last stand: check all processors and see if the vcpu is in right
- *     of preempting any of the other vcpus running on them (this requires
+ *  3. last stand: check all processors and see if the item has the right
+ *     to preempt any of the other items running on them (this requires
  *     two migrations, and that's indeed why it is left as the last stand).
  *
  * Note that when we say 'idle processors' what we really mean is (pretty
@@ -1436,10 +1436,10 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
 {
     int i, ipid = -1;
     s_time_t max = 0;
-    struct sched_item *item = new->vcpu->sched_item;
-    unsigned int bs, cpu = new->vcpu->processor;
+    struct sched_item *item = new->item;
+    unsigned int bs, cpu = sched_item_cpu(item);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_cpumask(new->vcpu->domain);
+    cpumask_t *online = cpupool_domain_cpumask(item->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -1447,13 +1447,13 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned processor;
             int credit;
         } d;
-        d.dom = new->vcpu->domain->domain_id;
-        d.vcpu = new->vcpu->vcpu_id;
-        d.processor = new->vcpu->processor;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
+        d.processor = cpu;
         d.credit = new->credit;
         __trace_var(TRC_CSCHED2_TICKLE_NEW, 1,
                     sizeof(d),
@@ -1461,11 +1461,11 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     }
 
     /*
-     * Exclusive pinning is when a vcpu has hard-affinity with only one
-     * cpu, and there is no other vcpu that has hard-affinity with that
+     * Exclusive pinning is when an item has hard-affinity with only one
+     * cpu, and there is no other item that has hard-affinity with that
      * same cpu. This is infrequent, but if it happens, is for achieving
      * the most possible determinism, and least possible overhead for
-     * the vcpus in question.
+     * the items in question.
      *
      * Try to identify the vast majority of these situations, and deal
      * with them quickly.
@@ -1532,7 +1532,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     /*
      * Note that, if we are here, it means we have done the hard-affinity
      * balancing step of the loop, and hence what we have in cpumask_scratch
-     * is what we put there for last, i.e., new's vcpu_hard_affinity & online
+     * is what we put there for last, i.e., new's item_hard_affinity & online
      * which is exactly what we need for the next part of the function.
      */
 
@@ -1543,7 +1543,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
      *
      * For deciding which cpu to tickle, we use tickle_score(), which will
      * factor in both new's soft-affinity, and the soft-affinity of the
-     * vcpu running on each cpu that we consider.
+     * item running on each cpu that we consider.
      */
     cpumask_andnot(&mask, &rqd->active, &rqd->idle);
     cpumask_andnot(&mask, &mask, &rqd->tickled);
@@ -1588,7 +1588,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
         return;
     }
 
-    ASSERT(!is_idle_vcpu(curr_on_cpu(ipid)->vcpu));
+    ASSERT(!is_idle_item(curr_on_cpu(ipid)));
     SCHED_STAT_CRANK(tickled_busy_cpu);
  tickle:
     BUG_ON(ipid == -1);
@@ -1623,16 +1623,16 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
 
     /*
      * Under normal circumstances, snext->credit should never be less
-     * than -CSCHED2_MIN_TIMER.  However, under some circumstances, a
-     * vcpu with low credits may be allowed to run long enough that
+     * than -CSCHED2_MIN_TIMER.  However, under some circumstances, an
+     * item with low credits may be allowed to run long enough that
      * its credits are actually less than -CSCHED2_CREDIT_INIT.
-     * (Instances have been observed, for example, where a vcpu with
+     * (Instances have been observed, for example, where an item with
      * 200us of credit was allowed to run for 11ms, giving it -10.8ms
      * of credit.  Thus it was still negative even after the reset.)
      *
      * If this is the case for snext, we simply want to keep moving
      * everyone up until it is in the black again.  This is fair because
-     * none of the other vcpus want to run at the moment.
+     * none of the other items want to run at the moment.
      *
      * Rather than looping, however, we just calculate a multiplier,
      * avoiding an integer division and multiplication in the common
@@ -1649,16 +1649,16 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
         int start_credit;
 
         svc = list_entry(iter, struct csched2_item, rqd_elem);
-        svc_cpu = svc->vcpu->processor;
+        svc_cpu = sched_item_cpu(svc->item);
 
-        ASSERT(!is_idle_vcpu(svc->vcpu));
+        ASSERT(!is_idle_item(svc->item));
         ASSERT(svc->rqd == rqd);
 
         /*
          * If svc is running, it is our responsibility to make sure, here,
          * that the credit it has spent so far get accounted.
          */
-        if ( svc->vcpu == curr_on_cpu(svc_cpu)->vcpu )
+        if ( svc->item == curr_on_cpu(svc_cpu) )
         {
             burn_credits(rqd, svc, now);
             /*
@@ -1689,12 +1689,12 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
                 int credit_start, credit_end;
                 unsigned multiplier;
             } d;
-            d.dom = svc->vcpu->domain->domain_id;
-            d.vcpu = svc->vcpu->vcpu_id;
+            d.dom = svc->item->domain->domain_id;
+            d.item = svc->item->item_id;
             d.credit_start = start_credit;
             d.credit_end = svc->credit;
             d.multiplier = m;
@@ -1714,9 +1714,9 @@ void burn_credits(struct csched2_runqueue_data *rqd,
 {
     s_time_t delta;
 
-    ASSERT(svc == csched2_item(curr_on_cpu(svc->vcpu->processor)));
+    ASSERT(svc == csched2_item(curr_on_cpu(sched_item_cpu(svc->item))));
 
-    if ( unlikely(is_idle_vcpu(svc->vcpu)) )
+    if ( unlikely(is_idle_item(svc->item)) )
     {
         ASSERT(svc->credit == CSCHED2_IDLE_CREDIT);
         return;
@@ -1745,12 +1745,12 @@ void burn_credits(struct csched2_runqueue_data *rqd,
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             int credit, budget;
             int delta;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.credit = svc->credit;
         d.budget = has_cap(svc) ?  svc->budget : INT_MIN;
         d.delta = delta;
@@ -1764,39 +1764,39 @@ void burn_credits(struct csched2_runqueue_data *rqd,
  * Budget-related code.
  */
 
-static void park_vcpu(struct csched2_item *svc)
+static void park_item(struct csched2_item *svc)
 {
-    struct vcpu *v = svc->vcpu;
+    struct sched_item *item = svc->item;
 
     ASSERT(spin_is_locked(&svc->sdom->budget_lock));
 
     /*
-     * It was impossible to find budget for this vCPU, so it has to be
+     * It was impossible to find budget for this item, so it has to be
      * "parked". This implies it is not runnable, so we mark it as such in
-     * its pause_flags. If the vCPU is currently scheduled (which means we
+     * its pause_flags. If the item is currently scheduled (which means we
      * are here after being called from within csched_schedule()), flagging
      * is enough, as we'll choose someone else, and then context_saved()
      * will take care of updating the load properly.
      *
-     * If, OTOH, the vCPU is sitting in the runqueue (which means we are here
+     * If, OTOH, the item is sitting in the runqueue (which means we are here
      * after being called from within runq_candidate()), we must go all the
      * way down to taking it out of there, and updating the load accordingly.
      *
-     * In both cases, we also add it to the list of parked vCPUs of the domain.
+     * In both cases, we also add it to the list of parked items of the domain.
      */
-    __set_bit(_VPF_parked, &v->pause_flags);
-    if ( vcpu_on_runq(svc) )
+    sched_set_pause_flags(item, _VPF_parked);
+    if ( item_on_runq(svc) )
     {
         runq_remove(svc);
         update_load(svc->sdom->dom->cpupool->sched, svc->rqd, svc, -1, NOW());
     }
-    list_add(&svc->parked_elem, &svc->sdom->parked_vcpus);
+    list_add(&svc->parked_elem, &svc->sdom->parked_items);
 }
 
-static bool vcpu_grab_budget(struct csched2_item *svc)
+static bool item_grab_budget(struct csched2_item *svc)
 {
     struct csched2_dom *sdom = svc->sdom;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
@@ -1808,9 +1808,9 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
 
     /*
      * Here, svc->budget is <= 0 (as, if it was > 0, we'd have taken the if
-     * above!). That basically means the vCPU has overrun a bit --because of
+     * above!). That basically means the item has overrun a bit --because of
      * various reasons-- and we want to take that into account. With the +=,
-     * we are actually subtracting the amount of budget the vCPU has
+     * we are actually subtracting the amount of budget the item has
      * overconsumed, from the total domain budget.
      */
     sdom->budget += svc->budget;
@@ -1831,7 +1831,7 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
     else
     {
         svc->budget = 0;
-        park_vcpu(svc);
+        park_item(svc);
     }
 
     spin_unlock(&sdom->budget_lock);
@@ -1840,10 +1840,10 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
 }
 
 static void
-vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
+item_return_budget(struct csched2_item *svc, struct list_head *parked)
 {
     struct csched2_dom *sdom = svc->sdom;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
     ASSERT(list_empty(parked));
@@ -1852,7 +1852,7 @@ vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
     spin_lock(&sdom->budget_lock);
 
     /*
-     * The vCPU is stopping running (e.g., because it's blocking, or it has
+     * The item is stopping running (e.g., because it's blocking, or it has
      * been preempted). If it hasn't consumed all the budget it got when,
      * starting to run, put that remaining amount back in the domain's budget
      * pool.
@@ -1861,58 +1861,58 @@ vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
     svc->budget = 0;
 
     /*
-     * Making budget available again to the domain means that parked vCPUs
-     * may be unparked and run. They are, if any, in the domain's parked_vcpus
+     * Making budget available again to the domain means that parked items
+     * may be unparked and run. They are, if any, in the domain's parked_items
      * list, so we want to go through that and unpark them (so they can try
      * to get some budget).
      *
      * Touching the list requires the budget_lock, which we hold. Let's
      * therefore put everyone in that list in another, temporary list, which
-     * then the caller will traverse, unparking the vCPUs it finds there.
+     * then the caller will traverse, unparking the items it finds there.
      *
      * In fact, we can't do the actual unparking here, because that requires
-     * taking the runqueue lock of the vCPUs being unparked, and we can't
+     * taking the runqueue lock of the items being unparked, and we can't
      * take any runqueue locks while we hold a budget_lock.
      */
     if ( sdom->budget > 0 )
-        list_splice_init(&sdom->parked_vcpus, parked);
+        list_splice_init(&sdom->parked_items, parked);
 
     spin_unlock(&sdom->budget_lock);
 }
 
 static void
-unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
+unpark_parked_items(const struct scheduler *ops, struct list_head *items)
 {
     struct csched2_item *svc, *tmp;
     spinlock_t *lock;
 
-    list_for_each_entry_safe(svc, tmp, vcpus, parked_elem)
+    list_for_each_entry_safe(svc, tmp, items, parked_elem)
     {
         unsigned long flags;
         s_time_t now;
 
-        lock = item_schedule_lock_irqsave(svc->vcpu->sched_item, &flags);
+        lock = item_schedule_lock_irqsave(svc->item, &flags);
 
-        __clear_bit(_VPF_parked, &svc->vcpu->pause_flags);
+        sched_clear_pause_flags(svc->item, _VPF_parked);
         if ( unlikely(svc->flags & CSFLAG_scheduled) )
         {
             /*
              * We end here if a budget replenishment arrived between
              * csched2_schedule() (and, in particular, after a call to
-             * vcpu_grab_budget() that returned false), and
+             * item_grab_budget() that returned false), and
              * context_saved(). By setting __CSFLAG_delayed_runq_add,
-             * we tell context_saved() to put the vCPU back in the
+             * we tell context_saved() to put the item back in the
              * runqueue, from where it will compete with the others
              * for the newly replenished budget.
              */
             ASSERT( svc->rqd != NULL );
-            ASSERT( c2rqd(ops, svc->vcpu->processor) == svc->rqd );
+            ASSERT( c2rqd(ops, sched_item_cpu(svc->item)) == svc->rqd );
             __set_bit(__CSFLAG_delayed_runq_add, &svc->flags);
         }
-        else if ( vcpu_runnable(svc->vcpu) )
+        else if ( item_runnable(svc->item) )
         {
             /*
-             * The vCPU should go back to the runqueue, and compete for
+             * The item should go back to the runqueue, and compete for
              * the newly replenished budget, but only if it is actually
              * runnable (and was therefore offline only because of the
              * lack of budget).
@@ -1924,7 +1924,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         }
         list_del_init(&svc->parked_elem);
 
-        item_schedule_unlock_irqrestore(lock, flags, svc->vcpu->sched_item);
+        item_schedule_unlock_irqrestore(lock, flags, svc->item);
     }
 }
 
@@ -1954,7 +1954,7 @@ static void replenish_domain_budget(void* data)
      *
      * Even in cases of overrun or delay, however, we expect that in 99% of
      * cases, doing just one replenishment will be good enough for being able
-     * to unpark the vCPUs that are waiting for some budget.
+     * to unpark the items that are waiting for some budget.
      */
     do_replenish(sdom);
 
@@ -1974,7 +1974,7 @@ static void replenish_domain_budget(void* data)
     }
     /*
      * 2) if we overrun by more than tot_budget, then budget+tot_budget is
-     * still < 0, which means that we can't unpark the vCPUs. Let's bail,
+     * still < 0, which means that we can't unpark the items. Let's bail,
      * and wait for future replenishments.
      */
     if ( unlikely(sdom->budget <= 0) )
@@ -1988,14 +1988,14 @@ static void replenish_domain_budget(void* data)
 
     /*
      * As above, let's prepare the temporary list, out of the domain's
-     * parked_vcpus list, now that we hold the budget_lock. Then, drop such
+     * parked_items list, now that we hold the budget_lock. Then, drop such
      * lock, and pass the list to the unparking function.
      */
-    list_splice_init(&sdom->parked_vcpus, &parked);
+    list_splice_init(&sdom->parked_items, &parked);
 
     spin_unlock_irqrestore(&sdom->budget_lock, flags);
 
-    unpark_parked_vcpus(sdom->dom->cpupool->sched, &parked);
+    unpark_parked_items(sdom->dom->cpupool->sched, &parked);
 
  out:
     set_timer(&sdom->repl_timer, sdom->next_repl);
@@ -2003,37 +2003,36 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_vcpu_check(struct vcpu *vc)
+csched2_item_check(struct sched_item *item)
 {
-    struct csched2_item * const svc = csched2_item(vc->sched_item);
+    struct csched2_item * const svc = csched2_item(item);
     struct csched2_dom * const sdom = svc->sdom;
 
-    BUG_ON( svc->vcpu != vc );
-    BUG_ON( sdom != csched2_dom(vc->domain) );
+    BUG_ON( svc->item != item );
+    BUG_ON( sdom != csched2_dom(item->domain) );
     if ( sdom )
     {
-        BUG_ON( is_idle_vcpu(vc) );
-        BUG_ON( sdom->dom != vc->domain );
+        BUG_ON( is_idle_item(item) );
+        BUG_ON( sdom->dom != item->domain );
     }
     else
     {
-        BUG_ON( !is_idle_vcpu(vc) );
+        BUG_ON( !is_idle_item(item) );
     }
     SCHED_STAT_CRANK(item_check);
 }
-#define CSCHED2_VCPU_CHECK(_vc)  (csched2_vcpu_check(_vc))
+#define CSCHED2_ITEM_CHECK(item)  (csched2_item_check(item))
 #else
-#define CSCHED2_VCPU_CHECK(_vc)
+#define CSCHED2_ITEM_CHECK(item)
 #endif
 
 static void *
 csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
                     void *dd)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item *svc;
 
-    /* Allocate per-VCPU info */
+    /* Allocate per-ITEM info */
     svc = xzalloc(struct csched2_item);
     if ( svc == NULL )
         return NULL;
@@ -2042,10 +2041,10 @@ csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
     INIT_LIST_HEAD(&svc->runq_elem);
 
     svc->sdom = dd;
-    svc->vcpu = vc;
+    svc->item = item;
     svc->flags = 0U;
 
-    if ( ! is_idle_vcpu(vc) )
+    if ( ! is_idle_item(item) )
     {
         ASSERT(svc->sdom != NULL);
         svc->credit = CSCHED2_CREDIT_INIT;
@@ -2074,19 +2073,18 @@ csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
 static void
 csched2_item_sleep(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     SCHED_STAT_CRANK(item_sleep);
 
-    if ( curr_on_cpu(vc->processor) == item )
+    if ( curr_on_cpu(sched_item_cpu(item)) == item )
     {
-        tickle_cpu(vc->processor, svc->rqd);
+        tickle_cpu(sched_item_cpu(item), svc->rqd);
     }
-    else if ( vcpu_on_runq(svc) )
+    else if ( item_on_runq(svc) )
     {
-        ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+        ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
         update_load(ops, svc->rqd, svc, -1, NOW());
         runq_remove(svc);
     }
@@ -2097,14 +2095,13 @@ csched2_item_sleep(const struct scheduler *ops, struct sched_item *item)
 static void
 csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
-    unsigned int cpu = vc->processor;
+    unsigned int cpu = sched_item_cpu(item);
     s_time_t now;
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
 
     if ( unlikely(curr_on_cpu(cpu) == item) )
     {
@@ -2112,18 +2109,18 @@ csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
         goto out;
     }
 
-    if ( unlikely(vcpu_on_runq(svc)) )
+    if ( unlikely(item_on_runq(svc)) )
     {
         SCHED_STAT_CRANK(item_wake_onrunq);
         goto out;
     }
 
-    if ( likely(vcpu_runnable(vc)) )
+    if ( likely(item_runnable(item)) )
         SCHED_STAT_CRANK(item_wake_runnable);
     else
         SCHED_STAT_CRANK(item_wake_not_runnable);
 
-    /* If the context hasn't been saved for this vcpu yet, we can't put it on
+    /* If the context hasn't been saved for this item yet, we can't put it on
      * another runqueue.  Instead, we set a flag so that it will be put on the runqueue
      * after the context has been saved. */
     if ( unlikely(svc->flags & CSFLAG_scheduled) )
@@ -2134,15 +2131,15 @@ csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
 
     /* Add into the new runqueue if necessary */
     if ( svc->rqd == NULL )
-        runq_assign(ops, vc);
+        runq_assign(ops, item);
     else
-        ASSERT(c2rqd(ops, vc->processor) == svc->rqd );
+        ASSERT(c2rqd(ops, sched_item_cpu(item)) == svc->rqd );
 
     now = NOW();
 
     update_load(ops, svc->rqd, svc, 1, now);
-        
-    /* Put the VCPU on the runq */
+
+    /* Put the ITEM on the runq */
     runq_insert(ops, svc);
     runq_tickle(ops, svc, now);
 
@@ -2155,49 +2152,48 @@ csched2_item_yield(const struct scheduler *ops, struct sched_item *item)
 {
     struct csched2_item * const svc = csched2_item(item);
 
-    __set_bit(__CSFLAG_vcpu_yield, &svc->flags);
+    __set_bit(__CSFLAG_item_yield, &svc->flags);
 }
 
 static void
 csched2_context_saved(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
     spinlock_t *lock = item_schedule_lock_irq(item);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
 
-    BUG_ON( !is_idle_vcpu(vc) && svc->rqd != c2rqd(ops, vc->processor));
-    ASSERT(is_idle_vcpu(vc) || svc->rqd == c2rqd(ops, vc->processor));
+    BUG_ON( !is_idle_item(item) && svc->rqd != c2rqd(ops, sched_item_cpu(item)));
+    ASSERT(is_idle_item(item) || svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
-    /* This vcpu is now eligible to be put on the runqueue again */
+    /* This item is now eligible to be put on the runqueue again */
     __clear_bit(__CSFLAG_scheduled, &svc->flags);
 
     if ( unlikely(has_cap(svc) && svc->budget > 0) )
-        vcpu_return_budget(svc, &were_parked);
+        item_return_budget(svc, &were_parked);
 
     /* If someone wants it on the runqueue, put it there. */
     /*
      * NB: We can get rid of CSFLAG_scheduled by checking for
-     * vc->is_running and vcpu_on_runq(svc) here.  However,
+     * vc->is_running and item_on_runq(svc) here.  However,
      * since we're accessing the flags cacheline anyway,
      * it seems a bit pointless; especially as we have plenty of
      * bits free.
      */
     if ( __test_and_clear_bit(__CSFLAG_delayed_runq_add, &svc->flags)
-         && likely(vcpu_runnable(vc)) )
+         && likely(item_runnable(item)) )
     {
-        ASSERT(!vcpu_on_runq(svc));
+        ASSERT(!item_on_runq(svc));
 
         runq_insert(ops, svc);
         runq_tickle(ops, svc, now);
     }
-    else if ( !is_idle_vcpu(vc) )
+    else if ( !is_idle_item(item) )
         update_load(ops, svc->rqd, svc, -1, now);
 
     item_schedule_unlock_irq(lock, item);
 
-    unpark_parked_vcpus(ops, &were_parked);
+    unpark_parked_items(ops, &were_parked);
 }
 
 #define MAX_LOAD (STIME_MAX)
@@ -2205,9 +2201,8 @@ static struct sched_resource *
 csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
     struct csched2_private *prv = csched2_priv(ops);
-    struct vcpu *vc = item->vcpu;
     int i, min_rqi = -1, min_s_rqi = -1;
-    unsigned int new_cpu, cpu = vc->processor;
+    unsigned int new_cpu, cpu = sched_item_cpu(item);
     struct csched2_item *svc = csched2_item(item);
     s_time_t min_avgload = MAX_LOAD, min_s_avgload = MAX_LOAD;
     bool has_soft;
@@ -2245,7 +2240,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
     }
 
     cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
-                cpupool_domain_cpumask(vc->domain));
+                cpupool_domain_cpumask(item->domain));
 
     /*
      * First check to see if we're here because someone else suggested a place
@@ -2356,7 +2351,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
          * We have soft affinity, and we have a candidate runq, so go for it.
          *
          * Note that, to obtain the soft-affinity mask, we "just" put what we
-         * have in cpumask_scratch in && with vc->cpu_soft_affinity. This is
+         * have in cpumask_scratch in && with item->cpu_soft_affinity. This is
          * ok because:
          * - we know that item->cpu_hard_affinity and ->cpu_soft_affinity have
          *   a non-empty intersection (because has_soft is true);
@@ -2379,7 +2374,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
          * any suitable runq. But we did find one when considering hard
          * affinity, so go for it.
          *
-         * cpumask_scratch already has vc->cpu_hard_affinity &
+         * cpumask_scratch already has item->cpu_hard_affinity &
          * cpupool_domain_cpumask() in it, so it's enough that we filter
          * with the cpus of the runq.
          */
@@ -2410,11 +2405,11 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
     {
         struct {
             uint64_t b_avgload;
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rq_id:16, new_cpu:16;
         } d;
-        d.dom = vc->domain->domain_id;
-        d.vcpu = vc->vcpu_id;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
         d.rq_id = min_rqi;
         d.b_avgload = min_avgload;
         d.new_cpu = new_cpu;
@@ -2433,10 +2428,10 @@ typedef struct {
     struct csched2_item * best_push_svc, *best_pull_svc;
     /* NB: Read by consider() */
     struct csched2_runqueue_data *lrqd;
-    struct csched2_runqueue_data *orqd;                  
+    struct csched2_runqueue_data *orqd;
 } balance_state_t;
 
-static void consider(balance_state_t *st, 
+static void consider(balance_state_t *st,
                      struct csched2_item *push_svc,
                      struct csched2_item *pull_svc)
 {
@@ -2475,17 +2470,17 @@ static void migrate(const struct scheduler *ops,
                     struct csched2_runqueue_data *trqd,
                     s_time_t now)
 {
-    int cpu = svc->vcpu->processor;
-    struct sched_item *item = svc->vcpu->sched_item;
+    struct sched_item *item = svc->item;
+    int cpu = sched_item_cpu(item);
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rqi:16, trqi:16;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
         d.rqi = svc->rqd->id;
         d.trqi = trqd->id;
         __trace_var(TRC_CSCHED2_MIGRATE, 1,
@@ -2497,7 +2492,7 @@ static void migrate(const struct scheduler *ops,
     {
         /* It's running; mark it to migrate. */
         svc->migrate_rqd = trqd;
-        __set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
+        sched_set_pause_flags(item, _VPF_migrating);
         __set_bit(__CSFLAG_runq_migrate_request, &svc->flags);
         SCHED_STAT_CRANK(migrate_requested);
         tickle_cpu(cpu, svc->rqd);
@@ -2506,7 +2501,7 @@ static void migrate(const struct scheduler *ops,
     {
         int on_runq = 0;
         /* It's not running; just move it */
-        if ( vcpu_on_runq(svc) )
+        if ( item_on_runq(svc) )
         {
             runq_remove(svc);
             update_load(ops, svc->rqd, NULL, -1, now);
@@ -2515,14 +2510,14 @@ static void migrate(const struct scheduler *ops,
         _runq_deassign(svc);
 
         cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
-                    cpupool_domain_cpumask(svc->vcpu->domain));
+                    cpupool_domain_cpumask(item->domain));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                     &trqd->active);
-        svc->vcpu->processor = cpumask_cycle(trqd->pick_bias,
-                                             cpumask_scratch_cpu(cpu));
-        svc->vcpu->sched_item->res = per_cpu(sched_res, svc->vcpu->processor);
-        trqd->pick_bias = svc->vcpu->processor;
-        ASSERT(svc->vcpu->processor < nr_cpu_ids);
+        sched_set_res(item, per_cpu(sched_res,
+                                    cpumask_cycle(trqd->pick_bias,
+                                                  cpumask_scratch_cpu(cpu))));
+        trqd->pick_bias = sched_item_cpu(item);
+        ASSERT(sched_item_cpu(item) < nr_cpu_ids);
 
         _runq_assign(svc, trqd);
         if ( on_runq )
@@ -2542,14 +2537,14 @@ static void migrate(const struct scheduler *ops,
  *  - svc is not already flagged to migrate,
  *  - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool vcpu_is_migrateable(struct csched2_item *svc,
+static bool item_is_migrateable(struct csched2_item *svc,
                                   struct csched2_runqueue_data *rqd)
 {
-    struct vcpu *v = svc->vcpu;
-    int cpu = svc->vcpu->processor;
+    struct sched_item *item = svc->item;
+    int cpu = sched_item_cpu(item);
 
-    cpumask_and(cpumask_scratch_cpu(cpu), v->sched_item->cpu_hard_affinity,
-                cpupool_domain_cpumask(v->domain));
+    cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
+                cpupool_domain_cpumask(item->domain));
 
     return !(svc->flags & CSFLAG_runq_migrate_request) &&
            cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active);
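
The same predicate as a stand-alone sketch, with cpumasks reduced to
plain 64-bit masks purely for illustration (Xen's cpumask_t helpers
are deliberately not modelled):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool toy_is_migrateable(uint64_t hard_affinity, uint64_t pool_cpus,
                                   uint64_t rqd_active, bool migrate_pending)
    {
        /* Effective set of pcpus the item may use right now. */
        uint64_t effective = hard_affinity & pool_cpus;

        /* Migratable iff no request is already pending and the target
         * runqueue has at least one pcpu in the effective set. */
        return !migrate_pending && (effective & rqd_active) != 0;
    }

    int main(void)
    {
        /* Item pinned to cpus 0-3, pool spans 0-7, target runq is 4-7. */
        printf("%d\n", toy_is_migrateable(0x0f, 0xff, 0xf0, false)); /* 0 */
        printf("%d\n", toy_is_migrateable(0x0f, 0xff, 0x0c, false)); /* 1 */
        return 0;
    }
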
@@ -2586,7 +2581,7 @@ retry:
     for_each_cpu(i, &prv->active_queues)
     {
         s_time_t delta;
-        
+
         st.orqd = prv->rqd + i;
 
         if ( st.orqd == st.lrqd
@@ -2594,7 +2589,7 @@ retry:
             continue;
 
         update_runq_load(ops, st.orqd, 0, now);
-    
+
         delta = st.lrqd->b_avgload - st.orqd->b_avgload;
         if ( delta < 0 )
             delta = -delta;
@@ -2617,7 +2612,7 @@ retry:
         s_time_t load_max;
         int cpus_max;
 
-        
+
         load_max = st.lrqd->b_avgload;
         if ( st.orqd->b_avgload > load_max )
             load_max = st.orqd->b_avgload;
@@ -2656,7 +2651,7 @@ retry:
                                            opt_overload_balance_tolerance)) )
                 goto out;
     }
-             
+
     /* Try to grab the other runqueue lock; if it's been taken in the
      * meantime, try the process over again.  This can't deadlock
      * because if it doesn't get any other rqd locks, it will simply
@@ -2696,17 +2691,17 @@ retry:
 
         update_svc_load(ops, push_svc, 0, now);
 
-        if ( !vcpu_is_migrateable(push_svc, st.orqd) )
+        if ( !item_is_migrateable(push_svc, st.orqd) )
             continue;
 
         list_for_each( pull_iter, &st.orqd->svc )
         {
             struct csched2_item * pull_svc = list_entry(pull_iter, struct csched2_item, rqd_elem);
-            
+
             if ( !inner_load_updated )
                 update_svc_load(ops, pull_svc, 0, now);
-        
-            if ( !vcpu_is_migrateable(pull_svc, st.lrqd) )
+
+            if ( !item_is_migrateable(pull_svc, st.lrqd) )
                 continue;
 
             consider(&st, push_svc, pull_svc);
@@ -2721,8 +2716,8 @@ retry:
     list_for_each( pull_iter, &st.orqd->svc )
     {
         struct csched2_item * pull_svc = list_entry(pull_iter, struct csched2_item, rqd_elem);
-        
-        if ( !vcpu_is_migrateable(pull_svc, st.lrqd) )
+
+        if ( !item_is_migrateable(pull_svc, st.lrqd) )
             continue;
 
         /* Consider pull only */
@@ -2745,8 +2740,7 @@ static void
 csched2_item_migrate(
     const struct scheduler *ops, struct sched_item *item, unsigned int new_cpu)
 {
-    struct vcpu *vc = item->vcpu;
-    struct domain *d = vc->domain;
+    struct domain *d = item->domain;
     struct csched2_item * const svc = csched2_item(item);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
@@ -2758,25 +2752,24 @@ csched2_item_migrate(
      * cpupool.
      *
      * And since there indeed is the chance that it is not part of it, all
-     * we must do is remove _and_ unassign the vCPU from any runqueue, as
+     * we must do is remove _and_ unassign the item from any runqueue, as
      * well as updating v->processor with the target, so that the suspend
      * process can continue.
      *
      * It will then be during resume that a new, meaningful, value for
      * v->processor will be chosen, and during actual domain unpause that
-     * the vCPU will be assigned to and added to the proper runqueue.
+     * the item will be assigned to and added to the proper runqueue.
      */
     if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_cpumask(d))) )
     {
         ASSERT(system_state == SYS_STATE_suspend);
-        if ( vcpu_on_runq(svc) )
+        if ( item_on_runq(svc) )
         {
             runq_remove(svc);
             update_load(ops, svc->rqd, NULL, -1, now);
         }
         _runq_deassign(svc);
-        vc->processor = new_cpu;
-        item->res = per_cpu(sched_res, new_cpu);
+        sched_set_res(item, per_cpu(sched_res, new_cpu));
         return;
     }
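
A toy rendering of the suspend-time shortcut just above; names and
types are invented for the sketch, and the real code also updates the
runqueue load when it removes the item:

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_item { bool on_runq; int cpu; };

    /* Returns true when the shortcut applied and the normal migration
     * path must be skipped. */
    static bool toy_suspend_shortcut(struct toy_item *it, int new_cpu,
                                     bool new_cpu_in_pool)
    {
        if (new_cpu_in_pool)
            return false;          /* normal path: pick runq, migrate() */

        it->on_runq = false;       /* stands in for runq_remove() */
        it->cpu = new_cpu;         /* retag only; placed properly at resume */
        return true;
    }

    int main(void)
    {
        struct toy_item it = { .on_runq = true, .cpu = 0 };
        bool handled = toy_suspend_shortcut(&it, 5, false);
        printf("handled=%d cpu=%d\n", handled, it.cpu); /* handled=1 cpu=5 */
        return 0;
    }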
 
@@ -2790,17 +2783,14 @@ csched2_item_migrate(
      * Do the actual movement toward new_cpu, and update vc->processor.
      * If we are changing runqueue, migrate() takes care of everything.
      * If we are not changing runqueue, we need to update vc->processor
-     * here. In fact, if, for instance, we are here because the vcpu's
+     * here. In fact, if, for instance, we are here because the item's
      * hard affinity changed, we don't want to risk leaving vc->processor
      * pointing to a pcpu where we can't run any longer.
      */
     if ( trqd != svc->rqd )
         migrate(ops, svc, trqd, now);
     else
-    {
-        vc->processor = new_cpu;
-        item->res = per_cpu(sched_res, new_cpu);
-    }
+        sched_set_res(item, per_cpu(sched_res, new_cpu));
 }
 
 static int
@@ -2812,18 +2802,18 @@ csched2_dom_cntl(
     struct csched2_dom * const sdom = csched2_dom(d);
     struct csched2_private *prv = csched2_priv(ops);
     unsigned long flags;
-    struct vcpu *v;
+    struct sched_item *item;
     int rc = 0;
 
     /*
      * Locking:
      *  - we must take the private lock for accessing the weights of the
-     *    vcpus of d, and/or the cap;
+     *    items of d, and/or the cap;
      *  - in the putinfo case, we also need the runqueue lock(s), for
      *    updating the max weight of the runqueue(s).
      *    If changing the cap, we also need the budget_lock, for updating
      *    the value of the domain budget pool (and the runqueue lock,
-     *    for adjusting the parameters and rescheduling any vCPU that is
+     *    for adjusting the parameters and rescheduling any item that is
      *    running at the time of the change).
      */
     switch ( op->cmd )
@@ -2845,18 +2835,18 @@ csched2_dom_cntl(
 
             sdom->weight = op->u.credit2.weight;
 
-            /* Update weights for vcpus, and max_weight for runqueues on which they reside */
-            for_each_vcpu ( d, v )
+            /* Update weights for items, and max_weight for runqueues on which they reside */
+            for_each_sched_item ( d, item )
             {
-                struct csched2_item *svc = csched2_item(v->sched_item);
-                spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
+                struct csched2_item *svc = csched2_item(item);
+                spinlock_t *lock = item_schedule_lock(item);
 
-                ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
+                ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
 
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
         }
         /* Cap */
@@ -2865,8 +2855,8 @@ csched2_dom_cntl(
             struct csched2_item *svc;
             spinlock_t *lock;
 
-            /* Cap is only valid if it's below 100 * nr_of_vCPUS */
-            if ( op->u.credit2.cap > 100 * sdom->nr_vcpus )
+            /* Cap is only valid if it's below 100 * nr_of_items */
+            if ( op->u.credit2.cap > 100 * sdom->nr_items )
             {
                 rc = -EINVAL;
                 write_unlock_irqrestore(&prv->lock, flags);
@@ -2879,23 +2869,23 @@ csched2_dom_cntl(
             spin_unlock(&sdom->budget_lock);
 
             /*
-             * When trying to get some budget and run, each vCPU will grab
-             * from the pool 1/N (with N = nr of vCPUs of the domain) of
-             * the total budget. Roughly speaking, this means each vCPU will
+             * When trying to get some budget and run, each item will grab
+             * from the pool 1/N (with N = nr of items of the domain) of
+             * the total budget. Roughly speaking, this means each item will
              * have at least one chance to run during every period.
              */
-            for_each_vcpu ( d, v )
+            for_each_sched_item ( d, item )
             {
-                svc = csched2_item(v->sched_item);
-                lock = item_schedule_lock(svc->vcpu->sched_item);
+                svc = csched2_item(item);
+                lock = item_schedule_lock(item);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
                  * which then won't happen because, in csched2_runtime(),
                  * CSCHED2_MIN_TIMER is what would be used anyway.
                  */
-                svc->budget_quota = max(sdom->tot_budget / sdom->nr_vcpus,
+                svc->budget_quota = max(sdom->tot_budget / sdom->nr_items,
                                         CSCHED2_MIN_TIMER);
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
 
             if ( sdom->cap == 0 )
@@ -2905,7 +2895,7 @@ csched2_dom_cntl(
                  * and queue its first replenishment event.
                  *
                  * Since cap is currently disabled for this domain, we
-                 * know no vCPU is messing with the domain's budget, and
+                 * know no item is messing with the domain's budget, and
                  * the replenishment timer is still off.
                  * For these reasons, it is safe to do the following without
                  * taking the budget_lock.
@@ -2915,42 +2905,42 @@ csched2_dom_cntl(
                 set_timer(&sdom->repl_timer, sdom->next_repl);
 
                 /*
-                 * Now, let's enable budget accounting for all the vCPUs.
+                 * Now, let's enable budget accounting for all the items.
                  * For making sure that they will start to honour the domain's
                  * cap, we set their budget to 0.
                  * This way, as soon as they will try to run, they will have
                  * to get some budget.
                  *
-                 * For the vCPUs that are already running, we trigger the
+                 * For the items that are already running, we trigger the
                  * scheduler on their pCPU. When, as a consequence of this,
                  * csched2_schedule() will run, it will figure out there is
-                 * no budget, and the vCPU will try to get some (and be parked,
+                 * no budget, and the item will try to get some (and be parked,
                  * if there's none, and we'll switch to someone else).
                  */
-                for_each_vcpu ( d, v )
+                for_each_sched_item ( d, item )
                 {
-                    svc = csched2_item(v->sched_item);
-                    lock = item_schedule_lock(svc->vcpu->sched_item);
-                    if ( v->sched_item->is_running )
+                    svc = csched2_item(item);
+                    lock = item_schedule_lock(item);
+                    if ( item->is_running )
                     {
-                        unsigned int cpu = v->processor;
+                        unsigned int cpu = sched_item_cpu(item);
                         struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
 
-                        ASSERT(curr_on_cpu(cpu)->vcpu == v);
+                        ASSERT(curr_on_cpu(cpu) == item);
 
                         /*
-                         * We are triggering a reschedule on the vCPU's
+                         * We are triggering a reschedule on the item's
                          * pCPU. That will run burn_credits() and, since
-                         * the vCPU is capped now, it would charge all the
+                         * the item is capped now, it would charge all the
                          * execution time of this last round as budget as
-                         * well. That will make the vCPU budget go negative,
+                         * well. That will make the item budget go negative,
                          * potentially by a large amount, and it's unfair.
                          *
                          * To avoid that, call burn_credit() here, to do the
                          * accounting of this current running instance now,
                          * with budgetting still disabled. This does not
                          * prevent some small amount of budget being charged
-                         * to the vCPU (i.e., the amount of time it runs from
+                         * to the item (i.e., the amount of time it runs from
                          * now, to when scheduling happens). The budget will
                          * also go below 0, but a lot less than how it would
                          * if we don't do this.
@@ -2961,7 +2951,7 @@ csched2_dom_cntl(
                         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                     }
                     svc->budget = 0;
-                    item_schedule_unlock(lock, svc->vcpu->sched_item);
+                    item_schedule_unlock(lock, item);
                 }
             }
 
@@ -2973,30 +2963,30 @@ csched2_dom_cntl(
 
             stop_timer(&sdom->repl_timer);
 
-            /* Disable budget accounting for all the vCPUs. */
-            for_each_vcpu ( d, v )
+            /* Disable budget accounting for all the items. */
+            for_each_sched_item ( d, item )
             {
-                struct csched2_item *svc = csched2_item(v->sched_item);
-                spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
+                struct csched2_item *svc = csched2_item(item);
+                spinlock_t *lock = item_schedule_lock(item);
 
                 svc->budget = STIME_MAX;
                 svc->budget_quota = 0;
 
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
             sdom->cap = 0;
             /*
              * We are disabling the cap for this domain, which may have
-             * vCPUs waiting for a replenishment, so we unpark them all.
+             * items waiting for a replenishment, so we unpark them all.
              * Note that, since we have already disabled budget accounting
-             * for all the vCPUs of the domain, no currently running vCPU
-             * will be added to the parked vCPUs list any longer.
+             * for all the items of the domain, no currently running item
+             * will be added to the parked items list any longer.
              */
             spin_lock(&sdom->budget_lock);
-            list_splice_init(&sdom->parked_vcpus, &parked);
+            list_splice_init(&sdom->parked_items, &parked);
             spin_unlock(&sdom->budget_lock);
 
-            unpark_parked_vcpus(ops, &parked);
+            unpark_parked_items(ops, &parked);
         }
         write_unlock_irqrestore(&prv->lock, flags);
         break;
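
To make the cap/quota arithmetic above concrete, a small worked
example; the 10 ms replenishment period and the 500 us timer floor are
assumed here purely for the numbers, not quoted from the scheduler's
actual constants:

    #include <stdint.h>
    #include <stdio.h>

    #define PERIOD_US    10000ULL  /* assumed replenishment period */
    #define MIN_TIMER_US 500ULL    /* stand-in for CSCHED2_MIN_TIMER */

    int main(void)
    {
        unsigned int cap = 200;    /* percent; valid, as 200 <= 100 * 4 */
        unsigned int nr_items = 4;

        /* Domain-wide budget per period: cap% of one pcpu's time. */
        uint64_t tot_budget = PERIOD_US * cap / 100;

        /* Each item grabs 1/N of the pool, but never below the floor. */
        uint64_t quota = tot_budget / nr_items;
        if (quota < MIN_TIMER_US)
            quota = MIN_TIMER_US;

        printf("tot_budget=%lluus quota=%lluus\n",
               (unsigned long long)tot_budget, (unsigned long long)quota);
        /* -> tot_budget=20000us quota=5000us */
        return 0;
    }
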
@@ -3073,12 +3063,12 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
     sdom->dom = dom;
     sdom->weight = CSCHED2_DEFAULT_WEIGHT;
     sdom->cap = 0U;
-    sdom->nr_vcpus = 0;
+    sdom->nr_items = 0;
 
     init_timer(&sdom->repl_timer, replenish_domain_budget, sdom,
                cpumask_any(cpupool_domain_cpumask(dom)));
     spin_lock_init(&sdom->budget_lock);
-    INIT_LIST_HEAD(&sdom->parked_vcpus);
+    INIT_LIST_HEAD(&sdom->parked_items);
 
     write_lock_irqsave(&prv->lock, flags);
 
@@ -3112,34 +3102,32 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_item_insert(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item *svc = item->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     ASSERT(list_empty(&svc->runq_elem));
 
     /* csched2_res_pick() expects the pcpu lock to be held */
     lock = item_schedule_lock_irq(item);
 
-    item->res = csched2_res_pick(ops, item);
-    vc->processor = item->res->processor;
+    sched_set_res(item, csched2_res_pick(ops, item));
 
     spin_unlock_irq(lock);
 
     lock = item_schedule_lock_irq(item);
 
-    /* Add vcpu to runqueue of initial processor */
-    runq_assign(ops, vc);
+    /* Add item to runqueue of initial processor */
+    runq_assign(ops, item);
 
     item_schedule_unlock_irq(lock, item);
 
-    sdom->nr_vcpus++;
+    sdom->nr_items++;
 
     SCHED_STAT_CRANK(item_insert);
 
-    CSCHED2_VCPU_CHECK(vc);
+    CSCHED2_ITEM_CHECK(item);
 }
 
 static void
@@ -3153,11 +3141,10 @@ csched2_free_vdata(const struct scheduler *ops, void *priv)
 static void
 csched2_item_remove(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     ASSERT(list_empty(&svc->runq_elem));
 
     SCHED_STAT_CRANK(item_remove);
@@ -3165,14 +3152,14 @@ csched2_item_remove(const struct scheduler *ops, struct sched_item *item)
     /* Remove from runqueue */
     lock = item_schedule_lock_irq(item);
 
-    runq_deassign(ops, vc);
+    runq_deassign(ops, item);
 
     item_schedule_unlock_irq(lock, item);
 
-    svc->sdom->nr_vcpus--;
+    svc->sdom->nr_items--;
 }
 
-/* How long should we let this vcpu run for? */
+/* How long should we let this item run for? */
 static s_time_t
 csched2_runtime(const struct scheduler *ops, int cpu,
                 struct csched2_item *snext, s_time_t now)
@@ -3187,7 +3174,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
      * If we're idle, just stay so. Others (or external events)
      * will poke us when necessary.
      */
-    if ( is_idle_vcpu(snext->vcpu) )
+    if ( is_idle_item(snext->item) )
         return -1;
 
     /* General algorithm:
@@ -3204,8 +3191,8 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     if ( prv->ratelimit_us )
     {
         s_time_t ratelimit_min = MICROSECS(prv->ratelimit_us);
-        if ( snext->vcpu->sched_item->is_running )
-            ratelimit_min = snext->vcpu->sched_item->state_entry_time +
+        if ( snext->item->is_running )
+            ratelimit_min = snext->item->state_entry_time +
                             MICROSECS(prv->ratelimit_us) - now;
         if ( ratelimit_min > min_time )
             min_time = ratelimit_min;
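
The ratelimit floor computed above, as a toy function plus one worked
data point (plain microsecond integers stand in for s_time_t):

    #include <stdint.h>
    #include <stdio.h>

    static int64_t toy_ratelimit_floor(int64_t now, int64_t state_entry,
                                       int64_t ratelimit, int is_running)
    {
        /* A freshly scheduled item gets the full ratelimit; one that is
         * already running only gets whatever is left of its slice. */
        return is_running ? state_entry + ratelimit - now : ratelimit;
    }

    int main(void)
    {
        /* 1000us limit, running since t=200, now t=900 -> 300us left. */
        printf("%lld\n", (long long)toy_ratelimit_floor(900, 200, 1000, 1));
        return 0;
    }
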
@@ -3222,7 +3209,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     {
         struct csched2_item *swait = runq_elem(runq->next);
 
-        if ( ! is_idle_vcpu(swait->vcpu)
+        if ( ! is_idle_item(swait->item)
              && swait->credit > 0 )
         {
             rt_credit = snext->credit - swait->credit;
@@ -3236,7 +3223,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
      *
      * FIXME: See if we can eliminate this conversion if we know time
      * will be outside (MIN,MAX).  Probably requires pre-calculating
-     * credit values of MIN,MAX per vcpu, since each vcpu burns credit
+     * credit values of MIN,MAX per item, since each item burns credit
      * at a different rate.
      */
     if ( rt_credit > 0 )
@@ -3284,36 +3271,35 @@ runq_candidate(struct csched2_runqueue_data *rqd,
 
     *skipped = 0;
 
-    if ( unlikely(is_idle_vcpu(scurr->vcpu)) )
+    if ( unlikely(is_idle_item(scurr->item)) )
     {
         snext = scurr;
         goto check_runq;
     }
 
-    yield = __test_and_clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
+    yield = __test_and_clear_bit(__CSFLAG_item_yield, &scurr->flags);
 
     /*
-     * Return the current vcpu if it has executed for less than ratelimit.
-     * Adjuststment for the selected vcpu's credit and decision
+     * Return the current item if it has executed for less than ratelimit.
+     * Adjustment for the selected item's credit and decision
      * for how long it will run will be taken in csched2_runtime.
      *
      * Note that, if scurr is yielding, we don't let rate limiting kick in.
      * In fact, it may be the case that scurr is about to spin, and there's
      * no point forcing it to do so until rate limiting expires.
      */
-    if ( !yield && prv->ratelimit_us && vcpu_runnable(scurr->vcpu) &&
-         (now - scurr->vcpu->sched_item->state_entry_time) <
-          MICROSECS(prv->ratelimit_us) )
+    if ( !yield && prv->ratelimit_us && item_runnable(scurr->item) &&
+         (now - scurr->item->state_entry_time) < MICROSECS(prv->ratelimit_us) )
     {
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
                 unsigned runtime;
             } d;
-            d.dom = scurr->vcpu->domain->domain_id;
-            d.vcpu = scurr->vcpu->vcpu_id;
-            d.runtime = now - scurr->vcpu->sched_item->state_entry_time;
+            d.dom = scurr->item->domain->domain_id;
+            d.item = scurr->item->item_id;
+            d.runtime = now - scurr->item->state_entry_time;
             __trace_var(TRC_CSCHED2_RATELIMIT, 1,
                         sizeof(d),
                         (unsigned char *)&d);
@@ -3322,13 +3308,13 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     }
 
     /* If scurr has a soft-affinity, let's check whether cpu is part of it */
-    if ( has_soft_affinity(scurr->vcpu->sched_item) )
+    if ( has_soft_affinity(scurr->item) )
     {
-        affinity_balance_cpumask(scurr->vcpu->sched_item, BALANCE_SOFT_AFFINITY,
+        affinity_balance_cpumask(scurr->item, BALANCE_SOFT_AFFINITY,
                                  cpumask_scratch);
         if ( unlikely(!cpumask_test_cpu(cpu, cpumask_scratch)) )
         {
-            cpumask_t *online = cpupool_domain_cpumask(scurr->vcpu->domain);
+            cpumask_t *online = cpupool_domain_cpumask(scurr->item->domain);
 
             /* Ok, is any of the pcpus in scurr soft-affinity idle? */
             cpumask_and(cpumask_scratch, cpumask_scratch, &rqd->idle);
@@ -3356,10 +3342,10 @@ runq_candidate(struct csched2_runqueue_data *rqd,
      *
      * Of course, we also default to idle also if scurr is not runnable.
      */
-    if ( vcpu_runnable(scurr->vcpu) && !soft_aff_preempt )
+    if ( item_runnable(scurr->item) && !soft_aff_preempt )
         snext = scurr;
     else
-        snext = csched2_item(idle_vcpu[cpu]->sched_item);
+        snext = csched2_item(sched_idle_item(cpu));
 
  check_runq:
     list_for_each_safe( iter, temp, &rqd->runq )
@@ -3369,24 +3355,24 @@ runq_candidate(struct csched2_runqueue_data *rqd,
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
             } d;
-            d.dom = svc->vcpu->domain->domain_id;
-            d.vcpu = svc->vcpu->vcpu_id;
+            d.dom = svc->item->domain->domain_id;
+            d.item = svc->item->item_id;
             __trace_var(TRC_CSCHED2_RUNQ_CAND_CHECK, 1,
                         sizeof(d),
                         (unsigned char *)&d);
         }
 
-        /* Only consider vcpus that are allowed to run on this processor. */
-        if ( !cpumask_test_cpu(cpu, svc->vcpu->sched_item->cpu_hard_affinity) )
+        /* Only consider items that are allowed to run on this processor. */
+        if ( !cpumask_test_cpu(cpu, svc->item->cpu_hard_affinity) )
         {
             (*skipped)++;
             continue;
         }
 
         /*
-         * If a vcpu is meant to be picked up by another processor, and such
+         * If an item is meant to be picked up by another processor, and such
          * processor has not scheduled yet, leave it in the runqueue for him.
          */
         if ( svc->tickled_cpu != -1 && svc->tickled_cpu != cpu &&
@@ -3401,7 +3387,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          * If this is on a different processor, don't pull it unless
          * its credit is at least CSCHED2_MIGRATE_RESIST higher.
          */
-        if ( svc->vcpu->processor != cpu
+        if ( sched_item_cpu(svc->item) != cpu
              && snext->credit + CSCHED2_MIGRATE_RESIST > svc->credit )
         {
             (*skipped)++;
@@ -3416,7 +3402,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          * some budget, then choose it.
          */
         if ( (yield || svc->credit > snext->credit) &&
-             (!has_cap(svc) || vcpu_grab_budget(svc)) )
+             (!has_cap(svc) || item_grab_budget(svc)) )
             snext = svc;
 
         /* In any case, if we got this far, break. */
@@ -3426,12 +3412,12 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned tickled_cpu, skipped;
             int credit;
         } d;
-        d.dom = snext->vcpu->domain->domain_id;
-        d.vcpu = snext->vcpu->vcpu_id;
+        d.dom = snext->item->domain->domain_id;
+        d.item = snext->item->item_id;
         d.credit = snext->credit;
         d.tickled_cpu = snext->tickled_cpu;
         d.skipped = *skipped;
@@ -3463,14 +3449,15 @@ csched2_schedule(
 {
     const int cpu = smp_processor_id();
     struct csched2_runqueue_data *rqd;
-    struct csched2_item * const scurr = csched2_item(current->sched_item);
+    struct sched_item *curritem = current->sched_item;
+    struct csched2_item * const scurr = csched2_item(curritem);
     struct csched2_item *snext = NULL;
-    unsigned int skipped_vcpus = 0;
+    unsigned int skipped_items = 0;
     struct task_slice ret;
     bool tickled;
 
     SCHED_STAT_CRANK(schedule);
-    CSCHED2_VCPU_CHECK(current);
+    CSCHED2_ITEM_CHECK(curritem);
 
     BUG_ON(!cpumask_test_cpu(cpu, &csched2_priv(ops)->initialized));
 
@@ -3479,7 +3466,7 @@ csched2_schedule(
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
+    BUG_ON(!is_idle_item(curritem) && scurr->rqd != rqd);
 
     /* Clear "tickled" bit now that we've been scheduled */
     tickled = cpumask_test_cpu(cpu, &rqd->tickled);
@@ -3499,7 +3486,7 @@ csched2_schedule(
         d.cpu = cpu;
         d.rq_id = c2r(cpu);
         d.tasklet = tasklet_work_scheduled;
-        d.idle = is_idle_vcpu(current);
+        d.idle = is_idle_item(curritem);
         d.smt_idle = cpumask_test_cpu(cpu, &rqd->smt_idle);
         d.tickled = tickled;
         __trace_var(TRC_CSCHED2_SCHEDULE, 1,
@@ -3513,55 +3500,55 @@ csched2_schedule(
     /*
      *  Below 0 means that we are capped and we have overrun our budget.
      *  Let's try to get some more but, if we fail (e.g., because of the
-     *  other running vcpus), we will be parked.
+     *  other running items), we will be parked.
      */
     if ( unlikely(scurr->budget <= 0) )
-        vcpu_grab_budget(scurr);
+        item_grab_budget(scurr);
 
     /*
-     * Select next runnable local VCPU (ie top of local runq).
+     * Select next runnable local ITEM (ie top of local runq).
      *
-     * If the current vcpu is runnable, and has higher credit than
+     * If the current item is runnable, and has higher credit than
      * the next guy on the queue (or there is no one else), we want to
      * run him again.
      *
-     * If there's tasklet work to do, we want to chose the idle vcpu
+     * If there's tasklet work to do, we want to choose the idle item
      * for this processor, and mark the current for delayed runqueue
      * add.
      *
-     * If the current vcpu is runnable, and there's another runnable
+     * If the current item is runnable, and there's another runnable
      * candidate, we want to mark current for delayed runqueue add,
      * and remove the next guy from the queue.
      *
-     * If the current vcpu is not runnable, we want to chose the idle
-     * vcpu for this processor.
+     * If the current item is not runnable, we want to choose the idle
+     * item for this processor.
      */
     if ( tasklet_work_scheduled )
     {
-        __clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
+        __clear_bit(__CSFLAG_item_yield, &scurr->flags);
         trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL);
-        snext = csched2_item(idle_vcpu[cpu]->sched_item);
+        snext = csched2_item(sched_idle_item(cpu));
     }
     else
-        snext = runq_candidate(rqd, scurr, cpu, now, &skipped_vcpus);
+        snext = runq_candidate(rqd, scurr, cpu, now, &skipped_items);
 
-    /* If switching from a non-idle runnable vcpu, put it
+    /* If switching from a non-idle runnable item, put it
      * back on the runqueue. */
     if ( snext != scurr
-         && !is_idle_vcpu(scurr->vcpu)
-         && vcpu_runnable(current) )
+         && !is_idle_item(curritem)
+         && item_runnable(curritem) )
         __set_bit(__CSFLAG_delayed_runq_add, &scurr->flags);
 
     ret.migrated = 0;
 
     /* Accounting for non-idle tasks */
-    if ( !is_idle_vcpu(snext->vcpu) )
+    if ( !is_idle_item(snext->item) )
     {
         /* If switching, remove this from the runqueue and mark it scheduled */
         if ( snext != scurr )
         {
             ASSERT(snext->rqd == rqd);
-            ASSERT(!snext->vcpu->sched_item->is_running);
+            ASSERT(!snext->item->is_running);
 
             runq_remove(snext);
             __set_bit(__CSFLAG_scheduled, &snext->flags);
@@ -3576,19 +3563,19 @@ csched2_schedule(
 
         /*
          * The reset condition is "has a scheduler epoch come to an end?".
-         * The way this is enforced is checking whether the vcpu at the top
+         * The way this is enforced is checking whether the item at the top
          * of the runqueue has negative credits. This means the epochs have
         * variable length, as in one epoch expires when:
-         *  1) the vcpu at the top of the runqueue has executed for
+         *  1) the item at the top of the runqueue has executed for
          *     around 10 ms (with default parameters);
-         *  2) no other vcpu with higher credits wants to run.
+         *  2) no other item with higher credits wants to run.
          *
          * Here, where we want to check for reset, we need to make sure the
-         * proper vcpu is being used. In fact, runqueue_candidate() may have
-         * not returned the first vcpu in the runqueue, for various reasons
+         * proper item is being used. In fact, runq_candidate() may have
+         * not returned the first item in the runqueue, for various reasons
          * (e.g., affinity). Only trigger a reset when it does.
          */
-        if ( skipped_vcpus == 0 && snext->credit <= CSCHED2_CREDIT_RESET )
+        if ( skipped_items == 0 && snext->credit <= CSCHED2_CREDIT_RESET )
         {
             reset_credit(ops, cpu, now, snext);
             balance_load(ops, cpu, now);
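
The reset condition in isolation, for emphasis (toy code, not the
hypervisor's):

    #include <stdio.h>

    /* An epoch only ends when the chosen item really was the head of
     * the runqueue (nothing skipped for affinity or migration
     * resistance) and its credit reached the reset threshold. */
    static int toy_should_reset(unsigned int skipped, int credit,
                                int threshold)
    {
        return skipped == 0 && credit <= threshold;
    }

    int main(void)
    {
        printf("%d\n", toy_should_reset(0, -5, 0)); /* 1: epoch ends */
        printf("%d\n", toy_should_reset(2, -5, 0)); /* 0: head was skipped */
        return 0;
    }
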
@@ -3598,11 +3585,10 @@ csched2_schedule(
         snext->tickled_cpu = -1;
 
         /* Safe because lock for old processor is held */
-        if ( snext->vcpu->processor != cpu )
+        if ( sched_item_cpu(snext->item) != cpu )
         {
             snext->credit += CSCHED2_MIGRATE_COMPENSATION;
-            snext->vcpu->processor = cpu;
-            snext->vcpu->sched_item->res = per_cpu(sched_res, cpu);
+            sched_set_res(snext->item, per_cpu(sched_res, cpu));
             SCHED_STAT_CRANK(migrated);
             ret.migrated = 1;
         }
@@ -3636,20 +3622,20 @@ csched2_schedule(
      * Return task to run next...
      */
     ret.time = csched2_runtime(ops, cpu, snext, now);
-    ret.task = snext->vcpu->sched_item;
+    ret.task = snext->item;
 
-    CSCHED2_VCPU_CHECK(ret.task->vcpu);
+    CSCHED2_ITEM_CHECK(ret.task);
     return ret;
 }
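
Finally, the candidate-selection policy spelled out in the big comment
inside csched2_schedule() above, boiled down to a toy decision
function (the real code additionally weighs budget, soft affinity and
migration resistance):

    #include <stdio.h>

    enum toy_pick { PICK_IDLE, PICK_CURRENT, PICK_RUNQ_HEAD };

    static enum toy_pick toy_select(int tasklet_work, int current_runnable,
                                    int current_wins)
    {
        if (tasklet_work)
            return PICK_IDLE;      /* idle item, so the tasklet can run */
        if (!current_runnable)
            return PICK_IDLE;
        if (current_wins)
            return PICK_CURRENT;   /* current still has the most credit */
        return PICK_RUNQ_HEAD;     /* preempt; current gets delayed runq add */
    }

    int main(void)
    {
        printf("%d\n", toy_select(0, 1, 0)); /* 2: take the runqueue head */
        return 0;
    }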
 
 static void
-csched2_dump_vcpu(struct csched2_private *prv, struct csched2_item *svc)
+csched2_dump_item(struct csched2_private *prv, struct csched2_item *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
-            svc->vcpu->domain->domain_id,
-            svc->vcpu->vcpu_id,
+            svc->item->domain->domain_id,
+            svc->item->item_id,
             svc->flags,
-            svc->vcpu->processor);
+            sched_item_cpu(svc->item));
 
     printk(" credit=%" PRIi32" [w=%u]", svc->credit, svc->weight);
 
@@ -3674,12 +3660,12 @@ dump_pcpu(const struct scheduler *ops, int cpu)
            nr_cpu_ids, cpumask_bits(per_cpu(cpu_sibling_mask, cpu)),
            nr_cpu_ids, cpumask_bits(per_cpu(cpu_core_mask, cpu)));
 
-    /* current VCPU (nothing to say if that's the idle vcpu) */
+    /* current ITEM (nothing to say if that's the idle item) */
     svc = csched2_item(curr_on_cpu(cpu));
-    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    if ( svc && !is_idle_item(svc->item) )
     {
         printk("\trun: ");
-        csched2_dump_vcpu(prv, svc);
+        csched2_dump_item(prv, svc);
     }
 }
 
@@ -3736,7 +3722,7 @@ csched2_dump(const struct scheduler *ops)
     list_for_each( iter_sdom, &prv->sdom )
     {
         struct csched2_dom *sdom;
-        struct vcpu *v;
+        struct sched_item *item;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3744,19 +3730,19 @@ csched2_dump(const struct scheduler *ops)
                sdom->dom->domain_id,
                sdom->weight,
                sdom->cap,
-               sdom->nr_vcpus);
+               sdom->nr_items);
 
-        for_each_vcpu( sdom->dom, v )
+        for_each_sched_item( sdom->dom, item )
         {
-            struct csched2_item * const svc = csched2_item(v->sched_item);
+            struct csched2_item * const svc = csched2_item(item);
             spinlock_t *lock;
 
-            lock = item_schedule_lock(svc->vcpu->sched_item);
+            lock = item_schedule_lock(item);
 
             printk("\t%3d: ", ++loop);
-            csched2_dump_vcpu(prv, svc);
+            csched2_dump_item(prv, svc);
 
-            item_schedule_unlock(lock, svc->vcpu->sched_item);
+            item_schedule_unlock(lock, item);
         }
     }
 
@@ -3782,7 +3768,7 @@ csched2_dump(const struct scheduler *ops)
             if ( svc )
             {
                 printk("\t%3d: ", loop++);
-                csched2_dump_vcpu(prv, svc);
+                csched2_dump_item(prv, svc);
             }
         }
         spin_unlock(&rqd->lock);
@@ -3882,7 +3868,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     struct csched2_item *svc = vdata;
     unsigned rqi;
 
-    ASSERT(pdata && svc && is_idle_vcpu(svc->vcpu));
+    ASSERT(pdata && svc && is_idle_item(svc->item));
 
     /*
      * We own one runqueue lock already (from schedule_cpu_switch()). This
@@ -3895,7 +3881,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     ASSERT(!local_irq_is_enabled());
     write_lock(&prv->lock);
 
-    idle_vcpu[cpu]->sched_item->priv = vdata;
+    sched_idle_item(cpu)->priv = vdata;
 
     rqi = init_pdata(prv, pdata, cpu);
 
@@ -3946,7 +3932,7 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
      */
     ASSERT(spc && spc->runq_id != -1);
     ASSERT(cpumask_test_cpu(cpu, &prv->initialized));
-    
+
     /* Find the old runqueue and remove this cpu from it */
     rqd = prv->rqd + spc->runq_id;
 
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

WARNING: multiple messages have this Message-ID (diff)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 21/45] xen/sched: make credit2 scheduler vcpu agnostic.
Date: Mon,  6 May 2019 08:56:20 +0200	[thread overview]
Message-ID: <20190506065644.7415-22-jgross@suse.com> (raw)
Message-ID: <20190506065620.0FDd2RK-Q0W2_rqxxW2QxFuUNqG1RQZhSgX3x6raTus@z> (raw)
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>

Switch credit2 scheduler completely from vcpu to sched_item usage.

As we are touching lots of lines remove some white space at the end of
the line, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_credit2.c | 820 ++++++++++++++++++++++-----------------------
 1 file changed, 403 insertions(+), 417 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 0f0d6a3801..7918d46a23 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -45,7 +45,7 @@
 #define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
 #define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
 #define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
+#define TRC_CSCHED2_UPDATE_ITEM_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
 #define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
 #define TRC_CSCHED2_TICKLE_NEW       TRC_SCHED_CLASS_EVT(CSCHED2, 13)
 #define TRC_CSCHED2_RUNQ_MAX_WEIGHT  TRC_SCHED_CLASS_EVT(CSCHED2, 14)
@@ -74,13 +74,13 @@
  * Design:
  *
  * VMs "burn" credits based on their weight; higher weight means
- * credits burn more slowly.  The highest weight vcpu burns credits at
+ * credits burn more slowly.  The highest weight item burns credits at
  * a rate of 1 credit per nanosecond.  Others burn proportionally
  * more.
  *
- * vcpus are inserted into the runqueue by credit order.
+ * items are inserted into the runqueue by credit order.
  *
- * Credits are "reset" when the next vcpu in the runqueue is less than
+ * Credits are "reset" when the next item in the runqueue is less than
  * or equal to zero.  At that point, everyone's credits are "clipped"
  * to a small value, and a fixed credit is added to everyone.
  */
@@ -95,33 +95,33 @@
  *   be given a cap of 25%; a domain that must not use more than 1+1/2 of
  *   physical CPU time, will be given a cap of 150%;
  *
- * - caps are per-domain (not per-vCPU). If a domain has only 1 vCPU, and
- *   a 40% cap, that one vCPU will use 40% of one pCPU. If a somain has 4
- *   vCPUs, and a 200% cap, the equivalent of 100% time on 2 pCPUs will be
- *   split among the v vCPUs. How much each of the vCPUs will actually get,
+ * - caps are per-domain (not per-item). If a domain has only 1 item, and
+ *   a 40% cap, that one item will use 40% of one pCPU. If a somain has 4
+ *   items, and a 200% cap, the equivalent of 100% time on 2 pCPUs will be
+ *   split among the v items. How much each of the items will actually get,
  *   during any given interval of time, is unspecified (as it depends on
  *   various aspects: workload, system load, etc.). For instance, it is
- *   possible that, during a given time interval, 2 vCPUs use 100% each,
+ *   possible that, during a given time interval, 2 items use 100% each,
  *   and the other two use nothing; while during another time interval,
- *   two vCPUs use 80%, one uses 10% and the other 30%; or that each use
+ *   two items use 80%, one uses 10% and the other 30%; or that each use
  *   50% (and so on and so forth).
  *
  * For implementing this, we use the following approach:
  *
  * - each domain is given a 'budget', an each domain has a timer, which
  *   replenishes the domain's budget periodically. The budget is the amount
- *   of time the vCPUs of the domain can use every 'period';
+ *   of time the items of the domain can use every 'period';
  *
  * - the period is CSCHED2_BDGT_REPL_PERIOD, and is the same for all domains
  *   (but each domain has its own timer; so the all are periodic by the same
  *   period, but replenishment of the budgets of the various domains, at
  *   periods boundaries, are not synchronous);
  *
- * - when vCPUs run, they consume budget. When they don't run, they don't
- *   consume budget. If there is no budget left for the domain, no vCPU of
- *   that domain can run. If a vCPU tries to run and finds that there is no
+ * - when items run, they consume budget. When they don't run, they don't
+ *   consume budget. If there is no budget left for the domain, no item of
+ *   that domain can run. If an item tries to run and finds that there is no
  *   budget, it blocks.
- *   At whatever time a vCPU wants to run, it must check the domain's budget,
+ *   At whatever time an item wants to run, it must check the domain's budget,
  *   and if there is some, it can use it.
  *
  * - budget is replenished to the top of the capacity for the domain once
@@ -129,39 +129,39 @@
  *   though, the budget after a replenishment will always be at most equal
  *   to the total capacify of the domain ('tot_budget');
  *
- * - when a budget replenishment occurs, if there are vCPUs that had been
+ * - when a budget replenishment occurs, if there are items that had been
  *   blocked because of lack of budget, they'll be unblocked, and they will
  *   (potentially) be able to run again.
  *
  * Finally, some even more implementation related detail:
  *
- * - budget is stored in a domain-wide pool. vCPUs of the domain that want
+ * - budget is stored in a domain-wide pool. Items of the domain that want
  *   to run go to such pool, and grub some. When they do so, the amount
  *   they grabbed is _immediately_ removed from the pool. This happens in
- *   vcpu_grab_budget();
+ *   item_grab_budget();
  *
- * - when vCPUs stop running, if they've not consumed all the budget they
+ * - when items stop running, if they've not consumed all the budget they
  *   took, the leftover is put back in the pool. This happens in
- *   vcpu_return_budget();
+ *   item_return_budget();
  *
- * - the above means that a vCPU can find out that there is no budget and
+ * - the above means that an item can find out that there is no budget and
  *   block, not only if the cap has actually been reached (for this period),
- *   but also if some other vCPUs, in order to run, have grabbed a certain
+ *   but also if some other items, in order to run, have grabbed a certain
  *   quota of budget, no matter whether they've already used it all or not.
- *   A vCPU blocking because (any form of) lack of budget is said to be
- *   "parked", and such blocking happens in park_vcpu();
+ *   An item blocking because (any form of) lack of budget is said to be
+ *   "parked", and such blocking happens in park_item();
  *
- * - when a vCPU stops running, and puts back some budget in the domain pool,
+ * - when an item stops running, and puts back some budget in the domain pool,
  *   we need to check whether there is someone which has been parked and that
- *   can be unparked. This happens in unpark_parked_vcpus(), called from
+ *   can be unparked. This happens in unpark_parked_items(), called from
  *   csched2_context_saved();
  *
  * - of course, unparking happens also as a consequence of the domain's budget
  *   being replenished by the periodic timer. This also occurs by means of
  *   calling csched2_context_saved() (but from replenish_domain_budget());
  *
- * - parked vCPUs of a domain are kept in a (per-domain) list, called
- *   'parked_vcpus'). Manipulation of the list and of the domain-wide budget
+ * - parked items of a domain are kept in a (per-domain) list, called
+ *   'parked_items'). Manipulation of the list and of the domain-wide budget
  *   pool, must occur only when holding the 'budget_lock'.
  */
 
@@ -174,9 +174,9 @@
  *     pcpu_schedule_lock() / item_schedule_lock() (and friends),
  *   * a cpu may (try to) take a "remote" runqueue lock, e.g., for
  *     load balancing;
- *  + serializes runqueue operations (removing and inserting vcpus);
+ *  + serializes runqueue operations (removing and inserting items);
  *  + protects runqueue-wide data in csched2_runqueue_data;
- *  + protects vcpu parameters in csched2_item for the vcpu in the
+ *  + protects item parameters in csched2_item for the item in the
  *    runqueue.
  *
  * - Private scheduler lock
@@ -190,8 +190,8 @@
  *  + it is per-domain;
  *  + protects, in domains that have an utilization cap;
  *   * manipulation of the total budget of the domain (as it is shared
- *     among all vCPUs of the domain),
- *   * manipulation of the list of vCPUs that are blocked waiting for
+ *     among all items of the domain),
+ *   * manipulation of the list of items that are blocked waiting for
  *     some budget to be available.
  *
  * - Type:
@@ -228,9 +228,9 @@
  */
 #define CSCHED2_CREDIT_INIT          MILLISECS(10)
 /*
- * Amount of credit the idle vcpus have. It never changes, as idle
- * vcpus does not consume credits, and it must be lower than whatever
- * amount of credit 'regular' vcpu would end up with.
+ * Amount of credit the idle items have. It never changes, as idle
+ * items does not consume credits, and it must be lower than whatever
+ * amount of credit 'regular' item would end up with.
  */
 #define CSCHED2_IDLE_CREDIT          (-(1U<<30))
 /*
@@ -243,9 +243,9 @@
  * MIN_TIMER.
  */
 #define CSCHED2_MIGRATE_RESIST       ((opt_migrate_resist)*MICROSECS(1))
-/* How much to "compensate" a vcpu for L2 migration. */
+/* How much to "compensate" an item for L2 migration. */
 #define CSCHED2_MIGRATE_COMPENSATION MICROSECS(50)
-/* How tolerant we should be when peeking at runtime of vcpus on other cpus */
+/* How tolerant we should be when peeking at runtime of items on other cpus */
 #define CSCHED2_RATELIMIT_TICKLE_TOLERANCE MICROSECS(50)
 /* Reset: Value below which credit will be reset. */
 #define CSCHED2_CREDIT_RESET         0
@@ -258,7 +258,7 @@
  * Flags
  */
 /*
- * CSFLAG_scheduled: Is this vcpu either running on, or context-switching off,
+ * CSFLAG_scheduled: Is this item either running on, or context-switching off,
  * a physical cpu?
  * + Accessed only with runqueue lock held
  * + Set when chosen as next in csched2_schedule().
@@ -280,21 +280,21 @@
 #define __CSFLAG_delayed_runq_add 2
 #define CSFLAG_delayed_runq_add (1U<<__CSFLAG_delayed_runq_add)
 /*
- * CSFLAG_runq_migrate_request: This vcpu is being migrated as a result of a
+ * CSFLAG_runq_migrate_request: This item is being migrated as a result of a
  * credit2-initiated runq migrate request; migrate it to the runqueue indicated
- * in the svc struct. 
+ * in the svc struct.
  */
 #define __CSFLAG_runq_migrate_request 3
 #define CSFLAG_runq_migrate_request (1U<<__CSFLAG_runq_migrate_request)
 /*
- * CSFLAG_vcpu_yield: this vcpu was running, and has called vcpu_yield(). The
+ * CSFLAG_item_yield: this item was running, and has called vcpu_yield(). The
  * scheduler is invoked to see if we can give the cpu to someone else, and
- * get back to the yielding vcpu in a while.
+ * get back to the yielding item in a while.
  */
-#define __CSFLAG_vcpu_yield 4
-#define CSFLAG_vcpu_yield (1U<<__CSFLAG_vcpu_yield)
+#define __CSFLAG_item_yield 4
+#define CSFLAG_item_yield (1U<<__CSFLAG_item_yield)
 /*
- * CSFLAGS_pinned: this vcpu is currently 'pinned', i.e., has its hard
+ * CSFLAGS_pinned: this item is currently 'pinned', i.e., has its hard
  * affinity set to one and only 1 cpu (and, hence, can only run there).
  */
 #define __CSFLAG_pinned 5
@@ -306,7 +306,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
 /*
  * Load tracking and load balancing
  *
- * Load history of runqueues and vcpus is accounted for by using an
+ * Load history of runqueues and items is accounted for by using an
  * exponential weighted moving average algorithm. However, instead of using
  * fractions,we shift everything to left by the number of bits we want to
  * use for representing the fractional part (Q-format).
@@ -326,7 +326,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  *
  * where W is the length of the window, P the multiplier for transitiong into
  * Q-format fixed point arithmetic and load is the instantaneous load of a
- * runqueue, which basically is the number of runnable vcpus there are on the
+ * runqueue, which basically is the number of runnable items there are on the
  * runqueue (for the meaning of the other terms, look at the doc comment to
  *  update_runq_load()).
  *
@@ -338,7 +338,7 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  * The maximum possible value for the average load, which we want to store in
  * s_time_t type variables (i.e., we have 63 bits available) is load*P. This
  * means that, with P 18 bits wide, load can occupy 45 bits. This in turn
- * means we can have 2^45 vcpus in each runqueue, before overflow occurs!
+ * means we can have 2^45 items in each runqueue, before overflow occurs!
  *
  * However, it can happen that, at step j+1, if:
  *
@@ -354,13 +354,13 @@ integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
  *
  *  2^(63 - 30 - 18) = 2^15 = 32768
  *
- * So 32768 is the maximum number of vcpus the we can have in a runqueue,
+ * So 32768 is the maximum number of items the we can have in a runqueue,
  * at any given time, and still not have problems with the load tracking
  * calculations... and this is more than fine.
  *
  * As a matter of fact, since we are using microseconds granularity, we have
  * W=2^20. So, still with 18 fractional bits and a 1 second long window, there
- * may be 2^25 = 33554432 vcpus in a runq before we have to start thinking
+ * may be 2^25 = 33554432 items in a runq before we have to start thinking
  * about overflow.
  */
 
@@ -468,7 +468,7 @@ struct csched2_runqueue_data {
     struct list_head runq;     /* Ordered list of runnable vms               */
     int id;                    /* ID of this runqueue (-1 if invalid)        */
 
-    int load;                  /* Instantaneous load (num of non-idle vcpus) */
+    int load;                  /* Instantaneous load (num of non-idle items) */
     s_time_t load_last_update; /* Last time average was updated              */
     s_time_t avgload;          /* Decaying queue load                        */
     s_time_t b_avgload;        /* Decaying queue load modified by balancing  */
@@ -478,8 +478,8 @@ struct csched2_runqueue_data {
         tickled,               /* Have been asked to go through schedule     */
         idle;                  /* Currently idle pcpus                       */
 
-    struct list_head svc;      /* List of all vcpus assigned to the runqueue */
-    unsigned int max_weight;   /* Max weight of the vcpus in this runqueue   */
+    struct list_head svc;      /* List of all items assigned to the runqueue */
+    unsigned int max_weight;   /* Max weight of the items in this runqueue   */
     unsigned int pick_bias;    /* Last picked pcpu. Start from it next time  */
 };
 
@@ -509,20 +509,20 @@ struct csched2_pcpu {
 };
 
 /*
- * Virtual CPU
+ * Schedule Item
  */
 struct csched2_item {
     struct csched2_dom *sdom;          /* Up-pointer to domain                */
-    struct vcpu *vcpu;                 /* Up-pointer, to vcpu                 */
+    struct sched_item *item;           /* Up-pointer, to schedule item        */
     struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue          */
 
     int credit;                        /* Current amount of credit            */
-    unsigned int weight;               /* Weight of this vcpu                 */
+    unsigned int weight;               /* Weight of this item                 */
     unsigned int residual;             /* Reminder of div(max_weight/weight)  */
     unsigned flags;                    /* Status flags (16 bits would be ok,  */
     s_time_t budget;                   /* Current budget (if domains has cap) */
                                        /* but clear_bit() does not like that) */
-    s_time_t budget_quota;             /* Budget to which vCPU is entitled    */
+    s_time_t budget_quota;             /* Budget to which item is entitled    */
 
     s_time_t start_time;               /* Time we were scheduled (for credit) */
 
@@ -531,7 +531,7 @@ struct csched2_item {
     s_time_t avgload;                  /* Decaying queue load                 */
 
     struct list_head runq_elem;        /* On the runqueue (rqd->runq)         */
-    struct list_head parked_elem;      /* On the parked_vcpus list            */
+    struct list_head parked_elem;      /* On the parked_items list            */
     struct list_head rqd_elem;         /* On csched2_runqueue_data's svc list */
     struct csched2_runqueue_data *migrate_rqd; /* Pre-determined migr. target */
     int tickled_cpu;                   /* Cpu that will pick us (-1 if none)  */
@@ -549,12 +549,12 @@ struct csched2_dom {
 
     struct timer repl_timer;    /* Timer for periodic replenishment of budget */
     s_time_t next_repl;         /* Time at which next replenishment occurs    */
-    struct list_head parked_vcpus; /* List of CPUs waiting for budget         */
+    struct list_head parked_items; /* List of items waiting for budget        */
 
     struct list_head sdom_elem; /* On csched2_runqueue_data's sdom list       */
     uint16_t weight;            /* User specified weight                      */
     uint16_t cap;               /* User specified cap                         */
-    uint16_t nr_vcpus;          /* Number of vcpus of this domain             */
+    uint16_t nr_items;          /* Number of items of this domain             */
 };
 
 /*
@@ -593,7 +593,7 @@ static inline struct csched2_runqueue_data *c2rqd(const struct scheduler *ops,
     return &csched2_priv(ops)->rqd[c2r(cpu)];
 }
 
-/* Does the domain of this vCPU have a cap? */
+/* Does the domain of this item have a cap? */
 static inline bool has_cap(const struct csched2_item *svc)
 {
     return svc->budget != STIME_MAX;
@@ -611,24 +611,24 @@ static inline bool has_cap(const struct csched2_item *svc)
  *    smt_idle mask.
  *
  * Once we have such a mask, it is easy to implement a policy that, either:
- *  - uses fully idle cores first: it is enough to try to schedule the vcpus
+ *  - uses fully idle cores first: it is enough to try to schedule the items
  *    on pcpus from smt_idle mask first. This is what happens if
  *    sched_smt_power_savings was not set at boot (default), and it maximizes
  *    true parallelism, and hence performance;
- *  - uses already busy cores first: it is enough to try to schedule the vcpus
+ *  - uses already busy cores first: it is enough to try to schedule the items
  *    on pcpus that are idle, but are not in smt_idle. This is what happens if
  *    sched_smt_power_savings is set at boot, and it allows as more cores as
  *    possible to stay in low power states, minimizing power consumption.
  *
  * This logic is entirely implemented in runq_tickle(), and that is enough.
- * In fact, in this scheduler, placement of a vcpu on one of the pcpus of a
+ * In fact, in this scheduler, placement of an item on one of the pcpus of a
  * runq, _always_ happens by means of tickling:
- *  - when a vcpu wakes up, it calls csched2_item_wake(), which calls
+ *  - when an item wakes up, it calls csched2_item_wake(), which calls
  *    runq_tickle();
  *  - when a migration is initiated in schedule.c, we call csched2_res_pick(),
  *    csched2_item_migrate() (which calls migrate()) and csched2_item_wake().
  *    csched2_res_pick() looks for the least loaded runq and return just any
- *    of its processors. Then, csched2_item_migrate() just moves the vcpu to
+ *    of its processors. Then, csched2_item_migrate() just moves the item to
  *    the chosen runq, and it is again runq_tickle(), called by
  *    csched2_item_wake() that actually decides what pcpu to use within the
  *    chosen runq;
@@ -643,7 +643,7 @@ static inline bool has_cap(const struct csched2_item *svc)
  *
  * NB that rqd->smt_idle is different than rqd->idle.  rqd->idle
  * records pcpus that at are merely idle (i.e., at the moment do not
- * have a vcpu running on them).  But you have to manually filter out
+ * have an item running on them).  But you have to manually filter out
  * which pcpus have been tickled in order to find cores that are not
  * going to be busy soon.  Filtering out tickled cpus pairwise is a
  * lot of extra pain; so for rqd->smt_idle, we explicitly make so that
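To make the two placement policies concrete, here is a minimal illustrative
sketch (not part of the patch: pick_cpu_sketch() and its arguments are
invented for the example, only rqd->smt_idle and rqd->idle are real fields,
and the on-stack cpumask_t is for brevity only):

    static int pick_cpu_sketch(const struct csched2_runqueue_data *rqd,
                               const cpumask_t *ok, bool power_save)
    {
        cpumask_t tmp;

        if ( !power_save )
        {
            /* Fully idle cores first: maximizes true parallelism. */
            cpumask_and(&tmp, &rqd->smt_idle, ok);
            if ( !cpumask_empty(&tmp) )
                return cpumask_first(&tmp);
        }
        else
        {
            /* Idle threads on already busy cores first: lets whole
             * cores stay in low power states. */
            cpumask_andnot(&tmp, &rqd->idle, &rqd->smt_idle);
            cpumask_and(&tmp, &tmp, ok);
            if ( !cpumask_empty(&tmp) )
                return cpumask_first(&tmp);
        }

        /* Fall back to any idle pcpu at all. */
        cpumask_and(&tmp, &rqd->idle, ok);
        return cpumask_empty(&tmp) ? -1 : cpumask_first(&tmp);
    }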
@@ -690,24 +690,24 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_item *svc)
 {
-    struct vcpu *v = svc->vcpu;
+    struct sched_item *item = svc->item;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
 
     for_each_affinity_balance_step( bs )
     {
-        int cpu = v->processor;
+        int cpu = sched_item_cpu(item);
 
-        if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(v->sched_item) )
+        if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(item) )
             continue;
 
-        affinity_balance_cpumask(v->sched_item, bs, cpumask_scratch_cpu(cpu));
+        affinity_balance_cpumask(item, bs, cpumask_scratch_cpu(cpu));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                    cpupool_domain_cpumask(v->domain));
+                    cpupool_domain_cpumask(item->domain));
 
         /*
-         * This is cases 1 or 3 (depending on bs): if v->processor is (still)
+         * This is cases 1 or 3 (depending on bs): if the item's pcpu is (still)
          * in our affinity, go for it, for cache betterness.
          */
         if ( likely(cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
@@ -729,7 +729,7 @@ static int get_fallback_cpu(struct csched2_item *svc)
          * We may well pick any valid pcpu from our soft-affinity, outside
          * of our current runqueue, but we decide not to. In fact, changing
          * runqueue is slow, affects load distribution, and is a source of
-         * overhead for the vcpus running on the other runqueue (we need the
+         * overhead for the items running on the other runqueue (we need the
          * lock). So, better do that as a consequence of a well informed
          * decision (or if we really don't have any other chance, as we will,
          * at step 5, if we get to there).
@@ -761,7 +761,7 @@ static int get_fallback_cpu(struct csched2_item *svc)
      * We can't be here.  But if that somehow happen (in non-debug builds),
      * at least return something which both online and in our hard-affinity.
      */
-    return cpumask_any(cpumask_scratch_cpu(v->processor));
+    return cpumask_any(cpumask_scratch_cpu(sched_item_cpu(item)));
 }
 
 /*
@@ -790,7 +790,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int vcpu_on_runq(struct csched2_item *svc)
+static inline int item_on_runq(struct csched2_item *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -948,17 +948,17 @@ _runq_assign(struct csched2_item *svc, struct csched2_runqueue_data *rqd)
 
     update_max_weight(svc->rqd, svc->weight, 0);
 
-    /* Expected new load based on adding this vcpu */
+    /* Expected new load based on adding this item */
     rqd->b_avgload += svc->avgload;
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rqi:16;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.rqi=rqd->id;
         __trace_var(TRC_CSCHED2_RUNQ_ASSIGN, 1,
                     sizeof(d),
@@ -968,13 +968,13 @@ _runq_assign(struct csched2_item *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct vcpu *vc)
+runq_assign(const struct scheduler *ops, struct sched_item *item)
 {
-    struct csched2_item *svc = vc->sched_item->priv;
+    struct csched2_item *svc = item->priv;
 
     ASSERT(svc->rqd == NULL);
 
-    _runq_assign(svc, c2rqd(ops, vc->processor));
+    _runq_assign(svc, c2rqd(ops, sched_item_cpu(item)));
 }
 
 static void
@@ -982,24 +982,24 @@ _runq_deassign(struct csched2_item *svc)
 {
     struct csched2_runqueue_data *rqd = svc->rqd;
 
-    ASSERT(!vcpu_on_runq(svc));
+    ASSERT(!item_on_runq(svc));
     ASSERT(!(svc->flags & CSFLAG_scheduled));
 
     list_del_init(&svc->rqd_elem);
     update_max_weight(rqd, 0, svc->weight);
 
-    /* Expected new load based on removing this vcpu */
+    /* Expected new load based on removing this item */
     rqd->b_avgload = max_t(s_time_t, rqd->b_avgload - svc->avgload, 0);
 
     svc->rqd = NULL;
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct vcpu *vc)
+runq_deassign(const struct scheduler *ops, struct sched_item *item)
 {
-    struct csched2_item *svc = vc->sched_item->priv;
+    struct csched2_item *svc = item->priv;
 
-    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+    ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
     _runq_deassign(svc);
 }
@@ -1202,15 +1202,15 @@ update_svc_load(const struct scheduler *ops,
                 struct csched2_item *svc, int change, s_time_t now)
 {
     struct csched2_private *prv = csched2_priv(ops);
-    s_time_t delta, vcpu_load;
+    s_time_t delta, item_load;
     unsigned int P, W;
 
     if ( change == -1 )
-        vcpu_load = 1;
+        item_load = 1;
     else if ( change == 1 )
-        vcpu_load = 0;
+        item_load = 0;
     else
-        vcpu_load = vcpu_runnable(svc->vcpu);
+        item_load = item_runnable(svc->item);
 
     W = prv->load_window_shift;
     P = prv->load_precision_shift;
@@ -1218,7 +1218,7 @@ update_svc_load(const struct scheduler *ops,
 
     if ( svc->load_last_update + (1ULL << W) < now )
     {
-        svc->avgload = vcpu_load << P;
+        svc->avgload = item_load << P;
     }
     else
     {
@@ -1231,7 +1231,7 @@ update_svc_load(const struct scheduler *ops,
         }
 
         svc->avgload = svc->avgload +
-                       ((delta * (vcpu_load << P)) >> W) -
+                       ((delta * (item_load << P)) >> W) -
                        ((delta * svc->avgload) >> W);
     }
     svc->load_last_update = now;
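Viewed in isolation, the update above is a fixed-point exponentially
weighted moving average: avgload decays toward item_load << P with time
constant 2^W. A standalone restatement (illustrative sketch only):

    static s_time_t ewma_update(s_time_t avgload, s_time_t load,
                                s_time_t delta, unsigned int P, unsigned int W)
    {
        /* avgload += (delta / 2^W) * ((load << P) - avgload), kept in
         * fixed point by shifting instead of dividing. */
        return avgload + ((delta * (load << P)) >> W)
                       - ((delta * avgload) >> W);
    }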
@@ -1243,14 +1243,14 @@ update_svc_load(const struct scheduler *ops,
     {
         struct {
             uint64_t v_avgload;
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned shift;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.v_avgload = svc->avgload;
         d.shift = P;
-        __trace_var(TRC_CSCHED2_UPDATE_VCPU_LOAD, 1,
+        __trace_var(TRC_CSCHED2_UPDATE_ITEM_LOAD, 1,
                     sizeof(d),
                     (unsigned char *)&d);
     }
@@ -1272,18 +1272,18 @@ static void
 runq_insert(const struct scheduler *ops, struct csched2_item *svc)
 {
     struct list_head *iter;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
     struct list_head * runq = &c2rqd(ops, cpu)->runq;
     int pos = 0;
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    ASSERT(!vcpu_on_runq(svc));
-    ASSERT(c2r(cpu) == c2r(svc->vcpu->processor));
+    ASSERT(!item_on_runq(svc));
+    ASSERT(c2r(cpu) == c2r(sched_item_cpu(svc->item)));
 
     ASSERT(&svc->rqd->runq == runq);
-    ASSERT(!is_idle_vcpu(svc->vcpu));
-    ASSERT(!svc->vcpu->sched_item->is_running);
+    ASSERT(!is_idle_item(svc->item));
+    ASSERT(!svc->item->is_running);
     ASSERT(!(svc->flags & CSFLAG_scheduled));
 
     list_for_each( iter, runq )
@@ -1300,11 +1300,11 @@ runq_insert(const struct scheduler *ops, struct csched2_item *svc)
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned pos;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.pos = pos;
         __trace_var(TRC_CSCHED2_RUNQ_POS, 1,
                     sizeof(d),
@@ -1314,7 +1314,7 @@ runq_insert(const struct scheduler *ops, struct csched2_item *svc)
 
 static inline void runq_remove(struct csched2_item *svc)
 {
-    ASSERT(vcpu_on_runq(svc));
+    ASSERT(item_on_runq(svc));
     list_del_init(&svc->runq_elem);
 }
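For context, the body of the list_for_each() in runq_insert() above is
elided by the hunk; it walks the queue in credit order along these lines
(a sketch of the pre-existing logic, assuming the file's runq_elem()
helper):

    list_for_each( iter, runq )
    {
        struct csched2_item * iter_svc = runq_elem(iter);

        /* Stop at the first item we out-credit; insert before it. */
        if ( svc->credit > iter_svc->credit )
            break;
        pos++;
    }
    list_add_tail(&svc->runq_elem, iter);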
 
@@ -1340,8 +1340,8 @@ static inline bool is_preemptable(const struct csched2_item *svc,
     if ( ratelimit <= CSCHED2_RATELIMIT_TICKLE_TOLERANCE )
         return true;
 
-    ASSERT(svc->vcpu->sched_item->is_running);
-    return now - svc->vcpu->sched_item->state_entry_time >
+    ASSERT(svc->item->is_running);
+    return now - svc->item->state_entry_time >
            ratelimit - CSCHED2_RATELIMIT_TICKLE_TOLERANCE;
 }
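As an illustration (assuming the default 1000us ratelimit and a 50us
CSCHED2_RATELIMIT_TICKLE_TOLERANCE): an item that has already been running
for 960us is preemptable, since 960us > 1000us - 50us, while one that
started only 200us ago is not.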
 
@@ -1369,17 +1369,17 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
 
     /*
      * We are dealing with cpus that are marked non-idle (i.e., that are not
-     * in rqd->idle). However, some of them may be running their idle vcpu,
+     * in rqd->idle). However, some of them may be running their idle item,
      * if taking care of tasklets. In that case, we want to leave it alone.
      */
-    if ( unlikely(is_idle_vcpu(cur->vcpu) ||
+    if ( unlikely(is_idle_item(cur->item) ||
          !is_preemptable(cur, now, MICROSECS(prv->ratelimit_us))) )
         return -1;
 
     burn_credits(rqd, cur, now);
 
     score = new->credit - cur->credit;
-    if ( new->vcpu->processor != cpu )
+    if ( sched_item_cpu(new->item) != cpu )
         score -= CSCHED2_MIGRATE_RESIST;
 
     /*
@@ -1390,21 +1390,21 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
      */
     if ( score > 0 )
     {
-        if ( cpumask_test_cpu(cpu, new->vcpu->sched_item->cpu_soft_affinity) )
+        if ( cpumask_test_cpu(cpu, new->item->cpu_soft_affinity) )
             score += CSCHED2_CREDIT_INIT;
 
-        if ( !cpumask_test_cpu(cpu, cur->vcpu->sched_item->cpu_soft_affinity) )
+        if ( !cpumask_test_cpu(cpu, cur->item->cpu_soft_affinity) )
             score += CSCHED2_CREDIT_INIT;
     }
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             int credit, score;
         } d;
-        d.dom = cur->vcpu->domain->domain_id;
-        d.vcpu = cur->vcpu->vcpu_id;
+        d.dom = cur->item->domain->domain_id;
+        d.item = cur->item->item_id;
         d.credit = cur->credit;
         d.score = score;
         __trace_var(TRC_CSCHED2_TICKLE_CHECK, 1,
@@ -1416,14 +1416,14 @@ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
 }
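A worked example of the scoring, with invented numbers: if new holds 2ms
more credit than cur, the base score is that 2ms difference; tickling a cpu
other than new's current one subtracts CSCHED2_MIGRATE_RESIST, and each
favourable soft-affinity check above then adds CSCHED2_CREDIT_INIT, so a
pcpu inside new's soft affinity (and outside cur's) wins by a wide margin.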
 
 /*
- * Check what processor it is best to 'wake', for picking up a vcpu that has
+ * Check what processor it is best to 'wake', for picking up an item that has
  * just been put (back) in the runqueue. Logic is as follows:
  *  1. if there are idle processors in the runq, wake one of them;
- *  2. if there aren't idle processor, check the one were the vcpu was
+ *  2. if there aren't idle processors, check the one where the item was
  *     running before to see if we can preempt what's running there now
  *     (and hence doing just one migration);
- *  3. last stand: check all processors and see if the vcpu is in right
- *     of preempting any of the other vcpus running on them (this requires
+ *  3. last stand: check all processors and see if the item has the right
+ *     to preempt any of the other items running on them (this requires
  *     two migrations, and that's indeed why it is left as the last stand).
  *
  * Note that when we say 'idle processors' what we really mean is (pretty
@@ -1436,10 +1436,10 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
 {
     int i, ipid = -1;
     s_time_t max = 0;
-    struct sched_item *item = new->vcpu->sched_item;
-    unsigned int bs, cpu = new->vcpu->processor;
+    struct sched_item *item = new->item;
+    unsigned int bs, cpu = sched_item_cpu(item);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_cpumask(new->vcpu->domain);
+    cpumask_t *online = cpupool_domain_cpumask(item->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -1447,13 +1447,13 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned processor;
             int credit;
         } d;
-        d.dom = new->vcpu->domain->domain_id;
-        d.vcpu = new->vcpu->vcpu_id;
-        d.processor = new->vcpu->processor;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
+        d.processor = cpu;
         d.credit = new->credit;
         __trace_var(TRC_CSCHED2_TICKLE_NEW, 1,
                     sizeof(d),
@@ -1461,11 +1461,11 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     }
 
     /*
-     * Exclusive pinning is when a vcpu has hard-affinity with only one
-     * cpu, and there is no other vcpu that has hard-affinity with that
+     * Exclusive pinning is when an item has hard-affinity with only one
+     * cpu, and there is no other item that has hard-affinity with that
      * same cpu. This is infrequent, but if it happens, is for achieving
      * the most possible determinism, and least possible overhead for
-     * the vcpus in question.
+     * the items in question.
      *
      * Try to identify the vast majority of these situations, and deal
      * with them quickly.
@@ -1532,7 +1532,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
     /*
      * Note that, if we are here, it means we have done the hard-affinity
      * balancing step of the loop, and hence what we have in cpumask_scratch
-     * is what we put there for last, i.e., new's vcpu_hard_affinity & online
+     * is what we put there for last, i.e., new's cpu_hard_affinity & online
      * which is exactly what we need for the next part of the function.
      */
 
@@ -1543,7 +1543,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
      *
      * For deciding which cpu to tickle, we use tickle_score(), which will
      * factor in both new's soft-affinity, and the soft-affinity of the
-     * vcpu running on each cpu that we consider.
+     * item running on each cpu that we consider.
      */
     cpumask_andnot(&mask, &rqd->active, &rqd->idle);
     cpumask_andnot(&mask, &mask, &rqd->tickled);
@@ -1588,7 +1588,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_item *new, s_time_t now)
         return;
     }
 
-    ASSERT(!is_idle_vcpu(curr_on_cpu(ipid)->vcpu));
+    ASSERT(!is_idle_item(curr_on_cpu(ipid)));
     SCHED_STAT_CRANK(tickled_busy_cpu);
  tickle:
     BUG_ON(ipid == -1);
@@ -1623,16 +1623,16 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
 
     /*
      * Under normal circumstances, snext->credit should never be less
-     * than -CSCHED2_MIN_TIMER.  However, under some circumstances, a
-     * vcpu with low credits may be allowed to run long enough that
+     * than -CSCHED2_MIN_TIMER.  However, under some circumstances, an
+     * item with low credits may be allowed to run long enough that
      * its credits are actually less than -CSCHED2_CREDIT_INIT.
-     * (Instances have been observed, for example, where a vcpu with
+     * (Instances have been observed, for example, where an item with
      * 200us of credit was allowed to run for 11ms, giving it -10.8ms
      * of credit.  Thus it was still negative even after the reset.)
      *
      * If this is the case for snext, we simply want to keep moving
      * everyone up until it is in the black again.  This fair because
-     * none of the other vcpus want to run at the moment.
+     * none of the other items want to run at the moment.
      *
      * Rather than looping, however, we just calculate a multiplier,
      * avoiding an integer division and multiplication in the common
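The multiplier itself is computed a few lines further down, outside this
hunk; for reference, it is roughly:

    m = 1;
    if ( snext->credit < -CSCHED2_CREDIT_INIT )
        m += (-snext->credit) / CSCHED2_CREDIT_INIT;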
@@ -1649,16 +1649,16 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
         int start_credit;
 
         svc = list_entry(iter, struct csched2_item, rqd_elem);
-        svc_cpu = svc->vcpu->processor;
+        svc_cpu = sched_item_cpu(svc->item);
 
-        ASSERT(!is_idle_vcpu(svc->vcpu));
+        ASSERT(!is_idle_item(svc->item));
         ASSERT(svc->rqd == rqd);
 
         /*
          * If svc is running, it is our responsibility to make sure, here,
          * that the credit it has spent so far get accounted.
          */
-        if ( svc->vcpu == curr_on_cpu(svc_cpu)->vcpu )
+        if ( svc->item == curr_on_cpu(svc_cpu) )
         {
             burn_credits(rqd, svc, now);
             /*
@@ -1689,12 +1689,12 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
                 int credit_start, credit_end;
                 unsigned multiplier;
             } d;
-            d.dom = svc->vcpu->domain->domain_id;
-            d.vcpu = svc->vcpu->vcpu_id;
+            d.dom = svc->item->domain->domain_id;
+            d.item = svc->item->item_id;
             d.credit_start = start_credit;
             d.credit_end = svc->credit;
             d.multiplier = m;
@@ -1714,9 +1714,9 @@ void burn_credits(struct csched2_runqueue_data *rqd,
 {
     s_time_t delta;
 
-    ASSERT(svc == csched2_item(curr_on_cpu(svc->vcpu->processor)));
+    ASSERT(svc == csched2_item(curr_on_cpu(sched_item_cpu(svc->item))));
 
-    if ( unlikely(is_idle_vcpu(svc->vcpu)) )
+    if ( unlikely(is_idle_item(svc->item)) )
     {
         ASSERT(svc->credit == CSCHED2_IDLE_CREDIT);
         return;
@@ -1745,12 +1745,12 @@ void burn_credits(struct csched2_runqueue_data *rqd,
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             int credit, budget;
             int delta;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = svc->item->domain->domain_id;
+        d.item = svc->item->item_id;
         d.credit = svc->credit;
         d.budget = has_cap(svc) ?  svc->budget : INT_MIN;
         d.delta = delta;
@@ -1764,39 +1764,39 @@ void burn_credits(struct csched2_runqueue_data *rqd,
  * Budget-related code.
  */
 
-static void park_vcpu(struct csched2_item *svc)
+static void park_item(struct csched2_item *svc)
 {
-    struct vcpu *v = svc->vcpu;
+    struct sched_item *item = svc->item;
 
     ASSERT(spin_is_locked(&svc->sdom->budget_lock));
 
     /*
-     * It was impossible to find budget for this vCPU, so it has to be
+     * It was impossible to find budget for this item, so it has to be
      * "parked". This implies it is not runnable, so we mark it as such in
-     * its pause_flags. If the vCPU is currently scheduled (which means we
+     * its pause_flags. If the item is currently scheduled (which means we
      * are here after being called from within csched_schedule()), flagging
      * is enough, as we'll choose someone else, and then context_saved()
      * will take care of updating the load properly.
      *
-     * If, OTOH, the vCPU is sitting in the runqueue (which means we are here
+     * If, OTOH, the item is sitting in the runqueue (which means we are here
      * after being called from within runq_candidate()), we must go all the
      * way down to taking it out of there, and updating the load accordingly.
      *
-     * In both cases, we also add it to the list of parked vCPUs of the domain.
+     * In both cases, we also add it to the list of parked items of the domain.
      */
-    __set_bit(_VPF_parked, &v->pause_flags);
-    if ( vcpu_on_runq(svc) )
+    sched_set_pause_flags(item, _VPF_parked);
+    if ( item_on_runq(svc) )
     {
         runq_remove(svc);
         update_load(svc->sdom->dom->cpupool->sched, svc->rqd, svc, -1, NOW());
     }
-    list_add(&svc->parked_elem, &svc->sdom->parked_vcpus);
+    list_add(&svc->parked_elem, &svc->sdom->parked_items);
 }
 
-static bool vcpu_grab_budget(struct csched2_item *svc)
+static bool item_grab_budget(struct csched2_item *svc)
 {
     struct csched2_dom *sdom = svc->sdom;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
@@ -1808,9 +1808,9 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
 
     /*
      * Here, svc->budget is <= 0 (as, if it was > 0, we'd have taken the if
-     * above!). That basically means the vCPU has overrun a bit --because of
+     * above!). That basically means the item has overrun a bit --because of
      * various reasons-- and we want to take that into account. With the +=,
-     * we are actually subtracting the amount of budget the vCPU has
+     * we are actually subtracting the amount of budget the item has
      * overconsumed, from the total domain budget.
      */
     sdom->budget += svc->budget;
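Illustration with invented numbers: if the domain's pool holds 5ms and this
item comes back with svc->budget = -300us, the += above leaves the pool at
4.7ms, so the overrun is paid back before any item draws fresh budget.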
@@ -1831,7 +1831,7 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
     else
     {
         svc->budget = 0;
-        park_vcpu(svc);
+        park_item(svc);
     }
 
     spin_unlock(&sdom->budget_lock);
@@ -1840,10 +1840,10 @@ static bool vcpu_grab_budget(struct csched2_item *svc)
 }
 
 static void
-vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
+item_return_budget(struct csched2_item *svc, struct list_head *parked)
 {
     struct csched2_dom *sdom = svc->sdom;
-    unsigned int cpu = svc->vcpu->processor;
+    unsigned int cpu = sched_item_cpu(svc->item);
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
     ASSERT(list_empty(parked));
@@ -1852,7 +1852,7 @@ vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
     spin_lock(&sdom->budget_lock);
 
     /*
-     * The vCPU is stopping running (e.g., because it's blocking, or it has
+     * The item is ceasing to run (e.g., because it's blocking, or it has
      * been preempted). If it hasn't consumed all the budget it got when,
      * starting to run, put that remaining amount back in the domain's budget
      * pool.
@@ -1861,58 +1861,58 @@ vcpu_return_budget(struct csched2_item *svc, struct list_head *parked)
     svc->budget = 0;
 
     /*
-     * Making budget available again to the domain means that parked vCPUs
-     * may be unparked and run. They are, if any, in the domain's parked_vcpus
+     * Making budget available again to the domain means that parked items
+     * may be unparked and run. They are, if any, in the domain's parked_items
      * list, so we want to go through that and unpark them (so they can try
      * to get some budget).
      *
      * Touching the list requires the budget_lock, which we hold. Let's
      * therefore put everyone in that list in another, temporary list, which
-     * then the caller will traverse, unparking the vCPUs it finds there.
+     * then the caller will traverse, unparking the items it finds there.
      *
      * In fact, we can't do the actual unparking here, because that requires
-     * taking the runqueue lock of the vCPUs being unparked, and we can't
+     * taking the runqueue lock of the items being unparked, and we can't
      * take any runqueue locks while we hold a budget_lock.
      */
     if ( sdom->budget > 0 )
-        list_splice_init(&sdom->parked_vcpus, parked);
+        list_splice_init(&sdom->parked_items, parked);
 
     spin_unlock(&sdom->budget_lock);
 }
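Condensed, the caller-side pattern the comment above describes is (sketch):

    LIST_HEAD(parked);

    /* With the item's runqueue lock held: */
    item_return_budget(svc, &parked);   /* splices under budget_lock  */

    /* After all locks are dropped: */
    unpark_parked_items(ops, &parked);  /* retakes per-item runq locks */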
 
 static void
-unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
+unpark_parked_items(const struct scheduler *ops, struct list_head *items)
 {
     struct csched2_item *svc, *tmp;
     spinlock_t *lock;
 
-    list_for_each_entry_safe(svc, tmp, vcpus, parked_elem)
+    list_for_each_entry_safe(svc, tmp, items, parked_elem)
     {
         unsigned long flags;
         s_time_t now;
 
-        lock = item_schedule_lock_irqsave(svc->vcpu->sched_item, &flags);
+        lock = item_schedule_lock_irqsave(svc->item, &flags);
 
-        __clear_bit(_VPF_parked, &svc->vcpu->pause_flags);
+        sched_clear_pause_flags(svc->item, _VPF_parked);
         if ( unlikely(svc->flags & CSFLAG_scheduled) )
         {
             /*
              * We end here if a budget replenishment arrived between
              * csched2_schedule() (and, in particular, after a call to
-             * vcpu_grab_budget() that returned false), and
+             * item_grab_budget() that returned false), and
              * context_saved(). By setting __CSFLAG_delayed_runq_add,
-             * we tell context_saved() to put the vCPU back in the
+             * we tell context_saved() to put the item back in the
              * runqueue, from where it will compete with the others
              * for the newly replenished budget.
              */
             ASSERT( svc->rqd != NULL );
-            ASSERT( c2rqd(ops, svc->vcpu->processor) == svc->rqd );
+            ASSERT( c2rqd(ops, sched_item_cpu(svc->item)) == svc->rqd );
             __set_bit(__CSFLAG_delayed_runq_add, &svc->flags);
         }
-        else if ( vcpu_runnable(svc->vcpu) )
+        else if ( item_runnable(svc->item) )
         {
             /*
-             * The vCPU should go back to the runqueue, and compete for
+             * The item should go back to the runqueue, and compete for
              * the newly replenished budget, but only if it is actually
              * runnable (and was therefore offline only because of the
              * lack of budget).
@@ -1924,7 +1924,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         }
         list_del_init(&svc->parked_elem);
 
-        item_schedule_unlock_irqrestore(lock, flags, svc->vcpu->sched_item);
+        item_schedule_unlock_irqrestore(lock, flags, svc->item);
     }
 }
 
@@ -1954,7 +1954,7 @@ static void replenish_domain_budget(void* data)
      *
      * Even in cases of overrun or delay, however, we expect that in 99% of
      * cases, doing just one replenishment will be good enough for being able
-     * to unpark the vCPUs that are waiting for some budget.
+     * to unpark the items that are waiting for some budget.
      */
     do_replenish(sdom);
 
@@ -1974,7 +1974,7 @@ static void replenish_domain_budget(void* data)
     }
     /*
      * 2) if we overrun by more than tot_budget, then budget+tot_budget is
-     * still < 0, which means that we can't unpark the vCPUs. Let's bail,
+     * still < 0, which means that we can't unpark the items. Let's bail,
      * and wait for future replenishments.
      */
     if ( unlikely(sdom->budget <= 0) )
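For example, with invented numbers: at tot_budget = 10ms per period, a
domain that somehow overran to budget = -12ms is still at -2ms after one
replenishment, so its parked items must stay parked until a later period
lifts the pool back above zero.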
@@ -1988,14 +1988,14 @@ static void replenish_domain_budget(void* data)
 
     /*
      * As above, let's prepare the temporary list, out of the domain's
-     * parked_vcpus list, now that we hold the budget_lock. Then, drop such
+     * parked_items list, now that we hold the budget_lock. Then, drop such
      * lock, and pass the list to the unparking function.
      */
-    list_splice_init(&sdom->parked_vcpus, &parked);
+    list_splice_init(&sdom->parked_items, &parked);
 
     spin_unlock_irqrestore(&sdom->budget_lock, flags);
 
-    unpark_parked_vcpus(sdom->dom->cpupool->sched, &parked);
+    unpark_parked_items(sdom->dom->cpupool->sched, &parked);
 
  out:
     set_timer(&sdom->repl_timer, sdom->next_repl);
@@ -2003,37 +2003,36 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_vcpu_check(struct vcpu *vc)
+csched2_item_check(struct sched_item *item)
 {
-    struct csched2_item * const svc = csched2_item(vc->sched_item);
+    struct csched2_item * const svc = csched2_item(item);
     struct csched2_dom * const sdom = svc->sdom;
 
-    BUG_ON( svc->vcpu != vc );
-    BUG_ON( sdom != csched2_dom(vc->domain) );
+    BUG_ON( svc->item != item );
+    BUG_ON( sdom != csched2_dom(item->domain) );
     if ( sdom )
     {
-        BUG_ON( is_idle_vcpu(vc) );
-        BUG_ON( sdom->dom != vc->domain );
+        BUG_ON( is_idle_item(item) );
+        BUG_ON( sdom->dom != item->domain );
     }
     else
     {
-        BUG_ON( !is_idle_vcpu(vc) );
+        BUG_ON( !is_idle_item(item) );
     }
     SCHED_STAT_CRANK(item_check);
 }
-#define CSCHED2_VCPU_CHECK(_vc)  (csched2_vcpu_check(_vc))
+#define CSCHED2_ITEM_CHECK(item)  (csched2_item_check(item))
 #else
-#define CSCHED2_VCPU_CHECK(_vc)
+#define CSCHED2_ITEM_CHECK(item)
 #endif
 
 static void *
 csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
                     void *dd)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item *svc;
 
-    /* Allocate per-VCPU info */
+    /* Allocate per-ITEM info */
     svc = xzalloc(struct csched2_item);
     if ( svc == NULL )
         return NULL;
@@ -2042,10 +2041,10 @@ csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
     INIT_LIST_HEAD(&svc->runq_elem);
 
     svc->sdom = dd;
-    svc->vcpu = vc;
+    svc->item = item;
     svc->flags = 0U;
 
-    if ( ! is_idle_vcpu(vc) )
+    if ( ! is_idle_item(item) )
     {
         ASSERT(svc->sdom != NULL);
         svc->credit = CSCHED2_CREDIT_INIT;
@@ -2074,19 +2073,18 @@ csched2_alloc_vdata(const struct scheduler *ops, struct sched_item *item,
 static void
 csched2_item_sleep(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     SCHED_STAT_CRANK(item_sleep);
 
-    if ( curr_on_cpu(vc->processor) == item )
+    if ( curr_on_cpu(sched_item_cpu(item)) == item )
     {
-        tickle_cpu(vc->processor, svc->rqd);
+        tickle_cpu(sched_item_cpu(item), svc->rqd);
     }
-    else if ( vcpu_on_runq(svc) )
+    else if ( item_on_runq(svc) )
     {
-        ASSERT(svc->rqd == c2rqd(ops, vc->processor));
+        ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
         update_load(ops, svc->rqd, svc, -1, NOW());
         runq_remove(svc);
     }
@@ -2097,14 +2095,13 @@ csched2_item_sleep(const struct scheduler *ops, struct sched_item *item)
 static void
 csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
-    unsigned int cpu = vc->processor;
+    unsigned int cpu = sched_item_cpu(item);
     s_time_t now;
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
 
     if ( unlikely(curr_on_cpu(cpu) == item) )
     {
@@ -2112,18 +2109,18 @@ csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
         goto out;
     }
 
-    if ( unlikely(vcpu_on_runq(svc)) )
+    if ( unlikely(item_on_runq(svc)) )
     {
         SCHED_STAT_CRANK(item_wake_onrunq);
         goto out;
     }
 
-    if ( likely(vcpu_runnable(vc)) )
+    if ( likely(item_runnable(item)) )
         SCHED_STAT_CRANK(item_wake_runnable);
     else
         SCHED_STAT_CRANK(item_wake_not_runnable);
 
-    /* If the context hasn't been saved for this vcpu yet, we can't put it on
+    /* If the context hasn't been saved for this item yet, we can't put it on
      * another runqueue.  Instead, we set a flag so that it will be put on the runqueue
      * after the context has been saved. */
     if ( unlikely(svc->flags & CSFLAG_scheduled) )
@@ -2134,15 +2131,15 @@ csched2_item_wake(const struct scheduler *ops, struct sched_item *item)
 
     /* Add into the new runqueue if necessary */
     if ( svc->rqd == NULL )
-        runq_assign(ops, vc);
+        runq_assign(ops, item);
     else
-        ASSERT(c2rqd(ops, vc->processor) == svc->rqd );
+        ASSERT(c2rqd(ops, sched_item_cpu(item)) == svc->rqd );
 
     now = NOW();
 
     update_load(ops, svc->rqd, svc, 1, now);
-        
-    /* Put the VCPU on the runq */
+
+    /* Put the ITEM on the runq */
     runq_insert(ops, svc);
     runq_tickle(ops, svc, now);
 
@@ -2155,49 +2152,48 @@ csched2_item_yield(const struct scheduler *ops, struct sched_item *item)
 {
     struct csched2_item * const svc = csched2_item(item);
 
-    __set_bit(__CSFLAG_vcpu_yield, &svc->flags);
+    __set_bit(__CSFLAG_item_yield, &svc->flags);
 }
 
 static void
 csched2_context_saved(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
     spinlock_t *lock = item_schedule_lock_irq(item);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
 
-    BUG_ON( !is_idle_vcpu(vc) && svc->rqd != c2rqd(ops, vc->processor));
-    ASSERT(is_idle_vcpu(vc) || svc->rqd == c2rqd(ops, vc->processor));
+    BUG_ON( !is_idle_item(item) && svc->rqd != c2rqd(ops, sched_item_cpu(item)));
+    ASSERT(is_idle_item(item) || svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
-    /* This vcpu is now eligible to be put on the runqueue again */
+    /* This item is now eligible to be put on the runqueue again */
     __clear_bit(__CSFLAG_scheduled, &svc->flags);
 
     if ( unlikely(has_cap(svc) && svc->budget > 0) )
-        vcpu_return_budget(svc, &were_parked);
+        item_return_budget(svc, &were_parked);
 
     /* If someone wants it on the runqueue, put it there. */
     /*
      * NB: We can get rid of CSFLAG_scheduled by checking for
-     * vc->is_running and vcpu_on_runq(svc) here.  However,
+     * item->is_running and item_on_runq(svc) here.  However,
      * since we're accessing the flags cacheline anyway,
      * it seems a bit pointless; especially as we have plenty of
      * bits free.
      */
     if ( __test_and_clear_bit(__CSFLAG_delayed_runq_add, &svc->flags)
-         && likely(vcpu_runnable(vc)) )
+         && likely(item_runnable(item)) )
     {
-        ASSERT(!vcpu_on_runq(svc));
+        ASSERT(!item_on_runq(svc));
 
         runq_insert(ops, svc);
         runq_tickle(ops, svc, now);
     }
-    else if ( !is_idle_vcpu(vc) )
+    else if ( !is_idle_item(item) )
         update_load(ops, svc->rqd, svc, -1, now);
 
     item_schedule_unlock_irq(lock, item);
 
-    unpark_parked_vcpus(ops, &were_parked);
+    unpark_parked_items(ops, &were_parked);
 }
 
 #define MAX_LOAD (STIME_MAX)
@@ -2205,9 +2201,8 @@ static struct sched_resource *
 csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
     struct csched2_private *prv = csched2_priv(ops);
-    struct vcpu *vc = item->vcpu;
     int i, min_rqi = -1, min_s_rqi = -1;
-    unsigned int new_cpu, cpu = vc->processor;
+    unsigned int new_cpu, cpu = sched_item_cpu(item);
     struct csched2_item *svc = csched2_item(item);
     s_time_t min_avgload = MAX_LOAD, min_s_avgload = MAX_LOAD;
     bool has_soft;
@@ -2245,7 +2240,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
     }
 
     cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
-                cpupool_domain_cpumask(vc->domain));
+                cpupool_domain_cpumask(item->domain));
 
     /*
      * First check to see if we're here because someone else suggested a place
@@ -2356,7 +2351,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
          * We have soft affinity, and we have a candidate runq, so go for it.
          *
          * Note that, to obtain the soft-affinity mask, we "just" put what we
-         * have in cpumask_scratch in && with vc->cpu_soft_affinity. This is
+         * have in cpumask_scratch in && with item->cpu_soft_affinity. This is
          * ok because:
          * - we know that item->cpu_hard_affinity and ->cpu_soft_affinity have
          *   a non-empty intersection (because has_soft is true);
@@ -2379,7 +2374,7 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
          * any suitable runq. But we did find one when considering hard
          * affinity, so go for it.
          *
-         * cpumask_scratch already has vc->cpu_hard_affinity &
+         * cpumask_scratch already has item->cpu_hard_affinity &
          * cpupool_domain_cpumask() in it, so it's enough that we filter
          * with the cpus of the runq.
          */
@@ -2410,11 +2405,11 @@ csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
     {
         struct {
             uint64_t b_avgload;
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rq_id:16, new_cpu:16;
         } d;
-        d.dom = vc->domain->domain_id;
-        d.vcpu = vc->vcpu_id;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
         d.rq_id = min_rqi;
         d.b_avgload = min_avgload;
         d.new_cpu = new_cpu;
@@ -2433,10 +2428,10 @@ typedef struct {
     struct csched2_item * best_push_svc, *best_pull_svc;
     /* NB: Read by consider() */
     struct csched2_runqueue_data *lrqd;
-    struct csched2_runqueue_data *orqd;                  
+    struct csched2_runqueue_data *orqd;
 } balance_state_t;
 
-static void consider(balance_state_t *st, 
+static void consider(balance_state_t *st,
                      struct csched2_item *push_svc,
                      struct csched2_item *pull_svc)
 {
@@ -2475,17 +2470,17 @@ static void migrate(const struct scheduler *ops,
                     struct csched2_runqueue_data *trqd,
                     s_time_t now)
 {
-    int cpu = svc->vcpu->processor;
-    struct sched_item *item = svc->vcpu->sched_item;
+    struct sched_item *item = svc->item;
+    int cpu = sched_item_cpu(item);
 
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned rqi:16, trqi:16;
         } d;
-        d.dom = svc->vcpu->domain->domain_id;
-        d.vcpu = svc->vcpu->vcpu_id;
+        d.dom = item->domain->domain_id;
+        d.item = item->item_id;
         d.rqi = svc->rqd->id;
         d.trqi = trqd->id;
         __trace_var(TRC_CSCHED2_MIGRATE, 1,
@@ -2497,7 +2492,7 @@ static void migrate(const struct scheduler *ops,
     {
         /* It's running; mark it to migrate. */
         svc->migrate_rqd = trqd;
-        __set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
+        sched_set_pause_flags(item, _VPF_migrating);
         __set_bit(__CSFLAG_runq_migrate_request, &svc->flags);
         SCHED_STAT_CRANK(migrate_requested);
         tickle_cpu(cpu, svc->rqd);
@@ -2506,7 +2501,7 @@ static void migrate(const struct scheduler *ops,
     {
         int on_runq = 0;
         /* It's not running; just move it */
-        if ( vcpu_on_runq(svc) )
+        if ( item_on_runq(svc) )
         {
             runq_remove(svc);
             update_load(ops, svc->rqd, NULL, -1, now);
@@ -2515,14 +2510,14 @@ static void migrate(const struct scheduler *ops,
         _runq_deassign(svc);
 
         cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
-                    cpupool_domain_cpumask(svc->vcpu->domain));
+                    cpupool_domain_cpumask(item->domain));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                     &trqd->active);
-        svc->vcpu->processor = cpumask_cycle(trqd->pick_bias,
-                                             cpumask_scratch_cpu(cpu));
-        svc->vcpu->sched_item->res = per_cpu(sched_res, svc->vcpu->processor);
-        trqd->pick_bias = svc->vcpu->processor;
-        ASSERT(svc->vcpu->processor < nr_cpu_ids);
+        sched_set_res(item, per_cpu(sched_res,
+                                    cpumask_cycle(trqd->pick_bias,
+                                                  cpumask_scratch_cpu(cpu))));
+        trqd->pick_bias = sched_item_cpu(item);
+        ASSERT(sched_item_cpu(item) < nr_cpu_ids);
 
         _runq_assign(svc, trqd);
         if ( on_runq )
@@ -2542,14 +2537,14 @@ static void migrate(const struct scheduler *ops,
  *  - svc is not already flagged to migrate,
  *  - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool vcpu_is_migrateable(struct csched2_item *svc,
+static bool item_is_migrateable(struct csched2_item *svc,
                                   struct csched2_runqueue_data *rqd)
 {
-    struct vcpu *v = svc->vcpu;
-    int cpu = svc->vcpu->processor;
+    struct sched_item *item = svc->item;
+    int cpu = sched_item_cpu(item);
 
-    cpumask_and(cpumask_scratch_cpu(cpu), v->sched_item->cpu_hard_affinity,
-                cpupool_domain_cpumask(v->domain));
+    cpumask_and(cpumask_scratch_cpu(cpu), item->cpu_hard_affinity,
+                cpupool_domain_cpumask(item->domain));
 
     return !(svc->flags & CSFLAG_runq_migrate_request) &&
            cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active);
@@ -2586,7 +2581,7 @@ retry:
     for_each_cpu(i, &prv->active_queues)
     {
         s_time_t delta;
-        
+
         st.orqd = prv->rqd + i;
 
         if ( st.orqd == st.lrqd
@@ -2594,7 +2589,7 @@ retry:
             continue;
 
         update_runq_load(ops, st.orqd, 0, now);
-    
+
         delta = st.lrqd->b_avgload - st.orqd->b_avgload;
         if ( delta < 0 )
             delta = -delta;
@@ -2617,7 +2612,7 @@ retry:
         s_time_t load_max;
         int cpus_max;
 
-        
+
         load_max = st.lrqd->b_avgload;
         if ( st.orqd->b_avgload > load_max )
             load_max = st.orqd->b_avgload;
@@ -2656,7 +2651,7 @@ retry:
                                            opt_overload_balance_tolerance)) )
                 goto out;
     }
-             
+
     /* Try to grab the other runqueue lock; if it's been taken in the
      * meantime, try the process over again.  This can't deadlock
      * because if it doesn't get any other rqd locks, it will simply
@@ -2696,17 +2691,17 @@ retry:
 
         update_svc_load(ops, push_svc, 0, now);
 
-        if ( !vcpu_is_migrateable(push_svc, st.orqd) )
+        if ( !item_is_migrateable(push_svc, st.orqd) )
             continue;
 
         list_for_each( pull_iter, &st.orqd->svc )
         {
             struct csched2_item * pull_svc = list_entry(pull_iter, struct csched2_item, rqd_elem);
-            
+
             if ( !inner_load_updated )
                 update_svc_load(ops, pull_svc, 0, now);
-        
-            if ( !vcpu_is_migrateable(pull_svc, st.lrqd) )
+
+            if ( !item_is_migrateable(pull_svc, st.lrqd) )
                 continue;
 
             consider(&st, push_svc, pull_svc);
@@ -2721,8 +2716,8 @@ retry:
     list_for_each( pull_iter, &st.orqd->svc )
     {
         struct csched2_item * pull_svc = list_entry(pull_iter, struct csched2_item, rqd_elem);
-        
-        if ( !vcpu_is_migrateable(pull_svc, st.lrqd) )
+
+        if ( !item_is_migrateable(pull_svc, st.lrqd) )
             continue;
 
         /* Consider pull only */
@@ -2745,8 +2740,7 @@ static void
 csched2_item_migrate(
     const struct scheduler *ops, struct sched_item *item, unsigned int new_cpu)
 {
-    struct vcpu *vc = item->vcpu;
-    struct domain *d = vc->domain;
+    struct domain *d = item->domain;
     struct csched2_item * const svc = csched2_item(item);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
@@ -2758,25 +2752,24 @@ csched2_item_migrate(
      * cpupool.
      *
      * And since there indeed is the chance that it is not part of it, all
-     * we must do is remove _and_ unassign the vCPU from any runqueue, as
+     * we must do is remove _and_ unassign the item from any runqueue, as
      * well as updating v->processor with the target, so that the suspend
      * process can continue.
      *
      * It will then be during resume that a new, meaningful, value for
      * v->processor will be chosen, and during actual domain unpause that
-     * the vCPU will be assigned to and added to the proper runqueue.
+     * the item will be assigned to and added to the proper runqueue.
      */
     if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_cpumask(d))) )
     {
         ASSERT(system_state == SYS_STATE_suspend);
-        if ( vcpu_on_runq(svc) )
+        if ( item_on_runq(svc) )
         {
             runq_remove(svc);
             update_load(ops, svc->rqd, NULL, -1, now);
         }
         _runq_deassign(svc);
-        vc->processor = new_cpu;
-        item->res = per_cpu(sched_res, new_cpu);
+        sched_set_res(item, per_cpu(sched_res, new_cpu));
         return;
     }
 
@@ -2790,17 +2783,14 @@ csched2_item_migrate(
      * Do the actual movement toward new_cpu, and update vc->processor.
      * If we are changing runqueue, migrate() takes care of everything.
      * If we are not changing runqueue, we need to update vc->processor
-     * here. In fact, if, for instance, we are here because the vcpu's
+     * here. In fact, if, for instance, we are here because the item's
      * hard affinity changed, we don't want to risk leaving vc->processor
      * pointing to a pcpu where we can't run any longer.
      */
     if ( trqd != svc->rqd )
         migrate(ops, svc, trqd, now);
     else
-    {
-        vc->processor = new_cpu;
-        item->res = per_cpu(sched_res, new_cpu);
-    }
+        sched_set_res(item, per_cpu(sched_res, new_cpu));
 }
 
 static int
@@ -2812,18 +2802,18 @@ csched2_dom_cntl(
     struct csched2_dom * const sdom = csched2_dom(d);
     struct csched2_private *prv = csched2_priv(ops);
     unsigned long flags;
-    struct vcpu *v;
+    struct sched_item *item;
     int rc = 0;
 
     /*
      * Locking:
      *  - we must take the private lock for accessing the weights of the
-     *    vcpus of d, and/or the cap;
+     *    items of d, and/or the cap;
      *  - in the putinfo case, we also need the runqueue lock(s), for
      *    updating the max waight of the runqueue(s).
      *    If changing the cap, we also need the budget_lock, for updating
      *    the value of the domain budget pool (and the runqueue lock,
-     *    for adjusting the parameters and rescheduling any vCPU that is
+     *    for adjusting the parameters and rescheduling any item that is
      *    running at the time of the change).
      */
     switch ( op->cmd )
@@ -2845,18 +2835,18 @@ csched2_dom_cntl(
 
             sdom->weight = op->u.credit2.weight;
 
-            /* Update weights for vcpus, and max_weight for runqueues on which they reside */
-            for_each_vcpu ( d, v )
+            /* Update weights for items, and max_weight for runqueues on which they reside */
+            for_each_sched_item ( d, item )
             {
-                struct csched2_item *svc = csched2_item(v->sched_item);
-                spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
+                struct csched2_item *svc = csched2_item(item);
+                spinlock_t *lock = item_schedule_lock(item);
 
-                ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
+                ASSERT(svc->rqd == c2rqd(ops, sched_item_cpu(item)));
 
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
 
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
         }
         /* Cap */
@@ -2865,8 +2855,8 @@ csched2_dom_cntl(
             struct csched2_item *svc;
             spinlock_t *lock;
 
-            /* Cap is only valid if it's below 100 * nr_of_vCPUS */
-            if ( op->u.credit2.cap > 100 * sdom->nr_vcpus )
+            /* Cap is only valid if it's below 100 * nr_of_items */
+            if ( op->u.credit2.cap > 100 * sdom->nr_items )
             {
                 rc = -EINVAL;
                 write_unlock_irqrestore(&prv->lock, flags);
@@ -2879,23 +2869,23 @@ csched2_dom_cntl(
             spin_unlock(&sdom->budget_lock);
 
             /*
-             * When trying to get some budget and run, each vCPU will grab
-             * from the pool 1/N (with N = nr of vCPUs of the domain) of
-             * the total budget. Roughly speaking, this means each vCPU will
+             * When trying to get some budget and run, each item will grab
+             * from the pool 1/N (with N = nr of items of the domain) of
+             * the total budget. Roughly speaking, this means each item will
              * have at least one chance to run during every period.
              */
-            for_each_vcpu ( d, v )
+            for_each_sched_item ( d, item )
             {
-                svc = csched2_item(v->sched_item);
-                lock = item_schedule_lock(svc->vcpu->sched_item);
+                svc = csched2_item(item);
+                lock = item_schedule_lock(item);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
                  * which then won't happen because, in csched2_runtime(),
                  * CSCHED2_MIN_TIMER is what would be used anyway.
                  */
-                svc->budget_quota = max(sdom->tot_budget / sdom->nr_vcpus,
+                svc->budget_quota = max(sdom->tot_budget / sdom->nr_items,
                                         CSCHED2_MIN_TIMER);
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
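Worked example (invented numbers, assuming the default 10ms replenishment
period): a 200% cap gives tot_budget = 20ms; with nr_items = 4 each quota is
5ms, while with 400 items the raw share would be 50us and the max() above
clamps it to CSCHED2_MIN_TIMER instead.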
 
             if ( sdom->cap == 0 )
@@ -2905,7 +2895,7 @@ csched2_dom_cntl(
                  * and queue its first replenishment event.
                  *
                  * Since cap is currently disabled for this domain, we
-                 * know no vCPU is messing with the domain's budget, and
+                 * know no item is messing with the domain's budget, and
                  * the replenishment timer is still off.
                  * For these reasons, it is safe to do the following without
                  * taking the budget_lock.
@@ -2915,42 +2905,42 @@ csched2_dom_cntl(
                 set_timer(&sdom->repl_timer, sdom->next_repl);
 
                 /*
-                 * Now, let's enable budget accounting for all the vCPUs.
+                 * Now, let's enable budget accounting for all the items.
                  * For making sure that they will start to honour the domain's
                  * cap, we set their budget to 0.
                  * This way, as soon as they will try to run, they will have
                  * to get some budget.
                  *
-                 * For the vCPUs that are already running, we trigger the
+                 * For the items that are already running, we trigger the
                  * scheduler on their pCPU. When, as a consequence of this,
                  * csched2_schedule() will run, it will figure out there is
-                 * no budget, and the vCPU will try to get some (and be parked,
+                 * no budget, and the item will try to get some (and be parked,
                  * if there's none, and we'll switch to someone else).
                  */
-                for_each_vcpu ( d, v )
+                for_each_sched_item ( d, item )
                 {
-                    svc = csched2_item(v->sched_item);
-                    lock = item_schedule_lock(svc->vcpu->sched_item);
-                    if ( v->sched_item->is_running )
+                    svc = csched2_item(item);
+                    lock = item_schedule_lock(item);
+                    if ( item->is_running )
                     {
-                        unsigned int cpu = v->processor;
+                        unsigned int cpu = sched_item_cpu(item);
                         struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
 
-                        ASSERT(curr_on_cpu(cpu)->vcpu == v);
+                        ASSERT(curr_on_cpu(cpu) == item);
 
                         /*
-                         * We are triggering a reschedule on the vCPU's
+                         * We are triggering a reschedule on the item's
                          * pCPU. That will run burn_credits() and, since
-                         * the vCPU is capped now, it would charge all the
+                         * the item is capped now, it would charge all the
                          * execution time of this last round as budget as
-                         * well. That will make the vCPU budget go negative,
+                         * well. That will make the item budget go negative,
                          * potentially by a large amount, and it's unfair.
                          *
                          * To avoid that, call burn_credit() here, to do the
                          * accounting of this current running instance now,
                          * with budgetting still disabled. This does not
                          * prevent some small amount of budget being charged
-                         * to the vCPU (i.e., the amount of time it runs from
+                         * to the item (i.e., the amount of time it runs from
                          * now, to when scheduling happens). The budget will
                          * also go below 0, but a lot less than how it would
                          * if we don't do this.
@@ -2961,7 +2951,7 @@ csched2_dom_cntl(
                         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                     }
                     svc->budget = 0;
-                    item_schedule_unlock(lock, svc->vcpu->sched_item);
+                    item_schedule_unlock(lock, item);
                 }
             }
 
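
To make the net effect easier to follow than the interleaved hunks, this is
roughly what the cap-enabling loop looks like with the patch applied (a
reading aid, not the literal code: tracing is dropped, and the burn_credits()
call sits in a part of the function not quoted here, so its exact form is an
assumption):

    for_each_sched_item ( d, item )
    {
        svc = csched2_item(item);
        lock = item_schedule_lock(item);
        if ( item->is_running )
        {
            unsigned int cpu = sched_item_cpu(item);
            struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);

            ASSERT(curr_on_cpu(cpu) == item);

            /* Charge the current run now, while budgeting is still off,
             * so the item does not start out deeply in the red
             * (assumed signature, call site elided from the hunks). */
            burn_credits(rqd, svc, NOW());
            cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
        }
        /* Zero budget forces the item to fetch budget before running. */
        svc->budget = 0;
        item_schedule_unlock(lock, item);
    }
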
@@ -2973,30 +2963,30 @@ csched2_dom_cntl(
 
             stop_timer(&sdom->repl_timer);
 
-            /* Disable budget accounting for all the vCPUs. */
-            for_each_vcpu ( d, v )
+            /* Disable budget accounting for all the items. */
+            for_each_sched_item ( d, item )
             {
-                struct csched2_item *svc = csched2_item(v->sched_item);
-                spinlock_t *lock = item_schedule_lock(svc->vcpu->sched_item);
+                struct csched2_item *svc = csched2_item(item);
+                spinlock_t *lock = item_schedule_lock(item);
 
                 svc->budget = STIME_MAX;
                 svc->budget_quota = 0;
 
-                item_schedule_unlock(lock, svc->vcpu->sched_item);
+                item_schedule_unlock(lock, item);
             }
             sdom->cap = 0;
             /*
              * We are disabling the cap for this domain, which may have
-             * vCPUs waiting for a replenishment, so we unpark them all.
+             * items waiting for a replenishment, so we unpark them all.
              * Note that, since we have already disabled budget accounting
-             * for all the vCPUs of the domain, no currently running vCPU
-             * will be added to the parked vCPUs list any longer.
+             * for all the items of the domain, no currently running item
+             * will be added to the parked items list any longer.
              */
             spin_lock(&sdom->budget_lock);
-            list_splice_init(&sdom->parked_vcpus, &parked);
+            list_splice_init(&sdom->parked_items, &parked);
             spin_unlock(&sdom->budget_lock);
 
-            unpark_parked_vcpus(ops, &parked);
+            unpark_parked_items(ops, &parked);
         }
         write_unlock_irqrestore(&prv->lock, flags);
         break;
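
The other half of the mechanism, the replenishment timer armed above via
set_timer(), is not shown in this patch. Purely as a hypothetical sketch of
what such a handler has to do (every name not visible in the hunks above --
tot_budget, the ops reference, CSCHED2_BDGT_REPL_PERIOD -- is an assumption,
not the actual implementation):

    /* Hypothetical sketch -- not the actual Xen handler. */
    static void replenish_domain_budget(void *data)
    {
        struct csched2_dom *sdom = data;
        struct list_head parked;

        INIT_LIST_HEAD(&parked);

        spin_lock(&sdom->budget_lock);
        /* Refill the per-period pool and collect items parked for budget. */
        sdom->budget = sdom->tot_budget;
        list_splice_init(&sdom->parked_items, &parked);
        spin_unlock(&sdom->budget_lock);

        /* Wake everything that was waiting for budget (ops assumed
         * reachable from sdom in some form). */
        unpark_parked_items(sdom->ops, &parked);

        /* Re-arm for the next replenishment period. */
        sdom->next_repl += CSCHED2_BDGT_REPL_PERIOD;
        set_timer(&sdom->repl_timer, sdom->next_repl);
    }
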
@@ -3073,12 +3063,12 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
     sdom->dom = dom;
     sdom->weight = CSCHED2_DEFAULT_WEIGHT;
     sdom->cap = 0U;
-    sdom->nr_vcpus = 0;
+    sdom->nr_items = 0;
 
     init_timer(&sdom->repl_timer, replenish_domain_budget, sdom,
                cpumask_any(cpupool_domain_cpumask(dom)));
     spin_lock_init(&sdom->budget_lock);
-    INIT_LIST_HEAD(&sdom->parked_vcpus);
+    INIT_LIST_HEAD(&sdom->parked_items);
 
     write_lock_irqsave(&prv->lock, flags);
 
@@ -3112,34 +3102,32 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_item_insert(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item *svc = item->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     ASSERT(list_empty(&svc->runq_elem));
 
     /* csched2_res_pick() expects the pcpu lock to be held */
     lock = item_schedule_lock_irq(item);
 
-    item->res = csched2_res_pick(ops, item);
-    vc->processor = item->res->processor;
+    sched_set_res(item, csched2_res_pick(ops, item));
 
     spin_unlock_irq(lock);
 
     lock = item_schedule_lock_irq(item);
 
-    /* Add vcpu to runqueue of initial processor */
-    runq_assign(ops, vc);
+    /* Add item to runqueue of initial processor */
+    runq_assign(ops, item);
 
     item_schedule_unlock_irq(lock, item);
 
-    sdom->nr_vcpus++;
+    sdom->nr_items++;
 
     SCHED_STAT_CRANK(item_insert);
 
-    CSCHED2_VCPU_CHECK(vc);
+    CSCHED2_ITEM_CHECK(item);
 }
 
 static void
@@ -3153,11 +3141,10 @@ csched2_free_vdata(const struct scheduler *ops, void *priv)
 static void
 csched2_item_remove(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *vc = item->vcpu;
     struct csched2_item * const svc = csched2_item(item);
     spinlock_t *lock;
 
-    ASSERT(!is_idle_vcpu(vc));
+    ASSERT(!is_idle_item(item));
     ASSERT(list_empty(&svc->runq_elem));
 
     SCHED_STAT_CRANK(item_remove);
@@ -3165,14 +3152,14 @@ csched2_item_remove(const struct scheduler *ops, struct sched_item *item)
     /* Remove from runqueue */
     lock = item_schedule_lock_irq(item);
 
-    runq_deassign(ops, vc);
+    runq_deassign(ops, item);
 
     item_schedule_unlock_irq(lock, item);
 
-    svc->sdom->nr_vcpus--;
+    svc->sdom->nr_items--;
 }
 
-/* How long should we let this vcpu run for? */
+/* How long should we let this item run for? */
 static s_time_t
 csched2_runtime(const struct scheduler *ops, int cpu,
                 struct csched2_item *snext, s_time_t now)
@@ -3187,7 +3174,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
      * If we're idle, just stay so. Others (or external events)
      * will poke us when necessary.
      */
-    if ( is_idle_vcpu(snext->vcpu) )
+    if ( is_idle_item(snext->item) )
         return -1;
 
     /* General algorithm:
@@ -3204,8 +3191,8 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     if ( prv->ratelimit_us )
     {
         s_time_t ratelimit_min = MICROSECS(prv->ratelimit_us);
-        if ( snext->vcpu->sched_item->is_running )
-            ratelimit_min = snext->vcpu->sched_item->state_entry_time +
+        if ( snext->item->is_running )
+            ratelimit_min = snext->item->state_entry_time +
                             MICROSECS(prv->ratelimit_us) - now;
         if ( ratelimit_min > min_time )
             min_time = ratelimit_min;
@@ -3222,7 +3209,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     {
         struct csched2_item *swait = runq_elem(runq->next);
 
-        if ( ! is_idle_vcpu(swait->vcpu)
+        if ( ! is_idle_item(swait->item)
              && swait->credit > 0 )
         {
             rt_credit = snext->credit - swait->credit;
@@ -3236,7 +3223,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
      *
      * FIXME: See if we can eliminate this conversion if we know time
      * will be outside (MIN,MAX).  Probably requires pre-calculating
-     * credit values of MIN,MAX per vcpu, since each vcpu burns credit
+     * credit values of MIN,MAX per item, since each item burns credit
      * at a different rate.
      */
     if ( rt_credit > 0 )
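
The tail of the function is only partly visible in these hunks; as a hedged
sketch of the shape it has to take (c2t() converting credits to time, and the
CSCHED2_MIN/MAX_TIMER bounds, are assumptions based on the helpers referenced
nearby):

    /* Sketch only: turn the credit lead into wall-clock time, then clamp. */
    if ( rt_credit > 0 )
        time = c2t(rqd, rt_credit, snext);  /* per-item burn rate applies */
    else
        time = 0;

    /* Never below the ratelimit floor computed above, never above the
     * scheduler's maximum time slice. */
    if ( time < min_time )
        time = min_time;
    else if ( time > CSCHED2_MAX_TIMER )
        time = CSCHED2_MAX_TIMER;

    return time;
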
@@ -3284,36 +3271,35 @@ runq_candidate(struct csched2_runqueue_data *rqd,
 
     *skipped = 0;
 
-    if ( unlikely(is_idle_vcpu(scurr->vcpu)) )
+    if ( unlikely(is_idle_item(scurr->item)) )
     {
         snext = scurr;
         goto check_runq;
     }
 
-    yield = __test_and_clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
+    yield = __test_and_clear_bit(__CSFLAG_item_yield, &scurr->flags);
 
     /*
-     * Return the current vcpu if it has executed for less than ratelimit.
-     * Adjuststment for the selected vcpu's credit and decision
+     * Return the current item if it has executed for less than ratelimit.
+     * Adjustment for the selected item's credit and decision
      * for how long it will run will be taken in csched2_runtime.
      *
      * Note that, if scurr is yielding, we don't let rate limiting kick in.
      * In fact, it may be the case that scurr is about to spin, and there's
      * no point forcing it to do so until rate limiting expires.
      */
-    if ( !yield && prv->ratelimit_us && vcpu_runnable(scurr->vcpu) &&
-         (now - scurr->vcpu->sched_item->state_entry_time) <
-          MICROSECS(prv->ratelimit_us) )
+    if ( !yield && prv->ratelimit_us && item_runnable(scurr->item) &&
+         (now - scurr->item->state_entry_time) < MICROSECS(prv->ratelimit_us) )
     {
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
                 unsigned runtime;
             } d;
-            d.dom = scurr->vcpu->domain->domain_id;
-            d.vcpu = scurr->vcpu->vcpu_id;
-            d.runtime = now - scurr->vcpu->sched_item->state_entry_time;
+            d.dom = scurr->item->domain->domain_id;
+            d.item = scurr->item->item_id;
+            d.runtime = now - scurr->item->state_entry_time;
             __trace_var(TRC_CSCHED2_RATELIMIT, 1,
                         sizeof(d),
                         (unsigned char *)&d);
@@ -3322,13 +3308,13 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     }
 
     /* If scurr has a soft-affinity, let's check whether cpu is part of it */
-    if ( has_soft_affinity(scurr->vcpu->sched_item) )
+    if ( has_soft_affinity(scurr->item) )
     {
-        affinity_balance_cpumask(scurr->vcpu->sched_item, BALANCE_SOFT_AFFINITY,
+        affinity_balance_cpumask(scurr->item, BALANCE_SOFT_AFFINITY,
                                  cpumask_scratch);
         if ( unlikely(!cpumask_test_cpu(cpu, cpumask_scratch)) )
         {
-            cpumask_t *online = cpupool_domain_cpumask(scurr->vcpu->domain);
+            cpumask_t *online = cpupool_domain_cpumask(scurr->item->domain);
 
             /* Ok, is any of the pcpus in scurr soft-affinity idle? */
             cpumask_and(cpumask_scratch, cpumask_scratch, &rqd->idle);
@@ -3356,10 +3342,10 @@ runq_candidate(struct csched2_runqueue_data *rqd,
      *
      * Of course, we also default to idle also if scurr is not runnable.
      */
-    if ( vcpu_runnable(scurr->vcpu) && !soft_aff_preempt )
+    if ( item_runnable(scurr->item) && !soft_aff_preempt )
         snext = scurr;
     else
-        snext = csched2_item(idle_vcpu[cpu]->sched_item);
+        snext = csched2_item(sched_idle_item(cpu));
 
  check_runq:
     list_for_each_safe( iter, temp, &rqd->runq )
@@ -3369,24 +3355,24 @@ runq_candidate(struct csched2_runqueue_data *rqd,
         if ( unlikely(tb_init_done) )
         {
             struct {
-                unsigned vcpu:16, dom:16;
+                unsigned item:16, dom:16;
             } d;
-            d.dom = svc->vcpu->domain->domain_id;
-            d.vcpu = svc->vcpu->vcpu_id;
+            d.dom = svc->item->domain->domain_id;
+            d.item = svc->item->item_id;
             __trace_var(TRC_CSCHED2_RUNQ_CAND_CHECK, 1,
                         sizeof(d),
                         (unsigned char *)&d);
         }
 
-        /* Only consider vcpus that are allowed to run on this processor. */
-        if ( !cpumask_test_cpu(cpu, svc->vcpu->sched_item->cpu_hard_affinity) )
+        /* Only consider items that are allowed to run on this processor. */
+        if ( !cpumask_test_cpu(cpu, svc->item->cpu_hard_affinity) )
         {
             (*skipped)++;
             continue;
         }
 
         /*
-         * If a vcpu is meant to be picked up by another processor, and such
+         * If an item is meant to be picked up by another processor, and such
          * processor has not scheduled yet, leave it in the runqueue for him.
          */
         if ( svc->tickled_cpu != -1 && svc->tickled_cpu != cpu &&
@@ -3401,7 +3387,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          * If this is on a different processor, don't pull it unless
          * its credit is at least CSCHED2_MIGRATE_RESIST higher.
          */
-        if ( svc->vcpu->processor != cpu
+        if ( sched_item_cpu(svc->item) != cpu
              && snext->credit + CSCHED2_MIGRATE_RESIST > svc->credit )
         {
             (*skipped)++;
@@ -3416,7 +3402,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          * some budget, then choose it.
          */
         if ( (yield || svc->credit > snext->credit) &&
-             (!has_cap(svc) || vcpu_grab_budget(svc)) )
+             (!has_cap(svc) || item_grab_budget(svc)) )
             snext = svc;
 
         /* In any case, if we got this far, break. */
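
Since the candidate checks are spread over several hunks, here is the whole
loop condensed into one illustrative piece (a reading aid, not the literal
code: tracing is dropped and the tickled-CPU test is abbreviated):

    list_for_each_safe( iter, temp, &rqd->runq )
    {
        struct csched2_item *svc = runq_elem(iter);

        /* 1. Hard affinity: the item must be allowed on this CPU. */
        if ( !cpumask_test_cpu(cpu, svc->item->cpu_hard_affinity) )
            { (*skipped)++; continue; }

        /* 2. Tickled elsewhere: leave it for the CPU that was poked
         *    (abbreviated; the real test also checks that CPU has not
         *    scheduled yet). */
        if ( svc->tickled_cpu != -1 && svc->tickled_cpu != cpu )
            { (*skipped)++; continue; }

        /* 3. Migration resistance: only pull from another CPU if the
         *    credit lead is at least CSCHED2_MIGRATE_RESIST. */
        if ( sched_item_cpu(svc->item) != cpu
             && snext->credit + CSCHED2_MIGRATE_RESIST > svc->credit )
            { (*skipped)++; continue; }

        /* 4. Higher credit (or a yielding scurr) wins, provided any
         *    cap allows it to get budget. */
        if ( (yield || svc->credit > snext->credit) &&
             (!has_cap(svc) || item_grab_budget(svc)) )
            snext = svc;

        break;
    }
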
@@ -3426,12 +3412,12 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     if ( unlikely(tb_init_done) )
     {
         struct {
-            unsigned vcpu:16, dom:16;
+            unsigned item:16, dom:16;
             unsigned tickled_cpu, skipped;
             int credit;
         } d;
-        d.dom = snext->vcpu->domain->domain_id;
-        d.vcpu = snext->vcpu->vcpu_id;
+        d.dom = snext->item->domain->domain_id;
+        d.item = snext->item->item_id;
         d.credit = snext->credit;
         d.tickled_cpu = snext->tickled_cpu;
         d.skipped = *skipped;
@@ -3463,14 +3449,15 @@ csched2_schedule(
 {
     const int cpu = smp_processor_id();
     struct csched2_runqueue_data *rqd;
-    struct csched2_item * const scurr = csched2_item(current->sched_item);
+    struct sched_item *curritem = current->sched_item;
+    struct csched2_item * const scurr = csched2_item(curritem);
     struct csched2_item *snext = NULL;
-    unsigned int skipped_vcpus = 0;
+    unsigned int skipped_items = 0;
     struct task_slice ret;
     bool tickled;
 
     SCHED_STAT_CRANK(schedule);
-    CSCHED2_VCPU_CHECK(current);
+    CSCHED2_ITEM_CHECK(curritem);
 
     BUG_ON(!cpumask_test_cpu(cpu, &csched2_priv(ops)->initialized));
 
@@ -3479,7 +3466,7 @@ csched2_schedule(
 
     ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
 
-    BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
+    BUG_ON(!is_idle_item(curritem) && scurr->rqd != rqd);
 
     /* Clear "tickled" bit now that we've been scheduled */
     tickled = cpumask_test_cpu(cpu, &rqd->tickled);
@@ -3499,7 +3486,7 @@ csched2_schedule(
         d.cpu = cpu;
         d.rq_id = c2r(cpu);
         d.tasklet = tasklet_work_scheduled;
-        d.idle = is_idle_vcpu(current);
+        d.idle = is_idle_item(curritem);
         d.smt_idle = cpumask_test_cpu(cpu, &rqd->smt_idle);
         d.tickled = tickled;
         __trace_var(TRC_CSCHED2_SCHEDULE, 1,
@@ -3513,55 +3500,55 @@ csched2_schedule(
     /*
      *  Below 0, means that we are capped and we have overrun our  budget.
      *  Let's try to get some more but, if we fail (e.g., because of the
-     *  other running vcpus), we will be parked.
+     *  other running items), we will be parked.
      */
     if ( unlikely(scurr->budget <= 0) )
-        vcpu_grab_budget(scurr);
+        item_grab_budget(scurr);
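
item_grab_budget() itself is outside the quoted hunks; purely as a
hypothetical sketch of the mechanism the surrounding comments rely on (every
name not visible above is an assumption, not the actual implementation):

    /* Hypothetical sketch -- not the real Xen function. */
    static bool item_grab_budget(struct csched2_item *svc)
    {
        struct csched2_dom *sdom = svc->sdom;

        spin_lock(&sdom->budget_lock);
        if ( svc->budget <= 0 && sdom->budget > 0 )
        {
            /* Take up to one quota's worth from the domain pool. */
            s_time_t take = min(sdom->budget, svc->budget_quota);
            svc->budget += take;
            sdom->budget -= take;
        }
        if ( svc->budget <= 0 )
            /* Nothing left this period: park until replenishment. */
            list_add(&svc->parked_elem, &sdom->parked_items);
        spin_unlock(&sdom->budget_lock);

        return svc->budget > 0;
    }
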
 
     /*
-     * Select next runnable local VCPU (ie top of local runq).
+     * Select next runnable local ITEM (i.e. top of local runq).
      *
-     * If the current vcpu is runnable, and has higher credit than
+     * If the current item is runnable, and has higher credit than
      * the next guy on the queue (or there is noone else), we want to
      * run him again.
      *
-     * If there's tasklet work to do, we want to chose the idle vcpu
+     * If there's tasklet work to do, we want to choose the idle item
      * for this processor, and mark the current for delayed runqueue
      * add.
      *
-     * If the current vcpu is runnable, and there's another runnable
+     * If the current item is runnable, and there's another runnable
      * candidate, we want to mark current for delayed runqueue add,
      * and remove the next guy from the queue.
      *
-     * If the current vcpu is not runnable, we want to chose the idle
-     * vcpu for this processor.
+     * If the current item is not runnable, we want to choose the idle
+     * item for this processor.
      */
     if ( tasklet_work_scheduled )
     {
-        __clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
+        __clear_bit(__CSFLAG_item_yield, &scurr->flags);
         trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL);
-        snext = csched2_item(idle_vcpu[cpu]->sched_item);
+        snext = csched2_item(sched_idle_item(cpu));
     }
     else
-        snext = runq_candidate(rqd, scurr, cpu, now, &skipped_vcpus);
+        snext = runq_candidate(rqd, scurr, cpu, now, &skipped_items);
 
-    /* If switching from a non-idle runnable vcpu, put it
+    /* If switching from a non-idle runnable item, put it
      * back on the runqueue. */
     if ( snext != scurr
-         && !is_idle_vcpu(scurr->vcpu)
-         && vcpu_runnable(current) )
+         && !is_idle_item(curritem)
+         && item_runnable(curritem) )
         __set_bit(__CSFLAG_delayed_runq_add, &scurr->flags);
 
     ret.migrated = 0;
 
     /* Accounting for non-idle tasks */
-    if ( !is_idle_vcpu(snext->vcpu) )
+    if ( !is_idle_item(snext->item) )
     {
         /* If switching, remove this from the runqueue and mark it scheduled */
         if ( snext != scurr )
         {
             ASSERT(snext->rqd == rqd);
-            ASSERT(!snext->vcpu->sched_item->is_running);
+            ASSERT(!snext->item->is_running);
 
             runq_remove(snext);
             __set_bit(__CSFLAG_scheduled, &snext->flags);
@@ -3576,19 +3563,19 @@ csched2_schedule(
 
         /*
          * The reset condition is "has a scheduler epoch come to an end?".
-         * The way this is enforced is checking whether the vcpu at the top
+         * The way this is enforced is checking whether the item at the top
          * of the runqueue has negative credits. This means the epochs have
          * variable length, as in one epoch expores when:
-         *  1) the vcpu at the top of the runqueue has executed for
+         *  1) the item at the top of the runqueue has executed for
          *     around 10 ms (with default parameters);
-         *  2) no other vcpu with higher credits wants to run.
+         *  2) no other item with higher credits wants to run.
          *
          * Here, where we want to check for reset, we need to make sure the
-         * proper vcpu is being used. In fact, runqueue_candidate() may have
-         * not returned the first vcpu in the runqueue, for various reasons
+         * proper item is being used. In fact, runq_candidate() may have
+         * not returned the first item in the runqueue, for various reasons
          * (e.g., affinity). Only trigger a reset when it does.
          */
-        if ( skipped_vcpus == 0 && snext->credit <= CSCHED2_CREDIT_RESET )
+        if ( skipped_items == 0 && snext->credit <= CSCHED2_CREDIT_RESET )
         {
             reset_credit(ops, cpu, now, snext);
             balance_load(ops, cpu, now);
@@ -3598,11 +3585,10 @@ csched2_schedule(
         snext->tickled_cpu = -1;
 
         /* Safe because lock for old processor is held */
-        if ( snext->vcpu->processor != cpu )
+        if ( sched_item_cpu(snext->item) != cpu )
         {
             snext->credit += CSCHED2_MIGRATE_COMPENSATION;
-            snext->vcpu->processor = cpu;
-            snext->vcpu->sched_item->res = per_cpu(sched_res, cpu);
+            sched_set_res(snext->item, per_cpu(sched_res, cpu));
             SCHED_STAT_CRANK(migrated);
             ret.migrated = 1;
         }
@@ -3636,20 +3622,20 @@ csched2_schedule(
      * Return task to run next...
      */
     ret.time = csched2_runtime(ops, cpu, snext, now);
-    ret.task = snext->vcpu->sched_item;
+    ret.task = snext->item;
 
-    CSCHED2_VCPU_CHECK(ret.task->vcpu);
+    CSCHED2_ITEM_CHECK(ret.task);
     return ret;
 }
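
For reference, the slice returned above has roughly this shape once the
series has switched struct task_slice from vcpus to sched_items (field types
are reconstructed from how they are used here, so treat this as a sketch):

    struct task_slice {
        struct sched_item *task;     /* the item to run next */
        s_time_t           time;     /* how long to run it (-1: no limit) */
        bool_t             migrated; /* item was just moved to this CPU */
    };
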
 
 static void
-csched2_dump_vcpu(struct csched2_private *prv, struct csched2_item *svc)
+csched2_dump_item(struct csched2_private *prv, struct csched2_item *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
-            svc->vcpu->domain->domain_id,
-            svc->vcpu->vcpu_id,
+            svc->item->domain->domain_id,
+            svc->item->item_id,
             svc->flags,
-            svc->vcpu->processor);
+            sched_item_cpu(svc->item));
 
     printk(" credit=%" PRIi32" [w=%u]", svc->credit, svc->weight);
 
@@ -3674,12 +3660,12 @@ dump_pcpu(const struct scheduler *ops, int cpu)
            nr_cpu_ids, cpumask_bits(per_cpu(cpu_sibling_mask, cpu)),
            nr_cpu_ids, cpumask_bits(per_cpu(cpu_core_mask, cpu)));
 
-    /* current VCPU (nothing to say if that's the idle vcpu) */
+    /* current ITEM (nothing to say if that's the idle item) */
     svc = csched2_item(curr_on_cpu(cpu));
-    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    if ( svc && !is_idle_item(svc->item) )
     {
         printk("\trun: ");
-        csched2_dump_vcpu(prv, svc);
+        csched2_dump_item(prv, svc);
     }
 }
 
@@ -3736,7 +3722,7 @@ csched2_dump(const struct scheduler *ops)
     list_for_each( iter_sdom, &prv->sdom )
     {
         struct csched2_dom *sdom;
-        struct vcpu *v;
+        struct sched_item *item;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3744,19 +3730,19 @@ csched2_dump(const struct scheduler *ops)
                sdom->dom->domain_id,
                sdom->weight,
                sdom->cap,
-               sdom->nr_vcpus);
+               sdom->nr_items);
 
-        for_each_vcpu( sdom->dom, v )
+        for_each_sched_item( sdom->dom, item )
         {
-            struct csched2_item * const svc = csched2_item(v->sched_item);
+            struct csched2_item * const svc = csched2_item(item);
             spinlock_t *lock;
 
-            lock = item_schedule_lock(svc->vcpu->sched_item);
+            lock = item_schedule_lock(item);
 
             printk("\t%3d: ", ++loop);
-            csched2_dump_vcpu(prv, svc);
+            csched2_dump_item(prv, svc);
 
-            item_schedule_unlock(lock, svc->vcpu->sched_item);
+            item_schedule_unlock(lock, item);
         }
     }
 
@@ -3782,7 +3768,7 @@ csched2_dump(const struct scheduler *ops)
             if ( svc )
             {
                 printk("\t%3d: ", loop++);
-                csched2_dump_vcpu(prv, svc);
+                csched2_dump_item(prv, svc);
             }
         }
         spin_unlock(&rqd->lock);
@@ -3882,7 +3868,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     struct csched2_item *svc = vdata;
     unsigned rqi;
 
-    ASSERT(pdata && svc && is_idle_vcpu(svc->vcpu));
+    ASSERT(pdata && svc && is_idle_item(svc->item));
 
     /*
      * We own one runqueue lock already (from schedule_cpu_switch()). This
@@ -3895,7 +3881,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     ASSERT(!local_irq_is_enabled());
     write_lock(&prv->lock);
 
-    idle_vcpu[cpu]->sched_item->priv = vdata;
+    sched_idle_item(cpu)->priv = vdata;
 
     rqi = init_pdata(prv, pdata, cpu);
 
@@ -3946,7 +3932,7 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
      */
     ASSERT(spc && spc->runq_id != -1);
     ASSERT(cpumask_test_cpu(cpu, &prv->initialized));
-    
+
     /* Find the old runqueue and remove this cpu from it */
     rqd = prv->rqd + spc->runq_id;
 
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 188+ messages
2019-05-06  6:55 [PATCH RFC V2 00/45] xen: add core scheduling support Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 01/45] xen/sched: add inline wrappers for calling per-scheduler functions Juergen Gross
2019-05-06  8:27   ` Jan Beulich
     [not found]   ` <5CCFF004020000780022C0D4@suse.com>
2019-05-06  8:34     ` Juergen Gross
2019-05-06  8:58       ` Jan Beulich
2019-05-06 15:26   ` Dario Faggioli
2019-05-08 16:24   ` George Dunlap
2019-05-09  5:32     ` Juergen Gross
2019-05-09 10:04       ` George Dunlap
2019-05-09 10:56         ` Juergen Gross
2019-05-09 11:50           ` Jan Beulich
2019-05-09 12:03             ` Juergen Gross
2019-05-09 12:31               ` Jan Beulich
     [not found]               ` <5CD41D9C020000780022D259@suse.com>
2019-05-09 12:44                 ` Juergen Gross
2019-05-09 13:22                   ` Jan Beulich
2019-05-09 12:27           ` Dario Faggioli
2019-05-06  6:56 ` [PATCH RFC V2 02/45] xen/sched: use new sched_item instead of vcpu in scheduler interfaces Juergen Gross
2019-05-08 16:35   ` George Dunlap
2019-05-08 17:27     ` Dario Faggioli
2019-05-09  5:36     ` Juergen Gross
2019-05-09  5:56       ` Dario Faggioli
2019-05-09  5:59         ` Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 03/45] xen/sched: alloc struct sched_item for each vcpu Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 04/45] xen/sched: move per-vcpu scheduler private data pointer to sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 05/45] xen/sched: build a linked list of struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 06/45] xen/sched: introduce struct sched_resource Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 07/45] xen/sched: let pick_cpu return a scheduler resource Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 08/45] xen/sched: switch schedule_data.curr to point at sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 09/45] xen/sched: move per cpu scheduler private data into struct sched_resource Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 10/45] xen/sched: switch vcpu_schedule_lock to item_schedule_lock Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 11/45] xen/sched: move some per-vcpu items to struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 12/45] xen/sched: add scheduler helpers hiding vcpu Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 13/45] xen/sched: add domain pointer to struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 14/45] xen/sched: add id " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 15/45] xen/sched: rename scheduler related perf counters Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 16/45] xen/sched: switch struct task_slice from vcpu to sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 17/45] xen/sched: add is_running indicator to struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 18/45] xen/sched: make null scheduler vcpu agnostic Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 19/45] xen/sched: make rt " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 20/45] xen/sched: make credit " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 21/45] xen/sched: make credit2 " Juergen Gross [this message]
2019-05-06  6:56 ` [PATCH RFC V2 22/45] xen/sched: make arinc653 " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 23/45] xen: add sched_item_pause_nosync() and sched_item_unpause() Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 24/45] xen: let vcpu_create() select processor Juergen Gross
2019-05-16 12:20   ` Jan Beulich
     [not found]   ` <5CDD557D020000780022FA32@suse.com>
2019-05-16 12:46     ` Juergen Gross
2019-05-16 13:10       ` Jan Beulich
2019-05-06  6:56 ` [PATCH RFC V2 25/45] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 26/45] xen/sched: switch schedule() from vcpus to sched_items Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 27/45] xen/sched: switch sched_move_irqs() to take sched_item as parameter Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 28/45] xen: switch from for_each_vcpu() to for_each_sched_item() Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 29/45] xen/sched: add runstate counters to struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 30/45] xen/sched: rework and rename vcpu_force_reschedule() Juergen Gross
2019-05-06  8:37   ` Jan Beulich
     [not found]   ` <5CCFF238020000780022C0F9@suse.com>
2019-05-06  8:51     ` Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 31/45] xen/sched: Change vcpu_migrate_*() to operate on schedule item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 32/45] xen/sched: move struct task_slice into struct sched_item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 33/45] xen/sched: add code to sync scheduling of all vcpus of a sched item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 34/45] xen/sched: introduce item_runnable_state() Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 35/45] xen/sched: add support for multiple vcpus per sched item where missing Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 36/45] x86: make loading of GDT at context switch more modular Juergen Gross
2019-05-16 12:30   ` Jan Beulich
     [not found]   ` <5CDD57DB020000780022FA5E@suse.com>
2019-05-16 12:52     ` Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 37/45] x86: optimize loading of GDT at context switch Juergen Gross
2019-05-16 12:42   ` Jan Beulich
     [not found]   ` <5CDD5AC2020000780022FA6A@suse.com>
2019-05-16 13:10     ` Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 38/45] xen/sched: modify cpupool_domain_cpumask() to be an item mask Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 39/45] xen/sched: support allocating multiple vcpus into one sched item Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 40/45] xen/sched: add a scheduler_percpu_init() function Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 41/45] xen/sched: add a percpu resource index Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 42/45] xen/sched: add fall back to idle vcpu when scheduling item Juergen Gross
2019-05-16 13:05   ` Jan Beulich
     [not found]   ` <5CDD6005020000780022FA9A@suse.com>
2019-05-16 13:51     ` Juergen Gross
2019-05-16 14:41       ` Jan Beulich
     [not found]       ` <5CDD7693020000780022FC59@suse.com>
2019-05-17  5:13         ` Juergen Gross
2019-05-17  6:57           ` Jan Beulich
     [not found]           ` <5CDE5B4E020000780022FEFC@suse.com>
2019-05-17  7:48             ` Juergen Gross
2019-05-17  8:22               ` Jan Beulich
2019-05-06  6:56 ` [PATCH RFC V2 43/45] xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 44/45] xen/sched: carve out freeing sched_item memory into dedicated function Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum Juergen Gross
2019-05-06  8:57   ` Jan Beulich
     [not found]   ` <5CCFF6F1020000780022C12B@suse.com>
     [not found]     ` <ac57c420*a72e*7570*db8f*27e4693c2755@suse.com>
2019-05-06  9:23     ` Juergen Gross
2019-05-06 10:01       ` Jan Beulich
2019-05-08 14:36         ` Juergen Gross
2019-05-10  8:53           ` Jan Beulich
     [not found]           ` <5CD53C1C020000780022D706@suse.com>
2019-05-10  9:00             ` Juergen Gross
2019-05-10 10:29               ` Dario Faggioli
2019-05-10 11:17               ` Jan Beulich
     [not found]       ` <5CD005E7020000780022C1B5@suse.com>
2019-05-06 10:20         ` Juergen Gross
2019-05-06 11:58           ` Jan Beulich
     [not found]           ` <5CD02161020000780022C257@suse.com>
2019-05-06 12:23             ` Juergen Gross
2019-05-06 13:14               ` Jan Beulich
2019-05-06  7:10 ` [PATCH RFC V2 00/45] xen: add core scheduling support Juergen Gross
