* [PATCH 0/5] xen: sched_null: support soft affinity
@ 2017-06-29 12:56 Dario Faggioli
  2017-06-29 12:56 ` [PATCH 1/5] xen: sched: factor affinity helpers out of sched_credit.c Dario Faggioli
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Stefano Stabellini

In the null scheduler, we don't need either hard or soft affinity during online
scheduling operations.  In fact, the vCPUs are statically assigned to the
pCPUs, and hence there's no scope for checking or enforcing any affinity.

We do, however, use hard-affinity for 'placement', i.e., for deciding to
which pCPU to statically assign a vCPU. Let's therefore use soft-affinity
too, for the same purpose. Of course, in this case, if there's no free pCPU
within the vCPU's soft-affinity, we fall back to checking the hard-affinity,
instead of putting the vCPU in the waitqueue.
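
To make the flow concrete, here is a minimal, standalone C sketch of the
two-step placement (illustrative only: plain bitmasks stand in for Xen's
cpumask_t, and all mask values are made up):

    /* Two-step placement: try "free && soft && hard" first, then
     * "free && hard"; if both fail, the vCPU goes to the waitqueue. */
    #include <stdio.h>

    #define NR_CPUS 8

    /* First set bit of (free & affinity), or NR_CPUS if none. */
    static int pick(unsigned int free, unsigned int affinity)
    {
        for (int c = 0; c < NR_CPUS; c++)
            if (free & affinity & (1u << c))
                return c;
        return NR_CPUS;
    }

    int main(void)
    {
        unsigned int free = 0xF0; /* pCPUs 4-7 are free */
        unsigned int soft = 0x0F; /* the vCPU prefers pCPUs 0-3... */
        unsigned int hard = 0xFF; /* ...but may run anywhere */

        int cpu = pick(free, soft & hard); /* step 1: soft (&& hard) */
        if (cpu == NR_CPUS)
            cpu = pick(free, hard);        /* step 2: hard only */

        if (cpu == NR_CPUS)
            printf("no free pCPU: park the vCPU in the waitqueue\n");
        else
            printf("assign the vCPU to pCPU %d\n", cpu); /* pCPU 4 here */
        return 0;
    }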

This is particularly important because, as of now, libxl sets a domain's
soft-affinity if the automatic NUMA placement logic run at domain creation
succeeds in finding an ideal placement for the domain, and Xen uses that for
allocating the domain's memory.

Supporting soft-affinity like this therefore means that, even when using
the null scheduler, we try to keep the vCPUs close to their memory (on NUMA
hosts, of course).

Note also that this has no impact on the online scheduling overhead,
because soft-affinity is only considered in cold paths (like when a vCPU joins
the scheduler for the first time, or is manually moved between pCPUs by the
user).

Note that patch 1 of this series is the same as patch 1 of the 'Soft
affinity for Credit2' series:
 https://lists.xenproject.org/archives/html/xen-devel/2017-06/msg01795.html
 https://lists.xenproject.org/archives/html/xen-devel/2017-06/msg01796.html

Regards,
Dario
---
Dario Faggioli (5):
      xen: sched: factor affinity helpers out of sched_credit.c
      xen: sched_null: check for pending tasklet work a bit earlier
      xen: sched-null: support soft-affinity
      xen: sched_null: add some tracing
      tools: tracing: handle null scheduler's events

 tools/xentrace/formats     |    7 +
 tools/xentrace/xenalyze.c  |   65 ++++++++++++++
 xen/common/sched_credit.c  |   97 +++-----------------
 xen/common/sched_null.c    |  209 ++++++++++++++++++++++++++++++++++++--------
 xen/include/public/trace.h |    1 
 xen/include/xen/sched-if.h |   64 +++++++++++++
 6 files changed, 323 insertions(+), 120 deletions(-)
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* [PATCH 1/5] xen: sched: factor affinity helpers out of sched_credit.c
  2017-06-29 12:56 [PATCH 0/5] xen: sched_null: support soft affinity Dario Faggioli
@ 2017-06-29 12:56 ` Dario Faggioli
  2017-06-29 12:56 ` [PATCH 2/5] xen: sched_null: check for pending tasklet work a bit earlier Dario Faggioli
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Anshul Makkar, Justin T. Weaver, George Dunlap

In fact, we want to be able to use them from any scheduler.

While there, make the moved code use 'v' for struct vcpu * variables,
as should be done everywhere.

No functional change intended.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Signed-off-by: Justin T. Weaver <jtweaver@hawaii.edu>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
Cc: Anshul Makkar <anshul.makkar@citrix.com>
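
For illustration, this is roughly how any scheduler can now consume the
moved helpers (a hypothetical sketch, not part of this patch: demo_pick()
is a made-up name, and the code only builds inside the Xen tree):

    #include <xen/sched.h>
    #include <xen/sched-if.h>

    /* Pick the first suitable online pCPU, preferring soft affinity. */
    static unsigned int demo_pick(const struct vcpu *v, cpumask_t *scratch)
    {
        unsigned int step, cpu;

        for_each_affinity_balance_step( step )
        {
            /* Skip the soft step when it cannot make a difference. */
            if ( step == BALANCE_SOFT_AFFINITY &&
                 !has_soft_affinity(v, v->cpu_hard_affinity) )
                continue;

            affinity_balance_cpumask(v, step, scratch);
            cpumask_and(scratch, scratch,
                        cpupool_domain_cpumask(v->domain));

            cpu = cpumask_first(scratch);
            if ( cpu < nr_cpu_ids )
                return cpu;
        }

        return v->processor; /* nothing better: stay where we are */
    }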
---
 xen/common/sched_credit.c  |   97 +++++++-------------------------------------
 xen/include/xen/sched-if.h |   64 +++++++++++++++++++++++++++++
 2 files changed, 79 insertions(+), 82 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index efdf6bf..53773df 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -136,27 +136,6 @@
 #define TRC_CSCHED_RATELIMIT     TRC_SCHED_CLASS_EVT(CSCHED, 10)
 #define TRC_CSCHED_STEAL_CHECK   TRC_SCHED_CLASS_EVT(CSCHED, 11)
 
-
-/*
- * Hard and soft affinity load balancing.
- *
- * Idea is each vcpu has some pcpus that it prefers, some that it does not
- * prefer but is OK with, and some that it cannot run on at all. The first
- * set of pcpus are the ones that are both in the soft affinity *and* in the
- * hard affinity; the second set of pcpus are the ones that are in the hard
- * affinity but *not* in the soft affinity; the third set of pcpus are the
- * ones that are not in the hard affinity.
- *
- * We implement a two step balancing logic. Basically, every time there is
- * the need to decide where to run a vcpu, we first check the soft affinity
- * (well, actually, the && between soft and hard affinity), to see if we can
- * send it where it prefers to (and can) run on. However, if the first step
- * does not find any suitable and free pcpu, we fall back checking the hard
- * affinity.
- */
-#define CSCHED_BALANCE_SOFT_AFFINITY    0
-#define CSCHED_BALANCE_HARD_AFFINITY    1
-
 /*
  * Boot parameters
  */
@@ -331,52 +310,6 @@ runq_remove(struct csched_vcpu *svc)
     __runq_remove(svc);
 }
 
-#define for_each_csched_balance_step(step) \
-    for ( (step) = 0; (step) <= CSCHED_BALANCE_HARD_AFFINITY; (step)++ )
-
-
-/*
- * Hard affinity balancing is always necessary and must never be skipped.
- * But soft affinity need only be considered when it has a functionally
- * different effect than other constraints (such as hard affinity, cpus
- * online, or cpupools).
- *
- * Soft affinity only needs to be considered if:
- * * The cpus in the cpupool are not a subset of soft affinity
- * * The hard affinity is not a subset of soft affinity
- * * There is an overlap between the soft affinity and the mask which is
- *   currently being considered.
- */
-static inline int __vcpu_has_soft_affinity(const struct vcpu *vc,
-                                           const cpumask_t *mask)
-{
-    return !cpumask_subset(cpupool_domain_cpumask(vc->domain),
-                           vc->cpu_soft_affinity) &&
-           !cpumask_subset(vc->cpu_hard_affinity, vc->cpu_soft_affinity) &&
-           cpumask_intersects(vc->cpu_soft_affinity, mask);
-}
-
-/*
- * Each csched-balance step uses its own cpumask. This function determines
- * which one (given the step) and copies it in mask. For the soft affinity
- * balancing step, the pcpus that are not part of vc's hard affinity are
- * filtered out from the result, to avoid running a vcpu where it would
- * like, but is not allowed to!
- */
-static void
-csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
-{
-    if ( step == CSCHED_BALANCE_SOFT_AFFINITY )
-    {
-        cpumask_and(mask, vc->cpu_soft_affinity, vc->cpu_hard_affinity);
-
-        if ( unlikely(cpumask_empty(mask)) )
-            cpumask_copy(mask, vc->cpu_hard_affinity);
-    }
-    else /* step == CSCHED_BALANCE_HARD_AFFINITY */
-        cpumask_copy(mask, vc->cpu_hard_affinity);
-}
-
 static void burn_credits(struct csched_vcpu *svc, s_time_t now)
 {
     s_time_t delta;
@@ -441,18 +374,18 @@ static inline void __runq_tickle(struct csched_vcpu *new)
          * Soft and hard affinity balancing loop. For vcpus without
          * a useful soft affinity, consider hard affinity only.
          */
-        for_each_csched_balance_step( balance_step )
+        for_each_affinity_balance_step( balance_step )
         {
             int new_idlers_empty;
 
-            if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-                 && !__vcpu_has_soft_affinity(new->vcpu,
-                                              new->vcpu->cpu_hard_affinity) )
+            if ( balance_step == BALANCE_SOFT_AFFINITY
+                 && !has_soft_affinity(new->vcpu,
+                                       new->vcpu->cpu_hard_affinity) )
                 continue;
 
             /* Are there idlers suitable for new (for this balance step)? */
-            csched_balance_cpumask(new->vcpu, balance_step,
-                                   cpumask_scratch_cpu(cpu));
+            affinity_balance_cpumask(new->vcpu, balance_step,
+                                     cpumask_scratch_cpu(cpu));
             cpumask_and(cpumask_scratch_cpu(cpu),
                         cpumask_scratch_cpu(cpu), &idle_mask);
             new_idlers_empty = cpumask_empty(cpumask_scratch_cpu(cpu));
@@ -463,7 +396,7 @@ static inline void __runq_tickle(struct csched_vcpu *new)
              * hard affinity as well, before taking final decisions.
              */
             if ( new_idlers_empty
-                 && balance_step == CSCHED_BALANCE_SOFT_AFFINITY )
+                 && balance_step == BALANCE_SOFT_AFFINITY )
                 continue;
 
             /*
@@ -789,7 +722,7 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     online = cpupool_domain_cpumask(vc->domain);
     cpumask_and(&cpus, vc->cpu_hard_affinity, online);
 
-    for_each_csched_balance_step( balance_step )
+    for_each_affinity_balance_step( balance_step )
     {
         /*
          * We want to pick up a pcpu among the ones that are online and
@@ -809,12 +742,12 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
          * cpus and, if the result is empty, we just skip the soft affinity
          * balancing step all together.
          */
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(vc, &cpus) )
+        if ( balance_step == BALANCE_SOFT_AFFINITY
+             && !has_soft_affinity(vc, &cpus) )
             continue;
 
         /* Pick an online CPU from the proper affinity mask */
-        csched_balance_cpumask(vc, balance_step, &cpus);
+        affinity_balance_cpumask(vc, balance_step, &cpus);
         cpumask_and(&cpus, &cpus, online);
 
         /* If present, prefer vc's current processor */
@@ -1710,11 +1643,11 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
          * or counter.
          */
         if ( vc->is_running ||
-             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-              && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity)) )
+             (balance_step == BALANCE_SOFT_AFFINITY
+              && !has_soft_affinity(vc, vc->cpu_hard_affinity)) )
             continue;
 
-        csched_balance_cpumask(vc, balance_step, cpumask_scratch);
+        affinity_balance_cpumask(vc, balance_step, cpumask_scratch);
         if ( __csched_vcpu_is_migrateable(vc, cpu, cpumask_scratch) )
         {
             /* We got a candidate. Grab it! */
@@ -1774,7 +1707,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
      *  1. any "soft-affine work" to steal first,
      *  2. if not finding anything, any "hard-affine work" to steal.
      */
-    for_each_csched_balance_step( bstep )
+    for_each_affinity_balance_step( bstep )
     {
         /*
          * We peek at the non-idling CPUs in a node-wise fashion. In fact,
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index c32ee7a..c4a4935 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -208,4 +208,68 @@ static inline cpumask_t* cpupool_domain_cpumask(struct domain *d)
     return d->cpupool->cpu_valid;
 }
 
+/*
+ * Hard and soft affinity load balancing.
+ *
+ * Idea is each vcpu has some pcpus that it prefers, some that it does not
+ * prefer but is OK with, and some that it cannot run on at all. The first
+ * set of pcpus are the ones that are both in the soft affinity *and* in the
+ * hard affinity; the second set of pcpus are the ones that are in the hard
+ * affinity but *not* in the soft affinity; the third set of pcpus are the
+ * ones that are not in the hard affinity.
+ *
+ * We implement a two step balancing logic. Basically, every time there is
+ * the need to decide where to run a vcpu, we first check the soft affinity
+ * (well, actually, the && between soft and hard affinity), to see if we can
+ * send it where it prefers to (and can) run on. However, if the first step
+ * does not find any suitable and free pcpu, we fall back checking the hard
+ * affinity.
+ */
+#define BALANCE_SOFT_AFFINITY    0
+#define BALANCE_HARD_AFFINITY    1
+
+#define for_each_affinity_balance_step(step) \
+    for ( (step) = 0; (step) <= BALANCE_HARD_AFFINITY; (step)++ )
+
+/*
+ * Hard affinity balancing is always necessary and must never be skipped.
+ * But soft affinity need only be considered when it has a functionally
+ * different effect than other constraints (such as hard affinity, cpus
+ * online, or cpupools).
+ *
+ * Soft affinity only needs to be considered if:
+ * * The cpus in the cpupool are not a subset of soft affinity
+ * * The hard affinity is not a subset of soft affinity
+ * * There is an overlap between the soft affinity and the mask which is
+ *   currently being considered.
+ */
+static inline int has_soft_affinity(const struct vcpu *v,
+                                    const cpumask_t *mask)
+{
+    return !cpumask_subset(cpupool_domain_cpumask(v->domain),
+                           v->cpu_soft_affinity) &&
+           !cpumask_subset(v->cpu_hard_affinity, v->cpu_soft_affinity) &&
+           cpumask_intersects(v->cpu_soft_affinity, mask);
+}
+
+/*
+ * This function copies in mask the cpumask that should be used for a
+ * particular affinity balancing step. For the soft affinity one, the pcpus
+ * that are not part of vc's hard affinity are filtered out from the result,
+ * to avoid running a vcpu where it would like, but is not allowed to!
+ */
+static inline void
+affinity_balance_cpumask(const struct vcpu *v, int step, cpumask_t *mask)
+{
+    if ( step == BALANCE_SOFT_AFFINITY )
+    {
+        cpumask_and(mask, v->cpu_soft_affinity, v->cpu_hard_affinity);
+
+        if ( unlikely(cpumask_empty(mask)) )
+            cpumask_copy(mask, v->cpu_hard_affinity);
+    }
+    else /* step == BALANCE_HARD_AFFINITY */
+        cpumask_copy(mask, v->cpu_hard_affinity);
+}
+
 #endif /* __XEN_SCHED_IF_H__ */



* [PATCH 2/5] xen: sched_null: check for pending tasklet work a bit earlier
  2017-06-29 12:56 [PATCH 0/5] xen: sched_null: support soft affinity Dario Faggioli
  2017-06-29 12:56 ` [PATCH 1/5] xen: sched: factor affinity helpers out of sched_credit.c Dario Faggioli
@ 2017-06-29 12:56 ` Dario Faggioli
  2017-07-25 15:24   ` George Dunlap
  2017-06-29 12:56 ` [PATCH 3/5] xen: sched-null: support soft-affinity Dario Faggioli
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap

Whether or not there's pending tasklet work to do is
something we know from the tasklet_work_scheduled parameter.

Deal with that as soon as possible, like all the other schedulers
do.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/common/sched_null.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 705c00a..610a150 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -641,7 +641,10 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     SCHED_STAT_CRANK(schedule);
     NULL_VCPU_CHECK(current);
 
-    ret.task = per_cpu(npc, cpu).vcpu;
+    if ( tasklet_work_scheduled )
+        ret.task = idle_vcpu[cpu];
+    else
+        ret.task = per_cpu(npc, cpu).vcpu;
     ret.migrated = 0;
     ret.time = -1;
 
@@ -663,9 +666,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
         spin_unlock(&prv->waitq_lock);
     }
 
-    if ( unlikely(tasklet_work_scheduled ||
-                  ret.task == NULL ||
-                  !vcpu_runnable(ret.task)) )
+    if ( unlikely(ret.task == NULL || !vcpu_runnable(ret.task)) )
         ret.task = idle_vcpu[cpu];
 
     NULL_VCPU_CHECK(ret.task);



* [PATCH 3/5] xen: sched-null: support soft-affinity
  2017-06-29 12:56 [PATCH 0/5] xen: sched_null: support soft affinity Dario Faggioli
  2017-06-29 12:56 ` [PATCH 1/5] xen: sched: factor affinity helpers out of sched_credit.c Dario Faggioli
  2017-06-29 12:56 ` [PATCH 2/5] xen: sched_null: check for pending tasklet work a bit earlier Dario Faggioli
@ 2017-06-29 12:56 ` Dario Faggioli
  2017-07-25 15:50   ` George Dunlap
  2017-06-29 12:56 ` [PATCH 4/5] xen: sched_null: add some tracing Dario Faggioli
  2017-06-29 12:56 ` [PATCH 5/5] tools: tracing: handle null scheduler's events Dario Faggioli
  4 siblings, 1 reply; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap

The null scheduler does not really use hard-affinity for
scheduling; it uses it for 'placement', i.e., for deciding
to which pCPU to statically assign a vCPU.

Let's use soft-affinity in the same way, of course with the
difference that, if there's no free pCPU within the vCPU's
soft-affinity, we fall back to checking the hard-affinity,
instead of putting the vCPU in the waitqueue.

This has no impact on the scheduling overhead, because
soft-affinity is only considered in cold paths (like when a
vCPU joins the scheduler for the first time, or is manually
moved between pCPUs by the user).

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
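
As an aside on the waitqueue handling in this patch: the queue is scanned
twice so that a soft-affine vCPU wins even if it sits later in the queue.
A standalone sketch of that two-pass pattern (made-up data, not Xen code):

    #include <stdio.h>
    #include <stdbool.h>

    /* Waiters, as seen from one particular pCPU. */
    struct waiter { const char *name; bool soft_ok, hard_ok; };

    int main(void)
    {
        struct waiter q[] = {
            { "v0", false, true }, /* only hard-affine with this pCPU */
            { "v1", true,  true }, /* soft-affine: should be preferred */
        };

        for (int pass = 0; pass < 2; pass++) /* 0 = soft, 1 = hard */
            for (unsigned int i = 0; i < sizeof(q) / sizeof(q[0]); i++)
                if (pass == 0 ? q[i].soft_ok : q[i].hard_ok) {
                    /* Picks v1 on pass 0, although v0 comes first. */
                    printf("assign %s (pass %d)\n", q[i].name, pass);
                    return 0;
                }

        printf("no suitable waiter\n");
        return 0;
    }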
---
 xen/common/sched_null.c |  110 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 77 insertions(+), 33 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 610a150..19c7f0f 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -115,9 +115,11 @@ static inline struct null_dom *null_dom(const struct domain *d)
     return d->sched_priv;
 }
 
-static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu)
+static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
+                                       unsigned int balance_step)
 {
-    cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+    affinity_balance_cpumask(v, balance_step, cpumask_scratch_cpu(cpu));
+    cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                 cpupool_domain_cpumask(v->domain));
 
     return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
@@ -279,31 +281,40 @@ static void null_dom_destroy(const struct scheduler *ops, struct domain *d)
  */
 static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
 {
+    unsigned int bs;
     unsigned int cpu = v->processor, new_cpu;
     cpumask_t *cpus = cpupool_domain_cpumask(v->domain);
 
     ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
 
-    cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity, cpus);
+    for_each_affinity_balance_step( bs )
+    {
+        if ( bs == BALANCE_SOFT_AFFINITY &&
+             !has_soft_affinity(v, v->cpu_hard_affinity) )
+            continue;
 
-    /*
-     * If our processor is free, or we are assigned to it, and it is also
-     * still valid and part of our affinity, just go for it.
-     * (Note that we may call vcpu_check_affinity(), but we deliberately
-     * don't, so we get to keep in the scratch cpumask what we have just
-     * put in it.)
-     */
-    if ( likely((per_cpu(npc, cpu).vcpu == NULL || per_cpu(npc, cpu).vcpu == v)
-                && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
-        return cpu;
+        affinity_balance_cpumask(v, bs, cpumask_scratch_cpu(cpu));
+        cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu), cpus);
 
-    /* If not, just go for a free pCPU, within our affinity, if any */
-    cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
-                &prv->cpus_free);
-    new_cpu = cpumask_first(cpumask_scratch_cpu(cpu));
+        /*
+         * If our processor is free, or we are assigned to it, and it is also
+         * still valid and part of our affinity, just go for it.
+         * (Note that we may call vcpu_check_affinity(), but we deliberately
+         * don't, so we get to keep in the scratch cpumask what we have just
+         * put in it.)
+         */
+        if ( likely((per_cpu(npc, cpu).vcpu == NULL || per_cpu(npc, cpu).vcpu == v)
+                    && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
+            return cpu;
 
-    if ( likely(new_cpu != nr_cpu_ids) )
-        return new_cpu;
+        /* If not, just go for a free pCPU, within our affinity, if any */
+        cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
+                    &prv->cpus_free);
+        new_cpu = cpumask_first(cpumask_scratch_cpu(cpu));
+
+        if ( likely(new_cpu != nr_cpu_ids) )
+            return new_cpu;
+    }
 
     /*
      * If we didn't find any free pCPU, just pick any valid pcpu, even if
@@ -430,6 +441,7 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
 
 static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
 {
+    unsigned int bs;
     unsigned int cpu = v->processor;
     struct null_vcpu *wvc;
 
@@ -441,19 +453,27 @@ static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
 
     /*
      * If v is assigned to a pCPU, let's see if there is someone waiting,
-     * suitable to be assigned to it.
+     * suitable to be assigned to it (prioritizing vcpus that have
+     * soft-affinity with cpu).
      */
-    list_for_each_entry( wvc, &prv->waitq, waitq_elem )
+    for_each_affinity_balance_step( bs )
     {
-        if ( vcpu_check_affinity(wvc->vcpu, cpu) )
+        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
         {
-            list_del_init(&wvc->waitq_elem);
-            vcpu_assign(prv, wvc->vcpu, cpu);
-            cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
-            break;
+            if ( bs == BALANCE_SOFT_AFFINITY &&
+                 !has_soft_affinity(wvc->vcpu, wvc->vcpu->cpu_hard_affinity) )
+                continue;
+
+            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+            {
+                list_del_init(&wvc->waitq_elem);
+                vcpu_assign(prv, wvc->vcpu, cpu);
+                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+                spin_unlock(&prv->waitq_lock);
+                return;
+            }
         }
     }
-
     spin_unlock(&prv->waitq_lock);
 }
 
@@ -570,7 +590,8 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
      *
      * In latter, all we can do is to park v in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).vcpu == NULL && vcpu_check_affinity(v, new_cpu) )
+    if ( per_cpu(npc, new_cpu).vcpu == NULL &&
+         vcpu_check_affinity(v, new_cpu, BALANCE_HARD_AFFINITY) )
     {
         /* v might have been in the waitqueue, so remove it */
         spin_lock(&prv->waitq_lock);
@@ -633,6 +654,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
                                        s_time_t now,
                                        bool_t tasklet_work_scheduled)
 {
+    unsigned int bs;
     const unsigned int cpu = smp_processor_id();
     struct null_private *prv = null_priv(ops);
     struct null_vcpu *wvc;
@@ -656,13 +678,35 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     if ( unlikely(ret.task == NULL) )
     {
         spin_lock(&prv->waitq_lock);
-        wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
-        if ( wvc && vcpu_check_affinity(wvc->vcpu, cpu) )
+
+        if ( list_empty(&prv->waitq) )
+            goto unlock;
+
+        /*
+         * We scan the waitqueue twice, for prioritizing vcpus that have
+         * soft-affinity with cpu. This may look like something expensive to
+         * do here in null_schedule(), but it's actually fine, because we do
+         * it only in cases where a pcpu has no vcpu associated (e.g., as
+         * said above, the cpu has just joined a cpupool).
+         */
+        for_each_affinity_balance_step( bs )
         {
-            vcpu_assign(prv, wvc->vcpu, cpu);
-            list_del_init(&wvc->waitq_elem);
-            ret.task = wvc->vcpu;
+            list_for_each_entry( wvc, &prv->waitq, waitq_elem )
+            {
+                if ( bs == BALANCE_SOFT_AFFINITY &&
+                     !has_soft_affinity(wvc->vcpu, wvc->vcpu->cpu_hard_affinity) )
+                    continue;
+
+                if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+                {
+                    vcpu_assign(prv, wvc->vcpu, cpu);
+                    list_del_init(&wvc->waitq_elem);
+                    ret.task = wvc->vcpu;
+                    goto unlock;
+                }
+            }
         }
+ unlock:
         spin_unlock(&prv->waitq_lock);
     }
 



* [PATCH 4/5] xen: sched_null: add some tracing
  2017-06-29 12:56 [PATCH 0/5] xen: sched_null: support soft affinity Dario Faggioli
                   ` (2 preceding siblings ...)
  2017-06-29 12:56 ` [PATCH 3/5] xen: sched-null: support soft-affinity Dario Faggioli
@ 2017-06-29 12:56 ` Dario Faggioli
  2017-07-25 15:15   ` George Dunlap
  2017-07-26 14:50   ` George Dunlap
  2017-06-29 12:56 ` [PATCH 5/5] tools: tracing: handle null scheduler's events Dario Faggioli
  4 siblings, 2 replies; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel

In line with what is there in all the other schedulers.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
George Dunlap <george.dunlap@eu.citrix.com>
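
For reference, the trace records added here are small packed structs. Below
is a standalone sketch of what, e.g., the PICKED_CPU record looks like on
the wire (an illustration that assumes a little-endian host; the values are
made up):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* Mirrors the PICKED_CPU record in this patch. */
        struct {
            uint16_t vcpu, dom;
            uint32_t new_cpu;
        } d = { .vcpu = 3, .dom = 7, .new_cpu = 5 };

        /* On little-endian, the first 32-bit trace word reads back as
         * (dom << 16) | vcpu, i.e. the "dom:vcpu" field decoded by the
         * xentrace formats in patch 5. */
        uint32_t w;
        memcpy(&w, &d, sizeof(w));
        printf("dom:vcpu = 0x%08x, new_cpu = %u\n", w, d.new_cpu);
        /* prints: dom:vcpu = 0x00070003, new_cpu = 5 */
        return 0;
    }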
---
 xen/common/sched_null.c    |   94 +++++++++++++++++++++++++++++++++++++++++++-
 xen/include/public/trace.h |    1 
 2 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 19c7f0f..b4a24ba 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -32,7 +32,17 @@
 #include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <xen/keyhandler.h>
+#include <xen/trace.h>
 
+/*
+ * null tracing events. Check include/public/trace.h for more details.
+ */
+#define TRC_SNULL_PICKED_CPU    TRC_SCHED_CLASS_EVT(SNULL, 1)
+#define TRC_SNULL_VCPU_ASSIGN   TRC_SCHED_CLASS_EVT(SNULL, 2)
+#define TRC_SNULL_VCPU_DEASSIGN TRC_SCHED_CLASS_EVT(SNULL, 3)
+#define TRC_SNULL_MIGRATE       TRC_SCHED_CLASS_EVT(SNULL, 4)
+#define TRC_SNULL_SCHEDULE      TRC_SCHED_CLASS_EVT(SNULL, 5)
+#define TRC_SNULL_TASKLET       TRC_SCHED_CLASS_EVT(SNULL, 6)
 
 /*
  * Locking:
@@ -305,7 +315,10 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
          */
         if ( likely((per_cpu(npc, cpu).vcpu == NULL || per_cpu(npc, cpu).vcpu == v)
                     && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
-            return cpu;
+        {
+            new_cpu = cpu;
+            goto out;
+        }
 
         /* If not, just go for a free pCPU, within our affinity, if any */
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
@@ -313,7 +326,7 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
         new_cpu = cpumask_first(cpumask_scratch_cpu(cpu));
 
         if ( likely(new_cpu != nr_cpu_ids) )
-            return new_cpu;
+            goto out;
     }
 
     /*
@@ -328,7 +341,22 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
      * only if the pCPU is free.
      */
     cpumask_and(cpumask_scratch_cpu(cpu), cpus, v->cpu_hard_affinity);
-    return cpumask_any(cpumask_scratch_cpu(cpu));
+    new_cpu = cpumask_any(cpumask_scratch_cpu(cpu));
+
+ out:
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            uint16_t vcpu, dom;
+            uint32_t new_cpu;
+        } d;
+        d.dom = v->domain->domain_id;
+        d.vcpu = v->vcpu_id;
+        d.new_cpu = new_cpu;
+        __trace_var(TRC_SNULL_PICKED_CPU, 1, sizeof(d), &d);
+    }
+
+    return new_cpu;
 }
 
 static void vcpu_assign(struct null_private *prv, struct vcpu *v,
@@ -339,6 +367,18 @@ static void vcpu_assign(struct null_private *prv, struct vcpu *v,
     cpumask_clear_cpu(cpu, &prv->cpus_free);
 
     dprintk(XENLOG_G_INFO, "%d <-- d%dv%d\n", cpu, v->domain->domain_id, v->vcpu_id);
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            uint16_t vcpu, dom;
+            uint32_t cpu;
+        } d;
+        d.dom = v->domain->domain_id;
+        d.vcpu = v->vcpu_id;
+        d.cpu = cpu;
+        __trace_var(TRC_SNULL_VCPU_ASSIGN, 1, sizeof(d), &d);
+    }
 }
 
 static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
@@ -348,6 +388,18 @@ static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
     dprintk(XENLOG_G_INFO, "%d <-- NULL (d%dv%d)\n", cpu, v->domain->domain_id, v->vcpu_id);
+
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            uint16_t vcpu, dom;
+            uint32_t cpu;
+        } d;
+        d.dom = v->domain->domain_id;
+        d.vcpu = v->vcpu_id;
+        d.cpu = cpu;
+        __trace_var(TRC_SNULL_VCPU_DEASSIGN, 1, sizeof(d), &d);
+    }
 }
 
 /* Change the scheduler of cpu to us (null). */
@@ -562,6 +614,19 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
     if ( v->processor == new_cpu )
         return;
 
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            uint16_t vcpu, dom;
+            uint16_t cpu, new_cpu;
+        } d;
+        d.dom = v->domain->domain_id;
+        d.vcpu = v->vcpu_id;
+        d.cpu = v->processor;
+        d.new_cpu = new_cpu;
+        __trace_var(TRC_SNULL_MIGRATE, 1, sizeof(d), &d);
+    }
+
     /*
      * v is either assigned to a pCPU, or in the waitqueue.
      *
@@ -663,8 +728,31 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     SCHED_STAT_CRANK(schedule);
     NULL_VCPU_CHECK(current);
 
+    if ( unlikely(tb_init_done) )
+    {
+        struct {
+            uint16_t tasklet, cpu;
+            int16_t vcpu, dom;
+        } d;
+        d.cpu = cpu;
+        d.tasklet = tasklet_work_scheduled;
+        if ( per_cpu(npc, cpu).vcpu == NULL )
+        {
+            d.vcpu = d.dom = -1;
+        }
+        else
+        {
+            d.vcpu = per_cpu(npc, cpu).vcpu->vcpu_id;
+            d.dom = per_cpu(npc, cpu).vcpu->domain->domain_id;
+        }
+        __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
+    }
+
     if ( tasklet_work_scheduled )
+    {
+        trace_var(TRC_SNULL_TASKLET, 1, 0, NULL);
         ret.task = idle_vcpu[cpu];
+    }
     else
         ret.task = per_cpu(npc, cpu).vcpu;
     ret.migrated = 0;
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
index 7f2e891..3746bff 100644
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -78,6 +78,7 @@
 /* #define XEN_SCHEDULER_SEDF 2 (Removed) */
 #define TRC_SCHED_ARINC653 3
 #define TRC_SCHED_RTDS     4
+#define TRC_SCHED_SNULL    5
 
 /* Per-scheduler tracing */
 #define TRC_SCHED_CLASS_EVT(_c, _e) \



* [PATCH 5/5] tools: tracing: handle null scheduler's events
  2017-06-29 12:56 [PATCH 0/5] xen: sched_null: support soft affinity Dario Faggioli
                   ` (3 preceding siblings ...)
  2017-06-29 12:56 ` [PATCH 4/5] xen: sched_null: add some tracing Dario Faggioli
@ 2017-06-29 12:56 ` Dario Faggioli
  2017-07-26 14:51   ` George Dunlap
  4 siblings, 1 reply; 12+ messages in thread
From: Dario Faggioli @ 2017-06-29 12:56 UTC (permalink / raw)
  To: xen-devel

In both xentrace and xenalyze.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
George Dunlap <george.dunlap@eu.citrix.com>
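
The 0x00022A0x event IDs used in the formats file come from
TRC_SCHED_CLASS_EVT(SNULL, n) in the hypervisor. A standalone sketch of the
encoding (it assumes trace.h's scheduler class value 0x00022000 and a
scheduler-ID shift of 9):

    #include <stdio.h>
    #include <stdint.h>

    #define TRC_SCHED_CLASS 0x00022000u
    #define TRC_SCHED_SNULL 5u
    /* Compose a per-scheduler event ID, like TRC_SCHED_CLASS_EVT() does. */
    #define SNULL_EVT(e)    (TRC_SCHED_CLASS | (TRC_SCHED_SNULL << 9) | (e))

    int main(void)
    {
        for (uint32_t e = 1; e <= 6; e++)
            printf("event %u -> 0x%08x\n", e, SNULL_EVT(e));
        return 0; /* prints 0x00022a01 .. 0x00022a06 */
    }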
---
 tools/xentrace/formats    |    7 +++++
 tools/xentrace/xenalyze.c |   65 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/tools/xentrace/formats b/tools/xentrace/formats
index 8b31780..c1f584f 100644
--- a/tools/xentrace/formats
+++ b/tools/xentrace/formats
@@ -79,6 +79,13 @@
 0x00022805  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:sched_tasklet
 0x00022806  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  rtds:schedule      [ cpu[16]:tasklet[8]:idle[4]:tickled[4] = %(1)08x ]
 
+0x00022A01  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:pick_cpu      [ dom:vcpu = 0x%(1)08x, new_cpu = %(2)d ]
+0x00022A02  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:assign        [ dom:vcpu = 0x%(1)08x, cpu = %(2)d ]
+0x00022A03  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:deassign      [ dom:vcpu = 0x%(1)08x, cpu = %(2)d ]
+0x00022A04  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:migrate       [ dom:vcpu = 0x%(1)08x, new_cpu:cpu = 0x%(2)08x ]
+0x00022A05  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:schedule      [ cpu[16]:tasklet[16] = %(1)08x, dom:vcpu = 0x%(2)08x ]
+0x00022A06  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  null:sched_tasklet
+
 0x00041001  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  domain_create   [ dom = 0x%(1)08x ]
 0x00041002  CPU%(cpu)d  %(tsc)d (+%(reltsc)8d)  domain_destroy  [ dom = 0x%(1)08x ]
 
diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index fa608ad..24cce2a 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -7968,6 +7968,71 @@ void sched_process(struct pcpu_info *p)
                        r->tickled ? ", tickled" : ", not tickled");
             }
             break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 1): /* PICKED_CPU */
+            if (opt.dump_all) {
+                struct {
+                    uint16_t vcpuid, domid;
+                    uint32_t new_cpu;
+                } *r = (typeof(r))ri->d;
+
+                printf(" %s null:picked_cpu d%uv%u, cpu %u\n",
+                       ri->dump_header, r->domid, r->vcpuid, r->new_cpu);
+            }
+            break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 2): /* VCPU_ASSIGN */
+            if (opt.dump_all) {
+                struct {
+                    uint16_t vcpuid, domid;
+                    uint32_t cpu;
+                } *r = (typeof(r))ri->d;
+
+                printf(" %s null:vcpu_assign d%uv%u to cpu %u\n",
+                       ri->dump_header, r->domid, r->vcpuid, r->cpu);
+            }
+            break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 3): /* VCPU_DEASSIGN */
+            if (opt.dump_all) {
+                struct {
+                    uint16_t vcpuid, domid;
+                    uint32_t cpu;
+                } *r = (typeof(r))ri->d;
+
+                printf(" %s null:vcpu_deassign d%uv%u from cpu %u\n",
+                       ri->dump_header, r->domid, r->vcpuid, r->cpu);
+            }
+            break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 4): /* MIGRATE */
+            if (opt.dump_all) {
+                struct {
+                    uint16_t vcpuid, domid;
+                    uint16_t cpu, new_cpu;
+                } *r = (typeof(r))ri->d;
+
+                printf(" %s null:migrate d%uv%u, cpu %u, new_cpu %u\n",
+                       ri->dump_header, r->domid, r->vcpuid,
+                       r->cpu, r->new_cpu);
+            }
+            break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 5): /* SCHEDULE */
+            if (opt.dump_all) {
+                struct {
+                    uint16_t tasklet, cpu;
+                    int16_t vcpuid, domid;
+                } *r = (typeof(r))ri->d;
+
+                printf(" %s null:schedule cpu %u%s",
+                       ri->dump_header, r->cpu,
+                       r->tasklet ? ", tasklet scheduled" : "");
+                if (r->vcpuid != -1)
+                    printf(", vcpu d%uv%d\n", r->domid, r->vcpuid);
+                else
+                    printf(", no vcpu\n");
+            }
+            break;
+        case TRC_SCHED_CLASS_EVT(SNULL, 6): /* TASKLET */
+            if (opt.dump_all)
+                printf(" %s null:sched_tasklet\n", ri->dump_header);
+            break;
         default:
             process_generic(ri);
         }



* Re: [PATCH 4/5] xen: sched_null: add some tracing
  2017-06-29 12:56 ` [PATCH 4/5] xen: sched_null: add some tracing Dario Faggioli
@ 2017-07-25 15:15   ` George Dunlap
  2017-07-25 16:07     ` Dario Faggioli
  2017-07-26 14:50   ` George Dunlap
  1 sibling, 1 reply; 12+ messages in thread
From: George Dunlap @ 2017-07-25 15:15 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel

On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In line with what is there in all the other schedulers.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> George Dunlap <george.dunlap@eu.citrix.com>

FYI forgot the 'CC:' for this and patch 5.  :-)  (No problem, just one
extra step to download the series into an mbox.)

 -George


* Re: [PATCH 2/5] xen: sched_null: check for pending tasklet work a bit earlier
  2017-06-29 12:56 ` [PATCH 2/5] xen: sched_null: check for pending tasklet work a bit earlier Dario Faggioli
@ 2017-07-25 15:24   ` George Dunlap
  0 siblings, 0 replies; 12+ messages in thread
From: George Dunlap @ 2017-07-25 15:24 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel

On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> Whether or not there's pending tasklet work to do is
> something we know from the tasklet_work_scheduled parameter.
>
> Deal with that as soon as possible, like all the other schedulers
> do.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH 3/5] xen: sched-null: support soft-affinity
  2017-06-29 12:56 ` [PATCH 3/5] xen: sched-null: support soft-affinity Dario Faggioli
@ 2017-07-25 15:50   ` George Dunlap
  0 siblings, 0 replies; 12+ messages in thread
From: George Dunlap @ 2017-07-25 15:50 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel

On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> The null scheduler does not really use hard-affinity for
> scheduling; it uses it for 'placement', i.e., for deciding
> to which pCPU to statically assign a vCPU.
>
> Let's use soft-affinity in the same way, of course with the
> difference that, if there's no free pCPU within the vCPU's
> soft-affinity, we fall back to checking the hard-affinity,
> instead of putting the vCPU in the waitqueue.
>
> This has no impact on the scheduling overhead, because
> soft-affinity is only considered in cold paths (like when a
> vCPU joins the scheduler for the first time, or is manually
> moved between pCPUs by the user).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH 4/5] xen: sched_null: add some tracing
  2017-07-25 15:15   ` George Dunlap
@ 2017-07-25 16:07     ` Dario Faggioli
  0 siblings, 0 replies; 12+ messages in thread
From: Dario Faggioli @ 2017-07-25 16:07 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel



On Tue, 2017-07-25 at 16:15 +0100, George Dunlap wrote:
> On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > In line with what is there in all the other schedulers.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > ---
> > George Dunlap <george.dunlap@eu.citrix.com>
> 
> FYI forgot the 'CC:' for this and patch 5.  :-)  
>
Ah, indeed. Sorry for that.

> (No problem, just one
> extra step to download the series into an mbox.)
> 
Ok, will pay more attention next time. :-)

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [PATCH 4/5] xen: sched_null: add some tracing
  2017-06-29 12:56 ` [PATCH 4/5] xen: sched_null: add some tracing Dario Faggioli
  2017-07-25 15:15   ` George Dunlap
@ 2017-07-26 14:50   ` George Dunlap
  1 sibling, 0 replies; 12+ messages in thread
From: George Dunlap @ 2017-07-26 14:50 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel

On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In line with what is there in all the other schedulers.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH 5/5] tools: tracing: handle null scheduler's events
  2017-06-29 12:56 ` [PATCH 5/5] tools: tracing: handle null scheduler's events Dario Faggioli
@ 2017-07-26 14:51   ` George Dunlap
  0 siblings, 0 replies; 12+ messages in thread
From: George Dunlap @ 2017-07-26 14:51 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel

On Thu, Jun 29, 2017 at 1:56 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> In both xentrace and xenalyze.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

