* [RTDS Patch v3 for Xen4.8]
@ 2017-07-03 16:17 Haoran Li
  2017-07-03 17:09 ` Dario Faggioli
  2017-07-03 18:27 ` Meng Xu
  0 siblings, 2 replies; 4+ messages in thread
From: Haoran Li @ 2017-07-03 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, mengxu, naroahlee

From: naroahlee <naroahlee@gmail.com>

 When multiple idle VCPUs that share the same PCPU as their previous
 running core invoke runq_tickle(), they all tickle that same PCPU.
 The tickled PCPU will pick at most one VCPU, i.e., the
 highest-priority one, to execute. The other VCPUs will not be
 scheduled for a period, even when there is an idle core, making
 these VCPUs starve unnecessarily for one period. Therefore, always
 make sure that we only tickle PCPUs that have not been tickled already.

Signed-off-by: Haoran Li <naroahlee@gmail.com>
Reviewed-by:   Meng Xu   <mengxu@cis.upenn.edu>

---
 xen/common/sched_rt.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1b30014..b3d55d8 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1144,12 +1144,11 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Called by wake() and context_saved()
  * We have a running candidate here, the kick logic is:
  * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle vcpu, kick it.
+ *    For cache benefit, we first search new->cpu.
+ * 2) now all pcpus are busy;
  *    among all the running vcpus, pick lowest priority one
  *    if snext has higher priority, kick it.
- *
  * TODO:
  * 1) what if these two vcpus belongs to the same domain?
  *    replace a vcpu belonging to the same domain introduces more overhead
@@ -1174,17 +1173,11 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
+    /* 1) if there is any idle pcpu, kick it. */
     /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /* For cache benefit, we search new->cpu first */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1197,9 +1190,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candidate has higher priority, kick out lowest priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {
@@ -1207,7 +1203,6 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         cpu_to_tickle = latest_deadline_vcpu->vcpu->processor;
         goto out;
     }
-
     /* didn't tickle any cpu */
     SCHED_STAT_CRANK(tickled_no_cpu);
     return;
-- 
1.9.1


On Fri, 2017-02-24 at 15:54 -0600, Haoran Li wrote:
> From: naroahlee <naroahlee@gmail.com>
>
> Bug Analysis:
>
Just kill this line above.

> When more than one idle VCPUs that have the same PCPU as their
> previous running core invoke runq_tickle(), they will tickle the same
> PCPU. The tickled PCPU will only pick at most one VCPU, i.e., the
> highest-priority one, to execute. The other VCPUs will not be
> scheduled for a period, even when there is an idle core, making these
> VCPUs unnecessarily starve for one period.
> Therefore, always make sure that we only tickle PCPUs that have not
> been tickled already.
>
And I'd say to wrap the lines at a shorter threshold. `git log',
for instance, indents the changelogs, and the idea is for them to
look good on an 80-character terminal.

> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -1144,9 +1144,10 @@ rt_vcpu_sleep(const struct scheduler *ops,
> struct vcpu *vc)
>   * Called by wake() and context_saved()
>   * We have a running candidate here, the kick logic is:
>   * Among all the cpus that are within the cpu affinity
> - * 1) if the new->cpu is idle, kick it. This could benefit cache hit
> - * 2) if there are any idle vcpu, kick it.
> - * 3) now all pcpus are busy;
> + * 1) if there are any idle vcpu, kick it.
> + *    For cache benefit, we first search new->cpu.
> + *     
> + * 2) now all pcpus are busy;
>
As Meng said, no blank line here.

>   *    among all the running vcpus, pick lowest priority one
>   *    if snext has higher priority, kick it.
>   *
> @@ -1174,17 +1175,11 @@ runq_tickle(const struct scheduler *ops,
> struct rt_vcpu *new)
>      cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
>      cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
>  
> -    /* 1) if new's previous cpu is idle, kick it for cache benefit
> */
> -    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
> -    {
> -        SCHED_STAT_CRANK(tickled_idle_cpu);
> -        cpu_to_tickle = new->vcpu->processor;
> -        goto out;
> -    }
> -
> -    /* 2) if there are any idle pcpu, kick it */
> +    /* 1) if there are any idle pcpu, kick it */
>
While there, do you mind adding a full stop at the end of the sentence?

>      /* The same loop also find the one with lowest priority */
> -    for_each_cpu(cpu, &not_tickled)
> +     /* For cache benefit, we search new->cpu first */
>
And this looks to me to be misindented.

If you fix these things and resend, you can add (together with Meng's):

Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

And I'm Cc-ing George, so he can also advise if he wants, as he is
also a scheduler maintainer... not to mention that he will most likely
be the one to commit the change, so please do Cc him yourself as well
when you resend the patch (I should have asked you to do that before,
but did not notice he was not there).

Thanks and Regards,
Dario


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [RTDS Patch v3 for Xen4.8]
  2017-07-03 16:17 [RTDS Patch v3 for Xen4.8] Haoran Li
@ 2017-07-03 17:09 ` Dario Faggioli
  2017-07-04  7:41   ` Jan Beulich
  2017-07-03 18:27 ` Meng Xu
  1 sibling, 1 reply; 4+ messages in thread
From: Dario Faggioli @ 2017-07-03 17:09 UTC (permalink / raw)
  To: Haoran Li, xen-devel; +Cc: mengxu


On Mon, 2017-07-03 at 11:17 -0500, Haoran Li wrote:
> From: naroahlee <naroahlee@gmail.com>
> 
>  When more than one idle VCPUs that have
>  the same PCPU as their previous running core invoke runq_tickle(),
> they will
>  tickle the same PCPU. The tickled PCPU will only pick at most one
> VCPU, i.e.,
>  the highest-priority one, to execute. The other VCPUs will not be
> scheduled
>  for a period, even when there is an idle core, making these VCPUs
>  unnecessarily starve for one period. Therefore, always make sure
> that we only
>  tickle PCPUs that have not been tickled already.
> 
> Signed-off-by: Haoran Li <naroahlee@gmail.com>
> Reviewed-by:   Meng Xu   <mengxu@cis.upenn.edu>
> 
So, from what I can see from the 'From' tag and from the pieces of
emails that appear below the patch, this is some kind of
resubmission/new version of a patch sent a while back.

However, the subject seems to have changed... Or in any case, the
current subject is no good.

It's also a bit unusual, and definitely not comfortable for people
managing the patch, to have a quoted email conversation below the patch
itself (or so I think). So, please, remove it.

Finally, in that quoted email conversation, I asked for some changes,
and said that, with them done, my Reviewed-by: would stand.

Have you made those changes? If yes, please, mention this somewhere
(Ideally, between the S-o-b, R-b tags and the patch itself, after a
'---' mark).

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


* Re: [RTDS Patch v3 for Xen4.8]
  2017-07-03 16:17 [RTDS Patch v3 for Xen4.8] Haoran Li
  2017-07-03 17:09 ` Dario Faggioli
@ 2017-07-03 18:27 ` Meng Xu
  1 sibling, 0 replies; 4+ messages in thread
From: Meng Xu @ 2017-07-03 18:27 UTC (permalink / raw)
  To: Haoran Li; +Cc: xen-devel, Dario Faggioli

Hi Haoran,

On Mon, Jul 3, 2017 at 12:17 PM, Haoran Li <naroahlee@gmail.com> wrote:
>
> From: naroahlee <naroahlee@gmail.com>
>
>  When more than one idle VCPUs that have
>  the same PCPU as their previous running core invoke runq_tickle(), they will
>  tickle the same PCPU. The tickled PCPU will only pick at most one VCPU, i.e.,
>  the highest-priority one, to execute. The other VCPUs will not be scheduled
>  for a period, even when there is an idle core, making these VCPUs
>  unnecessarily starve for one period. Therefore, always make sure that we only
>  tickle PCPUs that have not been tickled already.
>
> Signed-off-by: Haoran Li <naroahlee@gmail.com>
> Reviewed-by:   Meng Xu   <mengxu@cis.upenn.edu>


As Dario mentioned in the email, the title should be changed and the
email should start a new thread, instead of being a forwarded email.

A reference to the format of sending a newer version of patch can be
found at https://www.mail-archive.com/xen-devel@lists.xen.org/msg60115.html

In the commit message, you can add
---
Changes since v1
---
to state the changes made from the previous versions.
You can also refer to the previous discussion with a link in that
section; this makes the reviewers' lives easier.
The change log won't be committed.
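Concretely, a resend laid out that way might look like the sketch below (subject line, changelog entries and the link are all illustrative, drawn from the review comments in this thread, not from an actual v4):

```
[PATCH v4] xen: rtds: only tickle pcpus that are not tickled already

<commit message, wrapped at ~72 columns>

Signed-off-by: Haoran Li <naroahlee@gmail.com>
Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
---
Changes since v2:
 - do not tickle a pcpu that has already been tickled
 - fix comment indentation and numbering
 (v2 discussion: <link>)
---
 xen/common/sched_rt.c | 27 +++++++++++----------------
```

Everything between the two `---` marks is dropped by `git am`, so it reaches reviewers but never the committed history.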

Could you please send another version after resolving the concerns
raised by Dario and me?

Don't hesitate to ping me if you have any question.

Thanks,

Meng

-- 
-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [RTDS Patch v3 for Xen4.8]
  2017-07-03 17:09 ` Dario Faggioli
@ 2017-07-04  7:41   ` Jan Beulich
  0 siblings, 0 replies; 4+ messages in thread
From: Jan Beulich @ 2017-07-04  7:41 UTC (permalink / raw)
  To: Haoran Li; +Cc: xen-devel, Dario Faggioli, mengxu

>>> On 03.07.17 at 19:09, <dario.faggioli@citrix.com> wrote:
> On Mon, 2017-07-03 at 11:17 -0500, Haoran Li wrote:
>> From: naroahlee <naroahlee@gmail.com>
>> 
>>  When more than one idle VCPUs that have
>>  the same PCPU as their previous running core invoke runq_tickle(),
>> they will
>>  tickle the same PCPU. The tickled PCPU will only pick at most one
>> VCPU, i.e.,
>>  the highest-priority one, to execute. The other VCPUs will not be
>> scheduled
>>  for a period, even when there is an idle core, making these VCPUs
>>  unnecessarily starve for one period. Therefore, always make sure
>> that we only
>>  tickle PCPUs that have not been tickled already.
>> 
>> Signed-off-by: Haoran Li <naroahlee@gmail.com>
>> Reviewed-by:   Meng Xu   <mengxu@cis.upenn.edu>
>> 
> So, from what I can see from the 'From' tag, and from the pieces of
> emails, that appear below the patch, this is some kind of
> resubmission/new version, of a patch sent a while back.
> 
> However, the subject seems to have changed... Or in any case, the
> current subject is no good.
> 
> It's also a bit unusual, and definitely not comfortable for people
> managing the patch, to have a quoted email conversation below the patch
> itself (or so I think). So, please, remove it.
> 
> Finally, in that quoted email conversation, I asked for some changes,
> and said that, with them done, my Reviewed-by: would stand.
> 
> Have you made those changes? If yes, please, mention this somewhere
> (Ideally, between the S-o-b, R-b tags and the patch itself, after a
> '---' mark).

Additionally, you would almost never submit patches for anything other
than the unstable staging branch. The only exception is a change that
absolutely has to go into an older branch, but which is no longer
applicable at all to current staging.

If you do your development on an older version, so be it. But for
submission it is you who is responsible for doing (and testing!) the
forward port.

Jan

