* xen scheduler
@ 2015-05-18  7:54 Rajendra Bele
  2015-05-18 14:53 ` Dario Faggioli
  0 siblings, 1 reply; 13+ messages in thread
From: Rajendra Bele @ 2015-05-18  7:54 UTC (permalink / raw)
  To: Xen-devel, xen-devel



Dear Developers,

As per my knowledge, the Credit scheduler sorts its queue of VCPUs by
priority, based on credit value.
It follows the FCFS technique for equal priorities; if we apply SJF for
equal priorities instead, it will help reduce the waiting time spent in the
queue, basically for the UNDER-priority (credits > 0) VCPUs.

Obviously the situation is rare, but it will make sense when a large
number of VMs are active.

If anybody is working on this, I would like his/her comments on this idea.

Thanks and regards

Rajendra


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: xen scheduler
  2015-05-18  7:54 xen scheduler Rajendra Bele
@ 2015-05-18 14:53 ` Dario Faggioli
  2015-05-19  1:50   ` Rajendra Bele
  0 siblings, 1 reply; 13+ messages in thread
From: Dario Faggioli @ 2015-05-18 14:53 UTC (permalink / raw)
  To: Rajendra Bele; +Cc: George Dunlap, xen-devel, Xen-devel



[Adding George. In future, if you are interested in getting feedback on
a particular subsystem, look for it in the MAINTAINERS file, and Cc the
address(es) you find there]

On Mon, 2015-05-18 at 13:24 +0530, Rajendra Bele wrote:
> As per my knowledge, the Credit scheduler sorts its queue of VCPUs by
> priority, based on credit value.
>
Yes and no. :-)

This is probably formally correct, as:
 1. when sorting it, we do rearrange the runq in priority order
 2. the priority of a vCPU is _based_ on credits, as being in UNDER or
    in OVER state does depend on credits

However, as stated here:

/*
 * This is a O(n) optimized sort of the runq.
 *
 * Time-share VCPUs can only be one of two priorities, UNDER or OVER. We walk
 * through the runq and move up any UNDERs that are preceded by OVERS. We
 * remember the last UNDER to make the move up operation O(1).
 */
static void
csched_runq_sort(struct csched_private *prv, unsigned int cpu)

there are only two priorities, so, for Credit, "sorts its queue of VCPUs
with priority based on credit value" means "all the UNDER vCPUs come
before any OVER vCPU"... was that what you meant?
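To illustrate, the effect of csched_runq_sort() can be modelled with a small
toy (a Python sketch of mine, not the hypervisor's C; the names are
illustrative):

```python
# Toy model of what csched_runq_sort() achieves: a stable partition that
# moves every UNDER vCPU ahead of any OVER vCPU, while preserving the
# FCFS (arrival) order within each priority class.

UNDER, OVER = 0, 1  # illustrative tags for the two time-share priorities

def runq_sort(runq):
    """Return the runq with all UNDERs first, arrival order preserved."""
    unders = [v for v in runq if v["pri"] == UNDER]
    overs = [v for v in runq if v["pri"] == OVER]
    return unders + overs

runq = [{"name": "v1", "pri": OVER},
        {"name": "v2", "pri": UNDER},
        {"name": "v3", "pri": OVER},
        {"name": "v4", "pri": UNDER}]
# runq_sort(runq) yields v2, v4 (the UNDERs, in arrival order), then v1, v3
```

Note that within the UNDER group, v2 and v4 keep their FCFS order; their
relative credits play no role, which is exactly the point above.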

BTW, this is one of the differences between Credit and Credit2, as in
Credit2, the runqueues are kept sorted by credit order...

> It follows the FCFS technique for equal priorities; if we apply SJF
> for equal priorities instead, it will help reduce the waiting time
> spent in the queue, basically for the UNDER-priority (credits > 0)
> VCPUs.
>
Yes, I think that treating the various vCPUs in UNDER differently, based
on some parameter/state/etc. of theirs, would be good... actually, that's
why I like Credit2, and why we're trying to make it usable in production.

Doing the same in Credit is of course possible, but I fear it would turn
out to be really complex. Then again, we already have Credit2 doing
something like that... So I think that anyone wanting a scheduler with a
similar property should invest time in Credit2, rather than trying to
tweak Credit1 into that.

But then, of course, I may be wrong, and you'll come up with a 15-line
patch that does the trick! ;-P

Anyway, you're mentioning SJF, which would indeed be great, if it
weren't impossible to implement: "Another disadvantage of using shortest
job next is that the total execution time of a job must be known before
execution" (http://en.wikipedia.org/wiki/Shortest_job_next ) :-(

How were you thinking of approximating the execution time of the upcoming
execution instance of a vCPU? I'm asking because, in my experience, the
method chosen for that purpose has quite a bit of influence on the
effectiveness of a particular SJF implementation.

> Obviously the situation is rare, but it will make sense when a large
> number of VMs are active.
> 
I'm not sure I'm getting what you mean here. What's rare, that there are
many vCPUs in UNDER? I don't think it is. Or, in any case, it certainly
is the typical situation in which a scheduler is important (if there is
less work than CPUs, the scheduler does not count that much!), so it's a
good scenario to consider and try to improve... Or were you referring to
something else?

> If anybody is working on this, I would like his/her comments on this idea
> 
I don't think there is anyone working on this particular item, but
scheduling is certainly receiving some attention, and we're always happy
to discuss potential new features, improvements, and the like! :-)

Regards,
Dario



* Re: xen scheduler
  2015-05-18 14:53 ` Dario Faggioli
@ 2015-05-19  1:50   ` Rajendra Bele
  2015-05-19  8:43     ` Dario Faggioli
  0 siblings, 1 reply; 13+ messages in thread
From: Rajendra Bele @ 2015-05-19  1:50 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel, Xen-devel



Dear Dario

Thanks for the feedback on my comments.

In an O.S., in the scheduling of processes, equal-priority jobs are always
handled with FCFS.
The Credit scheduler also follows the same approach, where equal-priority
VCPUs are scheduled with FCFS.
In the Credit scheduler there are three priorities: BOOST, UNDER, OVER.
The local run queue is sorted on these priorities.
If we focus on the UNDER priority: e.g., a VCPU having 512 credits and a
VCPU having 256 credits will have the same priority, and the first VCPU
(512) will be scheduled first while the second VCPU (256) will have to
wait even though it has fewer credits.
In such a scenario, if instead of FCFS we follow Shortest Credit Next, it
will reduce the overall average waiting time and context-switch time;
hence a bit of enhancement in performance is possible.

In an O.S., the limitation of SJF is the calculation of process time, but
here in the Credit scheduler the credits are already known and recomputed
every 10 milliseconds, which is an additional advantage for implementation.
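The claim about waiting time can be checked with a toy calculation (my own
sketch; the assumption that a vCPU's next burst is proportional to its
credits is mine, not how Xen accounts time):

```python
# Compare average waiting time for three equal-priority (UNDER) vCPUs
# under FCFS vs. Shortest Credit First, assuming (toy assumption) each
# vCPU's burst length is proportional to its credits.

def avg_wait(order):
    """Average time each vCPU in 'order' waits before it first runs."""
    wait = elapsed = 0
    for credits in order:
        wait += elapsed      # this vCPU waited for everything before it
        elapsed += credits   # toy burst, proportional to credits
    return wait / len(order)

arrival = [512, 256, 128]   # FCFS: serve in arrival order
scf = sorted(arrival)       # SCF: fewest credits first

# avg_wait(arrival) = (0 + 512 + 768) / 3 ~= 426.7
# avg_wait(scf)     = (0 + 128 + 384) / 3 ~= 170.7
```

As with SJF in an O.S., serving the shortest (here: lowest-credit) vCPU
first minimizes the average wait in this toy model.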

This would definitely be useful if implemented successfully.

Please pass comments for further motivation.

Thanks and Regards

Rajendra





* Re: xen scheduler
  2015-05-19  1:50   ` Rajendra Bele
@ 2015-05-19  8:43     ` Dario Faggioli
  2015-05-21  7:55       ` Rajendra Bele
  0 siblings, 1 reply; 13+ messages in thread
From: Dario Faggioli @ 2015-05-19  8:43 UTC (permalink / raw)
  To: Rajendra Bele; +Cc: George Dunlap, xen-devel, Xen-devel



Hey,

First of all, can you please switch to plain text email (instead of HTML)
and avoid top-posting?

On Tue, 2015-05-19 at 07:20 +0530, Rajendra Bele wrote:

> In an O.S., in the scheduling of processes, equal-priority jobs are
> always handled with FCFS.
> The Credit scheduler also follows the same approach, where
> equal-priority VCPUs are scheduled with FCFS.
> In the Credit scheduler there are three priorities: BOOST, UNDER, OVER.
>
Correct.

> Local run queue is sorted on these priorities.
>
Sure.

> If we focus on the UNDER priority: e.g., a VCPU having 512 credits and
> a VCPU having 256 credits will have the same priority, and the first
> VCPU (512) will be scheduled first while the second VCPU (256) will
> have to wait even though it has fewer credits.
>
Exactly.

> In such a scenario, if instead of FCFS we follow Shortest Credit Next,
> it will reduce the overall average waiting time and context-switch
> time; hence a bit of enhancement in performance is possible.
>
I'm not sure how this will affect 'context switch time', but perhaps you
mean something different with it from what I have in mind (i.e., the
actual time it takes to swap two vCPUs on a physical processor).

Anyway, I think I see it now. Basically, you're proposing that we should
keep the runqueues sorted by credit even in Credit1, as we already do in
Credit2 (at least, the idea is the same). I certainly don't disagree,
as I already said that such a feature is one of the reasons I like Credit2
more than Credit1.

Which also means I certainly won't stop you from trying, but of course,
we'll have to see the patches (even in preliminary/RFC state) to
actually be able to judge how much complexity this would add; and we'll
have to see some thorough performance evaluation, to be sure it is worth
adding complexity, and that we don't introduce regressions in as many
workloads as we can imagine and check.

> In an O.S., the limitation of SJF is the calculation of process time,
> but here in the Credit scheduler the credits are already known and
> recomputed every 10 milliseconds, which is an advantage for
> implementation.
>
True, and this answers my question about how you were planning to
implement SJF. I'm not sure I'd still call the result SJF, though. In
fact, formally speaking, SJF assigns the priorities based on the actual
runtime of a task/vCPU; here you're proposing to use a configuration
parameter --which already is a form of priority, coming from the user--
as an indication of the execution time, creating a 'circle' which I'm
not sure will bring good things...

One possible caveat could be how Credit does load balancing between the
various pCPUs' runqueues. Right now we just steal random pieces of work
(paying attention to hard and soft affinity)... Here, you may want to
consider the credits of the vCPU(s) that you're trying to steal.

Also, the classic SJF is non-preemptive, and that is great as it makes
it simple to implement (if only it didn't require you to be clairvoyant,
it would be perfect!! :-P). Here, if you really plan to use credits as
your approximation, well, as you said yourself, credits change, so you
may need to look into a preemptive variant of SJF, which is way less
simple.
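For what it's worth, a shortest-remaining-time-first toy (my own sketch,
nothing Xen-specific) shows what the preemptive variant looks like:

```python
# A minimal preemptive-SJF (shortest-remaining-time-first) toy: at every
# tick the ready job with the least remaining work runs, so a newly
# arrived short job preempts a longer one already running.

def srtf(jobs):
    """jobs: list of (name, arrival_tick, burst_ticks).
    Returns the name of the job that ran at each tick (None if idle)."""
    remaining = {name: burst for name, _, burst in jobs}
    timeline = []
    t = 0
    while any(remaining.values()):
        ready = [name for name, arrival, _ in jobs
                 if arrival <= t and remaining[name] > 0]
        if ready:
            run = min(ready, key=remaining.get)  # shortest remaining first
            timeline.append(run)
            remaining[run] -= 1
        else:
            timeline.append(None)  # CPU idle this tick
        t += 1
    return timeline

# "A" (4 ticks) starts first; "B" (2 ticks) arrives at t=1 and preempts it:
# srtf([("A", 0, 4), ("B", 1, 2)]) == ["A", "B", "B", "A", "A", "A"]
```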

> This would definitely be useful if implemented successfully
>
Yeah, _if_ :-D

> Please pass comments for further motivation
>
I've done my best. :-)

Regards,
Dario




* Re: xen scheduler
  2015-05-19  8:43     ` Dario Faggioli
@ 2015-05-21  7:55       ` Rajendra Bele
  2015-05-21  9:57         ` Dario Faggioli
  0 siblings, 1 reply; 13+ messages in thread
From: Rajendra Bele @ 2015-05-21  7:55 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: George Dunlap, xen-devel, Xen-devel



Hi Dario,

Thanks for your comments,

Here, priority will remain the primary method of sorting VCPUs in the
local run queue; in addition to it, I am proposing Shortest Credit First,
which is similar to SJF but not exactly the SJF of an O.S., because this
is credit based.

My suggestion is to replace FCFS with SCF (Shortest Credit First) for
equal-priority VCPUs in the local run queue, to reduce the average
waiting time spent in the queue.

SCF will be preemptive, but the Credit scheduler is itself preemptive
after 30 milliseconds.

As per my knowledge, this feature hasn't been implemented in Credit2
either.

Credit2 also sorts VCPUs on priority and applies FCFS for equal priority.
Credit2 has a dynamic time-slice facility: instead of a fixed 30
milliseconds, we can change it to 5, 10, or 20; the rest is as it is in
Credit1.

Please continue commenting.

Thanks
Rajendra





* Re: xen scheduler
  2015-05-21  7:55       ` Rajendra Bele
@ 2015-05-21  9:57         ` Dario Faggioli
  0 siblings, 0 replies; 13+ messages in thread
From: Dario Faggioli @ 2015-05-21  9:57 UTC (permalink / raw)
  To: Rajendra Bele; +Cc: George Dunlap, xen-devel, Xen-devel



On Thu, 2015-05-21 at 13:25 +0530, Rajendra Bele wrote:
> Hi
> Dario,
>
Hi,

> Here, priority will remain the primary method of sorting VCPUs in the
> local run queue; in addition to it, I am proposing Shortest Credit
> First, which is similar to SJF but not exactly the SJF of an O.S.,
> because this is credit based.
>
Understood. My point was that basing it on credits will make the final
result rather different from SJF. That does not mean it won't work, of
course. :-)
> 
> My suggestion is to replace FCFS with SCF (Shortest Credit First) for
> equal-priority VCPUs in the local run queue, to reduce the average
> waiting time spent in the queue.
>
I got that, and I tried to outline what possible issues I could think
of... but with these things, you never know until you try.
> 
> SCF will be preemptive, but the Credit scheduler is itself preemptive
> after 30 milliseconds.
>
Sure it is. And in fact, right now, I'd say Credit is more Round-Robin
(there is a timeslice) than FCFS, in how it treats same priority vCPUs.
But let's not make this a "taxonomy bun fight"! :-P
> 
> As per my knowledge, this feature hasn't been implemented in Credit2
> either.
>
Credit2 keeps its runqueue sorted already. Have a look, for instance, at
__runq_insert(), in sched_credit2.c. It's not SJF (nor is it this SCF
variant you're proposing), and I never said it was... I've only said that
having the runqueue sorted is, in general, a good thing, and Credit2
does this already.

However, generic arguments and assumptions tend not to work that great
in scheduling... we'll see how well that adapts to Credit1 when you'll
have it implemented. :-D
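For illustration, a credit-ordered insertion in the spirit of what
__runq_insert() does can be sketched like this (my own Python toy; the real
code operates on Xen's linked lists, and the field names here are made up):

```python
import bisect

# Keep the runqueue sorted by credits, highest credits first, by
# inserting each new vCPU at its ordered position.

def runq_insert(runq, vcpu):
    """Insert vcpu so runq stays sorted by descending credits."""
    keys = [-v["credits"] for v in runq]     # negate: bisect is ascending
    pos = bisect.bisect_right(keys, -vcpu["credits"])
    runq.insert(pos, vcpu)

runq = []
for c in (256, 512, 128, 384):
    runq_insert(runq, {"credits": c})
# runq credits are now ordered [512, 384, 256, 128]
```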
> 
> Credit2 also sorts VCPUs on priority and applies FCFS for equal
> priority.
>
As already said, check __runq_insert() and runq_tickle() in
xen/common/sched_credit2.c.

> Credit2 has a dynamic time-slice facility: instead of a fixed 30
> milliseconds we can change it to 5, 10, or 20; the rest is as it is in
> Credit1.
>
Credit2 has a much more dynamic timeslice than that. Check out
csched2_runtime().

Oh, and you may find useful having a look at these slides:
 http://www-archive.xenproject.org/files/xensummit_intel09/Xenschedulerstatus.pdf

and to this (3 part) video:
 https://www.youtube.com/watch?v=o4frC5bv3es
 https://www.youtube.com/watch?v=2ZAY9J-Cwsw
 https://www.youtube.com/watch?v=TQPVODqEkqM

Although, as usual, the best possible *source* of information is the
*source* code! :-D

Regards,
Dario




* Re: Xen scheduler
  2011-04-19 20:51 Xen scheduler David Xu
@ 2011-04-20  8:47 ` George Dunlap
  0 siblings, 0 replies; 13+ messages in thread
From: George Dunlap @ 2011-04-20  8:47 UTC (permalink / raw)
  To: David Xu; +Cc: xen-devel

David,

xenbits went down about a month ago, and when we rebuilt it, we moved
a bunch of things around.

xenalyze can now be found here:
http://xenbits.xensource.com/ext/xenalyze

My schedule simulator can be found here:
http://xenbits.xensource.com/ext/gdunlap/sched-sim.hg

 -George

On Tue, Apr 19, 2011 at 9:51 PM, David Xu <davidxu06@gmail.com> wrote:
> Hi,
>
> I am a graduate student researching the scheduling of VMs. I found
> that George said there is a tracing mechanism for understanding the
> scheduling algorithm and a scheduler simulator for credit1. But the
> URLs for them, http://xenbits.xensource.com/ext/xenalyze.hg and
> http://xenbits.xensource.com/people/gdunlap/sched-sim.hg, seem
> unavailable now. Could you tell me how to get them now? BTW, what's
> the latest progress on credit2, and when will it be released? Thanks.
>
> Regards,
> Cong
>
>


* Xen scheduler
@ 2011-04-19 20:51 David Xu
  2011-04-20  8:47 ` George Dunlap
  0 siblings, 1 reply; 13+ messages in thread
From: David Xu @ 2011-04-19 20:51 UTC (permalink / raw)
  To: xen-devel, george.dunlap



Hi,

I am a graduate student researching the scheduling of VMs. I found that
George said there is a tracing mechanism for understanding the scheduling
algorithm and a scheduler simulator for credit1. But the URLs for them,
http://xenbits.xensource.com/ext/xenalyze.hg and
http://xenbits.xensource.com/people/gdunlap/sched-sim.hg, seem unavailable
now. Could you tell me how to get them now? BTW, what's the latest
progress on credit2, and when will it be released? Thanks.

Regards,
Cong



* Re: xen scheduler
  2007-10-24 19:55 xen scheduler Agarwal, Lomesh
@ 2007-10-25 23:29 ` Atsushi SAKAI
  0 siblings, 0 replies; 13+ messages in thread
From: Atsushi SAKAI @ 2007-10-25 23:29 UTC (permalink / raw)
  To: Agarwal, Lomesh; +Cc: xen-devel


Hi, 

I guess you have already found the point.
If not, please check xen/common/sched_credit.c first.

Alternatively, the attached patch may be useful for you.
(But it has not been committed yet; see the thread at
http://lists.xensource.com/archives/html/xen-devel/2007-06/msg00917.html)
My mail does not appear in the web archive (I only noticed now).
I think this function is useful for tuning, but I agree with Keir and
Emmanuel.

Thanks
Atsushi SAKAI


"Agarwal, Lomesh" <lomesh.agarwal@intel.com> wrote:

> I think Xen credit scheduler does accounting every 10 ms and schedules
> vcpus every 30 ms. How can I change these time slices for some
> experiments?
> 

[-- Attachment #2: tune_timeslice.patch --]
[-- Type: application/octet-stream, Size: 12504 bytes --]

diff -r 94cce9a51540 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/lowlevel/xc/xc.c	Thu May 17 16:45:38 2007 +0900
@@ -833,18 +833,21 @@ static PyObject *pyxc_sched_credit_domai
     uint32_t domid;
     uint16_t weight;
     uint16_t cap;
-    static char *kwd_list[] = { "domid", "weight", "cap", NULL };
-    static char kwd_type[] = "I|HH";
+    uint16_t slice;
+    static char *kwd_list[] = { "domid", "weight", "cap", "slice", NULL };
+    static char kwd_type[] = "I|HHH";
     struct xen_domctl_sched_credit sdom;
     
     weight = 0;
     cap = (uint16_t)~0U;
+    slice = 0;
     if( !PyArg_ParseTupleAndKeywords(args, kwds, kwd_type, kwd_list, 
-                                     &domid, &weight, &cap) )
+                                     &domid, &weight, &cap, &slice) )
         return NULL;
 
     sdom.weight = weight;
     sdom.cap = cap;
+    sdom.slice = slice;
 
     if ( xc_sched_credit_domain_set(self->xc_handle, domid, &sdom) != 0 )
         return pyxc_error_to_exception();
@@ -864,9 +867,10 @@ static PyObject *pyxc_sched_credit_domai
     if ( xc_sched_credit_domain_get(self->xc_handle, domid, &sdom) != 0 )
         return pyxc_error_to_exception();
 
-    return Py_BuildValue("{s:H,s:H}",
+    return Py_BuildValue("{s:H,s:H,s:H}",
                          "weight",  sdom.weight,
-                         "cap",     sdom.cap);
+                         "cap",     sdom.cap,
+                         "slice",   sdom.slice);
 }
 
 static PyObject *pyxc_domain_setmaxmem(XcObject *self, PyObject *args)
@@ -1290,6 +1294,8 @@ static PyMethodDef pyxc_methods[] = {
       "SMP credit scheduler.\n"
       " domid     [int]:   domain id to set\n"
       " weight    [short]: domain's scheduling weight\n"
+      " cap       [short]: domain's scheduling cap\n"
+      " slice     [short]: domain's scheduling slice\n"
       "Returns: [int] 0 on success; -1 on error.\n" },
 
     { "sched_credit_domain_get",
@@ -1299,7 +1305,9 @@ static PyMethodDef pyxc_methods[] = {
       "SMP credit scheduler.\n"
       " domid     [int]:   domain id to get\n"
       "Returns:   [dict]\n"
-      " weight    [short]: domain's scheduling weight\n"},
+      " weight    [short]: domain's scheduling weight\n"
+      " cap       [short]: domain's scheduling cap\n"
+      " slice     [short]: domain's scheduling slice\n" },
 
     { "evtchn_alloc_unbound", 
       (PyCFunction)pyxc_evtchn_alloc_unbound,
diff -r 94cce9a51540 tools/python/xen/xend/XendAPI.py
--- a/tools/python/xen/xend/XendAPI.py	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/xend/XendAPI.py	Tue May 15 19:52:10 2007 +0900
@@ -1455,10 +1455,12 @@ class XendAPI(object):
 
         #need to update sched params aswell
         if 'weight' in xeninfo.info['vcpus_params'] \
-           and 'cap' in xeninfo.info['vcpus_params']:
+           and 'cap' in xeninfo.info['vcpus_params'] \
+           and 'slice' in xeninfo.info['vcpus_params']:
             weight = xeninfo.info['vcpus_params']['weight']
             cap = xeninfo.info['vcpus_params']['cap']
-            xendom.domain_sched_credit_set(xeninfo.getDomid(), weight, cap)
+            slice = xeninfo.info['vcpus_params']['slice']
+            xendom.domain_sched_credit_set(xeninfo.getDomid(), weight, cap, slice)
 
     def VM_set_VCPUs_number_live(self, _, vm_ref, num):
         dom = XendDomain.instance().get_vm_by_uuid(vm_ref)
diff -r 94cce9a51540 tools/python/xen/xend/XendDomain.py
--- a/tools/python/xen/xend/XendDomain.py	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/xend/XendDomain.py	Wed May 16 14:48:20 2007 +0900
@@ -1393,13 +1393,14 @@ class XendDomain:
         except Exception, ex:
             raise XendError(str(ex))
     
-    def domain_sched_credit_set(self, domid, weight = None, cap = None):
+    def domain_sched_credit_set(self, domid, weight = None, cap = None, slice = None):
         """Set credit scheduler parameters for a domain.
 
         @param domid: Domain ID or Name
         @type domid: int or string.
         @type weight: int
         @type cap: int
+        @type slice: int
         @rtype: 0
         """
         dominfo = self.domain_lookup_nr(domid)
@@ -1416,10 +1417,16 @@ class XendDomain:
             elif cap < 0 or cap > dominfo.getVCpuCount() * 100:
                 raise XendError("cap is out of range")
 
+            if slice is None:
+                slice = int(0)
+            elif slice < 1 or slice > 100:
+                raise XendError("slice is out of range 1-100")
+
             assert type(weight) == int
             assert type(cap) == int
-
-            return xc.sched_credit_domain_set(dominfo.getDomid(), weight, cap)
+            assert type(slice) == int
+
+            return xc.sched_credit_domain_set(dominfo.getDomid(), weight, cap, slice)
         except Exception, ex:
             log.exception(ex)
             raise XendError(str(ex))
diff -r 94cce9a51540 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/xend/XendDomainInfo.py	Wed May 16 11:22:01 2007 +0900
@@ -411,7 +411,8 @@ class XendDomainInfo:
                 if xennode.xenschedinfo() == 'credit':
                     xendomains.domain_sched_credit_set(self.getDomid(),
                                                        self.getWeight(),
-                                                       self.getCap())
+                                                       self.getCap(),
+                                                       self.getSlice())
             except:
                 log.exception('VM start failed')
                 self.destroy()
@@ -1009,6 +1010,9 @@ class XendDomainInfo:
 
     def getWeight(self):
         return self.info.get('cpu_weight', 256)
+
+    def getSlice(self):
+        return self.info.get('cpu_slice', 30)
 
     def setResume(self, state):
         self._resume = state
diff -r 94cce9a51540 tools/python/xen/xend/server/SrvDomain.py
--- a/tools/python/xen/xend/server/SrvDomain.py	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/xend/server/SrvDomain.py	Wed May 16 15:39:23 2007 +0900
@@ -155,7 +155,8 @@ class SrvDomain(SrvDir):
     def op_domain_sched_credit_set(self, _, req):
         fn = FormFn(self.xd.domain_sched_credit_set,
                     [['dom', 'int'],
-                     ['weight', 'int']])
+                     ['weight', 'int'],
+                     ['slice', 'int']])
         val = fn(req.args, {'dom': self.dom.domid})
         return val
 
diff -r 94cce9a51540 tools/python/xen/xm/main.py
--- a/tools/python/xen/xm/main.py	Mon May 14 12:54:26 2007 -0600
+++ b/tools/python/xen/xm/main.py	Wed May 16 16:16:48 2007 +0900
@@ -132,7 +132,7 @@ SUBCOMMAND_HELP = {
     'log'         : ('', 'Print Xend log'),
     'rename'      : ('<Domain> <NewDomainName>', 'Rename a domain.'),
     'sched-sedf'  : ('<Domain> [options]', 'Get/set EDF parameters.'),
-    'sched-credit': ('[-d <Domain> [-w[=WEIGHT]|-c[=CAP]]]',
+    'sched-credit': ('[-d <Domain> [-w[=WEIGHT]|-c[=CAP]|-s[=SLICE]]]',
                      'Get/set credit scheduler parameters.'),
     'sysrq'       : ('<Domain> <letter>', 'Send a sysrq to a domain.'),
     'debug-keys'  : ('<Keys>', 'Send debug keys to Xen.'),
@@ -207,6 +207,7 @@ SUBCOMMAND_OPTIONS = {
        ('-d DOMAIN', '--domain=DOMAIN', 'Domain to modify'),
        ('-w WEIGHT', '--weight=WEIGHT', 'Weight (int)'),
        ('-c CAP',    '--cap=CAP',       'Cap (int)'),
+       ('-s SLICE',  '--slice=SLICE',   'Slice (int)'),
     ),
     'list': (
        ('-l', '--long',         'Output all VM details in SXP'),
@@ -1467,8 +1468,8 @@ def xm_sched_credit(args):
     check_sched_type('credit')
 
     try:
-        opts, params = getopt.getopt(args, "d:w:c:",
-            ["domain=", "weight=", "cap="])
+        opts, params = getopt.getopt(args, "d:w:c:s:",
+            ["domain=", "weight=", "cap=", "slice="])
     except getopt.GetoptError, opterr:
         err(opterr)
         usage('sched-credit')
@@ -1476,6 +1477,7 @@ def xm_sched_credit(args):
     domid = None
     weight = None
     cap = None
+    slice = None
 
     for o, a in opts:
         if o == "-d":
@@ -1483,18 +1485,20 @@ def xm_sched_credit(args):
         elif o == "-w":
             weight = int(a)
         elif o == "-c":
-            cap = int(a);
+            cap = int(a)
+        elif o == "-s":
+            slice = int(a)
 
     doms = filter(lambda x : domid_match(domid, x),
                   [parse_doms_info(dom)
                   for dom in getDomains(None, 'running')])
 
-    if weight is None and cap is None:
+    if weight is None and cap is None and slice is None:
         if domid is not None and doms == []: 
             err("Domain '%s' does not exist." % domid)
             usage('sched-credit')
         # print header if we aren't setting any parameters
-        print '%-33s %-2s %-6s %-4s' % ('Name','ID','Weight','Cap')
+        print '%-33s %-2s %-6s %-4s %-6s' % ('Name','ID','Weight','Cap','Slice')
         
         for d in doms:
             try:
@@ -1507,16 +1511,17 @@ def xm_sched_credit(args):
             except xmlrpclib.Fault:
                 pass
 
-            if 'weight' not in info or 'cap' not in info:
+            if 'weight' not in info or 'cap' not in info or 'slice' not in info:
                 # domain does not support sched-credit?
-                info = {'weight': -1, 'cap': -1}
+                info = {'weight': -1, 'cap': -1, 'slice': -1}
 
             info['weight'] = int(info['weight'])
             info['cap']    = int(info['cap'])
+            info['slice']  = int(info['slice'])
             
             info['name']  = d['name']
             info['domid'] = int(d['domid'])
-            print( ("%(name)-32s %(domid)3d %(weight)6d %(cap)4d") % info)
+            print( ("%(name)-32s %(domid)3d %(weight)6d %(cap)4d %(slice)5d") % info)
     else:
         if domid is None:
             # place holder for system-wide scheduler parameters
@@ -1532,8 +1537,12 @@ def xm_sched_credit(args):
                 get_single_vm(domid),
                 "cap",
                 cap)            
+            server.xenapi.VM.add_to_VCPUs_params_live(
+                get_single_vm(domid),
+                "slice",
+                slice)            
         else:
-            result = server.xend.domain.sched_credit_set(domid, weight, cap)
+            result = server.xend.domain.sched_credit_set(domid, weight, cap, slice)
             if result != 0:
                 err(str(result))
 
diff -r 94cce9a51540 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c	Mon May 14 12:54:26 2007 -0600
+++ b/xen/common/sched_credit.c	Wed May 16 17:04:47 2007 +0900
@@ -223,6 +223,7 @@ struct csched_dom {
     uint16_t active_vcpu_count;
     uint16_t weight;
     uint16_t cap;
+    uint16_t slice;
 };
 
 /*
@@ -709,6 +710,7 @@ csched_dom_cntl(
     {
         op->u.credit.weight = sdom->weight;
         op->u.credit.cap = sdom->cap;
+        op->u.credit.slice = sdom->slice;
     }
     else
     {
@@ -728,6 +730,9 @@ csched_dom_cntl(
 
         if ( op->u.credit.cap != (uint16_t)~0U )
             sdom->cap = op->u.credit.cap;
+
+        if ( op->u.credit.slice != 0 )
+            sdom->slice = op->u.credit.slice;
 
         spin_unlock_irqrestore(&csched_priv.lock, flags);
     }
@@ -756,6 +761,7 @@ csched_dom_init(struct domain *dom)
     sdom->dom = dom;
     sdom->weight = CSCHED_DEFAULT_WEIGHT;
     sdom->cap = 0U;
+    sdom->slice = CSCHED_MSECS_PER_TSLICE;
     dom->sched_priv = sdom;
 
     return 0;
@@ -1210,7 +1216,11 @@ csched_schedule(s_time_t now)
     /*
      * Return task to run next...
      */
-    ret.time = MILLISECS(CSCHED_MSECS_PER_TSLICE);
+    if ( snext->sdom != NULL )
+        ret.time = MILLISECS(snext->sdom->slice);
+    else
+        /* for the idle domain */
+        ret.time = MILLISECS(CSCHED_MSECS_PER_TSLICE);
     ret.task = snext->vcpu;
 
     CSCHED_VCPU_CHECK(ret.task);
diff -r 94cce9a51540 xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Mon May 14 12:54:26 2007 -0600
+++ b/xen/include/public/domctl.h	Tue May 15 11:08:52 2007 +0900
@@ -305,6 +305,7 @@ struct xen_domctl_scheduler_op {
         struct xen_domctl_sched_credit {
             uint16_t weight;
             uint16_t cap;
+            uint16_t slice;
         } credit;
     } u;
 };
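For reference, the defaulting and range checks the patch adds in
XendDomain.domain_sched_credit_set can be sketched standalone as below. This
is illustrative only, not part of Xend: validate_credit_params is a
hypothetical helper, the weight bounds are an assumption, and the parameter is
renamed slice_ms to avoid shadowing Python's built-in slice.

```python
# Hypothetical standalone sketch of the patch's parameter handling.
# slice_ms = 0 mirrors the patch's default: the hypervisor side treats a
# zero slice as "leave the current slice unchanged".

def validate_credit_params(weight=None, cap=None, slice_ms=None, n_vcpus=1):
    """Return the (weight, cap, slice) triple that would be handed on to
    xc.sched_credit_domain_set(), applying the patch's defaults and checks."""
    if weight is None:
        weight = 256  # CSCHED_DEFAULT_WEIGHT on the hypervisor side
    elif weight < 1 or weight > 65535:
        raise ValueError("weight is out of range")

    if cap is None:
        cap = 0  # 0 means "no cap"
    elif cap < 0 or cap > n_vcpus * 100:
        raise ValueError("cap is out of range")

    if slice_ms is None:
        slice_ms = 0  # 0 tells the hypervisor to keep the current slice
    elif slice_ms < 1 or slice_ms > 100:
        raise ValueError("slice is out of range 1-100")

    return weight, cap, slice_ms
```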



* xen scheduler
@ 2007-10-24 19:55 Agarwal, Lomesh
  2007-10-25 23:29 ` Atsushi SAKAI
  0 siblings, 1 reply; 13+ messages in thread
From: Agarwal, Lomesh @ 2007-10-24 19:55 UTC (permalink / raw)
  To: xen-devel

I think the Xen credit scheduler does accounting every 10 ms and schedules
vcpus every 30 ms. How can I change these time slices for some
experiments?


* Re: Xen scheduler
  2007-04-21  6:03 Xen scheduler pak333
  2007-04-21  9:21 ` pradeep singh rautela
@ 2007-04-22 13:09 ` Mike D. Day
  1 sibling, 0 replies; 13+ messages in thread
From: Mike D. Day @ 2007-04-22 13:09 UTC (permalink / raw)
  To: pak333; +Cc: xen-devel

On 21/04/07 06:03 +0000, pak333@comcast.net wrote:
>
>   Hi,
>
>
>
>   On running on a dual/quad core does the Xen scheduler take into
>   account the physical layout of the cores.
>
>   For example if a VM has two vcpus, and there are 4 physical cpus
>   free,  will it take care to assign the 2vcpus (from a VM) to 2 pcpus
>   on the same socket.


The scheduler only knows the affinity of vcpus for physical
cpus. The affinity is determined by a userspace application and can
be modified using a domain control hypercall. Look in
xen/common/domctl.c around line 568 for the following: 

    case XEN_DOMCTL_setvcpuaffinity:
    case XEN_DOMCTL_getvcpuaffinity:



When the credit scheduler migrates a vcpu to a pcpu, it only considers
pcpus for which the affinity bit is set. If the userspace application
sets the affinity so that only the bits for pcpus on the same socket are
set, then the vcpu will only run on pcpus sharing that socket.


Mike

-- 
Mike D. Day
IBM LTC
Cell: 919 412-3900
Sametime: ncmike@us.ibm.com AIM: ncmikeday  Yahoo: ultra.runner
PGP key: http://www.ncultra.org/ncmike/pubkey.asc


* Re: Xen scheduler
  2007-04-21  6:03 Xen scheduler pak333
@ 2007-04-21  9:21 ` pradeep singh rautela
  2007-04-22 13:09 ` Mike D. Day
  1 sibling, 0 replies; 13+ messages in thread
From: pradeep singh rautela @ 2007-04-21  9:21 UTC (permalink / raw)
  To: pak333; +Cc: xen-devel



On 4/21/07, pak333@comcast.net <pak333@comcast.net> wrote:
>
> Hi,
>
> On running on a dual/quad core does the Xen scheduler take into account
> the physical layout of the cores.
> For example if a VM has two vcpus, and there are 4 physical cpus free,
> will it take care to assign the 2vcpus (from a VM) to 2 pcpus on the same
> socket.
>

Shouldn't this be concealed? You may never know which physical CPU is
actually running a vcpu; it may get scheduled onto a different physical CPU
depending on the load on the other CPUs.

CMIIW

~psr

Thanks
> - Prabha
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
>


-- 
---
pradeep singh rautela

"Genius is 1% inspiration, and 99% perspiration" - not me :)



* Xen scheduler
@ 2007-04-21  6:03 pak333
  2007-04-21  9:21 ` pradeep singh rautela
  2007-04-22 13:09 ` Mike D. Day
  0 siblings, 2 replies; 13+ messages in thread
From: pak333 @ 2007-04-21  6:03 UTC (permalink / raw)
  To: xen-devel



Hi,

When running on a dual/quad core system, does the Xen scheduler take into account the physical layout of the cores?
For example, if a VM has two vcpus and there are 4 physical cpus free, will it take care to assign the 2 vcpus (from one VM) to 2 pcpus on the same socket?

Thanks
- Prabha



end of thread, other threads:[~2015-05-21  9:57 UTC | newest]

Thread overview: 13+ messages
-- links below jump to the message on this page --
2015-05-18  7:54 xen scheduler Rajendra Bele
2015-05-18 14:53 ` Dario Faggioli
2015-05-19  1:50   ` Rajendra Bele
2015-05-19  8:43     ` Dario Faggioli
2015-05-21  7:55       ` Rajendra Bele
2015-05-21  9:57         ` Dario Faggioli
  -- strict thread matches above, loose matches on Subject: below --
2011-04-19 20:51 Xen scheduler David Xu
2011-04-20  8:47 ` George Dunlap
2007-10-24 19:55 xen scheduler Agarwal, Lomesh
2007-10-25 23:29 ` Atsushi SAKAI
2007-04-21  6:03 Xen scheduler pak333
2007-04-21  9:21 ` pradeep singh rautela
2007-04-22 13:09 ` Mike D. Day
