* performance of credit2 on hybrid workload
@ 2011-05-23  8:15 David Xu
  2011-05-25 16:18 ` George Dunlap
  0 siblings, 1 reply; 10+ messages in thread
From: David Xu @ 2011-05-23  8:15 UTC (permalink / raw)
  To: xen-devel; +Cc: george.dunlap


Hi,

The Xen 4.1 datasheet says that the credit2 scheduler is designed for
latency-sensitive workloads. Does it improve anything for hybrid workloads
that include both CPU-bound and latency-sensitive I/O work? For example, if
a VM simultaneously runs a CPU-bound task that burns the CPU and an
I/O-bound (latency-sensitive) task, will the latency be guaranteed? And
how?

Regards,
Cong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: performance of credit2 on hybrid workload
  2011-05-23  8:15 performance of credit2 on hybrid workload David Xu
@ 2011-05-25 16:18 ` George Dunlap
       [not found]   ` <BANLkTi=57gDitoq7-T7n9Zh0_ZrCMuxfRg@mail.gmail.com>
  0 siblings, 1 reply; 10+ messages in thread
From: George Dunlap @ 2011-05-25 16:18 UTC (permalink / raw)
  To: David Xu; +Cc: George Dunlap, xen-devel

On Mon, 2011-05-23 at 09:15 +0100, David Xu wrote:
> Hi,
> 
> 
> The Xen 4.1 datasheet says that the credit2 scheduler is designed for
> latency-sensitive workloads. Does it improve anything for hybrid
> workloads that include both CPU-bound and latency-sensitive I/O work?
> For example, if a VM simultaneously runs a CPU-bound task that burns
> the CPU and an I/O-bound (latency-sensitive) task, will the latency be
> guaranteed? And how?

At the moment, the "mixed workload" problem, where a single VM does both
cpu-intensive and latency-sensitive* workloads, has not yet been
addressed.  I have some ideas, but I haven't implemented them.

* i/o-bound is not the same as latency sensitive.  They obviously go
together frequently, but I would make a distinction between them.  For
example, an scp (copy over ssh) can easily become cpu-bound if there is
competition for the cpu -- but it is nonetheless latency sensitive.  (I
guess to put it another way, a workload which is latency-sensitive may
become i/o-bound if its scheduling latency is too high.)

 -George


* Re: performance of credit2 on hybrid workload
       [not found]     ` <1306401493.21026.8526.camel@elijah>
@ 2011-06-01  0:55       ` David Xu
  2011-06-01  9:31         ` George Dunlap
  0 siblings, 1 reply; 10+ messages in thread
From: David Xu @ 2011-06-01  0:55 UTC (permalink / raw)
  To: George Dunlap, xen-devel



Hi,

I want to reduce the latency of a specific VM. How should I do this with
the credit scheduler? For example, I could add another parameter, *latency*,
besides *weight* and *cap*, and each time schedule first the vcpu whose VM
has the smallest latency value. Thanks.
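[A minimal sketch of the proposed selection rule. This is purely
illustrative: as George notes in his reply, the credit scheduler has no
such parameter, and every name below (`pick_next`, `latency`, the runqueue
layout) is hypothetical.]

```python
def pick_next(runqueue):
    """Pick the next vcpu under the hypothetical scheme: the vcpu whose
    VM has the smallest `latency` target runs first; ties are broken by
    the existing `weight` parameter (larger weight first)."""
    return min(runqueue, key=lambda v: (v["latency"], -v["weight"]))

runq = [
    {"name": "dom1.vcpu0", "latency": 30, "weight": 256},
    {"name": "dom2.vcpu0", "latency": 5,  "weight": 128},  # latency-sensitive VM
]
```

Here `pick_next(runq)` would choose `dom2.vcpu0`, since its latency target
is the smallest regardless of its lower weight.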

Regards,
Cong

2011/5/26 George Dunlap <george.dunlap@citrix.com>

> Please reply to the list. :-)
>
> Also, this is a question about credit1, so it should arguably be a
> different thread.
>
>  -George
>
> On Wed, 2011-05-25 at 19:34 +0100, David Xu wrote:
> > Thanks. The boost mechanism in credit can significantly reduce the
> > scheduling latency for pure I/O workloads. Since the minimum interval
> > of credit scheduling is 10ms, the latency for the target VM should be
> > on the order of 10ms as well (unless its credit is not used up and its
> > vcpu remains at the head of the runqueue). Why is the real latency in
> > my test (pinging the target VM) much shorter than 10ms? Does the vcpu
> > of the target VM remain at the head of the runqueue if it was boosted?
> >
> >
> > David


* Re: Re: performance of credit2 on hybrid workload
  2011-06-01  0:55       ` David Xu
@ 2011-06-01  9:31         ` George Dunlap
  2011-06-07 19:28           ` David Xu
  0 siblings, 1 reply; 10+ messages in thread
From: George Dunlap @ 2011-06-01  9:31 UTC (permalink / raw)
  To: David Xu; +Cc: xen-devel, George Dunlap

You cannot do that with the current code; to add such a parameter
would require major work to the scheduler.

 -George

On Wed, Jun 1, 2011 at 1:55 AM, David Xu <davidxu06@gmail.com> wrote:
> Hi,
> I want to reduce the latency of a specific VM. How should I do this with
> the credit scheduler? For example, I could add another parameter latency
> besides weight and cap, and each time schedule first the vcpu whose VM
> has the smallest latency value. Thanks.
> Regards,
> Cong


* Re: Re: performance of credit2 on hybrid workload
  2011-06-01  9:31         ` George Dunlap
@ 2011-06-07 19:28           ` David Xu
  2011-06-08 10:36             ` George Dunlap
  0 siblings, 1 reply; 10+ messages in thread
From: David Xu @ 2011-06-07 19:28 UTC (permalink / raw)
  To: George Dunlap, xen-devel



Hi George,

Could you share some of your ideas about how to address the "mixed
workload" problem, where a single VM runs both cpu-intensive and
latency-sensitive workloads, even though you haven't implemented them yet?
I am also working on it; maybe I can try some approaches and give you
feedback. Thanks.

Regards,
Cong



2011/6/1 George Dunlap <George.Dunlap@eu.citrix.com>

> You cannot do that with the current code; to add such a parameter
> would require major work to the scheduler.
>
>  -George


* Re: Re: performance of credit2 on hybrid workload
  2011-06-07 19:28           ` David Xu
@ 2011-06-08 10:36             ` George Dunlap
  2011-06-08 21:43               ` David Xu
  0 siblings, 1 reply; 10+ messages in thread
From: George Dunlap @ 2011-06-08 10:36 UTC (permalink / raw)
  To: David Xu; +Cc: xen-devel

On Tue, Jun 7, 2011 at 8:28 PM, David Xu <davidxu06@gmail.com> wrote:
> Hi George,
> Could you share some of your ideas about how to address the  "mixed workload"
> problem, where a single VM runs both
> cpu-intensive and latency-sensitive workloads, even though you haven't
> implemented them yet?  I am also working on it; maybe I can try some
> approaches and give you feedback. Thanks.

Well the main thing to remember is that you can't give the VM any
*more* time.  The amount of time it's allowed is defined by the
scheduler parameters (and the other VMs running).  So all you can do
is change *when* the VM gets the time.  So what you want the scheduler
to do is give the VM shorter timeslices *so that* it can get time more
frequently.

For example, the credit1 scheduler will let a VM burn through 30ms of
credit.  That means if its "fair share" is (say) 50%, then it has to
wait at least 30ms before being allowed to run again in order to
maintain fairness.  If its "fair share" is 33%, then the VM has to
wait at least 60ms.  If the scheduler were to preempt it after 5ms,
then the VM would only have to be delayed for 5ms or 10ms,
respectively; and if it were preempted after 1ms, it would only have
to be delayed 1ms or 2ms.
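[Editorially, the arithmetic above can be written as one formula: after
running for its timeslice, a vCPU entitled to a given fair share must wait
roughly timeslice * (1/share - 1) before it may run again. A sketch, where
`min_wait_ms` is an illustrative name, not a Xen function:]

```python
def min_wait_ms(timeslice_ms, fair_share):
    """After burning `timeslice_ms` of credit, a vCPU entitled to
    `fair_share` of a CPU must wait long enough that
    run / (run + wait) == fair_share, i.e. wait = run * (1/share - 1)."""
    return timeslice_ms * (1.0 / fair_share - 1.0)

# 30ms timeslice: a VM with a 50% share waits 30ms; a 33% share waits ~60ms.
# 5ms timeslice: the same VMs wait only 5ms and ~10ms before running again.
```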

So the real key to giving a VM with a mixed workload better latency
characteristics is not to wake it up sooner, but to preempt it sooner.

The problem is, of course, that preempting workloads which are *not*
latency sensitive too soon adds scheduling overhead, and reduces cache
effectiveness.  So the question becomes, how do I know how long to let
a VM run for?

One solution would be to introduce a scheduling parameter that will
tell the scheduler how long to set the preemption timer for.  Then if
an administrator knows he's running a mixed-workload VM, he can
shorten it down; or if he knows he's running a cpu-cruncher, he can
make it longer.  This would also be useful in verifying the logic of
"shorter timeslices -> less latency for mixed workloads"; i.e., we
could vary this number and see the effects.

One issue with adding this to the credit1 scheduler is that there are
only 3 priorities (BOOST, UNDER, and OVER), and scheduling is
round-robin within each priority.  It's a known issue with round-robin
scheduling that tasks which yield (or are preempted soon) are
discriminated against compared to tasks which use up their full
timeslice (or are preempted less soon).  So the results may not be
representative.

The next step would be to try to get the scheduler to determine the
latency characteristics of a VM automatically.  The key observation
here is that most of the time, latency-sensitive operations are
initiated with an interrupt; or to put it the other way, a pending
interrupt generally means that there is a latency sensitive operation
waiting to happen.  My idea was to have the scheduler look at the
historical rate of interrupts and determine a preemption timeslice
based on those, such that on average, the VM's credit would be enough
to run just when the next interrupt arrived for it to handle.
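[A rough sketch of this heuristic. The particulars are all assumptions not
stated above: Python pseudocode rather than hypervisor C, an exponentially
weighted moving average of inter-interrupt gaps for "historical rate", and
min/max clamps on the resulting timeslice.]

```python
class InterruptRateEstimator:
    """Illustrative sketch (not Xen code): track an EWMA of the gaps
    between interrupts delivered to a VM, then size the preemption
    timeslice so one run+wait cycle fits inside the average gap, i.e.
    the vCPU's credit tends to be available when the next interrupt
    arrives."""

    def __init__(self, alpha=0.2, min_slice_ms=1.0, max_slice_ms=30.0):
        self.alpha = alpha              # EWMA smoothing factor
        self.avg_gap_ms = None          # average inter-interrupt gap
        self.last_irq_ms = None         # timestamp of the last interrupt
        self.min_slice_ms = min_slice_ms
        self.max_slice_ms = max_slice_ms

    def on_interrupt(self, now_ms):
        """Record an interrupt delivered to the VM at time `now_ms`."""
        if self.last_irq_ms is not None:
            gap = now_ms - self.last_irq_ms
            if self.avg_gap_ms is None:
                self.avg_gap_ms = gap
            else:
                self.avg_gap_ms += self.alpha * (gap - self.avg_gap_ms)
        self.last_irq_ms = now_ms

    def timeslice_ms(self, fair_share):
        """Preemption timeslice for a vCPU entitled to `fair_share` of a CPU."""
        if self.avg_gap_ms is None:
            # No interrupt history: treat the VM as cpu-bound.
            return self.max_slice_ms
        # Run for t, then wait t*(1/share - 1); run + wait == avg gap
        # gives t = gap * share.
        slice_ms = self.avg_gap_ms * fair_share
        return max(self.min_slice_ms, min(self.max_slice_ms, slice_ms))
```

For instance, a VM receiving an interrupt every 10ms with a 50% share would
be preempted after about 5ms, so its credit recovers in time for the next
interrupt.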

It occurs to me now that after a certain point, interrupts themselves
become inefficient and drivers sometimes go into "polling" mode, which
would look to the scheduler the same as cpu-bound.  Hmm... bears
thinking about. :-)

Anyway, that's where I got in my thinking on this. Let me know what
you think. :-)

 -George


* Re: Re: performance of credit2 on hybrid workload
  2011-06-08 10:36             ` George Dunlap
@ 2011-06-08 21:43               ` David Xu
  2011-06-09 13:34                 ` George Dunlap
  0 siblings, 1 reply; 10+ messages in thread
From: David Xu @ 2011-06-08 21:43 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel



Hi George,

Thanks for your reply. I have ideas similar to yours: adding another
parameter that indicates the required latency, and then letting the
scheduler determine the latency characteristics of a VM automatically.
Adding another parameter and letting users set its value in advance sounds
similar to SEDF, but the configuration process can be hard and inflexible
when the workloads in a VM are complex, so in my opinion a task-aware
scheduler is better. Still, manual configuration can help us check the
effectiveness of the new parameter.

On the other hand, as you described, it is not easy to make the scheduler
determine the latency characteristics of a VM accurately and automatically
from the information available in the hypervisor, for instance pending
interrupts. Therefore, the key point for me is to find and implement a
scheduling helper that indicates which VM should be scheduled soon. For
example, for TCP traffic we could implement a tool similar to a packet
sniffer that captures packets and analyzes their header information to
infer the type of workload; the analysis result could then help the
scheduler make a decision. In fact, not all I/O-intensive workloads require
low latency; some only require high throughput. Of course, scheduling
latency significantly impacts throughput (the boost mechanism handles this
problem to some extent). What I want is to reduce the latency only for VMs
that require low latency, postpone the other VMs, and use other techniques
such as packet offloading to compensate for their loss and improve their
throughput.

This is just my coarse idea, and there are many open problems as well. I
hope we can discuss often and share our results. Thanks very much.

Regards,
Cong



* Re: Re: performance of credit2 on hybrid workload
  2011-06-08 21:43               ` David Xu
@ 2011-06-09 13:34                 ` George Dunlap
  2011-06-09 19:50                   ` David Xu
  0 siblings, 1 reply; 10+ messages in thread
From: George Dunlap @ 2011-06-09 13:34 UTC (permalink / raw)
  To: David Xu; +Cc: George Dunlap, xen-devel

On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
> Hi George,
> 
> 
> Thanks for your reply. I have ideas similar to yours: adding another
> parameter that indicates the required latency, and then letting the
> scheduler determine the latency characteristics of a VM automatically.
> Adding another parameter and letting users set its value in advance
> sounds similar to SEDF. But sometimes the configuration process is hard
> and inflexible when the workloads in a VM are complex. So in my
> opinion, a task-aware scheduler is better. However, manual
> configuration can help us check the effectiveness of the new
> parameter.

Great!  Sounds like we're on the same page.

> On the other hand, as you described, it is also not easy to make the
> scheduler determine the latency characteristics of a VM accurately and
> automatically with the information we can get from the hypervisor, for
> instance pending interrupts. Therefore, the key point for me is to
> find and implement a scheduling helper to indicate which VM should be
> scheduled soon. 

Remember though -- you can't just give a VM more CPU time.  Giving a VM
more CPU at one time means taking CPU time away at another time.  I
think the key is to think the opposite way -- taking away time from a
VM by giving it a shorter timeslice, so that you can give time back when
it needs it.

> For example, for TCP traffic, we can implement a tool similar to a
> packet sniffer to capture packets and analyze their header information
> to infer the type of workload. Then the analysis result can help the
> scheduler make a decision. In fact, not all I/O-intensive workloads
> require low latency; some of them only require high throughput. Of
> course, scheduling latency significantly impacts throughput (you
> handled this problem with the boost mechanism to some extent). 

The boost mechanism (and indeed the whole credit1 scheduler) was
actually written by someone else. :-)  And although it's good in theory,
the way it's implemented actually causes some problems.

I've just been talking to one of our engineers here who used to work for
a company which sold network cards.  Our discussion convinced me that we
shouldn't really need any more information about a VM than the
interrupts which have been delivered to it: even devices which go into
polling mode do so for a relatively brief period of time, then re-enable
interrupts again.  

> What I want is to reduce the latency only for VMs which require low
> latency while postponing other VMs, and to use other techniques such as
> packet offloading to compensate for their loss and improve their
> throughput.
> 
> 
> This is just my coarse idea, and there are many open problems as well.
> I hope we can discuss often and share our results. Thanks very
> much.

Yes, I look forward to seeing the results of your work.  Are you going
to be doing this on credit2?

Peace,
 -George


* Re: Re: performance of credit2 on hybrid workload
  2011-06-09 13:34                 ` George Dunlap
@ 2011-06-09 19:50                   ` David Xu
  2011-06-13 16:52                     ` David Xu
  0 siblings, 1 reply; 10+ messages in thread
From: David Xu @ 2011-06-09 19:50 UTC (permalink / raw)
  To: George Dunlap; +Cc: George Dunlap, xen-devel



> Remember though -- you can't just give a VM more CPU time.  Giving a VM
> more CPU at one time means taking CPU time away at another time.  I
> think the key is to think the opposite way -- taking away time from a
> VM by giving it a shorter timeslice, so that you can give time back when
> it needs it.

It seems that if the scheduler always schedules a given VM first, it will
use up its allocated credits sooner than the other VMs and start stealing
credits from them, which may cause unfairness. Your suggestion to think the
opposite way is reasonable. An efficient method to reduce scheduling
latency for a specific VM is to preempt the currently running VM when an
interrupt arrives. However, too-frequent context switches and interrupt
processing may negatively impact performance as well. BTW, do you know how
to give a VM running a mixed workload a shorter timeslice (e.g. 5ms) while
keeping the other VMs at the default value (30ms)?
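[The per-VM timeslice David asks about can be pictured with a tiny sketch.
Everything here is hypothetical: the thread describes a single default
30ms timeslice in credit1, with no per-domain override table, so both the
table and the helper below are invented purely for illustration.]

```python
DEFAULT_TSLICE_MS = 30  # the credit1 default discussed in the thread

# Hypothetical per-domain overrides; credit1 has no such table.
tslice_override_ms = {"mixed-workload-vm": 5}

def timeslice_for(domain_name):
    """Timeslice the scheduler would arm for this domain: a per-domain
    override if one is configured, otherwise the global default."""
    return tslice_override_ms.get(domain_name, DEFAULT_TSLICE_MS)
```

With such a table, only the mixed-workload VM would be preempted after
5ms, while all other domains keep the 30ms default.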

> I've just been talking to one of our engineers here who used to work for
> a company which sold network cards.  Our discussion convinced me that we
> shouldn't really need any more information about a VM than the
> interrupts which have been delivered to it: even devices which go into
> polling mode do so for a relatively brief period of time, then re-enable
> interrupts again.

Do you think a pending interrupt generally indicates a latency-sensitive
workload? From my point of view, it means there is an I/O-intensive
workload, which may not be latency-sensitive and may only require high
throughput.

> Yes, I look forward to seeing the results of your work.  Are you going
> to be doing this on credit2?

I am not familiar with credit2 yet, but I will delve into it in the
future. Of course, if I make any new progress, I will share my results
with you.




* Re: Re: performance of credit2 on hybrid workload
  2011-06-09 19:50                   ` David Xu
@ 2011-06-13 16:52                     ` David Xu
  0 siblings, 0 replies; 10+ messages in thread
From: David Xu @ 2011-06-13 16:52 UTC (permalink / raw)
  To: George Dunlap, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 5148 bytes --]

Hi,

Could you tell me how to check for pending interrupts during scheduling
without adding an extra risk of crashing? Thanks.

Regards,
Cong
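
A minimal sketch of the kind of check being asked about here: a read-only scan of a per-vCPU pending-event bitmap, loosely inspired by Xen's event-channel pending/mask words. All struct and function names below are hypothetical, not Xen's actual internals.

```c
#include <stdint.h>
#include <stddef.h>

#define PENDING_WORDS 8

/* Toy model of a per-vCPU pending-event bitmap.  All names here are
 * hypothetical; this is not Xen's actual internal API. */
struct toy_vcpu {
    uint64_t evtchn_pending[PENDING_WORDS]; /* set when an event is delivered */
    uint64_t evtchn_mask[PENDING_WORDS];    /* 1 = event masked */
};

/* Nonzero if the vCPU has at least one unmasked pending event.  Being a
 * pure read of per-vCPU state, a scheduler could consult it while picking
 * the next vCPU without taking extra locks: a stale read only costs a
 * slightly late boost, not a crash. */
static int vcpu_has_pending_events(const struct toy_vcpu *v)
{
    for (size_t i = 0; i < PENDING_WORDS; i++)
        if (v->evtchn_pending[i] & ~v->evtchn_mask[i])
            return 1;
    return 0;
}
```

The crash-safety point is the design choice: the scheduler only reads state that event delivery writes, so no new lock ordering is introduced.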

2011/6/9 David Xu <davidxu06@gmail.com>

> > Remember though -- you can't just give a VM more CPU time.  Giving a VM
> > more CPU at one time means taking CPU time away at another time.  I
> > think they key is to think the opposite way -- taking away time from a
> > VM by giving it a shorter timeslice, so that you can give time back when
> > it needs it.
>
> It seems that if the scheduler always schedules a VM first, it will soon
> use up its allocated credits compared with other VMs and steal credits
> from them, which may cause unfairness. Your suggestion to think the
> opposite way is reasonable. An efficient method to reduce scheduling
> latency for a specific VM is to preempt the currently running VM when an
> interrupt arrives. However, too-frequent context switches and interrupt
> processing may negatively impact performance as well. BTW, do you know
> how to give a VM running a mixed workload a shorter time-slice (e.g.
> 5ms) while keeping other VMs at the default value (30ms)?
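
As far as I know, in the credit scheduler the 30 ms timeslice is a single global value, so a per-VM slice would need a new per-domain parameter plumbed through the toolstack. A toy sketch of how such a parameter could feed the slice decision (all names hypothetical, not an existing Xen knob):

```c
/* Hypothetical per-domain scheduling parameter: a domain flagged as
 * running a mixed (latency-sensitive + CPU-bound) workload gets a short
 * slice; everyone else keeps the default.  Not an existing Xen knob. */
#define DEFAULT_SLICE_MS 30
#define SHORT_SLICE_MS    5

struct toy_dom {
    int mixed_workload; /* set via a toolstack knob, or inferred online */
};

static int pick_timeslice_ms(const struct toy_dom *d)
{
    return d->mixed_workload ? SHORT_SLICE_MS : DEFAULT_SLICE_MS;
}
```

The short slice does not give the VM more CPU overall; it only changes when the VM gets its fair share, which is exactly the "give time back when it needs it" framing above.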
>
> > I've just been talking to one of our engineers here who used to work for
> > a company which sold network cards.  Our discussion convinced me that we
> > shouldn't really need any more information about a VM than the
> > interrupts which have been delivered to it: even devices which go into
> > polling mode do so for a relatively brief period of time, then re-enable
> > interrupts again.
>
> Do you think a pending interrupt generally indicates a latency-sensitive
> workload? From my point of view, it means there is an I/O-intensive
> workload, which may not be latency-sensitive but may only require high
> throughput.
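
One crude way to separate the two cases in the question above (purely an illustration, with made-up thresholds) is to combine the interrupt rate with how much of its timeslice the vCPU actually burns:

```c
/* Crude classifier: many interrupts plus little CPU use suggests a
 * latency-sensitive workload; many interrupts plus heavy CPU use
 * suggests throughput-bound I/O.  Thresholds are made up. */
enum workload { WL_CPU_BOUND, WL_LATENCY_SENSITIVE, WL_THROUGHPUT };

struct vcpu_sample {
    unsigned irqs_per_sec; /* interrupts delivered to the vCPU */
    unsigned cpu_pct;      /* share of its timeslice actually consumed */
};

static enum workload classify(const struct vcpu_sample *s)
{
    if (s->irqs_per_sec < 100)   /* few interrupts: plain CPU work */
        return WL_CPU_BOUND;
    if (s->cpu_pct < 50)         /* wakes often, runs briefly */
        return WL_LATENCY_SENSITIVE;
    return WL_THROUGHPUT;        /* wakes often, runs long */
}
```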
>
> > Yes, I look forward to seeing the results of your work.  Are you going
> > to be doing this on credit2?
>
> I am not familiar with credit2, but I will delve into it in the future.
> Of course, if I make any new progress, I will share my results with you.
>
> 2011/6/9 George Dunlap <george.dunlap@citrix.com>
>
>> On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
>> > Hi George,
>> >
>> >
>> > Thanks for your reply. I have ideas similar to yours: adding another
>> > parameter that indicates the required latency, and then letting the
>> > scheduler determine the latency characteristics of a VM
>> > automatically. Firstly, adding another parameter and letting users
>> > set its value in advance sounds similar to SEDF. But sometimes the
>> > configuration process is hard and inflexible when the workloads in a
>> > VM are complex. So in my opinion, a task-aware scheduler is better.
>> > However, manual configuration can help us check the effectiveness of
>> > the new parameter.
>>
>> Great!  Sounds like we're on the same page.
>>
>> > On the other hand, as you described, it is also not easy or accurate
>> > to make the scheduler determine the latency characteristics of a VM
>> > automatically from the information we can get from the hypervisor,
>> > for instance delayed interrupts. Therefore, the key point for me is
>> > to find and implement a scheduling helper to indicate which VM
>> > should be scheduled soon.
>>
>> Remember though -- you can't just give a VM more CPU time.  Giving a VM
>> more CPU at one time means taking CPU time away at another time.  I
>> think the key is to think the opposite way -- taking away time from a
>> VM by giving it a shorter timeslice, so that you can give time back when
>> it needs it.
>>
>> > For example, for a TCP network, we can implement a tool similar to a
>> > packet sniffer that captures packets and analyzes their header
>> > information to infer the type of workload. The analysis result can
>> > then help the scheduler make a decision. In fact, not all
>> > I/O-intensive workloads require low latency; some of them only
>> > require high throughput. Of course, scheduling latency significantly
>> > impacts throughput (you handled this problem with the boost
>> > mechanism to some extent).
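
The sniffer idea above could start from a header heuristic as small as this; the port list and the 128-byte threshold are purely illustrative, not derived from any real classifier:

```c
#include <stdint.h>

/* Toy header-based classifier for the packet-sniffer idea: guess from a
 * TCP destination port and payload size whether a flow is
 * latency-sensitive.  Ports and the size threshold are illustrative. */
static int looks_latency_sensitive(uint16_t dst_port, uint32_t payload_len)
{
    switch (dst_port) {
    case 22:   /* ssh */
    case 23:   /* telnet */
    case 5900: /* vnc */
        return 1;
    default:
        /* small payloads suggest request/response traffic;
         * large ones suggest bulk transfer */
        return payload_len < 128;
    }
}
```

This kind of inspection would live in the driver domain's datapath, not the hypervisor, and would only supply a hint to the scheduler.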
>>
>> The boost mechanism (and indeed the whole credit1 scheduler) was
>> actually written by someone else. :-)  And although it's good in theory,
>> the way it's implemented actually causes some problems.
>>
>> I've just been talking to one of our engineers here who used to work for
>> a company which sold network cards.  Our discussion convinced me that we
>> shouldn't really need any more information about a VM than the
>> interrupts which have been delivered to it: even devices which go into
>> polling mode do so for a relatively brief period of time, then re-enable
>> interrupts again.
>>
>> > What I want is to reduce only the latency of a VM that requires low
>> > latency while postponing other VMs, and to use other technology such
>> > as packet offloading to compensate for their loss and improve their
>> > throughput.
>> >
>> >
>> > This is just my coarse idea, and there are many problems as well. I
>> > hope I can discuss with you often and share our results. Thanks very
>> > much.
>>
>> Yes, I look forward to seeing the results of your work.  Are you going
>> to be doing this on credit2?
>>
>> Peace,
>>  -George
>>
>>
>>
>>
>




Thread overview: 10+ messages
2011-05-23  8:15 performance of credit2 on hybrid workload David Xu
2011-05-25 16:18 ` George Dunlap
     [not found]   ` <BANLkTi=57gDitoq7-T7n9Zh0_ZrCMuxfRg@mail.gmail.com>
     [not found]     ` <1306401493.21026.8526.camel@elijah>
2011-06-01  0:55       ` David Xu
2011-06-01  9:31         ` George Dunlap
2011-06-07 19:28           ` David Xu
2011-06-08 10:36             ` George Dunlap
2011-06-08 21:43               ` David Xu
2011-06-09 13:34                 ` George Dunlap
2011-06-09 19:50                   ` David Xu
2011-06-13 16:52                     ` David Xu
