* xen 4.5.0 rtds scheduler perform poorly with 2vms
@ 2015-11-23 15:42 Yu-An(Victor) Chen
  2015-11-23 16:07 ` Meng Xu
  2015-11-25  0:15 ` Dario Faggioli
  0 siblings, 2 replies; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-23 15:42 UTC (permalink / raw)
  To: xen-devel



Hi all,

So I was doing some experiments to evaluate the RTDS scheduler's
schedulability of real-time tasks using 1 VM with a period of 10000 and a
budget of 10000. The experiment results turned out as expected (RTDS performs
better than xen-credit).

But when I tried to perform similar experiments with 2 VMs (both now with a
period of 10000 and a budget of 5000), the schedulability of the real-time
tasks turned out really bad. Even if I have one VM idling and the other VM
running the real-time tasks, the schedulability of that VM is still really
poor (worse than xen-credit). Am I missing some configuration I should have
set for the 2-VM case? Thank you

My setup is the following:
Using Xen 4.5.0.
2 VMs sharing cores 0-7, with the RTDS scheduler; both have a period of 10000
and a budget of 5000.
Dom0 using one core from CPUs 8-15, with the RTDS scheduler, a period of 10000
and a budget of 10000.
The experiment I am doing is running real-time tasks with different total
utilization rates and measuring the task success rate in the VM.

Please let me know if you need any more information. Thank you!

Victor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-23 15:42 xen 4.5.0 rtds scheduler perform poorly with 2vms Yu-An(Victor) Chen
@ 2015-11-23 16:07 ` Meng Xu
  2015-11-23 16:35   ` Yu-An(Victor) Chen
  2015-11-25  0:15 ` Dario Faggioli
  1 sibling, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-23 16:07 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: xen-devel

Hi Yu-An,

2015-11-23 10:42 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>
> Hi all,
>
> So I was doing some experiments to evaluate the RTDS scheduler's schedulability of real-time tasks using 1 VM with a period of 10000 and a budget of 10000. The experiment results turned out as expected (RTDS performs better than xen-credit).


Thank you very much for doing this test and trying the RTDS scheduler. :-)

>
>
> But when I tried to perform similar experiments with 2 VMs (both now with a period of 10000 and a budget of 5000), the schedulability of the real-time tasks turned out really bad. Even if I have one VM idling and the other VM running the real-time tasks, the schedulability of that VM is still really poor (worse than xen-credit).


The RTDS scheduler is not work-conserving, while the credit scheduler is.
Even if you keep one VM idling, the real-time VM in which you run the RT
tasks will still get only 5000 budget in each 10000 period. This is what
the budget replenishment policy requires. However, the credit scheduler
will try to use all the available CPU time (on a best-effort basis) when
nothing else is running.

Another factor that may affect the performance is the period of the
VM. The VM period should be smaller than the periods of the tasks inside it.
Keep in mind that in the worst case, the VM may not get any resource for
2*(period - budget) time: this happens when the VM's first VCPU gets its
budget at the beginning of one period but only at the end of the next
period.
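To make that bound concrete, here is a tiny sketch (plain Python, just illustrating the arithmetic, not any Xen code; the function name is made up):

```python
def worst_case_starvation(period_us, budget_us):
    """Longest interval a VCPU may go without CPU service under a
    budget/period reservation: it receives its budget at the very start
    of one period and only at the very end of the next period, giving
    a gap of 2 * (period - budget)."""
    return 2 * (period_us - budget_us)

# The parameters from this thread: period 10000us, budget 5000us.
print(worst_case_starvation(10000, 5000))   # 10000 (us of possible starvation)
# A full-utilization VCPU (budget == period) can never starve this way.
print(worst_case_starvation(10000, 10000))  # 0
```

Shrinking the VM period (while keeping the same utilization) shrinks this gap proportionally.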


>
> Am I missing some configuration I should have set for the 2-VM case? Thank you
>
> My setup is the following:
> Using Xen 4.5.0.
> 2 VMs sharing cores 0-7, with the RTDS scheduler; both have a period of 10000 and a budget of 5000.
> Dom0 using one core from CPUs 8-15, with the RTDS scheduler, a period of 10000 and a budget of 10000.
> The experiment I am doing is running real-time tasks with different total utilization rates and measuring the task success rate in the VM.


Which kind of RT tasks are you running inside the VM? Are they
independent? Do they involve a lot of I/O or memory?

Thanks,

Meng
>
>
-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-23 16:07 ` Meng Xu
@ 2015-11-23 16:35   ` Yu-An(Victor) Chen
  2015-11-24  2:23     ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-23 16:35 UTC (permalink / raw)
  To: Meng Xu; +Cc: xen-devel



Hi Meng,

Thank you very much for replying!

The RT tasks I am running for each trial at a certain utilization rate are a
collection of real-time tasks, and each real-time task is a sequence of
jobs that are released periodically. All jobs are periodic, where each job
is defined by a period (and deadline) and a worst-case execution time. Each
job just runs a number of iterations of floating-point operations.
This is based on the base task.c provided with the LITMUS^RT userspace
library. So yes, they are independent and not I/O- or memory-intensive.
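In case the structure matters, the tasks behave roughly like this toy sketch (plain Python, not the actual task.c; the function and parameter names are made up for illustration):

```python
import time

def run_periodic_task(period_s, exec_s, num_jobs):
    """Release a job every period, burn CPU for roughly the job's
    execution time with floating-point busy work, and count how many
    jobs finish after their deadline (deadline == next release)."""
    misses = 0
    next_release = time.monotonic()
    for _ in range(num_jobs):
        deadline = next_release + period_s
        end = time.monotonic() + exec_s
        while time.monotonic() < end:   # busy-loop of FP operations
            _ = 3.14159 ** 0.5
        if time.monotonic() > deadline:
            misses += 1
        next_release += period_s
        time.sleep(max(0.0, next_release - time.monotonic()))
    return misses
```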

The period distribution I have for each task is (10ms, 100ms), which is
bigger than the VM period I specified using xl sched-rtds (10000us), but I
guess the lower bound is too close to the VM period. So, per your
suggestion, maybe shortening the VM period can improve the performance?


Thank you!

Victor


On Mon, Nov 23, 2015 at 8:07 AM, Meng Xu <xumengpanda@gmail.com> wrote:

> Hi Yu-An,
>
> 2015-11-23 10:42 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> >
> > Hi all,
> >
> > So I was doing some experiments to evaluate the RTDS scheduler's schedulability
> of real-time tasks using 1 VM with a period of 10000 and a budget of 10000. The
> experiment results turned out as expected (RTDS performs better than xen-credit).
>
>
> Thank you very much for doing this test and trying RTDS scheduler. :-)
>
> >
> >
> > But when I tried to perform similar experiments with 2 VMs (both now
> with a period of 10000 and a budget of 5000), the schedulability of the real-time
> tasks turned out really bad. Even if I have one VM idling and the other VM
> running the real-time tasks, the schedulability of that VM is still really
> poor (worse than xen-credit).
>
>
> The RTDS scheduler is not work-conserving, while the credit scheduler is. Even
> if you keep one VM idling, the real-time VM in which you run the RT tasks
> will still get only 5000 budget in each 10000 period. This is what the budget
> replenishment policy requires. However, the credit scheduler will try to use
> all the available CPU time (on a best-effort basis) when nothing else is running.
>
> Another factor that may affect the performance is the period of the
> VM. The VM period should be smaller than the periods of the tasks inside it.
> Keep in mind that in the worst case, the VM may not get any resource for
> 2*(period - budget) time: this happens when the VM's first VCPU gets its
> budget at the beginning of one period but only at the end of the next
> period.
>
>
> >
> > Am I missing some configuration I should have set for the 2-VM case? Thank
> you
> >
> > My setup is the following:
> > Using Xen 4.5.0.
> > 2 VMs sharing cores 0-7, with the RTDS scheduler; both have a period of 10000
> and a budget of 5000.
> > Dom0 using one core from CPUs 8-15, with the RTDS scheduler, a period of 10000
> and a budget of 10000.
> > The experiment I am doing is running real-time tasks with different
> total utilization rates and measuring the task success rate in the VM.
>
>
> Which kind of RT tasks are you running inside the VM? Are they
> independent? Do they involve a lot of I/O or memory?
>
> Thanks,
>
> Meng
> >
> >
> --
>
>
> -----------
> Meng Xu
> PhD Student in Computer and Information Science
> University of Pennsylvania
> http://www.cis.upenn.edu/~mengxu/
>



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-23 16:35   ` Yu-An(Victor) Chen
@ 2015-11-24  2:23     ` Meng Xu
  2015-11-24  4:57       ` Yu-An(Victor) Chen
  0 siblings, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-24  2:23 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: xen-devel

Hi,

2015-11-23 11:35 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> Hi Meng,
>
> Thank you very much for replying!
>
> The RT tasks I am running for each trial at a certain utilization rate are a
> collection of real-time tasks, and each real-time task is a sequence of jobs
> that are released periodically. All jobs are periodic, where each job is
> defined by a period (and deadline) and a worst-case execution time. Each job
> just runs a number of iterations of floating-point operations. This
> is based on the base task.c provided with the LITMUS^RT userspace library. So
> yes, they are independent and not I/O- or memory-intensive.

Ah, I see. Which version of LITMUS^RT did you use? LITMUS^RT has a bug
in the Xen environment: IPIs are not handled properly in LITMUS on Xen.
With this bug, the system performs worse when it is loaded, because the
LITMUS scheduler simply fails to respond to the ignored IPIs.

IIRC, this bug was fixed in the latest LITMUS^RT code.
Did you use the latest LITMUS code?

One way to debug the issue is:
Can you enable TRACE in LITMUS and collect the scheduling log in
the scenario where you see the bad performance?

>
> The period distribution I have for each task is (10ms, 100ms), which is bigger
> than the VM period I specified using xl sched-rtds (10000us), but I guess
> the lower bound is too close to the VM period. So, per your suggestion, maybe
> shortening the VM period can improve the performance?

Yes. At least it shortens the starvation interval.

Meng


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-24  2:23     ` Meng Xu
@ 2015-11-24  4:57       ` Yu-An(Victor) Chen
  2015-11-24  6:19         ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-24  4:57 UTC (permalink / raw)
  To: Meng Xu; +Cc: xen-devel



Hi Meng,

Thank you again for your kind reply!
I am currently using "litmus-rt-2014.2.patch" with my labmate's patch
for the IPI interrupt (https://github.com/LITMUS-RT/liblitmus/pull/1/files).
I asked my labmate about it; he said he was not sure whether there are more
IPI interrupt bugs other than the one he fixed.

I am using "st_trace" to check the success rate of the tasks; it only
tells me by how much each job misses its deadline.
Which TRACE facility in LITMUS do you mean?

litmus_log, ft_cpu_traceX, ft_msg_traceX, sched_trace, or st_trace?

https://wiki.litmus-rt.org/litmus/Tracing


At this point, I think the best way is to install the newest litmus-rt
(litmus-rt-2015.1.patch)?

Again, thank you very much, I really appreciate your help!

Victor






On Mon, Nov 23, 2015 at 6:23 PM, Meng Xu <xumengpanda@gmail.com> wrote:

> Hi,
>
> 2015-11-23 11:35 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> > Hi Meng,
> >
> > Thank you very much for replying!
> >
> > The RT tasks I am running for each trial at a certain utilization rate
> is a
> > collection of real-time tasks, and each real-time task is a sequence of
> jobs
> > that are released periodically. All jobs are periodic, where each job is
> > defined by a period (and deadline) and a worse-case execution time. Each
> job
> > is just running of a number of iterations of floating point operations.
> This
> > is based on the base task.c provided with the LITMUSRT userspace
> library. So
> > Yes they are independent and not I/O or memory intensive.
>
> Ah, I see. Which version of LITMUSRT did you use? LITMUS^RT has a bug
> in Xen environment. The IPI interrupt is not handled properly in
> LITMUS on Xen.  With the bug, the system performance is worse than
> when the system is loaded, because LITMUS scheduler just fails to
> respond due to the IPI ignorance.
>
> IIRC, this bug was fixed in the latest LITMUSRT code.
> Did you use the latest LITMUS code?
>
> One way to debug the issue is:
> Can you enable the TRACE in LITMUS and collact the scheduling log in
> the scenario when you see the bad performance?
>
> >
> > The period distribution I have for each task is (10ms,100ms) which is
> bigger
> > than the VM period I specified using xl sched-rtds (10000us), but I guess
> > the lower bound is too close to the VM period. So by your suggestion,
> maybe
> > shorten the period for VM can improve the performance?
>
> Yes. At least it shorten the starvation interval.
>
> Meng
>



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-24  4:57       ` Yu-An(Victor) Chen
@ 2015-11-24  6:19         ` Meng Xu
  0 siblings, 0 replies; 31+ messages in thread
From: Meng Xu @ 2015-11-24  6:19 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: xen-devel

2015-11-23 23:57 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> Hi Meng,
>
Hi,

> Thank you again for your kind reply!
No problem :-)

> I am currently using "litmus-rt-2014.2.patch" with my labmate's patch
> for the IPI interrupt (https://github.com/LITMUS-RT/liblitmus/pull/1/files)
> I asked my labmate about it; he said he was not sure whether there are more
> IPI interrupt bugs other than the one he fixed.
>
> I am using "st_trace" to check the success rate of the tasks; it only
> tells me by how much each job misses its deadline.
> Which TRACE facility in LITMUS do you mean?

Yes! If it is the IPI issue, you should see that the destination CPU is not
responding to the IPI in LITMUS, not in Xen.

>
> litmus_log, ft_cpu_traceX, ft_msg_traceX, sched_trace, or st_trace?
>
> https://wiki.litmus-rt.org/litmus/Tracing
>
>
> At this point I think the best way is to install the newest litmus-rt
> (litmus-rt-2015.1.patch) ?

Yes. :-)

Another way to rule out the IPI delay issue is to have a background
task running on each VCPU of the VM with a small period, to see if that
improves the schedulability. If it does, the IPI issue may still exist
in LITMUS. :-( Then you have to look at the TRACE to figure out which
part misses the IPI in LITMUS.


>
> Again, thank you very much, I really appreciate your help!

No problem.

Meng
>
> Victor
>
>
>
>
>
>
> On Mon, Nov 23, 2015 at 6:23 PM, Meng Xu <xumengpanda@gmail.com> wrote:
>>
>> Hi,
>>
>> 2015-11-23 11:35 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>> > Hi Meng,
>> >
>> > Thank you very much for replying!
>> >
>> > The RT tasks I am running for each trial at a certain utilization rate
>> > are a collection of real-time tasks, and each real-time task is a
>> > sequence of jobs that are released periodically. All jobs are periodic,
>> > where each job is defined by a period (and deadline) and a worst-case
>> > execution time. Each job just runs a number of iterations of
>> > floating-point operations. This is based on the base task.c provided
>> > with the LITMUS^RT userspace library. So yes, they are independent and
>> > not I/O- or memory-intensive.
>>
>> Ah, I see. Which version of LITMUS^RT did you use? LITMUS^RT has a bug
>> in the Xen environment: IPIs are not handled properly in
>> LITMUS on Xen. With this bug, the system performs worse
>> when it is loaded, because the LITMUS scheduler simply fails to
>> respond to the ignored IPIs.
>>
>> IIRC, this bug was fixed in the latest LITMUS^RT code.
>> Did you use the latest LITMUS code?
>>
>> One way to debug the issue is:
>> Can you enable TRACE in LITMUS and collect the scheduling log in
>> the scenario where you see the bad performance?
>>
>> >
>> > The period distribution I have for each task is (10ms, 100ms), which is
>> > bigger than the VM period I specified using xl sched-rtds (10000us),
>> > but I guess the lower bound is too close to the VM period. So, per your
>> > suggestion, maybe shortening the VM period can improve the performance?
>>
>> Yes. At least it shortens the starvation interval.
>>
>> Meng
>
>



-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-23 15:42 xen 4.5.0 rtds scheduler perform poorly with 2vms Yu-An(Victor) Chen
  2015-11-23 16:07 ` Meng Xu
@ 2015-11-25  0:15 ` Dario Faggioli
  2015-11-27 16:36   ` Yu-An(Victor) Chen
  1 sibling, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-11-25  0:15 UTC (permalink / raw)
  To: Yu-An(Victor) Chen, xen-devel



On Mon, 2015-11-23 at 07:42 -0800, Yu-An(Victor) Chen wrote:
> Hi all,
> 
Hello,

> So I was doing some experiments to evaluate the RTDS scheduler's
> schedulability of real-time tasks using 1 VM with a period of 10000 and
> a budget of 10000. The experiment results turned out as expected (RTDS
> performs better than xen-credit).
> 
> But when I tried to perform similar experiments with 2 VMs (both now
> with a period of 10000 and a budget of 5000), the schedulability of the
> real-time tasks turned out really bad. Even if I have one VM idling and
> the other VM running the real-time tasks, the schedulability of that VM
> is still really poor (worse than xen-credit). Am I missing some
> configuration I should have set for the 2-VM case? Thank you
> 
What is it that you are trying to prove with this setup? I ask this on
top of everything Meng has already said about the non-work-conserving
nature of RTDS, and about the LITMUS IPI bug.

In fact, in general, real-time schedulers are really good at isolating
workloads, with precise time guarantees. If you have stuff that needs
to be done in 2 VMs, and you use RTDS for scheduling the 2 VMs, you'll
get good and precisely characterized isolation between them.

But if you put all the stuff in only 1 VM, and then limit its
utilization, all you are doing is making it hard for the things inside
the VM to achieve their target performance, compared both to an instance
of RTDS where that VM has 100% utilization and to (almost) any
general-purpose scheduler.

Then again, as Meng is saying, if you not only have "stuff" to do
inside the VM, but are interested in in-guest real-time behavior, then
the scheduling parameters of the VM(s) and those of the tasks in the
guest(s) should be set according to a proper real-time hierarchical
scheduling scheme that allows the guarantees to be met.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-25  0:15 ` Dario Faggioli
@ 2015-11-27 16:36   ` Yu-An(Victor) Chen
  2015-11-27 17:23     ` Dario Faggioli
  0 siblings, 1 reply; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-27 16:36 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: xen-devel



Hi Dario,

Thanks for the reply!

My goal for the experiment is to show that the Xen RTDS scheduler is better
than the credit scheduler when it comes to real-time tasks.
So my setup is:

for xen-credit: 2 VMs sharing 8 cores (CPUs 0-7) using the credit scheduler
(both with a weight of 800 and a cap of 400)
for xen-rtds: 2 VMs sharing 8 cores (CPUs 0-7) using RTDS (both with a period
of 10000 and a budget of 5000)
In both setups, dom0 is using 1 core from CPUs 8-15.

VM2 will run tasks that have a total utilization of 4 cores. VM1 will run
tasks with total utilizations from 1 to 4 cores, and I will record VM1's
schedulability.

I am hoping to see that VM1 using xen-rtds performs better than VM1 using
xen-credit, but what I am seeing is the opposite.

Thank you!

Victor


On Tue, Nov 24, 2015 at 4:15 PM, Dario Faggioli <dario.faggioli@citrix.com>
wrote:

> On Mon, 2015-11-23 at 07:42 -0800, Yu-An(Victor) Chen wrote:
> > Hi all,
> >
> Hello,
>
> > So I was doing some experiments to evaluate the RTDS scheduler's
> > schedulability of real-time tasks using 1 VM with a period of 10000 and
> > a budget of 10000. The experiment results turned out as expected (RTDS
> > performs better than xen-credit).
> >
> > But when I tried to perform similar experiments with 2 VMs (both now
> > with a period of 10000 and a budget of 5000), the schedulability of the
> > real-time tasks turned out really bad. Even if I have one VM idling and
> > the other VM running the real-time tasks, the schedulability of that VM
> > is still really poor (worse than xen-credit). Am I missing some
> > configuration I should have set for the 2-VM case? Thank you
> >
> What is it that you are trying to prove with this setup? I ask this on
> top of everything Meng has already said about the non-work-conserving
> nature of RTDS, and about the LITMUS IPI bug.
>
> In fact, in general, real-time schedulers are really good at isolating
> workloads, with precise time guarantees. If you have stuff that needs
> to be done in 2 VMs, and you use RTDS for scheduling the 2 VMs, you'll
> get good and precisely characterized isolation between them.
>
> But if you put all the stuff in only 1 VM, and then limit its
> utilization, all you are doing is making it hard for the things inside
> the VM to achieve their target performance, compared both to an instance
> of RTDS where that VM has 100% utilization and to (almost) any
> general-purpose scheduler.
>
> Then again, as Meng is saying, if you not only have "stuff" to do
> inside the VM, but are interested in in-guest real-time behavior, then
> the scheduling parameters of the VM(s) and those of the tasks in the
> guest(s) should be set according to a proper real-time hierarchical
> scheduling scheme that allows the guarantees to be met.
>
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-27 16:36   ` Yu-An(Victor) Chen
@ 2015-11-27 17:23     ` Dario Faggioli
  2015-11-27 17:41       ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-11-27 17:23 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: Meng Xu, xen-devel



On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
> Hi Dario,
> 
Hi,

> Thanks for the reply!
> 
You're welcome. :-)

I'm adding Meng to Cc...

> My goal for the experiment is to show that the Xen RTDS scheduler is
> better than the credit scheduler when it comes to real-time tasks.
> So my setup is:
> 
> for xen-credit: 2 VMs sharing 8 cores (CPUs 0-7) using the credit
> scheduler (both with a weight of 800 and a cap of 400)
> for xen-rtds: 2 VMs sharing 8 cores (CPUs 0-7) using RTDS (both with
> a period of 10000 and a budget of 5000)
> In both setups, dom0 is using 1 core from CPUs 8-15.
> 
I can't see where you say how many vCPUs each VM has. I presume 4? I'll
assume that is the case in the rest of this email.

> VM2 will run tasks that have a total utilization of 4 cores. VM1 will run
> tasks with total utilizations from 1 to 4 cores, and I will record VM1's
> schedulability.
> 
I'm not sure what you mean by "tasks that have a utilization of 4 cores"
(or, in general, "of X cores").

I'll assume it means that you run some kind of periodic tasks, each one
with a utilization ratio given by the computation time of its periodic
instances over its period, and that the sum of all these ratios is 4
(or, in general, X).

If this is not the case, please, clarify.

> I am hoping to see that the VM1 using xen-rtds will perform better
> than the VM1 using xen-credit, but what I am seeing is the opposite.
> 
Ok, let's try some maths. Not real-time scheduling theory (for
now :-) ), just maths.

Let's take VM2. What I understand is that the "internal demand", from the
tasks running inside it, would be 400%. Assuming the VM has 4 vCPUs,
since each vCPU has 5000/10000 = 50%, the VM has, in total, 200%. So, in
this case, I don't see how a demand of 400% could be satisfied by a
supply of 200%.

If the VM has 8 vCPUs, each at 50%, that means it could, in theory,
serve a demand of 400%. But that does not "just" happen. Here is where
real-time scheduling theory comes into play, and I'll let Meng comment
on this (if he's interested :-D), as I no longer recall the details of
those formulas!

In a nutshell, the budgets and periods of tasks and VMs must satisfy
certain conditions in order for the schedulability of the whole system
to be guaranteed.
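Just to put the plain bandwidth part of the maths in one place (a back-of-the-envelope sketch only, ignoring all the hierarchical scheduling subtleties; the function name is made up):

```python
def vm_cpu_supply(num_vcpus, budget, period):
    """Total CPU bandwidth a VM can receive, as a multiple of one
    physical core: each VCPU is reserved budget/period of a core."""
    return num_vcpus * budget / period

# 4 vCPUs at 5000/10000 each: only 200%, which cannot serve a 400% demand.
print(vm_cpu_supply(4, 5000, 10000))   # 2.0
# 8 vCPUs at 50% each: 400% in theory, but with zero margin for overhead.
print(vm_cpu_supply(8, 5000, 10000))   # 4.0
```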

Therefore, either because you have misconfigured the system, or because
you are hitting some real-time scheduling "issues", it is well possible
that, in this simple case, a best-effort scheduler does a pretty fine
job. There are indeed differences, and specific use cases and scenarios,
for Credit and RTDS.

The one you described is just not one of them. :-D

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-27 17:23     ` Dario Faggioli
@ 2015-11-27 17:41       ` Meng Xu
  2015-11-27 19:50         ` Yu-An(Victor) Chen
  0 siblings, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-27 17:41 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Yu-An(Victor) Chen, xen-devel

2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
>> Hi Dario,
>>
> Hi,
>
>> Thanks for the reply!
>>
> You're welcome. :-)
>
> I'm adding Meng to Cc...
>

Thanks! :-)

>> My goal for the experiment is to show that the Xen RTDS scheduler is
>> better than the credit scheduler when it comes to real-time tasks.
>> So my setup is:
>>
>> for xen-credit: 2 VMs sharing 8 cores (CPUs 0-7) using the credit
>> scheduler (both with a weight of 800 and a cap of 400)

So you set up a 400% CPU cap for each VM. In other words, each VM will
have a computation capacity almost equal to 4 cores. Because the VCPUs
are themselves scheduled, this four-core capacity is not equal to 4
physical cores on bare metal: the resource supplied to tasks by the
VCPUs also depends on the scheduling pattern (which affects the resource
supply pattern) of the VCPUs.

>> for xen-rtds: 2 VMs sharing 8 cores (CPUs 0-7) using RTDS (both with
>> a period of 10000 and a budget of 5000)

How many VCPUs does each VM have? If each VM has 4 VCPUs, each VM has
only 200% CPU capacity, which is only half of the configuration you made
for the credit scheduler.

>> in both setups, dom0 is using 1 core from CPUs 8-15

Do you have a quick evaluation report (similar to the evaluation section
of an academic paper) that describes how you did the experiments, so
that we can make a better guess at where things go wrong?

Right now, I'm guessing that the resources configured for each VM under
the credit and RTDS schedulers are not the same, and that some
parameters may not be configured correctly.

Another thing is that the credit scheduler is work-conserving, while
RTDS is not. So in an under-loaded situation you will see that the
credit scheduler may work better, because it tries to use as much
resource as it can. You can make the comparison fairer by setting the
cap for the credit scheduler as you did, and by running some background
VMs or tasks to consume the idle resources.
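A sketch of the cap setting I mean with xl (the domain names here are placeholders; the weight and cap values are the ones you used):

```shell
# Cap each VM at 400% so credit cannot opportunistically grab idle CPU
# time, making the comparison with RTDS fairer. Domain names are assumptions.
xl sched-credit -d vm1 -w 800 -c 400
xl sched-credit -d vm2 -w 800 -c 400
xl sched-credit                       # show parameters of all domains
```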

Meng


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-27 17:41       ` Meng Xu
@ 2015-11-27 19:50         ` Yu-An(Victor) Chen
  2015-11-28  0:17           ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-27 19:50 UTC (permalink / raw)
  To: Meng Xu; +Cc: Dario Faggioli, xen-devel



Hi Dario & Meng,

Thanks for your analysis!

VM1 and VM2 are both given 8 vCPUs and share physical CPUs 0-7, so in
theory "VM1 can get the services of 400%".
And yes, Dario, your explanation of the task utilization is correct.
So the resource configuration, as I mentioned before, is:

for xen-credit: 2 VMs (each given 8 vCPUs) sharing 8 cores (CPUs 0-7)
using the credit scheduler (both with a weight of 800 and a cap of 400)
for xen-rtds: 2 VMs (each given 8 vCPUs) sharing 8 cores (CPUs 0-7)
using RTDS (both with a period of 10000 and a budget of 5000)
In both setups, dom0 is using 1 core from CPUs 8-15.

In both setups:

I loaded VM2 with constantly running tasks with a total utilization of 4
cores, and in VM1 I ran iterations of tasks with total utilization rates
of 1, 2, 3, and 4 cores, then recorded their schedulability.

Attached is the result plot.


I have tried with the newest litmus-rt, and rtxen is still performing
poorly.

Thank you both very much again. If any part is unclear, please let me
know. Thanks!

Victor



On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com> wrote:

> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
> >> Hi Dario,
> >>
> > Hi,
> >
> >> Thanks for the reply!
> >>
> > You're welcome. :-)
> >
> > I'm adding Meng to Cc...
> >
>
> Thanks! :-)
>
> >> My goal for the experiment is to show that the Xen RTDS scheduler is
> >> better than the credit scheduler when it comes to real-time tasks.
> >> So my setup is:
> >>
> >> for xen-credit: 2 VMs sharing 8 cores (CPUs 0-7) using the credit
> >> scheduler (both with a weight of 800 and a cap of 400)
>
> So you set up a 400% CPU cap for each VM. In other words, each VM will
> have a computation capacity almost equal to 4 cores. Because the VCPUs
> are themselves scheduled, this four-core capacity is not equal to 4
> physical cores on bare metal: the resource supplied to tasks by the
> VCPUs also depends on the scheduling pattern (which affects the resource
> supply pattern) of the VCPUs.
>
> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both with
> >> period of 10000 and budget of 5000)
>
> How many VCPUs  for each VM? If each VM has 4 VCPU, each VM has only
> 200% CPU capacity, which is only half compared to the configuration
> you made for credit scheduler.
>
> >> in both setup, dom0 is using 1 core from cpu 8-15
>
> Do you have some quick evaluation report (similar to the evaluation
> section in academic papers) that describe how you did the experiments,
> so that we can have a better guess on where goes wrong.
>
> Right now, I'm guessing that: the resource configured for each VM
> under credit and rtds schedulers are not the same, and it is possible
> that some parameters are not configured correctly.
>
> Another thing is that:
> credit scheduler is work conserving, while RTDS is not.
> So under the under-loaded situation, you will see credit scheduler may
> work better because it try to use as much resource as it could. You
> can make the comparision more failrly by setting the cap for credit
> scheduler as you did, and running some background VM or tasks to
> consume the idle resource.
>
> Meng
>

[-- Attachment #1.2: Type: text/html, Size: 4618 bytes --]

[-- Attachment #2: 2vm-xen-rtxen.jpg --]
[-- Type: image/jpeg, Size: 32883 bytes --]

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-27 19:50         ` Yu-An(Victor) Chen
@ 2015-11-28  0:17           ` Meng Xu
  2015-11-28 12:20             ` Yu-An(Victor) Chen
  0 siblings, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-28  0:17 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: Dario Faggioli, xen-devel

2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> Hi Dario & Meng,
>
> Thanks for your analysis!
>
> VM1 and VM2 both are given 8 vCPUs and sharing physical CPU 0-7. So in
> theory,"VM1 can get the services of 400%"
> And yes, Dario, your explanation about the task utilization is correct.
>
> So the resource configuration as I mentioned before is:
>
> for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores (cpu 0-7)
> using credit scheduler(both with weight of 800 and capacity of 400)
> for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores (cpu0-7)
> using RTDS (both with period of 10000 and budget of 5000)
> In both setup, dom0 is using 1 core from cpu 8-15
>
> In both setup:
>
> I loaded VM2 with constant running task with total utilization of 4 cores.
> and in VM1 I run iterations of tasks of total utilization rate of 1 cores, 2
> cores, 3 cores, 4 cores, and then record their schedulbility.
>
> Attached is the result plot.
>
>
> I have tried with the newest litmust-rt, and rtxen is still performing
> poorly.

What are the characteristics of the tasks you generated? When a task set
misses a deadline, which task inside it misses the deadline?

Meng


>
> Thank you both very much again, if there is any unclear part, please lemme
> know, thx!
>
> Victor
>
>
>
> On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com> wrote:
>>
>> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
>> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
>> >> Hi Dario,
>> >>
>> > Hi,
>> >
>> >> Thanks for the reply!
>> >>
>> > You're welcome. :-)
>> >
>> > I'm adding Meng to Cc...
>> >
>>
>> Thanks! :-)
>>
>> >> My goal for the experiment is to show that xen rtds scheduler is
>> >> better than credit scheduler when it comes to real time tasks.
>> >> so my set up is:
>> >>
>> >> for xen-credit : 2vms sharing 8 cores (cpu 0-7) using credit
>> >> scheduler(both with weight of 800 and capacity of 400)
>>
>> So you set up 400% cpu cap for each VM. In other words, each VM will
>> have computation capacity almost equal to 4 cores. Because VCPUs are
>> also scheduled, the four-core capacity is not equal to 4 physical core
>> in bare metal, because the resource supplied to tasks from VCPUs also
>> depend on the scheduling pattern (which affect the resource supply
>> pattern) of the VCPUs.
>>
>> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both with
>> >> period of 10000 and budget of 5000)
>>
>> How many VCPUs  for each VM? If each VM has 4 VCPU, each VM has only
>> 200% CPU capacity, which is only half compared to the configuration
>> you made for credit scheduler.
>>
>> >> in both setup, dom0 is using 1 core from cpu 8-15
>>
>> Do you have some quick evaluation report (similar to the evaluation
>> section in academic papers) that describe how you did the experiments,
>> so that we can have a better guess on where goes wrong.
>>
>> Right now, I'm guessing that: the resource configured for each VM
>> under credit and rtds schedulers are not the same, and it is possible
>> that some parameters are not configured correctly.
>>
>> Another thing is that:
>> credit scheduler is work conserving, while RTDS is not.
>> So under the under-loaded situation, you will see credit scheduler may
>> work better because it try to use as much resource as it could. You
>> can make the comparision more failrly by setting the cap for credit
>> scheduler as you did, and running some background VM or tasks to
>> consume the idle resource.
>>
>> Meng
>
>



-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-28  0:17           ` Meng Xu
@ 2015-11-28 12:20             ` Yu-An(Victor) Chen
  2015-11-28 15:09               ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-28 12:20 UTC (permalink / raw)
  To: Meng Xu; +Cc: Dario Faggioli, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 4848 bytes --]

Hi Meng,

Thank you so much for being this patient.

So a task set is composed of a collection of real-time tasks, and each
real-time task is a sequence of jobs that are released periodically. All
jobs are periodic, where each task Ti is defined by a period (and deadline) pi
and a worst-case execution time ei, with pi ≥ ei ≥ 0 and pi, ei ∈ integers.
Each job consists of a number of iterations of floating-point operations.
This is based on the base task.c provided with the LITMUS-RT userspace
library.
So a task set contains many tasks, and each task releases a sequence of
jobs; if any job misses its deadline, the whole task set is counted as
failing to be schedulable.
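
That pass/fail rule can be sketched in a few lines (plain Python, not the
actual LITMUS-RT task.c; serving the task at a constant fraction of one CPU
is my simplifying assumption, not how the real supply behaves):

```python
def schedulable(period, wcet, deadline, supply_rate, njobs=1000):
    """Fluid-model check for one periodic task.

    Jobs are released every `period` ms, each demanding `wcet` ms of CPU,
    and the task is served at a constant fraction `supply_rate` of one CPU.
    The task set is counted unschedulable as soon as any job finishes
    after its (relative) deadline; unfinished work carries over.
    """
    backlog = 0.0
    for _ in range(njobs):
        backlog += wcet                     # this job's demand arrives
        if backlog / supply_rate > deadline:
            return False                    # job completes past its deadline
        # CPU time delivered before the next release drains the backlog
        backlog = max(0.0, backlog - supply_rate * period)
    return True

# A utilization-0.5 task fits a half-rate supply; anything above does not.
assert schedulable(21, 10.5, 21, 0.5)
assert not schedulable(21, 12.0, 21, 0.5)
```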

Thank you

Victor


On Fri, Nov 27, 2015 at 4:17 PM, Meng Xu <xumengpanda@gmail.com> wrote:

> 2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> > Hi Dario & Meng,
> >
> > Thanks for your analysis!
> >
> > VM1 and VM2 both are given 8 vCPUs and sharing physical CPU 0-7. So in
> > theory,"VM1 can get the services of 400%"
> > And yes, Dario, your explanation about the task utilization is correct.
> >
> > So the resource configuration as I mentioned before is:
> >
> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores (cpu
> 0-7)
> > using credit scheduler(both with weight of 800 and capacity of 400)
> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores (cpu0-7)
> > using RTDS (both with period of 10000 and budget of 5000)
> > In both setup, dom0 is using 1 core from cpu 8-15
> >
> > In both setup:
> >
> > I loaded VM2 with constant running task with total utilization of 4
> cores.
> > and in VM1 I run iterations of tasks of total utilization rate of 1
> cores, 2
> > cores, 3 cores, 4 cores, and then record their schedulbility.
> >
> > Attached is the result plot.
> >
> >
> > I have tried with the newest litmust-rt, and rtxen is still performing
> > poorly.
>
> What is the characteristics of tasks you generated? When a taskset
> miss ddl., which task inside miss deadline?
>
> Meng
>
>
> >
> > Thank you both very much again, if there is any unclear part, please
> lemme
> > know, thx!
> >
> > Victor
> >
> >
> >
> > On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com> wrote:
> >>
> >> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> >> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
> >> >> Hi Dario,
> >> >>
> >> > Hi,
> >> >
> >> >> Thanks for the reply!
> >> >>
> >> > You're welcome. :-)
> >> >
> >> > I'm adding Meng to Cc...
> >> >
> >>
> >> Thanks! :-)
> >>
> >> >> My goal for the experiment is to show that xen rtds scheduler is
> >> >> better than credit scheduler when it comes to real time tasks.
> >> >> so my set up is:
> >> >>
> >> >> for xen-credit : 2vms sharing 8 cores (cpu 0-7) using credit
> >> >> scheduler(both with weight of 800 and capacity of 400)
> >>
> >> So you set up 400% cpu cap for each VM. In other words, each VM will
> >> have computation capacity almost equal to 4 cores. Because VCPUs are
> >> also scheduled, the four-core capacity is not equal to 4 physical core
> >> in bare metal, because the resource supplied to tasks from VCPUs also
> >> depend on the scheduling pattern (which affect the resource supply
> >> pattern) of the VCPUs.
> >>
> >> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both with
> >> >> period of 10000 and budget of 5000)
> >>
> >> How many VCPUs  for each VM? If each VM has 4 VCPU, each VM has only
> >> 200% CPU capacity, which is only half compared to the configuration
> >> you made for credit scheduler.
> >>
> >> >> in both setup, dom0 is using 1 core from cpu 8-15
> >>
> >> Do you have some quick evaluation report (similar to the evaluation
> >> section in academic papers) that describe how you did the experiments,
> >> so that we can have a better guess on where goes wrong.
> >>
> >> Right now, I'm guessing that: the resource configured for each VM
> >> under credit and rtds schedulers are not the same, and it is possible
> >> that some parameters are not configured correctly.
> >>
> >> Another thing is that:
> >> credit scheduler is work conserving, while RTDS is not.
> >> So under the under-loaded situation, you will see credit scheduler may
> >> work better because it try to use as much resource as it could. You
> >> can make the comparision more failrly by setting the cap for credit
> >> scheduler as you did, and running some background VM or tasks to
> >> consume the idle resource.
> >>
> >> Meng
> >
> >
>
>
>
> --
>
>
> -----------
> Meng Xu
> PhD Student in Computer and Information Science
> University of Pennsylvania
> http://www.cis.upenn.edu/~mengxu/
>

[-- Attachment #1.2: Type: text/html, Size: 8021 bytes --]



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-28 12:20             ` Yu-An(Victor) Chen
@ 2015-11-28 15:09               ` Meng Xu
  2015-11-29 12:46                 ` Yu-An(Victor) Chen
  0 siblings, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-28 15:09 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: Dario Faggioli, xen-devel

2015-11-28 7:20 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> Hi Meng,
>
> Thank you so much for being this patience.
>
> So a task set is composed of a collection of real-time tasks, and each
> real-time task is a sequence of jobs that are released periodically... All
> jobs are periodic, where each job Ti is defined by a period (and deadline)
> pi and a worse-case execution time ei, with pi ≥ ei ≥ 0 and pi, ei ∈
> integers. Each job is comprised of a number of iterations of floating point
> operations during each job. This is based on the base task.c provided with
> the LITMUSRT userspace library.

I knew this information; basically, I have done experiments with this
kind of configuration before. What I want to know is the range of
periods, execution times, and deadlines in your task sets, and, when a
task set is unschedulable in your experiment, the st_jobs_stats result
from LITMUS-RT, which shows which job of which task misses its deadline.
If you try to service a task with period = 100, exe = 50, deadline =
100, specified as (100, 50, 100), on a VCPU with period = 100 and budget
= 50, you cannot guarantee this task will meet its deadlines. The reason
is that the VCPU can be unavailable when a job of the task is released.
A theoretical analysis can be found in the paper "Periodic Resource
Model for Compositional Real-Time Guarantees":
http://repository.upenn.edu/cgi/viewcontent.cgi?article=1033&context=cis_reports
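
The worst-case supply of such a VCPU can be sketched with the supply bound
function from that periodic resource model (my transcription of the Shin/Lee
model Γ = (Π, Θ) with period Π and budget Θ; treat it as an illustration, not
a substitute for the paper):

```python
import math

def sbf(t, pi, theta):
    """Worst-case CPU time a periodic resource (pi, theta) is guaranteed
    to supply in any interval of length t.

    The worst case has an initial blackout of up to 2*(pi - theta):
    one budget finishes as early as possible in its period, and the
    next is served as late as possible in the following one.
    """
    gap = pi - theta
    if t < gap:
        return 0.0
    k = math.floor((t - gap) / pi)          # complete periods of full budget
    return k * theta + max(0.0, (t - gap) - k * pi - gap)

# A (100, 50, 100) task needs 50 units of CPU within 100 time units, but a
# VCPU with period 100 and budget 50 guarantees none in the worst case:
assert sbf(100, 100, 50) == 0
assert sbf(200, 100, 50) == 50              # one full budget over two periods
```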

BTW, I'm assuming you are familiar with schedulability tests for
real-time systems, since you are talking about schedulability. If my
assumption is wrong, you should have a look at the survey paper "A
survey of hard real-time scheduling for multiprocessor systems"
(http://dl.acm.org/citation.cfm?id=1978814). It will at least tell
you when you should expect a task set to be schedulable in theory; if a
task set is deemed unschedulable in theory, it is very likely
unschedulable in practice. In addition, it will tell you that task-set
utilization is not the only factor that affects schedulability:
other factors, such as the highest task utilization in the set, the
scheduling algorithm, and the relations between task periods, affect
schedulability as well.

Best,

Meng


>
>
> On Fri, Nov 27, 2015 at 4:17 PM, Meng Xu <xumengpanda@gmail.com> wrote:
>>
>> 2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>> > Hi Dario & Meng,
>> >
>> > Thanks for your analysis!
>> >
>> > VM1 and VM2 both are given 8 vCPUs and sharing physical CPU 0-7. So in
>> > theory,"VM1 can get the services of 400%"
>> > And yes, Dario, your explanation about the task utilization is correct.
>> >
>> > So the resource configuration as I mentioned before is:
>> >
>> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores (cpu
>> > 0-7)
>> > using credit scheduler(both with weight of 800 and capacity of 400)
>> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores (cpu0-7)
>> > using RTDS (both with period of 10000 and budget of 5000)
>> > In both setup, dom0 is using 1 core from cpu 8-15
>> >
>> > In both setup:
>> >
>> > I loaded VM2 with constant running task with total utilization of 4
>> > cores.
>> > and in VM1 I run iterations of tasks of total utilization rate of 1
>> > cores, 2
>> > cores, 3 cores, 4 cores, and then record their schedulbility.
>> >
>> > Attached is the result plot.
>> >
>> >
>> > I have tried with the newest litmust-rt, and rtxen is still performing
>> > poorly.
>>
>> What is the characteristics of tasks you generated? When a taskset
>> miss ddl., which task inside miss deadline?
>>
>> Meng
>>
>>
>> >
>> > Thank you both very much again, if there is any unclear part, please
>> > lemme
>> > know, thx!
>> >
>> > Victor
>> >
>> >
>> >
>> > On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com> wrote:
>> >>
>> >> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
>> >> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
>> >> >> Hi Dario,
>> >> >>
>> >> > Hi,
>> >> >
>> >> >> Thanks for the reply!
>> >> >>
>> >> > You're welcome. :-)
>> >> >
>> >> > I'm adding Meng to Cc...
>> >> >
>> >>
>> >> Thanks! :-)
>> >>
>> >> >> My goal for the experiment is to show that xen rtds scheduler is
>> >> >> better than credit scheduler when it comes to real time tasks.
>> >> >> so my set up is:
>> >> >>
>> >> >> for xen-credit : 2vms sharing 8 cores (cpu 0-7) using credit
>> >> >> scheduler(both with weight of 800 and capacity of 400)
>> >>
>> >> So you set up 400% cpu cap for each VM. In other words, each VM will
>> >> have computation capacity almost equal to 4 cores. Because VCPUs are
>> >> also scheduled, the four-core capacity is not equal to 4 physical core
>> >> in bare metal, because the resource supplied to tasks from VCPUs also
>> >> depend on the scheduling pattern (which affect the resource supply
>> >> pattern) of the VCPUs.
>> >>
>> >> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both with
>> >> >> period of 10000 and budget of 5000)
>> >>
>> >> How many VCPUs  for each VM? If each VM has 4 VCPU, each VM has only
>> >> 200% CPU capacity, which is only half compared to the configuration
>> >> you made for credit scheduler.
>> >>
>> >> >> in both setup, dom0 is using 1 core from cpu 8-15
>> >>
>> >> Do you have some quick evaluation report (similar to the evaluation
>> >> section in academic papers) that describe how you did the experiments,
>> >> so that we can have a better guess on where goes wrong.
>> >>
>> >> Right now, I'm guessing that: the resource configured for each VM
>> >> under credit and rtds schedulers are not the same, and it is possible
>> >> that some parameters are not configured correctly.
>> >>
>> >> Another thing is that:
>> >> credit scheduler is work conserving, while RTDS is not.
>> >> So under the under-loaded situation, you will see credit scheduler may
>> >> work better because it try to use as much resource as it could. You
>> >> can make the comparision more failrly by setting the cap for credit
>> >> scheduler as you did, and running some background VM or tasks to
>> >> consume the idle resource.
>> >>
>> >> Meng
>> >
>> >
>>
>>
>>
>> --
>>
>>
>> -----------
>> Meng Xu
>> PhD Student in Computer and Information Science
>> University of Pennsylvania
>> http://www.cis.upenn.edu/~mengxu/
>
>



-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-28 15:09               ` Meng Xu
@ 2015-11-29 12:46                 ` Yu-An(Victor) Chen
  2015-11-29 15:38                   ` Meng Xu
  2015-11-29 16:18                   ` Dario Faggioli
  0 siblings, 2 replies; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-29 12:46 UTC (permalink / raw)
  To: Meng Xu; +Cc: Dario Faggioli, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 9102 bytes --]

Hi Meng,

So I will rewrite my setup here again, but this time I have shortened the
period and budget for RTDS as you suggested:
-----------------------------------------------------------------------------------------------------------------------------

For xen-credit: 2 VMs (each given 8 vCPUs) sharing 8 cores (CPUs 0-7)
under the credit scheduler (both with a weight of 800 and a cap of 400).
For xen-rtds: 2 VMs (each given 8 vCPUs) sharing 8 cores (CPUs 0-7)
under RTDS (both with a period of 4000 (4ms) and a budget of 2000 (2ms)).
In both setups, dom0 uses 1 core from CPUs 8-15.

In both setup:

I loaded VM2 with constantly running tasks with a total utilization of 4
cores, and in VM1 I ran iterations of task sets with total utilizations of
1 core, 2 cores, 3 cores, and 4 cores, and then recorded their schedulability.
-----------------------------------------------------------------------------------------------------------------------------

So st_jobs_stats for the missed deadline jobs are:

Trial #1, composed of 2 tasks (total task-set utilization = 1):

(period, exe, deadline) = (21ms, 12.023ms, 21ms) -> misses every deadline
(period, exe, deadline) = (100ms, 37.985ms, 100ms) -> no misses

Trial #2, composed of 2 tasks (total task-set utilization = 1):

(period, exe, deadline) = (68ms, 40.685ms, 68ms) -> misses every deadline
(period, exe, deadline) = (70ms, 28.118ms, 70ms) -> no misses

Trial #3, composed of 2 tasks (total task-set utilization = 1):

(period, exe, deadline) = (16ms, 11.613ms, 16ms) -> misses every deadline
(period, exe, deadline) = (46ms, 12.612ms, 46ms) -> no misses

I do notice that, within a task that misses deadlines, the completion
time gets progressively longer.
For example, in trial #3 a snapshot of the task's st_jobs_stats shows
job completion times of 79ms, then 87ms, then 95ms.

OK, I will look into the references you provided; I just started my
research in the RT field, and there is still a lot for me to learn.
Thank you again for the reply!

On Sat, Nov 28, 2015 at 7:09 AM, Meng Xu <xumengpanda@gmail.com> wrote:

> 2015-11-28 7:20 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> > Hi Meng,
> >
> > Thank you so much for being this patience.
> >
> > So a task set is composed of a collection of real-time tasks, and each
> > real-time task is a sequence of jobs that are released periodically...
> All
> > jobs are periodic, where each job Ti is defined by a period (and
> deadline)
> > pi and a worse-case execution time ei, with pi ≥ ei ≥ 0 and pi, ei ∈
> > integers. Each job is comprised of a number of iterations of floating
> point
> > operations during each job. This is based on the base task.c provided
> with
> > the LITMUSRT userspace library.
>
> I knew this information. Basically, I did the experiment with this
> kind of configuration before. What I want to know is what is the range
> of period, execution and deadline you have for the taskset and when a
> taskset is unschedulable under your experiment, what is the
> st_jobs_stats result from the LITMUS which will show which job of
> which task misses deadline. If you try to service a task with period =
> 100, exe = 50, deadline = 100, specificed as (100, 50, 100), with a
> VCPU with period = 100 and budget = 100, you will never schedule this
> task. The reason is because the VCPU can be unavailable when task is
> released.
> A theoretical analysis can be found in this paper "Periodic Resource
> Model for Compositional RealTime Guarantees"
>
> http://repository.upenn.edu/cgi/viewcontent.cgi?article=1033&context=cis_reports
>
> BTW, I'm assuming you are familiar with the schedulability test for
> real time systems since you are talking about schedulability. If my
> assumption is wrong, you should have a look at the survey paper A
> survey of hard real-time scheduling for multiprocessor systems
> (http://dl.acm.org/citation.cfm?id=1978814). This will at least tell
> you when you should expect a taskset is schedulable in theory. If a
> taskset is claimed unschedulabled in theory, it is very likely
> unscheduble in practice. In addition, it will tell you taskset
> utilization is not the only factor that affects the schedulablity.
> Other factors, such as the highest taks utilization in a taskset,
> scheduling algorithm, task period relation, can affect schedulability
> also.
>
> Best,
>
> Meng
>
>
> >
> >
> > On Fri, Nov 27, 2015 at 4:17 PM, Meng Xu <xumengpanda@gmail.com> wrote:
> >>
> >> 2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> >> > Hi Dario & Meng,
> >> >
> >> > Thanks for your analysis!
> >> >
> >> > VM1 and VM2 both are given 8 vCPUs and sharing physical CPU 0-7. So in
> >> > theory,"VM1 can get the services of 400%"
> >> > And yes, Dario, your explanation about the task utilization is
> correct.
> >> >
> >> > So the resource configuration as I mentioned before is:
> >> >
> >> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores (cpu
> >> > 0-7)
> >> > using credit scheduler(both with weight of 800 and capacity of 400)
> >> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores
> (cpu0-7)
> >> > using RTDS (both with period of 10000 and budget of 5000)
> >> > In both setup, dom0 is using 1 core from cpu 8-15
> >> >
> >> > In both setup:
> >> >
> >> > I loaded VM2 with constant running task with total utilization of 4
> >> > cores.
> >> > and in VM1 I run iterations of tasks of total utilization rate of 1
> >> > cores, 2
> >> > cores, 3 cores, 4 cores, and then record their schedulbility.
> >> >
> >> > Attached is the result plot.
> >> >
> >> >
> >> > I have tried with the newest litmust-rt, and rtxen is still performing
> >> > poorly.
> >>
> >> What is the characteristics of tasks you generated? When a taskset
> >> miss ddl., which task inside miss deadline?
> >>
> >> Meng
> >>
> >>
> >> >
> >> > Thank you both very much again, if there is any unclear part, please
> >> > lemme
> >> > know, thx!
> >> >
> >> > Victor
> >> >
> >> >
> >> >
> >> > On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com>
> wrote:
> >> >>
> >> >> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com
> >:
> >> >> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
> >> >> >> Hi Dario,
> >> >> >>
> >> >> > Hi,
> >> >> >
> >> >> >> Thanks for the reply!
> >> >> >>
> >> >> > You're welcome. :-)
> >> >> >
> >> >> > I'm adding Meng to Cc...
> >> >> >
> >> >>
> >> >> Thanks! :-)
> >> >>
> >> >> >> My goal for the experiment is to show that xen rtds scheduler is
> >> >> >> better than credit scheduler when it comes to real time tasks.
> >> >> >> so my set up is:
> >> >> >>
> >> >> >> for xen-credit : 2vms sharing 8 cores (cpu 0-7) using credit
> >> >> >> scheduler(both with weight of 800 and capacity of 400)
> >> >>
> >> >> So you set up 400% cpu cap for each VM. In other words, each VM will
> >> >> have computation capacity almost equal to 4 cores. Because VCPUs are
> >> >> also scheduled, the four-core capacity is not equal to 4 physical
> core
> >> >> in bare metal, because the resource supplied to tasks from VCPUs also
> >> >> depend on the scheduling pattern (which affect the resource supply
> >> >> pattern) of the VCPUs.
> >> >>
> >> >> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both with
> >> >> >> period of 10000 and budget of 5000)
> >> >>
> >> >> How many VCPUs  for each VM? If each VM has 4 VCPU, each VM has only
> >> >> 200% CPU capacity, which is only half compared to the configuration
> >> >> you made for credit scheduler.
> >> >>
> >> >> >> in both setup, dom0 is using 1 core from cpu 8-15
> >> >>
> >> >> Do you have some quick evaluation report (similar to the evaluation
> >> >> section in academic papers) that describe how you did the
> experiments,
> >> >> so that we can have a better guess on where goes wrong.
> >> >>
> >> >> Right now, I'm guessing that: the resource configured for each VM
> >> >> under credit and rtds schedulers are not the same, and it is possible
> >> >> that some parameters are not configured correctly.
> >> >>
> >> >> Another thing is that:
> >> >> credit scheduler is work conserving, while RTDS is not.
> >> >> So under the under-loaded situation, you will see credit scheduler
> may
> >> >> work better because it try to use as much resource as it could. You
> >> >> can make the comparision more failrly by setting the cap for credit
> >> >> scheduler as you did, and running some background VM or tasks to
> >> >> consume the idle resource.
> >> >>
> >> >> Meng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >>
> >>
> >> -----------
> >> Meng Xu
> >> PhD Student in Computer and Information Science
> >> University of Pennsylvania
> >> http://www.cis.upenn.edu/~mengxu/
> >
> >
>
>
>
> --
>
>
> -----------
> Meng Xu
> PhD Student in Computer and Information Science
> University of Pennsylvania
> http://www.cis.upenn.edu/~mengxu/
>

[-- Attachment #1.2: Type: text/html, Size: 12240 bytes --]



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 12:46                 ` Yu-An(Victor) Chen
@ 2015-11-29 15:38                   ` Meng Xu
  2015-11-29 16:27                     ` Dario Faggioli
  2015-11-29 16:18                   ` Dario Faggioli
  1 sibling, 1 reply; 31+ messages in thread
From: Meng Xu @ 2015-11-29 15:38 UTC (permalink / raw)
  To: Yu-An(Victor) Chen; +Cc: Dario Faggioli, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 10721 bytes --]

2015-11-29 7:46 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:

> Hi Meng,
>

Hi,


>
> So I will rewrite my setup here again, but this time I shorten the period
> and budget for RTDS like you suggested:
>

Nice! :-)



>
> -----------------------------------------------------------------------------------------------------------------------------
>

(I like this line, BTW. :-D)


>
>
> for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores (cpu
> 0-7) using credit scheduler(both with weight of 800 and capacity of 400)
> for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores (cpu0-7)
> using RTDS (both with period of 4000(4ms) and budget of 2000(2ms)))
> In both setup, dom0 is using 1 core from cpu 8-15
>
> In both setup:
>
> I loaded VM2 with constant running task with total utilization of 4 cores.
> and in VM1 I run iterations of tasks of total utilization rate of 1 cores,
> 2 cores, 3 cores, 4 cores, and then record their schedulbility.
>
> -----------------------------------------------------------------------------------------------------------------------------
>
> So st_jobs_stats for the missed deadline jobs are:
>
> trial #1 composed of 2 tasks: total tasks utilization rate = 1
>
> (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
>

Yes, this is the information I need, and we can solve the mystery now.
Let's look at this task:
(period, exe, deadline) = (21ms, 12.023ms, 21ms) -> misses every deadline
Its utilization is 12.023 / 21 ≈ 0.57, while your VCPU utilization is only
2ms / 4ms = 0.5.
So even when this task is pinned to one VCPU, it will still miss its
deadline, because it has only one thread. :-)
If you use one VCPU with 100% utilization, you should not see deadline
misses from this task; at least, it won't have all of its jobs missing
their deadlines.

So basically, it is because your configuration is incorrect. That's also
why you need to read some of the literature on real-time scheduling theory. ;-)
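
To make the check explicit, here is a small sketch (plain Python; the task
numbers are copied from the trial data in this thread, and 0.5 is the
per-VCPU bandwidth 2ms/4ms):

```python
# (period_ms, exec_ms) for the six tasks reported across the three trials
tasks = [
    (21, 12.023), (100, 37.985),    # trial 1
    (68, 40.685), (70, 28.118),     # trial 2
    (16, 11.613), (46, 12.612),     # trial 3
]

vcpu_bandwidth = 2 / 4              # budget 2ms out of every 4ms period

# A single-threaded task runs on at most one VCPU at a time, so its
# utilization must not exceed that VCPU's bandwidth.
overloaded = [(p, e) for (p, e) in tasks if e / p > vcpu_bandwidth]

# These turn out to be exactly the three tasks that missed every deadline.
print(overloaded)
```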



>
> trial #2 composed of 2 tasks: total tasks utilization rate = 1
>
> (period, exe, deadline)=(68ms,40.685ms,68ms) -> miss all deadline
> (period, exe, deadline)=(70ms,28.118ms,70ms) -> no miss
>
> trial #3 composed of 2 tasks: total tasks utilization rate = 1
>
> (period, exe, deadline)=(16ms,11.613ms,16ms) -> miss all deadline
> (period, exe, deadline)=(46ms,12.612ms,46ms) -> no miss
>
> I do notice that for within the task that misses deadline, the completion
> time get progressively longer,
> for example: for trial #3, a snapshot of the task st_jobs_stats tells me
> that
> the completion time of the job is 79ms and then 87ms, and then 95ms
>

That's carry-over workload; it is common in overload situations in
real-time scheduling.
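
A fluid-model view predicts exactly that drift: each period the task demands
more CPU than the VCPU supplies, so the backlog grows by a fixed amount (a
sketch with illustrative numbers close to trial #3; constant-rate supply is
my assumption):

```python
# Task close to trial #3: 11.6 ms of work every 16 ms, served at half rate.
period, wcet, supply_rate = 16, 11.6, 0.5

# Work left over after each period keeps accumulating...
surplus_per_period = wcet - supply_rate * period
# ...so each job completes later than the previous one by:
slip_per_job = surplus_per_period / supply_rate

print(round(slip_per_job, 3))   # a constant drift of about 7.2 ms per job
```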


>
> ok, I will look into the reference you provide, I just started my research
> in the rt fields, there is still a lot for me to learn. Thank you again for
> reply!
>

Yeah, you need to read and learn real-time scheduling theory,
especially since you are doing research in the RT field. :-D


>
> Thank you!
>

Hope it helps. :-)

Best,

Meng



>
> On Sat, Nov 28, 2015 at 7:09 AM, Meng Xu <xumengpanda@gmail.com> wrote:
>
>> 2015-11-28 7:20 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>> > Hi Meng,
>> >
>> > Thank you so much for being this patience.
>> >
>> > So a task set is composed of a collection of real-time tasks, and each
>> > real-time task is a sequence of jobs that are released periodically...
>> All
>> > jobs are periodic, where each job Ti is defined by a period (and
>> deadline)
>> > pi and a worse-case execution time ei, with pi ≥ ei ≥ 0 and pi, ei ∈
>> > integers. Each job is comprised of a number of iterations of floating
>> point
>> > operations during each job. This is based on the base task.c provided
>> with
>> > the LITMUSRT userspace library.
>>
>> I knew this information. Basically, I did the experiment with this
>> kind of configuration before. What I want to know is what is the range
>> of period, execution and deadline you have for the taskset and when a
>> taskset is unschedulable under your experiment, what is the
>> st_jobs_stats result from the LITMUS which will show which job of
>> which task misses deadline. If you try to service a task with period =
>> 100, exe = 50, deadline = 100, specified as (100, 50, 100), with a
>> VCPU with period = 100 and budget = 100, you will never schedule this
>> task. The reason is that the VCPU can be unavailable when the task is
>> released.
>> A theoretical analysis can be found in this paper "Periodic Resource
>> Model for Compositional Real-Time Guarantees"
>>
>> http://repository.upenn.edu/cgi/viewcontent.cgi?article=1033&context=cis_reports
>>
>> BTW, I'm assuming you are familiar with the schedulability test for
>> real time systems since you are talking about schedulability. If my
>> assumption is wrong, you should have a look at the survey paper A
>> survey of hard real-time scheduling for multiprocessor systems
>> (http://dl.acm.org/citation.cfm?id=1978814). This will at least tell
>> you when you should expect a taskset is schedulable in theory. If a
>> taskset is claimed unschedulable in theory, it is very likely
>> unschedulable in practice. In addition, it will tell you that taskset
>> utilization is not the only factor that affects schedulability.
>> Other factors, such as the highest task utilization in a taskset, the
>> scheduling algorithm, and task period relations, can also affect
>> schedulability.
>>
>> Best,
>>
>> Meng
>>
>>
>> >
>> >
>> > On Fri, Nov 27, 2015 at 4:17 PM, Meng Xu <xumengpanda@gmail.com> wrote:
>> >>
>> >> 2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>> >> > Hi Dario & Meng,
>> >> >
>> >> > Thanks for your analysis!
>> >> >
>> >> > VM1 and VM2 both are given 8 vCPUs and sharing physical CPU 0-7. So
>> in
>> >> > theory,"VM1 can get the services of 400%"
>> >> > And yes, Dario, your explanation about the task utilization is
>> correct.
>> >> >
>> >> > So the resource configuration as I mentioned before is:
>> >> >
>> >> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores
>> (cpu
>> >> > 0-7)
>> >> > using credit scheduler(both with weight of 800 and capacity of 400)
>> >> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores
>> (cpu0-7)
>> >> > using RTDS (both with period of 10000 and budget of 5000)
>> >> > In both setup, dom0 is using 1 core from cpu 8-15
>> >> >
>> >> > In both setup:
>> >> >
>> >> > I loaded VM2 with constant running task with total utilization of 4
>> >> > cores.
>> >> > and in VM1 I run iterations of tasks of total utilization rate of 1
>> >> > cores, 2
>> >> > cores, 3 cores, 4 cores, and then record their schedulbility.
>> >> >
>> >> > Attached is the result plot.
>> >> >
>> >> >
>> >> > I have tried with the newest litmust-rt, and rtxen is still
>> performing
>> >> > poorly.
>> >>
>> >> What are the characteristics of the tasks you generated? When a taskset
>> >> misses a deadline, which task inside it misses the deadline?
>> >>
>> >> Meng
>> >>
>> >>
>> >> >
>> >> > Thank you both very much again, if there is any unclear part, please
>> >> > lemme
>> >> > know, thx!
>> >> >
>> >> > Victor
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpanda@gmail.com>
>> wrote:
>> >> >>
>> >> >> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <
>> dario.faggioli@citrix.com>:
>> >> >> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
>> >> >> >> Hi Dario,
>> >> >> >>
>> >> >> > Hi,
>> >> >> >
>> >> >> >> Thanks for the reply!
>> >> >> >>
>> >> >> > You're welcome. :-)
>> >> >> >
>> >> >> > I'm adding Meng to Cc...
>> >> >> >
>> >> >>
>> >> >> Thanks! :-)
>> >> >>
>> >> >> >> My goal for the experiment is to show that xen rtds scheduler is
>> >> >> >> better than credit scheduler when it comes to real time tasks.
>> >> >> >> so my set up is:
>> >> >> >>
>> >> >> >> for xen-credit : 2vms sharing 8 cores (cpu 0-7) using credit
>> >> >> >> scheduler(both with weight of 800 and capacity of 400)
>> >> >>
>> >> >> So you set up 400% cpu cap for each VM. In other words, each VM will
>> >> >> have computation capacity almost equal to 4 cores. Because VCPUs are
>> >> >> also scheduled, the four-core capacity is not equal to 4 physical
>> >> >> cores on bare metal, because the resource supplied to tasks by the
>> >> >> VCPUs also depends on the scheduling pattern (which affects the
>> >> >> resource supply pattern) of the VCPUs.
>> >> >>
>> >> >> >> for xen-rtds: 2 vms sharing 8 cores (cpu0-7) using RTDS (both
>> with
>> >> >> >> period of 10000 and budget of 5000)
>> >> >>
>> >> >> How many VCPUs for each VM? If each VM has 4 VCPUs, each VM has only
>> >> >> 200% CPU capacity, which is only half compared to the configuration
>> >> >> you made for credit scheduler.
>> >> >>
>> >> >> >> in both setup, dom0 is using 1 core from cpu 8-15
>> >> >>
>> >> >> Do you have a quick evaluation report (similar to the evaluation
>> >> >> section in academic papers) that describes how you did the
>> >> >> experiments, so that we can make a better guess at what went wrong?
>> >> >>
>> >> >> Right now, I'm guessing that the resources configured for each VM
>> >> >> under the credit and RTDS schedulers are not the same, and it is
>> >> >> possible that some parameters are not configured correctly.
>> >> >>
>> >> >> Another thing is that:
>> >> >> credit scheduler is work conserving, while RTDS is not.
>> >> >> So in under-loaded situations, you will see that the credit scheduler
>> >> >> may work better because it tries to use as much resource as it can.
>> >> >> You can make the comparison fairer by setting the cap for the credit
>> >> >> scheduler as you did, and running some background VM or tasks to
>> >> >> consume the idle resources.
>> >> >>
>> >> >> Meng
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >>
>> >>
>> >> -----------
>> >> Meng Xu
>> >> PhD Student in Computer and Information Science
>> >> University of Pennsylvania
>> >> http://www.cis.upenn.edu/~mengxu/
>> >
>> >
>>
>>
>>
>> --
>>
>>
>> -----------
>> Meng Xu
>> PhD Student in Computer and Information Science
>> University of Pennsylvania
>> http://www.cis.upenn.edu/~mengxu/
>>
>
>


-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

[-- Attachment #1.2: Type: text/html, Size: 17551 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 12:46                 ` Yu-An(Victor) Chen
  2015-11-29 15:38                   ` Meng Xu
@ 2015-11-29 16:18                   ` Dario Faggioli
  2015-11-29 16:21                     ` Meng Xu
  1 sibling, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-11-29 16:18 UTC (permalink / raw)
  To: Yu-An(Victor) Chen, Meng Xu; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2008 bytes --]

On Sun, 2015-11-29 at 04:46 -0800, Yu-An(Victor) Chen wrote:
> So st_jobs_stats for the missed deadline jobs are:
> 
> trial #1 composed of 2 tasks: total tasks utilization rate = 1
> 
> (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
> 
> trial #2 composed of 2 tasks: total tasks utilization rate = 1
> 
> (period, exe, deadline)=(68ms,40.685ms,68ms) -> miss all deadline
> (period, exe, deadline)=(70ms,28.118ms,70ms) -> no miss
> 
> trial #3 composed of 2 tasks: total tasks utilization rate = 1
> 
> (period, exe, deadline)=(16ms,11.613ms,16ms) -> miss all deadline
> (period, exe, deadline)=(46ms,12.612ms,46ms) -> no miss
> 
The point is: how did you come up with these numbers? What both me and
Meng are trying to say is that the budget(s) and period(s) of the
(vCPU's of a) VM and the budgets and periods of the task running inside
the VM must fulfill certain relationships, or schedulability is not
something you can even hope for! :-/

> I do notice that for within the task that misses deadline, the
> completion time get progressively longer, 
> for example: for trial #3, a snapshot of the task st_jobs_stats tells
> me that
> the completion time of the job is 79ms and then 87ms, and then 95ms
> 
That depends on how missing a deadline is handled and accounted for. I
don't recall how Litmus^RT does that.

> ok, I will look into the reference you provide, 
>
Highly recommended. You'll find info there about this necessary
relationship between the vCPUs' and the tasks' parameters that we're
talking about. :-)

> I just started my research in the rt fields, there is still a lot for
> me to learn.
>
:-D

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 16:18                   ` Dario Faggioli
@ 2015-11-29 16:21                     ` Meng Xu
  0 siblings, 0 replies; 31+ messages in thread
From: Meng Xu @ 2015-11-29 16:21 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Yu-An(Victor) Chen, xen-devel

2015-11-29 11:18 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
>
> On Sun, 2015-11-29 at 04:46 -0800, Yu-An(Victor) Chen wrote:
> > So st_jobs_stats for the missed deadline jobs are:
> >
> > trial #1 composed of 2 tasks: total tasks utilization rate = 1
> >
> > (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> > (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
> >
> > trial #2 composed of 2 tasks: total tasks utilization rate = 1
> >
> > (period, exe, deadline)=(68ms,40.685ms,68ms) -> miss all deadline
> > (period, exe, deadline)=(70ms,28.118ms,70ms) -> no miss
> >
> > trial #3 composed of 2 tasks: total tasks utilization rate = 1
> >
> > (period, exe, deadline)=(16ms,11.613ms,16ms) -> miss all deadline
> > (period, exe, deadline)=(46ms,12.612ms,46ms) -> no miss
> >
> The point is: how did you come up with these numbers? What both me and
> Meng are trying to say is that the budget(s) and period(s) of the
> (vCPU's of a) VM and the budgets and periods of the task running inside
> the VM must fulfill certain relationships, or schedulability is not
> something you can even hope for! :-/
>
> > I do notice that for within the task that misses deadline, the
> > completion time get progressively longer,
> > for example: for trial #3, a snapshot of the task st_jobs_stats tells
> > me that
> > the completion time of the job is 79ms and then 87ms, and then 95ms
> >
> That depends on how missing a deadline is handled and accounted for. I
> don't recall how Litmus^RT does that.


LITMUS by default lets the task keep running even when it misses its
deadline. But if users specify the enforce_budget option (the name may be a
little different), it will abandon the unfinished work at the deadline miss.

Meng



-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 15:38                   ` Meng Xu
@ 2015-11-29 16:27                     ` Dario Faggioli
  2015-11-29 16:44                       ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-11-29 16:27 UTC (permalink / raw)
  To: Meng Xu, Yu-An(Victor) Chen; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2863 bytes --]

On Sun, 2015-11-29 at 10:38 -0500, Meng Xu wrote:
> 
> 
> 2015-11-29 7:46 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> > Hi Meng,
> > 
> Hi, 
>  
> > 
> > So I will rewrite my setup here again, but this time I shorten the
> > period and budget for RTDS like you suggested:
> > 
> Nice! :-)
> 
>  
> > -----------------------------------------------------------------
> > ------------------------------------------------------------
> > 
> (I like this line, BTW. :-D) 
> > 
> > 
> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores
> > (cpu 0-7) using credit scheduler(both with weight of 800 and
> > capacity of 400)
> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores
> > (cpu0-7) using RTDS (both with period of 4000(4ms) and budget of
> > 2000(2ms))) 
> > In both setup, dom0 is using 1 core from cpu 8-15
> > 
> > In both setup:
> > 
> > I loaded VM2 with constant running task with total utilization of 4
> > cores.
> > and in VM1 I run iterations of tasks of total utilization rate of 1
> > cores, 2 cores, 3 cores, 4 cores, and then record their
> > schedulbility.
> > -----------------------------------------------------------------
> > ------------------------------------------------------------
> > 
> > So st_jobs_stats for the missed deadline jobs are:
> > 
> > trial #1 composed of 2 tasks: total tasks utilization rate = 1
> > 
> > (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> > (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
> > 
> yes, this is the information I need and we can solve the mystery
> now... 
> Let's look at this task:
> (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> Its utilization is 12.023 / 21 ~= 0.5614;
> Your VCPU utilization is only 2ms / 4ms = 0.5 
> So even when this task is pinned to one VCPU, it will still miss
> deadline because it has only one thread. :-)
>
Mmmm... As I said many times, I don't remember much of all those RT
schedulability formulas, but is it really that simple? I mean, if the
in-guest scheduling algorithm is global (e.g., global-EDF), the task could
migrate, couldn't it?

Is it really the case that you can never schedule tasks with U greater
than the smaller U of the various vCPUs (which seems to me to be what
you're implying)?

Anyway...

> So basically, it is because your configuration is incorrect. That's
> also why you need to read some literatures in real-time scheduling
> theory. ;-) 
> 
... I totally agree with this! ;-D

Regards,
Dario
>  
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 16:27                     ` Dario Faggioli
@ 2015-11-29 16:44                       ` Meng Xu
  2015-11-29 18:16                         ` Yu-An(Victor) Chen
  2015-12-01  8:59                         ` Dario Faggioli
  0 siblings, 2 replies; 31+ messages in thread
From: Meng Xu @ 2015-11-29 16:44 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Yu-An(Victor) Chen, xen-devel

2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> On Sun, 2015-11-29 at 10:38 -0500, Meng Xu wrote:
>>
>>
>> 2015-11-29 7:46 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
>> > Hi Meng,
>> >
>> Hi,
>>
>> >
>> > So I will rewrite my setup here again, but this time I shorten the
>> > period and budget for RTDS like you suggested:
>> >
>> Nice! :-)
>>
>>
>> > -----------------------------------------------------------------
>> > ------------------------------------------------------------
>> >
>> (I like this line, BTW. :-D)
>> >
>> >
>> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores
>> > (cpu 0-7) using credit scheduler(both with weight of 800 and
>> > capacity of 400)
>> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores
>> > (cpu0-7) using RTDS (both with period of 4000(4ms) and budget of
>> > 2000(2ms)))
>> > In both setup, dom0 is using 1 core from cpu 8-15
>> >
>> > In both setup:
>> >
>> > I loaded VM2 with constant running task with total utilization of 4
>> > cores.
>> > and in VM1 I run iterations of tasks of total utilization rate of 1
>> > cores, 2 cores, 3 cores, 4 cores, and then record their
>> > schedulbility.
>> > -----------------------------------------------------------------
>> > ------------------------------------------------------------
>> >
>> > So st_jobs_stats for the missed deadline jobs are:
>> >
>> > trial #1 composed of 2 tasks: total tasks utilization rate = 1
>> >
>> > (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
>> > (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
>> >
>> yes, this is the information I need and we can solve the mystery
>> now...
>> Let's look at this task:
>> (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
>> Its utilization is 12.023 / 21 ~= 0.5614;
>> Your VCPU utilization is only 2ms / 4ms = 0.5
>> So even when this task is pinned to one VCPU, it will still miss
>> deadline because it has only one thread. :-)
>>
> Mmmm... As I said many times, I don't remember much of all those RT
> schedulability formulas, but, is really that simple?

Ah, let me clarify...
It is not that simple. ;-) I just simplified it, hoping it would simplify
the problem and highlight the possible reason.

> I mean, if the in-
> guest scheduling algorithm is global (e.g., global-EDF), the task could
> migrate, couldn't it?

Yes. If these partial VCPUs happen to be scheduled "sequentially", the
OS inside the VM can migrate the task and keep it running. But
that is not the worst case for the OS.

The worst case for the OS to schedule one task is that all of these
VCPUs are released at the same time, are scheduled as early as
possible in the first period and as late as possible in the
second period, which creates the largest starvation interval for the
VM.
So in the worst case, you won't be able to schedule a task with
utilization U on k VCPUs, each with utilization U, no matter how large k
is. (Consider that these k VCPUs are always scheduled at the same
time.)

The detailed illustration of the worst case scenario is at Arvind's
paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
My latest journal paper
(http://link.springer.com/article/10.1007%2Fs11241-015-9223-2) tightens
the resource supply bound function of the MPR model. I believe the
equations are too boring for most people on the mailing list.

So let's avoid the complex equations here. ;-)

To Yu-An,
The basic idea is, as Dario mentioned in the previous email, that the
configuration of the VCPUs is important for the schedulability of the
tasks inside the VM. Especially if you are doing research in RT, you
need to (maybe have to) know RT scheduling theory. :-)

>
> Is it really the case that you can never schedule tasks with U greater
> than the smaller U of the various vCPUs (which seems to me to be what
> you're implying)?
No. In the worst case, it is.
Because the VCPUs are created with "similar" release times, they
may very likely be scheduled concurrently and fall into the so-called
"worst case" (which is just the worst case for a sequential task) in an
idle system.


>
> Anyway...
>

OK. I can understand... Hope my explanation won't cause more confusion. :-D

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 16:44                       ` Meng Xu
@ 2015-11-29 18:16                         ` Yu-An(Victor) Chen
  2015-12-01  8:59                         ` Dario Faggioli
  1 sibling, 0 replies; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-11-29 18:16 UTC (permalink / raw)
  To: Meng Xu; +Cc: Dario Faggioli, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 4955 bytes --]

Hi Meng and Dario,

Thank you both very much for the explanations! They really help a lot
with understanding RT and what I am doing wrong here! I will do more
research and reconfigure the experiments. Hope to have some good news soon!

Thanks again!



On Sun, Nov 29, 2015 at 8:44 AM, Meng Xu <xumengpanda@gmail.com> wrote:

> 2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
> > On Sun, 2015-11-29 at 10:38 -0500, Meng Xu wrote:
> >>
> >>
> >> 2015-11-29 7:46 GMT-05:00 Yu-An(Victor) Chen <chen116@usc.edu>:
> >> > Hi Meng,
> >> >
> >> Hi,
> >>
> >> >
> >> > So I will rewrite my setup here again, but this time I shorten the
> >> > period and budget for RTDS like you suggested:
> >> >
> >> Nice! :-)
> >>
> >>
> >> > -----------------------------------------------------------------
> >> > ------------------------------------------------------------
> >> >
> >> (I like this line, BTW. :-D)
> >> >
> >> >
> >> > for xen-credit : 2vms (both vm are given 8 vCPUs) sharing 8 cores
> >> > (cpu 0-7) using credit scheduler(both with weight of 800 and
> >> > capacity of 400)
> >> > for xen-rtds: 2 vms (both vm are given 8 vCPUs) sharing 8 cores
> >> > (cpu0-7) using RTDS (both with period of 4000(4ms) and budget of
> >> > 2000(2ms)))
> >> > In both setup, dom0 is using 1 core from cpu 8-15
> >> >
> >> > In both setup:
> >> >
> >> > I loaded VM2 with constant running task with total utilization of 4
> >> > cores.
> >> > and in VM1 I run iterations of tasks of total utilization rate of 1
> >> > cores, 2 cores, 3 cores, 4 cores, and then record their
> >> > schedulbility.
> >> > -----------------------------------------------------------------
> >> > ------------------------------------------------------------
> >> >
> >> > So st_jobs_stats for the missed deadline jobs are:
> >> >
> >> > trial #1 composed of 2 tasks: total tasks utilization rate = 1
> >> >
> >> > (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> >> > (period, exe, deadline)=(100ms,37.985ms,100ms) -> no miss
> >> >
> >> yes, this is the information I need and we can solve the mystery
> >> now...
> >> Let's look at this task:
> >> (period, exe, deadline)=(21ms,12.023ms,21ms) -> miss all deadline
> >> Its utilization is 12.023 / 21 ~= 0.5614;
> >> Your VCPU utilization is only 2ms / 4ms = 0.5
> >> So even when this task is pinned to one VCPU, it will still miss
> >> deadline because it has only one thread. :-)
> >>
> > Mmmm... As I said many times, I don't remember much of all those RT
> > schedulability formulas, but, is really that simple?
>
> Ah, let me clarify...
> It is not that simple. ;-) I just simplify it, hoping it can simplify
> the problem and highlight the possible reason.
>
> > I mean, if the in-
> > guest scheduling algorithm is global (e.g., global-EDF), the task could
> > migrate, couldn't it?
>
> Yes. If these partial VCPUs happen to be scheduled "sequentially", the
> OS inside VM can migrate the task and make the task keep running. But
> that is not the worst-case for the OS.
>
> The worst case for the OS to schedule one task is that all of these
> VCPUs are released at the same time, are schedulable as early as
> possible in the first period and scheduled as late as possible in the
> second period, which will create the largest starvation period to the
> VM.
> So in the worst case, you won't be able to schedule a task with
> utilization U on k VCPUs with utilzation U, no matter how large k is.
> (Think about that these k VCPUs are always scheduled at the same
> time.)
>
> The detailed illustration of the worst case scenario is at Arvind's
> paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
> My latest journal paper
> (http://link.springer.com/article/10.1007%2Fs11241-015-9223-2) tighten
> the resource supply bound function of the MPR model. I believe the
> equations are too boring to most of people in the mailing list.
>
> So let's avoid the complex equations here. ;-)
>
> To Yu-An,
> The basic idea is, as Dario mentioned in the previous email, that the
> configuration of VCPUs are important to provide the schedulability of
> tasks inside VM. Especially, if you are doing research in RT, you need
> to (maybe have to) know the RT scheduling theory. :-)
>
> >
> > Is it really the case that you can never schedule tasks with U greater
> > than the smaller U of the various vCPUs (which seems to me to be what
> > you're implying)?
> No. In the worst case, it is.
> Because the VCPUs are created with the "similar" release time, they
> may very likely be scheduled concurrently and fall into  the so-called
> "worst case" (which is just worse case for sequential task) in an idle
> system.
>
>
> >
> > Anyway...
> >
>
> OK. I can understand... Hope my explanation won't cause more confusion. :-D
>
> Best,
>
> Meng
>
> -----------
> Meng Xu
> PhD Student in Computer and Information Science
> University of Pennsylvania
> http://www.cis.upenn.edu/~mengxu/
>

[-- Attachment #1.2: Type: text/html, Size: 6753 bytes --]


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-11-29 16:44                       ` Meng Xu
  2015-11-29 18:16                         ` Yu-An(Victor) Chen
@ 2015-12-01  8:59                         ` Dario Faggioli
  2015-12-01 10:11                           ` Lars Kurth
  1 sibling, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-12-01  8:59 UTC (permalink / raw)
  To: Meng Xu; +Cc: Yu-An(Victor) Chen, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1787 bytes --]

On Sun, 2015-11-29 at 11:44 -0500, Meng Xu wrote:
> 2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>
> :
> > 
> > Mmmm... As I said many times, I don't remember much of all those RT
> > schedulability formulas, but, is really that simple?
> 
> Ah, let me clarify...
> It is not that simple. ;-) I just simplify it, hoping it can simplify
> the problem and highlight the possible reason.
> 
Ok, glad to know I haven't completely lost my mind, or anything like
that! :-)

> > I mean, if the in-
> > guest scheduling algorithm is global (e.g., global-EDF), the task
> > could
> > migrate, couldn't it?
> 
> Yes. If these partial VCPUs happen to be scheduled "sequentially",
> the
> OS inside VM can migrate the task and make the task keep running. But
> that is not the worst-case for the OS.
> 
Right, I see it now, and (FWIW) I absolutely agree with the worst-case
analysis you provided (thanks). I did not realize that you were
talking about the worst case; sorry for the noise. :-D

> The detailed illustration of the worst case scenario is at Arvind's
> paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
> My latest journal paper
> (http://link.springer.com/article/10.1007%2Fs11241-015-9223-2)
> tighten
> the resource supply bound function of the MPR model. I believe the
> equations are too boring to most of people in the mailing list.
> 
> So let's avoid the complex equations here. ;-)
> 
Thanks for this too! :-)

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-01  8:59                         ` Dario Faggioli
@ 2015-12-01 10:11                           ` Lars Kurth
  2015-12-01 16:01                             ` Yu-An(Victor) Chen
  2015-12-02  5:54                             ` Meng Xu
  0 siblings, 2 replies; 31+ messages in thread
From: Lars Kurth @ 2015-12-01 10:11 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Meng Xu, Yu-An(Victor) Chen, xen-devel

I wonder whether we need to add some health warnings and recommended background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler
Lars

> On 1 Dec 2015, at 08:59, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> 
> On Sun, 2015-11-29 at 11:44 -0500, Meng Xu wrote:
>> 2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>
>> :
>>>  
>>> Mmmm... As I said many times, I don't remember much of all those RT
>>> schedulability formulas, but, is really that simple?
>> 
>> Ah, let me clarify...
>> It is not that simple. ;-) I just simplify it, hoping it can simplify
>> the problem and highlight the possible reason.
>> 
> Ok, glad to know I haven't completely lost my mind, or anything like
> that! :-)
> 
>>> I mean, if the in-
>>> guest scheduling algorithm is global (e.g., global-EDF), the task
>>> could
>>> migrate, couldn't it?
>> 
>> Yes. If these partial VCPUs happen to be scheduled "sequentially",
>> the
>> OS inside VM can migrate the task and make the task keep running. But
>> that is not the worst-case for the OS.
>> 
> Right, I see it now, and (FWIW) I absolutely agree with the worst-case
> analysis you provided (thanks). I did not get the fact that you were
> talking about the worst-case, sorry for the noise. :-D
> 
>> The detailed illustration of the worst case scenario is at Arvind's
>> paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
>> My latest journal paper
>> (http://link.springer.com/article/10.1007%2Fs11241-015-9223-2)
>> tighten
>> the resource supply bound function of the MPR model. I believe the
>> equations are too boring to most of people in the mailing list.
>> 
>> So let's avoid the complex equations here. ;-)
>> 
> Thanks for this too! :-)
> 
> Regards,
> Dario
> -- 
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-01 10:11                           ` Lars Kurth
@ 2015-12-01 16:01                             ` Yu-An(Victor) Chen
  2015-12-02  5:54                             ` Meng Xu
  1 sibling, 0 replies; 31+ messages in thread
From: Yu-An(Victor) Chen @ 2015-12-01 16:01 UTC (permalink / raw)
  To: Lars Kurth; +Cc: Dario Faggioli, Meng Xu, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2553 bytes --]

Hi all,

A little update,

So yeah, when I run tasks that have utilization less than 0.5, Xen-RTDS
is able to complete all those tasks.

Thank you guys very much for helping me understand the problem!



On Tue, Dec 1, 2015 at 2:11 AM, Lars Kurth <lars.kurth.xen@gmail.com> wrote:

> I wonder whether we need to add some health warnings and recommended
> background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler
> Lars
>
> > On 1 Dec 2015, at 08:59, Dario Faggioli <dario.faggioli@citrix.com>
> wrote:
> >
> > On Sun, 2015-11-29 at 11:44 -0500, Meng Xu wrote:
> >> 2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>
> >> :
> >>>
> >>> Mmmm... As I said many times, I don't remember much of all those RT
> >>> schedulability formulas, but, is really that simple?
> >>
> >> Ah, let me clarify...
> >> It is not that simple. ;-) I just simplify it, hoping it can simplify
> >> the problem and highlight the possible reason.
> >>
> > Ok, glad to know I haven't completely lost my mind, or anything like
> > that! :-)
> >
> >>> I mean, if the in-
> >>> guest scheduling algorithm is global (e.g., global-EDF), the task
> >>> could
> >>> migrate, couldn't it?
> >>
> >> Yes. If these partial VCPUs happen to be scheduled "sequentially",
> >> the
> >> OS inside VM can migrate the task and make the task keep running. But
> >> that is not the worst-case for the OS.
> >>
> > Right, I see it now, and (FWIW) I absolutely agree with the worst-case
> > analysis you provided (thanks). I did not get the fact that you were
> > talking about the worst-case, sorry for the noise. :-D
> >
> >> The detailed illustration of the worst case scenario is at Arvind's
> >> paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
> >> My latest journal paper
> >> (http://link.springer.com/article/10.1007%2Fs11241-015-9223-2)
> >> tighten
> >> the resource supply bound function of the MPR model. I believe the
> >> equations are too boring to most of people in the mailing list.
> >>
> >> So let's avoid the complex equations here. ;-)
> >>
> > Thanks for this too! :-)
> >
> > Regards,
> > Dario
> > --
> > <<This happens because I choose it to happen!>> (Raistlin Majere)
> > -----------------------------------------------------------------
> > Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> > Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> >
>
>



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-01 10:11                           ` Lars Kurth
  2015-12-01 16:01                             ` Yu-An(Victor) Chen
@ 2015-12-02  5:54                             ` Meng Xu
  2015-12-02 10:54                               ` Wei Liu
                                                 ` (2 more replies)
  1 sibling, 3 replies; 31+ messages in thread
From: Meng Xu @ 2015-12-02  5:54 UTC (permalink / raw)
  To: Lars Kurth; +Cc: Dario Faggioli, Yu-An(Victor) Chen, xen-devel

Hi Lars and Dario,

2015-12-01 4:11 GMT-06:00 Lars Kurth <lars.kurth.xen@gmail.com>:
>
> I wonder whether we need to add some health warnings and recommended background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler


Maybe we could add a health warning and a link to this discussion?
Misconfiguration of the system will usually cause performance
degradation, even with the other schedulers, such as ARINC653, credit,
and credit2.
What I'm thinking about is how much expert information we should expose
to users. Sometimes, exposing too much information may not be so
helpful; sometimes, more information just causes more confusion.

What type of information do you guys think we should include?

Thanks,

Meng


>
>
> Lars
>
> > On 1 Dec 2015, at 08:59, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> >
> > On Sun, 2015-11-29 at 11:44 -0500, Meng Xu wrote:
> >> 2015-11-29 11:27 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>
> >> :
> >>>
> >>> Mmmm... As I said many times, I don't remember much of all those RT
> >>> schedulability formulas, but, is really that simple?
> >>
> >> Ah, let me clarify...
> >> It is not that simple. ;-) I just simplify it, hoping it can simplify
> >> the problem and highlight the possible reason.
> >>
> > Ok, glad to know I haven't completely lost my mind, or anything like
> > that! :-)
> >
> >>> I mean, if the in-
> >>> guest scheduling algorithm is global (e.g., global-EDF), the task
> >>> could
> >>> migrate, couldn't it?
> >>
> >> Yes. If these partial VCPUs happen to be scheduled "sequentially",
> >> the
> >> OS inside VM can migrate the task and make the task keep running. But
> >> that is not the worst-case for the OS.
> >>
> > Right, I see it now, and (FWIW) I absolutely agree with the worst-case
> > analysis you provided (thanks). I did not get the fact that you were
> > talking about the worst-case, sorry for the noise. :-D
> >
> >> The detailed illustration of the worst case scenario is at Arvind's
> >> paper: http://link.springer.com/article/10.1007%2Fs11241-009-9073-x
> >> My latest journal paper
> >> (http://link.springer.com/article/10.1007%2Fs11241-015-9223-2)
> >> tighten
> >> the resource supply bound function of the MPR model. I believe the
> >> equations are too boring to most of people in the mailing list.
> >>
> >> So let's avoid the complex equations here. ;-)
> >>
> > Thanks for this too! :-)
> >
> > Regards,
> > Dario
> > --
> > <<This happens because I choose it to happen!>> (Raistlin Majere)
> > -----------------------------------------------------------------
> > Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> > Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> >
>



-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02  5:54                             ` Meng Xu
@ 2015-12-02 10:54                               ` Wei Liu
  2015-12-02 11:01                               ` Ian Campbell
  2015-12-02 11:03                               ` Lars Kurth
  2 siblings, 0 replies; 31+ messages in thread
From: Wei Liu @ 2015-12-02 10:54 UTC (permalink / raw)
  To: Meng Xu
  Cc: Lars Kurth, Dario Faggioli, wei.liu2, Yu-An(Victor) Chen, xen-devel

On Tue, Dec 01, 2015 at 11:54:14PM -0600, Meng Xu wrote:
> Hi Lars and Dario,
> 
> 2015-12-01 4:11 GMT-06:00 Lars Kurth <lars.kurth.xen@gmail.com>:
> >
> > I wonder whether we need to add some health warnings and recommended background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler
> 
> 
> Maybe we could add some health warning and add a link to this discussion?
> Misconfiguration of the system will usually cause performance
> degradation, even for the other schedulers, such as ARINC653, credit,
> credit2.
> What I'm thinking is how much expert information we should expose to
> users. Sometimes, exposing too much information may not be so helpful.
> Sometimes, more information just  cause more confusion.
> 
> What do you guys think which type of information we should include?
> 

The wiki and the xl manpage?

Wei.


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02  5:54                             ` Meng Xu
  2015-12-02 10:54                               ` Wei Liu
@ 2015-12-02 11:01                               ` Ian Campbell
  2015-12-02 11:27                                 ` Dario Faggioli
  2015-12-02 11:03                               ` Lars Kurth
  2 siblings, 1 reply; 31+ messages in thread
From: Ian Campbell @ 2015-12-02 11:01 UTC (permalink / raw)
  To: Meng Xu, Lars Kurth; +Cc: Dario Faggioli, Yu-An(Victor) Chen, xen-devel

On Tue, 2015-12-01 at 23:54 -0600, Meng Xu wrote:
> Hi Lars and Dario,
> 
> 2015-12-01 4:11 GMT-06:00 Lars Kurth <lars.kurth.xen@gmail.com>:
> > 
> > I wonder whether we need to add some health warnings and recommended
> > background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Schedu
> > ler
> 
> 
> Maybe we could add some health warning and add a link to this discussion?
> Misconfiguration of the system will usually cause performance
> degradation, even for the other schedulers, such as ARINC653, credit,
> credit2.
>
> What I'm thinking is how much expert information we should expose to
> users. Sometimes, exposing too much information may not be so helpful.
> Sometimes, more information just  cause more confusion.
> 
> What do you guys think which type of information we should include?

I think there is an important distinction between credit2/credit and RT
schedulers such as arinc/rtds etc, which is that the former should just
work out of the box with no tweaking at all whereas the latter in general
need some sort of "intelligent input/configuration" to even begin using
them and have GIGO properties wrt their parameters.

(That's not to say you can't tweak credit* etc and break it if you want, but one typically doesn't need to start doing so just to get something running at all).

And AIUI the "intelligent input/configuration" requires some amount of background in RT scheduling, else you can get pathological results and think the scheduler and/or Xen is broken.

So I think some sort of warning that the RT schedulers do not "just work", and that one needs to understand the properties of one's workloads and of the schedulers and to feed them non-garbage inputs, would be useful to people who might otherwise expect to just "xl create" (maybe with some random values for the required settings) and get a useful result.
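To make "non-garbage inputs" concrete, here is a minimal, hypothetical sanity check over RTDS-style (budget, period) parameters. The rules it encodes (positive values, budget no larger than period, reserved bandwidth not exceeding the number of PCPUs) are a summary of this thread, not an official checklist:

```python
# Hedged sketch: sanity-check (budget, period) pairs, in microseconds,
# before handing them to an RTDS-like reservation scheduler.

def check_params(vcpus, n_pcpus):
    """vcpus: list of (budget, period) pairs, one entry per VCPU.
    Returns a list of human-readable problems (empty if none found)."""
    problems = []
    for i, (budget, period) in enumerate(vcpus):
        if period <= 0 or budget <= 0:
            problems.append(f"vcpu {i}: budget and period must be positive")
        elif budget > period:
            problems.append(f"vcpu {i}: budget {budget} exceeds period {period}")
    # Sum the bandwidth of all well-formed reservations.
    total = sum(b / p for b, p in vcpus if p > 0 and 0 < b <= p)
    if total > n_pcpus:
        problems.append(f"reserved bandwidth {total:.2f} exceeds {n_pcpus} PCPUs")
    return problems

# Two VMs with 8 VCPUs each at (5000, 10000), on 8 PCPUs: fully
# committed but still valid.
params = [(5000, 10000)] * 16
print(check_params(params, 8))            # []
print(check_params([(15000, 10000)], 1))  # flags budget > period
```

Something along these lines, run before "xl create", would catch the most obviously pathological configurations before they ever reach the scheduler.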

Having given that warning, I don't think some links to relevant background RT material would be too much info, nor would the inclusion of some specifics about the particular algorithm. After all, that background and info is critical to being able to run a system using those schedulers, isn't it?

Ian.


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02  5:54                             ` Meng Xu
  2015-12-02 10:54                               ` Wei Liu
  2015-12-02 11:01                               ` Ian Campbell
@ 2015-12-02 11:03                               ` Lars Kurth
  2015-12-02 11:19                                 ` Dario Faggioli
  2 siblings, 1 reply; 31+ messages in thread
From: Lars Kurth @ 2015-12-02 11:03 UTC (permalink / raw)
  To: Meng Xu; +Cc: Dario Faggioli, Yu-An(Victor) Chen, xen-devel


> On 2 Dec 2015, at 05:54, Meng Xu <xumengpanda@gmail.com> wrote:
> 
> Hi Lars and Dario,
> 
> 2015-12-01 4:11 GMT-06:00 Lars Kurth <lars.kurth.xen@gmail.com>:
>> 
>> I wonder whether we need to add some health warnings and recommended background reading to http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler
> 
> 
> Maybe we could add some health warning and add a link to this discussion?
> Misconfiguration of the system will usually cause performance
> degradation, even for the other schedulers, such as ARINC653, credit,
> credit2.

That is the minimum IMHO for the wiki and xl.

> What I'm thinking is how much expert information we should expose to
> users. Sometimes, exposing too much information may not be so helpful.
> Sometimes, more information just  cause more confusion.

Maybe just link to some of the theory on real-time schedulers (at least on the wiki). Maybe also add a couple of links to xen-devel@ threads like this one.

> What do you guys think which type of information we should include?

I am not the expert, but I get the point about too much information.

Lars


* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02 11:03                               ` Lars Kurth
@ 2015-12-02 11:19                                 ` Dario Faggioli
  2015-12-02 16:57                                   ` Meng Xu
  0 siblings, 1 reply; 31+ messages in thread
From: Dario Faggioli @ 2015-12-02 11:19 UTC (permalink / raw)
  To: Lars Kurth, Meng Xu; +Cc: Yu-An(Victor) Chen, xen-devel


On Wed, 2015-12-02 at 11:03 +0000, Lars Kurth wrote:
> > On 2 Dec 2015, at 05:54, Meng Xu <xumengpanda@gmail.com> wrote:
> > 
> > Maybe we could add some health warning and add a link to this
> > discussion?
> > Misconfiguration of the system will usually cause performance
> > degradation, even for the other schedulers, such as ARINC653,
> > credit,
> > credit2.
> 
> That is the minimum IMHO for the wiki and xl.
> 
I'm not so sure. Perhaps just a quick hint in the xl manpage, I'd say.

> > What I'm thinking is how much expert information we should expose
> > to
> > users. Sometimes, exposing too much information may not be so
> > helpful.
> > Sometimes, more information just  cause more confusion.
> 
> Maybe just link to some of the theory on real-time schedulers (at
> least on the wiki). Maybe also to a couple of links to xen-devel@
> threads like this one.
> 
We can reference the thread on the wiki, and we can put a few sentences
about all this there. However, trying to link "some relevant theory" is
impractical, as it won't be easy to define what both "some" and
"relevant" mean. :-)

I'd be in favour (and can do so) of adding a reference to the RT-Xen
website, and maybe to one of the (original?) RT-Xen papers. But nothing
more.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02 11:01                               ` Ian Campbell
@ 2015-12-02 11:27                                 ` Dario Faggioli
  0 siblings, 0 replies; 31+ messages in thread
From: Dario Faggioli @ 2015-12-02 11:27 UTC (permalink / raw)
  To: Ian Campbell, Meng Xu, Lars Kurth; +Cc: Yu-An(Victor) Chen, xen-devel


On Wed, 2015-12-02 at 11:01 +0000, Ian Campbell wrote:
> On Tue, 2015-12-01 at 23:54 -0600, Meng Xu wrote:
> > 
> > What do you guys think which type of information we should include?
> 
> I think there is an important distinction between credit2/credit and
> RT
> schedulers such as arinc/rtds etc, which is that the former should
> just
> work out of the box with no tweaking at all whereas the latter in
> general
> need some sort of "intelligent input/configuration" to even begin
> using
> them and have GIGO properties wrt their parameters.
> 
Exactly.

This is particularly true for the "issues" raised in this thread. In
fact, this has all been about 'schedulability'. Well, if you aim at
'schedulability', you (ought to) have the necessary RT background to
figure out what it takes to get there.


> And AIUI the "intelligent input/configuration" requires some amount
> of background in RT scheduling, else you can get pathological results
> and think the scheduler and/or Xen is broken.
> 
Yep.

> So I think some sort of warning that the RT schedulers do not "just
> work" and require one to understand the properties of your workloads
> and the schedulers and to feed them non-garbage inputs would be a
> useful to people who might otherwise expect to just "xl create"
> (maybe with some random inputs to the required settings) and have a
> useful result.
> 
Yeah, I guess we can add something like that. I will send a patch.

> Having given that warning I don't think some links to some relevant
> background RT stuff would be too much info, neither would the
> inclusion of some specifics about the specific algorithm. After all
> that background and info is critical to being able to run a system
> using those schedulers, isn't it?
> 
True, but I'd much rather avoid turning either the xl manpage or our
wiki into a collection of references to real-time scheduling papers! As
said in the other mail, I really would try to limit all this, both in
terms of the number and the complexity of the references themselves.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



* Re: xen 4.5.0 rtds scheduler perform poorly with 2vms
  2015-12-02 11:19                                 ` Dario Faggioli
@ 2015-12-02 16:57                                   ` Meng Xu
  0 siblings, 0 replies; 31+ messages in thread
From: Meng Xu @ 2015-12-02 16:57 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Lars Kurth, Yu-An(Victor) Chen, xen-devel

Hi Dario,

2015-12-02 5:19 GMT-06:00 Dario Faggioli <dario.faggioli@citrix.com>:
> On Wed, 2015-12-02 at 11:03 +0000, Lars Kurth wrote:
>> > On 2 Dec 2015, at 05:54, Meng Xu <xumengpanda@gmail.com> wrote:
>> >
>> > Maybe we could add some health warning and add a link to this
>> > discussion?
>> > Misconfiguration of the system will usually cause performance
>> > degradation, even for the other schedulers, such as ARINC653,
>> > credit,
>> > credit2.
>>
>> That is the minimum IMHO for the wiki and xl.
>>
> I'm not so sure. Perhaps just a quick hint, in xl manpage, I'd say.
>
>> > What I'm thinking is how much expert information we should expose
>> > to
>> > users. Sometimes, exposing too much information may not be so
>> > helpful.
>> > Sometimes, more information just  cause more confusion.
>>
>> Maybe just link to some of the theory on real-time schedulers (at
>> least on the wiki). Maybe also to a couple of links to xen-devel@
>> threads like this one.
>>
> We can reference the thread on the wiki, and we can put a few sentences
> about all this there. However, trying to link "some relevant theory" is
> unpractical, as it won't be easy to define what both "some" and
> "relevant" mean. :-)
>
> I'd be in favour (and ca do so) of adding a reference to the RT-Xen
> website, and maybe to one of (the original?) RT-Xen paper. But nothing
> more.

The first RT-Xen paper covers unicore scheduling, and it didn't talk
about how to configure the VMs.
The most relevant RT-Xen paper for the RTDS scheduler (right now) is the
EMSOFT14 paper. Instead of linking to IEEE, which charges for the
download, we can use this link:
http://www.cis.upenn.edu/~mengxu/emsoft14/emsoft14.pdf

Best,

Meng

-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


end of thread, other threads:[~2015-12-02 16:57 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-23 15:42 xen 4.5.0 rtds scheduler perform poorly with 2vms Yu-An(Victor) Chen
2015-11-23 16:07 ` Meng Xu
2015-11-23 16:35   ` Yu-An(Victor) Chen
2015-11-24  2:23     ` Meng Xu
2015-11-24  4:57       ` Yu-An(Victor) Chen
2015-11-24  6:19         ` Meng Xu
2015-11-25  0:15 ` Dario Faggioli
2015-11-27 16:36   ` Yu-An(Victor) Chen
2015-11-27 17:23     ` Dario Faggioli
2015-11-27 17:41       ` Meng Xu
2015-11-27 19:50         ` Yu-An(Victor) Chen
2015-11-28  0:17           ` Meng Xu
2015-11-28 12:20             ` Yu-An(Victor) Chen
2015-11-28 15:09               ` Meng Xu
2015-11-29 12:46                 ` Yu-An(Victor) Chen
2015-11-29 15:38                   ` Meng Xu
2015-11-29 16:27                     ` Dario Faggioli
2015-11-29 16:44                       ` Meng Xu
2015-11-29 18:16                         ` Yu-An(Victor) Chen
2015-12-01  8:59                         ` Dario Faggioli
2015-12-01 10:11                           ` Lars Kurth
2015-12-01 16:01                             ` Yu-An(Victor) Chen
2015-12-02  5:54                             ` Meng Xu
2015-12-02 10:54                               ` Wei Liu
2015-12-02 11:01                               ` Ian Campbell
2015-12-02 11:27                                 ` Dario Faggioli
2015-12-02 11:03                               ` Lars Kurth
2015-12-02 11:19                                 ` Dario Faggioli
2015-12-02 16:57                                   ` Meng Xu
2015-11-29 16:18                   ` Dario Faggioli
2015-11-29 16:21                     ` Meng Xu
