* [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Dong-Jae Kang @ 2009-08-27  1:46 UTC
  To: Ryo Tsuruta, ioband-devel, corsetproject, containers

Hi Ryo,

I have attached a new test result file
(ioband-partition-based-evaluation.xls) to this mail.
This time it is not a virtualization environment: I evaluated the
partition-based use cases before testing in a virtualization
environment, because I think the two cases are similar to each other.

Detailed information about the evaluation can be found in the attached
file.

If you have any questions or comments after examining it,
please give me your opinion.

Thank you.
-- 
Best Regards,
Dong-Jae Kang

[-- Attachment #2: ioband-partition-based-evaluation.xls --]
[-- Type: application/vnd.ms-excel, Size: 165888 bytes --]

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Ryo Tsuruta @ 2009-08-27  6:20 UTC
  To: baramsori72; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Dong-Jae,

# I've added dm-devel to Cc:.

Dong-Jae Kang <baramsori72@gmail.com> wrote:
> Hi Ryo,
>
> I have attached a new test result file
> (ioband-partition-based-evaluation.xls) to this mail.

Thanks for your great work.

> This time it is not a virtualization environment: I evaluated the
> partition-based use cases before testing in a virtualization
> environment, because I think the two cases are similar to each other.
>
> Detailed information about the evaluation can be found in the attached
> file.
>
> If you have any questions or comments after examining it,
> please give me your opinion.

I would like to know the throughput without dm-ioband in your
environment. The total throughput of the range-bw policy is 8000KB/s,
which means the device is capable of more than 8000KB/s, yet the
total throughput of the weight policy is lower than that of the
range-bw policy. In my environment, there is no significant difference
in average throughput with and without dm-ioband. I ran fio in the way
described in your result file; here are the results from my
environment. The throughputs were calculated from "iostat -k 1"
output.

            buffered write test
           Avg. throughput [KB/s]
        w/o ioband     w/ioband
sdb2         14485         5788
sdb3         12494        22295
total        26979        28030
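
For reference, a sketch of the kind of invocation behind these numbers
(the fio options below are assumptions; the actual job parameters are
the ones described in your result file):

    # one buffered write job per ioband device, run concurrently
    fio --name=job1 --filename=/dev/mapper/ioband1 --rw=write --bs=4k --size=1g &
    fio --name=job2 --filename=/dev/mapper/ioband2 --rw=write --bs=4k --size=1g &

    # sample throughput once a second for 60s and average the kB_wrtn/s
    # column (field 4 of "iostat -k" device lines) for one partition
    iostat -k 1 60 | awk '$1 == "sdb2" { sum += $4; n++ }
                          END { print "sdb2 avg:", sum / n, "KB/s" }'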

Thanks,
Ryo Tsuruta

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Dong-Jae Kang @ 2009-08-27 11:03 UTC
  To: Ryo Tsuruta; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Ryo,

2009/8/27 Ryo Tsuruta <ryov@valinux.co.jp>

> I would like to know the throughput without dm-ioband in your
> environment. The total throughput of the range-bw policy is 8000KB/s,
> which means the device is capable of more than 8000KB/s, yet the
> total throughput of the weight policy is lower than that of the
> range-bw policy. In my environment, there is no significant difference
> in average throughput with and without dm-ioband. I ran fio in the way
> described in your result file; here are the results from my
> environment. The throughputs were calculated from "iostat -k 1"
> output.
>
>            buffered write test
>           Avg. throughput [KB/s]
>        w/o ioband     w/ioband
> sdb2         14485         5788
> sdb3         12494        22295
> total        26979        28030

OK, good point.
I omitted the total bandwidth of the evaluation system.

I will reply to you about it tomorrow, after I check and re-test it.

Thank you for the comments.

-- 
Best Regards,
Dong-Jae Kang

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Dong-Jae Kang @ 2009-08-28 10:24 UTC
  To: Ryo Tsuruta; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Ryo,

I have attached a new file that includes the total I/O bandwidth of
the evaluation system. We tested the total bandwidth of the weight
policy with I/O in the Dom0 and DomU systems; it was measured both
with the iostat tool and with the dm-ioband debug patch which I gave
you several months ago. Of course, the results in the prior report
were measured with the dm-ioband debug patch.

It turns out that the big difference in the prior report derives from
where the I/O bandwidth is measured: iostat counts it at the
application level, while the dm-ioband debug patch counts it inside
the dm-ioband controller. I think the difference is related to the
buffer cache.

Thank you.
Have a nice weekend.

2009/8/27 Dong-Jae Kang <baramsori72@gmail.com>

> OK, good point.
> I omitted the total bandwidth of the evaluation system.
>
> I will reply to you about it tomorrow, after I check and re-test it.

-- 
Best Regards,
Dong-Jae Kang

[-- Attachment #2: total bandwidth result.xls --]
[-- Type: application/vnd.ms-excel, Size: 33792 bytes --]

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Munehiro Ikeda @ 2009-08-28 15:58 UTC
  To: Dong-Jae Kang
  Cc: ioband-devel, containers, corsetproject

Hello Dong-Jae,

Dong-Jae Kang wrote, on 08/26/2009 09:46 PM:
> Hi Ryo,
>
> I have attached a new test result file
> (ioband-partition-based-evaluation.xls) to this mail.
> This time it is not a virtualization environment: I evaluated the
> partition-based use cases before testing in a virtualization
> environment, because I think the two cases are similar to each other.
>
> Detailed information about the evaluation can be found in the attached
> file.
>
> If you have any questions or comments after examining it,
> please give me your opinion.
>
> Thank you.

Good work.
Please let me ask a couple of silly questions.

(1) What "target" means
I guess "device" means writing to the device files directly
(--filename=/dev/mapper/ioband1)
and "directory" means mounting these device files and writing to a
directory on the filesystem
(--filename=/mnt/ioband1/test.dat, assuming /dev/mapper/ioband1 is
mounted on /mnt/ioband1),
am I right?
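
In other words, something like this (the paths and the fio options
other than --filename are my assumptions):

    # "device" target: fio writes to the ioband device file directly
    fio --name=dev-test --filename=/dev/mapper/ioband1 --rw=write --bs=4k --size=1g

    # "directory" target: mount the ioband device, then write to a file on it
    mount /dev/mapper/ioband1 /mnt/ioband1
    fio --name=dir-test --filename=/mnt/ioband1/test.dat --rw=write --bs=4k --size=1g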

(2) Conditions in the RDF sheet
The conditions in the "RDF" and "RBF" sheets are the same, but the
results are slightly different.
Should "Mode" in the RDF sheet be "Direct"?

Regards,
Muuhh

-- 
IKEDA, Munehiro
   NEC Corporation of America
     m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Ryo Tsuruta @ 2009-08-31  2:12 UTC
  To: baramsori72; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Dong-Jae,

Thanks for testing.
Could you run the same test without dm-ioband? I would like to know
the raw throughput of your disk drive and the difference with and
without dm-ioband.

Thanks,
Ryo Tsuruta

Dong-Jae Kang <baramsori72@gmail.com> wrote:
> Hi Ryo,
>
> I have attached a new file that includes the total I/O bandwidth of
> the evaluation system. We tested the total bandwidth of the weight
> policy with I/O in the Dom0 and DomU systems; it was measured both
> with the iostat tool and with the dm-ioband debug patch which I gave
> you several months ago. Of course, the results in the prior report
> were measured with the dm-ioband debug patch.
>
> It turns out that the big difference in the prior report derives from
> where the I/O bandwidth is measured: iostat counts it at the
> application level, while the dm-ioband debug patch counts it inside
> the dm-ioband controller. I think the difference is related to the
> buffer cache.

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Dong-Jae Kang @ 2009-08-31  7:16 UTC
  To: Munehiro Ikeda
  Cc: ioband-devel, containers, corsetproject

Hi Munehiro Ikeda,

Thanks for your attention, and sorry for the late reply.
2009/8/29 Munehiro Ikeda <m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org>

> Hello Dong-Jae,
>
> Good work.
> Please let me ask a couple of silly questions.
>
> (1) What "target" means
> I guess "device" means writing to the device files directly
> (--filename=/dev/mapper/ioband1)
> and "directory" means mounting these device files and writing to a
> directory on the filesystem
> (--filename=/mnt/ioband1/test.dat, assuming /dev/mapper/ioband1 is
> mounted on /mnt/ioband1),
> am I right?

Yes, you are right.
I also think the terms can be misleading. :)


> (2) Conditions in the RDF sheet
> The conditions in the "RDF" and "RBF" sheets are the same, but the
> results are slightly different.
> Should "Mode" in the RDF sheet be "Direct"?


As the Report sheet in the file shows,
"D" in RDF means Direct I/O and "B" in RBF means Buffered I/O(delayed I/O).
I think the reason for difference in result, especially several fluctuation
in RBF, is
related with buffer cache  and pdflushd daemon.
So, generally, I/O bandwidth controll in direct I/O mode is more accurate
than that of buffered I/O mode.
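
In fio terms the two modes differ only in the direct flag; a sketch,
with the job options other than --direct assumed:

    # RDF: direct I/O bypasses the page cache, so dm-ioband sees every
    # write immediately and throttling is applied accurately
    fio --name=rdf --filename=/dev/mapper/ioband1 --rw=write --bs=4k \
        --size=1g --direct=1

    # RBF: buffered writes land in the page cache first and only reach
    # dm-ioband later via pdflush writeback, hence the fluctuation
    fio --name=rbf --filename=/dev/mapper/ioband1 --rw=write --bs=4k \
        --size=1g --direct=0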

> Regards,
> Muuhh


-- 
Best Regards,
Dong-Jae Kang

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Dong-Jae Kang @ 2009-08-31 10:08 UTC
  To: Ryo Tsuruta; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Ryo,

Sorry for the late reply.

Mr. Lee and I tested the total bandwidth as you requested; the
results are in the attached file.

I know attaching files is frowned upon on a mailing list, but I chose
it for efficient and easy communication. Sorry. :)

The Buffered-Device case may look somewhat strange: it has too much
variation in total bandwidth, so you had better treat it as a
reference only. The other cases did not show big fluctuations like
the Buffered-Device case.

If you have any other requests about the results, please reply to me.

Additionally, I will try to report the test results in a cgroup
environment to you case by case. What do you think about that?
If you are busy these days, it is fine to delay it.

Thank you.

2009/8/31 Ryo Tsuruta <ryov@valinux.co.jp>

> Hi Dong-Jae,
>
> Thanks for testing.
> Could you run the same test without dm-ioband? I would like to know
> the raw throughput of your disk drive and the difference with and
> without dm-ioband.
>
> Thanks,
> Ryo Tsuruta

-- 
Best Regards,
Dong-Jae Kang

[-- Attachment #2: fio_test_with_without_dm-ioband_by_iostat.xls --]
[-- Type: application/vnd.ms-excel, Size: 34816 bytes --]

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Munehiro Ikeda @ 2009-08-31 16:40 UTC
  To: Dong-Jae Kang
  Cc: ioband-devel, containers, corsetproject

Hello Dong-Jae,

Dong-Jae Kang wrote, on 08/31/2009 03:16 AM:
> Hi Munehiro Ikeda,
> Thanks for your attention, and sorry for the late reply.

No problem.


> 2009/8/29 Munehiro Ikeda <m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org>
>
>> (1) What "target" means
>> I guess "device" means writing to the device files directly
>> (--filename=/dev/mapper/ioband1)
>> and "directory" means mounting these device files and writing to a
>> directory on the filesystem
>> (--filename=/mnt/ioband1/test.dat, assuming /dev/mapper/ioband1 is
>> mounted on /mnt/ioband1),
>> am I right?
>
> Yes, you are right.
> I also think the terms can be misleading. :)
>
>> (2) Conditions in the RDF sheet
>> The conditions in the "RDF" and "RBF" sheets are the same, but the
>> results are slightly different.
>> Should "Mode" in the RDF sheet be "Direct"?
>
> As the Report sheet in the file shows, "D" in RDF means direct I/O and
> "B" in RBF means buffered I/O (delayed I/O).
> I think the reason for the difference in the results, especially the
> fluctuation in RBF, is related to the buffer cache and the pdflush
> daemon.
> So, in general, I/O bandwidth control in direct I/O mode is more
> accurate than in buffered I/O mode.

Alright, I see. The complete absence of fluctuation in direct writes
is interesting.
Thank you for the explanation.

Muuhh

-- 
IKEDA, Munehiro
   NEC Corporation of America
     m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org

* Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment
From: Ryo Tsuruta @ 2009-09-02 11:18 UTC
  To: baramsori72; +Cc: ioband-devel, containers, dm-devel, corsetproject

Hi Dong-Jae,

Dong-Jae Kang <baramsori72@gmail.com> wrote:
> Hi Ryo,
>
> Sorry for the late reply.
>
> Mr. Lee and I tested the total bandwidth as you requested; the
> results are in the attached file.

Thank you for your work.

> I know attaching files is frowned upon on a mailing list, but I chose
> it for efficient and easy communication. Sorry. :)
>
> The Buffered-Device case may look somewhat strange: it has too much
> variation in total bandwidth, so you had better treat it as a
> reference only. The other cases did not show big fluctuations like
> the Buffered-Device case.
>
> If you have any other requests about the results, please reply to me.

Could you try the test on the weight policy with the token increased
to 1280? I guess the throughput difference between with and without
dm-ioband is caused by the token count being a little small for the
disk speed.
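
If I read the dm-ioband documentation's table format correctly, the
token base is the parameter right after the policy name, so the ioband
device would be recreated along these lines (the partition, weight,
and device name here are assumptions for illustration):

    # remove the old mapping, then recreate it with token base 1280
    dmsetup remove ioband1
    echo "0 $(blockdev --getsize /dev/sdb2) ioband /dev/sdb2 1 0 0 none weight 1280 :80" \
        | dmsetup create ioband1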

> Additionally, I will try to report the test results in a cgroup
> environment to you case by case.
> What do you think about that?

I would like to know the reason for the difference between the debug
patch and iostat in the previous test you did
(total bandwidth result.xls).

Thanks,
Ryo Tsuruta

end of thread, newest message: 2009-09-02 11:18 UTC

Thread overview: 10 messages
-- links below jump to the message on this page --
2009-08-27  1:46 [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment Dong-Jae Kang
2009-08-27  6:20 ` Ryo Tsuruta
2009-08-27 11:03   ` Dong-Jae Kang
2009-08-28 10:24     ` Dong-Jae Kang
2009-08-31  2:12       ` Ryo Tsuruta
2009-08-31 10:08         ` Dong-Jae Kang
2009-09-02 11:18           ` Ryo Tsuruta
2009-08-28 15:58 ` Munehiro Ikeda
2009-08-31  7:16   ` Dong-Jae Kang
2009-08-31 16:40     ` Munehiro Ikeda
