ceph-devel.vger.kernel.org archive mirror
* RBD performance is not good
@ 2010-08-10  3:23 Xiaoguang Liu
  2010-08-10  4:58 ` Yehuda Sadeh Weinraub
  2010-08-10  5:46 ` Haifeng Liu
  0 siblings, 2 replies; 6+ messages in thread
From: Xiaoguang Liu @ 2010-08-10  3:23 UTC (permalink / raw)
  To: ceph-devel

On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
is much lower than I expected.
I can get 30-60 MB/s on a Ceph filesystem over the same cluster.

Any ideas?

[root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0  bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s


* Re: RBD performance is not good
  2010-08-10  3:23 RBD performance is not good Xiaoguang Liu
@ 2010-08-10  4:58 ` Yehuda Sadeh Weinraub
  2010-08-10  5:46 ` Haifeng Liu
  1 sibling, 0 replies; 6+ messages in thread
From: Yehuda Sadeh Weinraub @ 2010-08-10  4:58 UTC (permalink / raw)
  To: Xiaoguang Liu; +Cc: ceph-devel

On Mon, Aug 9, 2010 at 8:23 PM, Xiaoguang Liu <syslxg@gmail.com> wrote:
> On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
> is much lower than I expected.
> I can get 30-60 MB/s on a Ceph filesystem over the same cluster.
>
> Any ideas?
>

It is hard to know without more info, but note that when you write
directly to the block device you bypass any caches, since you are doing a
raw write. So it is expected that you would see a performance drop. You
can try running 'iostat -x 1'; it should give some idea of what's going
on.
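For example, something like this, run on each OSD node while the dd is in
flight (column names vary with your sysstat version), should show whether
an OSD data disk is saturating; high await and avgqu-sz with %util near
100 point at a slow disk:

iostat -x 1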

Thanks,
Yehuda


* Re: RBD performance is not good
  2010-08-10  3:23 RBD performance is not good Xiaoguang Liu
  2010-08-10  4:58 ` Yehuda Sadeh Weinraub
@ 2010-08-10  5:46 ` Haifeng Liu
  2010-08-10  8:18   ` Ulrich Oelmann
  1 sibling, 1 reply; 6+ messages in thread
From: Haifeng Liu @ 2010-08-10  5:46 UTC (permalink / raw)
  To: ceph-devel

Xiaoguang,

This should be a normal result: the client's page cache buffers the write
when you dd onto a Ceph fs, while there is no buffering when you dd to an
RBD device. What do you think?
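A quick way to check, assuming your dd supports these flags and the
client handles O_DIRECT (the mount point is just an example), is to take
the page cache out of the filesystem run as well:

dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=5000 oflag=direct

or to keep buffered writes but force a flush before dd reports its rate:

dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=5000 conv=fdatasync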

Thanks
-haifeng


On 8/10/10 11:23 AM, "Xiaoguang Liu" <syslxg@gmail.com> wrote:

> On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
> is much lower than I expected.
> I can get 30-60 MB/s on a Ceph filesystem over the same cluster.
>
> Any ideas?
>
> [root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0  bs=1M count=5000
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s



* Re: RBD performance is not good
  2010-08-10  5:46 ` Haifeng Liu
@ 2010-08-10  8:18   ` Ulrich Oelmann
  2010-08-10  8:27     ` Wido den Hollander
  0 siblings, 1 reply; 6+ messages in thread
From: Ulrich Oelmann @ 2010-08-10  8:18 UTC (permalink / raw)
  To: ceph-devel

Hi there,

I don't think that Xiaoguang saw the speed of 30-60 MB/s because of cache
effects, as the speed should then be on the order of at least a few
hundred MB/s. Do you agree?

Could someone with a running RBD setup perhaps confirm or refute
Xiaoguang's measurements?

Best regards
Ulrich


-----Original Message-----
From: Haifeng Liu <haifeng@yahoo-inc.com>
Sent: Aug 10, 2010 7:46:52 AM
To: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
Subject: Re: RBD performance is not good

>Xiaoguang,
>
>This should be a normal result: the client's page cache buffers the write
>when you dd onto a Ceph fs, while there is no buffering when you dd to an
>RBD device. What do you think?
>
>Thanks
>-haifeng
>
>
>On 8/10/10 11:23 AM, "Xiaoguang Liu"  wrote:
>
>> On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
>> is much lower than I expected.
>> I can get 30-60 MB/s on a Ceph filesystem over the same cluster.
>>
>> Any ideas?
>>
>> [root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0  bs=1M count=5000
>> 5000+0 records in
>> 5000+0 records out
>> 5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s


* Re: RBD performance is not good
  2010-08-10  8:18   ` Ulrich Oelmann
@ 2010-08-10  8:27     ` Wido den Hollander
  2010-08-11  5:57       ` Xiaoguang Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Wido den Hollander @ 2010-08-10  8:27 UTC (permalink / raw)
  To: Ulrich Oelmann; +Cc: ceph-devel

Hi,

It might be useful to benchmark your individual OSDs first.

Wiki: http://ceph.newdream.net/wiki/Troubleshooting#OSD_performance

A single slow OSD can drag down the whole cluster.
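If I remember the syntax right (the wiki page above has the authoritative
version), something like this runs a write benchmark on a single OSD and
reports the result in the cluster log:

ceph osd tell 0 bench

Then compare the "bench: wrote ... at ... KB/sec" lines across your OSDs.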

In some tests I was able to reach speeds of up to 150 MB/sec writing to my
Ceph filesystem (dd with conv=sync).

I haven't tried RBD yet on this cluster, but in my previous setup I was
reaching about 30-35 MB/sec with RBD.

Wido

On Tue, 2010-08-10 at 10:18 +0200, Ulrich Oelmann wrote:
> Hi there,
> 
> I don't think that Xiaoguang saw the speed of 30-60 MB/s because of cache
> effects, as the speed should then be on the order of at least a few
> hundred MB/s. Do you agree?
>
> Could someone with a running RBD setup perhaps confirm or refute
> Xiaoguang's measurements?
> 
> Best regards
> Ulrich
> 
> 
> -----Original Message-----
> From: Haifeng Liu <haifeng@yahoo-inc.com>
> Sent: Aug 10, 2010 7:46:52 AM
> To: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
> Subject: Re: RBD performance is not good
> 
> >Xiaoguang,
> >
> >This should be a normal result: the client's page cache buffers the write
> >when you dd onto a Ceph fs, while there is no buffering when you dd to an
> >RBD device. What do you think?
> >
> >Thanks
> >-haifeng
> >
> >
> >On 8/10/10 11:23 AM, "Xiaoguang Liu"  wrote:
> >
> >> On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
> >> is much lower than I expected.
> >> I can get 30-60 MB/s on a Ceph filesystem over the same cluster.
> >>
> >> Any ideas?
> >>
> >> [root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0  bs=1M count=5000
> >> 5000+0 records in
> >> 5000+0 records out
> >> 5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s



* Re: RBD performance is not good
  2010-08-10  8:27     ` Wido den Hollander
@ 2010-08-11  5:57       ` Xiaoguang Liu
  0 siblings, 0 replies; 6+ messages in thread
From: Xiaoguang Liu @ 2010-08-11  5:57 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: Ulrich Oelmann, ceph-devel

The osd bench result on both nodes is about 30 MB/s, but I can only get
10 MB/s on /dev/rbd0. This is confusing.
The performance on the Ceph fs is affected by the buffer cache very much.
If I write twice the size of system memory to the Ceph fs using dd (so I
suppose the buffer cache effect is minor), the rate is about 30 MB/s,
which matches the single-OSD benchmark very well.
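For reference, the kind of run I mean (the path and count are
illustrative; pick the count so the total written is at least twice your
RAM, and conv=fdatasync makes dd flush before it reports the rate):

dd if=/dev/zero of=/mnt/ceph/bigfile bs=1M count=8000 conv=fdatasync

The osd bench output from both nodes: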
964 1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 34.203317
sec at 30657 KB/sec
10.08.11_13:25:41.370628   log 10.08.11_13:25:40.319855 osd0
172.16.12.1:6801/23964 2 : [INF] bench: wrote 1024 MB in blocks of
4096 KB in 33.138057 sec at 31642 KB/sec
10.08.11_13:26:32.776497   log 10.08.11_13:27:07.812769 osd1
172.16.12.2:6800/31012 1 : [INF] bench: wrote 1024 MB in blocks of
4096 KB in 33.503103 sec at 31297 KB/sec
10.08.11_13:27:10.954734   log 10.08.11_13:27:46.003163 osd1
172.16.12.2:6800/31012 2 : [INF] bench: wrote 1024 MB in blocks of
4096 KB in 34.287723 sec at 30581 KB/sec



On Tue, Aug 10, 2010 at 4:27 PM, Wido den Hollander <wido@widodh.nl> wrote:
> Hi,
>
> It might be useful to benchmark your individual OSDs first.
>
> Wiki: http://ceph.newdream.net/wiki/Troubleshooting#OSD_performance
>
> A single slow OSD can drag down the whole cluster.
>
> In some tests I was able to reach speeds of up to 150 MB/sec writing to my
> Ceph filesystem (dd with conv=sync).
>
> I haven't tried RBD yet on this cluster, but in my previous setup I was
> reaching about 30-35 MB/sec with RBD.
>
> Wido
>
> On Tue, 2010-08-10 at 10:18 +0200, Ulrich Oelmann wrote:
>> Hi there,
>>
>> I don't think that Xiaoguang saw the speed of 30-60 MB/s because of cache
>> effects, as the speed should then be on the order of at least a few
>> hundred MB/s. Do you agree?
>>
>> Could someone with a running RBD setup perhaps confirm or refute
>> Xiaoguang's measurements?
>>
>> Best regards
>> Ulrich
>>
>>
>> -----Original Message-----
>> From: Haifeng Liu <haifeng@yahoo-inc.com>
>> Sent: Aug 10, 2010 7:46:52 AM
>> To: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
>> Subject: Re: RBD performance is not good
>>
>> >Xiaoguang,
>> >
>> >This should be a normal result: the client's page cache buffers the write
>> >when you dd onto a Ceph fs, while there is no buffering when you dd to an
>> >RBD device. What do you think?
>> >
>> >Thanks
>> >-haifeng
>> >
>> >
>> >On 8/10/10 11:23 AM, "Xiaoguang Liu"  wrote:
>> >
>> >> On my 2-osd-node cluster, dd can only reach 13 MB/s on an RBD device. This
>> >> is much lower than I expected.
>> >> I can get 30-60 MB/s on a Ceph filesystem over the same cluster.
>> >>
>> >> Any ideas?
>> >>
>> >> [root@ceph-1 ~]# dd if=/dev/zero of=/dev/rbd0  bs=1M count=5000
>> >> 5000+0 records in
>> >> 5000+0 records out
>> >> 5242880000 bytes (5.2 GB) copied, 391.508 s, 13.4 MB/s
>


Thread overview: 6 messages
2010-08-10  3:23 RBD performance is not good Xiaoguang Liu
2010-08-10  4:58 ` Yehuda Sadeh Weinraub
2010-08-10  5:46 ` Haifeng Liu
2010-08-10  8:18   ` Ulrich Oelmann
2010-08-10  8:27     ` Wido den Hollander
2010-08-11  5:57       ` Xiaoguang Liu
