* ceph/rbd benchmarks
From: Marcus Sorensen @ 2011-08-24 15:29 UTC
  To: ceph-devel

Just thought I'd share this basic testing I did, comparing cephfs 0.32
on 3.1-rc1 to nfs as well as rbd to iscsi. I'm sure you guys see a lot
of this. Any feedback would be appreciated.

The data is here:

http://learnitwithme.com/wp-content/uploads/2011/08/ceph-nfs-iscsi-benchmarks.ods

and the writeup is here:

http://learnitwithme.com/?p=303
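
(A note for anyone reproducing this kind of comparison: the metric names reported
later in the thread -- seq char/blk write, rewrite, seeks, seq/rand create -- match
bonnie++'s categories, so the sketch below assumes that tool and shows one way to
drive an identical run against each filesystem under test. The mount points, sizes,
and user are illustrative assumptions, not the setup actually used here; check your
bonnie++ man page for the exact options on your version.)

    import subprocess

    # Hypothetical harness (not the one behind the linked spreadsheet): run an
    # identical bonnie++ pass against each mounted filesystem and keep the
    # machine-readable CSV line it prints at the end.
    TARGETS = {
        "cephfs": "/mnt/ceph",    # assumed mount points, for illustration only
        "nfs":    "/mnt/nfs",
        "rbd":    "/mnt/rbd",     # kernel rbd image, formatted and mounted locally
        "iscsi":  "/mnt/iscsi",
    }

    results = {}
    for name, mountpoint in TARGETS.items():
        # -d test dir, -s data size in MB (should exceed RAM; 2048 is a placeholder),
        # -n small-file count (x1024), -u user, -m row label, -q quiet.
        cmd = ["bonnie++", "-d", mountpoint, "-s", "2048", "-n", "16",
               "-u", "root", "-m", name, "-q"]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        results[name] = out.stdout.strip().splitlines()[-1]   # trailing CSV line

    for name, csv_line in results.items():
        print(name, csv_line)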


* Re: ceph/rbd benchmarks
From: Gregory Farnum @ 2011-08-24 16:36 UTC
  To: Marcus Sorensen; +Cc: ceph-devel

On Wed, Aug 24, 2011 at 8:29 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> Just thought I'd share this basic testing I did, comparing cephfs 0.32
> on 3.1-rc1 to nfs as well as rbd to iscsi. I'm sure you guys see a lot
> of this. Any feedback would be appreciated.
>
> The data is here:
>
> http://learnitwithme.com/wp-content/uploads/2011/08/ceph-nfs-iscsi-benchmarks.ods
>
> and the writeup is here:
>
> http://learnitwithme.com/?p=303

We see less of it than you'd think, actually. Thanks!

To address a few things specifically:
Ceph is both the name of the project and of the POSIX-compliant
filesystem. RADOS stands for Reliable Autonomous Distributed Object
Store. Apparently we should publish this a bit more. :)

Looks like most of the differences in your tests have to do with our
relatively lousy read performance -- this is probably due to lousy
readahead, which nobody's spent a lot of time optimizing as we focus
on stability. Sage made some improvements a few weeks ago but I don't
remember what version of stuff they ended up in. :) (Optimizing
cross-server reads is hard!)
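
(Since readahead is the suspect: for the two block-device contenders in the test,
rbd and the iSCSI LUN, the block layer's readahead window is easy to inspect and
tune through sysfs. A minimal sketch, with device names assumed for illustration;
note this knob is separate from the cephfs kernel client's own readahead, which is
what the patches discussed later in the thread change.)

    from pathlib import Path

    # Inspect (and optionally bump) the block-layer readahead window for the
    # block-device contenders.  Device names are assumptions.
    DEVICES = ["rbd0", "sdb"]      # hypothetical: rbd image vs. iSCSI LUN

    def read_ahead_kb(dev: str) -> int:
        return int(Path(f"/sys/block/{dev}/queue/read_ahead_kb").read_text())

    def set_read_ahead_kb(dev: str, kb: int) -> None:
        # needs root; takes effect for subsequent reads
        Path(f"/sys/block/{dev}/queue/read_ahead_kb").write_text(str(kb))

    for dev in DEVICES:
        print(dev, read_ahead_kb(dev), "kB readahead")
        # e.g. set_read_ahead_kb(dev, 4096) to try a larger window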


* Re: ceph/rbd benchmarks
From: Sage Weil @ 2011-08-24 16:46 UTC
  To: Gregory Farnum; +Cc: Marcus Sorensen, ceph-devel

On Wed, 24 Aug 2011, Gregory Farnum wrote:
> [...]
> We see less of it than you'd think, actually. Thanks!
> 
> To address a few things specifically
> Ceph is both the name of the project and of the POSIX-compliant
> filesystem. RADOS stands for Reliable Autonomous Distributed Object
> Store. Apparently we should publish this a bit more. :)
> 
> Looks like most of the differences in your tests have to do with our
> relatively lousy read performance -- this is probably due to lousy
> readahead, which nobody's spent a lot of time optimizing as we focus
> on stability. Sage made some improvements a few weeks ago but I don't
> remember what version of stuff they ended up in. :) (Optimizing
> cross-server reads is hard!)

The readahead improvements are in the 'master' branch of ceph-client.git, 
and will go upstream for Linux 3.2-rc1 (I just missed the 3.1-rc1 cutoff).  
In my tests I was limited by the wire speed with these patches.  I'm 
guessing you were using a 3.0 or earlier kernel?
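
(For a sense of scale on "wire speed": assuming a single gigabit link with a
standard 1500-byte MTU -- the thread doesn't say what the hardware was -- the TCP
payload ceiling works out to a bit under 120 MB/s, per the back-of-envelope below.)

    # Back-of-envelope "wire speed" ceiling, assuming a single 1 GbE link and a
    # 1500-byte MTU (the thread does not say what NIC or MTU was in use).
    line_rate_bps = 1_000_000_000            # 1 Gb/s
    bytes_on_wire = 8 + 14 + 1500 + 4 + 12   # preamble, eth hdr, MTU, FCS, gap
    tcp_payload   = 1500 - 20 - 32           # minus IP hdr, TCP hdr w/ timestamps

    ceiling = line_rate_bps / 8 * tcp_payload / bytes_on_wire / 1e6
    print(f"~{ceiling:.0f} MB/s of TCP payload")   # roughly 118 MB/s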

The file copy test was also surprising.  I think there is a regression 
there somewhere; I'm taking a look.

sage



* Re: ceph/rbd benchmarks
From: Marcus Sorensen @ 2011-08-24 17:19 UTC
  To: Sage Weil; +Cc: Gregory Farnum, ceph-devel

I knew I had read the acronym in a PDF somewhere (something from '08, I
think), but I couldn't find it when I needed it. Thanks.

Everything was running 3.1-rc1; I checked the kernel source before
building, and it already included the following patches, so I assumed I
was good on the readahead thing.

https://patchwork.kernel.org/patch/1001462/
https://patchwork.kernel.org/patch/1001432/

On Wed, Aug 24, 2011 at 10:46 AM, Sage Weil <sage@newdream.net> wrote:
> [...]


* Re: ceph/rbd benchmarks
From: Sage Weil @ 2011-08-24 17:51 UTC
  To: Marcus Sorensen; +Cc: Gregory Farnum, ceph-devel


On Wed, 24 Aug 2011, Marcus Sorensen wrote:
> I knew I had read the acronym in a PDF somewhere (something from '08,
> I think), but I couldn't find it when I needed it. Thanks.
> 
> Everything was running 3.1-rc1; I checked the kernel source before
> building, and it already included the following patches, so I assumed
> I was good on the readahead thing.
> 
> https://patchwork.kernel.org/patch/1001462/
> https://patchwork.kernel.org/patch/1001432/

Yeah, those two help marginally, but the big fix is 

http://ceph.newdream.net/git/?p=ceph-client.git;a=commit;h=78e669966f994964581167c6e25c83d22ebb26c6

and you'd probably also want

http://ceph.newdream.net/git/?p=ceph-client.git;a=commitdiff;h=6468bfe33c8e674509e39e43fad6bc833398fee2

Those are in linux-next and will be sent upstream for 3.2-rc1.
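
(A quick way to confirm whether a given tree already has these fixes, with a
reasonably recent git, is to ask whether each commit is an ancestor of the HEAD
you built. A minimal sketch; the repository path is an assumption, and the SHAs
are the ones linked above.)

    import subprocess

    # Check whether each fix is already an ancestor of the HEAD you built.
    # Run against a clone of ceph-client.git (or a kernel tree that has merged
    # it); the "ceph-client" path below is an assumption.
    FIXES = [
        "78e669966f994964581167c6e25c83d22ebb26c6",   # readahead fix linked above
        "6468bfe33c8e674509e39e43fad6bc833398fee2",   # follow-up commit linked above
    ]

    def contains(repo: str, sha: str) -> bool:
        # exit status 0 means the commit is an ancestor of HEAD
        r = subprocess.run(["git", "-C", repo, "merge-base", "--is-ancestor",
                            sha, "HEAD"], stderr=subprocess.DEVNULL)
        return r.returncode == 0

    for sha in FIXES:
        print(sha[:12], "present" if contains("ceph-client", sha) else "missing")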

Not sure if it's worth rerunning your tests just yet (I want to look at 
the MDS stuff still), but it should fix the sequential read performance.

sage




* Re: ceph/rbd benchmarks
From: Marcus Sorensen @ 2011-08-26 20:53 UTC
  To: Sage Weil; +Cc: Gregory Farnum, ceph-devel

I applied the patches to the client's existing kernel version; here
are the results relative to the pre-patch run:

metric            post/pre
seq char write     100.60%
seq blk write       97.18%
seq blk rewrite    120.07%
seq char read      161.52%
seq blk read       162.30%
seeks              141.93%
seq create          76.02%
seq read           186.78%
seq delete         103.18%
rand create         91.50%
rand read           86.34%
rand delete         84.94%


Sequential reads are now limited by the NIC, like the other filesystems.
Rewrite also got a nice bump, as did the other sequential reads, but
strangely sequential create is down a non-trivial amount. I did expect
that the random operations might take a small hit.
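
(For reference, the percentages above read as post-patch divided by pre-patch for
each metric; a tiny sketch of the arithmetic, with placeholder absolute numbers,
since the real figures live in the linked spreadsheet.)

    # The table reads as post-patch divided by pre-patch for each metric.  The
    # absolute numbers below are placeholders (the real figures are in the
    # linked spreadsheet); only the arithmetic is the point.
    pre  = {"seq blk read": 38.0, "seq create": 480.0}   # hypothetical values
    post = {"seq blk read": 61.7, "seq create": 365.0}

    for metric in pre:
        print(f"{metric:14s} {100.0 * post[metric] / pre[metric]:7.2f}%")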

On Wed, Aug 24, 2011 at 11:51 AM, Sage Weil <sage@newdream.net> wrote:
> [...]

