* severe librbd performance degradation in Giant
@ 2014-09-17 20:55 Somnath Roy
  2014-09-17 20:59 ` Mark Nelson
                   ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 20:55 UTC (permalink / raw)
  To: ceph-devel

Hi Sage,
We are experiencing severe librbd performance degradation in Giant compared to the Firefly release. Here is the experiment we did to isolate it as a librbd problem.

1. A single OSD is running the latest Giant and the client is running fio rbd on top of Firefly-based librbd/librados. For one client it gives ~11-12K IOPS (4K random read).
2. A single OSD is running Giant and the client is running fio rbd on top of Giant-based librbd/librados. For one client it gives ~1.9K IOPS (4K random read).
3. A single OSD is running the latest Giant and the client is running the Giant-based ceph_smaiobench on top of Giant librados. For one client it gives ~11-12K IOPS (4K random read).
4. Giant RGW on top of a Giant OSD is also scaling.


So, it is obvious from the above that recent librbd has issues. I will open a tracker issue for this.
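
Roughly, the fio job looks like this (a sketch only; the pool, image and client names below are placeholders rather than our actual setup, and the queue depth reflects the concurrency of 32 mentioned later in the thread):

  [global]
  ; fio's librbd engine; names below are placeholders
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=testimg
  ; 4K random read at queue depth 32
  rw=randread
  bs=4k
  iodepth=32
  time_based=1
  runtime=60

  [rbd-4k-randread]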

Thanks & Regards
Somnath


* Re: severe librbd performance degradation in Giant
  2014-09-17 20:55 severe librbd performance degradation in Giant Somnath Roy
@ 2014-09-17 20:59 ` Mark Nelson
  2014-09-17 21:01   ` Somnath Roy
       [not found] ` <BA7B69AA-4906-4836-A2F6-5A6EE756A548@profihost.ag>
  2014-09-17 21:20 ` Josh Durgin
  2 siblings, 1 reply; 34+ messages in thread
From: Mark Nelson @ 2014-09-17 20:59 UTC (permalink / raw)
  To: Somnath Roy, ceph-devel

On 09/17/2014 03:55 PM, Somnath Roy wrote:
> Hi Sage,
> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>
> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> 4. Giant RGW on top of Giant OSD is also scaling.
>
>
> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.

Hi Somnath,

How much concurrency?

>
> Thanks & Regards
> Somnath
>

* RE: severe librbd performance degradation in Giant
  2014-09-17 20:59 ` Mark Nelson
@ 2014-09-17 21:01   ` Somnath Roy
  0 siblings, 0 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:01 UTC (permalink / raw)
  To: Mark Nelson, ceph-devel

Mark,
All are running with concurrency 32.

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:mark.nelson@inktank.com] 
Sent: Wednesday, September 17, 2014 1:59 PM
To: Somnath Roy; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

On 09/17/2014 03:55 PM, Somnath Roy wrote:
> Hi Sage,
> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>
> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> 4. Giant RGW on top of Giant OSD is also scaling.
>
>
> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.

Hi Somnath,

How much concurrency?

>
> Thanks & Regards
> Somnath
>

* RE: severe librbd performance degradation in Giant
       [not found] ` <BA7B69AA-4906-4836-A2F6-5A6EE756A548@profihost.ag>
@ 2014-09-17 21:08   ` Somnath Roy
  0 siblings, 0 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:08 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: ceph-devel

But this time it is a ~10X degradation :-(

-----Original Message-----
From: Stefan Priebe - Profihost AG [mailto:s.priebe@profihost.ag] 
Sent: Wednesday, September 17, 2014 2:02 PM
To: Somnath Roy
Cc: ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

I reported the same for librbd in Firefly after upgrading from Dumpling here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040664.html

Stefan

Excuse my typos; sent from my mobile phone.

Am 17.09.2014 um 22:55 schrieb Somnath Roy <Somnath.Roy@sandisk.com>:
Hi Sage,
We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.

1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
4. Giant RGW on top of Giant OSD is also scaling.


So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.

Thanks & Regards
Somnath


* Re: severe librbd performance degradation in Giant
  2014-09-17 20:55 severe librbd performance degradation in Giant Somnath Roy
  2014-09-17 20:59 ` Mark Nelson
       [not found] ` <BA7B69AA-4906-4836-A2F6-5A6EE756A548@profihost.ag>
@ 2014-09-17 21:20 ` Josh Durgin
  2014-09-17 21:29   ` Somnath Roy
  2 siblings, 1 reply; 34+ messages in thread
From: Josh Durgin @ 2014-09-17 21:20 UTC (permalink / raw)
  To: Somnath Roy, ceph-devel

On 09/17/2014 01:55 PM, Somnath Roy wrote:
> Hi Sage,
> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>
> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> 4. Giant RGW on top of Giant OSD is also scaling.
>
>
> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.

For giant the default cache settings changed to:

rbd cache = true
rbd cache writethrough until flush = true

If fio isn't sending flushes as the test is running, the cache will
stay in writethrough mode. Does the difference remain if you set rbd
cache writethrough until flush = false ?

Josh


* RE: severe librbd performance degradation in Giant
  2014-09-17 21:20 ` Josh Durgin
@ 2014-09-17 21:29   ` Somnath Roy
  2014-09-17 21:34     ` Mark Nelson
  2014-09-17 21:35     ` Sage Weil
  0 siblings, 2 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:29 UTC (permalink / raw)
  To: Josh Durgin, ceph-devel

I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.

rbd_cache_writethrough_until_flush = false

But, no difference. BTW, I am doing random reads, not writes. Does this setting still apply?

Next, I tried setting rbd_cache to false and I *got back* the old performance. Now it is similar to Firefly throughput!

So, looks like rbd_cache=true was the culprit.
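
In ceph.conf terms, the two client-side variants I compared are roughly the following (a sketch; the [client] section is assumed, and Ceph reads these option names with either spaces or underscores):

  # Variant A: cache on, writethrough-until-flush disabled -- no difference
  [client]
  rbd cache = true
  rbd cache writethrough until flush = false

  # Variant B: cache off entirely -- back to Firefly-level throughput
  [client]
  rbd cache = false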

Thanks Josh !

Regards
Somnath

-----Original Message-----
From: Josh Durgin [mailto:josh.durgin@inktank.com]
Sent: Wednesday, September 17, 2014 2:20 PM
To: Somnath Roy; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

On 09/17/2014 01:55 PM, Somnath Roy wrote:
> Hi Sage,
> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>
> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> 4. Giant RGW on top of Giant OSD is also scaling.
>
>
> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.

For giant the default cache settings changed to:

rbd cache = true
rbd cache writethrough until flush = true

If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?

Josh


* Re: severe librbd performance degradation in Giant
  2014-09-17 21:29   ` Somnath Roy
@ 2014-09-17 21:34     ` Mark Nelson
  2014-09-17 21:37       ` Somnath Roy
  2014-09-17 21:40       ` Josh Durgin
  2014-09-17 21:35     ` Sage Weil
  1 sibling, 2 replies; 34+ messages in thread
From: Mark Nelson @ 2014-09-17 21:34 UTC (permalink / raw)
  To: Somnath Roy, Josh Durgin, ceph-devel

Any chance read ahead could be causing issues?

On 09/17/2014 04:29 PM, Somnath Roy wrote:
> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>
> rbd_cache_writethrough_until_flush = false
>
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>
> So, loks like rbd_cache=true was the culprit.
>
> Thanks Josh !
>
> Regards
> Somnath
>
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
>
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>> Hi Sage,
>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>
>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
>> 4. Giant RGW on top of Giant OSD is also scaling.
>>
>>
>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>
> For giant the default cache settings changed to:
>
> rbd cache = true
> rbd cache writethrough until flush = true
>
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>
> Josh
>

* RE: severe librbd performance degradation in Giant
  2014-09-17 21:29   ` Somnath Roy
  2014-09-17 21:34     ` Mark Nelson
@ 2014-09-17 21:35     ` Sage Weil
  2014-09-17 21:38       ` Somnath Roy
  1 sibling, 1 reply; 34+ messages in thread
From: Sage Weil @ 2014-09-17 21:35 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Josh Durgin, ceph-devel

What was the IO pattern?  Sequential or random?  For random a slowdown 
makes sense (though maybe not 10x!) but not for sequential...

s

On Wed, 17 Sep 2014, Somnath Roy wrote:

> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> 
> rbd_cache_writethrough_until_flush = false
> 
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> 
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> 
> So, loks like rbd_cache=true was the culprit.
> 
> Thanks Josh !
> 
> Regards
> Somnath
> 
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> > Hi Sage,
> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >
> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> > 4. Giant RGW on top of Giant OSD is also scaling.
> >
> >
> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> 
> For giant the default cache settings changed to:
> 
> rbd cache = true
> rbd cache writethrough until flush = true
> 
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> 
> Josh
> 

* RE: severe librbd performance degradation in Giant
  2014-09-17 21:34     ` Mark Nelson
@ 2014-09-17 21:37       ` Somnath Roy
  2014-09-17 21:40       ` Josh Durgin
  1 sibling, 0 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:37 UTC (permalink / raw)
  To: Mark Nelson, Josh Durgin, ceph-devel

It's the default read ahead setting. I am doing random reads, so I don't think read ahead is the issue.
Also, on the cluster side, ceph -s is reporting the same IOPS, so the IOs are hitting the cluster.

-----Original Message-----
From: Mark Nelson [mailto:mark.nelson@inktank.com] 
Sent: Wednesday, September 17, 2014 2:34 PM
To: Somnath Roy; Josh Durgin; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

Any chance read ahead could be causing issues?

On 09/17/2014 04:29 PM, Somnath Roy wrote:
> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>
> rbd_cache_writethrough_until_flush = false
>
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>
> So, loks like rbd_cache=true was the culprit.
>
> Thanks Josh !
>
> Regards
> Somnath
>
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
>
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>> Hi Sage,
>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>
>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
>> 4. Giant RGW on top of Giant OSD is also scaling.
>>
>>
>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>
> For giant the default cache settings changed to:
>
> rbd cache = true
> rbd cache writethrough until flush = true
>
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>
> Josh
>

* RE: severe librbd performance degradation in Giant
  2014-09-17 21:35     ` Sage Weil
@ 2014-09-17 21:38       ` Somnath Roy
  2014-09-17 21:44         ` Somnath Roy
  2014-09-17 23:44         ` Somnath Roy
  0 siblings, 2 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:38 UTC (permalink / raw)
  To: Sage Weil; +Cc: Josh Durgin, ceph-devel

Sage,
It's a 4K random read.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com] 
Sent: Wednesday, September 17, 2014 2:36 PM
To: Somnath Roy
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....

s

On Wed, 17 Sep 2014, Somnath Roy wrote:

> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> 
> rbd_cache_writethrough_until_flush = false
> 
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> 
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> 
> So, loks like rbd_cache=true was the culprit.
> 
> Thanks Josh !
> 
> Regards
> Somnath
> 
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> > Hi Sage,
> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >
> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> > 4. Giant RGW on top of Giant OSD is also scaling.
> >
> >
> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> 
> For giant the default cache settings changed to:
> 
> rbd cache = true
> rbd cache writethrough until flush = true
> 
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> 
> Josh
> 

* Re: severe librbd performance degradation in Giant
  2014-09-17 21:34     ` Mark Nelson
  2014-09-17 21:37       ` Somnath Roy
@ 2014-09-17 21:40       ` Josh Durgin
  1 sibling, 0 replies; 34+ messages in thread
From: Josh Durgin @ 2014-09-17 21:40 UTC (permalink / raw)
  To: Mark Nelson, Somnath Roy, ceph-devel

No, readahead isn't merged into librbd yet. The ObjectCacher (which
implements rbd and ceph-fuse caching) has a global lock, which could be
a bottleneck in this case.
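
As a rough illustration only (this is not the ObjectCacher code, just a toy cache guarded by one global mutex, to show where concurrent 4K reads end up serializing):

  #include <cstddef>
  #include <cstdint>
  #include <map>
  #include <mutex>
  #include <utility>
  #include <vector>

  // Toy model of a client-side cache protected by a single global mutex.
  // Every reader, from every dispatch thread, takes the same lock even
  // when the result is a miss that goes out to the OSD anyway.
  class ToyObjectCache {
    std::mutex lock;                                // one lock for the whole cache
    std::map<uint64_t, std::vector<char>> extents;  // offset -> cached data

  public:
    bool read(uint64_t off, size_t len, std::vector<char>& out) {
      std::lock_guard<std::mutex> g(lock);
      auto it = extents.find(off);
      if (it == extents.end() || it->second.size() < len)
        return false;                               // miss: caller issues the OSD read
      out.assign(it->second.begin(), it->second.begin() + len);
      return true;                                  // hit, served from cache
    }

    void insert(uint64_t off, std::vector<char> data) {
      std::lock_guard<std::mutex> g(lock);
      extents[off] = std::move(data);
    }
  };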

On 09/17/2014 02:34 PM, Mark Nelson wrote:
> Any chance read ahead could be causing issues?
>
> On 09/17/2014 04:29 PM, Somnath Roy wrote:
>> I set the following in the client side /etc/ceph/ceph.conf where I am
>> running fio rbd.
>>
>> rbd_cache_writethrough_until_flush = false
>>
>> But, no difference. BTW, I am doing Random read, not write. Still this
>> setting applies ?
>>
>> Next, I tried to tweak the rbd_cache setting to false and I *got back*
>> the old performance. Now, it is similar to firefly throughput !
>>
>> So, loks like rbd_cache=true was the culprit.
>>
>> Thanks Josh !
>>
>> Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>>
>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>> Hi Sage,
>>> We are experiencing severe librbd performance degradation in Giant
>>> over firefly release. Here is the experiment we did to isolate it as
>>> a librbd problem.
>>>
>>> 1. Single OSD is running latest Giant and client is running fio rbd
>>> on top of firefly based librbd/librados. For one client it is giving
>>> ~11-12K  iops (4K RR).
>>> 2. Single OSD is running Giant and client is running fio rbd on top
>>> of Giant based librbd/librados. For one client it is giving ~1.9K
>>> iops (4K RR).
>>> 3. Single OSD is running latest Giant and client is running Giant
>>> based ceph_smaiobench on top of giant librados. For one client it is
>>> giving ~11-12K  iops (4K RR).
>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>
>>>
>>> So, it is obvious from the above that recent librbd has issues. I
>>> will raise a tracker to track this.
>>
>> For giant the default cache settings changed to:
>>
>> rbd cache = true
>> rbd cache writethrough until flush = true
>>
>> If fio isn't sending flushes as the test is running, the cache will
>> stay in writethrough mode. Does the difference remain if you set rbd
>> cache writethrough until flush = false ?
>>
>> Josh



* RE: severe librbd performance degradation in Giant
  2014-09-17 21:38       ` Somnath Roy
@ 2014-09-17 21:44         ` Somnath Roy
  2014-09-17 23:44         ` Somnath Roy
  1 sibling, 0 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 21:44 UTC (permalink / raw)
  To: Sage Weil; +Cc: Josh Durgin, ceph-devel

Created a tracker for this.

http://tracker.ceph.com/issues/9513

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Wednesday, September 17, 2014 2:39 PM
To: Sage Weil
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Sage,
It's a 4K random read.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com]
Sent: Wednesday, September 17, 2014 2:36 PM
To: Somnath Roy
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....

s

On Wed, 17 Sep 2014, Somnath Roy wrote:

> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> 
> rbd_cache_writethrough_until_flush = false
> 
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> 
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> 
> So, loks like rbd_cache=true was the culprit.
> 
> Thanks Josh !
> 
> Regards
> Somnath
> 
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> > Hi Sage,
> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >
> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> > 4. Giant RGW on top of Giant OSD is also scaling.
> >
> >
> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> 
> For giant the default cache settings changed to:
> 
> rbd cache = true
> rbd cache writethrough until flush = true
> 
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> 
> Josh
> 

* RE: severe librbd performance degradation in Giant
  2014-09-17 21:38       ` Somnath Roy
  2014-09-17 21:44         ` Somnath Roy
@ 2014-09-17 23:44         ` Somnath Roy
  2014-09-18  2:27           ` Haomai Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Somnath Roy @ 2014-09-17 23:44 UTC (permalink / raw)
  To: Sage Weil; +Cc: Josh Durgin, ceph-devel

Josh/Sage,
I should mention that even after turning off the rbd cache I am still seeing a ~20% degradation compared to Firefly.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy 
Sent: Wednesday, September 17, 2014 2:44 PM
To: Sage Weil
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Created a tracker for this.

http://tracker.ceph.com/issues/9513

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Wednesday, September 17, 2014 2:39 PM
To: Sage Weil
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Sage,
It's a 4K random read.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com]
Sent: Wednesday, September 17, 2014 2:36 PM
To: Somnath Roy
Cc: Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....

s

On Wed, 17 Sep 2014, Somnath Roy wrote:

> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> 
> rbd_cache_writethrough_until_flush = false
> 
> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> 
> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> 
> So, loks like rbd_cache=true was the culprit.
> 
> Thanks Josh !
> 
> Regards
> Somnath
> 
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 17, 2014 2:20 PM
> To: Somnath Roy; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> > Hi Sage,
> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >
> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> > 4. Giant RGW on top of Giant OSD is also scaling.
> >
> >
> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> 
> For giant the default cache settings changed to:
> 
> rbd cache = true
> rbd cache writethrough until flush = true
> 
> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> 
> Josh
> 

* Re: severe librbd performance degradation in Giant
  2014-09-17 23:44         ` Somnath Roy
@ 2014-09-18  2:27           ` Haomai Wang
  2014-09-18  3:03             ` Somnath Roy
  2014-09-18  9:49             ` Alexandre DERUMIER
  0 siblings, 2 replies; 34+ messages in thread
From: Haomai Wang @ 2014-09-18  2:27 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, Josh Durgin, ceph-devel

According to http://tracker.ceph.com/issues/9513, do you mean that rbd
cache causes a 10x performance degradation for random read?

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Josh/Sage,
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, September 17, 2014 2:44 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Created a tracker for this.
>
> http://tracker.ceph.com/issues/9513
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Wednesday, September 17, 2014 2:39 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Sage,
> It's a 4K random read.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, September 17, 2014 2:36 PM
> To: Somnath Roy
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
>
> s
>
> On Wed, 17 Sep 2014, Somnath Roy wrote:
>
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>
>> rbd_cache_writethrough_until_flush = false
>>
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>
>> So, loks like rbd_cache=true was the culprit.
>>
>> Thanks Josh !
>>
>> Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>>
>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>> > Hi Sage,
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>> >
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
>> > 4. Giant RGW on top of Giant OSD is also scaling.
>> >
>> >
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>
>> For giant the default cache settings changed to:
>>
>> rbd cache = true
>> rbd cache writethrough until flush = true
>>
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>
>> Josh
>>



-- 
Best Regards,

Wheat


* RE: severe librbd performance degradation in Giant
  2014-09-18  2:27           ` Haomai Wang
@ 2014-09-18  3:03             ` Somnath Roy
  2014-09-18  3:52               ` Sage Weil
  2014-09-18  9:49             ` Alexandre DERUMIER
  1 sibling, 1 reply; 34+ messages in thread
From: Somnath Roy @ 2014-09-18  3:03 UTC (permalink / raw)
  To: Haomai Wang; +Cc: Sage Weil, Josh Durgin, ceph-devel

Yes Haomai...

-----Original Message-----
From: Haomai Wang [mailto:haomaiwang@gmail.com] 
Sent: Wednesday, September 17, 2014 7:28 PM
To: Somnath Roy
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Josh/Sage,
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, September 17, 2014 2:44 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Created a tracker for this.
>
> http://tracker.ceph.com/issues/9513
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Wednesday, September 17, 2014 2:39 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Sage,
> It's a 4K random read.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, September 17, 2014 2:36 PM
> To: Somnath Roy
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
>
> s
>
> On Wed, 17 Sep 2014, Somnath Roy wrote:
>
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>
>> rbd_cache_writethrough_until_flush = false
>>
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>
>> So, loks like rbd_cache=true was the culprit.
>>
>> Thanks Josh !
>>
>> Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>>
>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>> > Hi Sage,
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>> >
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
>> > 4. Giant RGW on top of Giant OSD is also scaling.
>> >
>> >
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>
>> For giant the default cache settings changed to:
>>
>> rbd cache = true
>> rbd cache writethrough until flush = true
>>
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>
>> Josh
>>



--
Best Regards,

Wheat


* RE: severe librbd performance degradation in Giant
  2014-09-18  3:03             ` Somnath Roy
@ 2014-09-18  3:52               ` Sage Weil
  2014-09-18  6:24                 ` Somnath Roy
  0 siblings, 1 reply; 34+ messages in thread
From: Sage Weil @ 2014-09-18  3:52 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Haomai Wang, Josh Durgin, ceph-devel

On Thu, 18 Sep 2014, Somnath Roy wrote:
> Yes Haomai...

I would love to see what a profiler says about the matter.  There is going 
to be some overhead on the client associated with the cache for a 
random IO workload, but 10x is a problem!

sage


> 
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiwang@gmail.com] 
> Sent: Wednesday, September 17, 2014 7:28 PM
> To: Somnath Roy
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
> 
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> > Josh/Sage,
> > I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:44 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Created a tracker for this.
> >
> > http://tracker.ceph.com/issues/9513
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org 
> > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:39 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Sage,
> > It's a 4K random read.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Sage Weil [mailto:sweil@redhat.com]
> > Sent: Wednesday, September 17, 2014 2:36 PM
> > To: Somnath Roy
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
> >
> > s
> >
> > On Wed, 17 Sep 2014, Somnath Roy wrote:
> >
> >> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> >>
> >> rbd_cache_writethrough_until_flush = false
> >>
> >> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> >>
> >> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> >>
> >> So, loks like rbd_cache=true was the culprit.
> >>
> >> Thanks Josh !
> >>
> >> Regards
> >> Somnath
> >>
> >> -----Original Message-----
> >> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> >> Sent: Wednesday, September 17, 2014 2:20 PM
> >> To: Somnath Roy; ceph-devel@vger.kernel.org
> >> Subject: Re: severe librbd performance degradation in Giant
> >>
> >> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> >> > Hi Sage,
> >> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >> >
> >> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> >> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 4. Giant RGW on top of Giant OSD is also scaling.
> >> >
> >> >
> >> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> >>
> >> For giant the default cache settings changed to:
> >>
> >> rbd cache = true
> >> rbd cache writethrough until flush = true
> >>
> >> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> >>
> >> Josh
> >>
> 
> 
> 
> --
> Best Regards,
> 
> Wheat

* RE: severe librbd performance degradation in Giant
  2014-09-18  3:52               ` Sage Weil
@ 2014-09-18  6:24                 ` Somnath Roy
  2014-09-18  8:45                   ` Chen, Xiaoxi
  2014-09-18 14:11                   ` Sage Weil
  0 siblings, 2 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-18  6:24 UTC (permalink / raw)
  To: Sage Weil; +Cc: Haomai Wang, Josh Durgin, ceph-devel

Sage,
Any reason why the cache is enabled by default in Giant?
Regarding profiling, I will see whether I can run VTune/mutrace on this.
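
For example, something along these lines (a sketch; the job file name is the placeholder from earlier in the thread, and perf is mentioned here only as one more option besides VTune/mutrace):

  # mutex contention profile for the whole fio run
  mutrace fio rbd-4k-randread.fio

  # or a CPU profile of an already-running fio process
  perf record -F 99 -g -p <fio-pid> -- sleep 30
  perf report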

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com] 
Sent: Wednesday, September 17, 2014 8:53 PM
To: Somnath Roy
Cc: Haomai Wang; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

On Thu, 18 Sep 2014, Somnath Roy wrote:
> Yes Haomai...

I would love to what a profiler says about the matter.  There is going to be some overhead on the client associated with the cache for a random io workload, but 10x is a problem!

sage


> 
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiwang@gmail.com]
> Sent: Wednesday, September 17, 2014 7:28 PM
> To: Somnath Roy
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
> 
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> > Josh/Sage,
> > I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:44 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Created a tracker for this.
> >
> > http://tracker.ceph.com/issues/9513
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org 
> > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:39 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Sage,
> > It's a 4K random read.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Sage Weil [mailto:sweil@redhat.com]
> > Sent: Wednesday, September 17, 2014 2:36 PM
> > To: Somnath Roy
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequential....
> >
> > s
> >
> > On Wed, 17 Sep 2014, Somnath Roy wrote:
> >
> >> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> >>
> >> rbd_cache_writethrough_until_flush = false
> >>
> >> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> >>
> >> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> >>
> >> So, looks like rbd_cache=true was the culprit.
> >>
> >> Thanks Josh !
> >>
> >> Regards
> >> Somnath
> >>
> >> -----Original Message-----
> >> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> >> Sent: Wednesday, September 17, 2014 2:20 PM
> >> To: Somnath Roy; ceph-devel@vger.kernel.org
> >> Subject: Re: severe librbd performance degradation in Giant
> >>
> >> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> >> > Hi Sage,
> >> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >> >
> >> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> >> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 4. Giant RGW on top of Giant OSD is also scaling.
> >> >
> >> >
> >> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> >>
> >> For giant the default cache settings changed to:
> >>
> >> rbd cache = true
> >> rbd cache writethrough until flush = true
> >>
> >> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> >>
> >> Josh
> >>
> >> ________________________________
> >>
> >> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> >> in the body of a message to majordomo@vger.kernel.org More 
> >> majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >>
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 
> --
> Best Regards,
> 
> Wheat

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-18  6:24                 ` Somnath Roy
@ 2014-09-18  8:45                   ` Chen, Xiaoxi
  2014-09-18 14:11                   ` Sage Weil
  1 sibling, 0 replies; 34+ messages in thread
From: Chen, Xiaoxi @ 2014-09-18  8:45 UTC (permalink / raw)
  To: Somnath Roy, Sage Weil; +Cc: Haomai Wang, Josh Durgin, ceph-devel

Same question as Somnath. Some of our customers are not that comfortable with the cache; they still have some consistency concerns.
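For what it's worth, a minimal client-side override along these lines keeps such clients on the old behaviour (illustrative only; both options are the ones discussed further down in this thread):

  [client]
      # opt out of the new Giant default entirely
      rbd cache = false

      # or keep the cache but leave it in writethrough until the guest flushes:
      # rbd cache = true
      # rbd cache writethrough until flush = true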

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, September 18, 2014 2:25 PM
To: Sage Weil
Cc: Haomai Wang; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Sage,
Any reason why the cache is by default enabled in Giant ?
Regarding profiling, I will try if I can run Vtune/mutrace on this.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com]
Sent: Wednesday, September 17, 2014 8:53 PM
To: Somnath Roy
Cc: Haomai Wang; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

On Thu, 18 Sep 2014, Somnath Roy wrote:
> Yes Haomai...

I would love to hear what a profiler says about the matter.  There is going to be some overhead on the client associated with the cache for a random io workload, but 10x is a problem!

sage


> 
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiwang@gmail.com]
> Sent: Wednesday, September 17, 2014 7:28 PM
> To: Somnath Roy
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
> 
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> > Josh/Sage,
> > I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:44 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Created a tracker for this.
> >
> > http://tracker.ceph.com/issues/9513
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: ceph-devel-owner@vger.kernel.org 
> > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> > Sent: Wednesday, September 17, 2014 2:39 PM
> > To: Sage Weil
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > Sage,
> > It's a 4K random read.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Sage Weil [mailto:sweil@redhat.com]
> > Sent: Wednesday, September 17, 2014 2:36 PM
> > To: Somnath Roy
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: RE: severe librbd performance degradation in Giant
> >
> > What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequential....
> >
> > s
> >
> > On Wed, 17 Sep 2014, Somnath Roy wrote:
> >
> >> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> >>
> >> rbd_cache_writethrough_until_flush = false
> >>
> >> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> >>
> >> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> >>
> >> So, looks like rbd_cache=true was the culprit.
> >>
> >> Thanks Josh !
> >>
> >> Regards
> >> Somnath
> >>
> >> -----Original Message-----
> >> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> >> Sent: Wednesday, September 17, 2014 2:20 PM
> >> To: Somnath Roy; ceph-devel@vger.kernel.org
> >> Subject: Re: severe librbd performance degradation in Giant
> >>
> >> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> >> > Hi Sage,
> >> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> >> >
> >> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> >> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> >> > 4. Giant RGW on top of Giant OSD is also scaling.
> >> >
> >> >
> >> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> >>
> >> For giant the default cache settings changed to:
> >>
> >> rbd cache = true
> >> rbd cache writethrough until flush = true
> >>
> >> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> >>
> >> Josh
> >>
> >> ________________________________
> >>
> >> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> >> in the body of a message to majordomo@vger.kernel.org More 
> >> majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >>
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 
> --
> Best Regards,
> 
> Wheat
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-18  2:27           ` Haomai Wang
  2014-09-18  3:03             ` Somnath Roy
@ 2014-09-18  9:49             ` Alexandre DERUMIER
  2014-09-18 12:38               ` Mark Nelson
  2014-09-18 18:02               ` Somnath Roy
  1 sibling, 2 replies; 34+ messages in thread
From: Alexandre DERUMIER @ 2014-09-18  9:49 UTC (permalink / raw)
  To: Haomai Wang; +Cc: Sage Weil, Josh Durgin, ceph-devel, Somnath Roy

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read? 

Hi, on my side, I don't see any performance degradation on read (seq or rand), with or without rbd_cache.

firefly : around 12000 iops (with or without rbd_cache)
giant : around 12000 iops (with or without rbd_cache)

(and I can reach around 20000-30000 iops on giant by disabling the optracker; a sketch of that override is below).


rbd_cache only improves write performance for me (4k blocks).
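(For the optracker comparison, a sketch of one way to turn it off on the OSD side, assuming the option name matches your build; note that disabling it loses the in-flight/historic op diagnostics:

  [osd]
      osd enable op tracker = false
)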



----- Original Message ----- 

From: "Haomai Wang" <haomaiwang@gmail.com> 
To: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Sent: Thursday, 18 September 2014 04:27:56 
Subject: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd 
cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage, 
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:44 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:39 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Sage, 
> It's a 4K random read. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Sage Weil [mailto:sweil@redhat.com] 
> Sent: Wednesday, September 17, 2014 2:36 PM 
> To: Somnath Roy 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequential.... 
> 
> s 
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false 
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, looks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards 
>> Somnath 
>> 
>> -----Original Message----- 
>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
>> Sent: Wednesday, September 17, 2014 2:20 PM 
>> To: Somnath Roy; ceph-devel@vger.kernel.org 
>> Subject: Re: severe librbd performance degradation in Giant 
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage, 
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true 
>> rbd cache writethrough until flush = true 
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh 
>> 
>> ________________________________ 
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html 
>> 
>> 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> the body of a message to majordomo@vger.kernel.org 
> More majordomo info at http://vger.kernel.org/majordomo-info.html 



-- 
Best Regards, 

Wheat 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-18  9:49             ` Alexandre DERUMIER
@ 2014-09-18 12:38               ` Mark Nelson
  2014-09-18 18:02               ` Somnath Roy
  1 sibling, 0 replies; 34+ messages in thread
From: Mark Nelson @ 2014-09-18 12:38 UTC (permalink / raw)
  To: Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel, Somnath Roy

On 09/18/2014 04:49 AM, Alexandre DERUMIER wrote:
>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd
>>> cache will make 10x performance degradation for random read?
>
> Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.
>
> firefly : around 12000iops (with or without rbd_cache)
> giant : around 12000iops  (with or without rbd_cache)
>
> (and I can reach around 20000-30000 iops on giant with disabling optracker).
>
>
> rbd_cache only improve write performance for me (4k block )

I can't do it right now since I'm in the middle of reinstalling Fedora 
on the test nodes, but I will try to replicate this as well if we 
haven't figured it out beforehand.

Mark

>
>
>
> ----- Original Message -----
>
> From: "Haomai Wang" <haomaiwang@gmail.com>
> To: "Somnath Roy" <Somnath.Roy@sandisk.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
> Sent: Thursday, 18 September 2014 04:27:56
> Subject: Re: severe librbd performance degradation in Giant
>
> According http://tracker.ceph.com/issues/9513, do you mean that rbd
> cache will make 10x performance degradation for random read?
>
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
>> Josh/Sage,
>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:44 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Created a tracker for this.
>>
>> http://tracker.ceph.com/issues/9513
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:39 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Sage,
>> It's a 4K random read.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Sage Weil [mailto:sweil@redhat.com]
>> Sent: Wednesday, September 17, 2014 2:36 PM
>> To: Somnath Roy
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequential....
>>
>> s
>>
>> On Wed, 17 Sep 2014, Somnath Roy wrote:
>>
>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>>
>>> rbd_cache_writethrough_until_flush = false
>>>
>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>>
>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>>
>>> So, looks like rbd_cache=true was the culprit.
>>>
>>> Thanks Josh !
>>>
>>> Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>>> Sent: Wednesday, September 17, 2014 2:20 PM
>>> To: Somnath Roy; ceph-devel@vger.kernel.org
>>> Subject: Re: severe librbd performance degradation in Giant
>>>
>>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>>> Hi Sage,
>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>>>
>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR).
>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>>
>>>>
>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>>
>>> For giant the default cache settings changed to:
>>>
>>> rbd cache = true
>>> rbd cache writethrough until flush = true
>>>
>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>>
>>> Josh
>>>
>>> ________________________________
>>>
>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majordomo@vger.kernel.org More majordomo
>>> info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
>

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-18  6:24                 ` Somnath Roy
  2014-09-18  8:45                   ` Chen, Xiaoxi
@ 2014-09-18 14:11                   ` Sage Weil
  1 sibling, 0 replies; 34+ messages in thread
From: Sage Weil @ 2014-09-18 14:11 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Haomai Wang, Josh Durgin, ceph-devel

On Thu, 18 Sep 2014, Somnath Roy wrote:
> Sage,
> Any reason why the cache is by default enabled in Giant ?

It's recommended practice to turn it on.  It improves performance in 
general (especially with HDD OSDs).  Do you mind comparing sequential 
small IOs?
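
A fio job along these lines would do for that comparison (pool/image names are placeholders, and it needs fio built with the rbd engine); the two sections run the sequential and random 4K read cases back to back:

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=test-img
  bs=4k
  iodepth=32
  runtime=120
  time_based

  [seq-read-4k]
  rw=read

  [rand-read-4k]
  stonewall
  rw=randread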

sage

> Regarding profiling, I will try if I can run Vtune/mutrace on this.
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com] 
> Sent: Wednesday, September 17, 2014 8:53 PM
> To: Somnath Roy
> Cc: Haomai Wang; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> On Thu, 18 Sep 2014, Somnath Roy wrote:
> > Yes Haomai...
> 
> I would love to hear what a profiler says about the matter.  There is going to be some overhead on the client associated with the cache for a random io workload, but 10x is a problem!
> 
> sage
> 
> 
> > 
> > -----Original Message-----
> > From: Haomai Wang [mailto:haomaiwang@gmail.com]
> > Sent: Wednesday, September 17, 2014 7:28 PM
> > To: Somnath Roy
> > Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> > Subject: Re: severe librbd performance degradation in Giant
> > 
> > According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
> > 
> > On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> > > Josh/Sage,
> > > I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
> > >
> > > Thanks & Regards
> > > Somnath
> > >
> > > -----Original Message-----
> > > From: Somnath Roy
> > > Sent: Wednesday, September 17, 2014 2:44 PM
> > > To: Sage Weil
> > > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > > Subject: RE: severe librbd performance degradation in Giant
> > >
> > > Created a tracker for this.
> > >
> > > http://tracker.ceph.com/issues/9513
> > >
> > > Thanks & Regards
> > > Somnath
> > >
> > > -----Original Message-----
> > > From: ceph-devel-owner@vger.kernel.org 
> > > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> > > Sent: Wednesday, September 17, 2014 2:39 PM
> > > To: Sage Weil
> > > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > > Subject: RE: severe librbd performance degradation in Giant
> > >
> > > Sage,
> > > It's a 4K random read.
> > >
> > > Thanks & Regards
> > > Somnath
> > >
> > > -----Original Message-----
> > > From: Sage Weil [mailto:sweil@redhat.com]
> > > Sent: Wednesday, September 17, 2014 2:36 PM
> > > To: Somnath Roy
> > > Cc: Josh Durgin; ceph-devel@vger.kernel.org
> > > Subject: RE: severe librbd performance degradation in Giant
> > >
> > > What was the io pattern?  Sequential or random?  For random a slowdown makes sense (tho maybe not 10x!) but not for sequential....
> > >
> > > s
> > >
> > > On Wed, 17 Sep 2014, Somnath Roy wrote:
> > >
> > >> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
> > >>
> > >> rbd_cache_writethrough_until_flush = false
> > >>
> > >> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
> > >>
> > >> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
> > >>
> > >> So, looks like rbd_cache=true was the culprit.
> > >>
> > >> Thanks Josh !
> > >>
> > >> Regards
> > >> Somnath
> > >>
> > >> -----Original Message-----
> > >> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> > >> Sent: Wednesday, September 17, 2014 2:20 PM
> > >> To: Somnath Roy; ceph-devel@vger.kernel.org
> > >> Subject: Re: severe librbd performance degradation in Giant
> > >>
> > >> On 09/17/2014 01:55 PM, Somnath Roy wrote:
> > >> > Hi Sage,
> > >> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
> > >> >
> > >> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K  iops (4K RR).
> > >> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K  iops (4K RR).
> > >> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K  iops (4K RR).
> > >> > 4. Giant RGW on top of Giant OSD is also scaling.
> > >> >
> > >> >
> > >> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
> > >>
> > >> For giant the default cache settings changed to:
> > >>
> > >> rbd cache = true
> > >> rbd cache writethrough until flush = true
> > >>
> > >> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
> > >>
> > >> Josh
> > >>
> > >> ________________________________
> > >>
> > >> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
> > >>
> > >> --
> > >> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> > >> in the body of a message to majordomo@vger.kernel.org More 
> > >> majordomo info at  http://vger.kernel.org/majordomo-info.html
> > >>
> > >>
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > > info at  http://vger.kernel.org/majordomo-info.html
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > > info at  http://vger.kernel.org/majordomo-info.html
> > 
> > 
> > 
> > --
> > Best Regards,
> > 
> > Wheat
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-18  9:49             ` Alexandre DERUMIER
  2014-09-18 12:38               ` Mark Nelson
@ 2014-09-18 18:02               ` Somnath Roy
  2014-09-19  1:08                 ` Shu, Xinxin
  2014-09-19 10:09                 ` Alexandre DERUMIER
  1 sibling, 2 replies; 34+ messages in thread
From: Somnath Roy @ 2014-09-18 18:02 UTC (permalink / raw)
  To: Alexandre DERUMIER, Haomai Wang; +Cc: Sage Weil, Josh Durgin, ceph-devel

Alexandre,
What tool are you using? I used fio rbd.

Also, I hope you have the Giant package installed on the client side as well and rbd_cache = true set in the client conf file.
FYI, firefly librbd + librados against a Giant cluster will work seamlessly, so I had to make sure fio rbd was really loading the Giant librbd (easy to miss if you have multiple copies around, which was my case) in order to reproduce this.
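A quick way to confirm which librbd/librados a given fio binary is actually loading (paths below are examples only):

  ldd $(which fio) | grep -E 'librbd|librados'

  # or force a specific copy for the run:
  LD_LIBRARY_PATH=/opt/ceph-giant/lib fio rbd-4k-randread.fio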

Thanks & Regards
Somnath

-----Original Message-----
From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
Sent: Thursday, September 18, 2014 2:49 AM
To: Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
Subject: Re: severe librbd performance degradation in Giant

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read?

Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops  (with or without rbd_cache)

(and I can reach around 20000-30000 iops on giant with disabling optracker).


rbd_cache only improve write performance for me (4k block )



----- Original Message ----- 

From: "Haomai Wang" <haomaiwang@gmail.com>
To: "Somnath Roy" <Somnath.Roy@sandisk.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
Sent: Thursday, 18 September 2014 04:27:56
Subject: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage,
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, September 17, 2014 2:44 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Wednesday, September 17, 2014 2:39 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Sage,
> It's a 4K random read. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, September 17, 2014 2:36 PM
> To: Somnath Roy
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequential.... 
> 
> s
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, looks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards
>> Somnath
>> 
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage,
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true
>> rbd cache writethrough until flush = true
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh
>> 
>> ________________________________
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html
>> 
>> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html



--
Best Regards, 

Wheat
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-18 18:02               ` Somnath Roy
@ 2014-09-19  1:08                 ` Shu, Xinxin
  2014-09-19  1:10                   ` Shu, Xinxin
  2014-09-19  6:53                   ` Stefan Priebe
  2014-09-19 10:09                 ` Alexandre DERUMIER
  1 sibling, 2 replies; 34+ messages in thread
From: Shu, Xinxin @ 2014-09-19  1:08 UTC (permalink / raw)
  To: Somnath Roy, Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

I also observed performance degradation on my full-SSD setup. I could get ~270K IOPS for 4KB random read with 0.80.4, but with latest master I only got ~12K IOPS.

Cheers,
xinxin

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, September 19, 2014 2:03 AM
To: Alexandre DERUMIER; Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Alexandre,
What tool are you using ? I used fio rbd.

Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file.
FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it.

Thanks & Regards
Somnath

-----Original Message-----
From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
Sent: Thursday, September 18, 2014 2:49 AM
To: Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
Subject: Re: severe librbd performance degradation in Giant

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read?

Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops  (with or without rbd_cache)

(and I can reach around 20000-30000 iops on giant with disabling optracker).


rbd_cache only improve write performance for me (4k block )



----- Original Message ----- 

From: "Haomai Wang" <haomaiwang@gmail.com>
To: "Somnath Roy" <Somnath.Roy@sandisk.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
Sent: Thursday, 18 September 2014 04:27:56
Subject: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage,
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, September 17, 2014 2:44 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Wednesday, September 17, 2014 2:39 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Sage,
> It's a 4K random read. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, September 17, 2014 2:36 PM
> To: Somnath Roy
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequential.... 
> 
> s
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, looks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards
>> Somnath
>> 
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage,
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true
>> rbd cache writethrough until flush = true
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh
>> 
>> ________________________________
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html
>> 
>> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html



--
Best Regards, 

Wheat
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-19  1:08                 ` Shu, Xinxin
@ 2014-09-19  1:10                   ` Shu, Xinxin
  2014-09-19  6:53                   ` Stefan Priebe
  1 sibling, 0 replies; 34+ messages in thread
From: Shu, Xinxin @ 2014-09-19  1:10 UTC (permalink / raw)
  To: Shu, Xinxin, Somnath Roy, Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

My bad, with latest master we got ~120K IOPS.

Cheers,
xinxin

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Shu, Xinxin
Sent: Friday, September 19, 2014 9:08 AM
To: Somnath Roy; Alexandre DERUMIER; Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

I also observed performance degradation on my full SSD setup ,  I can got  ~270K IOPS for 4KB random read with 0.80.4 , but with latest master , I only got ~12K IOPS 

Cheers,
xinxin

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, September 19, 2014 2:03 AM
To: Alexandre DERUMIER; Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
Subject: RE: severe librbd performance degradation in Giant

Alexandre,
What tool are you using ? I used fio rbd.

Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file.
FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it.

Thanks & Regards
Somnath

-----Original Message-----
From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
Sent: Thursday, September 18, 2014 2:49 AM
To: Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
Subject: Re: severe librbd performance degradation in Giant

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read?

Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops  (with or without rbd_cache)

(and I can reach around 20000-30000 iops on giant with disabling optracker).


rbd_cache only improve write performance for me (4k block )



----- Original Message ----- 

From: "Haomai Wang" <haomaiwang@gmail.com>
To: "Somnath Roy" <Somnath.Roy@sandisk.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
Sent: Thursday, 18 September 2014 04:27:56
Subject: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage,
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, September 17, 2014 2:44 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Wednesday, September 17, 2014 2:39 PM
> To: Sage Weil
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> Sage,
> It's a 4K random read. 
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, September 17, 2014 2:36 PM
> To: Somnath Roy
> Cc: Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequential.... 
> 
> s
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, looks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards
>> Somnath
>> 
>> -----Original Message-----
>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>> Sent: Wednesday, September 17, 2014 2:20 PM
>> To: Somnath Roy; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage,
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true
>> rbd cache writethrough until flush = true
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh
>> 
>> ________________________________
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html
>> 
>> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html



--
Best Regards, 

Wheat
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19  1:08                 ` Shu, Xinxin
  2014-09-19  1:10                   ` Shu, Xinxin
@ 2014-09-19  6:53                   ` Stefan Priebe
  2014-09-19 13:02                     ` Shu, Xinxin
  1 sibling, 1 reply; 34+ messages in thread
From: Stefan Priebe @ 2014-09-19  6:53 UTC (permalink / raw)
  To: Shu, Xinxin, Somnath Roy, Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

Am 19.09.2014 03:08, schrieb Shu, Xinxin:
> I also observed performance degradation on my full SSD setup ,  I can got  ~270K IOPS for 4KB random read with 0.80.4 , but with latest master , I only got ~12K IOPS

These are impressive numbers. Can you tell me how many OSDs you have and 
which SSDs you use?

Thanks,
Stefan


> Cheers,
> xinxin
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Friday, September 19, 2014 2:03 AM
> To: Alexandre DERUMIER; Haomai Wang
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Alexandre,
> What tool are you using ? I used fio rbd.
>
> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file.
> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
> Sent: Thursday, September 18, 2014 2:49 AM
> To: Haomai Wang
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
> Subject: Re: severe librbd performance degradation in Giant
>
>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd
>>> cache will make 10x performance degradation for random read?
>
> Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.
>
> firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops  (with or without rbd_cache)
>
> (and I can reach around 20000-30000 iops on giant with disabling optracker).
>
>
> rbd_cache only improve write performance for me (4k block )
>
>
>
> ----- Mail original -----
>
> De: "Haomai Wang" <haomaiwang@gmail.com>
> À: "Somnath Roy" <Somnath.Roy@sandisk.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
> Envoyé: Jeudi 18 Septembre 2014 04:27:56
> Objet: Re: severe librbd performance degradation in Giant
>
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
>
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
>> Josh/Sage,
>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:44 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Created a tracker for this.
>>
>> http://tracker.ceph.com/issues/9513
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: ceph-devel-owner@vger.kernel.org
>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:39 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Sage,
>> It's a 4K random read.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Sage Weil [mailto:sweil@redhat.com]
>> Sent: Wednesday, September 17, 2014 2:36 PM
>> To: Somnath Roy
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
>>
>> s
>>
>> On Wed, 17 Sep 2014, Somnath Roy wrote:
>>
>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>>
>>> rbd_cache_writethrough_until_flush = false
>>>
>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>>
>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>>
>>> So, loks like rbd_cache=true was the culprit.
>>>
>>> Thanks Josh !
>>>
>>> Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>>> Sent: Wednesday, September 17, 2014 2:20 PM
>>> To: Somnath Roy; ceph-devel@vger.kernel.org
>>> Subject: Re: severe librbd performance degradation in Giant
>>>
>>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>>> Hi Sage,
>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>>>
>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR).
>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>>
>>>>
>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>>
>>> For giant the default cache settings changed to:
>>>
>>> rbd cache = true
>>> rbd cache writethrough until flush = true
>>>
>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>>
>>> Josh
>>>
>>> ________________________________
>>>
>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majordomo@vger.kernel.org More majordomo
>>> info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majordomo@vger.kernel.org More majordomo
>> info at http://vger.kernel.org/majordomo-info.html
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majordomo@vger.kernel.org More majordomo
>> info at http://vger.kernel.org/majordomo-info.html
>
>
>
> --
> Best Regards,
>
> Wheat
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-18 18:02               ` Somnath Roy
  2014-09-19  1:08                 ` Shu, Xinxin
@ 2014-09-19 10:09                 ` Alexandre DERUMIER
  2014-09-19 11:30                   ` Alexandre DERUMIER
  1 sibling, 1 reply; 34+ messages in thread
From: Alexandre DERUMIER @ 2014-09-19 10:09 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, Josh Durgin, ceph-devel, Haomai Wang

>>What tool are you using ? I used fio rbd. 

fio rbd too


[global]
ioengine=rbd
clientname=admin
pool=test
rbdname=test
invalidate=0 
#rw=read
#rw=randwrite
#rw=write
rw=randread
bs=4k
direct=1
numjobs=2
group_reporting=1
size=10G

[rbd_iodepth32]
iodepth=32
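
(And to Somnath's point below about making sure fio is really loading the intended librbd when several copies are installed, a quick check; this assumes a Linux box and a fio binary dynamically linked against librbd, and the job file name is made up:)

ldd $(which fio) | grep librbd                     # shows which librbd.so the fio binary resolves to
LD_LIBRARY_PATH=/opt/ceph-giant/lib fio randread.fio   # force a specific build (path is hypothetical)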



I just noticed something strange

with rbd_cache=true , I got around 60000iops  (and I don't see any network traffic)

So maybe there is a bug in fio?
maybe this is related to:


http://tracker.ceph.com/issues/9391
"fio rbd driver rewrites same blocks"

----- Mail original ----- 

De: "Somnath Roy" <Somnath.Roy@sandisk.com> 
À: "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 20:02:49 
Objet: RE: severe librbd performance degradation in Giant 

Alexandre, 
What tool are you using ? I used fio rbd. 

Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 

Thanks & Regards 
Somnath 

-----Original Message----- 
From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
Sent: Thursday, September 18, 2014 2:49 AM 
To: Haomai Wang 
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
Subject: Re: severe librbd performance degradation in Giant 

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read? 

Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops (with or without rbd_cache) 

(and I can reach around 20000-30000 iops on giant with disabling optracker). 


rbd_cache only improve write performance for me (4k block ) 



----- Mail original ----- 

De: "Haomai Wang" <haomaiwang@gmail.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 04:27:56 
Objet: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage, 
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:44 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:39 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Sage, 
> It's a 4K random read. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Sage Weil [mailto:sweil@redhat.com] 
> Sent: Wednesday, September 17, 2014 2:36 PM 
> To: Somnath Roy 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
> 
> s 
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false 
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, loks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards 
>> Somnath 
>> 
>> -----Original Message----- 
>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
>> Sent: Wednesday, September 17, 2014 2:20 PM 
>> To: Somnath Roy; ceph-devel@vger.kernel.org 
>> Subject: Re: severe librbd performance degradation in Giant 
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage, 
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true 
>> rbd cache writethrough until flush = true 
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh 
>> 
>> ________________________________ 
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html 
>> 
>> 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 



-- 
Best Regards, 

Wheat 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 10:09                 ` Alexandre DERUMIER
@ 2014-09-19 11:30                   ` Alexandre DERUMIER
  2014-09-19 12:51                     ` Alexandre DERUMIER
  2014-09-19 15:15                     ` Sage Weil
  0 siblings, 2 replies; 34+ messages in thread
From: Alexandre DERUMIER @ 2014-09-19 11:30 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, Josh Durgin, ceph-devel, Haomai Wang

>> with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 
>>
>>So maybe they are a bug in fio ? 
>>maybe this is related to: 

Oh, sorry, this was my fault; I didn't fill the rbd with data before doing the bench
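
(For what it's worth, a minimal prefill pass, just a sketch: it reuses the pool/image names from the job file quoted below and assumes fio is built with the rbd engine:)

# sequential 4M writes over the whole image so the randread pass hits real objects
fio --name=prefill --ioengine=rbd --clientname=admin --pool=test --rbdname=test \
    --rw=write --bs=4M --direct=1 --size=10G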

Now the results are (for 1 osd)

firefly
------
 bw=37460KB/s, iops=9364

giant
-----
 bw=32741KB/s, iops=8185


So, a little regression

(the results are equal with rbd_cache=true|false)


I'll try to compare with more osds

----- Mail original ----- 

De: "Alexandre DERUMIER" <aderumier@odiso.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org, "Haomai Wang" <haomaiwang@gmail.com> 
Envoyé: Vendredi 19 Septembre 2014 12:09:41 
Objet: Re: severe librbd performance degradation in Giant 

>>What tool are you using ? I used fio rbd. 

fio rbd too 


[global] 
ioengine=rbd 
clientname=admin 
pool=test 
rbdname=test 
invalidate=0 
#rw=read 
#rw=randwrite 
#rw=write 
rw=randread 
bs=4k 
direct=1 
numjobs=2 
group_reporting=1 
size=10G 

[rbd_iodepth32] 
iodepth=32 



I just notice something strange 

with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 

So maybe they are a bug in fio ? 
maybe this is related to: 


http://tracker.ceph.com/issues/9391 
"fio rbd driver rewrites same blocks" 

----- Mail original ----- 

De: "Somnath Roy" <Somnath.Roy@sandisk.com> 
À: "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 20:02:49 
Objet: RE: severe librbd performance degradation in Giant 

Alexandre, 
What tool are you using ? I used fio rbd. 

Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 

Thanks & Regards 
Somnath 

-----Original Message----- 
From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
Sent: Thursday, September 18, 2014 2:49 AM 
To: Haomai Wang 
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
Subject: Re: severe librbd performance degradation in Giant 

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read? 

Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops (with or without rbd_cache) 

(and I can reach around 20000-30000 iops on giant with disabling optracker). 


rbd_cache only improve write performance for me (4k block ) 



----- Mail original ----- 

De: "Haomai Wang" <haomaiwang@gmail.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 04:27:56 
Objet: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage, 
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:44 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:39 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Sage, 
> It's a 4K random read. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Sage Weil [mailto:sweil@redhat.com] 
> Sent: Wednesday, September 17, 2014 2:36 PM 
> To: Somnath Roy 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
> 
> s 
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false 
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, loks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards 
>> Somnath 
>> 
>> -----Original Message----- 
>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
>> Sent: Wednesday, September 17, 2014 2:20 PM 
>> To: Somnath Roy; ceph-devel@vger.kernel.org 
>> Subject: Re: severe librbd performance degradation in Giant 
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage, 
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true 
>> rbd cache writethrough until flush = true 
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh 
>> 
>> ________________________________ 
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html 
>> 
>> 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 



-- 
Best Regards, 

Wheat 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 11:30                   ` Alexandre DERUMIER
@ 2014-09-19 12:51                     ` Alexandre DERUMIER
  2014-09-19 15:15                     ` Sage Weil
  1 sibling, 0 replies; 34+ messages in thread
From: Alexandre DERUMIER @ 2014-09-19 12:51 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, Josh Durgin, ceph-devel, Haomai Wang

Giant results with 6 OSDs
------------------------
bw=118129KB/s, iops=29532  : rbd_cache = false
bw=101771KB/s, iops=25442 : rbd_cache = true



fio config (note that numjobs is important; I'm going from 18000 iops -> 29000 iops for numjobs 1->4)
----------
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=rbd
clientname=admin
pool=test
rbdname=test
invalidate=0    # mandatory
#rw=read
#rw=randwrite
#rw=write
rw=randread
bs=4K
direct=1
numjobs=4
group_reporting=1
size=10G

[rbd_iodepth32]
iodepth=32
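
(Rough sketch of the numjobs sweep mentioned above; the job file name is made up here:)

for j in 1 2 4; do
    sed -i "s/^numjobs=.*/numjobs=$j/" giant-randread.fio    # bump numjobs in place
    fio giant-randread.fio | grep -E 'iops=|bw='             # keep the throughput summary lines
done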



ceph.conf
---------
       debug lockdep = 0/0
        debug context = 0/0
        debug crush = 0/0
        debug buffer = 0/0
        debug timer = 0/0
        debug journaler = 0/0
        debug osd = 0/0
        debug optracker = 0/0
        debug objclass = 0/0
        debug filestore = 0/0
        debug journal = 0/0
        debug ms = 0/0
        debug monc = 0/0
        debug tp = 0/0
        debug auth = 0/0
        debug finisher = 0/0
        debug heartbeatmap = 0/0
        debug perfcounter = 0/0
        debug asok = 0/0
        debug throttle = 0/0

        osd_op_num_threads_per_shard = 2
        osd_op_num_shards = 25
        filestore_fd_cache_size = 64
        filestore_fd_cache_shards = 32

         ms_nocrc = true
         cephx sign messages = false
         cephx require signatures = false

         ms_dispatch_throttle_bytes = 0
         throttler_perf_counter = false

[osd]
         osd_client_message_size_cap = 0
         osd_client_message_cap = 0
         osd_enable_op_tracker = false
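
(Quick sanity check that the tunables above are actually in effect, assuming osd.0 and the default admin socket path; some options may still need an OSD restart to apply:)

ceph daemon osd.0 config show | grep -E 'op_tracker|op_num_shards|fd_cache'   # what the running OSD is using
ceph tell osd.\* injectargs '--osd_enable_op_tracker=false'                   # push a value without editing ceph.conf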


----- Mail original ----- 

De: "Alexandre DERUMIER" <aderumier@odiso.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org, "Haomai Wang" <haomaiwang@gmail.com> 
Envoyé: Vendredi 19 Septembre 2014 13:30:24 
Objet: Re: severe librbd performance degradation in Giant 

>> with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 
>> 
>>So maybe they are a bug in fio ? 
>>maybe this is related to: 

Oh, sorry, this was my fault, I didn't fill the rbd with datas before doing the bench 

Now the results are (for 1 osd) 

firefly 
------ 
bw=37460KB/s, iops=9364 

giant 
----- 
bw=32741KB/s, iops=8185 


So, a little regression 

(the results are equals rbd_cache=true|false) 


I'll try to compare with more osds 

----- Mail original ----- 

De: "Alexandre DERUMIER" <aderumier@odiso.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org, "Haomai Wang" <haomaiwang@gmail.com> 
Envoyé: Vendredi 19 Septembre 2014 12:09:41 
Objet: Re: severe librbd performance degradation in Giant 

>>What tool are you using ? I used fio rbd. 

fio rbd too 


[global] 
ioengine=rbd 
clientname=admin 
pool=test 
rbdname=test 
invalidate=0 
#rw=read 
#rw=randwrite 
#rw=write 
rw=randread 
bs=4k 
direct=1 
numjobs=2 
group_reporting=1 
size=10G 

[rbd_iodepth32] 
iodepth=32 



I just notice something strange 

with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 

So maybe they are a bug in fio ? 
maybe this is related to: 


http://tracker.ceph.com/issues/9391 
"fio rbd driver rewrites same blocks" 

----- Mail original ----- 

De: "Somnath Roy" <Somnath.Roy@sandisk.com> 
À: "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 20:02:49 
Objet: RE: severe librbd performance degradation in Giant 

Alexandre, 
What tool are you using ? I used fio rbd. 

Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 

Thanks & Regards 
Somnath 

-----Original Message----- 
From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
Sent: Thursday, September 18, 2014 2:49 AM 
To: Haomai Wang 
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
Subject: Re: severe librbd performance degradation in Giant 

>>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>cache will make 10x performance degradation for random read? 

Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 

firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops (with or without rbd_cache) 

(and I can reach around 20000-30000 iops on giant with disabling optracker). 


rbd_cache only improve write performance for me (4k block ) 



----- Mail original ----- 

De: "Haomai Wang" <haomaiwang@gmail.com> 
À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Jeudi 18 Septembre 2014 04:27:56 
Objet: Re: severe librbd performance degradation in Giant 

According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> Josh/Sage, 
> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:44 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Created a tracker for this. 
> 
> http://tracker.ceph.com/issues/9513 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> Sent: Wednesday, September 17, 2014 2:39 PM 
> To: Sage Weil 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> Sage, 
> It's a 4K random read. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Sage Weil [mailto:sweil@redhat.com] 
> Sent: Wednesday, September 17, 2014 2:36 PM 
> To: Somnath Roy 
> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: RE: severe librbd performance degradation in Giant 
> 
> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
> 
> s 
> 
> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> 
>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>> 
>> rbd_cache_writethrough_until_flush = false 
>> 
>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>> 
>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>> 
>> So, loks like rbd_cache=true was the culprit. 
>> 
>> Thanks Josh ! 
>> 
>> Regards 
>> Somnath 
>> 
>> -----Original Message----- 
>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
>> Sent: Wednesday, September 17, 2014 2:20 PM 
>> To: Somnath Roy; ceph-devel@vger.kernel.org 
>> Subject: Re: severe librbd performance degradation in Giant 
>> 
>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>> > Hi Sage, 
>> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>> > 
>> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>> > 4. Giant RGW on top of Giant OSD is also scaling. 
>> > 
>> > 
>> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>> 
>> For giant the default cache settings changed to: 
>> 
>> rbd cache = true 
>> rbd cache writethrough until flush = true 
>> 
>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>> 
>> Josh 
>> 
>> ________________________________ 
>> 
>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> 
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html 
>> 
>> 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at http://vger.kernel.org/majordomo-info.html 



-- 
Best Regards, 

Wheat 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* RE: severe librbd performance degradation in Giant
  2014-09-19  6:53                   ` Stefan Priebe
@ 2014-09-19 13:02                     ` Shu, Xinxin
  2014-09-19 13:31                       ` Stefan Priebe - Profihost AG
  0 siblings, 1 reply; 34+ messages in thread
From: Shu, Xinxin @ 2014-09-19 13:02 UTC (permalink / raw)
  To: Stefan Priebe, Somnath Roy, Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

 12 x Intel DC S3700 200GB; every SSD hosts two OSDs.

Cheers,
xinxin

-----Original Message-----
From: Stefan Priebe [mailto:s.priebe@profihost.ag] 
Sent: Friday, September 19, 2014 2:54 PM
To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang
Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant

Am 19.09.2014 03:08, schrieb Shu, Xinxin:
> I also observed performance degradation on my full SSD setup ,  I can 
> got  ~270K IOPS for 4KB random read with 0.80.4 , but with latest 
> master , I only got ~12K IOPS

This are impressive numbers. Can you tell me how many OSDs you have and which SSDs you use?

Thanks,
Stefan


> Cheers,
> xinxin
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Friday, September 19, 2014 2:03 AM
> To: Alexandre DERUMIER; Haomai Wang
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: RE: severe librbd performance degradation in Giant
>
> Alexandre,
> What tool are you using ? I used fio rbd.
>
> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file.
> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
> Sent: Thursday, September 18, 2014 2:49 AM
> To: Haomai Wang
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
> Subject: Re: severe librbd performance degradation in Giant
>
>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>> cache will make 10x performance degradation for random read?
>
> Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.
>
> firefly : around 12000iops (with or without rbd_cache) giant : around 
> 12000iops  (with or without rbd_cache)
>
> (and I can reach around 20000-30000 iops on giant with disabling optracker).
>
>
> rbd_cache only improve write performance for me (4k block )
>
>
>
> ----- Mail original -----
>
> De: "Haomai Wang" <haomaiwang@gmail.com>
> À: "Somnath Roy" <Somnath.Roy@sandisk.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" 
> <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
> Envoyé: Jeudi 18 Septembre 2014 04:27:56
> Objet: Re: severe librbd performance degradation in Giant
>
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
>
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
>> Josh/Sage,
>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:44 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Created a tracker for this.
>>
>> http://tracker.ceph.com/issues/9513
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: ceph-devel-owner@vger.kernel.org 
>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>> Sent: Wednesday, September 17, 2014 2:39 PM
>> To: Sage Weil
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Sage,
>> It's a 4K random read.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Sage Weil [mailto:sweil@redhat.com]
>> Sent: Wednesday, September 17, 2014 2:36 PM
>> To: Somnath Roy
>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
>>
>> s
>>
>> On Wed, 17 Sep 2014, Somnath Roy wrote:
>>
>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>>
>>> rbd_cache_writethrough_until_flush = false
>>>
>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>>
>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>>
>>> So, loks like rbd_cache=true was the culprit.
>>>
>>> Thanks Josh !
>>>
>>> Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>>> Sent: Wednesday, September 17, 2014 2:20 PM
>>> To: Somnath Roy; ceph-devel@vger.kernel.org
>>> Subject: Re: severe librbd performance degradation in Giant
>>>
>>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>>> Hi Sage,
>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>>>
>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR).
>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR).
>>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>>
>>>>
>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>>
>>> For giant the default cache settings changed to:
>>>
>>> rbd cache = true
>>> rbd cache writethrough until flush = true
>>>
>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>>
>>> Josh
>>>
>>> ________________________________
>>>
>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>> info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>> info at http://vger.kernel.org/majordomo-info.html
>
>
>
> --
> Best Regards,
>
> Wheat
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 13:02                     ` Shu, Xinxin
@ 2014-09-19 13:31                       ` Stefan Priebe - Profihost AG
  2014-09-19 13:49                         ` David Moreau Simard
  2014-09-19 13:56                         ` Alexandre DERUMIER
  0 siblings, 2 replies; 34+ messages in thread
From: Stefan Priebe - Profihost AG @ 2014-09-19 13:31 UTC (permalink / raw)
  To: Shu, Xinxin, Somnath Roy, Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

On 19.09.2014 at 15:02, Shu, Xinxin wrote:
>  12 x Intel DC 3700 200GB, every SSD has two OSDs.

Crazy, I've 56 SSDs and can't go above 20 000 iops.

Regards, Stefan

> Cheers,
> xinxin
> 
> -----Original Message-----
> From: Stefan Priebe [mailto:s.priebe@profihost.ag] 
> Sent: Friday, September 19, 2014 2:54 PM
> To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
> Subject: Re: severe librbd performance degradation in Giant
> 
> Am 19.09.2014 03:08, schrieb Shu, Xinxin:
>> I also observed performance degradation on my full SSD setup ,  I can 
>> got  ~270K IOPS for 4KB random read with 0.80.4 , but with latest 
>> master , I only got ~12K IOPS
> 
> This are impressive numbers. Can you tell me how many OSDs you have and which SSDs you use?
> 
> Thanks,
> Stefan
> 
> 
>> Cheers,
>> xinxin
>>
>> -----Original Message-----
>> From: ceph-devel-owner@vger.kernel.org 
>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>> Sent: Friday, September 19, 2014 2:03 AM
>> To: Alexandre DERUMIER; Haomai Wang
>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: RE: severe librbd performance degradation in Giant
>>
>> Alexandre,
>> What tool are you using ? I used fio rbd.
>>
>> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file.
>> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
>> Sent: Thursday, September 18, 2014 2:49 AM
>> To: Haomai Wang
>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
>> Subject: Re: severe librbd performance degradation in Giant
>>
>>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>>> cache will make 10x performance degradation for random read?
>>
>> Hi, on my side, I don't see any degradation performance on read (seq or rand)  with or without.
>>
>> firefly : around 12000iops (with or without rbd_cache) giant : around 
>> 12000iops  (with or without rbd_cache)
>>
>> (and I can reach around 20000-30000 iops on giant with disabling optracker).
>>
>>
>> rbd_cache only improve write performance for me (4k block )
>>
>>
>>
>> ----- Mail original -----
>>
>> De: "Haomai Wang" <haomaiwang@gmail.com>
>> À: "Somnath Roy" <Somnath.Roy@sandisk.com>
>> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" 
>> <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
>> Envoyé: Jeudi 18 Septembre 2014 04:27:56
>> Objet: Re: severe librbd performance degradation in Giant
>>
>> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read?
>>
>> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
>>> Josh/Sage,
>>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly.
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Somnath Roy
>>> Sent: Wednesday, September 17, 2014 2:44 PM
>>> To: Sage Weil
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>> Subject: RE: severe librbd performance degradation in Giant
>>>
>>> Created a tracker for this.
>>>
>>> http://tracker.ceph.com/issues/9513
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: ceph-devel-owner@vger.kernel.org 
>>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>>> Sent: Wednesday, September 17, 2014 2:39 PM
>>> To: Sage Weil
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>> Subject: RE: severe librbd performance degradation in Giant
>>>
>>> Sage,
>>> It's a 4K random read.
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Sage Weil [mailto:sweil@redhat.com]
>>> Sent: Wednesday, September 17, 2014 2:36 PM
>>> To: Somnath Roy
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>> Subject: RE: severe librbd performance degradation in Giant
>>>
>>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail....
>>>
>>> s
>>>
>>> On Wed, 17 Sep 2014, Somnath Roy wrote:
>>>
>>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd.
>>>>
>>>> rbd_cache_writethrough_until_flush = false
>>>>
>>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ?
>>>>
>>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput !
>>>>
>>>> So, loks like rbd_cache=true was the culprit.
>>>>
>>>> Thanks Josh !
>>>>
>>>> Regards
>>>> Somnath
>>>>
>>>> -----Original Message-----
>>>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>>>> Sent: Wednesday, September 17, 2014 2:20 PM
>>>> To: Somnath Roy; ceph-devel@vger.kernel.org
>>>> Subject: Re: severe librbd performance degradation in Giant
>>>>
>>>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>>>> Hi Sage,
>>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem.
>>>>>
>>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR).
>>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR).
>>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR).
>>>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>>>
>>>>>
>>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this.
>>>>
>>>> For giant the default cache settings changed to:
>>>>
>>>> rbd cache = true
>>>> rbd cache writethrough until flush = true
>>>>
>>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ?
>>>>
>>>> Josh
>>>>
>>>> ________________________________
>>>>
>>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
>>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>>> info at http://vger.kernel.org/majordomo-info.html
>>>>
>>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>> info at http://vger.kernel.org/majordomo-info.html
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>> info at http://vger.kernel.org/majordomo-info.html
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 13:31                       ` Stefan Priebe - Profihost AG
@ 2014-09-19 13:49                         ` David Moreau Simard
  2014-09-19 13:56                         ` Alexandre DERUMIER
  1 sibling, 0 replies; 34+ messages in thread
From: David Moreau Simard @ 2014-09-19 13:49 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG, Shu, Xinxin, Somnath Roy,
	Alexandre DERUMIER, Haomai Wang
  Cc: Sage Weil, Josh Durgin, ceph-devel

Numbers vary a lot from brand to brand and from model to model.

Just within Intel, you'd be surprised at the large difference between DC
S3500 and DC S3700:
http://ark.intel.com/compare/75680,71914
-- 
David Moreau Simard


On 2014-09-19, 9:31 AM, « Stefan Priebe - Profihost AG »
<s.priebe@profihost.ag> wrote:

>Am 19.09.2014 um 15:02 schrieb Shu, Xinxin:
>>  12 x Intel DC 3700 200GB, every SSD has two OSDs.
>
>Crazy, I've 56 SSDs and can't go above 20 000 iops.
>
>Grüße Stefan
>
>> Cheers,
>> xinxin
>> 
>> -----Original Message-----
>> From: Stefan Priebe [mailto:s.priebe@profihost.ag]
>> Sent: Friday, September 19, 2014 2:54 PM
>> To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang
>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
>> Subject: Re: severe librbd performance degradation in Giant
>> 
>> Am 19.09.2014 03:08, schrieb Shu, Xinxin:
>>> I also observed performance degradation on my full SSD setup ,  I can
>>> got  ~270K IOPS for 4KB random read with 0.80.4 , but with latest
>>> master , I only got ~12K IOPS
>> 
>> This are impressive numbers. Can you tell me how many OSDs you have and
>>which SSDs you use?
>> 
>> Thanks,
>> Stefan
>> 
>> 
>>> Cheers,
>>> xinxin
>>>
>>> -----Original Message-----
>>> From: ceph-devel-owner@vger.kernel.org
>>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>>> Sent: Friday, September 19, 2014 2:03 AM
>>> To: Alexandre DERUMIER; Haomai Wang
>>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org
>>> Subject: RE: severe librbd performance degradation in Giant
>>>
>>> Alexandre,
>>> What tool are you using ? I used fio rbd.
>>>
>>> Also, I hope you have Giant package installed in the client side as
>>>well and rbd_cache =true is set on the client conf file.
>>> FYI, firefly librbd + librados and Giant cluster will work seamlessly
>>>and I had to make sure fio rbd is really loading giant librbd (if you
>>>have multiple copies around , which was in my case) for reproducing it.
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: Alexandre DERUMIER [mailto:aderumier@odiso.com]
>>> Sent: Thursday, September 18, 2014 2:49 AM
>>> To: Haomai Wang
>>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy
>>> Subject: Re: severe librbd performance degradation in Giant
>>>
>>>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd
>>>>> cache will make 10x performance degradation for random read?
>>>
>>> Hi, on my side, I don't see any degradation performance on read (seq
>>>or rand)  with or without.
>>>
>>> firefly : around 12000iops (with or without rbd_cache) giant : around
>>> 12000iops  (with or without rbd_cache)
>>>
>>> (and I can reach around 20000-30000 iops on giant with disabling
>>>optracker).
>>>
>>>
>>> rbd_cache only improve write performance for me (4k block )
>>>
>>>
>>>
>>> ----- Mail original -----
>>>
>>> De: "Haomai Wang" <haomaiwang@gmail.com>
>>> À: "Somnath Roy" <Somnath.Roy@sandisk.com>
>>> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin"
>>> <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org
>>> Envoyé: Jeudi 18 Septembre 2014 04:27:56
>>> Objet: Re: severe librbd performance degradation in Giant
>>>
>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd
>>>cache will make 10x performance degradation for random read?
>>>
>>> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com>
>>>wrote:
>>>> Josh/Sage,
>>>> I should mention that even after turning off rbd cache I am getting
>>>>~20% degradation over Firefly.
>>>>
>>>> Thanks & Regards
>>>> Somnath
>>>>
>>>> -----Original Message-----
>>>> From: Somnath Roy
>>>> Sent: Wednesday, September 17, 2014 2:44 PM
>>>> To: Sage Weil
>>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>>> Subject: RE: severe librbd performance degradation in Giant
>>>>
>>>> Created a tracker for this.
>>>>
>>>> http://tracker.ceph.com/issues/9513
>>>>
>>>> Thanks & Regards
>>>> Somnath
>>>>
>>>> -----Original Message-----
>>>> From: ceph-devel-owner@vger.kernel.org
>>>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
>>>> Sent: Wednesday, September 17, 2014 2:39 PM
>>>> To: Sage Weil
>>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>>> Subject: RE: severe librbd performance degradation in Giant
>>>>
>>>> Sage,
>>>> It's a 4K random read.
>>>>
>>>> Thanks & Regards
>>>> Somnath
>>>>
>>>> -----Original Message-----
>>>> From: Sage Weil [mailto:sweil@redhat.com]
>>>> Sent: Wednesday, September 17, 2014 2:36 PM
>>>> To: Somnath Roy
>>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org
>>>> Subject: RE: severe librbd performance degradation in Giant
>>>>
>>>> What was the io pattern? Sequential or random? For random a slowdown
>>>>makes sense (tho maybe not 10x!) but not for sequentail....
>>>>
>>>> s
>>>>
>>>> On Wed, 17 Sep 2014, Somnath Roy wrote:
>>>>
>>>>> I set the following in the client side /etc/ceph/ceph.conf where I
>>>>>am running fio rbd.
>>>>>
>>>>> rbd_cache_writethrough_until_flush = false
>>>>>
>>>>> But, no difference. BTW, I am doing Random read, not write. Still
>>>>>this setting applies ?
>>>>>
>>>>> Next, I tried to tweak the rbd_cache setting to false and I *got
>>>>>back* the old performance. Now, it is similar to firefly throughput !
>>>>>
>>>>> So, loks like rbd_cache=true was the culprit.
>>>>>
>>>>> Thanks Josh !
>>>>>
>>>>> Regards
>>>>> Somnath
>>>>>
>>>>> -----Original Message-----
>>>>> From: Josh Durgin [mailto:josh.durgin@inktank.com]
>>>>> Sent: Wednesday, September 17, 2014 2:20 PM
>>>>> To: Somnath Roy; ceph-devel@vger.kernel.org
>>>>> Subject: Re: severe librbd performance degradation in Giant
>>>>>
>>>>> On 09/17/2014 01:55 PM, Somnath Roy wrote:
>>>>>> Hi Sage,
>>>>>> We are experiencing severe librbd performance degradation in Giant
>>>>>>over firefly release. Here is the experiment we did to isolate it as
>>>>>>a librbd problem.
>>>>>>
>>>>>> 1. Single OSD is running latest Giant and client is running fio rbd
>>>>>>on top of firefly based librbd/librados. For one client it is giving
>>>>>>~11-12K iops (4K RR).
>>>>>> 2. Single OSD is running Giant and client is running fio rbd on top
>>>>>>of Giant based librbd/librados. For one client it is giving ~1.9K
>>>>>>iops (4K RR).
>>>>>> 3. Single OSD is running latest Giant and client is running Giant
>>>>>>based ceph_smaiobench on top of giant librados. For one client it is
>>>>>>giving ~11-12K iops (4K RR).
>>>>>> 4. Giant RGW on top of Giant OSD is also scaling.
>>>>>>
>>>>>>
>>>>>> So, it is obvious from the above that recent librbd has issues. I
>>>>>>will raise a tracker to track this.
>>>>>
>>>>> For giant the default cache settings changed to:
>>>>>
>>>>> rbd cache = true
>>>>> rbd cache writethrough until flush = true
>>>>>
>>>>> If fio isn't sending flushes as the test is running, the cache will
>>>>>stay in writethrough mode. Does the difference remain if you set rbd
>>>>>cache writethrough until flush = false ?
>>>>>
>>>>> Josh
>>>>>
>>>>> ________________________________
>>>>>
>>>>> PLEASE NOTE: The information contained in this electronic mail
>>>>>message is intended only for the use of the designated recipient(s)
>>>>>named above. If the reader of this message is not the intended
>>>>>recipient, you are hereby notified that you have received this
>>>>>message in error and that any review, dissemination, distribution, or
>>>>>copying of this message is strictly prohibited. If you have received
>>>>>this communication in error, please notify the sender by telephone or
>>>>>e-mail (as shown above) immediately and destroy any and all copies of
>>>>>this message in your possession (whether hard copies or
>>>>>electronically stored copies).
>>>>>
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>>>> in the body of a message to majordomo@vger.kernel.org More majordomo
>>>>> info at http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>>> in the body of a message to majordomo@vger.kernel.org More majordomo
>>>> info at http://vger.kernel.org/majordomo-info.html
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>>> in the body of a message to majordomo@vger.kernel.org More majordomo
>>>> info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Wheat
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>>in the body of a message to majordomo@vger.kernel.org More majordomo
>>>info at http://vger.kernel.org/majordomo-info.html
>>>
>> 
>> 
>--
>To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>the body of a message to majordomo@vger.kernel.org
>More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 13:31                       ` Stefan Priebe - Profihost AG
  2014-09-19 13:49                         ` David Moreau Simard
@ 2014-09-19 13:56                         ` Alexandre DERUMIER
  2014-09-19 15:28                           ` Sage Weil
  1 sibling, 1 reply; 34+ messages in thread
From: Alexandre DERUMIER @ 2014-09-19 13:56 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG
  Cc: Sage Weil, Josh Durgin, ceph-devel, Xinxin Shu, Somnath Roy, Haomai Wang

>>Crazy, I've 56 SSDs and can't go above 20 000 iops.

I just noticed that my fio benchmark is CPU-bound...

I can reach around 40000 iops. I don't have more client machines at the moment to benchmark any further.
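
(For what it's worth, an easy way to confirm it's the fio process rather
than the cluster that is saturating -- assuming fio is the only fio
instance running on the client -- is to watch it while the job runs:

	pidstat -u -p $(pgrep -d, -x fio) 1

If it sits at ~100% of a core, more fio processes or more client machines
are the only way to push the iops further.)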


----- Mail original ----- 

De: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag> 
À: "Xinxin Shu" <xinxin.shu@intel.com>, "Somnath Roy" <Somnath.Roy@sandisk.com>, "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
Envoyé: Vendredi 19 Septembre 2014 15:31:14 
Objet: Re: severe librbd performance degradation in Giant 

Am 19.09.2014 um 15:02 schrieb Shu, Xinxin: 
> 12 x Intel DC 3700 200GB, every SSD has two OSDs. 

Crazy, I've 56 SSDs and can't go above 20 000 iops. 

Grüße Stefan 

> Cheers, 
> xinxin 
> 
> -----Original Message----- 
> From: Stefan Priebe [mailto:s.priebe@profihost.ag] 
> Sent: Friday, September 19, 2014 2:54 PM 
> To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang 
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org 
> Subject: Re: severe librbd performance degradation in Giant 
> 
> Am 19.09.2014 03:08, schrieb Shu, Xinxin: 
>> I also observed performance degradation on my full SSD setup , I can 
>> got ~270K IOPS for 4KB random read with 0.80.4 , but with latest 
>> master , I only got ~12K IOPS 
> 
> This are impressive numbers. Can you tell me how many OSDs you have and which SSDs you use? 
> 
> Thanks, 
> Stefan 
> 
> 
>> Cheers, 
>> xinxin 
>> 
>> -----Original Message----- 
>> From: ceph-devel-owner@vger.kernel.org 
>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
>> Sent: Friday, September 19, 2014 2:03 AM 
>> To: Alexandre DERUMIER; Haomai Wang 
>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org 
>> Subject: RE: severe librbd performance degradation in Giant 
>> 
>> Alexandre, 
>> What tool are you using ? I used fio rbd. 
>> 
>> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
>> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 
>> 
>> Thanks & Regards 
>> Somnath 
>> 
>> -----Original Message----- 
>> From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
>> Sent: Thursday, September 18, 2014 2:49 AM 
>> To: Haomai Wang 
>> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
>> Subject: Re: severe librbd performance degradation in Giant 
>> 
>>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd 
>>>> cache will make 10x performance degradation for random read? 
>> 
>> Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 
>> 
>> firefly : around 12000iops (with or without rbd_cache) giant : around 
>> 12000iops (with or without rbd_cache) 
>> 
>> (and I can reach around 20000-30000 iops on giant with disabling optracker). 
>> 
>> 
>> rbd_cache only improve write performance for me (4k block ) 
>> 
>> 
>> 
>> ----- Mail original ----- 
>> 
>> De: "Haomai Wang" <haomaiwang@gmail.com> 
>> À: "Somnath Roy" <Somnath.Roy@sandisk.com> 
>> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" 
>> <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
>> Envoyé: Jeudi 18 Septembre 2014 04:27:56 
>> Objet: Re: severe librbd performance degradation in Giant 
>> 
>> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 
>> 
>> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
>>> Josh/Sage, 
>>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
>>> 
>>> Thanks & Regards 
>>> Somnath 
>>> 
>>> -----Original Message----- 
>>> From: Somnath Roy 
>>> Sent: Wednesday, September 17, 2014 2:44 PM 
>>> To: Sage Weil 
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
>>> Subject: RE: severe librbd performance degradation in Giant 
>>> 
>>> Created a tracker for this. 
>>> 
>>> http://tracker.ceph.com/issues/9513 
>>> 
>>> Thanks & Regards 
>>> Somnath 
>>> 
>>> -----Original Message----- 
>>> From: ceph-devel-owner@vger.kernel.org 
>>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
>>> Sent: Wednesday, September 17, 2014 2:39 PM 
>>> To: Sage Weil 
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
>>> Subject: RE: severe librbd performance degradation in Giant 
>>> 
>>> Sage, 
>>> It's a 4K random read. 
>>> 
>>> Thanks & Regards 
>>> Somnath 
>>> 
>>> -----Original Message----- 
>>> From: Sage Weil [mailto:sweil@redhat.com] 
>>> Sent: Wednesday, September 17, 2014 2:36 PM 
>>> To: Somnath Roy 
>>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
>>> Subject: RE: severe librbd performance degradation in Giant 
>>> 
>>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
>>> 
>>> s 
>>> 
>>> On Wed, 17 Sep 2014, Somnath Roy wrote: 
>>> 
>>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
>>>> 
>>>> rbd_cache_writethrough_until_flush = false 
>>>> 
>>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
>>>> 
>>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
>>>> 
>>>> So, loks like rbd_cache=true was the culprit. 
>>>> 
>>>> Thanks Josh ! 
>>>> 
>>>> Regards 
>>>> Somnath 
>>>> 
>>>> -----Original Message----- 
>>>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
>>>> Sent: Wednesday, September 17, 2014 2:20 PM 
>>>> To: Somnath Roy; ceph-devel@vger.kernel.org 
>>>> Subject: Re: severe librbd performance degradation in Giant 
>>>> 
>>>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
>>>>> Hi Sage, 
>>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
>>>>> 
>>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
>>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
>>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
>>>>> 4. Giant RGW on top of Giant OSD is also scaling. 
>>>>> 
>>>>> 
>>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
>>>> 
>>>> For giant the default cache settings changed to: 
>>>> 
>>>> rbd cache = true 
>>>> rbd cache writethrough until flush = true 
>>>> 
>>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
>>>> 
>>>> Josh 
>>>> 
>>>> ________________________________ 
>>>> 
>>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>>>> 
>>>> -- 
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>>> info at http://vger.kernel.org/majordomo-info.html 
>>>> 
>>>> 
>>> -- 
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>> info at http://vger.kernel.org/majordomo-info.html 
>>> -- 
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
>>> info at http://vger.kernel.org/majordomo-info.html 
>> 
>> 
>> 
>> -- 
>> Best Regards, 
>> 
>> Wheat 
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 11:30                   ` Alexandre DERUMIER
  2014-09-19 12:51                     ` Alexandre DERUMIER
@ 2014-09-19 15:15                     ` Sage Weil
  1 sibling, 0 replies; 34+ messages in thread
From: Sage Weil @ 2014-09-19 15:15 UTC (permalink / raw)
  To: Alexandre DERUMIER; +Cc: Somnath Roy, Josh Durgin, ceph-devel, Haomai Wang

On Fri, 19 Sep 2014, Alexandre DERUMIER wrote:
> >> with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 
> >>
> >>So maybe they are a bug in fio ? 
> >>maybe this is related to: 
> 
> Oh, sorry, this was my fault, I didn't fill the rbd with datas before doing the bench
> 
> Now the results are (for 1 osd)
> 
> firefly
> ------
>  bw=37460KB/s, iops=9364
> 
> giant
> -----
>  bw=32741KB/s, iops=8185
> 
> 
> So, a little regression
> 
> (the results are equals rbd_cache=true|false)

Do you see a difference with rados bench, or is it just librbd?
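
Something like this (flags from memory, pool name is just an example) on
both firefly and giant would tell us whether the regression is below
librbd or in librbd itself:

	rados -p rbd bench 60 write -b 4096 -t 32 --no-cleanup
	rados -p rbd bench 60 seq -t 32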

Thanks!
sage


> 
> 
> I'll try to compare with more osds
> 
> ----- Mail original ----- 
> 
> De: "Alexandre DERUMIER" <aderumier@odiso.com> 
> ?: "Somnath Roy" <Somnath.Roy@sandisk.com> 
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org, "Haomai Wang" <haomaiwang@gmail.com> 
> Envoyé: Vendredi 19 Septembre 2014 12:09:41 
> Objet: Re: severe librbd performance degradation in Giant 
> 
> >>What tool are you using ? I used fio rbd. 
> 
> fio rbd too 
> 
> 
> [global] 
> ioengine=rbd 
> clientname=admin 
> pool=test 
> rbdname=test 
> invalidate=0 
> #rw=read 
> #rw=randwrite 
> #rw=write 
> rw=randread 
> bs=4k 
> direct=1 
> numjobs=2 
> group_reporting=1 
> size=10G 
> 
> [rbd_iodepth32] 
> iodepth=32 
> 
> 
> 
> I just notice something strange 
> 
> with rbd_cache=true , I got around 60000iops (and I don't see any network traffic) 
> 
> So maybe they are a bug in fio ? 
> maybe this is related to: 
> 
> 
> http://tracker.ceph.com/issues/9391 
> "fio rbd driver rewrites same blocks" 
> 
> ----- Mail original ----- 
> 
> De: "Somnath Roy" <Somnath.Roy@sandisk.com> 
> ?: "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
> Envoyé: Jeudi 18 Septembre 2014 20:02:49 
> Objet: RE: severe librbd performance degradation in Giant 
> 
> Alexandre, 
> What tool are you using ? I used fio rbd. 
> 
> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 
> 
> Thanks & Regards 
> Somnath 
> 
> -----Original Message----- 
> From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
> Sent: Thursday, September 18, 2014 2:49 AM 
> To: Haomai Wang 
> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
> Subject: Re: severe librbd performance degradation in Giant 
> 
> >>According http://tracker.ceph.com/issues/9513, do you mean that rbd 
> >>cache will make 10x performance degradation for random read? 
> 
> Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 
> 
> firefly : around 12000iops (with or without rbd_cache) giant : around 12000iops (with or without rbd_cache) 
> 
> (and I can reach around 20000-30000 iops on giant with disabling optracker). 
> 
> 
> rbd_cache only improve write performance for me (4k block ) 
> 
> 
> 
> ----- Mail original ----- 
> 
> De: "Haomai Wang" <haomaiwang@gmail.com> 
> ?: "Somnath Roy" <Somnath.Roy@sandisk.com> 
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
> Envoyé: Jeudi 18 Septembre 2014 04:27:56 
> Objet: Re: severe librbd performance degradation in Giant 
> 
> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 
> 
> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> > Josh/Sage, 
> > I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> > 
> > Thanks & Regards 
> > Somnath 
> > 
> > -----Original Message----- 
> > From: Somnath Roy 
> > Sent: Wednesday, September 17, 2014 2:44 PM 
> > To: Sage Weil 
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> > Subject: RE: severe librbd performance degradation in Giant 
> > 
> > Created a tracker for this. 
> > 
> > http://tracker.ceph.com/issues/9513 
> > 
> > Thanks & Regards 
> > Somnath 
> > 
> > -----Original Message----- 
> > From: ceph-devel-owner@vger.kernel.org 
> > [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> > Sent: Wednesday, September 17, 2014 2:39 PM 
> > To: Sage Weil 
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> > Subject: RE: severe librbd performance degradation in Giant 
> > 
> > Sage, 
> > It's a 4K random read. 
> > 
> > Thanks & Regards 
> > Somnath 
> > 
> > -----Original Message----- 
> > From: Sage Weil [mailto:sweil@redhat.com] 
> > Sent: Wednesday, September 17, 2014 2:36 PM 
> > To: Somnath Roy 
> > Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> > Subject: RE: severe librbd performance degradation in Giant 
> > 
> > What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
> > 
> > s 
> > 
> > On Wed, 17 Sep 2014, Somnath Roy wrote: 
> > 
> >> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
> >> 
> >> rbd_cache_writethrough_until_flush = false 
> >> 
> >> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
> >> 
> >> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
> >> 
> >> So, loks like rbd_cache=true was the culprit. 
> >> 
> >> Thanks Josh ! 
> >> 
> >> Regards 
> >> Somnath 
> >> 
> >> -----Original Message----- 
> >> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
> >> Sent: Wednesday, September 17, 2014 2:20 PM 
> >> To: Somnath Roy; ceph-devel@vger.kernel.org 
> >> Subject: Re: severe librbd performance degradation in Giant 
> >> 
> >> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
> >> > Hi Sage, 
> >> > We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
> >> > 
> >> > 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
> >> > 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
> >> > 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
> >> > 4. Giant RGW on top of Giant OSD is also scaling. 
> >> > 
> >> > 
> >> > So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
> >> 
> >> For giant the default cache settings changed to: 
> >> 
> >> rbd cache = true 
> >> rbd cache writethrough until flush = true 
> >> 
> >> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
> >> 
> >> Josh 
> >> 
> >> ________________________________ 
> >> 
> >> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
> >> 
> >> -- 
> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> >> in the body of a message to majordomo@vger.kernel.org More majordomo 
> >> info at http://vger.kernel.org/majordomo-info.html 
> >> 
> >> 
> > -- 
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at http://vger.kernel.org/majordomo-info.html 
> > -- 
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> > in the body of a message to majordomo@vger.kernel.org More majordomo 
> > info at http://vger.kernel.org/majordomo-info.html 
> 
> 
> 
> -- 
> Best Regards, 
> 
> Wheat 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
> -- 
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> the body of a message to majordomo@vger.kernel.org 
> More majordomo info at http://vger.kernel.org/majordomo-info.html 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: severe librbd performance degradation in Giant
  2014-09-19 13:56                         ` Alexandre DERUMIER
@ 2014-09-19 15:28                           ` Sage Weil
  0 siblings, 0 replies; 34+ messages in thread
From: Sage Weil @ 2014-09-19 15:28 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: Stefan Priebe - Profihost AG, Josh Durgin, ceph-devel,
	Xinxin Shu, Somnath Roy, Haomai Wang

On Fri, 19 Sep 2014, Alexandre DERUMIER wrote:
> >>Crazy, I've 56 SSDs and can't go above 20 000 iops.
> 
> I just notice than my fio benchmark is cpu bound...
> 
> I can reach around 40000iops. Don't have more client machines for the moment to bench

A quick aside on the fio testing: Mark noticed a few weeks back that the 
fio rbd driver isn't doing quite the right thing when you turn up the number 
of threads: each one issues its own IOs, but they all touch the same blocks 
in the image (or something like that).  See

	http://tracker.ceph.com/issues/9391

It would be great to get this fixed in fio...
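
In the meantime, a crude way around it -- untested sketch, the image names
are made up -- is to give every fio job its own image instead of raising
numjobs on a single one:

	[global]
	ioengine=rbd
	clientname=admin
	pool=test
	rw=randread
	bs=4k
	direct=1
	iodepth=32

	[img1]
	rbdname=test1

	[img2]
	rbdname=test2

so the jobs stop rereading the same blocks of the same image.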

sage


> 
> 
> ----- Mail original ----- 
> 
> De: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag> 
> ?: "Xinxin Shu" <xinxin.shu@intel.com>, "Somnath Roy" <Somnath.Roy@sandisk.com>, "Alexandre DERUMIER" <aderumier@odiso.com>, "Haomai Wang" <haomaiwang@gmail.com> 
> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
> Envoyé: Vendredi 19 Septembre 2014 15:31:14 
> Objet: Re: severe librbd performance degradation in Giant 
> 
> Am 19.09.2014 um 15:02 schrieb Shu, Xinxin: 
> > 12 x Intel DC 3700 200GB, every SSD has two OSDs. 
> 
> Crazy, I've 56 SSDs and can't go above 20 000 iops. 
> 
> Grüße Stefan 
> 
> > Cheers, 
> > xinxin 
> > 
> > -----Original Message----- 
> > From: Stefan Priebe [mailto:s.priebe@profihost.ag] 
> > Sent: Friday, September 19, 2014 2:54 PM 
> > To: Shu, Xinxin; Somnath Roy; Alexandre DERUMIER; Haomai Wang 
> > Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org 
> > Subject: Re: severe librbd performance degradation in Giant 
> > 
> > Am 19.09.2014 03:08, schrieb Shu, Xinxin: 
> >> I also observed performance degradation on my full SSD setup , I can 
> >> got ~270K IOPS for 4KB random read with 0.80.4 , but with latest 
> >> master , I only got ~12K IOPS 
> > 
> > This are impressive numbers. Can you tell me how many OSDs you have and which SSDs you use? 
> > 
> > Thanks, 
> > Stefan 
> > 
> > 
> >> Cheers, 
> >> xinxin 
> >> 
> >> -----Original Message----- 
> >> From: ceph-devel-owner@vger.kernel.org 
> >> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> >> Sent: Friday, September 19, 2014 2:03 AM 
> >> To: Alexandre DERUMIER; Haomai Wang 
> >> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org 
> >> Subject: RE: severe librbd performance degradation in Giant 
> >> 
> >> Alexandre, 
> >> What tool are you using ? I used fio rbd. 
> >> 
> >> Also, I hope you have Giant package installed in the client side as well and rbd_cache =true is set on the client conf file. 
> >> FYI, firefly librbd + librados and Giant cluster will work seamlessly and I had to make sure fio rbd is really loading giant librbd (if you have multiple copies around , which was in my case) for reproducing it. 
> >> 
> >> Thanks & Regards 
> >> Somnath 
> >> 
> >> -----Original Message----- 
> >> From: Alexandre DERUMIER [mailto:aderumier@odiso.com] 
> >> Sent: Thursday, September 18, 2014 2:49 AM 
> >> To: Haomai Wang 
> >> Cc: Sage Weil; Josh Durgin; ceph-devel@vger.kernel.org; Somnath Roy 
> >> Subject: Re: severe librbd performance degradation in Giant 
> >> 
> >>>> According http://tracker.ceph.com/issues/9513, do you mean that rbd 
> >>>> cache will make 10x performance degradation for random read? 
> >> 
> >> Hi, on my side, I don't see any degradation performance on read (seq or rand) with or without. 
> >> 
> >> firefly : around 12000iops (with or without rbd_cache) giant : around 
> >> 12000iops (with or without rbd_cache) 
> >> 
> >> (and I can reach around 20000-30000 iops on giant with disabling optracker). 
> >> 
> >> 
> >> rbd_cache only improve write performance for me (4k block ) 
> >> 
> >> 
> >> 
> >> ----- Mail original ----- 
> >> 
> >> De: "Haomai Wang" <haomaiwang@gmail.com> 
> >> ?: "Somnath Roy" <Somnath.Roy@sandisk.com> 
> >> Cc: "Sage Weil" <sweil@redhat.com>, "Josh Durgin" 
> >> <josh.durgin@inktank.com>, ceph-devel@vger.kernel.org 
> >> Envoyé: Jeudi 18 Septembre 2014 04:27:56 
> >> Objet: Re: severe librbd performance degradation in Giant 
> >> 
> >> According http://tracker.ceph.com/issues/9513, do you mean that rbd cache will make 10x performance degradation for random read? 
> >> 
> >> On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote: 
> >>> Josh/Sage, 
> >>> I should mention that even after turning off rbd cache I am getting ~20% degradation over Firefly. 
> >>> 
> >>> Thanks & Regards 
> >>> Somnath 
> >>> 
> >>> -----Original Message----- 
> >>> From: Somnath Roy 
> >>> Sent: Wednesday, September 17, 2014 2:44 PM 
> >>> To: Sage Weil 
> >>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> >>> Subject: RE: severe librbd performance degradation in Giant 
> >>> 
> >>> Created a tracker for this. 
> >>> 
> >>> http://tracker.ceph.com/issues/9513 
> >>> 
> >>> Thanks & Regards 
> >>> Somnath 
> >>> 
> >>> -----Original Message----- 
> >>> From: ceph-devel-owner@vger.kernel.org 
> >>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy 
> >>> Sent: Wednesday, September 17, 2014 2:39 PM 
> >>> To: Sage Weil 
> >>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> >>> Subject: RE: severe librbd performance degradation in Giant 
> >>> 
> >>> Sage, 
> >>> It's a 4K random read. 
> >>> 
> >>> Thanks & Regards 
> >>> Somnath 
> >>> 
> >>> -----Original Message----- 
> >>> From: Sage Weil [mailto:sweil@redhat.com] 
> >>> Sent: Wednesday, September 17, 2014 2:36 PM 
> >>> To: Somnath Roy 
> >>> Cc: Josh Durgin; ceph-devel@vger.kernel.org 
> >>> Subject: RE: severe librbd performance degradation in Giant 
> >>> 
> >>> What was the io pattern? Sequential or random? For random a slowdown makes sense (tho maybe not 10x!) but not for sequentail.... 
> >>> 
> >>> s 
> >>> 
> >>> On Wed, 17 Sep 2014, Somnath Roy wrote: 
> >>> 
> >>>> I set the following in the client side /etc/ceph/ceph.conf where I am running fio rbd. 
> >>>> 
> >>>> rbd_cache_writethrough_until_flush = false 
> >>>> 
> >>>> But, no difference. BTW, I am doing Random read, not write. Still this setting applies ? 
> >>>> 
> >>>> Next, I tried to tweak the rbd_cache setting to false and I *got back* the old performance. Now, it is similar to firefly throughput ! 
> >>>> 
> >>>> So, loks like rbd_cache=true was the culprit. 
> >>>> 
> >>>> Thanks Josh ! 
> >>>> 
> >>>> Regards 
> >>>> Somnath 
> >>>> 
> >>>> -----Original Message----- 
> >>>> From: Josh Durgin [mailto:josh.durgin@inktank.com] 
> >>>> Sent: Wednesday, September 17, 2014 2:20 PM 
> >>>> To: Somnath Roy; ceph-devel@vger.kernel.org 
> >>>> Subject: Re: severe librbd performance degradation in Giant 
> >>>> 
> >>>> On 09/17/2014 01:55 PM, Somnath Roy wrote: 
> >>>>> Hi Sage, 
> >>>>> We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 
> >>>>> 
> >>>>> 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giving ~11-12K iops (4K RR). 
> >>>>> 2. Single OSD is running Giant and client is running fio rbd on top of Giant based librbd/librados. For one client it is giving ~1.9K iops (4K RR). 
> >>>>> 3. Single OSD is running latest Giant and client is running Giant based ceph_smaiobench on top of giant librados. For one client it is giving ~11-12K iops (4K RR). 
> >>>>> 4. Giant RGW on top of Giant OSD is also scaling. 
> >>>>> 
> >>>>> 
> >>>>> So, it is obvious from the above that recent librbd has issues. I will raise a tracker to track this. 
> >>>> 
> >>>> For giant the default cache settings changed to: 
> >>>> 
> >>>> rbd cache = true 
> >>>> rbd cache writethrough until flush = true 
> >>>> 
> >>>> If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false ? 
> >>>> 
> >>>> Josh 
> >>>> 
> >>>> ________________________________ 
> >>>> 
> >>>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
> >>>> 
> >>>> -- 
> >>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> >>>> in the body of a message to majordomo@vger.kernel.org More majordomo 
> >>>> info at http://vger.kernel.org/majordomo-info.html 
> >>>> 
> >>>> 
> >>> -- 
> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> >>> in the body of a message to majordomo@vger.kernel.org More majordomo 
> >>> info at http://vger.kernel.org/majordomo-info.html 
> >>> -- 
> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> >>> in the body of a message to majordomo@vger.kernel.org More majordomo 
> >>> info at http://vger.kernel.org/majordomo-info.html 
> >> 
> >> 
> >> 
> >> -- 
> >> Best Regards, 
> >> 
> >> Wheat 
> >> -- 
> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html 
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2014-09-19 15:28 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-17 20:55 severe librbd performance degradation in Giant Somnath Roy
2014-09-17 20:59 ` Mark Nelson
2014-09-17 21:01   ` Somnath Roy
     [not found] ` <BA7B69AA-4906-4836-A2F6-5A6EE756A548@profihost.ag>
2014-09-17 21:08   ` Somnath Roy
2014-09-17 21:20 ` Josh Durgin
2014-09-17 21:29   ` Somnath Roy
2014-09-17 21:34     ` Mark Nelson
2014-09-17 21:37       ` Somnath Roy
2014-09-17 21:40       ` Josh Durgin
2014-09-17 21:35     ` Sage Weil
2014-09-17 21:38       ` Somnath Roy
2014-09-17 21:44         ` Somnath Roy
2014-09-17 23:44         ` Somnath Roy
2014-09-18  2:27           ` Haomai Wang
2014-09-18  3:03             ` Somnath Roy
2014-09-18  3:52               ` Sage Weil
2014-09-18  6:24                 ` Somnath Roy
2014-09-18  8:45                   ` Chen, Xiaoxi
2014-09-18 14:11                   ` Sage Weil
2014-09-18  9:49             ` Alexandre DERUMIER
2014-09-18 12:38               ` Mark Nelson
2014-09-18 18:02               ` Somnath Roy
2014-09-19  1:08                 ` Shu, Xinxin
2014-09-19  1:10                   ` Shu, Xinxin
2014-09-19  6:53                   ` Stefan Priebe
2014-09-19 13:02                     ` Shu, Xinxin
2014-09-19 13:31                       ` Stefan Priebe - Profihost AG
2014-09-19 13:49                         ` David Moreau Simard
2014-09-19 13:56                         ` Alexandre DERUMIER
2014-09-19 15:28                           ` Sage Weil
2014-09-19 10:09                 ` Alexandre DERUMIER
2014-09-19 11:30                   ` Alexandre DERUMIER
2014-09-19 12:51                     ` Alexandre DERUMIER
2014-09-19 15:15                     ` Sage Weil
