* Ceph Hammer OSD Shard Tuning Test Results
@ 2015-02-26  4:44 Mark Nelson
       [not found] ` <934614223.2346576.1425017430135.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Nelson @ 2015-02-26  4:44 UTC (permalink / raw)
  To: ceph-devel, ceph-users-idqoXFIVOFJgJs9I8MT0rw


Hi Everyone,

In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison 
thread, Alexandre DERUMIER wondered if changing the default shard and 
threads per shard OSD settings might have a positive effect on 
performance in our tests.  I went back and used one of the PCIe SSDs 
from our previous tests to experiment with a recent master pull.  I 
wanted to know how performance was affected by changing these parameters 
and also to validate that the default settings still appear to be correct.

I plan to conduct more tests (potentially across multiple SATA SSDs in 
the same box), but these initial results seem to show that the default 
settings that were chosen are quite reasonable.
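
For reference, the settings in question are the OSD op worker sharding options; a minimal ceph.conf sketch, assuming the Hammer-era option names and defaults:

[osd]
# number of op work-queue shards per OSD (default: 5)
osd op num shards = 5
# worker threads servicing each shard (default: 2)
osd op num threads per shard = 2

The total number of op worker threads per OSD is shards x threads-per-shard, so the defaults give 10.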

Mark

[-- Attachment #2: Ceph_Hammer_OSD_Shard_Tuning_PCIe.pdf --]
[-- Type: application/pdf, Size: 201925 bytes --]



* Re: Ceph Hammer OSD Shard Tuning Test Results
       [not found] ` <934614223.2346576.1425017430135.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
@ 2015-02-27  6:10   ` Alexandre DERUMIER
       [not found]     ` <1334533598.2513656.1425220674977.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Alexandre DERUMIER @ 2015-02-27  6:10 UTC (permalink / raw)
  To: Mark Nelson; +Cc: ceph-devel, ceph-users

Thanks Mark for the results,
the default values do seem quite reasonable indeed.


I also wonder whether CPU frequency has an impact on latency or not.
I'm going to benchmark on dual-Xeon 10-core 3.1GHz nodes in the coming weeks;
I'll try to replay your benchmark to compare.





* Re: Ceph Hammer OSD Shard Tuning Test Results
       [not found]     ` <1334533598.2513656.1425220674977.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
@ 2015-03-01 14:38       ` Alexandre DERUMIER
       [not found]         ` <770882624.2513660.1425220699487.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Alexandre DERUMIER @ 2015-03-01 14:38 UTC (permalink / raw)
  To: Mark Nelson; +Cc: ceph-devel, ceph-users

Hi Mark,

I found a previous benchmark from Vu Pham (it was about simplemessenger vs xiomessenger):

http://www.spinics.net/lists/ceph-devel/msg22414.html

With 1 OSD, he was able to reach ~105k iops with simplemessenger:

. ~105k iops (4K random read, 20 cores used, numjobs=8, iodepth=32)

This was with more powerful nodes, but the difference still seems quite large.
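
For anyone wanting to reproduce that kind of run, a minimal fio job sketch using the rbd ioengine with those parameters (the pool and image names below are just placeholders):

[4k-randread]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randread
bs=4k
iodepth=32
numjobs=8
direct=1
time_based
runtime=60

Run it with a plain "fio <jobfile>" against a pre-created RBD image and compare the aggregate IOPS fio reports.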





* Re: Ceph Hammer OSD Shard Tuning Test Results
       [not found]         ` <770882624.2513660.1425220699487.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
@ 2015-03-01 21:49           ` Kevin Walker
       [not found]             ` <268261243.2571232.1425278514343.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
  2015-03-02 14:39           ` Mark Nelson
  1 sibling, 1 reply; 8+ messages in thread
From: Kevin Walker @ 2015-03-01 21:49 UTC (permalink / raw)
  To: Alexandre DERUMIER; +Cc: ceph-devel, ceph-users

Can I ask what xio and simple messenger are, and what the differences between them are?

Kind regards

Kevin Walker
+968 9765 1742



* Re: Ceph Hammer OSD Shard Tuning Test Results
       [not found]             ` <268261243.2571232.1425278514343.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
@ 2015-03-02  6:42               ` Alexandre DERUMIER
  0 siblings, 0 replies; 8+ messages in thread
From: Alexandre DERUMIER @ 2015-03-02  6:42 UTC (permalink / raw)
  To: Kevin Walker; +Cc: ceph-devel, ceph-users

>>Can I ask what xio and simple messenger are, and what the differences between them are?

Simple messenger is the classic messenger protocol used since the beginning of Ceph.
Xio messenger is for RDMA (InfiniBand, or RoCE over Ethernet).
There is also a new async messenger.

They should help to reduce latencies (and also CPU usage for RDMA, because you don't have the TCP overhead).
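
For completeness, the messenger implementation is selected per daemon in ceph.conf; a minimal sketch, assuming the ms_type option present in recent builds (xio additionally requires Ceph to be built with Accelio/XioMessenger support):

[global]
# simple - classic TCP messenger (the default)
# async  - newer asynchronous TCP messenger (experimental)
# xio    - RDMA via Accelio (InfiniBand / RoCE), experimental
ms type = simple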




* Re: Ceph Hammer OSD Shard Tuning Test Results
       [not found]         ` <770882624.2513660.1425220699487.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
  2015-03-01 21:49           ` Kevin Walker
@ 2015-03-02 14:39           ` Mark Nelson
       [not found]             ` <1210381094.2666418.1425311646322.JavaMail.zimbra@oxygem.tv>
  1 sibling, 1 reply; 8+ messages in thread
From: Mark Nelson @ 2015-03-02 14:39 UTC (permalink / raw)
  To: Alexandre DERUMIER; +Cc: ceph-devel, ceph-users

Hi Alex,

I see I even responded in the same thread!  This would be a good thing 
to bring up in the meeting on Wednesday.  Those are far faster single 
OSD results than I've been able to muster with simplemessenger.  I 
wonder how much effect flow-control and header/data crc had.  He did 
have quite a bit more CPU (Intel specs say 14 cores @ 2.6GHz, 28 if you 
count hyperthreading).  Depending on whether there were 1 or 2 CPUs in 
that node, that might be around 3x the CPU power I have here.

Some other thoughts:  Were the simplemessenger tests on IPoIB or native? 
  How big was the RBD volume that was created (could some data be 
locally cached)?  Did network data transfer statistics match the 
benchmark result numbers?

I also did some tests on fdcache, though just glancing at the results it 
doesn't look like tweaking those parameters had much effect.

Mark



* Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
       [not found]             ` <1210381094.2666418.1425311646322.JavaMail.zimbra@oxygem.tv>
@ 2015-03-02 15:54               ` Alexandre DERUMIER
  2015-03-02 19:56                 ` Vu Pham
  0 siblings, 1 reply; 8+ messages in thread
From: Alexandre DERUMIER @ 2015-03-02 15:54 UTC (permalink / raw)
  To: Mark Nelson; +Cc: ceph-devel, ceph-users, Vu Pham

>>This would be a good thing to bring up in the meeting on Wednesday.
Yes!

>>I wonder how much effect flow-control and header/data crc had.
Yes. I know that Somnath also disabled crc for his benchmarks.

>>Were the simplemessenger tests on IPoIB or native?

I think it was native, as Vu Pham's benchmark was done on Mellanox SX1012 switches (Ethernet),
and xio messenger was on RoCE (RDMA over Ethernet).

>>How big was the RBD volume that was created (could some data be
>>locally cached)? Did network data transfer statistics match the
>>benchmark result numbers?

I've cc'd Vu Pham on this mail; maybe he'll be able to give us the answers.


Note that I'll have the same Mellanox switches (SX1012) for my production cluster in a few weeks,
so I'll be able to reproduce the benchmark (with 2x 10-core 3.1GHz nodes and clients).


* Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
  2015-03-02 15:54               ` [ceph-users] " Alexandre DERUMIER
@ 2015-03-02 19:56                 ` Vu Pham
  0 siblings, 0 replies; 8+ messages in thread
From: Vu Pham @ 2015-03-02 19:56 UTC (permalink / raw)
  To: Alexandre DERUMIER, Mark Nelson; +Cc: ceph-devel, ceph-users


>>>This would be a good thing to bring up in the meeting on Wednesday.
>Yes!
>

Yes, we can discuss details on Wed's call.


>
>>>I wonder how much effect flow-control and header/data crc had.
>Yes. I know that Somnath also disabled crc for his benchmarks.
>

I disabled Ceph's header/data crc for both simplemessenger & xio, but
didn't run with header/data crc enabled to see the differences.
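
For reference, a minimal ceph.conf sketch of how the message crc is typically disabled; the exact option names have changed between releases, so treat these as assumptions to verify against your build:

[global]
# older builds: single switch that drops message crc entirely
ms nocrc = true
# newer builds: header and data crc are controlled separately
ms crc header = false
ms crc data = false

Depending on the release, one or the other form applies.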


>
>>>Were the simplemessenger tests on IPoIB or native?
>
>I think it was native, as Vu Pham's benchmark was done on Mellanox
>SX1012 switches (Ethernet),
>and xio messenger was on RoCE (RDMA over Ethernet).
>
Yes, it's native for simplemessenger and RoCE for xio messenger.


>
>>>How big was the RBD volume that was created (could some data be
>>>locally cached)? Did network data transfer statistics match the
>>>benchmark result numbers?
>
Single OSD on a 4GB ramdisk, journal size is 256MB.

The RBD volume is only 128MB; however, I ran the fio_rbd client with direct=1 to
bypass the local buffer cache.
Yes, the network data transfer statistics match the benchmark result
numbers.
I used "dstat -N <ethX>" to monitor the network data statistics.

I also turned all cores up to full speed and applied one tuning parameter
for the Mellanox ConnectX-3 HCA mlx4_core driver
(options mlx4_core log_num_mgm_entry_size=-7).

$ cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
2601000

# pin the minimum frequency of every core to the maximum (2.601 GHz)
$ for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo 2601000 > ${c}/cpufreq/scaling_min_freq; done
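
As a rough cross-check of the "network statistics match the benchmark numbers" point: at ~105k 4KiB random reads per second the read payload alone is about 105,000 x 4096 B, roughly 430 MB/s (~3.4 Gbit/s), so dstat on the client-facing interface should show approximately that rate plus messaging protocol overhead, which is well within a single 40GbE port on the SX1012.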





Thread overview: 8+ messages
2015-02-26  4:44 Ceph Hammer OSD Shard Tuning Test Results Mark Nelson
     [not found] ` <934614223.2346576.1425017430135.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
2015-02-27  6:10   ` Alexandre DERUMIER
     [not found]     ` <1334533598.2513656.1425220674977.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
2015-03-01 14:38       ` Alexandre DERUMIER
     [not found]         ` <770882624.2513660.1425220699487.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
2015-03-01 21:49           ` Kevin Walker
     [not found]             ` <268261243.2571232.1425278514343.JavaMail.zimbra-M8QNeUgB6UTyG1zEObXtfA@public.gmane.org>
2015-03-02  6:42               ` Alexandre DERUMIER
2015-03-02 14:39           ` Mark Nelson
     [not found]             ` <1210381094.2666418.1425311646322.JavaMail.zimbra@oxygem.tv>
2015-03-02 15:54               ` [ceph-users] " Alexandre DERUMIER
2015-03-02 19:56                 ` Vu Pham
