* Infernalis on NVMe cephx authentication impact
From: Moreno, Orlando @ 2016-02-03  0:18 UTC (permalink / raw)
  To: ceph-devel; +Cc: Blinick, Stephen L

Following up on the first pass of all-flash performance numbers for Infernalis, we ran the same tests with cephx authentication on. Previously, for both Hammer and Infernalis, the performance reported was with authentication turned off. Since the authentication code switched to libnss between Hammer and Infernalis, we wanted to quantify the impact of authentication on a high-performance Ceph cluster.
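For reference, switching auth on and off amounts to flipping the auth options in ceph.conf. A minimal sketch of the two configurations (the exact per-daemon overrides from our runs aren't reproduced here):

    [global]
    # baseline runs: authentication off
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    # cephx runs: authentication on
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx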

With authentication on, random 4K read performance drops by about 11% (for example, at queue depth 32 reads fall from 1072K to 938K IOPS, a 12.5% drop) and clearly hits a wall in max IOPS, plateauing near 940K. Random writes see a greater impact, with peak performance reaching 176K IOPS compared to 200K+ IOPS with authentication off. The mixed workload is affected as well, maxing out at 408K IOPS. Considering the high throughput of the cluster, these numbers seem reasonable, and the overhead of authentication is far smaller than with Hammer's implementation, where we saw at least a 30% hit on reads/writes.

Below is a comparison table of authentication on vs. off. More detailed data is available for anyone who is interested.

                                Infernalis              Infernalis w/ cephx
Workload          IODepth     IOPS   Avg Lat (ms)       IOPS   Avg Lat (ms)
===========================================================================
100% Rand Read        4     383747       0.619167     347850       0.683139
                      8     645551       0.7345       581384       0.815726
                     16     955990       0.994833     820765       1.153785
                     32    1072001       1.774667     937832       2.023074
                     64    1028112       3.578667     942742       4.036471
                     96    1070847       5.402833     941746       6.06505
                    128    1088625       7.085           N/A            N/A

100% Rand Write       4     131447       1.820833     111689       2.135115
                      8     175180       2.8385       138931       3.43494
                     16     198219       5.129333     163417       5.844878
                     32     191775      10.0895       174522      10.956354
                     64     185602      21.089167     176733      21.669579
                     96     204202      30.601833     163963      35.046025
                    128     233095      37.532667     150943      50.75762

70% Rand Read         4     234445       1.015333     210798       1.129461
                      8     337808       1.417667     309584       1.538977
                     16     394676       2.425333     360150       2.649063
                     32     445295       4.3465       391638       4.879287
                     64     478867       8.297833     408463       9.364712
                     96     513590      11.9885       407493      14.084794
                    128     532439      15.970333     406757      18.814887
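For anyone looking to reproduce the shape of these runs, a representative fio job for the 4K random-read case would look something like the sketch below (illustrative only: the rbd engine choice, pool/image names, and runtime are placeholders, not our exact harness):

    [global]
    ioengine=rbd          ; requires fio built with librbd support
    clientname=admin
    pool=rbd              ; placeholder pool name
    rbdname=fio-test      ; placeholder image name
    direct=1
    bs=4k
    time_based=1
    runtime=300

    [rand-read-qd32]
    rw=randread
    iodepth=32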


Thanks,

Orlando


* Re: Infernalis on NVMe cephx authentication impact
From: Mark Nelson @ 2016-02-03  0:52 UTC (permalink / raw)
  To: Moreno, Orlando, ceph-devel; +Cc: Blinick, Stephen L

Hi Orlando,

Hrm, looks like https://github.com/ceph/ceph/pull/3896 didn't make it 
into Hammer at release. :(  That might also account for a large part of 
the performance disparity.  Coincidentally Josh Durgin made a branch 
with the PR backported to hammer a couple of weeks ago here:

https://github.com/ceph/ceph/commits/wip-auth-hammer
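For anyone who wants to try it, a quick sketch for building from that branch (the build steps are the usual Hammer-era autotools flow, not something specific to this branch):

    git clone https://github.com/ceph/ceph.git
    cd ceph
    git checkout wip-auth-hammer
    ./autogen.sh && ./configure && make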

Mark



* Re: Infernalis on NVMe cephx authentication impact
From: Josh Durgin @ 2016-02-03  1:53 UTC (permalink / raw)
  To: Mark Nelson, Moreno, Orlando, ceph-devel; +Cc: Blinick, Stephen L

On 02/02/2016 04:52 PM, Mark Nelson wrote:
> Hi Orlando,
>
> Hrm, looks like https://github.com/ceph/ceph/pull/3896 didn't make it
> into Hammer at release. :(  That might also account for a large part of
> the performance disparity.  Coincidentally Josh Durgin made a branch
> with the PR backported to hammer a couple of weeks ago here:
>
> https://github.com/ceph/ceph/commits/wip-auth-hammer

Turns out it fixes a bug [0] in addition to being faster, so expect it
in a future hammer release.

Has anyone measured the overhead of message signing (the 'cephx sign 
messages' option)? It's on by default with cephx, but can be disabled
separately. What auth settings were these tests using?

It may be worth requiring message signatures by default now that it's
been supported in the kernel client since 3.19.
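For reference, the knobs in question in ceph.conf (a sketch showing the defaults described above, not a statement of what Orlando's runs used):

    [global]
    auth_client_required = cephx      # cephx authentication on
    cephx_sign_messages = true        # per-message signing; on by default with cephx
    cephx_require_signatures = false  # set true to refuse unsigned peers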

Josh

[0] http://tracker.ceph.com/issues/14620


* RE: Infernalis on NVMe cephx authentication impact
From: Somnath Roy @ 2016-02-03  2:30 UTC (permalink / raw)
  To: Josh Durgin, Mark Nelson, Moreno, Orlando, ceph-devel; +Cc: Blinick, Stephen L

Josh,
At least in my past experience, message signing was taking a lot of CPU, so we disabled it for all of our performance runs.
I never tested Infernalis/master with auth/signatures enabled to see the impact, though.
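If anyone wants to see where those cycles go, one quick way (a sketch; pick the right OSD pid on your box) is to profile an OSD while a signed workload runs and look for crypto hotspots:

    # live view of CPU hotspots in one ceph-osd process
    perf top -p $(pidof ceph-osd | awk '{print $1}')

    # or capture 30s with call graphs to compare signing on vs. off
    perf record -g -p $(pidof ceph-osd | awk '{print $1}') -- sleep 30
    perf report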
Thanks, Orlando, for sharing this.

Thanks & Regards
Somnath


