* Ceph FileStore optimization with Intel drive and scrubbing optimization
@ 2017-06-29 16:20 LIU, Fei
2017-06-29 20:14 ` Jianjian Huo
2017-06-30 1:06 ` Moreno, Orlando
0 siblings, 2 replies; 3+ messages in thread
From: LIU, Fei @ 2017-06-29 16:20 UTC (permalink / raw)
To: Ceph Development
Cc: Sage Weil, Mark Nelson, 蔡进(柏羽), Jianjian Huo
Hi All,
Below is a link to the slides we presented today at the Ceph weekly performance meeting regarding Ceph FileStore optimization with the Intel Optane drive and scrubbing optimization.
https://www.slideshare.net/jupiturliu/ceph-filestore-with-optane-drive-scrub-optimization
Please feel free to let us know if you have any questions.
Regards,
James
Alibaba Storage Team
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: Ceph FileStore optimization with Intel drive and scrubbing optimization
2017-06-29 16:20 Ceph FileStore optimization with Intel drive and scrubbing optimization LIU, Fei
@ 2017-06-29 20:14 ` Jianjian Huo
2017-06-30 1:06 ` Moreno, Orlando
1 sibling, 0 replies; 3+ messages in thread
From: Jianjian Huo @ 2017-06-29 20:14 UTC (permalink / raw)
To: LIU, Fei, Ceph Development
Cc: Sage Weil, Mark Nelson, 蔡进(柏羽)
Hi Sage and Mark,
We completed another test case: a 24-hour FileStore run with the journal stored on the Optane drive. Under our write-intensive workloads, IOPS increased by 59% and average latency dropped by 37%, but tail latency (99.99th percentile) went up 65%. This new result is more meaningful for our production consideration.
Do you have any thoughts on why tail latency went up?
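(Not part of the original mail, just an illustrative sketch: the average and 99.99th-percentile "tail" latency compared above can be computed from raw per-op latency samples with the nearest-rank percentile method. The function name and sample data below are hypothetical.)

```python
def latency_stats(samples_ms):
    """Return (average, 99.99th-percentile) latency from raw samples in ms."""
    ordered = sorted(samples_ms)
    avg = sum(ordered) / len(ordered)
    # Nearest-rank method: the ceil(0.9999 * N)-th smallest value (1-indexed).
    # Integer ceiling division avoids floating-point rounding at the boundary.
    rank = max(1, -(-len(ordered) * 9999 // 10000))
    return avg, ordered[rank - 1]

# Example: 10,000 mostly-fast ops plus two slow outliers. A small number of
# outliers barely moves the average but dominates the 99.99th percentile,
# which is how IOPS/average can improve while the tail gets worse.
samples = [1.0] * 9998 + [50.0, 120.0]
avg, tail = latency_stats(samples)
```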
Thanks,
Jianjian
On 6/29/17, 9:20 AM, "LIU, Fei" <james.liu@alibaba-inc.com> wrote:
Hi All,
Below is a link to the slides we presented today at the Ceph weekly performance meeting regarding Ceph FileStore optimization with the Intel Optane drive and scrubbing optimization.
https://www.slideshare.net/jupiturliu/ceph-filestore-with-optane-drive-scrub-optimization
Please feel free to let us know if you have any questions.
Regards,
James
Alibaba Storage Team
^ permalink raw reply [flat|nested] 3+ messages in thread
* RE: Ceph FileStore optimization with Intel drive and scrubbing optimization
2017-06-29 16:20 Ceph FileStore optimization with Intel drive and scrubbing optimization LIU, Fei
2017-06-29 20:14 ` Jianjian Huo
@ 2017-06-30 1:06 ` Moreno, Orlando
1 sibling, 0 replies; 3+ messages in thread
From: Moreno, Orlando @ 2017-06-30 1:06 UTC (permalink / raw)
To: LIU, Fei, Ceph Development; +Cc: Sage Weil, Mark Nelson, 蔡进(柏羽), Jianjian Huo
Hi James,
Do you know how many systems were running the RBD clients?
Thanks,
Orlando
-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of LIU, Fei
Sent: Thursday, June 29, 2017 9:20 AM
To: Ceph Development <ceph-devel@vger.kernel.org>
Cc: Sage Weil <sage@newdream.net>; Mark Nelson <mnelson@redhat.com>; 蔡进(柏羽) <caijin.caij@alibaba-inc.com>; Jianjian Huo <j.huo@alibaba-inc.com>
Subject: Ceph FileStore optimization with Intel drive and scrubbing optimization
Hi All,
Below is a link to the slides we presented today at the Ceph weekly performance meeting regarding Ceph FileStore optimization with the Intel Optane drive and scrubbing optimization.
https://www.slideshare.net/jupiturliu/ceph-filestore-with-optane-drive-scrub-optimization
Please feel free to let us know if you have any questions.
Regards,
James
Alibaba Storage Team
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org. More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2017-06-30 1:06 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-29 16:20 Ceph FileStore optimization with Intel drive and scrubbing optimization LIU, Fei
2017-06-29 20:14 ` Jianjian Huo
2017-06-30 1:06 ` Moreno, Orlando