From: Junqin JQ7 Zhang <zhangjq7@lenovo.com>
To: Mark Nelson <mark.a.nelson@gmail.com>,
	Mark Nelson <mnelson@redhat.com>,
	Ceph Development <ceph-devel@vger.kernel.org>
Subject: RE: Ceph Bluestore OSD CPU utilization
Date: Wed, 12 Jul 2017 02:44:43 +0000
Message-ID: <694B98CBCEF42547AE4CD1A693225B5D085545A3@CNMAILEX04.lenovo.com>
In-Reply-To: <e09c864a-fb65-a68a-4802-f8b4d29f88fc@gmail.com>

Hi Mark,

Actually, we tested FileStore on the same Ceph version (v12.1.0) and the same cluster.
# ceph -v
ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086) luminous (dev)

CPU utilization of each OSD on FileStore can reach a maximum of around 200%, but CPU utilization of each OSD on BlueStore is only around 30%.
Correspondingly, BlueStore's performance is only about 20% of FileStore's.
We think there must be something wrong with our configuration.

I tried changing the Ceph config, e.g.:
osd op threads = 8
osd disk threads = 4

but still couldn't get a good result.

Any ideas?

BTW, we changed some FileStore-related settings during the test:
filestore fd cache size = 2048576000
filestore fd cache shards = 16
filestore async threads = 0
filestore max sync interval = 15
filestore wbthrottle enable = false
filestore commit timeout = 1200
filestore_op_thread_suicide_timeout = 0
filestore queue max ops = 1048576
filestore queue max bytes = 17179869184
max open files = 262144
filestore fadvise = false
filestore ondisk finisher threads = 4
filestore op threads = 8

Thanks a lot!

B.R.
Junqin Zhang
-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, July 11, 2017 11:47 PM
To: Junqin JQ7 Zhang; Mark Nelson; Ceph Development
Subject: Re: Ceph Bluestore OSD CPU utilization



On 07/11/2017 10:31 AM, Junqin JQ7 Zhang wrote:
> Hi Mark,
> 
> Thanks for your reply.
> 
> The hardware for each of the 3 hosts is as below.
> 2 SATA SSDs and 8 HDDs

The model of SSD could potentially be very important here.  The devices we test in our lab are enterprise-grade SSDs with power loss protection.  That means they don't have to flush data on sync requests, so O_DSYNC writes are much faster.  I don't know how bad an impact this has on the rocksdb wal/db, but it definitely hurts with filestore journals.
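
If you want to sanity-check your SSDs directly, a single-threaded O_DSYNC
write test with fio is a quick way to see this (just a sketch; the file
path is a placeholder, use a scratch file on the SSD so you don't clobber
data):

  fio --name=dsync-test --filename=/mnt/ssd/fio-test --size=1G \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 --direct=1 --sync=1 \
      --runtime=60 --time_based

Drives with power loss protection typically sustain thousands of these
writes per second, while consumer drives often manage only a few hundred.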

> Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
> Network: 20000Mb/s
> 
> I configured each OSD like:
> [osd.0]
> host = ceph-1
> osd data = /var/lib/ceph/osd/ceph-0        # a 100M partition of SSD
> bluestore block db path = /dev/sda5         # a 10G partition of SSD

Bluestore automatically rolls rocksdb data over to the HDD when the db gets full.  I bet with 10GB you'll see good performance at first, and then you'll start seeing lots of extra reads/writes on the HDD once it fills up with metadata (the more extents that are written out, the more likely you are to hit this boundary).  You'll want to make the db partitions use the majority of the SSD(s).

> bluestore block wal path = /dev/sda6       # a 10G partition of SSD

The WAL can be smaller.  1-2GB is enough (potentially even less if you adjust the rocksdb buffer settings, but 1-2GB should be small enough to devote most of your SSDs to DB storage).
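
As an illustration only (the sizes and partition names here are
hypothetical, adjust to your SSD), one OSD on a ~400GB SSD shared by
4 OSDs might look like:

  bluestore block wal path = /dev/sda5    # ~2GB partition
  bluestore block db path  = /dev/sda6    # ~90GB partition

i.e. keep the WAL small and divide the rest of the SSD among the DB
partitions instead of capping them at 10GB each.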

> bluestore block path = /dev/sdd                # a HDD disk
> 
> We use fio to test one or more 100G RBDs. An example of our fio config:
> [global]
> ioengine=rbd
> clientname=admin
> pool=rbd
> rw=randrw
> bs=8k
> runtime=120
> iodepth=16
> numjobs=4

With the rbd engine I try to avoid numjobs, as it can give erroneous results in some cases.  It's probably better in general to stick with multiple independent fio processes (though in this case, for a randrw workload, it might not matter).
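
As a sketch (the job file names are made up), that would look something
like:

  for i in 0 1 2 3; do
    fio rbd_image${i}.fio --output=result_${i}.log &
  done
  wait

with each .fio job file containing its own rbdname.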

> direct=1
> rwmixread=0
> new_group
> group_reporting
> [rbd_image0]
> rbdname=testimage_100GB_0
> 
> Any suggestion?

What kind of performance are you seeing and what do you expect to get?

Mark

> Thanks.
> 
> B.R.
> Junqin zhang
> 
> -----Original Message-----
> From: Mark Nelson [mailto:mnelson@redhat.com]
> Sent: Tuesday, July 11, 2017 7:32 PM
> To: Junqin JQ7 Zhang; Ceph Development
> Subject: Re: Ceph Bluestore OSD CPU utilization
> 
> Ugh, small sequential *reads* I meant to say.  :)
> 
> Mark
> 
> On 07/11/2017 06:31 AM, Mark Nelson wrote:
>> Hi Junqin,
>>
>> Can you tell us your hardware configuration (models and quantities of 
>> cpus, network cards, disks, ssds, etc) and the command and options 
>> you used to measure performance?
>>
>> In many cases bluestore is faster than filestore, but there are a 
>> couple of cases where it is notably slower, the big one being when 
>> doing small sequential writes without client-side readahead.
>>
>> Mark
>>
>> On 07/11/2017 05:34 AM, Junqin JQ7 Zhang wrote:
>>> Hi,
>>>
>>> I installed Ceph Luminous v12.1.0 on a 3-node cluster with BlueStore
>>> and ran some fio tests.
>>> During the test, I found that each OSD's CPU utilization was only
>>> around 30%, and the performance does not seem good to me.
>>> Is there any configuration that would help increase OSD CPU utilization
>>> and improve performance?
>>> Change kernel.pid_max? Any BlueStore-specific configuration?
>>>
>>> Thanks a lot!
>>>
>>> B.R.
>>> Junqin Zhang
