From: Christian Balzer <chibi-FW+hd8ioUD0@public.gmane.org>
To: "ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org"
	<ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>
Cc: Blair Bethwaite
	<blair.bethwaite-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	Ceph Development
	<ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
Subject: Re: Dramatic performance drop at certain number of objects in pool
Date: Fri, 24 Jun 2016 09:08:06 +0900
Message-ID: <20160624090806.1246b1ff@batzmaru.gol.ad.jp>
In-Reply-To: <BL2PR02MB2115BD5C173011A0CB92F964F42D0-TNqo25UYn65rzea/mugEKanrV9Ap65cLvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>


Hello,

On Thu, 23 Jun 2016 22:24:59 +0000 Somnath Roy wrote:

> Or even vm.vfs_cache_pressure = 0 if you have sufficient memory to *pin*
> inode/dentries in memory. We are using that for long now (with 128 TB
> node memory) and it seems helping specially for the random write
> workload and saving xattrs read in between.
>
128TB node memory, really?
Can I have some of those, too? ^o^
And here I was thinking that Wade's 660GB machines were on the excessive
side.

There's something to be said (and optimized) when your storage nodes have
as much or more RAM than your compute nodes...

As for Warren, well spotted.
I personally use vm.vfs_cache_pressure = 1; this avoids the potential
fireworks if your memory is really needed elsewhere, while still keeping
things in memory normally.
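
For anyone who would rather script this than edit /etc/sysctl.conf by
hand, a minimal sketch in Python (needs root to write the sysctl; the
value 1 is simply my own preference from above):

    # Read and optionally pin vm.vfs_cache_pressure via /proc.
    # 100 is the kernel default, 0 never reclaims dentries/inodes at all,
    # 1 keeps them cached while still allowing reclaim under real pressure.
    SYSCTL = "/proc/sys/vm/vfs_cache_pressure"

    def get_pressure():
        with open(SYSCTL) as f:
            return int(f.read().strip())

    def set_pressure(value):
        with open(SYSCTL, "w") as f:
            f.write(str(value))

    if __name__ == "__main__":
        print("current:", get_pressure())
        set_pressure(1)
        print("now:", get_pressure())

To make it stick across reboots, the equivalent sysctl.conf line is of
course just "vm.vfs_cache_pressure = 1".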

Christian

> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@lists.ceph.com] On Behalf Of Warren Wang - ISD
> Sent: Thursday, June 23, 2016 3:09 PM
> To: Wade Holler; Blair Bethwaite
> Cc: Ceph Development; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Dramatic performance drop at certain number of objects in pool
> 
> vm.vfs_cache_pressure = 100
> 
> Go the other direction on that. You'll want to keep it low to help keep
> inode/dentry info in memory. We use 10, and haven't had a problem.
> 
> 
> Warren Wang
> 
> 
> 
> 
> On 6/22/16, 9:41 PM, "Wade Holler" <wade.holler@gmail.com> wrote:
> 
> >Blairo,
> >
> >We'll speak in pre-replication numbers, replication for this pool is 3.
> >
> >23.3 Million Objects / OSD
> >pg_num 2048
> >16 OSDs / Server
> >3 Servers
> >660 GB RAM Total, 179 GB Used (free -t) / Server
> >vm.swappiness = 1
> >vm.vfs_cache_pressure = 100
> >
> >Workload is native librados with python.  ALL 4k objects.
> >
> >Best Regards,
> >Wade
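
(Purely for illustration, that kind of workload boils down to something
like the sketch below with the librados Python bindings -- the pool name
and object count are placeholders, not Wade's actual setup.)

    # Write a large number of small (4 KiB) objects straight via librados.
    import os
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("testpool")   # placeholder pool name
    try:
        payload = os.urandom(4096)           # every object is 4k
        for i in range(10 * 1000 * 1000):    # scale up to taste
            ioctx.write_full("obj-%012d" % i, payload)
    finally:
        ioctx.close()
        cluster.shutdown()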
> >
> >
> >On Wed, Jun 22, 2016 at 9:33 PM, Blair Bethwaite
> ><blair.bethwaite@gmail.com> wrote:
> >> Wade, good to know.
> >>
> >> For the record, what does this work out to roughly per OSD? And how
> >> much RAM and how many PGs per OSD do you have?
> >>
> >> What's your workload? I wonder whether for certain workloads (e.g.
> >> RBD) it's better to increase default object size somewhat before
> >> pushing the split/merge up a lot...
> >>
> >> Cheers,
> >>
> >> On 23 June 2016 at 11:26, Wade Holler <wade.holler@gmail.com> wrote:
> >>> Based on everyone's suggestions: the first modification to 50 / 16
> >>> enabled our config to get to ~645 million objects before the behavior
> >>> in question was observed (~330 million was the previous ceiling).
> >>> Subsequent modification to 50 / 24 has enabled us to get to 1.1 billion+.
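
(For anyone following along at home: filestore splits a leaf directory
once it holds more than filestore_split_multiple * abs(filestore_merge_threshold) * 16
objects -- at least that is the commonly quoted rule -- so the "50 / 16"
and "50 / 24" pairs above move the per-directory ceiling roughly like
this:)

    # Rough per-subdirectory object count at which filestore splits,
    # using the commonly quoted formula. The relevant [osd] options are
    # filestore_merge_threshold and filestore_split_multiple.
    def split_point(merge_threshold, split_multiple):
        return split_multiple * abs(merge_threshold) * 16

    # old defaults (10, 2), then the two combinations tried in this thread
    for merge, split in [(10, 2), (50, 16), (50, 24)]:
        print("%d / %d -> split at ~%d objects per subdir"
              % (merge, split, split_point(merge, split)))

(The result is the same whichever way round the two numbers are meant,
since the formula just multiplies them.)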
> >>>
> >>> Thank you all very much for your support and assistance.
> >>>
> >>> Best Regards,
> >>> Wade
> >>>
> >>>
> >>> On Mon, Jun 20, 2016 at 6:58 PM, Christian Balzer <chibi@gol.com> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> On Mon, 20 Jun 2016 20:47:32 +0000 Warren Wang - ISD wrote:
> >>>>
> >>>>> Sorry, late to the party here. I agree, up the merge and split
> >>>>> thresholds. We're as high as 50/12. I chimed in on an RH ticket here.
> >>>>> One of those things you just have to find out as an operator since
> >>>>> it's not well documented :(
> >>>>>
> >>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1219974
> >>>>>
> >>>>> We have over 200 million objects in this cluster, and it's still
> >>>>> doing over 15000 write IOPS all day long with 302 spinning drives +
> >>>>> SATA SSD journals. Having enough memory and dropping your
> >>>>> vfs_cache_pressure should also help.
> >>>>>
> >>>> Indeed.
> >>>>
> >>>> Since it was asked in that bug report and also my first suspicion, it
> >>>> would probably be a good time to clarify that it isn't the splits that
> >>>> cause the performance degradation, but the resulting inflation of dir
> >>>> entries and exhaustion of SLAB, and thus having to go to disk for
> >>>> things that normally would be in memory.
> >>>>
> >>>> Looking at Blair's graph from yesterday pretty much makes that clear;
> >>>> a purely split-caused degradation should have relented much quicker.
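
A quick way to see that happening is to watch the dentry and xfs_inode
slabs directly (needs root to read /proc/slabinfo, and slab names can
differ a bit between kernels):

    # Print active vs. total objects for the slabs that matter here.
    WATCH = ("dentry", "xfs_inode")

    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] not in WATCH:
                continue
            active, total = int(fields[1]), int(fields[2])
            print("%-12s %12d active / %12d total (%.1f%%)"
                  % (fields[0], active, total, 100.0 * active / max(total, 1)))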
> >>>>
> >>>>
> >>>>> Keep in mind that if you change the values, it won't take effect
> >>>>> immediately. It only merges them back if the directory is under
> >>>>> the calculated threshold and a write occurs (maybe a read, I
> >>>>> forget).
> >>>>>
> >>>> If it's a read a plain scrub might do the trick.
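
Something along these lines would kick off plain scrubs everywhere
(illustrative only -- whether a scrub actually triggers the merge is the
open question above, and on a busy cluster you'd want to stagger this):

    # Ask every OSD to do a plain (shallow) scrub, one after the other.
    import json
    import subprocess

    osd_ids = json.loads(
        subprocess.check_output(["ceph", "osd", "ls", "--format=json"]).decode())
    for osd in osd_ids:
        subprocess.check_call(["ceph", "osd", "scrub", str(osd)])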
> >>>>
> >>>> Christian
> >>>>> Warren
> >>>>>
> >>>>>
> >>>>> From: ceph-users <ceph-users-bounces@lists.ceph.com> on behalf of Wade Holler <wade.holler@gmail.com>
> >>>>> Date: Monday, June 20, 2016 at 2:48 PM
> >>>>> To: Blair Bethwaite <blair.bethwaite@gmail.com>, Wido den Hollander <wido@42on.com>
> >>>>> Cc: Ceph Development <ceph-devel@vger.kernel.org>, "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
> >>>>> Subject: Re: [ceph-users] Dramatic performance drop at certain number of objects in pool
> >>>>>
> >>>>> Thanks everyone for your replies.  I sincerely appreciate it. We
> >>>>> are testing with different pg_num and filestore_split_multiple
> >>>>> settings. Early indications are .... well not great. Regardless it
> >>>>> is nice to understand the symptoms better so we try to design
> >>>>> around it.
> >>>>>
> >>>>> Best Regards,
> >>>>> Wade
> >>>>>
> >>>>>
> >>>>> On Mon, Jun 20, 2016 at 2:32 AM Blair Bethwaite
> >>>>> <blair.bethwaite@gmail.com> wrote:
> >>>>> On 20 June 2016 at 09:21, Blair Bethwaite <blair.bethwaite@gmail.com> wrote:
> >>>>> > slow request issues). If you watch your xfs stats you'll likely
> >>>>> > get further confirmation. In my experience xs_dir_lookups balloons
> >>>>> > (which means directory lookups are missing cache and going to disk).
> >>>>>
> >>>>> Murphy's a bitch. Today we upgraded a cluster to latest Hammer in
> >>>>> preparation for Jewel/RHCS2. Turns out when we last hit this very
> >>>>> problem we had only ephemerally set the new filestore merge/split
> >>>>> values - oops. Here's what started happening when we upgraded and
> >>>>> restarted a bunch of OSDs:
> >>>>>
> >>>>> https://au-east.erc.monash.edu.au/swift/v1/public/grafana-ceph-xs_dir_lookup.png
> >>>>>
> >>>>> Seemed to cause lots of slow requests :-/. We corrected it about
> >>>>> 12:30, then still took a while to settle.
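
For anyone wanting to graph the same thing without Swift/Grafana handy,
the counter comes from the "dir" line of /proc/fs/xfs/stat (the first
field after the label should be xs_dir_lookup, if I remember the layout
correctly); a crude per-second sampler:

    # Print the per-second delta of XFS directory lookups; a sustained
    # jump is the cache-miss symptom described above.
    import time

    def dir_lookups():
        with open("/proc/fs/xfs/stat") as f:
            for line in f:
                if line.startswith("dir "):
                    return int(line.split()[1])
        return 0

    prev = dir_lookups()
    while True:
        time.sleep(1)
        cur = dir_lookups()
        print("xs_dir_lookup/s: %d" % (cur - prev))
        prev = cur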
> >>>>>
> >>>>> --
> >>>>> Cheers,
> >>>>> ~Blairo
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> Christian Balzer        Network/Systems Engineer
> >>>> chibi@gol.com           Global OnLine Japan/Rakuten Communications
> >>>> http://www.gol.com/
> >>
> >>
> >>
> >> --
> >> Cheers,
> >> ~Blairo
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@gol.com   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
