From: Sage Weil <sage-BnTBU8nroG7k1uMJSBkQmQ@public.gmane.org>
To: Aaron Ten Clay <aarontc-q67U1YB0R7xBDgjK7y7TUQ@public.gmane.org>
Cc: "ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org"
	<ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>,
	ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
Date: Mon, 17 Apr 2017 15:15:53 +0000 (UTC)	[thread overview]
Message-ID: <alpine.DEB.2.11.1704171457320.10661@piezo.novalocal> (raw)
In-Reply-To: <CAFFcurqEctQ2fHHDcGYfy3YCuaq9DxZr0VU4e8dNVACNVLDmqA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>


On Sat, 15 Apr 2017, Aaron Ten Clay wrote:
> Hi all,
> 
> Our cluster is experiencing a very odd issue and I'm hoping for some
> guidance on troubleshooting steps and/or suggestions to mitigate the issue.
> tl;dr: Individual ceph-osd processes try to allocate > 90GiB of RAM and are
> eventually nuked by oom_killer.

My guess is that there is a bug in a decoding path and it's 
trying to allocate a huge amount of memory.  Can you try setting a 
memory ulimit to something like 20GB and then enabling core dumps so you 
can get a core?  Something like

ulimit -c unlimited
ulimit -m 20000000

or whatever the corresponding systemd unit file options are...
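For systemd-managed OSDs, a sketch of the equivalent drop-in (the unit name
ceph-osd@.service and the exact values are assumptions; note that LimitRSS,
like ulimit -m, is advisory on modern Linux kernels, so the cgroup-based
MemoryLimit= is the cap that is actually enforced):

```
# /etc/systemd/system/ceph-osd@.service.d/memlimit.conf (hypothetical drop-in)
[Service]
LimitCORE=infinity    # equivalent of ulimit -c unlimited
LimitRSS=20G          # equivalent of ulimit -m; advisory on recent kernels
MemoryLimit=20G       # cgroup memory cap (enforced; MemoryMax= on newer systemd)
```

After adding the drop-in, run `systemctl daemon-reload` and restart the OSD
units for it to take effect.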

Once we have a core file it will hopefully be clear who is 
doing the bad allocation...
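Once the core lands, a minimal inspection sketch (the binary and core paths
depend on your package layout and kernel.core_pattern, so treat them as
assumptions; install the ceph debug-symbols package first for readable frames):

```
$ gdb /usr/bin/ceph-osd /path/to/core
(gdb) bt                    # backtrace of the faulting thread
(gdb) thread apply all bt   # all threads; look for the frame doing the giant allocation
```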

sage



> 
> I'll try to explain the situation in detail:
> 
> We have 24 4TB bluestore HDD OSDs and four 600GB SSD OSDs. The SSD OSDs are
> in a different CRUSH "root", used as a cache tier for the main storage
> pools, which are erasure coded and used for cephfs. The OSDs are spread
> across two identical machines with 128GiB of RAM each, and there are three
> monitor nodes on different hardware.
> 
> Several times we've encountered crippling bugs with previous Ceph releases
> when we were on RC or betas, or using non-recommended configurations, so in
> January we abandoned all previous Ceph usage, deployed LTS Ubuntu 16.04, and
> went with stable Kraken 11.2.0 with the configuration mentioned above.
> Everything was fine until the end of March, when one day we found all but a
> couple of OSDs "down" inexplicably. Investigation revealed that oom_killer
> had come along and nuked almost all of the ceph-osd processes.
> 
> We've gone through a bunch of iterations of restarting the OSDs, trying to
> bring them up one at a time gradually, all at once, various configuration
> settings to reduce cache size as suggested in this ticket:
> http://tracker.ceph.com/issues/18924...
> 
> I don't know whether that ticket really pertains to our situation; I have
> no experience with memory-allocation debugging. I'd be willing to try if
> someone can point me to a guide or walk me through the process.
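For reference, the kind of cache tuning discussed in that ticket is set in
ceph.conf; a sketch, where the option names and values are assumptions to be
verified against the running release (sizes in bytes unless noted):

```
# ceph.conf sketch -- verify option names against your Ceph release
[osd]
bluestore_cache_size = 536870912   # ~512 MiB of bluestore cache per OSD
osd_map_cache_size = 50            # number of OSDMaps kept in memory
```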
> 
> I've even tried, just to see if the situation was transitory, adding over
> 300GiB of swap to both OSD machines. Within 5-10 minutes the OSD processes
> managed to allocate more than 300GiB of memory and became oom_killer
> victims once again.
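While experimenting like this, it helps to watch each OSD's resident set
directly; a small Linux-specific sketch reading /proc (pgrep simply matches
nothing if no ceph-osd is running, so the loop is skipped):

```shell
# rss_kb PID -- print a process's resident set size in kB from /proc/PID/status
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Report every running ceph-osd process (no output if none exist).
for pid in $(pgrep -x ceph-osd); do
  echo "pid $pid VmRSS $(rss_kb "$pid") kB"
done
```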
> 
> No software or hardware changes took place around the time this problem
> started, and no significant data changes occurred either. We added about
> 40GiB of ~1GiB files a week or so before the problem started and that's the
> last time data was written.
> 
> I can only assume we've found another crippling bug of some kind; this
> level of memory usage is entirely unprecedented. What can we do?
> 
> Thanks in advance for any suggestions.
> -Aaron
> 
> 


Thread overview (8+ messages):
  2017-04-17 15:15  Sage Weil [this message]
  2017-04-19 23:18  Aaron Ten Clay
  2017-04-19 23:22  Aaron Ten Clay
  2017-05-04 20:57  Aaron Ten Clay
  2017-05-04 21:25  Sage Weil
  2017-05-15 23:01  Aaron Ten Clay
  2017-05-16  1:35  Sage Weil
  2017-06-02 21:56  Aaron Ten Clay
