From: David Turner <drakonstein-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: "Piotr Dałek" <piotr.dalek-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
Cc: ceph-devel <ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	ceph-users <ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>
Subject: Re: Snap trim queue length issues
Date: Thu, 14 Dec 2017 16:31:20 +0000	[thread overview]
Message-ID: <CAN-Gep+JEdr1V8B42YTy0rZzFM8B0TwHRqTs8WjrjcQm8tFgHA@mail.gmail.com> (raw)
In-Reply-To: <82009aab-6b20-ef21-9bbd-76fddf84e0a3-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>


I've tracked this in a much more manual way.  I would grab a random subset
of PGs in the pool and query each of them, counting how many objects were
in their snap trim queues.  Averaging that over the number of PGs sampled
and then multiplying back out by the number of PGs in the pool gave us a
reasonably accurate estimate of the snaptrimq size; accurate enough to
monitor, at least.  With a subset of 200 PGs the whole run took a matter of
minutes, and it was generally accurate in a pool with 32k PGs.
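The extrapolation itself is just sample-and-scale.  A minimal sketch (the
per-PG queue lengths would come from parsing `ceph pg <pgid> query` output
on a random sample of PGs; the exact JSON field holding the snap trim queue
differs between releases, so that parsing step is left out here):

```python
def estimate_snaptrimq(sampled_lengths, total_pgs):
    """Extrapolate the pool-wide snap trim queue size from a random
    sample of per-PG snap trim queue lengths."""
    avg_per_pg = sum(sampled_lengths) / len(sampled_lengths)
    return int(avg_per_pg * total_pgs)

# e.g. 200 sampled PGs averaging 150 queued objects, in a 32k-PG pool:
# estimate_snaptrimq(lengths_from_200_pgs, 32768)
```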

I also created a daemon that ran against the cluster, watching cluster load
and adjusting snap_trim_sleep accordingly.  With those two things combined
we were able to keep up with deleting hundreds of GB of snapshots per day
without killing VM performance.  We hit a bug that forced us to disable
snap trimming completely for about a week cluster-wide, and for about a
month on a dozen OSDs.  We ended up with a snaptrimq of over 100M objects,
but with these tools we were able to catch up within a couple of weeks
while still absorbing the daily snapshots being added to the queue.
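The core of such a daemon is a feedback loop: pick a sleep value from the
current load, then push it to the OSDs.  A rough sketch of that idea (the
load metric, thresholds, and sleep values below are illustrative
assumptions, not the tuned values we ran with; osd_snap_trim_sleep itself
can be changed at runtime via injectargs):

```python
import subprocess

def choose_snap_trim_sleep(load_1min, num_cores, idle=0.25, busy=0.75):
    """Map normalized load to a snap trim sleep: trim flat-out when the
    cluster is idle, throttle hard when it is busy."""
    load = load_1min / num_cores
    if load < idle:
        return 0.0   # idle: no sleep between trim operations
    if load > busy:
        return 2.0   # busy: back off hard to protect client I/O
    return 0.5       # in between: moderate throttle

def apply_sleep(sleep):
    # Push the new value to every OSD without a restart.
    subprocess.run(["ceph", "tell", "osd.*", "injectargs",
                    "--osd_snap_trim_sleep=%s" % sleep], check=True)
```

A real daemon would run this in a loop on a timer, sampling load from
whatever monitoring source is handy and only calling apply_sleep when the
chosen value changes.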

This was all on a Hammer cluster.  The changes that moved snap trimming
into the main OSD thread made our use case unviable on Jewel until fixes
that landed in Jewel after I left.  It's exciting that this will actually
become a reportable value from the cluster.

Sorry that this story doesn't really answer your question, except to say
that people aware of this problem likely have a workaround for it.
However, I'm certain that far more clusters are impacted by this than are
aware of it, and being able to see it quickly would be beneficial for
troubleshooting.  Backporting would be nice.  I run a few Jewel clusters
that host some VMs, and it would be nice to see how well they handle snap
trimming, though they depend much less heavily on snapshots.

On Thu, Dec 14, 2017 at 9:36 AM Piotr Dałek <piotr.dalek-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
wrote:

> Hi,
>
> We recently ran into low disk space issues on our clusters, and it wasn't
> because of actual data. On the affected clusters we're hosting VMs and
> volumes, so naturally there are snapshots involved. For some time, we
> observed increased disk space usage that we couldn't explain, as there was
> a discrepancy between what Ceph reported and the actual space used on
> disks. We finally found out that the snap trim queues were both long and
> not getting any shorter; decreasing snap trim sleep and increasing the max
> concurrent snap trims helped reverse the trend - we're safe now.
> The problem is, we haven't been aware of this issue for some time, and
> there's no easy (and fast[1]) way to check this. I made a pull request[2]
> that makes snap trim queue lengths available to monitoring tools
> and also generates health warning when things go out of control, so an
> admin
> can act before hell breaks loose.
>
> My question is: how many Jewel users would be interested in such a
> feature? There's a lot of change between Luminous and Jewel, so it's not
> going to be a straight backport, but it's not a big patch either, so I
> won't mind doing it myself. But having some support from users would help
> push this into the next Jewel release.
>
> Thanks!
>
>
> [1] one of our guys hacked a bash oneliner that printed out snap trim queue
> lengths for all pgs, but full run takes over an hour to complete on a
> cluster with over 20k pgs...
> [2] https://github.com/ceph/ceph/pull/19520
>
> --
> Piotr Dałek
> piotr.dalek-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org
> https://www.ovh.com/us/
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

