From: Jan Schermer <jan-SB6/BxVxTjHtwjQa/ONI9g@public.gmane.org>
To: Wukongming <wu.kongming-vVzyEvZLFYE@public.gmane.org>
Cc: "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org"
	<ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	"ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org"
	<ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>,
	"&RD-STOR-FIRE-vVzyEvZLFYE@public.gmane.org"
	<&RD-STOR-FIRE-vVzyEvZLFYE@public.gmane.org>
Subject: Re: Client io blocked when removing snapshot
Date: Thu, 10 Dec 2015 11:42:50 +0100	[thread overview]
Message-ID: <34168A89-37E0-4FCE-96EC-EBA0EC6CA904@schermer.cz> (raw)
In-Reply-To: <47D132BF400BE64BAE6D71033F7D3D7503DE0DF4-JwQOC20i6vT3cnzPNjVLboSsE/coCuR8pWgKQ6/u3Fg@public.gmane.org>

Removing a snapshot means looking for every *potential* object the snapshot can have, and that takes a very long time (a 6TB snapshot will consist of roughly 1.5M objects per replica, assuming the default 4MB object size). The same applies to large thin volumes (don't try creating and then dropping a 1 EiB volume, even if you only have 1GB of physical space :)).
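
For reference, the object count works out like this (a quick sketch, assuming the default 4MB object size and no striping):

    # potential objects per replica for a 6 TiB image with 4 MiB objects
    $ echo $((6 * 1024 * 1024 / 4))
    1572864
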
Doing this is simply expensive and can saturate your OSDs. If you don't have enough RAM to cache the directory structure, then every "is there a file /var/lib/ceph/....?" lookup goes to disk, and that can hurt a lot.
I don't think this work is assigned any priority (is there one?), so it competes with everything else.
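
As an aside, there is an osd_snap_trim_sleep option that can throttle trimming by sleeping between removals; a hedged sketch, assuming the option exists in your release:

    # inject into all running OSDs on the fly; the value is seconds to sleep between trim operations
    $ ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
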

I'm not sure exactly how snapshots are coded in Ceph. In a COW filesystem you simply stop dereferencing the parent's blocks when writing to it after a snapshot, and that's cheap. Ceph, however, stores "blocks" in files with computable names and has no pointers to them that could be modified, so creating a snapshot hurts performance a lot: when you dirty even a single byte, the whole 4MB object has to be copied into the snapshot(s). Though I remember reading that the logic is actually reversed and it is the snapshot that keeps the original blocks(??)...
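
To illustrate the "computable names" point, a sketch (format 2 image; the image name, prefix and object index below are hypothetical):

    # each 4MB chunk of an image lives in a RADOS object named <block_name_prefix>.<object index>
    $ rbd info myimage | grep block_name_prefix
            block_name_prefix: rbd_data.102a74b0dc51
    $ rados -p rbd stat rbd_data.102a74b0dc51.0000000000000000
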
Anyway, if you are removing a snapshot at the same time as writing to the parent, there could potentially be a problem with what gets done first. Is Ceph smart enough not to copy data into snapshots that are being deleted? I have no idea, but I think it must be, because we use snapshots a lot and haven't had any issues with it.

Jan

> On 10 Dec 2015, at 07:52, Wukongming <wu.kongming@h3c.com> wrote:
> 
> Hi, All
> 
> I used an rbd command to create a 6TB image, and then created a snapshot of this image. After that, I kept writing (e.g. modifying files) so that objects would be cloned into the snapshot one by one.
> At that point, I performed the following 2 operations simultaneously (sketched below):
> 
> 1. Keep client IO going to this image.
> 2. Execute an rbd snap rm command to delete the snapshot.
> 
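> A rough sketch of the setup (hypothetical names; --size is in MB here):
> 
>     rbd create testimg --size 6291456      # 6 TiB image
>     rbd snap create testimg@snap1
>     # client IO keeps running against testimg while:
>     rbd snap rm testimg@snap1
> 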
> Finally, I found client IO blocked for quite a long time. I tested on SATA disks, and it felt as if Ceph makes removing the snapshot the priority.
> We also used the iostat tool to watch the disk state, and the disks ran at full utilization.
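> For reference, the disks were watched with something like:
> 
>     iostat -x 1     # the %util column near 100% indicates device saturation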
> 
> So, shouldn't client IO be given priority over snapshot removal?
> ---------------------------------------------
> wukongming ID: 12019
> Tel:0571-86760239
> Dept:2014 UIS2 ONEStor
> 
> -------------------------------------------------------------------------------------------------------------------------------------
> This e-mail and its attachments contain confidential information from H3C, which is
> intended only for the person or entity whose address is listed above. Any use of the
> information contained herein in any way (including, but not limited to, total or partial
> disclosure, reproduction, or dissemination) by persons other than the intended
> recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
> by phone or email immediately and delete it!
