All of lore.kernel.org
 help / color / mirror / Atom feed
* Reboot blocked when undoing unmap op.
@ 2015-11-20  2:19 Wukongming
  2015-11-20  2:42 ` Fwd: " Wukongming
  2015-11-20  9:30 ` Ilya Dryomov
  0 siblings, 2 replies; 5+ messages in thread
From: Wukongming @ 2015-11-20  2:19 UTC (permalink / raw)
  To: ceph-devel, Sage Weil

Hi Sage,

I created an rbd image and mapped it locally, which means I can find /dev/rbd0. With the image still mapped I rebooted the system, and at the last step of shutdown it blocked with an error:

[235618.0202207] libceph: connect 172.16.57.252:6789 error -101.

My work environment:

Ubuntu kernel 3.19.0
Ceph 0.94.5
A cluster of 2 servers running iscsitgt and open-iscsi, each acting as both server and client. The multipath process is running but does not affect this issue; I've tried stopping multipath, and the issue is still there.
I only mapped an rbd image locally, so why does it show me a connect error?
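
For reference, the failing sequence can be sketched as follows (the pool and image names are illustrative placeholders, not from the original report; this needs a live cluster and is not meant to be run as-is):

```shell
# Sketch of the reproduction, assuming a running Hammer (0.94.x) cluster.
# Pool/image names are placeholders.
rbd create test-pool/test-img --size 1024   # create a 1 GB image
rbd map test-pool/test-img                  # kernel client attaches it as /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/rbd

# Rebooting now, WITHOUT umount/unmap, hangs at the final shutdown step:
reboot
# dmesg then shows: libceph: connect 172.16.57.252:6789 error -101
# (-101 is ENETUNREACH: the network is already down, but libceph keeps retrying)
```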

I saw your reply at http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/13077, but only part of it. Has this issue been resolved, and how?

Thanks!!
wukongming
-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Fwd: Reboot blocked when undoing unmap op.
  2015-11-20  2:19 Reboot blocked when undoing unmap op Wukongming
@ 2015-11-20  2:42 ` Wukongming
  2015-11-20  9:30 ` Ilya Dryomov
  1 sibling, 0 replies; 5+ messages in thread
From: Wukongming @ 2015-11-20  2:42 UTC (permalink / raw)
  To: ceph-devel, Sage Weil; +Cc: &RD-STOR-FIRE

By the way, the ceph cluster is OK before rebooting. But when the reboot fails, we have to cold-reboot the server, which may leave the ceph cluster in a bad condition, especially when a heartbeat network is added.

2015-10-26 06:39:41.065157 mon.0 172.16.142.139:6789/0 2519 : cluster [INF] pgmap v19973: 2048 pgs: 655 active+undersized+degraded, 10 active+remapped, 724 active+clean, 659 undersized+degraded+peered; 436 MB data, 2290 MB used, 15740 GB / 15742 GB avail; 68/232 objects degraded (29.310%)

---------------------------------------------
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 OneStor


-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Wukongming
Sent: 2015-11-20 10:19
To: ceph-devel@vger.kernel.org; Sage Weil
Subject: Reboot blocked when undoing unmap op.

Hi Sage,

I created an rbd image and mapped it locally, which means I can find /dev/rbd0. With the image still mapped I rebooted the system, and at the last step of shutdown it blocked with an error:

[235618.0202207] libceph: connect 172.16.57.252:6789 error -101.

My work environment:

Ubuntu kernel 3.19.0
Ceph 0.94.5
A cluster of 2 servers running iscsitgt and open-iscsi, each acting as both server and client. The multipath process is running but does not affect this issue; I've tried stopping multipath, and the issue is still there.
I only mapped an rbd image locally, so why does it show me a connect error?

I saw your reply at http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/13077, but only part of it. Has this issue been resolved, and how?

Thanks!!
wukongming

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Reboot blocked when undoing unmap op.
  2015-11-20  2:19 Reboot blocked when undoing unmap op Wukongming
  2015-11-20  2:42 ` Fwd: " Wukongming
@ 2015-11-20  9:30 ` Ilya Dryomov
  2015-11-25  6:36   ` Re: " Wukongming
       [not found]   ` <47D132BF400BE64BAE6D71033F7D3D7503DEF839@H3CMLB12-EX.srv.huawei-3com.com>
  1 sibling, 2 replies; 5+ messages in thread
From: Ilya Dryomov @ 2015-11-20  9:30 UTC (permalink / raw)
  To: Wukongming; +Cc: ceph-devel, Sage Weil

On Fri, Nov 20, 2015 at 3:19 AM, Wukongming <wu.kongming@h3c.com> wrote:
> Hi Sage,
>
> I created an rbd image and mapped it locally, which means I can find /dev/rbd0. With the image still mapped I rebooted the system, and at the last step of shutdown it blocked with an error:
>
> [235618.0202207] libceph: connect 172.16.57.252:6789 error -101.
>
> My work environment:
>
> Ubuntu kernel 3.19.0
> Ceph 0.94.5
> A cluster of 2 servers running iscsitgt and open-iscsi, each acting as both server and client. The multipath process is running but does not affect this issue; I've tried stopping multipath, and the issue is still there.
> I only mapped an rbd image locally, so why does it show me a connect error?
>
> I saw your reply at http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/13077, but only part of it. Has this issue been resolved, and how?

Yeah, this has been a long standing problem with libceph/rbd.  The
issue is that you *have* to umount (and ideally also unmap, but unmap
isn't strictly necessary) before you reboot.  Otherwise (and I assume
by mapped to a local you mean you've got MONs and OSDs on the same node
as you do rbd map), when you issue a reboot, daemons get killed and the
kernel client ends up waiting for them to come back, because of
outstanding writes issued by umount called by systemd (or whatever).
There are other variations of this, but it all comes down to you having
to cold reboot.

The right fix is to have all init systems sequence the killing of ceph
daemons after the umount/unmap.  I also toyed with adding a reboot
notifier for libceph to save a cold reboot, but the problem with that
in the general case is data integrity.  However, in cases like the one
I described above, there is no going back so we might as well kill
libceph through a notifier.  I have an incomplete patch somewhere, but
it really shouldn't be necessary...
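
As an illustration of the sequencing described above, a shutdown hook along these lines would run the umount/unmap before the ceph daemons are stopped. This is a sketch under stated assumptions (device paths, hook placement), not a tested init script; it must be ordered by the init system to run while the daemons and network are still up:

```shell
#!/bin/sh
# Illustrative pre-shutdown hook: tear down rbd mappings before ceph
# daemons are killed. Paths and ordering are assumptions for this sketch.

for dev in /dev/rbd[0-9]*; do
    [ -b "$dev" ] || continue      # skip if no rbd devices exist
    umount "$dev" 2>/dev/null      # flush and release any filesystem on it
    rbd unmap "$dev"               # then detach the kernel client mapping
done
```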

Thanks,

                Ilya
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Reboot blocked when undoing unmap op.
  2015-11-20  9:30 ` Ilya Dryomov
@ 2015-11-25  6:36   ` Wukongming
       [not found]   ` <47D132BF400BE64BAE6D71033F7D3D7503DEF839@H3CMLB12-EX.srv.huawei-3com.com>
  1 sibling, 0 replies; 5+ messages in thread
From: Wukongming @ 2015-11-25  6:36 UTC (permalink / raw)
  To: Ilya Dryomov; +Cc: ceph-devel, Sage Weil

I am going to modify the reboot script, but that is still not satisfying. If you can share your patch, that would be very cool.

Thanks,

                        Kongming Wu
---------------------------------------------
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 OneStor

-----Original Message-----
From: Ilya Dryomov [mailto:idryomov@gmail.com]
Sent: 2015-11-20 17:30
To: wukongming 12019 (RD)
Cc: ceph-devel@vger.kernel.org; Sage Weil
Subject: Re: Reboot blocked when undoing unmap op.

On Fri, Nov 20, 2015 at 3:19 AM, Wukongming <wu.kongming@h3c.com> wrote:
> Hi Sage,
>
> I created an rbd image and mapped it locally, which means I can find
> /dev/rbd0. With the image still mapped I rebooted the system, and at
> the last step of shutdown it blocked with an error:
>
> [235618.0202207] libceph: connect 172.16.57.252:6789 error -101.
>
> My work environment:
>
> Ubuntu kernel 3.19.0
> Ceph 0.94.5
> A cluster of 2 servers running iscsitgt and open-iscsi, each acting as both server and client. The multipath process is running but does not affect this issue; I've tried stopping multipath, and the issue is still there.
> I only mapped an rbd image locally, so why does it show me a connect error?
>
> I saw your reply at http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/13077, but only part of it. Has this issue been resolved, and how?

Yeah, this has been a long standing problem with libceph/rbd.  The issue is that you *have* to umount (and ideally also unmap, but unmap isn't strictly necessary) before you reboot.  Otherwise (and I assume by mapped to a local you mean you've got MONs and OSDs on the same node as you do rbd map), when you issue a reboot, daemons get killed and the kernel client ends up waiting for them to come back, because of outstanding writes issued by umount called by systemd (or whatever).
There are other variations of this, but it all comes down to you having to cold reboot.

The right fix is to have all init systems sequence the killing of ceph daemons after the umount/unmap.  I also toyed with adding a reboot notifier for libceph to save a cold reboot, but the problem with that in the general case is data integrity.  However, in cases like the one I described above, there is no going back so we might as well kill libceph through a notifier.  I have an incomplete patch somewhere, but it really shouldn't be necessary...

Thanks,

                Ilya

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Reboot blocked when undoing unmap op.
       [not found]   ` <47D132BF400BE64BAE6D71033F7D3D7503DEF839@H3CMLB12-EX.srv.huawei-3com.com>
@ 2016-01-04 16:21     ` Ilya Dryomov
  0 siblings, 0 replies; 5+ messages in thread
From: Ilya Dryomov @ 2016-01-04 16:21 UTC (permalink / raw)
  To: Wukongming; +Cc: Ceph Development

On Mon, Jan 4, 2016 at 10:51 AM, Wukongming <wu.kongming@h3c.com> wrote:
> Hi, Ilya,
>
> It is an old problem.
> When you say "when you issue a reboot, daemons get killed and the kernel client ends up waiting for them to come back, because of outstanding writes issued by umount called by systemd (or whatever)."
>
> Do you mean that if the rbd is umounted successfully, the kernel client will stop waiting? What communication mechanism exists between libceph and the daemons (or ceph userspace)?

If you umount the filesystem on top of rbd and unmap rbd image, there
won't be anything to wait for.  In fact, if there aren't any other rbd
images mapped, libceph will clean up after itself and exit.

If you umount the filesystem on top of rbd but don't unmap the image,
libceph will remain there, along with some amount of communication
(keepalive messages, watch requests, etc).  However, all of that is
internal and is unlikely to block reboot.

If you don't umount the filesystem, your init system will try to umount
it, issuing FS requests to the rbd device.  We don't want to drop those
requests, so, if daemons are gone by then, libceph ends up blocking.
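
The three cases above correspond to the following teardown sequences (mount point and device names are assumed placeholders):

```shell
# Case 1: umount + unmap -- nothing left to wait for; with no other
# images mapped, libceph cleans up after itself and exits.
umount /mnt/rbd && rbd unmap /dev/rbd0

# Case 2: umount only -- libceph stays around (keepalives, watch
# requests), but that traffic is internal and unlikely to block reboot.
umount /mnt/rbd

# Case 3: neither -- the init system's own umount issues FS requests
# to /dev/rbd0; if the daemons are already gone, libceph blocks.
```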

Thanks,

                Ilya

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2016-01-04 16:21 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-20  2:19 Reboot blocked when undoing unmap op Wukongming
2015-11-20  2:42 ` Fwd: " Wukongming
2015-11-20  9:30 ` Ilya Dryomov
2015-11-25  6:36   ` Re: " Wukongming
     [not found]   ` <47D132BF400BE64BAE6D71033F7D3D7503DEF839@H3CMLB12-EX.srv.huawei-3com.com>
2016-01-04 16:21     ` Ilya Dryomov

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.