From: Robin Geuze <robin.geuze@nl.team.blue>
To: Ilya Dryomov <idryomov@gmail.com>
Cc: Ceph Development <ceph-devel@vger.kernel.org>
Subject: Re: All RBD IO stuck after flapping OSD's
Date: Wed, 16 Jun 2021 11:56:01 +0000	[thread overview]
Message-ID: <666938090a8746a7ad8ae40ebf116e1c@nl.team.blue> (raw)
In-Reply-To: <CAOi1vP-8i-rKEDd8Emq+MtxCjvK-6VG8KaXdzvQLW89174jUZA@mail.gmail.com>

Hey Ilya,

Sorry for the long delay, but we've finally managed to somewhat reliably reproduce this issue and produced a bunch of debug data. It's really big, so you can find the files here: https://dionbosschieter.stackstorage.com/s/RhM3FHLD28EcVJJ2

We also got some stack traces; those are in there as well.

The way we reproduce it is that on one of the two Ceph machines in the cluster (it's a test cluster) we take both bond NIC ports down, wait 40 seconds, bring them back up, wait another 15 seconds, take them down again, wait another 40 seconds, and then bring them back up.

Exact command line I used on the ceph machine:
ip l set ens785f1 down; sleep 1; ip l set ens785f0 down; sleep 40; ip l set ens785f1 up; sleep 5; ip l set ens785f0 up; sleep 15; ip l set ens785f1 down; sleep 1; ip l set ens785f0 down; sleep 40; ip l set ens785f1 up; sleep 5; ip l set ens785f0 up
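
The same sequence written out as a small script, for readability (the interface names are specific to our test machine):

# flap both bond member ports twice: down for ~40s, back up, repeat
for i in 1 2; do
    ip link set ens785f1 down
    sleep 1
    ip link set ens785f0 down
    sleep 40
    ip link set ens785f1 up
    sleep 5
    ip link set ens785f0 up
    sleep 15    # pause before the second round
done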

Regards,

Robin Geuze 
  
From: Ilya Dryomov <idryomov@gmail.com>
Sent: 19 April 2021 14:40:00
To: Robin Geuze
Cc: Ceph Development
Subject: Re: All RBD IO stuck after flapping OSD's
    
On Thu, Apr 15, 2021 at 2:21 PM Robin Geuze <robin.geuze@nl.team.blue> wrote:
>
> Hey Ilya,
>
> We had to reboot the machine unfortunately, since we had customers unable to work with their VMs. We did manage to make a dynamic debugging dump of an earlier occurrence, maybe that can help? I've attached it to this email.

No, I don't see anything to go on there.  Next time, enable logging for
both libceph and rbd modules and make sure that at least one instance of
the error (i.e. "pre object map update failed: -16") makes it into the
attached log.
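
For reference, with debugfs mounted at /sys/kernel/debug, dynamic debugging for both modules can be enabled along these lines (as root):

# echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control
# echo 'module rbd +p' > /sys/kernel/debug/dynamic_debug/control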

>
> Those messages constantly occur, even after we kill the VM using the mount; I guess that's because there is pending IO which cannot be flushed.
>
> As for how it's getting worse: if you try any management operations (e.g. unmap) on any of the RBD mounts that aren't affected, they hang, and more often than not the IO for that one also stalls (not always though).

One obvious workaround is to unmap, disable the object-map and
exclusive-lock features with "rbd feature disable", and map back.  You
would lose the benefits of object map, but if it is affecting customer
workloads it is probably the best course of action until this thing is
root caused.
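
In terms of commands that would be roughly the following (pool and image names are placeholders; if fast-diff is enabled it has to be disabled as well, since it depends on object-map):

$ rbd unmap /dev/rbd14
$ rbd feature disable <pool>/<image> fast-diff object-map exclusive-lock
$ rbd map <pool>/<image>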

Thanks,

                Ilya

>
> Regards,
>
> Robin Geuze
>
> From: Ilya Dryomov <idryomov@gmail.com>
> Sent: 14 April 2021 19:00:20
> To: Robin Geuze
> Cc: Ceph Development
> Subject: Re: All RBD IO stuck after flapping OSD's
>
> On Wed, Apr 14, 2021 at 4:56 PM Robin Geuze <robin.geuze@nl.team.blue> wrote:
> >
> > Hey,
> >
> > We've encountered a weird issue when using the kernel RBD module. It starts with a bunch of OSDs flapping (in our case because of a network card issue which caused the LACP bond to constantly flap), which is logged in dmesg:
> >
> > Apr 14 05:45:02 hv1 kernel: [647677.112461] libceph: osd56 down
> > Apr 14 05:45:03 hv1 kernel: [647678.114962] libceph: osd54 down
> > Apr 14 05:45:05 hv1 kernel: [647680.127329] libceph: osd50 down
> > (...)
> >
> > After a while of that we start getting these errors being spammed in dmesg:
> >
> > Apr 14 05:47:35 hv1 kernel: [647830.671263] rbd: rbd14: pre object map update failed: -16
> > Apr 14 05:47:35 hv1 kernel: [647830.671268] rbd: rbd14: write at objno 192 2564096~2048 result -16
> > Apr 14 05:47:35 hv1 kernel: [647830.671271] rbd: rbd14: write result -16
> >
> > (In this case for two different RBD mounts)
> >
> > At this point the IO for these two mounts is completely gone, and the only reason we can still perform IO on the other RBD devices is because we use noshare. Unfortunately unmounting the other devices is no longer possible, which means we cannot migrate our VMs to another HV, since to make the messages go away we have to reboot the server.
>
> Hi Robin,
>
> Do these messages appear even if no I/O is issued to /dev/rbd14 or only
> if you attempt to write?
>
> >
> > All of this wouldn't be such a big issue if it recovered once the cluster started behaving normally again, but it doesn't, it just stays stuck, and the longer we wait before rebooting, the worse the issue gets.
>
> Please explain how it's getting worse.
>
> I think the problem is that the object map isn't locked.  What
> probably happened is the kernel client lost its watch on the image
> and for some reason can't get it back.  The flapping has likely
> triggered some edge condition in the watch/notify code.
>
> To confirm:
>
> - paste the contents of /sys/bus/rbd/devices/14/client_addr
>
> - paste the contents of /sys/kernel/debug/ceph/<cluster id>.client<id>/osdc
>   for /dev/rbd14.  If you are using noshare, you will have multiple
>   client instances with the same cluster id.  The one you need can be
>   identified with /sys/bus/rbd/devices/14/client_id.
>
> - paste the output of "rbd status <rbd14 image>" (image name can be
>   identified from "rbd showmapped")
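>
> Gathered in one place, using the same paths and placeholders as above,
> that would be something like:
>
> $ cat /sys/bus/rbd/devices/14/client_addr
> $ cat /sys/bus/rbd/devices/14/client_id
> $ cat /sys/kernel/debug/ceph/<cluster id>.client<id>/osdc
> $ rbd showmapped
> $ rbd status <rbd14 pool>/<rbd14 image>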
>
> I'm also curious who actually has the lock on the header object and the
> object map object.  Paste the output of
>
> $ ID=$(rbd info --format=json <rbd14 pool>/<rbd14 image> | jq -r .id)
> $ rados -p <rbd14 pool> lock info rbd_header.$ID rbd_lock | jq
> $ rados -p <rbd14 pool> lock info rbd_object_map.$ID rbd_lock | jq
>
> Thanks,
>
>                 Ilya
>
    

