Subject: All RBD IO stuck after flapping OSDs
From: Robin Geuze
Date: 2021-04-14 8:51 UTC
To: Ceph Development

Hey,

We've encountered a strange issue when using the kernel RBD module. It starts with a bunch of OSDs flapping (in our case because of a network card issue that caused the LACP bond to flap constantly), which is logged in dmesg:

Apr 14 05:45:02 hv1 kernel: [647677.112461] libceph: osd56 down
Apr 14 05:45:03 hv1 kernel: [647678.114962] libceph: osd54 down
Apr 14 05:45:05 hv1 kernel: [647680.127329] libceph: osd50 down
(...)
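(For reference, the flapping is also visible from the cluster side with the stock Ceph CLI, e.g.:

$ ceph -s             # overall health plus the up/in OSD counts
$ ceph osd tree down  # lists only the OSDs currently marked down

The state filter for "ceph osd tree" is available on reasonably recent releases.)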

After a while of that, these errors start being spammed in dmesg:

Apr 14 05:47:35 hv1 kernel: [647830.671263] rbd: rbd14: pre object map update failed: -16
Apr 14 05:47:35 hv1 kernel: [647830.671268] rbd: rbd14: write at objno 192 2564096~2048 result -16
Apr 14 05:47:35 hv1 kernel: [647830.671271] rbd: rbd14: write result -16

(In this case for two different RBD mounts)
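For reference, -16 is -EBUSY, so the object map update is apparently being refused, presumably because the client cannot (re)acquire the exclusive lock on the image header. One way to inspect that from userspace (the pool/image name below is just a placeholder):

$ rbd status mypool/myimage     # shows the watchers on the image header
$ rbd lock list mypool/myimage  # shows the current exclusive-lock holder, if any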

At this point all IO to those two mounts is completely stuck, and the only reason we can still perform IO on the other RBD devices is that we use noshare. Unfortunately, unmounting the other devices is no longer possible either, which means we cannot migrate our VMs to another hypervisor: the only way to make the messages go away is to reboot the server.
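(For context: noshare gives each mapped image its own ceph client instance instead of sharing one per cluster, which is presumably why only the affected images are stuck. We map roughly like this, with a placeholder image name:

$ rbd map -o noshare mypool/myimage

)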

All of this wouldn't be such a big issue if it recovered once the cluster started behaving normally again, but it doesn't: the IO just stays stuck, and the longer we wait before rebooting, the worse the issue gets.

We've seen this multiple times, on various machines and various clusters with differing underlying problems, so it's not a freak incident. Does anyone have any ideas on how we can solve this?

Regards,

Robin Geuze


Thread overview: 16+ messages
2021-04-14  8:51 All RBD IO stuck after flapping OSDs Robin Geuze
2021-04-14 17:00 ` Ilya Dryomov
     [not found]   ` <8eb12c996e404870803e9a7c77e508d6@nl.team.blue>
2021-04-19 12:40     ` Ilya Dryomov
2021-06-16 11:56       ` Robin Geuze
2021-06-17  8:36         ` Ilya Dryomov
2021-06-17  8:42           ` Robin Geuze
2021-06-17  9:40             ` Ilya Dryomov
2021-06-17 10:17               ` Robin Geuze
2021-06-17 11:09                 ` Ilya Dryomov
2021-06-17 11:12                   ` Robin Geuze
2021-06-29  8:39                     ` Robin Geuze
2021-06-29 10:07                       ` Ilya Dryomov
2021-07-06 17:21                         ` Ilya Dryomov
2021-07-07  7:35                           ` Robin Geuze
2021-07-20 12:04                             ` Robin Geuze
2021-07-20 16:42                               ` Ilya Dryomov
