* [LSF/MM TOPIC] I/O error handling and fsync()
@ 2017-01-10 16:02 Kevin Wolf
From: Kevin Wolf @ 2017-01-10 16:02 UTC (permalink / raw)
  To: lsf-pc
  Cc: linux-fsdevel, linux-mm, Christoph Hellwig, Ric Wheeler, Rik van Riel

Hi all,

when I mentioned the I/O error handling problem we have in QEMU,
especially with fsync(), to Christoph Hellwig, he thought it would be
a great topic for LSF/MM, so here I am. This came up a few months ago
on qemu-devel [1] and we managed to ignore it for a while, but it's a
real and potentially serious problem, so I agree with Christoph that
it makes sense to get it discussed at LSF/MM.


At the heart of it is the semantics of fsync(). A few years ago, fsync()
was fixed to actually flush data to the disk, so we now have a defined
and useful meaning of fsync() as long as all your fsync() calls return
success.
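
For illustration, this is the pattern that gives fsync() its useful
meaning, sketched minimally in C (error handling shortened; the file
descriptor is assumed to refer to a regular file opened elsewhere):

    #include <sys/types.h>
    #include <unistd.h>

    /* Minimal sketch of the durable-write pattern: if every call
     * succeeds, the data is on stable storage afterwards. */
    int write_durably(int fd, const void *buf, size_t len, off_t off)
    {
        if (pwrite(fd, buf, len, off) != (ssize_t)len)
            return -1;
        if (fsync(fd) < 0)   /* flush data (and metadata) to disk */
            return -1;
        return 0;            /* the data is now stable on disk */
    }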

However, as soon as one fsync() call fails, the state we're in is
mostly undefined, even if the root problem is solved later (network
connection restored, some space freed for thin-provisioned storage,
etc.). As Ric Wheeler told me back in the qemu-devel discussion, when
a writeout fails, you get an fsync() error returned (once), but the
kernel page cache simply marks the affected page as clean and
consequently won't ever retry the writeout. The page can then even be
evicted from the cache although it isn't actually consistent with the
state on disk, which means throwing away data that some process
wrote.

So if you do another fsync() and it returns success, that currently
doesn't mean that all of the data you wrote is on disk; at best it
covers only the data you wrote after the failed fsync(). This isn't
very helpful, to say the least, because you called fsync() in order
to get a consistent state on disk, and you still don't have one.

Essentially this means that once you get an fsync() failure, there is
no hope of recovery for the application; it has to stop using the
file.
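
In code, the trap looks like this (a sketch of the current behaviour
described above, with error handling elided):

    #include <unistd.h>

    /* fd, data and len are assumed to be set up elsewhere. */
    void demo(int fd, const char *data, size_t len)
    {
        pwrite(fd, data, len, 0);
        if (fsync(fd) < 0) {
            /* writeout failed: the error is reported exactly once,
             * but the dirty pages are marked clean anyway */
        }
        /* ... the underlying problem gets fixed (network back,
         * space freed) ... */
        if (fsync(fd) == 0) {
            /* success here does NOT mean `data` is on disk: its
             * pages were marked clean above, so this fsync() had
             * nothing left to write for them */
        }
    }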


To give some context about my perspective as the maintainer of the
QEMU block subsystem: QEMU has a mode (usually enabled in production)
where an I/O failure isn't communicated to the guest, which would
probably take its filesystem offline, thinking its hard disk has
died. Instead, QEMU pauses the VM and allows the administrator to
resume it when the problem has been fixed. Often the problem is only
temporary, e.g. a network hiccup while a disk image is stored on NFS,
so this is quite a helpful approach.

When QEMU is told to resume the VM, the failed request is simply
resubmitted. This works fine for reads and writes, but not so much
for fsync, because after the first failure all bets are off, even if
a subsequent fsync() succeeds.
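
Sketched roughly below; all names here are hypothetical, not actual
QEMU APIs, this just illustrates the model:

    struct req;
    void pause_vm(void);
    void wait_for_admin_resume(void);
    void resubmit(struct req *r);

    /* Hypothetical sketch of the stop-on-error model. */
    void on_request_error(struct req *r)
    {
        pause_vm();              /* the guest never sees the error */
        wait_for_admin_resume(); /* admin fixes the problem, resumes */
        resubmit(r);             /* fine for reads and writes: the
                                  * payload is still attached to the
                                  * request; for a flush this isn't
                                  * enough, because the dirty pages it
                                  * should have written out may already
                                  * have been dropped as clean */
    }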

So this is the aspect that directly affects me, even though the
problem is much broader and by no means affects only QEMU.


This leads to a few individual points to be discussed:

1. Fix the data corruption problem that follows from the current
   behaviour. Imagine the following scenario:

   Process A writes to some file, calls fsync() and gets a failure.
   The data it wrote is marked clean in the page cache even though
   it's inconsistent with the disk. Process A knows that fsync()
   failed, so maybe it can deal with it, at least by no longer using
   the file.

   Now process B opens the same file, reads the updated data that
   process A wrote, makes some additional changes based on it and
   calls fsync() again. This time fsync() returns success. The data
   written by B is on disk, but the data written by A isn't. Oops,
   this is data corruption, and process B doesn't even know about it
   because all of its operations succeeded.
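
   As a concrete sketch (both "processes" shown sequentially for
   brevity; this assumes the device fails the first writeout, e.g.
   with an injected error, and recovers before B's fsync()):

       #include <fcntl.h>
       #include <stdio.h>
       #include <unistd.h>

       int main(void)
       {
           /* Process A: writes, fsync() fails, A stops using the
            * file. */
           int fd = open("data.img", O_RDWR);
           pwrite(fd, "AAAA", 4, 0);
           if (fsync(fd) < 0)
               perror("fsync (A)"); /* A sees the error (once) */
           close(fd);               /* pages are clean-but-wrong now */

           /* Process B: reads A's data from the page cache and
            * builds on it. */
           char buf[4];
           fd = open("data.img", O_RDWR);
           pread(fd, buf, 4, 0);     /* still sees "AAAA" from cache */
           pwrite(fd, "BBBB", 4, 4); /* derived from what B just read */
           if (fsync(fd) == 0)       /* storage recovered: succeeds */
               puts("B: all good?"); /* disk has B's data, not A's */
           close(fd);
           return 0;
       }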

2. Define fsync() semantics that include the state after a failure (this
   probably goes a long way towards fixing 1.).

   The semantics that QEMU uses internally (and which it needs to
   map) are that after a successful flush, all writes to the disk
   image that had successfully completed before the flush was issued
   are stable on disk (no matter whether a previous flush failed).

   A possible adaptation to Linux, which must consider that unlike
   QEMU images, files can be opened more than once, might be that a
   succeeding fsync() on a file descriptor means that all data that
   has been read or written through this file descriptor is
   consistent between the page cache and the disk. (The read part is
   for avoiding the scenario from 1.: it means that fsync() flushes
   data written on a different file descriptor if it has been seen
   through this one; hence, the page cache can't contain clean pages
   that are inconsistent with the disk.)
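
   Expressed as code, the proposed guarantee would read like this (a
   hypothetical illustration, not current behaviour):

       #include <unistd.h>

       void proposed_semantics(int fd, char *buf, size_t len)
       {
           read(fd, buf, len);   /* data seen through this fd ...   */
           write(fd, buf, len);  /* ... and data written through it */
           if (fsync(fd) == 0) {
               /* Proposed guarantee: every page this fd has read or
                * written is now consistent between page cache and
                * disk, including pages another fd dirtied before we
                * read them. If any such page was lost, fsync() must
                * keep failing. */
           }
       }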

3. Actually make fsync() failure recoverable.

   You can implement 2. by making sure that a file descriptor for
   which pages have been thrown away always returns an error and
   never goes back to succeeding (it can't succeed according to the
   definition in 2., because the data that would have to be written
   out is gone). This is already a much better interface, but it
   doesn't really solve the actual problem we have.

   We also need to make sure that after a failed fsync() there is a
   chance to recover. This means that the pages shouldn't be thrown
   away immediately; but at the same time, you probably also don't
   want to keep pages around indefinitely when there is a permanent
   writeout error. However, if we can make sure that these pages are
   only evicted under actual memory pressure, and only if there are
   no actually clean pages left to evict, I think a lot would already
   be won.

   In the common case, you could then recover from a temporary
   failure, but if this state isn't maintainable, at least we get a
   consistent fsync() failure telling us that the data is gone.
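
   The recovery loop this would enable might look as follows; the
   helper names are hypothetical, nothing here exists in the kernel
   today:

       #include <errno.h>
       #include <unistd.h>

       int storage_may_recover(int err);  /* hypothetical helper */
       void wait_for_storage(void);       /* hypothetical helper */

       int flush_with_recovery(int fd)
       {
           while (fsync(fd) < 0) {
               if (!storage_may_recover(errno))
                   return -1;  /* pages were evicted under memory
                                * pressure: fsync() now fails
                                * consistently and the caller knows
                                * the data is gone */
               wait_for_storage();  /* e.g. NFS back, space freed */
               /* the dirty pages were kept, so this retry can
                * succeed and its success actually means something */
           }
           return 0;  /* everything written through fd is stable */
       }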


I think I've summarised most aspects here, but if something is unclear
or you'd like to see some more context, please refer to the qemu-devel
discussion [1] that I mentioned, or feel free to just ask.

Thanks,
Kevin

[1] https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00576.html

