From: Amir Goldstein <amir73il@gmail.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Jayashree <jaya@cs.utexas.edu>, fstests <fstests@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-doc@vger.kernel.org,
	Vijaychidambaram Velayudhan Pillai <vijay@cs.utexas.edu>,
	Theodore Tso <tytso@mit.edu>,
	chao@kernel.org, Filipe Manana <fdmanana@gmail.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Josef Bacik <josef@toxicpanda.com>,
	Anna Schumaker <Anna.Schumaker@netapp.com>
Subject: Re: [PATCH v2] Documenting the crash-recovery guarantees of Linux file systems
Date: Mon, 18 Mar 2019 09:13:58 +0200
Message-ID: <CAOQ4uxgwa9CeyLXJ022VW8dac3h_fGUi9BtF_apMv0T1UjJ3iw@mail.gmail.com>
In-Reply-To: <20190317221652.GQ26298@dastard>

On Mon, Mar 18, 2019 at 12:16 AM Dave Chinner <david@fromorbit.com> wrote:
>
> On Fri, Mar 15, 2019 at 05:44:49AM +0200, Amir Goldstein wrote:
> > On Fri, Mar 15, 2019 at 5:03 AM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > On Thu, Mar 14, 2019 at 09:19:03AM +0200, Amir Goldstein wrote:
> > > > On Thu, Mar 14, 2019 at 3:19 AM Dave Chinner <david@fromorbit.com> wrote:
> > > > > On Tue, Mar 12, 2019 at 02:27:00PM -0500, Jayashree wrote:
> > > > > > +Strictly Ordered Metadata Consistency
> > > > > > +-------------------------------------
> > > > > > +With each file system providing varying levels of persistence
> > > > > > +guarantees, a consensus in this regard will help application
> > > > > > +developers work with certain fixed assumptions about file system
> > > > > > +guarantees. Dave Chinner proposed a unified model called
> > > > > > +Strictly Ordered Metadata Consistency (SOMC) [5].
> > > > > > +
> > > > > > +Under this scheme, the file system guarantees to persist all previous
> > > > > > +dependent modifications to the object upon fsync().  If you fsync() an
> > > > > > +inode, it will persist all the changes required to reference the inode
> > > > > > +and its data. SOMC can be defined as follows [6]:
> > > > > > +
> > > > > > +If op1 precedes op2 in program order (in-memory execution order), and
> > > > > > +op1 and op2 share a dependency, then op2 must not be observed by a
> > > > > > +user after recovery without also observing op1.
> > > > > > +
> > > > > > +Unfortunately, SOMC's definition depends upon whether two operations
> > > > > > +share a dependency, which could be file-system specific. It might
> > > > > > +require a developer to understand file-system internals to know if
> > > > > > +SOMC would order one operation before another.
> > > > >
> > > > > That's largely an internal implementation detail, and users should
> > > > > not have to care about the internal implementation because the
> > > > > fundamental dependencies are all defined by the directory hierarchy
> > > > > relationships that users can see and manipulate.
> > > > >
> > > > > i.e. fs internal dependencies only increase the size of the graph
> > > > > that is persisted, but it will never be reduced to less than what
> > > > > the user can observe in the directory hierarchy.
> > > > >
> > > > > So this can be further refined:
> > > > >
> > > > >         If op1 precedes op2 in program order (in-memory execution
> > > > >         order), and op1 and op2 share a user visible reference, then
> > > > >         op2 must not be observed by a user after recovery without
> > > > >         also observing op1.
> > > > >
> > > > > e.g. in the case of the parent directory - the parent has a link
> > > > > count. Hence every create, unlink, rename, hard link, symlink, etc
> > > > > operation in a directory modifies a user visible link count
> > > > > reference.  Hence fsync of one of those children will persist the
> > > > > directory link count, and then all of the other preceding
> > > > > transactions that modified the link count also need to be persisted.
> > > > >
> > > >
> > > > One thing that bothers me is that the definition of SOMC (as well as
> > > > your refined definition) doesn't mention fsync at all, but all the examples
> > > > only discuss use cases with fsync.
> > >
> > > You can't discuss operational ordering without a point in time to
> > > use as a reference for that ordering.  SOMC behaviour is preserved
> > > at any point the filesystem checkpoints itself, and the only thing
> > > that changes is the scope of that checkpoint. fsync is just a
> > > convenient, widely understood, minimum dependency reference point
> > > that people can reason from. All the interesting ordering problems
> > > come from the minimum dependency reference point (i.e. fsync()), not from
> > > background filesystem-wide checkpoints.
> > >
> >
> > Yes, I was referring to rename as a commonly used operation used
> > by application as "metadata barrier".
>
> What is a "metadata barrier" and what are its semantics supposed to
> be?
>

In this context I mean that the effects of metadata operations before the
barrier (e.g. setxattr, truncate) must be observed after a crash if the
effects of the barrier operation (e.g. the file was renamed) are observed
after the crash.
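
To make that concrete, here is a minimal sketch of the pattern I have
in mind. The xattr name, value and mtime are made up and error
handling is trimmed; this is only an illustration, not a recommended
implementation:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/xattr.h>
#include <unistd.h>

/* Create a file with no data (one big hole), set an xattr and mtime,
 * then "publish" it with rename as the metadata barrier.  No fsync
 * anywhere; under SOMC, if dst is observed after a crash, the
 * setxattr/ftruncate/futimens above must be observed too. */
int create_and_publish(const char *tmp, const char *dst)
{
	/* atime untouched, mtime set to a hypothetical timestamp */
	struct timespec times[2] = {
		{ .tv_nsec = UTIME_OMIT },
		{ .tv_sec = 1234567890 },
	};
	int fd = open(tmp, O_CREAT | O_WRONLY, 0644);

	if (fd < 0)
		return -1;
	if (fsetxattr(fd, "user.myattr", "value", 5, 0) < 0 ||
	    ftruncate(fd, 1 << 20) < 0 ||	/* file is all hole */
	    futimens(fd, times) < 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);
	return rename(tmp, dst);	/* the "metadata barrier" */
}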

> > > > I personally find the SOMC guarantee *much* more powerful in the
> > > > absence of fsync. I have an application that creates sparse files,
> > > > sets xattrs and mtime, and moves them into place. The observed
> > > > requirement is that after a crash those files either exist with the
> > > > correct mtime and xattrs, or do not exist.
> >
> > I wasn't clear:
> > 1. "sparse" meaning no data at all, only a hole.
>
> That's not sparse, that is an empty file or "contains no data".
> "Sparse" means the file has "sparse data" - the data in the file is
> separated by holes. A file that is just a single hole does not
> contain "sparse data", it contains no data at all.
>
> IOWs, if you mean "file has no data in it", then say that as it is a
> clear and unambiguous statement of what the file contains.
>
> > 2. "exist" meaning found at the rename destination
> > Naturally, it is the application's responsibility to clean up temp files
> > that were not moved into the rename destination.
> >
> > >
> > > SOMC does not provide the guarantees you seek in the absence of a
> > > known data synchronisation point:
> > >
> > >         a) a background metadata checkpoint can land anywhere in
> > >         that series of operations and hence recovery will land in an
> > >         intermediate state.
> >
> > Yes, that results in temp files that would be cleaned up on recovery.
>
> Ambiguous. "recovery" is something filesystems do to bring the
> filesystem into a consistent state after a crash. If you are talking
> about application level behaviour, then you need to make that
> explicit.
>
> i.e. I can /assume/ you are talking about application level recovery
> from your previous statement, but that assumption is obviously wrong
> if the application is using O_TMPFILE and linkat rather than rename,
> in which case it will be filesystem level recovery that is doing the
> cleanup. Ambiguous, yes?
>

Yes. From the application writer's POV, what matters is that doing things
"atomically" is possible, whether the filesystem provides recovery from an
incomplete transaction (O_TMPFILE+linkat) or the application cleans up
leftovers on startup (rename).
I have some applications that use the former, and some that use the
latter for directories and for portability to OSes/filesystems that don't
have O_TMPFILE.
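
Roughly, the two variants look like this. Again an illustrative sketch:
error handling is mostly omitted and the xattr name is made up:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/xattr.h>
#include <unistd.h>

/* Variant 1: O_TMPFILE + linkat.  If we crash before linkat, the
 * anonymous inode simply never becomes visible, so filesystem-level
 * recovery does the cleanup for free. */
int publish_tmpfile(const char *dir, const char *dst)
{
	char path[64];
	int fd = open(dir, O_TMPFILE | O_WRONLY, 0644);
	int ret;

	if (fd < 0)
		return -1;
	fsetxattr(fd, "user.myattr", "value", 5, 0);
	snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);
	ret = linkat(AT_FDCWD, path, AT_FDCWD, dst, AT_SYMLINK_FOLLOW);
	close(fd);
	return ret;
}

/* Variant 2: named temp file + rename.  The application removes
 * leftover temp files on startup after a crash. */
int publish_rename(const char *tmp, const char *dst)
{
	int fd = open(tmp, O_CREAT | O_WRONLY, 0644);

	if (fd < 0)
		return -1;
	fsetxattr(fd, "user.myattr", "value", 5, 0);
	close(fd);
	return rename(tmp, dst);
}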

>
> > >         b) there is data that needs writing, and SOMC provides no
> > >         ordering guarantees for data. So after recovery the file could
> > >         exist with correct mtime and xattrs, but have no (or
> > >         partial) data.
> > >
> >
> > There is no data in my use case, only metadata; that is why
> > SOMC without fsync is an option.
>
> Perhaps, but I am not clear on exactly what you are proposing
> because I don't know what the hell a "metadata barrier" is, what it
> does or what it implies for filesystem integrity operations...
>
> > > > To my understanding, SOMC provides a guarantee that the application does
> > > > not need to do any fsync at all,
> > >
> > > Absolutely not true. If the application has atomic creation
> > > requirements that need multiple syscalls to set up, it must
> > > implement them itself and use fsync to synchronise data and metadata
> > > before the "atomic create" operation that makes it visible to the
> > > application.
> > >
> > > SOMC only guarantees what /metadata/ you see at a filesystem
> > > synchronisation point; it does not provide ACID semantics to a
> > > random set of system calls into the filesystem.
> > >
> >
> > So I re-state my claim above after having explained the use case.
>
> With words that I can only guess the meaning of.
>
> Amir, if you are asking a complex question as to whether something
> conforms to a specification, then please slow down and take the time
> to define all the terms, the initial state, the observable behaviour
> that you expect to see, etc in clear, unambiguous and well defined
> terms.  Otherwise the question cannot be answered....
>

Sure. TBH, I didn't even dare to ask the complex question yet,
because it was hard for me to define all the terms. I sketched the
use case with the example of create+setxattr+truncate+rename
because I figured it was rather easy to understand.

The more complex question has to do with an explicit "data dependency"
operation. At the moment, I will not explain what that means in detail,
but I am sure you can figure it out.
With fdatasync+rename, fdatasync creates a dependency between the
data and metadata of the file, so with SOMC, if the file is observed
after a crash at the rename destination, it also contains the data
changes made before fdatasync. But fdatasync gives a stronger guarantee
than what my application actually needs, because in many cases it will
cause a journal flush. What it really needs is the equivalent of
filemap_write_and_wait(): the metadata doesn't need to be flushed, as
rename takes care of the metadata ordering guarantees.
As far as I can tell, there is no "official" API to do what I need,
and there is certainly no documentation about this expected behavior.
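
The closest userspace approximation I can think of is sync_file_range(),
but its man page explicitly disclaims any data integrity guarantee (it
does not flush disk write caches), and whether it establishes the SOMC
data dependency I describe is exactly the undocumented part. So treat
this as a sketch of the intent, not a recommendation:

#define _GNU_SOURCE
#include <fcntl.h>

/* Sketch only: write out and wait on the file's dirty pages without
 * forcing a journal flush, then let rename provide the metadata
 * ordering.  Whether this orders the data against the rename under
 * SOMC is the open question - sync_file_range(2) itself promises
 * no integrity whatsoever. */
static int soft_fdatasync(int fd)
{
	return sync_file_range(fd, 0, 0,
			       SYNC_FILE_RANGE_WAIT_BEFORE |
			       SYNC_FILE_RANGE_WRITE |
			       SYNC_FILE_RANGE_WAIT_AFTER);
}
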
Apologies if the above was not clear; I promise to explain in person
at LSF to whoever is interested.

Judging by the volume and passion of this thread, I think a
session on the LSF fs track would probably be a good idea.
[CC Josef and Anna.]

I find our behavior as a group of filesystem developers on this matter
slightly bipolar: on the one hand, we wish to maintain implementation
freedom for future performance improvements and don't wish to commit
to existing behavior by documenting it. On the other hand, we don't
wish to break existing applications, whose expectations of filesystems
go far beyond what filesystems guarantee in documentation.

There is no one good answer that fits all aspects of this subject, and I
personally agree with Ted on not wanting to document the ext4 "hacks"
that are meant to cater to misbehaving applications.

I think it is good that Jayashree posted this patch as a basis for discussion
of what needs to be documented and how.
Eventually, instead of trying to formalize the expected filesystem behavior,
it might be better to just encode the expected crash behavior in tests
written in a readable manner, as Jayashree has already started to do.
Or maybe there is room for both documentation and tests.

Thanks,
Amir.

