From: Amir Goldstein
Date: Mon, 18 Mar 2019 09:13:58 +0200
Subject: Re: [PATCH v2] Documenting the crash-recovery guarantees of Linux file systems
To: Dave Chinner
Cc: Jayashree, fstests, linux-fsdevel, linux-doc@vger.kernel.org, Vijaychidambaram Velayudhan Pillai, Theodore Tso, chao@kernel.org, Filipe Manana, Jonathan Corbet, Josef Bacik, Anna Schumaker
List-ID: linux-fsdevel@vger.kernel.org

On Mon, Mar 18, 2019 at 12:16 AM Dave Chinner wrote:
>
> On Fri, Mar 15, 2019 at 05:44:49AM +0200, Amir Goldstein wrote:
> > On Fri, Mar 15, 2019 at 5:03 AM Dave Chinner wrote:
> > >
> > > On Thu, Mar 14, 2019 at 09:19:03AM +0200, Amir Goldstein wrote:
> > > > On Thu, Mar 14, 2019 at 3:19 AM Dave Chinner wrote:
> > > > > On Tue, Mar 12, 2019 at 02:27:00PM -0500, Jayashree wrote:
> > > > > > +Strictly Ordered Metadata Consistency
> > > > > > +-------------------------------------
> > > > > > +With each file system providing varying levels of persistence
> > > > > > +guarantees, a consensus in this regard will benefit application
> > > > > > +developers, who can then work with certain fixed assumptions about
> > > > > > +file system guarantees. Dave Chinner proposed a unified model called
> > > > > > +Strictly Ordered Metadata Consistency (SOMC) [5].
> > > > > > +
> > > > > > +Under this scheme, the file system guarantees to persist all previous
> > > > > > +dependent modifications to the object upon fsync(). If you fsync() an
> > > > > > +inode, it will persist all the changes required to reference the inode
> > > > > > +and its data. SOMC can be defined as follows [6]:
> > > > > > +
> > > > > > +If op1 precedes op2 in program order (in-memory execution order), and
> > > > > > +op1 and op2 share a dependency, then op2 must not be observed by a
> > > > > > +user after recovery without also observing op1.
> > > > > > +
> > > > > > +Unfortunately, SOMC's definition depends upon whether two operations
> > > > > > +share a dependency, which could be file-system specific. It might
> > > > > > +require a developer to understand file-system internals to know if
> > > > > > +SOMC would order one operation before another.
> > > > >
> > > > > That's largely an internal implementation detail, and users should
> > > > > not have to care about the internal implementation because the
> > > > > fundamental dependencies are all defined by the directory hierarchy
> > > > > relationships that users can see and manipulate.
> > > > >
> > > > > i.e. fs internal dependencies only increase the size of the graph
> > > > > that is persisted, but it will never be reduced to less than what
> > > > > the user can observe in the directory hierarchy.
> > > > >
> > > > > So this can be further refined:
> > > > >
> > > > > If op1 precedes op2 in program order (in-memory execution
> > > > > order), and op1 and op2 share a user visible reference, then
> > > > > op2 must not be observed by a user after recovery without
> > > > > also observing op1.
> > > > >
> > > > > e.g. in the case of the parent directory - the parent has a link
> > > > > count. Hence every create, unlink, rename, hard link, symlink, etc
> > > > > operation in a directory modifies a user visible link count
> > > > > reference. Hence fsync of one of those children will persist the
> > > > > directory link count, and then all of the other preceding
> > > > > transactions that modified the link count also need to be persisted.
> > > > >
> > > >
> > > > One thing that bothers me is that the definition of SOMC (as well as
> > > > your refined definition) doesn't mention fsync at all, but all the
> > > > examples only discuss use cases with fsync.
> > >
> > > You can't discuss operational ordering without a point in time to
> > > use as a reference for that ordering. SOMC behaviour is preserved
> > > at any point the filesystem checkpoints itself, and the only thing
> > > that changes is the scope of that checkpoint. fsync is just a
> > > convenient, widely understood, minimum dependency reference point
> > > that people can reason from. All the interesting ordering problems
> > > come from the minimum dependency reference point (i.e. fsync()), not
> > > from background filesystem-wide checkpoints.
> > >
> >
> > Yes, I was referring to rename as a commonly used operation used
> > by applications as a "metadata barrier".
>
> What is a "metadata barrier" and what are its semantics supposed to
> be?
>

In this context I mean that the effects of metadata operations before
the barrier (e.g. setxattr, truncate) must be observed after crash if
the effects of the barrier operation (e.g. the file was renamed) are
observed after crash.
> > > > I personally find the SOMC guarantee *much* more powerful in the absence
> > > > of fsync. I have an application that creates sparse files, sets xattrs,
> > > > mtime and moves them into place. The observed requirement is that after
> > > > crash those files either exist with correct mtime, xattr, or not exist.
> >
> > I wasn't clear:
> > 1. "sparse" meaning no data at all, only a hole.
>
> That's not sparse, that is an empty file or "contains no data".
> "Sparse" means the file has "sparse data" - the data in the file is
> separated by holes. A file that is just a single hole does not
> contain "sparse data", it contains no data at all.
>
> IOWs, if you mean "file has no data in it", then say that as it is a
> clear and unambiguous statement of what the file contains.
>
> > 2. "exist" meaning found at the rename destination.
> > Naturally, it is the application's responsibility to clean up temp
> > files that were not moved into the rename destination.
> >
> > > SOMC does not provide the guarantees you seek in the absence of a
> > > known data synchronisation point:
> > >
> > > a) a background metadata checkpoint can land anywhere in
> > > that series of operations and hence recovery will land in an
> > > intermediate state.
> >
> > Yes, that results in temp files that would be cleaned up on recovery.
>
> Ambiguous. "recovery" is something filesystems do to bring the
> filesystem into a consistent state after a crash. If you are talking
> about application level behaviour, then you need to make that
> explicit.
>
> i.e. I can /assume/ you are talking about application level recovery
> from your previous statement, but that assumption is obviously wrong
> if the application is using O_TMPFILE and linkat rather than rename,
> in which case it will be filesystem level recovery that is doing the
> cleanup. Ambiguous, yes?
>

Yes. From the application writer's POV, what matters is that doing
things "atomically" is possible.
Either the filesystem provides recovery from the incomplete transaction
(O_TMPFILE+linkat), or the application cleans up leftovers on startup
(rename). I have some applications that use the former and some that
use the latter - for directories, and for portability with OSes and
filesystems that don't have O_TMPFILE.

> > > b) there is data that needs writing, and SOMC provides no
> > > ordering guarantees for data. So after recovery file could
> > > exist with correct mtime and xattrs, but have no (or
> > > partial) data.
> > >
> >
> > There is no data in my use case, only metadata; that is why
> > SOMC without fsync is an option.
>
> Perhaps, but I am not clear on exactly what you are proposing
> because I don't know what the hell a "metadata barrier" is, what it
> does or what it implies for filesystem integrity operations...
>
> > > > To my understanding, SOMC provides a guarantee that the application
> > > > does not need to do any fsync at all,
> > >
> > > Absolutely not true. If the application has atomic creation
> > > requirements that need multiple syscalls to set up, it must
> > > implement them itself and use fsync to synchronise data and metadata
> > > before the "atomic create" operation that makes it visible to the
> > > application.
> > >
> > > SOMC only guarantees what /metadata/ you see at a filesystem
> > > synchronisation point; it does not provide ACID semantics to a
> > > random set of system calls into the filesystem.
> > >
> >
> > So I re-state my claim above after having explained the use case.
>
> With words that I can only guess the meaning of.
>
> Amir, if you are asking a complex question as to whether something
> conforms to a specification, then please slow down and take the time
> to define all the terms, the initial state, the observable behaviour
> that you expect to see, etc in clear, unambiguous and well defined
> terms. Otherwise the question cannot be answered....
>

Sure.
TBH, I didn't even dare to ask the complex question yet, because it was
hard for me to define all the terms. I sketched the use case with the
example of create+setxattr+truncate+rename because I figured it is
rather easy to understand.

The more complex question has to do with an explicit "data dependency"
operation. At the moment, I will not explain what that means in detail,
but I am sure you can figure it out. With fdatasync+rename, fdatasync
creates a dependency between the data and metadata of the file, so with
SOMC, if the file is observed after crash at the rename destination, it
also contains the data changes made before fdatasync. But fdatasync
gives a stronger guarantee than what my application actually needs,
because in many cases it will cause a journal flush. What it really
needs is filemap_write_and_wait(); metadata doesn't need to be flushed,
as rename takes care of the metadata ordering guarantees. As far as I
can tell, there is no "official" API to do what I need, and there is
certainly no documentation of this expected behavior.

Apologies if the above was not clear; I promise to explain in person
during LSF to whoever is interested. Judging by the volume and passion
of this thread, I think a session in the LSF fs track would probably be
a good idea. [CC Josef and Anna.]

I find our behavior as a group of filesystem developers on this matter
slightly bi-polar - on the one hand, we wish to maintain implementation
freedom for future performance improvements and don't wish to commit to
existing behavior by documenting it. On the other hand, we wish not to
break existing applications, whose expectations of filesystems are far
from what filesystems guarantee in documentation. There is no one good
answer that fits all aspects of this subject, and I personally agree
with Ted on not wanting to document the ext4 "hacks" that are meant to
cater to misbehaving applications.

I think it is good that Jayashree posted this patch as a basis for
discussion of what needs to be documented and how.
Eventually, instead of trying to formalize the expected filesystem
behavior, it might be better to just encode the expected crash behavior
in tests written in a readable manner, as Jayashree has already started
to do. Or maybe there is room for both documentation and tests.

Thanks,
Amir.