From: Sargun Dhillon <sargun@sargun.me>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Amir Goldstein <amir73il@gmail.com>,
	overlayfs <linux-unionfs@vger.kernel.org>,
	Miklos Szeredi <miklos@szeredi.hu>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Giuseppe Scrivano <gscrivan@redhat.com>,
	Daniel J Walsh <dwalsh@redhat.com>,
	David Howells <dhowells@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Chengguang Xu <cgxu519@mykernel.net>
Subject: Re: [RFC PATCH 3/3] overlay: Add the ability to remount volatile directories when safe
Date: Mon, 16 Nov 2020 18:25:56 +0000	[thread overview]
Message-ID: <20201116182553.GB18698@ircssh-2.c.rugged-nimbus-611.internal> (raw)
In-Reply-To: <20201116163615.GA17680@redhat.com>

On Mon, Nov 16, 2020 at 11:36:15AM -0500, Vivek Goyal wrote:
> On Mon, Nov 16, 2020 at 05:20:04PM +0200, Amir Goldstein wrote:
> > On Mon, Nov 16, 2020 at 4:42 PM Vivek Goyal <vgoyal@redhat.com> wrote:
> > >
> > > On Sun, Nov 15, 2020 at 08:57:58PM -0800, Sargun Dhillon wrote:
> > > > Overlayfs added the ability to set up mounts where all syncs can be
> > > > short-circuited (2a99ddacee43: ovl: provide a mount option "volatile").
> > > >
> > > > A user might want to remount this fs, but we do not let the user because
> > > > of the "incompat" detection feature. In the case of volatile, it is safe
> > > > to do something like[1]:
> > > >
> > > > $ sync -f /root/upperdir
> > > > $ rm -rf /root/workdir/incompat/volatile
> > > >
> > > > There are two ways to go about this. You can call sync on the underlying
> > > > filesystem, check the error code, and delete the dirty file if everything
> > > > is clean. If you're running lots of containers on the same filesystem, or
> > > > you want to avoid all unnecessary I/O, this may be suboptimal.
> > > >
> > >
> > > Hi Sargun,
> > >
> > > I had asked a bunch of questions in the previous mail thread to get
> > > more clarity on your requirements but never got a response. Answers
> > > would have helped me understand your requirements better.
> > >
> > > How about the following patch set, which seems to sync only the dirty
> > > upper inodes belonging to a particular overlayfs instance:
> > >
> > > https://lore.kernel.org/linux-unionfs/20201113065555.147276-1-cgxu519@mykernel.net/
> > >
> > > So if we could implement a mount option which ignores fsync but, upon
> > > syncfs, syncs only the dirty inodes of that overlayfs instance, it
> > > will make sure we are not syncing the whole of the upper fs. And we
> > > could do this syncing on unmount of the overlayfs and remove the
> > > dirty file upon a successful sync.
> > >
> > > Looks like this would be a much simpler method and should be able to
> > > meet your requirements (as long as you are fine with syncing the
> > > dirty upper inodes of this overlay instance on unmount).
> > >
> > 
> > Do note that the latest patch set by Chengguang not only syncs dirty
> > inodes of this overlay instance, but also waits for in-flight writeback on
> > all the upper fs inodes and I think that with !ovl_should_sync(ofs)
> > we will not re-dirty the ovl inodes and lose track of the list of dirty
> > inodes - maybe that can be fixed.
> > 
> > Also, I am not sure anymore that we can safely remove the dirty file
> > after the sync of dirty inodes on sync_fs and umount. If someone did
> > sync_fs before us and consumed the error, we may have a copied-up file
> > in upper whose data is not on disk, but when we sync_fs on unmount we
> > won't get an error? Not sure.
> 
> Maybe we can save the errseq_t when mounting the overlay and compare it
> with the errseq_t stored in the upper sb at unmount. That will tell us
> whether an error has happened since we mounted the overlay (similar to
> what Sargun is doing).
> 
> In fact, if this is a concern, we have this issue with user space
> "sync <upper>" too? Another sync might fail while this one succeeds,
> and we will think upper is just fine. Maybe container tools can keep
> a file/dir open from the time of mount and call syncfs using that fd
> instead (and that should catch errors that happened since that fd
> was opened, I am assuming).
> 
> > 
> > I am less concerned about ways to allow re-mount of volatile
> > overlayfs than I am about turning volatile overlayfs into non-volatile.
> 
> If we are not interested in converting volatile containers into
> non-volatile ones, then the whole point of this patch series is to
> detect whether any writeback error has happened. If a writeback error
> has happened, we detect that at remount and possibly throw away the
> container.
> 
> What happens today if a writeback error has happened? Is that page
> thrown away from the page cache and read back from disk? IOW, will the
> user lose the data written to the page cache because writeback failed?
> I am assuming we can't keep the dirty page around for very long,
> otherwise it has the potential to fill up all the available RAM with
> dirty pages which can't be written back.
> 
> Why is it important to detect writeback errors only during remount?
> What happens if a container overlay instance is already mounted and a
> writeback error happens? We will not detect that, right?
> 
> IOW, if capturing writeback errors is important for volatile
> containers, then capturing them only at remount time is not enough.
> Normally fsync/syncfs would catch them, and now we have skipped those,
> so in the process we have lost the mechanism to detect writeback
> errors for volatile containers?
> 
> Thanks
> Vivek
> 

At least for my use case, any kind of syncing is generally bad unless
it can be controlled and will:
1. Generate a limited set of IOPs
2. Not block metadata operations
----

This is a challenge left to filesystem developers that hasn't really
been addressed yet. The closest we've seen is individual block devices
per upperdir using something like device-mapper, with throttling at
that level.

I liken this to "eatmydata". I think it makes sense to force the user to
go from volatile -> volatile. I do think that adding the safety feature
which explicitly warns users that their system is in a state where they
may be experiencing data loss (checking errseq_t) is useful. Although we
emit the error via dmesg today, if we move over to the new mount API we
could emit the error from the fsfd, either forcing the user to set
another flag, "reallyvolatile", or deleting the dirty bit on disk. I'm
partial to the flag approach because it involves less API surface area.

That's partially because one of the overall use cases I want to be able
to implement is LXC-style seccomp-fd based mount syscall interception,
and the fewer things to juggle (and corner cases to handle), the better.

Thread overview: 34+ messages
2020-11-16  4:57 [RFC PATCH 0/3] Make overlayfs volatile mounts reusable Sargun Dhillon
2020-11-16  4:57 ` [RFC PATCH 1/3] fs: Add s_instance_id field to superblock for unique identification Sargun Dhillon
2020-11-16  5:07   ` Sargun Dhillon
2020-11-16  4:57 ` [RFC PATCH 2/3] overlay: Add ovl_do_getxattr helper Sargun Dhillon
2020-11-16 11:00   ` Amir Goldstein
2020-11-16  4:57 ` [RFC PATCH 3/3] overlay: Add the ability to remount volatile directories when safe Sargun Dhillon
2020-11-16  9:31   ` Amir Goldstein
2020-11-16 10:30     ` Sargun Dhillon
2020-11-16 11:17       ` Amir Goldstein
2020-11-16 12:52         ` Amir Goldstein
2020-11-16 14:42   ` Vivek Goyal
2020-11-16 14:45     ` Vivek Goyal
2020-11-16 15:20     ` Amir Goldstein
2020-11-16 16:36       ` Vivek Goyal
2020-11-16 18:25         ` Sargun Dhillon [this message]
2020-11-16 19:27           ` Vivek Goyal
2020-11-16 20:18         ` Amir Goldstein
2020-11-16 21:09           ` Vivek Goyal
2020-11-17  5:33             ` Amir Goldstein
2020-11-17 14:48               ` Vivek Goyal
2020-11-17 15:24                 ` Amir Goldstein
2020-11-17 15:40                   ` Vivek Goyal
2020-11-17 16:46                   ` Vivek Goyal
2020-11-17 18:03                     ` Amir Goldstein
2020-11-17 18:29                       ` Vivek Goyal
2020-11-18  7:24                         ` Amir Goldstein
2020-11-18  8:27                           ` Sargun Dhillon
2020-11-18 10:46                             ` Amir Goldstein
2020-11-18 14:55                           ` Vivek Goyal
2020-11-16 21:26           ` Vivek Goyal
2020-11-16 22:14             ` Sargun Dhillon
2020-11-17  5:41               ` Amir Goldstein
2020-11-17 17:05               ` Vivek Goyal
2020-11-16 17:38     ` Sargun Dhillon
