linux-btrfs.vger.kernel.org archive mirror
From: Filipe Manana <fdmanana@gmail.com>
To: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: Goldwyn Rodrigues <rgoldwyn@suse.de>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	Johannes Thumshirn <Johannes.Thumshirn@wdc.com>,
	Christoph Hellwig <hch@infradead.org>,
	dsterba@suse.cz
Subject: Re: [PATCH] iomap: Return zero in case of unsuccessful pagecache invalidation before DIO
Date: Wed, 3 Jun 2020 20:10:50 +0100	[thread overview]
Message-ID: <CAL3q7H4gHHHKMNifbTthvT3y3KaTZDSX+L0z7f1uXz7rzDe8BA@mail.gmail.com> (raw)
In-Reply-To: <20200603190252.GG8204@magnolia>

On Wed, Jun 3, 2020 at 8:02 PM Darrick J. Wong <darrick.wong@oracle.com> wrote:
>
> On Wed, Jun 03, 2020 at 12:32:15PM +0100, Filipe Manana wrote:
> > On Wed, Jun 3, 2020 at 12:23 PM Filipe Manana <fdmanana@gmail.com> wrote:
> > >
> > > On Mon, Jun 1, 2020 at 4:16 PM Goldwyn Rodrigues <rgoldwyn@suse.de> wrote:
> > > >
> > > > On 17:23 28/05, Darrick J. Wong wrote:
> > > > > On Thu, May 28, 2020 at 02:21:03PM -0500, Goldwyn Rodrigues wrote:
> > > > > >
> > > > > > Filesystems such as btrfs are unable to guarantee page invalidation
> > > > > > because pages could be locked as a part of the extent. Return zero
> > > > >
> > > > > Locked for what?  filemap_write_and_wait_range should have just cleaned
> > > > > them off.
> > > > >
> > > > > > in case a page cache invalidation is unsuccessful so filesystems can
> > > > > > fallback to buffered I/O. This is similar to
> > > > > > generic_file_direct_write().
> > > > > >
> > > > > > This takes care of the following invalidation warning during btrfs
> > > > > > mixed buffered and direct I/O using iomap_dio_rw():
> > > > > >
> > > > > > Page cache invalidation failure on direct I/O.  Possible data
> > > > > > corruption due to collision with buffered I/O!
> > > > > >
> > > > > > Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> > > > > >
> > > > > > diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> > > > > > index e4addfc58107..215315be6233 100644
> > > > > > --- a/fs/iomap/direct-io.c
> > > > > > +++ b/fs/iomap/direct-io.c
> > > > > > @@ -483,9 +483,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> > > > > >      */
> > > > > >     ret = invalidate_inode_pages2_range(mapping,
> > > > > >                     pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
> > > > > > -   if (ret)
> > > > > > -           dio_warn_stale_pagecache(iocb->ki_filp);
> > > > > > -   ret = 0;
> > > > > > +   /*
> > > > > > +    * If a page can not be invalidated, return 0 to fall back
> > > > > > +    * to buffered write.
> > > > > > +    */
> > > > > > +   if (ret) {
> > > > > > +           if (ret == -EBUSY)
> > > > > > +                   ret = 0;
> > > > > > +           goto out_free_dio;
> > > > >
> > > > > XFS doesn't fall back to buffered io when directio fails, which means
> > > > > this will cause a regression there.
> > > > >
> > > > > Granted mixing write types is bogus...
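
For reference, the fallback the commit message points to in generic_file_direct_write() looks roughly like this (paraphrased from mm/filemap.c from memory, not an exact quote):

        written = invalidate_inode_pages2_range(mapping,
                                        pos >> PAGE_SHIFT, end);
        /*
         * If a page can not be invalidated, return 0 to fall back
         * to buffered write.
         */
        if (written) {
                if (written == -EBUSY)
                        return 0;
                goto out;
        }

The patch mirrors that return-zero convention, which only works if the caller actually falls back to buffered IO when it sees a zero return.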
> > > > >
> > > >
> > > > I have not seen page invalidation failure errors on XFS, but what should
>
> What happens if you try to dirty an mmap page at the same time as
> issuing a directio?
>
> > > > happen hypothetically if they do occur? Carry on with the direct I/O?
> > > > Would an error return like -ENOTBLK be better?
>
> In the old days, we would only WARN when we encountered collisions like
> this.  How about only setting EIO in the mapping if we fail the
> pagecache invalidation in directio completion?  If a buffered write
> dirties the page after the direct write thread flushes the dirty pages
> but before invalidation, we can argue that we didn't lose anything; the
> direct write simply happened after the buffered write.
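
A minimal sketch of one reading of that suggestion (not a patch from this thread, and dio_warn_collision() is a made-up helper): the pre-submission invalidation would only warn, while the -EIO poisoning of the mapping would stay with the completion-side invalidation, where blocks really were written behind the page cache's back.

        /* Before submitting the direct write: warn, but do not set wb_err. */
        ret = invalidate_inode_pages2_range(mapping,
                        pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
        if (ret)
                dio_warn_collision(iocb->ki_filp);  /* hypothetical: warning only */
        ret = 0;

        /*
         * The invalidation in iomap_dio_complete() would keep calling
         * dio_warn_stale_pagecache(), which both prints the warning and
         * does errseq_set(&mapping->wb_err, -EIO).
         */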
>
> XFS doesn't implement buffered write fallback, and it never has.  Either
> the entire directio succeeds, or it returns a negative error code.  Some
> of the iomap_dio_rw callers (ext4, gfs2) will notice a short direct
> write and try to finish the rest with buffered io, but xfs and zonefs do
> not.
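
For comparison, the short-write fallback those callers implement follows roughly the pattern below. Every example_* name is invented for illustration, and the iomap_dio_rw() arguments assume its current five-argument form, so treat this as a sketch of the pattern rather than real filesystem code:

        static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
        {
                ssize_t direct, buffered;

                direct = iomap_dio_rw(iocb, from, &example_iomap_ops, NULL,
                                      is_sync_kiocb(iocb));
                if (direct < 0 || !iov_iter_count(from))
                        return direct;

                /* Short direct write: finish the remainder through the page cache. */
                buffered = example_buffered_write(iocb, from);
                if (buffered < 0)
                        return direct ? direct : buffered;
                return direct + buffered;
        }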
>
> The net effect of this (on xfs anyway) is that when buffered and direct
> writes collide, before we'd make the buffered writer lose, now we make
> the direct writer lose.
>
> You also /could/ propose teaching xfs how to fall back to an
> invalidating synchronous buffered write like ext4 does, but that's not
> part of this patch set, and that's not a behavior I want to introduce
> suddenly during the merge window.
>
> > > It doesn't make much sense to me to emit the warning and then proceed
> > > with the direct IO write path anyway, as if nothing happened.
> > > If we are concerned about possible corruption, we should either return
> > > an error or fall back to buffered IO, just like
> > > generic_file_direct_write() does, and not leave any possibility of
> > > corruption.
> > >
> > > Btw, this is causing a regression in Btrfs now. The problem is that
> > > dio_warn_stale_pagecache() sets an EIO error in the inode's mapping:
> > >
> > > errseq_set(&inode->i_mapping->wb_err, -EIO);
> > >
> > > So the next fsync on the file will return that error, despite the
> > > fsync having completed successfully without any errors.
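
To spell out the mechanism (a simplified illustration of existing helpers, not new code): the warning path records the error in the mapping's writeback error sequence, and the generic fsync error reporting later returns whatever was recorded since the file last checked, even though writeback itself succeeded.

        errseq_set(&inode->i_mapping->wb_err, -EIO);  /* done by dio_warn_stale_pagecache() */

        /* ... later, from the fsync path ... */
        err = file_check_and_advance_wb_err(file);    /* returns -EIO despite clean writeback */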
> > >
> > > Since the patchset to make btrfs direct IO use iomap is already in Linus'
> > > tree, we need to fix this somehow.
>
> Y'all /just/ sent the pull request containing that conversion 2 days
> ago.  Why did you move forward with that when you knew there were
> unresolved fstests failures?
>
> > > This makes generic/547 fail often, for example - buffered write against a
> > > file + direct IO write + fsync - the latter returns -EIO.
> >
> > Just to make it clear, despite the -EIO error, there was actually no
> > data loss or corruption (generic/547 checks that),
> > since the direct IO write path in btrfs figures out there's a buffered
> > write still ongoing and waits for it to complete before proceeding
> > with the dio write.
> >
> > Nevertheless, it's still a regression: -EIO shouldn't be returned, as
> > everything went fine.
>
> Now I'm annoyed because I feel like you're trying to strong-arm me into
> making last minute changes to iomap when you could have held off for
> another cycle.

If you are talking to me, I'm not trying to strong-arm anyone nor
point fingers.
I'm just reporting a problem that I found earlier today while testing
some work I was doing.

Thanks.

>
> While I'm pretty sure your analysis is correct that we could /probably/
> get away with only setting EIO if we can't invalidate the cache after
> we've already written the disk blocks directly (because then we really
> did lose something), I /really/ want these kinds of behavioral changes
> to shared code to soak in for-next for long enough that we can work out
> the kinks without exposing the upstream kernel to unnecessary risk.
>
> This conversation would be /much/ different had you not just merged the
> btrfs directio iomap conversion yesterday.
>
> --D
>



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”


Thread overview: 14+ messages
2020-05-28 19:21 [PATCH] iomap: Return zero in case of unsuccessful pagecache invalidation before DIO Goldwyn Rodrigues
2020-05-29  0:23 ` Darrick J. Wong
2020-05-29 10:55   ` Filipe Manana
2020-05-29 11:31     ` Matthew Wilcox
2020-05-29 11:50       ` Filipe Manana
2020-05-29 12:45         ` Goldwyn Rodrigues
2020-06-01 15:16   ` Goldwyn Rodrigues
2020-06-03 11:23     ` Filipe Manana
2020-06-03 11:32       ` Filipe Manana
2020-06-03 19:02         ` Darrick J. Wong
2020-06-03 19:10           ` Filipe Manana [this message]
2020-06-03 19:18             ` Matthew Wilcox
2020-06-03 21:07           ` Goldwyn Rodrigues
2020-06-04 13:55           ` David Sterba
