From: Pankaj Gupta <pagupta@redhat.com>
To: Dave Chinner <david@fromorbit.com>,
darrick wong <darrick.wong@oracle.com>
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-acpi@vger.kernel.org,
qemu-devel@nongnu.org, linux-ext4@vger.kernel.org,
linux-xfs@vger.kernel.org,
dan j williams <dan.j.williams@intel.com>,
zwisler@kernel.org, vishal l verma <vishal.l.verma@intel.com>,
dave jiang <dave.jiang@intel.com>,
mst@redhat.com, jasowang@redhat.com, willy@infradead.org,
rjw@rjwysocki.net, hch@infradead.org, lenb@kernel.org,
jack@suse.cz, tytso@mit.edu,
adilger kernel <adilger.kernel@dilger.ca>,
darrick wong <darrick.wong@oracle.com>,
lcapitulino@redhat.com, kwolf@redhat.com, imammedo@redhat.com,
jmoyer@redhat.com, nilal@redhat.com, riel@surriel.com,
stefanha@redhat.com, aarcange@redhat.com, david@redhat.com,
cohuck@redhat.com,
xiaoguangrong eric <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH v4 5/5] xfs: disable map_sync for async flush
Date: Thu, 4 Apr 2019 02:12:30 -0400 (EDT)
Message-ID: <1518295599.17367903.1554358350929.JavaMail.zimbra@redhat.com>
In-Reply-To: <20190403220912.GB26298@dastard>

Hi Dave,

> > Virtio pmem provides an asynchronous host page cache flush
> > mechanism. We don't support 'MAP_SYNC' with virtio pmem
> > and xfs.
> >
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> > fs/xfs/xfs_file.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > index 1f2e2845eb76..dced2eb8c91a 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -1203,6 +1203,14 @@ xfs_file_mmap(
> >  	if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
> >  		return -EOPNOTSUPP;
> >  
> > +	/* We don't support synchronous mappings with DAX files if
> > +	 * dax_device is not synchronous.
> > +	 */
> > +	if (IS_DAX(file_inode(filp)) && !dax_synchronous(
> > +	    xfs_find_daxdev_for_inode(file_inode(filp))) &&
> > +	    (vma->vm_flags & VM_SYNC))
> > +		return -EOPNOTSUPP;
> > +
> >  	file_accessed(filp);
> >  	vma->vm_ops = &xfs_file_vm_ops;
> >  	if (IS_DAX(file_inode(filp)))
>
> All this ad hoc IS_DAX conditional logic is getting pretty nasty.
>
> xfs_file_mmap(
> ....
> {
> 	struct inode *inode = file_inode(filp);
>
> 	if (vma->vm_flags & VM_SYNC) {
> 		if (!IS_DAX(inode))
> 			return -EOPNOTSUPP;
> 		if (!dax_synchronous(xfs_find_daxdev_for_inode(inode)))
> 			return -EOPNOTSUPP;
> 	}
>
> 	file_accessed(filp);
> 	vma->vm_ops = &xfs_file_vm_ops;
> 	if (IS_DAX(inode))
> 		vma->vm_flags |= VM_HUGEPAGE;
> 	return 0;
> }
Sure, this is better.
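
Just to spell out the user-visible effect (illustrative snippet, not part
of the patch; the mount path is made up): an application requesting a
MAP_SYNC mapping of a DAX file backed by a non-synchronous dax_device,
e.g. virtio-pmem, will now see the mmap() itself fail:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03	/* fallback for older libc headers */
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000		/* fallback for older libc headers */
#endif

int main(void)
{
	/* Example path only; any file on a virtio-pmem backed DAX mount. */
	int fd = open("/mnt/pmem/testfile", O_RDWR);

	if (fd < 0)
		return 1;
	/* MAP_SYNC is only valid together with MAP_SHARED_VALIDATE. */
	if (mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		 MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0) == MAP_FAILED)
		printf("mmap: %s\n", strerror(errno));	/* EOPNOTSUPP here */
	return 0;
}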
>
>
> Even better, factor out all the "MAP_SYNC supported" checks into a
> helper so that the filesystem code just doesn't have to care about
> the details of checking for DAX+MAP_SYNC support....
O.k. I will add one common helper function for both the ext4 & xfs
filesystems. Thanks for the suggestion.
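
Roughly something like the sketch below (the helper name and placement
are only illustrative at this point), so that both filesystems ask one
question instead of open-coding the DAX+MAP_SYNC checks:

/*
 * Sketch of a common helper, e.g. in include/linux/dax.h: a mapping
 * without VM_SYNC is always supported; a MAP_SYNC mapping requires a
 * DAX file whose backing dax_device is synchronous.
 */
static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
					    struct dax_device *dax_dev)
{
	if (!(vma->vm_flags & VM_SYNC))
		return true;
	if (!IS_DAX(file_inode(vma->vm_file)))
		return false;
	return dax_synchronous(dax_dev);
}

xfs_file_mmap() would then reduce to:

	if (!daxdev_mapping_supported(vma,
			xfs_find_daxdev_for_inode(file_inode(filp))))
		return -EOPNOTSUPP;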
Best regards,
Pankaj
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>