From: Miklos Szeredi <mszeredi@redhat.com>
To: Boaz Harrosh <boazh@netapp.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Ric Wheeler <rwheeler@redhat.com>,
	Steve French <smfrench@gmail.com>,
	Steven Whitehouse <swhiteho@redhat.com>,
	Jeff Moyer <jmoyer@redhat.com>, Sage Weil <sweil@redhat.com>,
	Jan Kara <jack@suse.cz>, Amir Goldstein <amir73il@gmail.com>,
	Andy Rudoff <andy.rudoff@intel.com>,
	Anna Schumaker <Anna.Schumaker@netapp.com>,
	Amit Golander <Amit.Golander@netapp.com>,
	Sagi Manole <sagim@netapp.com>,
	Shachar Sharon <Shachar.Sharon@netapp.com>
Subject: Re: [RFC 1/7] mm: Add new vma flag VM_LOCAL_CPU
Date: Mon, 7 May 2018 12:46:24 +0200	[thread overview]
Message-ID: <CAOssrKf2JpxAkcaO+CTBLZbG-OYMKx_DkJv_XbuV0dzQRjapLQ@mail.gmail.com> (raw)
In-Reply-To: <cfe92bf2-cca1-085a-95f9-fb298e09f709@netapp.com>

On Wed, Apr 25, 2018 at 2:21 PM, Boaz Harrosh <boazh@netapp.com> wrote:
>
> On 03/15/2018 02:42 PM, Miklos Szeredi wrote:
>>
>> Ideally most of the complexity would be in the page cache.  Not sure
>> how ready it is to handle pmem pages?
>>
>> The general case (non-pmem) will always have to be handled
>> differently; you've just stated that it's much less latency sensitive
>> and needs async handling.    Basing the design on just trying to make
>> it use the same mechanism (userspace copy) is flawed in my opinion,
>> since it's suboptimal for either case.
>>
>> Thanks,
>> Miklos
>
>
> OK, so I was thinking hard about all this, and I am changing my mind
> and agreeing with all that was said.
>
> I want the usFS plugin to have all the different options, and an easy
> way to tell the kernel which mode to use.
>
> Let me summarize all the options:
>
> 1. Sync, userspace copy directly to app-buffers (current implementation)
>
> 2. Async block device operation (non-pmem)
>     zuf owns all devices, pmem and non-pmem, at mount time and provides
>     very efficient access to both. In the harddisk / SSD case, as part
>     of an IO call the server returns -EWOULDBLOCK and in the background
>     issues a scatter-gather call through zuf.
>     The memory target for the IO can be pmem, the app buffers directly
>     (DIO), or transient server buffers.
>     On completion an upcall is made to zuf to complete the IO operation
>     and release the waiting application.
>
> 3. Splice and R-splice
>     For the case where the IO target is not a block device but an
>     external path like network / RDMA / some other non-block device.
>     zuf already holds an internal object describing the IO context,
>     including the GUP'd app buffers. This internal object can be made
>     the memory target of a splice operation.
>
> 4. Get-io_map type operation (currently implemented for mmap)
>     The zus-FS returns a set of dpp_t(s) to the kernel and the kernel
>     does the memcpy to the app buffers. The server also specifies
>     whether those buffers should be cached in a per-inode radix-tree
>     (xarray); if so, on the next access to the same range the kernel
>     does the copy itself and never dispatches to user space.
>     In this mode the server can also revoke a cached mapping when needed.
>
> 5. Use of VFS page-cache
>     For a very slow backing device the FS requests the regular VFS
>     page-cache. In the read/write_pages() vectors zuf uses option 1
>     above to read into the page-cache instead of directly into the app
>     buffers. Only cache misses dispatch back to user space.
>
> Have I forgotten anything?
>
> This way the zus-FS is in control and can do the "right thing" depending
> on the target device and FS characteristics. The interface gives us a
> rich set of tools to work with.
>
> Hope that answers your concerns
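
To make option 2 above a bit more concrete, here is a minimal sketch of
how an -EWOULDBLOCK / completion-upcall flow could be wired up with a
struct completion. All the zuf_-prefixed names and the zuf_io_ctx layout
are illustrative assumptions, not the actual ZUFS code:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/types.h>

struct zuf_io_ctx {
	struct completion	done;	/* init_completion() at setup time */
	ssize_t			result;
	/* ... GUP'd app buffers, target description, etc. ... */
};

/* Assumed helper: hands the request to the userspace server. */
static ssize_t zuf_dispatch_to_server(struct zuf_io_ctx *ioc);

static ssize_t zuf_rw_dispatch(struct zuf_io_ctx *ioc)
{
	ssize_t ret = zuf_dispatch_to_server(ioc);

	if (ret != -EWOULDBLOCK)
		return ret;	/* synchronous case: IO already done */

	/*
	 * The server queued a scatter-gather in the background; block the
	 * application until the completion upcall arrives.
	 */
	wait_for_completion(&ioc->done);
	return ioc->result;
}

/* Called from the server's completion upcall into zuf. */
static void zuf_io_complete(struct zuf_io_ctx *ioc, ssize_t result)
{
	ioc->result = result;
	complete(&ioc->done);
}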

Why keep options 1 and 2?  An io-map (4) type interface should cover
this efficiently, shouldn't it?
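
For reference, the caching side of such an io-map interface could be as
small as a per-inode xarray keyed by page offset, which also gives the
server a natural revoke hook. struct zuf_iomap and the zuf_ helpers
below are assumptions for illustration, not the real ZUFS interface:

#include <linux/types.h>
#include <linux/xarray.h>

struct zuf_iomap {
	phys_addr_t	addr;	/* where the server's dpp_t resolves to */
	unsigned int	len;
};

struct zuf_inode_info {
	struct xarray	iomap_cache;	/* xa_init() at inode setup time */
};

/* Remember a mapping the server returned for a given file range. */
static int zuf_cache_iomap(struct zuf_inode_info *zii, pgoff_t index,
			   struct zuf_iomap *map)
{
	return xa_err(xa_store(&zii->iomap_cache, index, map, GFP_KERNEL));
}

/* Later accesses: the kernel copies directly, no dispatch to user space. */
static struct zuf_iomap *zuf_lookup_iomap(struct zuf_inode_info *zii,
					  pgoff_t index)
{
	return xa_load(&zii->iomap_cache, index);
}

/* Server-initiated revoke of a cached mapping. */
static void zuf_revoke_iomap(struct zuf_inode_info *zii, pgoff_t index)
{
	xa_erase(&zii->iomap_cache, index);
}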

I don't think the page cache is just for slow backing devices, or that
it needs to be a separate interface.  Caches are and will always be the
fastest, no matter how fast your device is.  In Linux the page cache
seems like the most convenient place to put a pmem mapping, for
example.
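
As a rough illustration of the "cache first" point (and of option 5 in
the quote above), a read path can consult the page cache and only
dispatch to the userspace server on a miss.  zuf_dispatch_fill_page()
is an assumed helper, not an existing function, and locking, highmem
and partial-page details are omitted:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/string.h>

/* Assumed helper: asks the userspace server to fill this page's data. */
static struct page *zuf_dispatch_fill_page(struct address_space *mapping,
					   pgoff_t index);

static ssize_t zuf_read_page_cached(struct address_space *mapping,
				    pgoff_t index, char *dst, size_t len)
{
	struct page *page = find_get_page(mapping, index);

	if (!page) {
		/* Cache miss: only now do we pay the user-space round trip. */
		page = zuf_dispatch_fill_page(mapping, index);
		if (IS_ERR(page))
			return PTR_ERR(page);
	}

	memcpy(dst, page_address(page), len);	/* simplified: no kmap, no locking */
	put_page(page);
	return len;
}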

Of course, caches are also a big PITA when dealing with distributed
filesystems.  Fuse doesn't have a perfect solution for that.  It's one
of the key areas that needs improvement.

Also I'll add one more use case that crops up often with fuse: "whole
file data mapping".  Basically this means that a file's data in a
(virtual) userspace filesystem is equivalent to a file's data on an
underlying (physical) filesystem.  We could accelerate I/O in that case
tremendously, as well as eliminate double caching.  I've been undecided
what to do with it; for some time I was resisting, then saying that
I'll accept patches, and at some point I'll probably do a patch myself.
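
One possible shape for such a whole-file mapping, purely as an
illustration and not an existing fuse interface: the userspace
filesystem hands the kernel an open file on the underlying filesystem,
and read IO is forwarded straight to it, so the data is cached only
once.  get_backing_file() is an assumed helper:

#include <linux/fs.h>
#include <linux/uio.h>

/*
 * Assumed helper: returns the backing file the userspace filesystem
 * registered for this (virtual) file; lifetime tied to the open file.
 */
static struct file *get_backing_file(struct file *file);

static ssize_t passthrough_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	struct file *backing = get_backing_file(iocb->ki_filp);

	/* Forward the read to the underlying (physical) filesystem. */
	return vfs_iter_read(backing, to, &iocb->ki_pos, 0);
}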

Thanks,
Miklos

