From: Jan Kara <jack@suse.cz>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Amir Goldstein <amir73il@gmail.com>, Jan Kara <jack@suse.cz>,
	Ioannis Angelakopoulos <iangelak@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	virtio-fs-list <virtio-fs@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Miklos Szeredi <miklos@szeredi.hu>,
	Steve French <sfrench@samba.org>
Subject: Re: [RFC PATCH 0/7] Inotify support in FUSE and virtiofs
Date: Thu, 4 Nov 2021 11:03:16 +0100
Message-ID: <20211104100316.GA10060@quack2.suse.cz>
In-Reply-To: <YYMO1ip9ynXFXc8f@redhat.com>

On Wed 03-11-21 18:36:06, Vivek Goyal wrote:
> On Wed, Nov 03, 2021 at 01:17:36PM +0200, Amir Goldstein wrote:
> > > > > > Hi Jan,
> > > > > >
> > > > > > Agreed. That's what Ioannis is trying to say: some of the remote events
> > > > > > can be lost if the fuse/guest local inode is unlinked. I think the
> > > > > > problem exists both for the shared and the non-shared directory case.
> > > > > >
> > > > > > With local filesystems we have control: we can first queue up the
> > > > > > event in a buffer before we remove the local watches. With events
> > > > > > travelling from a remote server, there is no such control or
> > > > > > synchronization. It can very well happen that events get delayed
> > > > > > somewhere in the communication path, the local watches go away, and
> > > > > > then there is no way to deliver those events to the application.
> > > > >
> > > > > So after thinking for some time about this I have the following question
> > > > > about the architecture of this solution: Why do you actually have local
> > > > > fsnotify watches at all? They seem to cause quite some trouble... I mean
> > > > > cannot we have fsnotify marks only on the FUSE server and generate all
> > > > > events there? When e.g. a file is created from the client, the client
> > > > > tells the server about the creation, the server performs the creation,
> > > > > which generates the fsnotify event; that event is received by the
> > > > > server and forwarded back to the client, which just queues it into the
> > > > > notification group's queue for userspace to read.
> > > > >
> > > > > Now with this architecture there's no problem with duplicate events for
> > > > > local & server notification marks; similarly there's no problem with
> > > > > lost events after inode deletion, because events received by the client
> > > > > are directly queued into the notification queue without any checking of
> > > > > whether the inode is still alive etc. Would this work or am I missing
> > > > > something?
> > > > >
> > > >
> > > > What about group #1 that wants mask A and group #2 that wants mask B
> > > > events?
> > > >
> > > > Do you propose to maintain separate event queues over the protocol?
> > > > Attach a "recipient list" to each event?
> > >
> > > Yes, that was my idea. Essentially, when we see group A create a mark on
> > > FUSE for path P, we notify the server, which will create notification
> > > group A on the server (if it does not already exist - for this we need a
> > > notification group identifier unique among all clients) and place a mark
> > > for it on path P. Then the full stream of notification events generated
> > > for group A on the server is just forwarded to the client and inserted
> > > into A's notification queue. IMO this is a very simple solution to
> > > implement - you just need to forward mark addition / removal requests
> > > from the client to the server, and forward the event stream from the
> > > server to the client. Everything else is handled by the fsnotify
> > > infrastructure on the server.
> > >
> > > > I just don't see how this can scale other than:
> > > > - Local marks and connectors manage the subscriptions on local machine
> > > > - Protocol updates the server with the combined masks for watched objects
> > >
> > > I agree that depending on the usecase and particular FUSE filesystem
> > > performance of this solution may be a concern. OTOH the only additional
> > > cost of this solution I can see (compared to all those processes just
> > > watching files locally) is the passing of the events from the server to the
> > > client. For local FUSE filesystems such as virtiofs this should be rather
> > > cheap since you have to do very little processing for each generated event.
> > > For filesystems such as sshfs, I can imagine this would be a bigger deal.
> > >
> > > Also, one issue I can see with my proposal is that it will have problems
> > > with stuff such as leases - i.e., if the client does not notify the
> > > server of changes quickly but rather batches local operations and tells
> > > the server about them only on special occasions. I don't know enough
> > > about FUSE filesystems to tell whether this is a frequent problem or
> > > not.
> > >
> > > > I think that the "post-mortem events" issue could be solved by keeping an
> > > > S_DEAD fuse inode object in limbo just for the mark.
> > > > When a remote server sends FS_IN_IGNORED or FS_DELETE_SELF for
> > > > an inode, the fuse client inode can be finally evicted.
> > > > I haven't tried to see how hard that would be to implement.
> > >
> > > Sure, there can be other solutions to this particular problem. I just
> > > want to discuss the other architecture to see why we cannot do it in a
> > > simple way :).
> > >
> > 
> > Fair enough.
> > 
> > Beyond the scalability aspects, I think that a design that exposes the
> > group to the remote server and allows it to "inject" events into the
> > group's queue will prevent users from getting useful features going
> > forward.
> > 
> > For example, a fanotify ignored_mask could be added to a group (even on a
> > mount mark) and it would just work, even if the remote server only
> > supports inode marks.
> > 
> > Another point of view for the post-mortem events:
> > As Miklos once noted and as you wrote above, for cache coherency and leases,
> > an async notification queue is not adequate and synchronous notifications are
> > too costly, so there needs to be some shared-memory solution involving
> > guest cache invalidation by the host.
> 
> Any shared memory solution works only in a limited setup. If the server is
> remote, on another machine, there is no sharing. I am hoping that this can
> be generic enough to support other remote filesystems down the line.

OK, so do I understand both you and Amir correctly that you think that
always relying on the FUSE server to generate the events and just piping
them to the client is not a viable long-term design for FUSE? Mostly
because caching of modifications on the client is essentially inevitable,
and hence generating events from the server would be unreliable (delayed
too much)?

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

Thread overview: 47+ messages
2021-10-25 20:46 [RFC PATCH 0/7] Inotify support in FUSE and virtiofs Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 1/7] FUSE: Add the fsnotify opcode and in/out structs to FUSE Ioannis Angelakopoulos
2021-10-26 14:56   ` Amir Goldstein
2021-10-26 18:28     ` Amir Goldstein
     [not found]     ` <CAO17o20+jiij64y7b3eKoCjG5b_mLZj6o1LSnZ7+8exN3dFYEg@mail.gmail.com>
2021-10-27  5:46       ` Amir Goldstein
2021-10-27 21:46     ` Vivek Goyal
2021-10-28  4:13       ` Amir Goldstein
2021-10-28 14:20         ` Vivek Goyal
2021-10-25 20:46 ` [RFC PATCH 2/7] FUSE: Add the remote inotify support capability " Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 3/7] FUSE,Inotify,Fsnotify,VFS: Add the fuse_fsnotify_update_mark inode operation Ioannis Angelakopoulos
2021-10-26 15:06   ` Amir Goldstein
2021-11-01 17:49     ` Vivek Goyal
2021-11-02  7:34       ` Amir Goldstein
2021-10-25 20:46 ` [RFC PATCH 4/7] FUSE: Add the fuse_fsnotify_send_request to FUSE Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 5/7] Fsnotify: Add a wrapper around the fsnotify function Ioannis Angelakopoulos
2021-10-26 14:37   ` Amir Goldstein
2021-10-26 15:38     ` Vivek Goyal
2021-10-25 20:46 ` [RFC PATCH 6/7] FUSE,Fsnotify: Add the fuse_fsnotify_event inode operation Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 7/7] virtiofs: Add support for handling the remote fsnotify notifications Ioannis Angelakopoulos
2021-10-26 15:23 ` [RFC PATCH 0/7] Inotify support in FUSE and virtiofs Amir Goldstein
2021-10-26 15:52   ` Vivek Goyal
2021-10-26 18:19     ` Amir Goldstein
2021-10-26 16:18   ` Vivek Goyal
2021-10-26 17:59     ` Amir Goldstein
2021-10-26 18:27       ` Vivek Goyal
2021-10-26 19:04         ` Amir Goldstein
     [not found]         ` <CAO17o20sdKAWQN6w7Oe0Ze06qcK+J=6rrmA_aWGnY__MRVDCKw@mail.gmail.com>
2021-10-27  5:59           ` Amir Goldstein
2021-10-27 13:23             ` Jan Kara
2021-10-27 20:29               ` Vivek Goyal
2021-10-27 20:37                 ` Vivek Goyal
2021-11-02 11:09                 ` Jan Kara
2021-11-02 12:54                   ` Amir Goldstein
2021-11-02 20:34                     ` Vivek Goyal
2021-11-03  7:31                       ` Amir Goldstein
2021-11-03 22:29                         ` Vivek Goyal
2021-11-04  5:19                           ` Amir Goldstein
2021-11-03 10:09                     ` Jan Kara
2021-11-03 11:17                       ` Amir Goldstein
2021-11-03 22:36                         ` Vivek Goyal
2021-11-04  5:29                           ` Amir Goldstein
2021-11-04 10:03                           ` Jan Kara [this message]
2021-11-05 14:30                             ` Vivek Goyal
2021-11-10  6:28                               ` Amir Goldstein
2021-11-11 17:30                                 ` Jan Kara
2021-11-11 20:52                                   ` Amir Goldstein
2021-10-27 20:24             ` Vivek Goyal
2021-10-28  5:11               ` Amir Goldstein
