linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Vivek Goyal <vgoyal@redhat.com>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Jan Kara <jack@suse.cz>,
	Ioannis Angelakopoulos <iangelak@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	virtio-fs-list <virtio-fs@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Miklos Szeredi <miklos@szeredi.hu>,
	Steve French <sfrench@samba.org>
Subject: Re: [RFC PATCH 0/7] Inotify support in FUSE and virtiofs
Date: Wed, 3 Nov 2021 18:29:18 -0400	[thread overview]
Message-ID: <YYMNPqVnOWD3gNsw@redhat.com> (raw)
In-Reply-To: <CAOQ4uxg_KAg34TgmVRQ5nrfgHddzQepVv_bAUAhqtkDfHB7URw@mail.gmail.com>

On Wed, Nov 03, 2021 at 09:31:02AM +0200, Amir Goldstein wrote:
> > > >
> > >
> > > What about group #1 that wants mask A and group #2 that wants mask B
> > > events?
> > >
> > > Do you propose to maintain separate event queues over the protocol?
> > > Attach a "recipient list" to each event?
> > >
> > > I just don't see how this can scale other than:
> > > - Local marks and connectors manage the subscriptions on local machine
> > > - Protocol updates the server with the combined masks for watched objects
> > >
> > > I think that the "post-mortem events" issue could be solved by keeping an
> > > S_DEAD fuse inode object in limbo just for the mark.
> > > When a remote server sends FS_IN_IGNORED or FS_DELETE_SELF for
> > > an inode, the fuse client inode can be finally evicted.
> >
> > There is no guarantee that FS_IN_IGNORED or FS_DELETE_SELF will come
> > or when will it come. If another guest has reference on inode it might
> > not come for a long time. And this will kind of become a mechanism
> > for one guest to keep other's inode cache full of such objects.
> >
> > If event queue becomes too full, we might drop these events. But I guess
> > in that case we will have to generate IN_Q_OVERFLOW and that can somehow
> > be used to cleanup such S_DEAD inodes?
> 
> That depends on the server implementation.
> If the server is watching host fs using fanotify filesystem mark, then
> an overflow
> event does NOT mean that other new events on inode may be missed only
> that old events could have been missed.
> Server should know about all the watched inodes, so it can check on overflow
> if any of the watched inodes were deleted and notify the client using a reliable
> channel.

Ok. We have only one channel for notifications. I guess we can program
the channel in such a way so that it does not drop overflow events but
can drop other kind of events if things get crazy. If too many overflow
events and we allocate too much of memory, I guess at some point of
time, oom killer will kick in a kill server.

> 
> Given the current server implementation with inotify, IN_Q_OVERFLOW
> means server may have lost an IN_IGNORED event and may not get any
> more events on inode, so server should check all the watched inodes after
> overflow, notify the client of all deleted inodes and try to re-create
> the watches
> for all inodes with known path or use magic /prod/pid/fd path if that
> works (??).

Re-doing the watches sounds very painful. That means we will need to
keep track of aggregated mask in server side inode as well. As of
now we just pass mask to kernel using inotify_add_watch() and forget
about it.

/proc/pid/fd should work because I think that's how ioannis is putting
current watches on inodes. We don't send path info to server.

> 
> >
> > nodeid is managed by server. So I am assuming that FORGET messages will
> > not be sent to server for this inode till we have seen FS_IN_IGNORED
> > and FS_DELETE_SELF events?
> >
> 
> Or until the application that requested the watch calls
> inotify_rm_watch() or closes
> the inotify fd.
> 
> IOW, when fs implements remote fsnotify, the local watch keeps the local deleted
> inode object in limbo until the local watch is removed.
> When the remote fsnotify server informs that the remote watch (or remote inode)
> is gone, the local watch is removed as well and then the inotify
> application also gets
> an FS_IN_IGNORED event.

Hmm.., I guess remote server will simply send IN_DELETE event when it
gets it and forward to client. And client will have to then cleanup
this S_DEAD inode which is in limbo waiting for IN_DELETE_SELF event.
And that should trigger cleanup of marks/local-watches on the inode, IIUC.

> 
> Lifetime of local inode is complicated and lifetime of this "shared inode"
> is much more complicated, so I am not pretending to claim that I have this all
> figured out or that it could be reliably done at all.

Yes this handling of IN_DELETE_SELF is turning out to be the most
complicated piece of this proposal. I wish initial implementation
could just be designed that it does not send IN_DELETE_SELF and
IN_INGORED is generated locally. And later enhance it to support
reliable delivery of IN_DELETE_SELF.

Thanks
Vivek


  reply	other threads:[~2021-11-03 22:29 UTC|newest]

Thread overview: 47+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-10-25 20:46 [RFC PATCH 0/7] Inotify support in FUSE and virtiofs Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 1/7] FUSE: Add the fsnotify opcode and in/out structs to FUSE Ioannis Angelakopoulos
2021-10-26 14:56   ` Amir Goldstein
2021-10-26 18:28     ` Amir Goldstein
     [not found]     ` <CAO17o20+jiij64y7b3eKoCjG5b_mLZj6o1LSnZ7+8exN3dFYEg@mail.gmail.com>
2021-10-27  5:46       ` Amir Goldstein
2021-10-27 21:46     ` Vivek Goyal
2021-10-28  4:13       ` Amir Goldstein
2021-10-28 14:20         ` Vivek Goyal
2021-10-25 20:46 ` [RFC PATCH 2/7] FUSE: Add the remote inotify support capability " Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 3/7] FUSE,Inotify,Fsnotify,VFS: Add the fuse_fsnotify_update_mark inode operation Ioannis Angelakopoulos
2021-10-26 15:06   ` Amir Goldstein
2021-11-01 17:49     ` Vivek Goyal
2021-11-02  7:34       ` Amir Goldstein
2021-10-25 20:46 ` [RFC PATCH 4/7] FUSE: Add the fuse_fsnotify_send_request to FUSE Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 5/7] Fsnotify: Add a wrapper around the fsnotify function Ioannis Angelakopoulos
2021-10-26 14:37   ` Amir Goldstein
2021-10-26 15:38     ` Vivek Goyal
2021-10-25 20:46 ` [RFC PATCH 6/7] FUSE,Fsnotify: Add the fuse_fsnotify_event inode operation Ioannis Angelakopoulos
2021-10-25 20:46 ` [RFC PATCH 7/7] virtiofs: Add support for handling the remote fsnotify notifications Ioannis Angelakopoulos
2021-10-26 15:23 ` [RFC PATCH 0/7] Inotify support in FUSE and virtiofs Amir Goldstein
2021-10-26 15:52   ` Vivek Goyal
2021-10-26 18:19     ` Amir Goldstein
2021-10-26 16:18   ` Vivek Goyal
2021-10-26 17:59     ` Amir Goldstein
2021-10-26 18:27       ` Vivek Goyal
2021-10-26 19:04         ` Amir Goldstein
     [not found]         ` <CAO17o20sdKAWQN6w7Oe0Ze06qcK+J=6rrmA_aWGnY__MRVDCKw@mail.gmail.com>
2021-10-27  5:59           ` Amir Goldstein
2021-10-27 13:23             ` Jan Kara
2021-10-27 20:29               ` Vivek Goyal
2021-10-27 20:37                 ` Vivek Goyal
2021-11-02 11:09                 ` Jan Kara
2021-11-02 12:54                   ` Amir Goldstein
2021-11-02 20:34                     ` Vivek Goyal
2021-11-03  7:31                       ` Amir Goldstein
2021-11-03 22:29                         ` Vivek Goyal [this message]
2021-11-04  5:19                           ` Amir Goldstein
2021-11-03 10:09                     ` Jan Kara
2021-11-03 11:17                       ` Amir Goldstein
2021-11-03 22:36                         ` Vivek Goyal
2021-11-04  5:29                           ` Amir Goldstein
2021-11-04 10:03                           ` Jan Kara
2021-11-05 14:30                             ` Vivek Goyal
2021-11-10  6:28                               ` Amir Goldstein
2021-11-11 17:30                                 ` Jan Kara
2021-11-11 20:52                                   ` Amir Goldstein
2021-10-27 20:24             ` Vivek Goyal
2021-10-28  5:11               ` Amir Goldstein

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=YYMNPqVnOWD3gNsw@redhat.com \
    --to=vgoyal@redhat.com \
    --cc=amir73il@gmail.com \
    --cc=iangelak@redhat.com \
    --cc=jack@suse.cz \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=miklos@szeredi.hu \
    --cc=sfrench@samba.org \
    --cc=viro@zeniv.linux.org.uk \
    --cc=virtio-fs@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).