From: Vivek Goyal <vgoyal@redhat.com>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Jan Kara <jack@suse.cz>,
	Ioannis Angelakopoulos <iangelak@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	virtio-fs-list <virtio-fs@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Miklos Szeredi <miklos@szeredi.hu>,
	Steve French <sfrench@samba.org>
Subject: Re: [RFC PATCH 0/7] Inotify support in FUSE and virtiofs
Date: Tue, 2 Nov 2021 16:34:31 -0400	[thread overview]
Message-ID: <YYGg1w/q31SC3PQ8@redhat.com> (raw)
In-Reply-To: <CAOQ4uxiYQYG8Ta=MNJKpa_0pAPd0MS9PL2r_0ZRD+_TKOw6C7g@mail.gmail.com>

On Tue, Nov 02, 2021 at 02:54:01PM +0200, Amir Goldstein wrote:
> On Tue, Nov 2, 2021 at 1:09 PM Jan Kara <jack@suse.cz> wrote:
> >
> > On Wed 27-10-21 16:29:40, Vivek Goyal wrote:
> > > On Wed, Oct 27, 2021 at 03:23:19PM +0200, Jan Kara wrote:
> > > > On Wed 27-10-21 08:59:15, Amir Goldstein wrote:
> > > > > On Tue, Oct 26, 2021 at 10:14 PM Ioannis Angelakopoulos
> > > > > <iangelak@redhat.com> wrote:
> > > > > > On Tue, Oct 26, 2021 at 2:27 PM Vivek Goyal <vgoyal@redhat.com> wrote:
> > > > > > The problem here is that the OPEN event might still be travelling towards the guest in the
> > > > > > virtqueues and arrives after the guest has already deleted its local inode.
> > > > > > While the remote event (OPEN) received by the guest is valid, its fsnotify
> > > > > > subsystem will drop it since the local inode is not there.
> > > > > >
> > > > >
> > > > > I have a feeling that we are mixing issues related to shared server
> > > > > and remote fsnotify.
> > > >
> > > > I don't think Ioannis was speaking about shared server case here. I think
> > > > he says that in a simple FUSE remote notification setup we can lose OPEN
> > > > events (or basically any other) if the inode for which the event happens
> > > > gets deleted sufficiently early after the event being generated. That seems
> > > > indeed somewhat unexpected and could be confusing if it happens e.g. for
> > > > some directory operations.
> > >
> > > Hi Jan,
> > >
> > > Agreed. That's what Ioannis is trying to say. That some of the remote events
> > > can be lost if fuse/guest local inode is unlinked. I think problem exists
> > > both for shared and non-shared directory case.
> > >
> > > With local filesystems we have a control that we can first queue up
> > > the event in buffer before we remove local watches. With events travelling
> > > from a remote server, there is no such control/synchronization. It can
> > > very well happen that events got delayed in the communication path
> > > somewhere and local watches went away and now there is no way to
> > > deliver those events to the application.
> >
> > So after thinking for some time about this I have the following question
> > about the architecture of this solution: Why do you actually have local
> > fsnotify watches at all? They seem to cause quite some trouble... I mean
> > cannot we have fsnotify marks only on FUSE server and generate all events
> > there?

I think we are already implementing this part of the proposal. We send the
"group" pointer to the server when updating a watch, and the server manages
watches per inode per group. IOW, if the client has group1 wanting mask A
and group2 wanting mask B, the server will add two watches with two masks
on the same inotify fd instance.
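
To make "per inode per group" concrete, the watch update we send today
carries roughly this information; the layout below is invented for
illustration and is not copied from the RFC patches:

/* Purely illustrative guest -> server watch update. */
#include <stdint.h>

struct fsnotify_watch_update {
	uint64_t nodeid;   /* inode to watch on the server */
	uint64_t group;    /* opaque guest group identifier */
	uint32_t mask;     /* FS_* bits this group is interested in */
	uint32_t action;   /* add / modify / remove the watch */
};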

Admittedly we did this because we did not know that an aggregate mask
exists. Having the guest kernel compute an aggregate mask and the server
put a single watch on the inode based on that mask would simplify the
server implementation.

One downside of the current approach is more complexity on the server, and
more events travelling from host to guest. If two groups are watching the
same events on the same inode, two copies of each event will travel to the
guest: one for group1 and one for group2 (since we keep separate watches on
the host). With an aggregate watch on the host, only one copy of the event
needs to travel between host and guest, and I am assuming the same event
can then be replicated among multiple groups, depending on their interest.
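
To show the trade-off, here is a rough userspace sketch of the
aggregate-watch variant on the server side. All the structs and
send_event_to_guest() are invented for illustration; only the inotify
calls are the real API:

#include <stdint.h>
#include <sys/inotify.h>

struct remote_group {
	uint64_t id;                  /* identifier the guest gave us */
	uint32_t mask;                /* events this group asked for */
	struct remote_group *next;
};

struct watched_inode {
	int wd;                       /* inotify watch descriptor */
	struct remote_group *groups;  /* guest groups watching this inode */
};

static void send_event_to_guest(uint64_t group_id, const struct inotify_event *ev)
{
	/* placeholder for the virtqueue notification transport */
	(void)group_id;
	(void)ev;
}

/* Recompute the aggregate mask and (re)arm one watch for the inode.
 * Calling inotify_add_watch() again for the same path replaces the mask. */
static int update_watch(int inotify_fd, const char *path, struct watched_inode *wi)
{
	uint32_t mask = 0;
	struct remote_group *g;

	for (g = wi->groups; g; g = g->next)
		mask |= g->mask;

	wi->wd = inotify_add_watch(inotify_fd, path, mask);
	return wi->wd < 0 ? -1 : 0;
}

/* One event read from the host kernel: replicate it to interested groups. */
static void fan_out(struct watched_inode *wi, const struct inotify_event *ev)
{
	struct remote_group *g;

	for (g = wi->groups; g; g = g->next)
		if (ev->mask & g->mask)
			send_event_to_guest(g->id, ev);
}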

> > When e.g. file is created from the client, client tells the server
> > about creation, the server performs the creation which generates the
> > fsnotify event, that is received by the server and forwarded back to the
> > client which just queues it into notification group's queue for userspace
> > to read it.

This is the part we have not implemented. When we generate the event, we
generate it for the inode only. There is no notion that the event was
generated for a specific group within that inode; as of now that is left
for the local fsnotify code to figure out.

So the idea is that when an event arrives from the remote side, we queue it
up directly into the group (without having to worry about the inode). Hmm,
how do we do that? At the least, we would need to return the group
identifier in the notification event so that the client can find the group
(without having to go through the inode?).

So the group would basically become part of the remote protocol if we were
to go this way, and the implementation becomes more complicated.
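
For the sake of discussion, the host-to-guest notification would then have
to carry something like the payload below, so the client can route the
event to a group directly. This struct is purely hypothetical, not the one
defined in the patches:

#include <stdint.h>

struct fsnotify_remote_event {
	uint64_t nodeid;    /* inode the event refers to */
	uint64_t group;     /* guest group this copy is addressed to */
	uint32_t mask;      /* FS_* event bits */
	uint32_t cookie;    /* pairs the two halves of a rename */
	uint32_t namelen;   /* length of the name that follows, if any */
	uint32_t padding;
};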

> >
> > Now with this architecture there's no problem with duplicate events for
> > local & server notification marks,

I guess suppressing local events is fairly trivial. Either we have that
inode flag Amir suggested, or we add an inode operation to let the
filesystem decide.

> > similarly there's no problem with lost
> > events after inode deletion because events received by the client are
> > directly queued into notification queue without any checking whether inode
> > is still alive etc. Would this work or am I missing something?


So when will the watches on the remote side go away? When a file is
unlinked and the inode is going away, we call fsnotify_inoderemove(). This
generates FS_DELETE_SELF and then seems to remove all local marks on the
inode.

Now if we don't have local marks and the guest inode is going away, and the
client sends a FUSE_FORGET message, I am assuming that will be the time to
clean up all the remote watches, groups, etc. And if the file is open by
some other guest, the DELETE_SELF event will not have been generated by
then, yet we will already have cleaned up the remote watches.

Even if no other guest had the file open, cleanup of the remote watches and
delivery of DELETE_SELF are parallel operations and can race. If the thread
reading the inotify descriptor is a little late in reading DELETE_SELF, it
is possible that another thread in virtiofsd has already cleaned up all the
remote watches and associated groups, and now there is no way to send the
event back to the guest, so we lose the event?

My understanding of this notification machinery is very primitive, so it is
quite possible I am misunderstanding how remote watches and groups will be
managed and reported back. But my current assumption is that their lifetime
will have to be tied to the remote inode we are managing. Otherwise, when
would the remote server clean up its own internal state (watch
descriptors) once the inode goes away?
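
To illustrate what tying the lifetime to the remote inode would look like,
here is a rough sketch of the cleanup I would expect on the server when the
last reference from the guest goes away. The bookkeeping structs are made
up; only the inotify call is the real API:

#include <stdint.h>
#include <sys/inotify.h>

struct remote_inode {
	uint64_t nodeid;     /* FUSE node id handed out to the guest */
	uint64_t nlookup;    /* lookups minus forgets */
	int      wd;         /* inotify watch descriptor, -1 if none */
};

static void handle_forget(int inotify_fd, struct remote_inode *ri, uint64_t count)
{
	ri->nlookup -= count;
	if (ri->nlookup > 0)
		return;

	/* Last guest reference is gone: drop the watch. Any event still
	 * queued for this wd can no longer be delivered to a guest group,
	 * which is the race described above. */
	if (ri->wd >= 0) {
		inotify_rm_watch(inotify_fd, ri->wd);
		ri->wd = -1;
	}
	/* ...free per-group subscription state and the inode itself... */
}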

> >
> 
> What about group #1 that wants mask A and group #2 that wants mask B
> events?
> 
> Do you propose to maintain separate event queues over the protocol?
> Attach a "recipient list" to each event?
> 
> I just don't see how this can scale other than:
> - Local marks and connectors manage the subscriptions on local machine
> - Protocol updates the server with the combined masks for watched objects
> 
> I think that the "post-mortem events" issue could be solved by keeping an
> S_DEAD fuse inode object in limbo just for the mark.
> When a remote server sends FS_IN_IGNORED or FS_DELETE_SELF for
> an inode, the fuse client inode can be finally evicted.

There is no guarantee that FS_IN_IGNORED or FS_DELETE_SELF will come, or
when it will come. If another guest holds a reference on the inode, it
might not come for a long time. And this would effectively become a
mechanism for one guest to keep another guest's inode cache full of such
objects.

If the event queue becomes too full, we might drop these events. But I
guess in that case we will have to generate IN_Q_OVERFLOW, and that could
somehow be used to clean up such S_DEAD inodes?
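
For reference, overflow is visible on the host side as a regular event with
IN_Q_OVERFLOW set, so the server could at least detect that events were
dropped and ask the guest to resync somehow. The detection below is
standard inotify usage; what gets sent to the guest is the open question:

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

static void drain_events(int inotify_fd)
{
	char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
	ssize_t len = read(inotify_fd, buf, sizeof(buf));
	char *p = buf;

	while (len > 0 && p < buf + len) {
		const struct inotify_event *ev = (const struct inotify_event *)p;

		if (ev->mask & IN_Q_OVERFLOW)
			fprintf(stderr, "inotify queue overflowed, guest needs a resync\n");

		p += sizeof(*ev) + ev->len;
	}
}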

The nodeid is managed by the server, so I am assuming that FORGET messages
will not be sent to the server for this inode until we have seen the
FS_IN_IGNORED and FS_DELETE_SELF events?

Thanks
Vivek

> I haven't tried to see how hard that would be to implement.
> 
> Thanks,
> Amir.
> 

