From: Vivek Goyal <vgoyal@redhat.com>
To: Hanna Reitz <hreitz@redhat.com>
Cc: virtio-fs@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>,
	qemu-devel@nongnu.org,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v3 08/10] virtiofsd: Add inodes_by_handle hash table
Date: Tue, 10 Aug 2021 13:51:18 -0400	[thread overview]
Message-ID: <YRK8lmDlRVV2dDih@redhat.com> (raw)
In-Reply-To: <915cd307-e3db-1f62-22ac-8caeffd35ca1@redhat.com>

On Tue, Aug 10, 2021 at 04:13:44PM +0200, Hanna Reitz wrote:
> On 10.08.21 16:07, Vivek Goyal wrote:
> > On Mon, Aug 09, 2021 at 06:47:18PM +0200, Hanna Reitz wrote:
> > > On 09.08.21 18:10, Vivek Goyal wrote:
> > > > On Fri, Jul 30, 2021 at 05:01:32PM +0200, Max Reitz wrote:
> > > > > Currently, lo_inode.fhandle is always NULL and so we always keep an
> > > > > O_PATH FD in lo_inode.fd.  Therefore, when the respective inode is unlinked,
> > > > > its inode ID will remain in use until we drop our lo_inode (and
> > > > > lo_inode_put() thus closes the FD).  Therefore, lo_find() can safely use
> > > > > the inode ID as an lo_inode key, because any inode with an inode ID we
> > > > > find in lo_data.inodes (on the same filesystem) must be the exact same
> > > > > file.
> > > > > 
> > > > > This will change when we start setting lo_inode.fhandle so we do not
> > > > > have to keep an O_PATH FD open.  Then, unlinking such an inode will
> > > > > immediately remove it, so its ID can then be reused by newly created
> > > > > files, even while the lo_inode object is still there[1].
> > > > > 
> > > > > So creating a new file can then reuse the old file's inode ID, and
> > > > > looking up the new file would lead to us finding the old file's
> > > > > lo_inode, which is not ideal.
> > > > > 
> > > > > Luckily, just as file handles cause this problem, they also solve it:  A
> > > > > file handle contains a generation ID, which changes when an inode ID is
> > > > > reused, so the new file can be distinguished from the old one.  So all
> > > > > we need to do is to add a second map besides lo_data.inodes that maps
> > > > > file handles to lo_inodes, namely lo_data.inodes_by_handle.  For
> > > > > clarity, lo_data.inodes is renamed to lo_data.inodes_by_ids.
> > > > > 
> > > > > Unfortunately, we cannot rely on being able to generate file handles
> > > > > every time.  Therefore, we still enter every lo_inode object into
> > > > > inodes_by_ids, but having an entry in inodes_by_handle is optional.  A
> > > > > potential inodes_by_handle entry then has precedence, the inodes_by_ids
> > > > > entry is just a fallback.
> > > > > 
> > > > > Note that we do not generate lo_fhandle objects yet, and so we also do
> > > > > not enter anything into the inodes_by_handle map yet.  Also, all lookups
> > > > > skip that map.  We might manually create file handles with some code
> > > > > that is immediately removed by the next patch again, but that would
> > > > > break the assumption in lo_find() that every lo_inode with a non-NULL
> > > > > .fhandle must have an entry in inodes_by_handle and vice versa.  So we
> > > > > leave actually using the inodes_by_handle map for the next patch.
> > > > > 
> > > > > [1] If some application in the guest still has the file open, there is
> > > > > going to be a corresponding FD mapping in lo_data.fd_map.  In such a
> > > > > case, the inode will only go away once every application in the guest
> > > > > has closed it.  The problem described only applies to cases where the
> > > > > guest does not have the file open, and it is just in the dentry cache,
> > > > > basically.
> > > > > 
> > > > > Signed-off-by: Max Reitz <mreitz@redhat.com>
> > > > > ---
> > > > >    tools/virtiofsd/passthrough_ll.c | 81 +++++++++++++++++++++++++-------
> > > > >    1 file changed, 65 insertions(+), 16 deletions(-)
> > > > > 
> > > > > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > > > > index 487448d666..f9d8b2f134 100644
> > > > > --- a/tools/virtiofsd/passthrough_ll.c
> > > > > +++ b/tools/virtiofsd/passthrough_ll.c
> > > > > @@ -180,7 +180,8 @@ struct lo_data {
> > > > >        int announce_submounts;
> > > > >        bool use_statx;
> > > > >        struct lo_inode root;
> > > > > -    GHashTable *inodes; /* protected by lo->mutex */
> > > > > +    GHashTable *inodes_by_ids; /* protected by lo->mutex */
> > > > > +    GHashTable *inodes_by_handle; /* protected by lo->mutex */
> > > > >        struct lo_map ino_map; /* protected by lo->mutex */
> > > > >        struct lo_map dirp_map; /* protected by lo->mutex */
> > > > >        struct lo_map fd_map; /* protected by lo->mutex */
> > > > > @@ -263,8 +264,9 @@ static struct {
> > > > >    /* That we loaded cap-ng in the current thread from the saved */
> > > > >    static __thread bool cap_loaded = 0;
> > > > > -static struct lo_inode *lo_find(struct lo_data *lo, struct stat *st,
> > > > > -                                uint64_t mnt_id);
> > > > > +static struct lo_inode *lo_find(struct lo_data *lo,
> > > > > +                                const struct lo_fhandle *fhandle,
> > > > > +                                struct stat *st, uint64_t mnt_id);
> > > > >    static int xattr_map_client(const struct lo_data *lo, const char *client_name,
> > > > >                                char **out_name);
> > > > > @@ -1064,18 +1066,40 @@ out_err:
> > > > >        fuse_reply_err(req, saverr);
> > > > >    }
> > > > > -static struct lo_inode *lo_find(struct lo_data *lo, struct stat *st,
> > > > > -                                uint64_t mnt_id)
> > > > > +static struct lo_inode *lo_find(struct lo_data *lo,
> > > > > +                                const struct lo_fhandle *fhandle,
> > > > > +                                struct stat *st, uint64_t mnt_id)
> > > > >    {
> > > > > -    struct lo_inode *p;
> > > > > -    struct lo_key key = {
> > > > > +    struct lo_inode *p = NULL;
> > > > > +    struct lo_key ids_key = {
> > > > >            .ino = st->st_ino,
> > > > >            .dev = st->st_dev,
> > > > >            .mnt_id = mnt_id,
> > > > >        };
> > > > >        pthread_mutex_lock(&lo->mutex);
> > > > > -    p = g_hash_table_lookup(lo->inodes, &key);
> > > > > +    if (fhandle) {
> > > > > +        p = g_hash_table_lookup(lo->inodes_by_handle, fhandle);
> > > > > +    }
> > > > > +    if (!p) {
> > > > > +        p = g_hash_table_lookup(lo->inodes_by_ids, &ids_key);
> > > > So even if fhandle is not NULL, we will still look up the inode
> > > > object in lo->inodes_by_ids? I thought the fallback was only required
> > > > if we could not generate a file handle to begin with, and in that case
> > > > fhandle would be NULL?
> > > Well.  I think it depends again on when file handle generation can fail and
> > > when it cannot.  If we assume it can randomly fail at any time, then it’s
> > > possible we create an lo_inode with an O_PATH fd, but later we are able to
> > > generate a file handle for it.  So we first try a lookup by file handle
> > > here, which would fail, but we’d still have to try a lookup by IDs, so we
> > > can find the O_PATH lo_inode.
> > > 
> > > An example case would be if at first we weren’t able to open a mount fd
> > > (because this file is a device node and the first lo_inode looked up on its
> > > filesystem), and so we couldn’t generate a file handle that we would be sure
> > > would work; but later for the lookup we can generate a file handle (because
> > > some other node on that filesystem has been opened by then, so we have a
> > > mount fd).
> > Ok, got it. If we are assuming that file handle generation can fail
> > randomly, then what will happen in the following scenario?
> > 
> > - lookup, file handle generated, inode added to both hash tables.
> > 
> > - another lookup, but handle generation fails. We call lo_find(); it
> >    finds the inode in lo->inodes_by_ids but rejects it because p->fd == -1.
> > 
> > - Now lo_find() will return NULL, and the caller will assume the inode
> >    could not be found (despite the fact that it is in there), so
> >    lo_do_lookup() will add a new inode to the hash tables. We will then
> >    have two inode instances in the hash table with the same st_dev,
> >    st_ino, and mnt_id: one with a file handle, the other with an O_PATH fd.
> > 
> > So we have two inodes in the cache representing the same file: one using
> > a file handle, the other using an O_PATH fd.
> > 
> > One side effect of this: say the guest has looked up a file (and got
> > node ID 1, the fhandle-based inode). When the guest later revalidates
> > that inode, it could get node ID 2 (the O_PATH-fd-based inode). The guest
> > will think the inode has changed, discard the previous inode, and trigger
> > another lookup. This typically happens only when the file has gone away,
> > but now it will happen because we have two inodes in the cache
> > representing the same file.
> > 
> > There might be other cases where this is bad; I can't think of any
> > at this point in time.
> > 
> > If we could solve the mount_fd issue, then we would probably need the
> > fallback path only for the EOPNOTSUPP case. And then we could be sure
> > that the cache always has a single inode per file, either fhandle-based
> > or O_PATH-fd-based (and not both).
> 
> OK, but can we truly solve the mount_fd issue?
> 
> What I think we could do is have two variants of the file handle generation
> function, one which is supposed to create a usable file handle (so this
> version will ensure mount_fds contains a valid fd for the mount ID), and one
> that just generates a file handle for lookup (i.e. it doesn’t look into
> mount_fds at all).  The latter version would practically only fail in the
> EOPNOTSUPP case.
> 
> Would that get around the issue?

IIUC, the suggestion is that in lo_do_lookup() we will use the first variant
and in lookup_rename() we will use the second variant (sketched below). If so,
that does not solve the issue of having two inodes representing the same file:
lo_do_lookup() might succeed the first time and add an inode with an fhandle,
then fail the next time and add a new inode with an O_PATH fd.

Maybe this will not happen easily, because the first operation will add the
mount_fd and the second operation will then find the existing mount_fd, so it
will not fail at least due to mount_fd. It might still fail due to some other
temporary resource failure, etc.
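
For illustration, here is a minimal sketch of the lookup-only variant Hanna
describes above, assuming it is built directly on name_to_handle_at() and
never consults mount_fds. The struct layout and the function name are
illustrative assumptions, not the actual patch code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>

struct lo_fhandle {
    union {
        struct file_handle handle;
        char padding[sizeof(struct file_handle) + MAX_HANDLE_SZ];
    };
    int mount_id;
};

/*
 * Generate a file handle purely for use as a hash-table key.  Since no
 * mount fd is required just to compare handles, this variant should in
 * practice only fail with EOPNOTSUPP (the filesystem does not support
 * file handles at all).
 */
static struct lo_fhandle *get_file_handle_for_lookup(int dirfd,
                                                     const char *name)
{
    struct lo_fhandle *fh = calloc(1, sizeof(*fh));

    if (!fh) {
        return NULL;
    }
    fh->handle.handle_bytes = MAX_HANDLE_SZ;
    if (name_to_handle_at(dirfd, name, &fh->handle, &fh->mount_id,
                          name[0] ? 0 : AT_EMPTY_PATH) < 0) {
        free(fh); /* typically errno == EOPNOTSUPP */
        return NULL;
    }
    return fh;
}

The "usable" variant would do the same, but additionally ensure that mount_fds
holds a valid fd for fh->mount_id (opening one if necessary), so the resulting
handle is guaranteed to work with open_by_handle_at() later.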

Vivek



