From: Stefan Hajnoczi <stefanha@redhat.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: linux-fsdevel@vger.kernel.org,
virtualization@lists.linux-foundation.org, miklos@szeredi.hu,
linux-kernel@vger.kernel.org, virtio-fs@redhat.com,
dgilbert@redhat.com, mst@redhat.com
Subject: Re: [PATCH 16/18] virtiofs: Use virtio_fs_mutex for races w.r.t ->remove and mount path
Date: Mon, 9 Sep 2019 18:13:05 +0200 [thread overview]
Message-ID: <20190909161305.GF20875@stefanha-x1.localdomain> (raw)
In-Reply-To: <20190906135131.GE22083@redhat.com>
On Fri, Sep 06, 2019 at 09:51:31AM -0400, Vivek Goyal wrote:
> On Fri, Sep 06, 2019 at 01:05:34PM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 05, 2019 at 03:48:57PM -0400, Vivek Goyal wrote:
> > > It is possible that a mount is in progress while the device is being
> > > removed. Use virtio_fs_mutex to avoid such races.
> > >
> > > This also takes care of a bunch of races and removes some TODO items.
> > >
> > > Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> > > ---
> > > fs/fuse/virtio_fs.c | 32 ++++++++++++++++++++++----------
> > > 1 file changed, 22 insertions(+), 10 deletions(-)
> >
> > Let's move to a per-device mutex in the future. That way a single
> > device that fails to drain/complete requests will not hang all other
> > virtio-fs instances. This is fine for now.
>
> Good point. For now I updated the patch so that it applies cleanly
> after previous two patches changed.
>
> Subject: virtiofs: Use virtio_fs_mutex for races w.r.t ->remove and mount path
>
> It is possible that a mount is in progress while the device is being
> removed. Use virtio_fs_mutex to avoid such races.
>
> This also takes care of a bunch of races and removes some TODO items.
>
> Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> ---
> fs/fuse/virtio_fs.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> Index: rhvgoyal-linux-fuse/fs/fuse/virtio_fs.c
> ===================================================================
> --- rhvgoyal-linux-fuse.orig/fs/fuse/virtio_fs.c 2019-09-06 09:40:53.309245246 -0400
> +++ rhvgoyal-linux-fuse/fs/fuse/virtio_fs.c 2019-09-06 09:43:25.335245246 -0400
> @@ -13,7 +13,9 @@
> #include <linux/highmem.h>
> #include "fuse_i.h"
>
> -/* List of virtio-fs device instances and a lock for the list */
> +/* List of virtio-fs device instances and a lock for the list. Also provides
> + * mutual exclusion between device removal and the mount path.
> + */
> static DEFINE_MUTEX(virtio_fs_mutex);
> static LIST_HEAD(virtio_fs_instances);
>
> @@ -72,17 +74,19 @@ static void release_virtio_fs_obj(struct
> kfree(vfs);
> }
>
> +/* Caller must hold virtio_fs_mutex */
> static void virtio_fs_put(struct virtio_fs *fs)
> {
> - mutex_lock(&virtio_fs_mutex);
> kref_put(&fs->refcount, release_virtio_fs_obj);
> - mutex_unlock(&virtio_fs_mutex);
> }
>
> static void virtio_fs_fiq_release(struct fuse_iqueue *fiq)
> {
> struct virtio_fs *vfs = fiq->priv;
> +
> + mutex_lock(&virtio_fs_mutex);
> virtio_fs_put(vfs);
> + mutex_unlock(&virtio_fs_mutex);
> }
>
> static void virtio_fs_drain_queue(struct virtio_fs_vq *fsvq)
> @@ -596,9 +600,8 @@ static void virtio_fs_remove(struct virt
> struct virtio_fs *fs = vdev->priv;
>
> mutex_lock(&virtio_fs_mutex);
> + /* This device is going away. No one should get a new reference */
> list_del_init(&fs->list);
> - mutex_unlock(&virtio_fs_mutex);
> -
> virtio_fs_stop_all_queues(fs);
> virtio_fs_drain_all_queues(fs);
> vdev->config->reset(vdev);
> @@ -607,6 +610,7 @@ static void virtio_fs_remove(struct virt
> vdev->priv = NULL;
> /* Put device reference on virtio_fs object */
> virtio_fs_put(fs);
> + mutex_unlock(&virtio_fs_mutex);
> }
>
> #ifdef CONFIG_PM_SLEEP
> @@ -978,10 +982,15 @@ static int virtio_fs_fill_super(struct s
> .no_force_umount = true,
> };
>
> - /* TODO lock */
> - if (fs->vqs[VQ_REQUEST].fud) {
> - pr_err("virtio-fs: device already in use\n");
> - err = -EBUSY;
> + mutex_lock(&virtio_fs_mutex);
> +
> + /* After holding the mutex, make sure the virtiofs device is still there.
> + * Though we are holding a reference to it, the driver's ->remove might
> + * still have cleaned up the virtqueues. In that case bail out.
> + */
> + err = -EINVAL;
> + if (list_empty(&fs->list)) {
> + pr_info("virtio-fs: tag <%s> not found\n", fs->tag);
> goto err;
> }
>
> @@ -1007,7 +1016,6 @@ static int virtio_fs_fill_super(struct s
>
> fc = fs->vqs[VQ_REQUEST].fud->fc;
>
> - /* TODO take fuse_mutex around this loop? */
> for (i = 0; i < fs->nvqs; i++) {
> struct virtio_fs_vq *fsvq = &fs->vqs[i];
>
> @@ -1020,6 +1028,7 @@ static int virtio_fs_fill_super(struct s
> /* Previous unmount will stop all queues. Start these again */
> virtio_fs_start_all_queues(fs);
> fuse_send_init(fc, init_req);
> + mutex_unlock(&virtio_fs_mutex);
> return 0;
>
> err_free_init_req:
> @@ -1027,6 +1036,7 @@ err_free_init_req:
> err_free_fuse_devs:
> virtio_fs_free_devs(fs);
> err:
> + mutex_unlock(&virtio_fs_mutex);
> return err;
> }
>
> @@ -1100,7 +1110,9 @@ static int virtio_fs_get_tree(struct fs_
>
> fc = kzalloc(sizeof(struct fuse_conn), GFP_KERNEL);
> if (!fc) {
> + mutex_lock(&virtio_fs_mutex);
> virtio_fs_put(fs);
> + mutex_unlock(&virtio_fs_mutex);
> return -ENOMEM;
> }
>
>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>