From: Vivek Goyal <vgoyal@redhat.com>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: linux-fsdevel@vger.kernel.org, virtio-fs@redhat.com,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	chirantan@chromium.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 1/5] virtiofs: Do not end request in submission context
Date: Mon, 21 Oct 2019 07:52:24 -0400	[thread overview]
Message-ID: <20191021115224.GA13573@redhat.com> (raw)
In-Reply-To: <CAJfpegtrFvuBUuQ7B+ynCLVmgZ8zRjbUYZYg+BzG6HDmt5RyXw@mail.gmail.com>

On Mon, Oct 21, 2019 at 10:03:39AM +0200, Miklos Szeredi wrote:

[..]
> >  static void virtio_fs_hiprio_dispatch_work(struct work_struct *work)
> > @@ -502,6 +522,7 @@ static int virtio_fs_setup_vqs(struct virtio_device *vdev,
> >         names[VQ_HIPRIO] = fs->vqs[VQ_HIPRIO].name;
> >         INIT_WORK(&fs->vqs[VQ_HIPRIO].done_work, virtio_fs_hiprio_done_work);
> >         INIT_LIST_HEAD(&fs->vqs[VQ_HIPRIO].queued_reqs);
> > +       INIT_LIST_HEAD(&fs->vqs[VQ_HIPRIO].end_reqs);
> >         INIT_DELAYED_WORK(&fs->vqs[VQ_HIPRIO].dispatch_work,
> >                         virtio_fs_hiprio_dispatch_work);
> >         spin_lock_init(&fs->vqs[VQ_HIPRIO].lock);
> > @@ -511,8 +532,9 @@ static int virtio_fs_setup_vqs(struct virtio_device *vdev,
> >                 spin_lock_init(&fs->vqs[i].lock);
> >                 INIT_WORK(&fs->vqs[i].done_work, virtio_fs_requests_done_work);
> >                 INIT_DELAYED_WORK(&fs->vqs[i].dispatch_work,
> > -                                       virtio_fs_dummy_dispatch_work);
> > +                                 virtio_fs_request_dispatch_work);
> >                 INIT_LIST_HEAD(&fs->vqs[i].queued_reqs);
> > +               INIT_LIST_HEAD(&fs->vqs[i].end_reqs);
> >                 snprintf(fs->vqs[i].name, sizeof(fs->vqs[i].name),
> >                          "requests.%u", i - VQ_REQUEST);
> >                 callbacks[i] = virtio_fs_vq_done;
> > @@ -918,6 +940,7 @@ __releases(fiq->lock)
> >         struct fuse_conn *fc;
> >         struct fuse_req *req;
> >         struct fuse_pqueue *fpq;
> > +       struct virtio_fs_vq *fsvq;
> >         int ret;
> >
> >         WARN_ON(list_empty(&fiq->pending));
> > @@ -951,7 +974,8 @@ __releases(fiq->lock)
> >         smp_mb__after_atomic();
> >
> >  retry:
> > -       ret = virtio_fs_enqueue_req(&fs->vqs[queue_id], req);
> > +       fsvq = &fs->vqs[queue_id];
> > +       ret = virtio_fs_enqueue_req(fsvq, req);
> >         if (ret < 0) {
> >                 if (ret == -ENOMEM || ret == -ENOSPC) {
> >                         /* Virtqueue full. Retry submission */
> > @@ -965,7 +989,13 @@ __releases(fiq->lock)
> >                 clear_bit(FR_SENT, &req->flags);
> >                 list_del_init(&req->list);
> >                 spin_unlock(&fpq->lock);
> > -               fuse_request_end(fc, req);
> > +
> > +               /* Can't end request in submission context. Use a worker */
> > +               spin_lock(&fsvq->lock);
> > +               list_add_tail(&req->list, &fsvq->end_reqs);
> > +               schedule_delayed_work(&fsvq->dispatch_work,
> > +                                     msecs_to_jiffies(1));
> 
> What's the reason to delay by one msec?  If this is purely for
> deadlock avoidance, then a zero delay would work better, no?

Hi Miklos,

I have no good reason to do that. Will change it to zero delay.

Thanks
Vivek
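
The hunks quoted above only show the submission side: on failure the request is
queued on fsvq->end_reqs and dispatch_work is kicked, but the body of the worker
itself is not quoted anywhere in this thread. Below is a minimal sketch of what
such a worker could look like; the fsvq->fud->fc dereference and the loop shape
are assumptions for illustration, not quoted code from the patch.

/*
 * Illustrative sketch only: drain fsvq->end_reqs from worker context so
 * fuse_request_end() never runs in the submission path. The real
 * virtio_fs_request_dispatch_work() is not quoted in this thread, and the
 * fsvq->fud->fc access here is an assumption about the struct layout.
 */
static void virtio_fs_request_dispatch_work(struct work_struct *work)
{
	struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
						 dispatch_work.work);
	struct fuse_conn *fc = fsvq->fud->fc;
	struct fuse_req *req;

	for (;;) {
		spin_lock(&fsvq->lock);
		req = list_first_entry_or_null(&fsvq->end_reqs,
					       struct fuse_req, list);
		if (!req) {
			spin_unlock(&fsvq->lock);
			return;
		}
		list_del_init(&req->list);
		spin_unlock(&fsvq->lock);

		/* Worker context: safe to complete the request here */
		fuse_request_end(fc, req);
	}
}

With the change agreed above, the submission side would schedule the worker with
no delay, e.g. schedule_delayed_work(&fsvq->dispatch_work, 0): completion is
still deferred out of the submission context, but without an artificial 1 ms
latency.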



Thread overview:

2019-10-15 17:46 [PATCH 0/5] virtiofs: Fix couple of deadlocks Vivek Goyal
2019-10-15 17:46 ` [PATCH 1/5] virtiofs: Do not end request in submission context Vivek Goyal
2019-10-21  8:03   ` Miklos Szeredi
2019-10-21 11:52     ` Vivek Goyal [this message]
2019-10-21 13:58       ` Miklos Szeredi
2019-10-15 17:46 ` [PATCH 2/5] virtiofs: No need to check fpq->connected state Vivek Goyal
2019-10-15 17:46 ` [PATCH 3/5] virtiofs: Set FR_SENT flag only after request has been sent Vivek Goyal
2019-10-15 17:46 ` [PATCH 4/5] virtiofs: Count pending forgets as in_flight forgets Vivek Goyal
2019-10-15 17:46 ` [PATCH 5/5] virtiofs: Retry request submission from worker context Vivek Goyal
2019-10-21  8:15   ` Miklos Szeredi
2019-10-21 13:01     ` Vivek Goyal
