From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com,
	vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 11/13] virtiofsd: Shutdown notification queue in the end
Date: Thu, 30 Sep 2021 11:30:35 -0400
Message-ID: <20210930153037.1194279-12-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>

So far we have not had any notion of cross-queue traffic; we receive a
request on a queue and send the response back on the same queue. So if
a request is being processed and a queue stop request comes in at the
same time, we wait for all pending requests to finish, and then the
queue is stopped and its associated data structures are cleaned up.

But with the notification queue it is now possible that we receive a
locking request on a request queue and send the notification back on a
different queue (the notification queue). This means we need to make
sure that the notification queue has not already been shut down, and is
not being shut down in parallel, while we are trying to send a
notification back. Otherwise we end up sending on a queue whose state
has already been torn down.

One way to solve this problem is to stop the notification queue last:
first stop the hiprio queue and all request queues. That way, by the
time we try to stop the notification queue, we know that no other
request can still be in progress that might try to send something on it
(see the sketch below).
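
A condensed sketch of that ordering, taken from stop_all_queues() in
the diff below (log messages omitted, explanatory comments added):

    static void stop_all_queues(struct fv_VuDev *vud)
    {
        struct fuse_session *se = vud->se;

        /*
         * First pass: stop the hiprio queue and all request queues,
         * skipping the notification queue (qidx 1) when notifications
         * are enabled.
         */
        for (int i = 0; i < vud->nqueues; i++) {
            if (!vud->qi[i]) {
                continue;   /* queue already stopped */
            }
            if (se->notify_enabled && i == 1) {
                continue;   /* keep the notification queue for last */
            }
            fv_queue_cleanup_thread(vud, i);
        }

        /*
         * Second pass: no request can still be in flight, so it is now
         * safe to stop the notification queue.
         */
        if (se->notify_enabled) {
            fv_queue_cleanup_thread(vud, 1);
        }
    }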

The problem is that currently we have no control over the order in
which queues are stopped. If there were a notion of the whole device
being stopped, we could decide the order in which to stop the queues.

Stefan mentioned that there is a message to stop the whole device,
VHOST_USER_SET_STATUS, but it is not implemented in libvhost-user yet.
Also, we probably could not move away from the per-queue stop logic we
have right now.

As an alternative, he suggested that stopping all queues when qidx 0 is
being stopped should be fine, and that this would also solve the
notification queue shutdown ordering issue.

So in this patch I shut down all queues when queue 0 is being shut
down, and I also change the shutdown order so that the notification
queue is shut down last. A condensed view of the resulting stop path
follows.
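
Condensed from the fv_queue_set_started() hunk in the diff below
(surrounding context elided, comments reworded). Since stopping queue 0
now tears down every queue, a later per-queue stop callback can find
its queue already gone, which the new early return in
fv_queue_cleanup_thread() guards against:

    /* Inside fv_queue_set_started(), on the queue stop path */
    vu_dispatch_unlock(vud);
    if (qidx == 0) {
        /*
         * Treat queue 0 being stopped as the whole device being shut
         * down: stop every queue, notification queue last.
         */
        stop_all_queues(vud);
    } else {
        fv_queue_cleanup_thread(vud, qidx);
    }
    vu_dispatch_wrlock(vud);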

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 tools/virtiofsd/fuse_virtio.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index c67c2e0e7a..a87e88e286 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -826,6 +826,11 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
     assert(qidx < vud->nqueues);
     ourqi = vud->qi[qidx];
 
+    /* Queue is already stopped */
+    if (!ourqi) {
+        return;
+    }
+
     /* qidx == 1 is the notification queue if notifications are enabled */
     if (!se->notify_enabled || qidx != 1) {
         /* Kill the thread */
@@ -847,14 +852,25 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
 
 static void stop_all_queues(struct fv_VuDev *vud)
 {
+    struct fuse_session *se = vud->se;
+
     for (int i = 0; i < vud->nqueues; i++) {
         if (!vud->qi[i]) {
             continue;
         }
 
+        /* Shutdown notification queue in the end */
+        if (se->notify_enabled && i == 1) {
+            continue;
+        }
         fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, i);
         fv_queue_cleanup_thread(vud, i);
     }
+
+    if (se->notify_enabled) {
+        fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, 1);
+        fv_queue_cleanup_thread(vud, 1);
+    }
 }
 
 /* Callback from libvhost-user on start or stop of a queue */
@@ -934,7 +950,16 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
          * the queue thread doesn't block in virtio_send_msg().
          */
         vu_dispatch_unlock(vud);
-        fv_queue_cleanup_thread(vud, qidx);
+
+        /*
+         * If queue 0 is being shutdown, treat it as if device is being
+         * shutdown and stop all queues.
+         */
+        if (qidx == 0) {
+            stop_all_queues(vud);
+        } else {
+            fv_queue_cleanup_thread(vud, qidx);
+        }
         vu_dispatch_wrlock(vud);
     }
 }
-- 
2.31.1


