* [PATCH V2 0/8] vhost flush cleanups
@ 2022-05-15 20:29 Mike Christie
  2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
                   ` (9 more replies)
  0 siblings, 10 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

The following patches are Andrey Ryabinin's flush cleanups and some
from me. They reduce the number of flush calls and remove some bogus
ones where we don't even have a worker running anymore or they were
based on outdated or incorrect assumptions.

Jason, for the patches you gave me an explicit acked/reviewed tag I
added it. For the replies where I answered your review questions and
you only replied with an affirmative reply I did not add a tag,
because I was not 100% sure what you wanted to do.

These patches will be used in the vhost threading patches which I think
will make progress again now that I have Christian's email figured out
:) (he had moved to Microsoft but I've been using the Ubuntu address).
I think the patches are ok cleanups in general so I thought they could
get merged separately if people agree.

V2:
- Added patch to rename vhost_work_dev_flush to just vhost_dev_flush
to handle review comment from Jason about the naming not being so
great.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:14   ` Jason Wang
  2022-05-17 13:56   ` Stefano Garzarella
  2022-05-15 20:29 ` [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop Mike Christie
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

From: Andrey Ryabinin <arbn@yandex-team.com>

vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush().
It gives the wrong impression that we are doing some work over vhost_poll,
while in fact it flushes vhost_poll->dev.
It only complicates understanding of the code and leads to mistakes
like flushing the same vhost_dev several times in a row.

Just remove vhost_poll_flush() and call vhost_work_dev_flush() directly.

Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
[merge vhost_poll_flush removal from Stefano Garzarella]
Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/net.c   |  4 ++--
 drivers/vhost/test.c  |  2 +-
 drivers/vhost/vhost.c | 12 ++----------
 drivers/vhost/vhost.h |  1 -
 drivers/vhost/vsock.c |  2 +-
 5 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 792ab5f23647..4e55ad8c942a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1376,8 +1376,8 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
 
 static void vhost_net_flush_vq(struct vhost_net *n, int index)
 {
-	vhost_poll_flush(n->poll + index);
-	vhost_poll_flush(&n->vqs[index].vq.poll);
+	vhost_work_dev_flush(n->poll[index].dev);
+	vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
 }
 
 static void vhost_net_flush(struct vhost_net *n)
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 05740cba1cd8..f0ac9e35f5d6 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
 
 static void vhost_test_flush_vq(struct vhost_test *n, int index)
 {
-	vhost_poll_flush(&n->vqs[index].poll);
+	vhost_work_dev_flush(n->vqs[index].poll.dev);
 }
 
 static void vhost_test_flush(struct vhost_test *n)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d02173fb290c..1d84a2818c6f 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -245,14 +245,6 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
 }
 EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
 
-/* Flush any work that has been scheduled. When calling this, don't hold any
- * locks that are also used by the callback. */
-void vhost_poll_flush(struct vhost_poll *poll)
-{
-	vhost_work_dev_flush(poll->dev);
-}
-EXPORT_SYMBOL_GPL(vhost_poll_flush);
-
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
 	if (!dev->worker)
@@ -663,7 +655,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
 	for (i = 0; i < dev->nvqs; ++i) {
 		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
 			vhost_poll_stop(&dev->vqs[i]->poll);
-			vhost_poll_flush(&dev->vqs[i]->poll);
+			vhost_work_dev_flush(dev->vqs[i]->poll.dev);
 		}
 	}
 }
@@ -1719,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 	mutex_unlock(&vq->mutex);
 
 	if (pollstop && vq->handle_kick)
-		vhost_poll_flush(&vq->poll);
+		vhost_work_dev_flush(vq->poll.dev);
 	return r;
 }
 EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 638bb640d6b4..aeb8e1ad1496 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -44,7 +44,6 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 		     __poll_t mask, struct vhost_dev *dev);
 int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
-void vhost_poll_flush(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
 void vhost_work_dev_flush(struct vhost_dev *dev);
 
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index e6c9d41db1de..a4c8ae92a0fb 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -709,7 +709,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
 
 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
 		if (vsock->vqs[i].handle_kick)
-			vhost_poll_flush(&vsock->vqs[i].poll);
+			vhost_work_dev_flush(vsock->vqs[i].poll.dev);
 	vhost_work_dev_flush(&vsock->dev);
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
  2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:16   ` Jason Wang
  2022-05-17 13:57   ` Stefano Garzarella
  2022-05-15 20:29 ` [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls Mike Christie
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

When vhost_work_dev_flush returns, all work queued at that time will have
completed. There is then no need to flush after every vhost_poll_stop
call, and we can move the flush call to after the loop that stops the
pollers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
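For reference, here is roughly how the flush provides that guarantee,
abridged from the vhost_work_dev_flush()/vhost_work_queue() code touched
elsewhere in this series (helper bodies shortened; the exact code in
vhost.c may differ slightly):

struct vhost_flush_struct {
	struct vhost_work work;
	struct completion wait_event;
};

static void vhost_flush_work(struct vhost_work *work)
{
	struct vhost_flush_struct *s;

	/* runs on the worker thread, after everything queued before it */
	s = container_of(work, struct vhost_flush_struct, work);
	complete(&s->wait_event);
}

void vhost_work_dev_flush(struct vhost_dev *dev)
{
	struct vhost_flush_struct flush;

	if (dev->worker) {
		init_completion(&flush.wait_event);
		vhost_work_init(&flush.work, vhost_flush_work);

		/* the worker runs works in queueing order, so once the
		 * flush work completes, all earlier queued work has run */
		vhost_work_queue(dev, &flush.work);
		wait_for_completion(&flush.wait_event);
	}
}

Since there is a single work list per device, one device-wide flush after
the loop below gives the same guarantee as a flush after every
vhost_poll_stop() call.
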
 drivers/vhost/vhost.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 1d84a2818c6f..9f8de04bb673 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -653,11 +653,11 @@ void vhost_dev_stop(struct vhost_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->nvqs; ++i) {
-		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
+		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick)
 			vhost_poll_stop(&dev->vqs[i]->poll);
-			vhost_work_dev_flush(dev->vqs[i]->poll.dev);
-		}
 	}
+
+	vhost_work_dev_flush(dev);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_stop);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
  2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
  2022-05-15 20:29 ` [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:22   ` Jason Wang
  2022-05-17 14:06   ` Stefano Garzarella
  2022-05-15 20:29 ` [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq() Mike Christie
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

From: Andrey Ryabinin <arbn@yandex-team.com>

vhost_net_flush_vq() calls vhost_work_dev_flush() twice passing
vhost_dev pointer obtained via 'n->poll[index].dev' and
'n->vqs[index].vq.poll.dev'. This is actually the same pointer,
initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init().

Remove vhost_net_flush_vq() and call vhost_work_dev_flush() directly.
Do the flush only once instead of making several flush calls in a row,
which seems rather useless.

Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
[drop vhost_dev forward declaration in vhost.h]
Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
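To see why the two pointers are the same, here is the relevant part of
the init chain, heavily abridged (argument lists shortened; see
vhost_net_open()/vhost_dev_init()/vhost_poll_init() for the real code):

void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
		     __poll_t mask, struct vhost_dev *dev)
{
	...
	/* each poll remembers the device it belongs to */
	poll->dev = dev;
}

static int vhost_net_open(struct inode *inode, struct file *f)
{
	struct vhost_net *n = kvmalloc(...);
	struct vhost_dev *dev = &n->dev;

	/* vhost_dev_init() calls vhost_poll_init(&vq->poll, ..., dev) for
	 * every vq, so n->vqs[i].vq.poll.dev == &n->dev */
	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX, ...);

	/* the net tx/rx polls get the same dev, so n->poll[i].dev == &n->dev */
	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev);
	vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev);
	...
}

Flushing either pointer therefore flushes the same per-device work list,
so a single vhost_work_dev_flush(&n->dev) covers both.
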
 drivers/vhost/net.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 4e55ad8c942a..047b7b05109a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1374,16 +1374,9 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
 	*rx_sock = vhost_net_stop_vq(n, &n->vqs[VHOST_NET_VQ_RX].vq);
 }
 
-static void vhost_net_flush_vq(struct vhost_net *n, int index)
-{
-	vhost_work_dev_flush(n->poll[index].dev);
-	vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
-}
-
 static void vhost_net_flush(struct vhost_net *n)
 {
-	vhost_net_flush_vq(n, VHOST_NET_VQ_TX);
-	vhost_net_flush_vq(n, VHOST_NET_VQ_RX);
+	vhost_work_dev_flush(&n->dev);
 	if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
 		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 		n->tx_flush = true;
@@ -1573,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	}
 
 	if (oldsock) {
-		vhost_net_flush_vq(n, index);
+		vhost_work_dev_flush(&n->dev);
 		sockfd_put(oldsock);
 	}
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq()
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (2 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:23   ` Jason Wang
  2022-05-17 14:11   ` Stefano Garzarella
  2022-05-15 20:29 ` [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush() Mike Christie
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

From: Andrey Ryabinin <arbn@yandex-team.com>

vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush()
which seems to have no value. It's just easier to call vhost_work_dev_flush()
directly. Besides, there is no point in obtaining the vhost_dev pointer
via 'n->vqs[index].poll.dev' when we can just use &n->dev.
They are the same pointer, see vhost_test_open()/vhost_dev_init().

Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/test.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index f0ac9e35f5d6..837148d0a6a8 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -144,14 +144,9 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
 	*privatep = vhost_test_stop_vq(n, n->vqs + VHOST_TEST_VQ);
 }
 
-static void vhost_test_flush_vq(struct vhost_test *n, int index)
-{
-	vhost_work_dev_flush(n->vqs[index].poll.dev);
-}
-
 static void vhost_test_flush(struct vhost_test *n)
 {
-	vhost_test_flush_vq(n, VHOST_TEST_VQ);
+	vhost_work_dev_flush(&n->dev);
 }
 
 static int vhost_test_release(struct inode *inode, struct file *f)
@@ -210,7 +205,7 @@ static long vhost_test_run(struct vhost_test *n, int test)
 			goto err;
 
 		if (oldpriv) {
-			vhost_test_flush_vq(n, index);
+			vhost_test_flush(n, index);
 		}
 	}
 
@@ -303,7 +298,7 @@ static long vhost_test_set_backend(struct vhost_test *n, unsigned index, int fd)
 	mutex_unlock(&vq->mutex);
 
 	if (enable) {
-		vhost_test_flush_vq(n, index);
+		vhost_test_flush(n);
 	}
 
 	mutex_unlock(&n->dev.mutex);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush()
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (3 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq() Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:23   ` Jason Wang
  2022-05-15 20:29 ` [PATCH V2 6/8] vhost-scsi: drop flush after vhost_dev_cleanup Mike Christie
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

From: Andrey Ryabinin <arbn@yandex-team.com>

vhost_vsock_flush() calls vhost_work_dev_flush(vsock->vqs[i].poll.dev)
before vhost_work_dev_flush(&vsock->dev). This seems pointless,
as vsock->vqs[i].poll.dev is the same as &vsock->dev and several flushes
in a row don't do anything useful; one is enough.

Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vsock.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index a4c8ae92a0fb..96be63697117 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -705,11 +705,6 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 
 static void vhost_vsock_flush(struct vhost_vsock *vsock)
 {
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
-		if (vsock->vqs[i].handle_kick)
-			vhost_work_dev_flush(vsock->vqs[i].poll.dev);
 	vhost_work_dev_flush(&vsock->dev);
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 6/8] vhost-scsi: drop flush after vhost_dev_cleanup
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (4 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush() Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-15 20:29 ` [PATCH V2 7/8] vhost-test: " Mike Christie
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

The flush after vhost_dev_cleanup is not needed because:

1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.

2. It's not needed for the re-queue case. vhost_scsi_evt_handle_kick grabs
the mutex and, if the backend is NULL, will return without queueing any work.
vhost_scsi_clear_endpoint sets the backend to NULL under the vq->mutex,
then drops the mutex and does a flush. So we know that once
vhost_scsi_clear_endpoint has dropped the mutex after clearing the backend,
no evt related work will be able to requeue. That flush then makes sure
any already queued evts are run before we return.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
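A shortened sketch of why the flush is a no-op at that point, abridged
from vhost.c (the worker checks are visible in the other diffs in this
series; elided lines marked with ...):

void vhost_dev_cleanup(struct vhost_dev *dev)
{
	...
	if (dev->worker) {
		kthread_stop(dev->worker);
		dev->worker = NULL;
	}
	...
}

/* after that, both the flush and queue paths bail out immediately */
void vhost_work_dev_flush(struct vhost_dev *dev)
{
	struct vhost_flush_struct flush;

	if (dev->worker) {	/* false once vhost_dev_cleanup() has run */
		...
	}
}

void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
	if (!dev->worker)	/* nothing can be queued anymore either */
		return;
	...
}
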
 drivers/vhost/scsi.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 532e204f2b1b..94535c813ef7 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1827,8 +1827,6 @@ static int vhost_scsi_release(struct inode *inode, struct file *f)
 	vhost_scsi_clear_endpoint(vs, &t);
 	vhost_dev_stop(&vs->dev);
 	vhost_dev_cleanup(&vs->dev);
-	/* Jobs can re-queue themselves in evt kick handler. Do extra flush. */
-	vhost_scsi_flush(vs);
 	kfree(vs->dev.vqs);
 	kvfree(vs);
 	return 0;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 7/8] vhost-test: drop flush after vhost_dev_cleanup
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (5 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 6/8] vhost-scsi: drop flush after vhost_dev_cleanup Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:28   ` Jason Wang
  2022-05-15 20:29 ` [PATCH V2 8/8] vhost: rename vhost_work_dev_flush Mike Christie
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

The flush after vhost_dev_cleanup is not needed because:

1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.

2. It's not needed. The comment about jobs re-queueing themselves does
not look correct because handle_vq does not requeue work.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/test.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 837148d0a6a8..a63a626a554e 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -158,9 +158,6 @@ static int vhost_test_release(struct inode *inode, struct file *f)
 	vhost_test_flush(n);
 	vhost_dev_stop(&n->dev);
 	vhost_dev_cleanup(&n->dev);
-	/* We do an extra flush before freeing memory,
-	 * since jobs can re-queue themselves. */
-	vhost_test_flush(n);
 	kfree(n->dev.vqs);
 	kfree(n);
 	return 0;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 8/8] vhost: rename vhost_work_dev_flush
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (6 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 7/8] vhost-test: " Mike Christie
@ 2022-05-15 20:29 ` Mike Christie
  2022-05-17  3:28   ` Jason Wang
  2022-05-17 14:18   ` Stefano Garzarella
  2022-05-17  3:13 ` [PATCH V2 0/8] vhost flush cleanups Jason Wang
  2022-05-17 14:20 ` Stefano Garzarella
  9 siblings, 2 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-15 20:29 UTC (permalink / raw)
  To: stefanha, jasowang, mst, sgarzare, virtualization, arbn

This patch renames vhost_work_dev_flush to just vhost_dev_flush to
reflect that it flushes everything on the device and that drivers
don't know/care that polls are based on vhost_works. Drivers just
flush the entire device; polls, works for vhost-scsi management TMFs,
IO net virtqueues, etc. are all flushed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/net.c   |  4 ++--
 drivers/vhost/scsi.c  |  2 +-
 drivers/vhost/test.c  |  2 +-
 drivers/vhost/vhost.c | 10 +++++-----
 drivers/vhost/vhost.h |  2 +-
 drivers/vhost/vsock.c |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 047b7b05109a..0e4ff6a08f5f 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1376,7 +1376,7 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
 
 static void vhost_net_flush(struct vhost_net *n)
 {
-	vhost_work_dev_flush(&n->dev);
+	vhost_dev_flush(&n->dev);
 	if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
 		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 		n->tx_flush = true;
@@ -1566,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	}
 
 	if (oldsock) {
-		vhost_work_dev_flush(&n->dev);
+		vhost_dev_flush(&n->dev);
 		sockfd_put(oldsock);
 	}
 
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 94535c813ef7..ffd9e6c2ffc1 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1436,7 +1436,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 		kref_put(&old_inflight[i]->kref, vhost_scsi_done_inflight);
 
 	/* Flush both the vhost poll and vhost work */
-	vhost_work_dev_flush(&vs->dev);
+	vhost_dev_flush(&vs->dev);
 
 	/* Wait for all reqs issued before the flush to be finished */
 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index a63a626a554e..87498c1109fe 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
 
 static void vhost_test_flush(struct vhost_test *n)
 {
-	vhost_work_dev_flush(&n->dev);
+	vhost_dev_flush(&n->dev);
 }
 
 static int vhost_test_release(struct inode *inode, struct file *f)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 9f8de04bb673..716a80c61fa9 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
 }
 EXPORT_SYMBOL_GPL(vhost_poll_stop);
 
-void vhost_work_dev_flush(struct vhost_dev *dev)
+void vhost_dev_flush(struct vhost_dev *dev)
 {
 	struct vhost_flush_struct flush;
 
@@ -243,7 +243,7 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
 		wait_for_completion(&flush.wait_event);
 	}
 }
-EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
+EXPORT_SYMBOL_GPL(vhost_dev_flush);
 
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
@@ -530,7 +530,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
 	attach.owner = current;
 	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
 	vhost_work_queue(dev, &attach.work);
-	vhost_work_dev_flush(dev);
+	vhost_dev_flush(dev);
 	return attach.ret;
 }
 
@@ -657,7 +657,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
 			vhost_poll_stop(&dev->vqs[i]->poll);
 	}
 
-	vhost_work_dev_flush(dev);
+	vhost_dev_flush(dev);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_stop);
 
@@ -1711,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 	mutex_unlock(&vq->mutex);
 
 	if (pollstop && vq->handle_kick)
-		vhost_work_dev_flush(vq->poll.dev);
+		vhost_dev_flush(vq->poll.dev);
 	return r;
 }
 EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index aeb8e1ad1496..d02adf1b2bf8 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -45,7 +45,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
-void vhost_work_dev_flush(struct vhost_dev *dev);
+void vhost_dev_flush(struct vhost_dev *dev);
 
 struct vhost_log {
 	u64 addr;
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 96be63697117..368330417bde 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -705,7 +705,7 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 
 static void vhost_vsock_flush(struct vhost_vsock *vsock)
 {
-	vhost_work_dev_flush(&vsock->dev);
+	vhost_dev_flush(&vsock->dev);
 }
 
 static void vhost_vsock_reset_orphans(struct sock *sk)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 0/8] vhost flush cleanups
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (7 preceding siblings ...)
  2022-05-15 20:29 ` [PATCH V2 8/8] vhost: rename vhost_work_dev_flush Mike Christie
@ 2022-05-17  3:13 ` Jason Wang
  2022-05-17 14:20 ` Stefano Garzarella
  9 siblings, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:13 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> The following patches are Andrey Ryabinin's flush cleanups and some
> from me. They reduce the number of flush calls and remove some bogus
> ones where we don't even have a worker running anymore or they were
> based on outdated or incorrect assumptions.
>
> Jason, for the patches you gave me an explicit acked/reviewed tag I
> added it. For the replies where I answered your review questions and
> you only replied with an affirmative reply I did not add a tag,
> because I was not 100% sure what you wanted to do.


Fine, I will go through the code.

Thanks


>
> These patches will be used in the vhost threading patches which I think
> will make progress again now that I have Christian's email figured out
> :) (he had moved to microsoft but I've been using the ubuntu address).
> I think the patches are ok cleanups in general so I thought they could
> get merged separately if people agree.
>
> V2:
> - Added patch to rename vhost_work_dev_flush to just vhost_dev_flush
> to handle review comment from Jason about the naming not being so
> great.
>
>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper
  2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
@ 2022-05-17  3:14   ` Jason Wang
  2022-05-17 13:56   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:14 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> From: Andrey Ryabinin <arbn@yandex-team.com>
>
> vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush().
> It gives the wrong impression that we are doing some work over vhost_poll,
> while in fact it flushes vhost_poll->dev.
> It only complicates understanding of the code and leads to mistakes
> like flushing the same vhost_dev several times in a row.
>
> Just remove vhost_poll_flush() and call vhost_work_dev_flush() directly.
>
> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
> [merge vhost_poll_flush removal from Stefano Garzarella]
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/net.c   |  4 ++--
>   drivers/vhost/test.c  |  2 +-
>   drivers/vhost/vhost.c | 12 ++----------
>   drivers/vhost/vhost.h |  1 -
>   drivers/vhost/vsock.c |  2 +-
>   5 files changed, 6 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 792ab5f23647..4e55ad8c942a 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1376,8 +1376,8 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
>   
>   static void vhost_net_flush_vq(struct vhost_net *n, int index)
>   {
> -	vhost_poll_flush(n->poll + index);
> -	vhost_poll_flush(&n->vqs[index].vq.poll);
> +	vhost_work_dev_flush(n->poll[index].dev);
> +	vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
>   }
>   
>   static void vhost_net_flush(struct vhost_net *n)
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index 05740cba1cd8..f0ac9e35f5d6 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
>   
>   static void vhost_test_flush_vq(struct vhost_test *n, int index)
>   {
> -	vhost_poll_flush(&n->vqs[index].poll);
> +	vhost_work_dev_flush(n->vqs[index].poll.dev);
>   }
>   
>   static void vhost_test_flush(struct vhost_test *n)
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index d02173fb290c..1d84a2818c6f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -245,14 +245,6 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
>   }
>   EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
>   
> -/* Flush any work that has been scheduled. When calling this, don't hold any
> - * locks that are also used by the callback. */
> -void vhost_poll_flush(struct vhost_poll *poll)
> -{
> -	vhost_work_dev_flush(poll->dev);
> -}
> -EXPORT_SYMBOL_GPL(vhost_poll_flush);
> -
>   void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>   {
>   	if (!dev->worker)
> @@ -663,7 +655,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
>   	for (i = 0; i < dev->nvqs; ++i) {
>   		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
>   			vhost_poll_stop(&dev->vqs[i]->poll);
> -			vhost_poll_flush(&dev->vqs[i]->poll);
> +			vhost_work_dev_flush(dev->vqs[i]->poll.dev);
>   		}
>   	}
>   }
> @@ -1719,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
>   	mutex_unlock(&vq->mutex);
>   
>   	if (pollstop && vq->handle_kick)
> -		vhost_poll_flush(&vq->poll);
> +		vhost_work_dev_flush(vq->poll.dev);
>   	return r;
>   }
>   EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 638bb640d6b4..aeb8e1ad1496 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -44,7 +44,6 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
>   		     __poll_t mask, struct vhost_dev *dev);
>   int vhost_poll_start(struct vhost_poll *poll, struct file *file);
>   void vhost_poll_stop(struct vhost_poll *poll);
> -void vhost_poll_flush(struct vhost_poll *poll);
>   void vhost_poll_queue(struct vhost_poll *poll);
>   void vhost_work_dev_flush(struct vhost_dev *dev);
>   
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index e6c9d41db1de..a4c8ae92a0fb 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -709,7 +709,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
>   
>   	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
>   		if (vsock->vqs[i].handle_kick)
> -			vhost_poll_flush(&vsock->vqs[i].poll);
> +			vhost_work_dev_flush(vsock->vqs[i].poll.dev);
>   	vhost_work_dev_flush(&vsock->dev);
>   }
>   


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop
  2022-05-15 20:29 ` [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop Mike Christie
@ 2022-05-17  3:16   ` Jason Wang
  2022-05-17 13:57   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:16 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> When vhost_work_dev_flush returns all work queued at that time will have
> completed. There is then no need to flush after every vhost_poll_stop
> call, and we can move the flush call to after the loop that stops the
> pollers.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/vhost.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 1d84a2818c6f..9f8de04bb673 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -653,11 +653,11 @@ void vhost_dev_stop(struct vhost_dev *dev)
>   	int i;
>   
>   	for (i = 0; i < dev->nvqs; ++i) {
> -		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
> +		if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick)
>   			vhost_poll_stop(&dev->vqs[i]->poll);
> -			vhost_work_dev_flush(dev->vqs[i]->poll.dev);
> -		}
>   	}
> +
> +	vhost_work_dev_flush(dev);
>   }
>   EXPORT_SYMBOL_GPL(vhost_dev_stop);
>   


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls
  2022-05-15 20:29 ` [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls Mike Christie
@ 2022-05-17  3:22   ` Jason Wang
  2022-05-17 14:06   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:22 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> From: Andrey Ryabinin <arbn@yandex-team.com>
>
> vhost_net_flush_vq() calls vhost_work_dev_flush() twice passing
> vhost_dev pointer obtained via 'n->poll[index].dev' and
> 'n->vqs[index].vq.poll.dev'. This is actually the same pointer,
> initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init()
>
> Remove vhost_net_flush_vq() and call vhost_work_dev_flush() directly.
> Do the flushes only once instead of several flush calls in a row
> which seems rather useless.
>
> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
> [drop vhost_dev forward declaration in vhost.h]
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/net.c | 11 ++---------
>   1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 4e55ad8c942a..047b7b05109a 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1374,16 +1374,9 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
>   	*rx_sock = vhost_net_stop_vq(n, &n->vqs[VHOST_NET_VQ_RX].vq);
>   }
>   
> -static void vhost_net_flush_vq(struct vhost_net *n, int index)
> -{
> -	vhost_work_dev_flush(n->poll[index].dev);
> -	vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
> -}
> -
>   static void vhost_net_flush(struct vhost_net *n)
>   {
> -	vhost_net_flush_vq(n, VHOST_NET_VQ_TX);
> -	vhost_net_flush_vq(n, VHOST_NET_VQ_RX);
> +	vhost_work_dev_flush(&n->dev);
>   	if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
>   		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
>   		n->tx_flush = true;
> @@ -1573,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>   	}
>   
>   	if (oldsock) {
> -		vhost_net_flush_vq(n, index);
> +		vhost_work_dev_flush(&n->dev);
>   		sockfd_put(oldsock);
>   	}
>   


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq()
  2022-05-15 20:29 ` [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq() Mike Christie
@ 2022-05-17  3:23   ` Jason Wang
  2022-05-17 14:11   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:23 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> From: Andrey Ryabinin <arbn@yandex-team.com>
>
> vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush()
> which seems to have no value. It's just easier to call vhost_work_dev_flush()
> directly. Besides, there is no point in obtaining the vhost_dev pointer
> via 'n->vqs[index].poll.dev' when we can just use &n->dev.
> They are the same pointer, see vhost_test_open()/vhost_dev_init().
>
> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/test.c | 11 +++--------
>   1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index f0ac9e35f5d6..837148d0a6a8 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -144,14 +144,9 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
>   	*privatep = vhost_test_stop_vq(n, n->vqs + VHOST_TEST_VQ);
>   }
>   
> -static void vhost_test_flush_vq(struct vhost_test *n, int index)
> -{
> -	vhost_work_dev_flush(n->vqs[index].poll.dev);
> -}
> -
>   static void vhost_test_flush(struct vhost_test *n)
>   {
> -	vhost_test_flush_vq(n, VHOST_TEST_VQ);
> +	vhost_work_dev_flush(&n->dev);
>   }
>   
>   static int vhost_test_release(struct inode *inode, struct file *f)
> @@ -210,7 +205,7 @@ static long vhost_test_run(struct vhost_test *n, int test)
>   			goto err;
>   
>   		if (oldpriv) {
> -			vhost_test_flush_vq(n, index);
> +			vhost_test_flush(n, index);
>   		}
>   	}
>   
> @@ -303,7 +298,7 @@ static long vhost_test_set_backend(struct vhost_test *n, unsigned index, int fd)
>   	mutex_unlock(&vq->mutex);
>   
>   	if (enable) {
> -		vhost_test_flush_vq(n, index);
> +		vhost_test_flush(n);
>   	}
>   
>   	mutex_unlock(&n->dev.mutex);


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush()
  2022-05-15 20:29 ` [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush() Mike Christie
@ 2022-05-17  3:23   ` Jason Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:23 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> From: Andrey Ryabinin <arbn@yandex-team.com>
>
> vhost_vsock_flush() calls vhost_work_dev_flush(vsock->vqs[i].poll.dev)
> before vhost_work_dev_flush(&vsock->dev). This seems pointless,
> as vsock->vqs[i].poll.dev is the same as &vsock->dev and several flushes
> in a row don't do anything useful; one is enough.
>
> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/vsock.c | 5 -----
>   1 file changed, 5 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index a4c8ae92a0fb..96be63697117 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -705,11 +705,6 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
>   
>   static void vhost_vsock_flush(struct vhost_vsock *vsock)
>   {
> -	int i;
> -
> -	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
> -		if (vsock->vqs[i].handle_kick)
> -			vhost_work_dev_flush(vsock->vqs[i].poll.dev);
>   	vhost_work_dev_flush(&vsock->dev);
>   }
>   


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 7/8] vhost-test: drop flush after vhost_dev_cleanup
  2022-05-15 20:29 ` [PATCH V2 7/8] vhost-test: " Mike Christie
@ 2022-05-17  3:28   ` Jason Wang
  0 siblings, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:28 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> The flush after vhost_dev_cleanup is not needed because:
>
> 1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
> so the flush call will just return since the device no longer has a worker.
>
> 2. It's not needed. The comment about jobs re-queueing themselves does
> not look correct because handle_vq does not requeue work.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/test.c | 3 ---
>   1 file changed, 3 deletions(-)
>
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index 837148d0a6a8..a63a626a554e 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -158,9 +158,6 @@ static int vhost_test_release(struct inode *inode, struct file *f)
>   	vhost_test_flush(n);
>   	vhost_dev_stop(&n->dev);
>   	vhost_dev_cleanup(&n->dev);
> -	/* We do an extra flush before freeing memory,
> -	 * since jobs can re-queue themselves. */
> -	vhost_test_flush(n);
>   	kfree(n->dev.vqs);
>   	kfree(n);
>   	return 0;


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 8/8] vhost: rename vhost_work_dev_flush
  2022-05-15 20:29 ` [PATCH V2 8/8] vhost: rename vhost_work_dev_flush Mike Christie
@ 2022-05-17  3:28   ` Jason Wang
  2022-05-17 14:18   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2022-05-17  3:28 UTC (permalink / raw)
  To: Mike Christie, stefanha, mst, sgarzare, virtualization, arbn


On 2022/5/16 04:29, Mike Christie wrote:
> This patch renames vhost_work_dev_flush to just vhost_dev_flush to
> reflect that it flushes everything on the device and that drivers
> don't know/care that polls are based on vhost_works. Drivers just
> flush the entire device; polls, works for vhost-scsi management TMFs,
> IO net virtqueues, etc. are all flushed.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/net.c   |  4 ++--
>   drivers/vhost/scsi.c  |  2 +-
>   drivers/vhost/test.c  |  2 +-
>   drivers/vhost/vhost.c | 10 +++++-----
>   drivers/vhost/vhost.h |  2 +-
>   drivers/vhost/vsock.c |  2 +-
>   6 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 047b7b05109a..0e4ff6a08f5f 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1376,7 +1376,7 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
>   
>   static void vhost_net_flush(struct vhost_net *n)
>   {
> -	vhost_work_dev_flush(&n->dev);
> +	vhost_dev_flush(&n->dev);
>   	if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
>   		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
>   		n->tx_flush = true;
> @@ -1566,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>   	}
>   
>   	if (oldsock) {
> -		vhost_work_dev_flush(&n->dev);
> +		vhost_dev_flush(&n->dev);
>   		sockfd_put(oldsock);
>   	}
>   
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 94535c813ef7..ffd9e6c2ffc1 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1436,7 +1436,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>   		kref_put(&old_inflight[i]->kref, vhost_scsi_done_inflight);
>   
>   	/* Flush both the vhost poll and vhost work */
> -	vhost_work_dev_flush(&vs->dev);
> +	vhost_dev_flush(&vs->dev);
>   
>   	/* Wait for all reqs issued before the flush to be finished */
>   	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index a63a626a554e..87498c1109fe 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
>   
>   static void vhost_test_flush(struct vhost_test *n)
>   {
> -	vhost_work_dev_flush(&n->dev);
> +	vhost_dev_flush(&n->dev);
>   }
>   
>   static int vhost_test_release(struct inode *inode, struct file *f)
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 9f8de04bb673..716a80c61fa9 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
>   }
>   EXPORT_SYMBOL_GPL(vhost_poll_stop);
>   
> -void vhost_work_dev_flush(struct vhost_dev *dev)
> +void vhost_dev_flush(struct vhost_dev *dev)
>   {
>   	struct vhost_flush_struct flush;
>   
> @@ -243,7 +243,7 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
>   		wait_for_completion(&flush.wait_event);
>   	}
>   }
> -EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
> +EXPORT_SYMBOL_GPL(vhost_dev_flush);
>   
>   void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>   {
> @@ -530,7 +530,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
>   	attach.owner = current;
>   	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
>   	vhost_work_queue(dev, &attach.work);
> -	vhost_work_dev_flush(dev);
> +	vhost_dev_flush(dev);
>   	return attach.ret;
>   }
>   
> @@ -657,7 +657,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
>   			vhost_poll_stop(&dev->vqs[i]->poll);
>   	}
>   
> -	vhost_work_dev_flush(dev);
> +	vhost_dev_flush(dev);
>   }
>   EXPORT_SYMBOL_GPL(vhost_dev_stop);
>   
> @@ -1711,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
>   	mutex_unlock(&vq->mutex);
>   
>   	if (pollstop && vq->handle_kick)
> -		vhost_work_dev_flush(vq->poll.dev);
> +		vhost_dev_flush(vq->poll.dev);
>   	return r;
>   }
>   EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index aeb8e1ad1496..d02adf1b2bf8 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -45,7 +45,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
>   int vhost_poll_start(struct vhost_poll *poll, struct file *file);
>   void vhost_poll_stop(struct vhost_poll *poll);
>   void vhost_poll_queue(struct vhost_poll *poll);
> -void vhost_work_dev_flush(struct vhost_dev *dev);
> +void vhost_dev_flush(struct vhost_dev *dev);
>   
>   struct vhost_log {
>   	u64 addr;
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 96be63697117..368330417bde 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -705,7 +705,7 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
>   
>   static void vhost_vsock_flush(struct vhost_vsock *vsock)
>   {
> -	vhost_work_dev_flush(&vsock->dev);
> +	vhost_dev_flush(&vsock->dev);
>   }
>   
>   static void vhost_vsock_reset_orphans(struct sock *sk)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper
  2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
  2022-05-17  3:14   ` Jason Wang
@ 2022-05-17 13:56   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 13:56 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:15PM -0500, Mike Christie wrote:
>From: Andrey Ryabinin <arbn@yandex-team.com>
>
>vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush().
>It gives the wrong impression that we are doing some work over vhost_poll,
>while in fact it flushes vhost_poll->dev.
>It only complicates understanding of the code and leads to mistakes
>like flushing the same vhost_dev several times in a row.
>
>Just remove vhost_poll_flush() and call vhost_work_dev_flush() directly.
>
>Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
>[merge vhost_poll_flush removal from Stefano Garzarella]
>Signed-off-by: Mike Christie <michael.christie@oracle.com>
>---
> drivers/vhost/net.c   |  4 ++--
> drivers/vhost/test.c  |  2 +-
> drivers/vhost/vhost.c | 12 ++----------
> drivers/vhost/vhost.h |  1 -
> drivers/vhost/vsock.c |  2 +-
> 5 files changed, 6 insertions(+), 15 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop
  2022-05-15 20:29 ` [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop Mike Christie
  2022-05-17  3:16   ` Jason Wang
@ 2022-05-17 13:57   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 13:57 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:16PM -0500, Mike Christie wrote:
>When vhost_work_dev_flush returns all work queued at that time will have
>completed. There is then no need to flush after every vhost_poll_stop
>call, and we can move the flush call to after the loop that stops the
>pollers.
>
>Signed-off-by: Mike Christie <michael.christie@oracle.com>
>---
> drivers/vhost/vhost.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls
  2022-05-15 20:29 ` [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls Mike Christie
  2022-05-17  3:22   ` Jason Wang
@ 2022-05-17 14:06   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 14:06 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:17PM -0500, Mike Christie wrote:
>From: Andrey Ryabinin <arbn@yandex-team.com>
>
>vhost_net_flush_vq() calls vhost_work_dev_flush() twice passing
>vhost_dev pointer obtained via 'n->poll[index].dev' and
>'n->vqs[index].vq.poll.dev'. This is actually the same pointer,
>initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init()
>
>Remove vhost_net_flush_vq() and call vhost_work_dev_flush() directly.
>Do the flushes only once instead of several flush calls in a row
>which seems rather useless.
>
>Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
>[drop vhost_dev forward declaration in vhost.h]
>Signed-off-by: Mike Christie <michael.christie@oracle.com>
>---
> drivers/vhost/net.c | 11 ++---------
> 1 file changed, 2 insertions(+), 9 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq()
  2022-05-15 20:29 ` [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq() Mike Christie
  2022-05-17  3:23   ` Jason Wang
@ 2022-05-17 14:11   ` Stefano Garzarella
  2022-05-17 16:54     ` Mike Christie
  1 sibling, 1 reply; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 14:11 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:18PM -0500, Mike Christie wrote:
>From: Andrey Ryabinin <arbn@yandex-team.com>
>
>vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush()
>which seems to have no value. It's just easier to call vhost_work_dev_flush()
>directly. Besides, there is no point in obtaining the vhost_dev pointer
>via 'n->vqs[index].poll.dev' when we can just use &n->dev.
>They are the same pointer, see vhost_test_open()/vhost_dev_init().
>
>Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
>Signed-off-by: Mike Christie <michael.christie@oracle.com>
>---
> drivers/vhost/test.c | 11 +++--------
> 1 file changed, 3 insertions(+), 8 deletions(-)
>
>diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
>index f0ac9e35f5d6..837148d0a6a8 100644
>--- a/drivers/vhost/test.c
>+++ b/drivers/vhost/test.c
>@@ -144,14 +144,9 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
> 	*privatep = vhost_test_stop_vq(n, n->vqs + VHOST_TEST_VQ);
> }
>
>-static void vhost_test_flush_vq(struct vhost_test *n, int index)
>-{
>-	vhost_work_dev_flush(n->vqs[index].poll.dev);
>-}
>-
> static void vhost_test_flush(struct vhost_test *n)
> {
>-	vhost_test_flush_vq(n, VHOST_TEST_VQ);
>+	vhost_work_dev_flush(&n->dev);
> }
>
> static int vhost_test_release(struct inode *inode, struct file *f)
>@@ -210,7 +205,7 @@ static long vhost_test_run(struct vhost_test *n, int test)
> 			goto err;
>
> 		if (oldpriv) {
>-			vhost_test_flush_vq(n, index);
>+			vhost_test_flush(n, index);
                                             ^
Should we remove the `index` parameter?

> 		}
> 	}
>
>@@ -303,7 +298,7 @@ static long vhost_test_set_backend(struct vhost_test *n, unsigned index, int fd)
> 	mutex_unlock(&vq->mutex);
>
> 	if (enable) {
>-		vhost_test_flush_vq(n, index);
>+		vhost_test_flush(n);
> 	}
>
> 	mutex_unlock(&n->dev.mutex);
>-- 
>2.25.1
>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 8/8] vhost: rename vhost_work_dev_flush
  2022-05-15 20:29 ` [PATCH V2 8/8] vhost: rename vhost_work_dev_flush Mike Christie
  2022-05-17  3:28   ` Jason Wang
@ 2022-05-17 14:18   ` Stefano Garzarella
  1 sibling, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 14:18 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:22PM -0500, Mike Christie wrote:
>This patch renames vhost_work_dev_flush to just vhost_dev_flush to
>reflect that it flushes everything on the device and that drivers
>don't know/care that polls are based on vhost_works. Drivers just
>flush the entire device; polls, works for vhost-scsi management TMFs,
>IO net virtqueues, etc. are all flushed.
>
>Signed-off-by: Mike Christie <michael.christie@oracle.com>
>---
> drivers/vhost/net.c   |  4 ++--
> drivers/vhost/scsi.c  |  2 +-
> drivers/vhost/test.c  |  2 +-
> drivers/vhost/vhost.c | 10 +++++-----
> drivers/vhost/vhost.h |  2 +-
> drivers/vhost/vsock.c |  2 +-
> 6 files changed, 11 insertions(+), 11 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 0/8] vhost flush cleanups
  2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
                   ` (8 preceding siblings ...)
  2022-05-17  3:13 ` [PATCH V2 0/8] vhost flush cleanups Jason Wang
@ 2022-05-17 14:20 ` Stefano Garzarella
  9 siblings, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2022-05-17 14:20 UTC (permalink / raw)
  To: Mike Christie; +Cc: virtualization, arbn, stefanha, mst

On Sun, May 15, 2022 at 03:29:14PM -0500, Mike Christie wrote:
>The following patches are Andrey Ryabinin's flush cleanups and some
>from me. They reduce the number of flush calls and remove some bogus
>ones where we don't even have a worker running anymore or they were
>based on outdated or incorrect assumptions.
>
>Jason, for the patches you gave me an explicit acked/reviewed tag I
>added it. For the replies where I answered your review questions and
>you only replied with an affirmative reply I did not add a tag,
>because I was not 100% sure what you wanted to do.
>
>These patches will be used in the vhost threading patches which I think
>will make progress again now that I have Christian's email figured out
>:) (he had moved to microsoft but I've been using the ubuntu address).
>I think the patches are ok cleanups in general so I thought they could
>get merged separately if people agree.
>
>V2:
>- Added patch to rename vhost_work_dev_flush to just vhost_dev_flush
>to handle review comment from Jason about the naming not being so
>great.
>
>

Thank you for this cleanup!

I think there is only a small issue to fix in vhost/test.c, otherwise it 
looks good to me :-)

Thanks,
Stefano


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq()
  2022-05-17 14:11   ` Stefano Garzarella
@ 2022-05-17 16:54     ` Mike Christie
  0 siblings, 0 replies; 24+ messages in thread
From: Mike Christie @ 2022-05-17 16:54 UTC (permalink / raw)
  To: Stefano Garzarella; +Cc: virtualization, arbn, stefanha, mst

On 5/17/22 9:11 AM, Stefano Garzarella wrote:
>>
>> static int vhost_test_release(struct inode *inode, struct file *f)
>> @@ -210,7 +205,7 @@ static long vhost_test_run(struct vhost_test *n, int test)
>>             goto err;
>>
>>         if (oldpriv) {
>> -            vhost_test_flush_vq(n, index);
>> +            vhost_test_flush(n, index);
>                                             ^
> Should we remove the `index` parameter?

You are right. I'll fix up and repost. Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread


Thread overview: 24+ messages
2022-05-15 20:29 [PATCH V2 0/8] vhost flush cleanups Mike Christie
2022-05-15 20:29 ` [PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper Mike Christie
2022-05-17  3:14   ` Jason Wang
2022-05-17 13:56   ` Stefano Garzarella
2022-05-15 20:29 ` [PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop Mike Christie
2022-05-17  3:16   ` Jason Wang
2022-05-17 13:57   ` Stefano Garzarella
2022-05-15 20:29 ` [PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls Mike Christie
2022-05-17  3:22   ` Jason Wang
2022-05-17 14:06   ` Stefano Garzarella
2022-05-15 20:29 ` [PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq() Mike Christie
2022-05-17  3:23   ` Jason Wang
2022-05-17 14:11   ` Stefano Garzarella
2022-05-17 16:54     ` Mike Christie
2022-05-15 20:29 ` [PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush() Mike Christie
2022-05-17  3:23   ` Jason Wang
2022-05-15 20:29 ` [PATCH V2 6/8] vhost-scsi: drop flush after vhost_dev_cleanup Mike Christie
2022-05-15 20:29 ` [PATCH V2 7/8] vhost-test: " Mike Christie
2022-05-17  3:28   ` Jason Wang
2022-05-15 20:29 ` [PATCH V2 8/8] vhost: rename vhost_work_dev_flush Mike Christie
2022-05-17  3:28   ` Jason Wang
2022-05-17 14:18   ` Stefano Garzarella
2022-05-17  3:13 ` [PATCH V2 0/8] vhost flush cleanups Jason Wang
2022-05-17 14:20 ` Stefano Garzarella
