* [PATCH 0/4] vhost private_data rcu removal
@ 2013-05-07  6:54 Asias He
  2013-05-07  6:54 ` [PATCH 1/4] vhost-net: Always access vq->private_data under vq mutex Asias He
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Asias He @ 2013-05-07  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

Asias He (4):
  vhost-net: Always access vq->private_data under vq mutex
  vhost-test: Always access vq->private_data under vq mutex
  vhost-scsi: Always access vq->private_data under vq mutex
  vhost: Remove custom vhost rcu usage

 drivers/vhost/net.c   | 37 ++++++++++++++++---------------------
 drivers/vhost/scsi.c  | 17 ++++++-----------
 drivers/vhost/test.c  | 20 +++++++++-----------
 drivers/vhost/vhost.h | 10 ++--------
 4 files changed, 33 insertions(+), 51 deletions(-)

-- 
1.8.1.4


* [PATCH 1/4] vhost-net: Always access vq->private_data under vq mutex
  2013-05-07  6:54 [PATCH 0/4] vhost private_data rcu removal Asias He
@ 2013-05-07  6:54 ` Asias He
  2013-05-07  6:54 ` [PATCH 2/4] vhost-test: " Asias He
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Asias He @ 2013-05-07  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/net.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 2b51e23..b616d9a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -349,12 +349,11 @@ static void handle_tx(struct vhost_net *net)
 	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
 	bool zcopy, zcopy_used;
 
-	/* TODO: check that we are running from vhost_worker? */
-	sock = rcu_dereference_check(vq->private_data, 1);
+	mutex_lock(&vq->mutex);
+	sock = vq->private_data;
 	if (!sock)
-		return;
+		goto out;
 
-	mutex_lock(&vq->mutex);
 	vhost_disable_notify(&net->dev, vq);
 
 	hdr_size = nvq->vhost_hlen;
@@ -463,7 +462,7 @@ static void handle_tx(struct vhost_net *net)
 			break;
 		}
 	}
-
+out:
 	mutex_unlock(&vq->mutex);
 }
 
@@ -572,14 +571,14 @@ static void handle_rx(struct vhost_net *net)
 	s16 headcount;
 	size_t vhost_hlen, sock_hlen;
 	size_t vhost_len, sock_len;
-	/* TODO: check that we are running from vhost_worker? */
-	struct socket *sock = rcu_dereference_check(vq->private_data, 1);
-
-	if (!sock)
-		return;
+	struct socket *sock;
 
 	mutex_lock(&vq->mutex);
+	sock = vq->private_data;
+	if (!sock)
+		goto out;
 	vhost_disable_notify(&net->dev, vq);
+
 	vhost_hlen = nvq->vhost_hlen;
 	sock_hlen = nvq->sock_hlen;
 
@@ -654,7 +653,7 @@ static void handle_rx(struct vhost_net *net)
 			break;
 		}
 	}
-
+out:
 	mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.1.4


* [PATCH 2/4] vhost-test: Always access vq->private_data under vq mutex
  2013-05-07  6:54 [PATCH 0/4] vhost private_data rcu removal Asias He
  2013-05-07  6:54 ` [PATCH 1/4] vhost-net: Always access vq->private_data under vq mutex Asias He
@ 2013-05-07  6:54 ` Asias He
  2013-05-07  6:54 ` [PATCH 3/4] vhost-scsi: " Asias He
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Asias He @ 2013-05-07  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index dc526eb..435b911 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -48,11 +48,12 @@ static void handle_vq(struct vhost_test *n)
 	size_t len, total_len = 0;
 	void *private;
 
-	private = rcu_dereference_check(vq->private_data, 1);
-	if (!private)
-		return;
 
 	mutex_lock(&vq->mutex);
+	private = vq->private_data;
+	if (!private)
+		goto out;
+
 	vhost_disable_notify(&n->dev, vq);
 
 	for (;;) {
@@ -89,7 +90,7 @@ static void handle_vq(struct vhost_test *n)
 			break;
 		}
 	}
-
+out:
 	mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.1.4


* [PATCH 3/4] vhost-scsi: Always access vq->private_data under vq mutex
  2013-05-07  6:54 [PATCH 0/4] vhost private_data rcu removal Asias He
  2013-05-07  6:54 ` [PATCH 1/4] vhost-net: Always access vq->private_data under vq mutex Asias He
  2013-05-07  6:54 ` [PATCH 2/4] vhost-test: " Asias He
@ 2013-05-07  6:54 ` Asias He
  2013-05-07  6:54 ` [PATCH 4/4] vhost: Remove custom vhost rcu usage Asias He
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Asias He @ 2013-05-07  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 5531ebc..d78768b 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -896,19 +896,15 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	int head, ret;
 	u8 target;
 
+	mutex_lock(&vq->mutex);
 	/*
 	 * We can handle the vq only after the endpoint is setup by calling the
 	 * VHOST_SCSI_SET_ENDPOINT ioctl.
-	 *
-	 * TODO: Check that we are running from vhost_worker which acts
-	 * as read-side critical section for vhost kind of RCU.
-	 * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h
 	 */
-	vs_tpg = rcu_dereference_check(vq->private_data, 1);
+	vs_tpg = vq->private_data;
 	if (!vs_tpg)
-		return;
+		goto out;
 
-	mutex_lock(&vq->mutex);
 	vhost_disable_notify(&vs->dev, vq);
 
 	for (;;) {
@@ -1058,6 +1054,7 @@ err_free:
 	vhost_scsi_free_cmd(cmd);
 err_cmd:
 	vhost_scsi_send_bad_target(vs, vq, head, out);
+out:
 	mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.1.4


* [PATCH 4/4] vhost: Remove custom vhost rcu usage
  2013-05-07  6:54 [PATCH 0/4] vhost private_data rcu removal Asias He
                   ` (2 preceding siblings ...)
  2013-05-07  6:54 ` [PATCH 3/4] vhost-scsi: " Asias He
@ 2013-05-07  6:54 ` Asias He
  2013-05-07 11:19 ` [PATCH 0/4] vhost private_data rcu removal Michael S. Tsirkin
  5 siblings, 0 replies; 7+ messages in thread
From: Asias He @ 2013-05-07  6:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

Now, vq->private_data is always accessed under the vq mutex, so there is no
need to play the custom vhost RCU trick anymore.
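
For illustration, the per-driver change to the reader side is roughly the
sketch below. This is only a sketch: fake_vq and handle_vq_sketch are
stand-in names, and the real conversions are the ones in the individual
patches of this series.

#include <linux/mutex.h>

/* Stand-in for struct vhost_virtqueue; only the fields relevant to the
 * locking change are shown. */
struct fake_vq {
	struct mutex mutex;
	void *private_data;	/* plain pointer, protected by mutex */
};

/*
 * Old reader pattern (the vhost worker acted as the RCU read-side
 * critical section, with vhost_work flush standing in for
 * synchronize_rcu()):
 *
 *	priv = rcu_dereference_check(vq->private_data, 1);
 *	if (!priv)
 *		return;
 *	mutex_lock(&vq->mutex);
 *	...
 *	mutex_unlock(&vq->mutex);
 *
 * New reader pattern: take the vq mutex first, then load the pointer
 * directly.
 */
static void handle_vq_sketch(struct fake_vq *vq)
{
	void *priv;

	mutex_lock(&vq->mutex);
	priv = vq->private_data;
	if (!priv)
		goto out;
	/* ... process the virtqueue ... */
out:
	mutex_unlock(&vq->mutex);
}

/*
 * Writers (backend set/clear) already held vq->mutex; the only change on
 * that side is that plain assignment replaces rcu_assign_pointer():
 *
 *	mutex_lock(&vq->mutex);
 *	vq->private_data = backend;
 *	mutex_unlock(&vq->mutex);
 */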

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/net.c   | 16 ++++++----------
 drivers/vhost/scsi.c  |  6 ++----
 drivers/vhost/test.c  | 11 ++++-------
 drivers/vhost/vhost.h | 10 ++--------
 4 files changed, 14 insertions(+), 29 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index b616d9a..05bdc3c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -15,7 +15,6 @@
 #include <linux/moduleparam.h>
 #include <linux/mutex.h>
 #include <linux/workqueue.h>
-#include <linux/rcupdate.h>
 #include <linux/file.h>
 #include <linux/slab.h>
 
@@ -751,8 +750,7 @@ static int vhost_net_enable_vq(struct vhost_net *n,
 	struct vhost_poll *poll = n->poll + (nvq - n->vqs);
 	struct socket *sock;
 
-	sock = rcu_dereference_protected(vq->private_data,
-					 lockdep_is_held(&vq->mutex));
+	sock = vq->private_data;
 	if (!sock)
 		return 0;
 
@@ -765,10 +763,9 @@ static struct socket *vhost_net_stop_vq(struct vhost_net *n,
 	struct socket *sock;
 
 	mutex_lock(&vq->mutex);
-	sock = rcu_dereference_protected(vq->private_data,
-					 lockdep_is_held(&vq->mutex));
+	sock = vq->private_data;
 	vhost_net_disable_vq(n, vq);
-	rcu_assign_pointer(vq->private_data, NULL);
+	vq->private_data = NULL;
 	mutex_unlock(&vq->mutex);
 	return sock;
 }
@@ -924,8 +921,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	}
 
 	/* start polling new socket */
-	oldsock = rcu_dereference_protected(vq->private_data,
-					    lockdep_is_held(&vq->mutex));
+	oldsock = vq->private_data;
 	if (sock != oldsock) {
 		ubufs = vhost_net_ubuf_alloc(vq,
 					     sock && vhost_sock_zcopy(sock));
@@ -935,7 +931,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 		}
 
 		vhost_net_disable_vq(n, vq);
-		rcu_assign_pointer(vq->private_data, sock);
+		vq->private_data = sock;
 		r = vhost_init_used(vq);
 		if (r)
 			goto err_used;
@@ -969,7 +965,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	return 0;
 
 err_used:
-	rcu_assign_pointer(vq->private_data, oldsock);
+	vq->private_data = oldsock;
 	vhost_net_enable_vq(n, vq);
 	if (ubufs)
 		vhost_net_ubuf_put_and_wait(ubufs);
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index d78768b..de6d817 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1223,9 +1223,8 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
 		       sizeof(vs->vs_vhost_wwpn));
 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
 			vq = &vs->vqs[i].vq;
-			/* Flushing the vhost_work acts as synchronize_rcu */
 			mutex_lock(&vq->mutex);
-			rcu_assign_pointer(vq->private_data, vs_tpg);
+			vq->private_data = vs_tpg;
 			vhost_init_used(vq);
 			mutex_unlock(&vq->mutex);
 		}
@@ -1304,9 +1303,8 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
 	if (match) {
 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
 			vq = &vs->vqs[i].vq;
-			/* Flushing the vhost_work acts as synchronize_rcu */
 			mutex_lock(&vq->mutex);
-			rcu_assign_pointer(vq->private_data, NULL);
+			vq->private_data = NULL;
 			mutex_unlock(&vq->mutex);
 		}
 	}
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 435b911..5f23477 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -13,7 +13,6 @@
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/workqueue.h>
-#include <linux/rcupdate.h>
 #include <linux/file.h>
 #include <linux/slab.h>
 
@@ -139,9 +138,8 @@ static void *vhost_test_stop_vq(struct vhost_test *n,
 	void *private;
 
 	mutex_lock(&vq->mutex);
-	private = rcu_dereference_protected(vq->private_data,
-					 lockdep_is_held(&vq->mutex));
-	rcu_assign_pointer(vq->private_data, NULL);
+	private = vq->private_data;
+	vq->private_data = NULL;
 	mutex_unlock(&vq->mutex);
 	return private;
 }
@@ -205,9 +203,8 @@ static long vhost_test_run(struct vhost_test *n, int test)
 		priv = test ? n : NULL;
 
 		/* start polling new socket */
-		oldpriv = rcu_dereference_protected(vq->private_data,
-						    lockdep_is_held(&vq->mutex));
-		rcu_assign_pointer(vq->private_data, priv);
+		oldpriv = vq->private_data;
+		vq->private_data = priv;
 
 		r = vhost_init_used(&n->vqs[index].vq);
 
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 51aeb5f..2a8c655 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -103,14 +103,8 @@ struct vhost_virtqueue {
 	struct iovec iov[UIO_MAXIOV];
 	struct iovec *indirect;
 	struct vring_used_elem *heads;
-	/* We use a kind of RCU to access private pointer.
-	 * All readers access it from worker, which makes it possible to
-	 * flush the vhost_work instead of synchronize_rcu. Therefore readers do
-	 * not need to call rcu_read_lock/rcu_read_unlock: the beginning of
-	 * vhost_work execution acts instead of rcu_read_lock() and the end of
-	 * vhost_work execution acts instead of rcu_read_unlock().
-	 * Writers use virtqueue mutex. */
-	void __rcu *private_data;
+	/* Protected by virtqueue mutex. */
+	void *private_data;
 	/* Log write descriptors */
 	void __user *log_base;
 	struct vhost_log *log;
-- 
1.8.1.4


* Re: [PATCH 0/4] vhost private_data rcu removal
  2013-05-07  6:54 [PATCH 0/4] vhost private_data rcu removal Asias He
                   ` (4 preceding siblings ...)
@ 2013-05-07 11:19 ` Michael S. Tsirkin
  5 siblings, 0 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2013-05-07 11:19 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Tue, May 07, 2013 at 02:54:32PM +0800, Asias He wrote:
> Asias He (4):
>   vhost-net: Always access vq->private_data under vq mutex
>   vhost-test: Always access vq->private_data under vq mutex
>   vhost-scsi: Always access vq->private_data under vq mutex
>   vhost: Remove custom vhost rcu usage

Looks good, I'll queue this up for 3.11.

>  drivers/vhost/net.c   | 37 ++++++++++++++++---------------------
>  drivers/vhost/scsi.c  | 17 ++++++-----------
>  drivers/vhost/test.c  | 20 +++++++++-----------
>  drivers/vhost/vhost.h | 10 ++--------
>  4 files changed, 33 insertions(+), 51 deletions(-)
> 
> -- 
> 1.8.1.4


