* [PATCH v2 00/11] vhost cleanups
@ 2013-05-06  8:38 Asias He
  2013-05-06  8:38 ` [PATCH v2 01/11] vhost: Remove vhost_enable_zcopy in vhost.h Asias He
                   ` (21 more replies)
  0 siblings, 22 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

MST, this series applies on top of [PATCH 0/2] vhost-net fix ubuf.

Asias He (11):
  vhost: Remove vhost_enable_zcopy in vhost.h
  vhost: Move VHOST_NET_FEATURES to net.c
  vhost: Make vhost a separate module
  vhost: Remove comments for hdr in vhost.h
  vhost: Simplify dev->vqs[i] access
  vhost-net: Cleanup vhost_ubuf and vhost_zcopy
  vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
  vhost-scsi: Rename struct vhost_scsi *s to *vs
  vhost-scsi: Make func indention more consistent
  vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
  vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd

 drivers/vhost/Kconfig  |   8 +
 drivers/vhost/Makefile |   3 +-
 drivers/vhost/net.c    |  64 ++++---
 drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
 drivers/vhost/vhost.c  |  86 +++++++--
 drivers/vhost/vhost.h  |  11 +-
 6 files changed, 361 insertions(+), 281 deletions(-)

-- 
1.8.1.4

* [PATCH v2 01/11] vhost: Remove vhost_enable_zcopy in vhost.h
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` [PATCH v2 02/11] vhost: Move VHOST_NET_FEATURES to net.c Asias He
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

vhost_enable_zcopy() is specific to net.c.

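The lone definition stays in net.c; for context, this is the code in
question (as it appears before patch 06/11 of this series makes it
static):

  /* drivers/vhost/net.c -- the only user of the zero-copy mask */
  static unsigned vhost_zcopy_mask __read_mostly;

  void vhost_enable_zcopy(int vq)
  {
          vhost_zcopy_mask |= 0x1 << vq;
  }
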
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/vhost.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index cc23bc4..076c9ac 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -192,7 +192,4 @@ static inline int vhost_has_feature(struct vhost_dev *dev, int bit)
 	acked_features = rcu_dereference_index_check(dev->acked_features, 1);
 	return acked_features & (1 << bit);
 }
-
-void vhost_enable_zcopy(int vq);
-
 #endif
-- 
1.8.1.4

* [PATCH v2 02/11] vhost: Move VHOST_NET_FEATURES to net.c
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
  2013-05-06  8:38 ` [PATCH v2 01/11] vhost: Remove vhost_enable_zcopy in vhost.h Asias He
  2013-05-06  8:38 ` [PATCH v2 02/11] vhost: Move VHOST_NET_FEATURES to net.c Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` [PATCH v2 03/11] vhost: Make vhost a separate module Asias He
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

vhost.h should not depend on device-specific macros such as
VHOST_NET_F_VIRTIO_NET_HDR and VIRTIO_NET_F_MRG_RXBUF.

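Each device can then build its own feature mask on top of the core one
inside its own .c file. A sketch of the same pattern (the
VIRTIO_SCSI_F_HOTPLUG bit is from scsi.c of roughly this era and is
shown only for illustration):

  enum {
          /* scsi.c: device-specific bits on top of the core set */
          VHOST_SCSI_FEATURES = VHOST_FEATURES |
                                (1ULL << VIRTIO_SCSI_F_HOTPLUG),
  };
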
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/net.c   | 6 ++++++
 drivers/vhost/vhost.h | 3 ---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 354665a..06b2447 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -59,6 +59,12 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
 #define VHOST_DMA_IS_DONE(len) ((len) >= VHOST_DMA_DONE_LEN)
 
 enum {
+	VHOST_NET_FEATURES = VHOST_FEATURES |
+			 (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
+			 (1ULL << VIRTIO_NET_F_MRG_RXBUF),
+};
+
+enum {
 	VHOST_NET_VQ_RX = 0,
 	VHOST_NET_VQ_TX = 1,
 	VHOST_NET_VQ_MAX = 2,
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 076c9ac..6bf81a9 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -178,9 +178,6 @@ enum {
 			 (1ULL << VIRTIO_RING_F_INDIRECT_DESC) |
 			 (1ULL << VIRTIO_RING_F_EVENT_IDX) |
 			 (1ULL << VHOST_F_LOG_ALL),
-	VHOST_NET_FEATURES = VHOST_FEATURES |
-			 (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
-			 (1ULL << VIRTIO_NET_F_MRG_RXBUF),
 };
 
 static inline int vhost_has_feature(struct vhost_dev *dev, int bit)
-- 
1.8.1.4

* [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (2 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  9:53   ` Michael S. Tsirkin
  2013-05-06 10:03   ` Michael S. Tsirkin
  2013-05-06  8:38 ` [PATCH v2 04/11] vhost: Remove comments for hdr in vhost.h Asias He
                   ` (17 subsequent siblings)
  21 siblings, 2 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel

Currently, vhost-net and vhost-scsi share the vhost core code, but
vhost-scsi does so by including the vhost.c file directly.

Making vhost a separate module makes it easier to share the code with
other vhost devices.

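With that, a new vhost device only needs to select VHOST in Kconfig
and call the exported entry points. A minimal, hypothetical consumer
module (the vhost_demo_* names are illustrative, not part of this
series):

  #include <linux/module.h>
  #include "vhost.h"

  static struct vhost_dev demo_dev;
  static struct vhost_virtqueue demo_vq;
  static struct vhost_virtqueue *demo_vqs[] = { &demo_vq };

  static int __init vhost_demo_init(void)
  {
          demo_vq.handle_kick = NULL;     /* no kick handler in this sketch */
          return vhost_dev_init(&demo_dev, demo_vqs, 1);
  }

  static void __exit vhost_demo_exit(void)
  {
          vhost_dev_stop(&demo_dev);
          vhost_dev_cleanup(&demo_dev, false);
  }

  module_init(vhost_demo_init);
  module_exit(vhost_demo_exit);
  MODULE_LICENSE("GPL v2");
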
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/Kconfig  |  8 ++++++++
 drivers/vhost/Makefile |  3 ++-
 drivers/vhost/scsi.c   |  1 -
 drivers/vhost/vhost.c  | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/vhost/vhost.h  |  2 ++
 5 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 8b9226d..017a1e8 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -1,6 +1,7 @@
 config VHOST_NET
 	tristate "Host kernel accelerator for virtio net"
 	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
+	select VHOST
 	select VHOST_RING
 	---help---
 	  This kernel module can be loaded in host kernel to accelerate
@@ -13,6 +14,7 @@ config VHOST_NET
 config VHOST_SCSI
 	tristate "VHOST_SCSI TCM fabric driver"
 	depends on TARGET_CORE && EVENTFD && m
+	select VHOST
 	select VHOST_RING
 	default n
 	---help---
@@ -24,3 +26,9 @@ config VHOST_RING
 	---help---
 	  This option is selected by any driver which needs to access
 	  the host side of a virtio ring.
+
+config VHOST
+	tristate
+	---help---
+	  This option is selected by any driver which needs to access
+	  the core of vhost.
diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index 654e9afb..e0441c3 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -1,7 +1,8 @@
 obj-$(CONFIG_VHOST_NET) += vhost_net.o
-vhost_net-y := vhost.o net.o
+vhost_net-y := net.o
 
 obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
 vhost_scsi-y := scsi.o
 
 obj-$(CONFIG_VHOST_RING) += vringh.o
+obj-$(CONFIG_VHOST)	+= vhost.o
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 5179f7a..2dcb94a 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -49,7 +49,6 @@
 #include <linux/llist.h>
 #include <linux/bitmap.h>
 
-#include "vhost.c"
 #include "vhost.h"
 
 #define TCM_VHOST_VERSION  "v0.1"
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index de9441a..e406d5f 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -25,6 +25,7 @@
 #include <linux/slab.h>
 #include <linux/kthread.h>
 #include <linux/cgroup.h>
+#include <linux/module.h>
 
 #include "vhost.h"
 
@@ -66,6 +67,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
 	work->flushing = 0;
 	work->queue_seq = work->done_seq = 0;
 }
+EXPORT_SYMBOL_GPL(vhost_work_init);
 
 /* Init poll structure */
 void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
@@ -79,6 +81,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 
 	vhost_work_init(&poll->work, fn);
 }
+EXPORT_SYMBOL_GPL(vhost_poll_init);
 
 /* Start polling a file. We add ourselves to file's wait queue. The caller must
  * keep a reference to a file until after vhost_poll_stop is called. */
@@ -101,6 +104,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(vhost_poll_start);
 
 /* Stop polling a file. After this function returns, it becomes safe to drop the
  * file reference. You must also flush afterwards. */
@@ -111,6 +115,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
 		poll->wqh = NULL;
 	}
 }
+EXPORT_SYMBOL_GPL(vhost_poll_stop);
 
 static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
 				unsigned seq)
@@ -123,7 +128,7 @@ static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
 	return left <= 0;
 }
 
-static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
+void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
 {
 	unsigned seq;
 	int flushing;
@@ -138,6 +143,7 @@ static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
 	spin_unlock_irq(&dev->work_lock);
 	BUG_ON(flushing < 0);
 }
+EXPORT_SYMBOL_GPL(vhost_work_flush);
 
 /* Flush any work that has been scheduled. When calling this, don't hold any
  * locks that are also used by the callback. */
@@ -145,6 +151,7 @@ void vhost_poll_flush(struct vhost_poll *poll)
 {
 	vhost_work_flush(poll->dev, &poll->work);
 }
+EXPORT_SYMBOL_GPL(vhost_poll_flush);
 
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
@@ -158,11 +165,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 	}
 	spin_unlock_irqrestore(&dev->work_lock, flags);
 }
+EXPORT_SYMBOL_GPL(vhost_work_queue);
 
 void vhost_poll_queue(struct vhost_poll *poll)
 {
 	vhost_work_queue(poll->dev, &poll->work);
 }
+EXPORT_SYMBOL_GPL(vhost_poll_queue);
 
 static void vhost_vq_reset(struct vhost_dev *dev,
 			   struct vhost_virtqueue *vq)
@@ -310,6 +319,7 @@ long vhost_dev_init(struct vhost_dev *dev,
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_dev_init);
 
 /* Caller should have device mutex */
 long vhost_dev_check_owner(struct vhost_dev *dev)
@@ -317,6 +327,7 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
 	/* Are you the owner? If not, I don't think you mean to do that */
 	return dev->mm == current->mm ? 0 : -EPERM;
 }
+EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
 
 struct vhost_attach_cgroups_struct {
 	struct vhost_work work;
@@ -385,11 +396,13 @@ err_worker:
 err_mm:
 	return err;
 }
+EXPORT_SYMBOL_GPL(vhost_dev_set_owner);
 
 struct vhost_memory *vhost_dev_reset_owner_prepare(void)
 {
 	return kmalloc(offsetof(struct vhost_memory, regions), GFP_KERNEL);
 }
+EXPORT_SYMBOL_GPL(vhost_dev_reset_owner_prepare);
 
 /* Caller should have device mutex */
 void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
@@ -400,6 +413,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
 	memory->nregions = 0;
 	RCU_INIT_POINTER(dev->memory, memory);
 }
+EXPORT_SYMBOL_GPL(vhost_dev_reset_owner);
 
 void vhost_dev_stop(struct vhost_dev *dev)
 {
@@ -412,6 +426,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
 		}
 	}
 }
+EXPORT_SYMBOL_GPL(vhost_dev_stop);
 
 /* Caller should have device mutex if and only if locked is set */
 void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
@@ -452,6 +467,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
 		mmput(dev->mm);
 	dev->mm = NULL;
 }
+EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
 
 static int log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
 {
@@ -537,6 +553,7 @@ int vhost_log_access_ok(struct vhost_dev *dev)
 				       lockdep_is_held(&dev->mutex));
 	return memory_access_ok(dev, mp, 1);
 }
+EXPORT_SYMBOL_GPL(vhost_log_access_ok);
 
 /* Verify access for write logging. */
 /* Caller should have vq mutex and device mutex */
@@ -562,6 +579,7 @@ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
 	return vq_access_ok(vq->dev, vq->num, vq->desc, vq->avail, vq->used) &&
 		vq_log_access_ok(vq->dev, vq, vq->log_base);
 }
+EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
 
 static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
 {
@@ -791,6 +809,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
 		vhost_poll_flush(&vq->poll);
 	return r;
 }
+EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
 
 /* Caller must have device mutex */
 long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
@@ -871,6 +890,7 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
 done:
 	return r;
 }
+EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
 
 static const struct vhost_memory_region *find_region(struct vhost_memory *mem,
 						     __u64 addr, __u32 len)
@@ -962,6 +982,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
 	BUG();
 	return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_log_write);
 
 static int vhost_update_used_flags(struct vhost_virtqueue *vq)
 {
@@ -1013,6 +1034,7 @@ int vhost_init_used(struct vhost_virtqueue *vq)
 	vq->signalled_used_valid = false;
 	return get_user(vq->last_used_idx, &vq->used->idx);
 }
+EXPORT_SYMBOL_GPL(vhost_init_used);
 
 static int translate_desc(struct vhost_dev *dev, u64 addr, u32 len,
 			  struct iovec iov[], int iov_size)
@@ -1289,12 +1311,14 @@ int vhost_get_vq_desc(struct vhost_dev *dev, struct vhost_virtqueue *vq,
 	BUG_ON(!(vq->used_flags & VRING_USED_F_NO_NOTIFY));
 	return head;
 }
+EXPORT_SYMBOL_GPL(vhost_get_vq_desc);
 
 /* Reverse the effect of vhost_get_vq_desc. Useful for error handling. */
 void vhost_discard_vq_desc(struct vhost_virtqueue *vq, int n)
 {
 	vq->last_avail_idx -= n;
 }
+EXPORT_SYMBOL_GPL(vhost_discard_vq_desc);
 
 /* After we've used one of their buffers, we tell them about it.  We'll then
  * want to notify the guest, using eventfd. */
@@ -1343,6 +1367,7 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
 		vq->signalled_used_valid = false;
 	return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_add_used);
 
 static int __vhost_add_used_n(struct vhost_virtqueue *vq,
 			    struct vring_used_elem *heads,
@@ -1412,6 +1437,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
 	}
 	return r;
 }
+EXPORT_SYMBOL_GPL(vhost_add_used_n);
 
 static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 {
@@ -1456,6 +1482,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 	if (vq->call_ctx && vhost_notify(dev, vq))
 		eventfd_signal(vq->call_ctx, 1);
 }
+EXPORT_SYMBOL_GPL(vhost_signal);
 
 /* And here's the combo meal deal.  Supersize me! */
 void vhost_add_used_and_signal(struct vhost_dev *dev,
@@ -1465,6 +1492,7 @@ void vhost_add_used_and_signal(struct vhost_dev *dev,
 	vhost_add_used(vq, head, len);
 	vhost_signal(dev, vq);
 }
+EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
 
 /* multi-buffer version of vhost_add_used_and_signal */
 void vhost_add_used_and_signal_n(struct vhost_dev *dev,
@@ -1474,6 +1502,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
 	vhost_add_used_n(vq, heads, count);
 	vhost_signal(dev, vq);
 }
+EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
 
 /* OK, now we need to know about added descriptors. */
 bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
@@ -1511,6 +1540,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 
 	return avail_idx != vq->avail_idx;
 }
+EXPORT_SYMBOL_GPL(vhost_enable_notify);
 
 /* We don't need to be notified again. */
 void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
@@ -1527,3 +1557,22 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 			       &vq->used->flags, r);
 	}
 }
+EXPORT_SYMBOL_GPL(vhost_disable_notify);
+
+static int __init vhost_init(void)
+{
+	return 0;
+}
+
+static void __exit vhost_exit(void)
+{
+	return;
+}
+
+module_init(vhost_init);
+module_exit(vhost_exit);
+
+MODULE_VERSION("0.0.1");
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Michael S. Tsirkin");
+MODULE_DESCRIPTION("Host kernel accelerator for virtio");
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 6bf81a9..94a80eb 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -46,6 +46,8 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_flush(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
+void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
+long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
 
 struct vhost_log {
 	u64 addr;
-- 
1.8.1.4

* [PATCH v2 04/11] vhost: Remove comments for hdr in vhost.h
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (4 preceding siblings ...)
  2013-05-06  8:38 ` [PATCH v2 04/11] vhost: Remove comments for hdr in vhost.h Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` [PATCH v2 05/11] vhost: Simplify dev->vqs[i] access Asias He
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

This comment was supposed to be removed when hdr was moved into
vhost_net_virtqueue.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/vhost.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 94a80eb..51aeb5f 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -101,9 +101,6 @@ struct vhost_virtqueue {
 	u64 log_addr;
 
 	struct iovec iov[UIO_MAXIOV];
-	/* hdr is used to store the virtio header.
-	 * Since each iovec has >= 1 byte length, we never need more than
-	 * header length entries to store the header. */
 	struct iovec *indirect;
 	struct vring_used_elem *heads;
 	/* We use a kind of RCU to access private pointer.
-- 
1.8.1.4

* [PATCH v2 05/11] vhost: Simplify dev->vqs[i] access
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (5 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` Asias He
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

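Use a local vhost_virtqueue pointer instead of repeating the
dev->vqs[i] dereference in every statement; no functional change.
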
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/vhost.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index e406d5f..74bc779 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -260,17 +260,16 @@ static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
 /* Helper to allocate iovec buffers for all vqs. */
 static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
 {
+	struct vhost_virtqueue *vq;
 	int i;
 
 	for (i = 0; i < dev->nvqs; ++i) {
-		dev->vqs[i]->indirect = kmalloc(sizeof *dev->vqs[i]->indirect *
-					       UIO_MAXIOV, GFP_KERNEL);
-		dev->vqs[i]->log = kmalloc(sizeof *dev->vqs[i]->log * UIO_MAXIOV,
-					  GFP_KERNEL);
-		dev->vqs[i]->heads = kmalloc(sizeof *dev->vqs[i]->heads *
-					    UIO_MAXIOV, GFP_KERNEL);
-		if (!dev->vqs[i]->indirect || !dev->vqs[i]->log ||
-			!dev->vqs[i]->heads)
+		vq = dev->vqs[i];
+		vq->indirect = kmalloc(sizeof *vq->indirect * UIO_MAXIOV,
+				       GFP_KERNEL);
+		vq->log = kmalloc(sizeof *vq->log * UIO_MAXIOV, GFP_KERNEL);
+		vq->heads = kmalloc(sizeof *vq->heads * UIO_MAXIOV, GFP_KERNEL);
+		if (!vq->indirect || !vq->log || !vq->heads)
 			goto err_nomem;
 	}
 	return 0;
@@ -292,6 +291,7 @@ static void vhost_dev_free_iovecs(struct vhost_dev *dev)
 long vhost_dev_init(struct vhost_dev *dev,
 		    struct vhost_virtqueue **vqs, int nvqs)
 {
+	struct vhost_virtqueue *vq;
 	int i;
 
 	dev->vqs = vqs;
@@ -306,15 +306,16 @@ long vhost_dev_init(struct vhost_dev *dev,
 	dev->worker = NULL;
 
 	for (i = 0; i < dev->nvqs; ++i) {
-		dev->vqs[i]->log = NULL;
-		dev->vqs[i]->indirect = NULL;
-		dev->vqs[i]->heads = NULL;
-		dev->vqs[i]->dev = dev;
-		mutex_init(&dev->vqs[i]->mutex);
-		vhost_vq_reset(dev, dev->vqs[i]);
-		if (dev->vqs[i]->handle_kick)
-			vhost_poll_init(&dev->vqs[i]->poll,
-					dev->vqs[i]->handle_kick, POLLIN, dev);
+		vq = dev->vqs[i];
+		vq->log = NULL;
+		vq->indirect = NULL;
+		vq->heads = NULL;
+		vq->dev = dev;
+		mutex_init(&vq->mutex);
+		vhost_vq_reset(dev, vq);
+		if (vq->handle_kick)
+			vhost_poll_init(&vq->poll, vq->handle_kick,
+					POLLIN, dev);
 	}
 
 	return 0;
-- 
1.8.1.4

* [PATCH v2 06/11] vhost-net: Cleanup vhost_ubuf and vhost_zcopy
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (7 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06 10:25   ` Michael S. Tsirkin
  2013-05-06  8:38 ` Asias He
                   ` (12 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

- Rename vhost_ubuf to vhost_net_ubuf
- Rename vhost_zcopy_mask to vhost_net_zcopy_mask
- Make funcs static

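With the vhost core now a separate module exporting symbols under the
vhost_ prefix (patch 03/11), this keeps the net-only helpers in the
vhost_net_ namespace and, where possible, out of the symbol table
entirely.
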
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/net.c | 58 +++++++++++++++++++++++++++--------------------------
 1 file changed, 30 insertions(+), 28 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 06b2447..2b51e23 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -70,7 +70,7 @@ enum {
 	VHOST_NET_VQ_MAX = 2,
 };
 
-struct vhost_ubuf_ref {
+struct vhost_net_ubuf_ref {
 	struct kref kref;
 	wait_queue_head_t wait;
 	struct vhost_virtqueue *vq;
@@ -93,7 +93,7 @@ struct vhost_net_virtqueue {
 	struct ubuf_info *ubuf_info;
 	/* Reference counting for outstanding ubufs.
 	 * Protected by vq mutex. Writers must also take device mutex. */
-	struct vhost_ubuf_ref *ubufs;
+	struct vhost_net_ubuf_ref *ubufs;
 };
 
 struct vhost_net {
@@ -110,24 +110,25 @@ struct vhost_net {
 	bool tx_flush;
 };
 
-static unsigned vhost_zcopy_mask __read_mostly;
+static unsigned vhost_net_zcopy_mask __read_mostly;
 
-void vhost_enable_zcopy(int vq)
+static void vhost_net_enable_zcopy(int vq)
 {
-	vhost_zcopy_mask |= 0x1 << vq;
+	vhost_net_zcopy_mask |= 0x1 << vq;
 }
 
-static void vhost_zerocopy_done_signal(struct kref *kref)
+static void vhost_net_zerocopy_done_signal(struct kref *kref)
 {
-	struct vhost_ubuf_ref *ubufs = container_of(kref, struct vhost_ubuf_ref,
-						    kref);
+	struct vhost_net_ubuf_ref *ubufs;
+
+	ubufs = container_of(kref, struct vhost_net_ubuf_ref, kref);
 	wake_up(&ubufs->wait);
 }
 
-struct vhost_ubuf_ref *vhost_ubuf_alloc(struct vhost_virtqueue *vq,
-					bool zcopy)
+static struct vhost_net_ubuf_ref *
+vhost_net_ubuf_alloc(struct vhost_virtqueue *vq, bool zcopy)
 {
-	struct vhost_ubuf_ref *ubufs;
+	struct vhost_net_ubuf_ref *ubufs;
 	/* No zero copy backend? Nothing to count. */
 	if (!zcopy)
 		return NULL;
@@ -140,14 +141,14 @@ struct vhost_ubuf_ref *vhost_ubuf_alloc(struct vhost_virtqueue *vq,
 	return ubufs;
 }
 
-void vhost_ubuf_put(struct vhost_ubuf_ref *ubufs)
+static void vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs)
 {
-	kref_put(&ubufs->kref, vhost_zerocopy_done_signal);
+	kref_put(&ubufs->kref, vhost_net_zerocopy_done_signal);
 }
 
-void vhost_ubuf_put_and_wait(struct vhost_ubuf_ref *ubufs)
+static void vhost_net_ubuf_put_and_wait(struct vhost_net_ubuf_ref *ubufs)
 {
-	kref_put(&ubufs->kref, vhost_zerocopy_done_signal);
+	kref_put(&ubufs->kref, vhost_net_zerocopy_done_signal);
 	wait_event(ubufs->wait, !atomic_read(&ubufs->kref.refcount));
 	kfree(ubufs);
 }
@@ -159,7 +160,7 @@ static void vhost_net_clear_ubuf_info(struct vhost_net *n)
 	int i;
 
 	for (i = 0; i < n->dev.nvqs; ++i) {
-		zcopy = vhost_zcopy_mask & (0x1 << i);
+		zcopy = vhost_net_zcopy_mask & (0x1 << i);
 		if (zcopy)
 			kfree(n->vqs[i].ubuf_info);
 	}
@@ -171,7 +172,7 @@ int vhost_net_set_ubuf_info(struct vhost_net *n)
 	int i;
 
 	for (i = 0; i < n->dev.nvqs; ++i) {
-		zcopy = vhost_zcopy_mask & (0x1 << i);
+		zcopy = vhost_net_zcopy_mask & (0x1 << i);
 		if (!zcopy)
 			continue;
 		n->vqs[i].ubuf_info = kmalloc(sizeof(*n->vqs[i].ubuf_info) *
@@ -183,7 +184,7 @@ int vhost_net_set_ubuf_info(struct vhost_net *n)
 
 err:
 	while (i--) {
-		zcopy = vhost_zcopy_mask & (0x1 << i);
+		zcopy = vhost_net_zcopy_mask & (0x1 << i);
 		if (!zcopy)
 			continue;
 		kfree(n->vqs[i].ubuf_info);
@@ -305,7 +306,7 @@ static int vhost_zerocopy_signal_used(struct vhost_net *net,
 
 static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
 {
-	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
+	struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;
 	struct vhost_virtqueue *vq = ubufs->vq;
 	int cnt = atomic_read(&ubufs->kref.refcount);
 
@@ -322,7 +323,7 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
 	/* set len to mark this desc buffers done DMA */
 	vq->heads[ubuf->desc].len = success ?
 		VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
-	vhost_ubuf_put(ubufs);
+	vhost_net_ubuf_put(ubufs);
 }
 
 /* Expects to be always run from workqueue - which acts as
@@ -345,7 +346,7 @@ static void handle_tx(struct vhost_net *net)
 	int err;
 	size_t hdr_size;
 	struct socket *sock;
-	struct vhost_ubuf_ref *uninitialized_var(ubufs);
+	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
 	bool zcopy, zcopy_used;
 
 	/* TODO: check that we are running from vhost_worker? */
@@ -441,7 +442,7 @@ static void handle_tx(struct vhost_net *net)
 		if (unlikely(err < 0)) {
 			if (zcopy_used) {
 				if (ubufs)
-					vhost_ubuf_put(ubufs);
+					vhost_net_ubuf_put(ubufs);
 				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
 					% UIO_MAXIOV;
 			}
@@ -795,7 +796,7 @@ static void vhost_net_flush(struct vhost_net *n)
 		n->tx_flush = true;
 		mutex_unlock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 		/* Wait for all lower device DMAs done. */
-		vhost_ubuf_put_and_wait(n->vqs[VHOST_NET_VQ_TX].ubufs);
+		vhost_net_ubuf_put_and_wait(n->vqs[VHOST_NET_VQ_TX].ubufs);
 		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 		n->tx_flush = false;
 		kref_init(&n->vqs[VHOST_NET_VQ_TX].ubufs->kref);
@@ -896,7 +897,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	struct socket *sock, *oldsock;
 	struct vhost_virtqueue *vq;
 	struct vhost_net_virtqueue *nvq;
-	struct vhost_ubuf_ref *ubufs, *oldubufs = NULL;
+	struct vhost_net_ubuf_ref *ubufs, *oldubufs = NULL;
 	int r;
 
 	mutex_lock(&n->dev.mutex);
@@ -927,7 +928,8 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	oldsock = rcu_dereference_protected(vq->private_data,
 					    lockdep_is_held(&vq->mutex));
 	if (sock != oldsock) {
-		ubufs = vhost_ubuf_alloc(vq, sock && vhost_sock_zcopy(sock));
+		ubufs = vhost_net_ubuf_alloc(vq,
+					     sock && vhost_sock_zcopy(sock));
 		if (IS_ERR(ubufs)) {
 			r = PTR_ERR(ubufs);
 			goto err_ubufs;
@@ -953,7 +955,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 	mutex_unlock(&vq->mutex);
 
 	if (oldubufs) {
-		vhost_ubuf_put_and_wait(oldubufs);
+		vhost_net_ubuf_put_and_wait(oldubufs);
 		mutex_lock(&vq->mutex);
 		vhost_zerocopy_signal_used(n, vq);
 		mutex_unlock(&vq->mutex);
@@ -971,7 +973,7 @@ err_used:
 	rcu_assign_pointer(vq->private_data, oldsock);
 	vhost_net_enable_vq(n, vq);
 	if (ubufs)
-		vhost_ubuf_put_and_wait(ubufs);
+		vhost_net_ubuf_put_and_wait(ubufs);
 err_ubufs:
 	fput(sock->file);
 err_vq:
@@ -1133,7 +1135,7 @@ static struct miscdevice vhost_net_misc = {
 static int vhost_net_init(void)
 {
 	if (experimental_zcopytx)
-		vhost_enable_zcopy(VHOST_NET_VQ_TX);
+		vhost_net_enable_zcopy(VHOST_NET_VQ_TX);
 	return misc_register(&vhost_net_misc);
 }
 module_init(vhost_net_init);
-- 
1.8.1.4

* [PATCH v2 07/11] vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (9 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` Asias He
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

It was only needed while struct tcm_vhost_tpg lived in tcm_vhost.h.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 2dcb94a..02ddedd 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -115,7 +115,6 @@ struct tcm_vhost_nacl {
 	struct se_node_acl se_node_acl;
 };
 
-struct vhost_scsi;
 struct tcm_vhost_tpg {
 	/* Vhost port target portal group tag for TCM */
 	u16 tport_tpgt;
-- 
1.8.1.4

* [PATCH v2 08/11] vhost-scsi: Rename struct vhost_scsi *s to *vs
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (12 preceding siblings ...)
  2013-05-06  8:38 ` [PATCH v2 08/11] vhost-scsi: Rename struct vhost_scsi *s to *vs Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` [PATCH v2 09/11] vhost-scsi: Make func indention more consistent Asias He
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

vs is used everywhere else; make the naming consistent.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 56 ++++++++++++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 02ddedd..d4798e1 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1342,63 +1342,63 @@ static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
 
 static int vhost_scsi_open(struct inode *inode, struct file *f)
 {
-	struct vhost_scsi *s;
+	struct vhost_scsi *vs;
 	struct vhost_virtqueue **vqs;
 	int r, i;
 
-	s = kzalloc(sizeof(*s), GFP_KERNEL);
-	if (!s)
+	vs = kzalloc(sizeof(*vs), GFP_KERNEL);
+	if (!vs)
 		return -ENOMEM;
 
 	vqs = kmalloc(VHOST_SCSI_MAX_VQ * sizeof(*vqs), GFP_KERNEL);
 	if (!vqs) {
-		kfree(s);
+		kfree(vs);
 		return -ENOMEM;
 	}
 
-	vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work);
-	vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work);
+	vhost_work_init(&vs->vs_completion_work, vhost_scsi_complete_cmd_work);
+	vhost_work_init(&vs->vs_event_work, tcm_vhost_evt_work);
 
-	s->vs_events_nr = 0;
-	s->vs_events_missed = false;
+	vs->vs_events_nr = 0;
+	vs->vs_events_missed = false;
 
-	vqs[VHOST_SCSI_VQ_CTL] = &s->vqs[VHOST_SCSI_VQ_CTL].vq;
-	vqs[VHOST_SCSI_VQ_EVT] = &s->vqs[VHOST_SCSI_VQ_EVT].vq;
-	s->vqs[VHOST_SCSI_VQ_CTL].vq.handle_kick = vhost_scsi_ctl_handle_kick;
-	s->vqs[VHOST_SCSI_VQ_EVT].vq.handle_kick = vhost_scsi_evt_handle_kick;
+	vqs[VHOST_SCSI_VQ_CTL] = &vs->vqs[VHOST_SCSI_VQ_CTL].vq;
+	vqs[VHOST_SCSI_VQ_EVT] = &vs->vqs[VHOST_SCSI_VQ_EVT].vq;
+	vs->vqs[VHOST_SCSI_VQ_CTL].vq.handle_kick = vhost_scsi_ctl_handle_kick;
+	vs->vqs[VHOST_SCSI_VQ_EVT].vq.handle_kick = vhost_scsi_evt_handle_kick;
 	for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
-		vqs[i] = &s->vqs[i].vq;
-		s->vqs[i].vq.handle_kick = vhost_scsi_handle_kick;
+		vqs[i] = &vs->vqs[i].vq;
+		vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick;
 	}
-	r = vhost_dev_init(&s->dev, vqs, VHOST_SCSI_MAX_VQ);
+	r = vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ);
 
-	tcm_vhost_init_inflight(s, NULL);
+	tcm_vhost_init_inflight(vs, NULL);
 
 	if (r < 0) {
 		kfree(vqs);
-		kfree(s);
+		kfree(vs);
 		return r;
 	}
 
-	f->private_data = s;
+	f->private_data = vs;
 	return 0;
 }
 
 static int vhost_scsi_release(struct inode *inode, struct file *f)
 {
-	struct vhost_scsi *s = f->private_data;
+	struct vhost_scsi *vs = f->private_data;
 	struct vhost_scsi_target t;
 
-	mutex_lock(&s->dev.mutex);
-	memcpy(t.vhost_wwpn, s->vs_vhost_wwpn, sizeof(t.vhost_wwpn));
-	mutex_unlock(&s->dev.mutex);
-	vhost_scsi_clear_endpoint(s, &t);
-	vhost_dev_stop(&s->dev);
-	vhost_dev_cleanup(&s->dev, false);
+	mutex_lock(&vs->dev.mutex);
+	memcpy(t.vhost_wwpn, vs->vs_vhost_wwpn, sizeof(t.vhost_wwpn));
+	mutex_unlock(&vs->dev.mutex);
+	vhost_scsi_clear_endpoint(vs, &t);
+	vhost_dev_stop(&vs->dev);
+	vhost_dev_cleanup(&vs->dev, false);
 	/* Jobs can re-queue themselves in evt kick handler. Do extra flush. */
-	vhost_scsi_flush(s);
-	kfree(s->dev.vqs);
-	kfree(s);
+	vhost_scsi_flush(vs);
+	kfree(vs->dev.vqs);
+	kfree(vs);
 	return 0;
 }
 
-- 
1.8.1.4

* [PATCH v2 09/11] vhost-scsi: Make func indention more consistent
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (13 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` Asias He
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

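Switch the long function definitions to a consistent two-line style:
return type on its own line, continuation arguments aligned with the
opening parenthesis. No functional change.
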
Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 154 +++++++++++++++++++++++++++++----------------------
 1 file changed, 88 insertions(+), 66 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index d4798e1..d9781ed 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -333,11 +333,12 @@ static u32 tcm_vhost_get_default_depth(struct se_portal_group *se_tpg)
 	return 1;
 }
 
-static u32 tcm_vhost_get_pr_transport_id(struct se_portal_group *se_tpg,
-	struct se_node_acl *se_nacl,
-	struct t10_pr_registration *pr_reg,
-	int *format_code,
-	unsigned char *buf)
+static u32
+tcm_vhost_get_pr_transport_id(struct se_portal_group *se_tpg,
+			      struct se_node_acl *se_nacl,
+			      struct t10_pr_registration *pr_reg,
+			      int *format_code,
+			      unsigned char *buf)
 {
 	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -363,10 +364,11 @@ static u32 tcm_vhost_get_pr_transport_id(struct se_portal_group *se_tpg,
 			format_code, buf);
 }
 
-static u32 tcm_vhost_get_pr_transport_id_len(struct se_portal_group *se_tpg,
-	struct se_node_acl *se_nacl,
-	struct t10_pr_registration *pr_reg,
-	int *format_code)
+static u32
+tcm_vhost_get_pr_transport_id_len(struct se_portal_group *se_tpg,
+				  struct se_node_acl *se_nacl,
+				  struct t10_pr_registration *pr_reg,
+				  int *format_code)
 {
 	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -392,10 +394,11 @@ static u32 tcm_vhost_get_pr_transport_id_len(struct se_portal_group *se_tpg,
 			format_code);
 }
 
-static char *tcm_vhost_parse_pr_out_transport_id(struct se_portal_group *se_tpg,
-	const char *buf,
-	u32 *out_tid_len,
-	char **port_nexus_ptr)
+static char *
+tcm_vhost_parse_pr_out_transport_id(struct se_portal_group *se_tpg,
+				    const char *buf,
+				    u32 *out_tid_len,
+				    char **port_nexus_ptr)
 {
 	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -421,8 +424,8 @@ static char *tcm_vhost_parse_pr_out_transport_id(struct se_portal_group *se_tpg,
 			port_nexus_ptr);
 }
 
-static struct se_node_acl *tcm_vhost_alloc_fabric_acl(
-	struct se_portal_group *se_tpg)
+static struct se_node_acl *
+tcm_vhost_alloc_fabric_acl(struct se_portal_group *se_tpg)
 {
 	struct tcm_vhost_nacl *nacl;
 
@@ -435,8 +438,9 @@ static struct se_node_acl *tcm_vhost_alloc_fabric_acl(
 	return &nacl->se_node_acl;
 }
 
-static void tcm_vhost_release_fabric_acl(struct se_portal_group *se_tpg,
-	struct se_node_acl *se_nacl)
+static void
+tcm_vhost_release_fabric_acl(struct se_portal_group *se_tpg,
+			     struct se_node_acl *se_nacl)
 {
 	struct tcm_vhost_nacl *nacl = container_of(se_nacl,
 			struct tcm_vhost_nacl, se_node_acl);
@@ -531,8 +535,9 @@ static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt)
 	kfree(evt);
 }
 
-static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
-	u32 event, u32 reason)
+static struct tcm_vhost_evt *
+tcm_vhost_allocate_evt(struct vhost_scsi *vs,
+		       u32 event, u32 reason)
 {
 	struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT].vq;
 	struct tcm_vhost_evt *evt;
@@ -576,8 +581,8 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 	kfree(tv_cmd);
 }
 
-static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
-	struct tcm_vhost_evt *evt)
+static void
+tcm_vhost_do_evt_work(struct vhost_scsi *vs, struct tcm_vhost_evt *evt)
 {
 	struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT].vq;
 	struct virtio_scsi_event *event = &evt->event;
@@ -698,12 +703,12 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 		vhost_signal(&vs->dev, &vs->vqs[vq].vq);
 }
 
-static struct tcm_vhost_cmd *vhost_scsi_allocate_cmd(
-	struct vhost_virtqueue *vq,
-	struct tcm_vhost_tpg *tv_tpg,
-	struct virtio_scsi_cmd_req *v_req,
-	u32 exp_data_len,
-	int data_direction)
+static struct tcm_vhost_cmd *
+vhost_scsi_allocate_cmd(struct vhost_virtqueue *vq,
+			struct tcm_vhost_tpg *tv_tpg,
+			struct virtio_scsi_cmd_req *v_req,
+			u32 exp_data_len,
+			int data_direction)
 {
 	struct tcm_vhost_cmd *tv_cmd;
 	struct tcm_vhost_nexus *tv_nexus;
@@ -734,8 +739,11 @@ static struct tcm_vhost_cmd *vhost_scsi_allocate_cmd(
  *
  * Returns the number of scatterlist entries used or -errno on error.
  */
-static int vhost_scsi_map_to_sgl(struct scatterlist *sgl,
-	unsigned int sgl_count, struct iovec *iov, int write)
+static int
+vhost_scsi_map_to_sgl(struct scatterlist *sgl,
+		      unsigned int sgl_count,
+		      struct iovec *iov,
+		      int write)
 {
 	unsigned int npages = 0, pages_nr, offset, nbytes;
 	struct scatterlist *sg = sgl;
@@ -779,8 +787,11 @@ out:
 	return ret;
 }
 
-static int vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *tv_cmd,
-	struct iovec *iov, unsigned int niov, int write)
+static int
+vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *tv_cmd,
+			  struct iovec *iov,
+			  unsigned int niov,
+			  int write)
 {
 	int ret;
 	unsigned int i;
@@ -860,8 +871,10 @@ static void tcm_vhost_submission_work(struct work_struct *work)
 	}
 }
 
-static void vhost_scsi_send_bad_target(struct vhost_scsi *vs,
-	struct vhost_virtqueue *vq, int head, unsigned out)
+static void
+vhost_scsi_send_bad_target(struct vhost_scsi *vs,
+			   struct vhost_virtqueue *vq,
+			   int head, unsigned out)
 {
 	struct virtio_scsi_cmd_resp __user *resp;
 	struct virtio_scsi_cmd_resp rsp;
@@ -877,8 +890,8 @@ static void vhost_scsi_send_bad_target(struct vhost_scsi *vs,
 		pr_err("Faulted on virtio_scsi_cmd_resp\n");
 }
 
-static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
-	struct vhost_virtqueue *vq)
+static void
+vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 {
 	struct tcm_vhost_tpg **vs_tpg;
 	struct virtio_scsi_cmd_req v_req;
@@ -1059,8 +1072,12 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
 	pr_debug("%s: The handling func for control queue.\n", __func__);
 }
 
-static void tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg,
-	struct se_lun *lun, u32 event, u32 reason)
+static void
+tcm_vhost_send_evt(struct vhost_scsi *vs,
+		   struct tcm_vhost_tpg *tpg,
+		   struct se_lun *lun,
+		   u32 event,
+		   u32 reason)
 {
 	struct tcm_vhost_evt *evt;
 
@@ -1150,9 +1167,9 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
  *  The lock nesting rule is:
  *    tcm_vhost_mutex -> vs->dev.mutex -> tpg->tv_tpg_mutex -> vq->mutex
  */
-static int vhost_scsi_set_endpoint(
-	struct vhost_scsi *vs,
-	struct vhost_scsi_target *t)
+static int
+vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+			struct vhost_scsi_target *t)
 {
 	struct tcm_vhost_tport *tv_tport;
 	struct tcm_vhost_tpg *tv_tpg;
@@ -1240,9 +1257,9 @@ out:
 	return ret;
 }
 
-static int vhost_scsi_clear_endpoint(
-	struct vhost_scsi *vs,
-	struct vhost_scsi_target *t)
+static int
+vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
+			  struct vhost_scsi_target *t)
 {
 	struct tcm_vhost_tport *tv_tport;
 	struct tcm_vhost_tpg *tv_tpg;
@@ -1402,8 +1419,10 @@ static int vhost_scsi_release(struct inode *inode, struct file *f)
 	return 0;
 }
 
-static long vhost_scsi_ioctl(struct file *f, unsigned int ioctl,
-				unsigned long arg)
+static long
+vhost_scsi_ioctl(struct file *f,
+		 unsigned int ioctl,
+		 unsigned long arg)
 {
 	struct vhost_scsi *vs = f->private_data;
 	struct vhost_scsi_target backend;
@@ -1519,8 +1538,9 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport)
 	return "Unknown";
 }
 
-static void tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
-	struct se_lun *lun, bool plug)
+static void
+tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
+		  struct se_lun *lun, bool plug)
 {
 
 	struct vhost_scsi *vs = tpg->vhost_scsi;
@@ -1560,7 +1580,7 @@ static void tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
 }
 
 static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
-	struct se_lun *lun)
+			       struct se_lun *lun)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -1579,7 +1599,7 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 }
 
 static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
-	struct se_lun *lun)
+				  struct se_lun *lun)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -1595,10 +1615,10 @@ static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
 	mutex_unlock(&tcm_vhost_mutex);
 }
 
-static struct se_node_acl *tcm_vhost_make_nodeacl(
-	struct se_portal_group *se_tpg,
-	struct config_group *group,
-	const char *name)
+static struct se_node_acl *
+tcm_vhost_make_nodeacl(struct se_portal_group *se_tpg,
+		       struct config_group *group,
+		       const char *name)
 {
 	struct se_node_acl *se_nacl, *se_nacl_new;
 	struct tcm_vhost_nacl *nacl;
@@ -1640,7 +1660,7 @@ static void tcm_vhost_drop_nodeacl(struct se_node_acl *se_acl)
 }
 
 static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tv_tpg,
-	const char *name)
+				const char *name)
 {
 	struct se_portal_group *se_tpg;
 	struct tcm_vhost_nexus *tv_nexus;
@@ -1744,7 +1764,7 @@ static int tcm_vhost_drop_nexus(struct tcm_vhost_tpg *tpg)
 }
 
 static ssize_t tcm_vhost_tpg_show_nexus(struct se_portal_group *se_tpg,
-	char *page)
+					char *page)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -1765,8 +1785,8 @@ static ssize_t tcm_vhost_tpg_show_nexus(struct se_portal_group *se_tpg,
 }
 
 static ssize_t tcm_vhost_tpg_store_nexus(struct se_portal_group *se_tpg,
-	const char *page,
-	size_t count)
+					 const char *page,
+					 size_t count)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
@@ -1849,9 +1869,10 @@ static struct configfs_attribute *tcm_vhost_tpg_attrs[] = {
 	NULL,
 };
 
-static struct se_portal_group *tcm_vhost_make_tpg(struct se_wwn *wwn,
-	struct config_group *group,
-	const char *name)
+static struct se_portal_group *
+tcm_vhost_make_tpg(struct se_wwn *wwn,
+		   struct config_group *group,
+		   const char *name)
 {
 	struct tcm_vhost_tport *tport = container_of(wwn,
 			struct tcm_vhost_tport, tport_wwn);
@@ -1907,9 +1928,10 @@ static void tcm_vhost_drop_tpg(struct se_portal_group *se_tpg)
 	kfree(tpg);
 }
 
-static struct se_wwn *tcm_vhost_make_tport(struct target_fabric_configfs *tf,
-	struct config_group *group,
-	const char *name)
+static struct se_wwn *
+tcm_vhost_make_tport(struct target_fabric_configfs *tf,
+		     struct config_group *group,
+		     const char *name)
 {
 	struct tcm_vhost_tport *tport;
 	char *ptr;
@@ -1979,9 +2001,9 @@ static void tcm_vhost_drop_tport(struct se_wwn *wwn)
 	kfree(tport);
 }
 
-static ssize_t tcm_vhost_wwn_show_attr_version(
-	struct target_fabric_configfs *tf,
-	char *page)
+static ssize_t
+tcm_vhost_wwn_show_attr_version(struct target_fabric_configfs *tf,
+				char *page)
 {
 	return sprintf(page, "TCM_VHOST fabric module %s on %s/%s"
 		"on "UTS_RELEASE"\n", TCM_VHOST_VERSION, utsname()->sysname,
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread
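
The rule the patch applies throughout drivers/vhost/scsi.c is easiest to see
on a single prototype. The declarations below are lifted from the diff and
shown side by side purely as an illustration:

	/* Old layout: return type and name share a line, continuation
	 * lines get a single tab */
	static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
		u32 event, u32 reason);

	/* New layout: return type on its own line, parameters aligned
	 * under the opening parenthesis */
	static struct tcm_vhost_evt *
	tcm_vhost_allocate_evt(struct vhost_scsi *vs,
			       u32 event, u32 reason);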

* [PATCH v2 10/11] vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (16 preceding siblings ...)
  2013-05-06  8:38 ` [PATCH v2 10/11] vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:38 ` [PATCH v2 11/11] vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd Asias He
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 122 +++++++++++++++++++++++++--------------------------
 1 file changed, 61 insertions(+), 61 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index d9781ed..353145f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -705,7 +705,7 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 
 static struct tcm_vhost_cmd *
 vhost_scsi_allocate_cmd(struct vhost_virtqueue *vq,
-			struct tcm_vhost_tpg *tv_tpg,
+			struct tcm_vhost_tpg *tpg,
 			struct virtio_scsi_cmd_req *v_req,
 			u32 exp_data_len,
 			int data_direction)
@@ -713,7 +713,7 @@ vhost_scsi_allocate_cmd(struct vhost_virtqueue *vq,
 	struct tcm_vhost_cmd *tv_cmd;
 	struct tcm_vhost_nexus *tv_nexus;
 
-	tv_nexus = tv_tpg->tpg_nexus;
+	tv_nexus = tpg->tpg_nexus;
 	if (!tv_nexus) {
 		pr_err("Unable to locate active struct tcm_vhost_nexus\n");
 		return ERR_PTR(-EIO);
@@ -895,7 +895,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 {
 	struct tcm_vhost_tpg **vs_tpg;
 	struct virtio_scsi_cmd_req v_req;
-	struct tcm_vhost_tpg *tv_tpg;
+	struct tcm_vhost_tpg *tpg;
 	struct tcm_vhost_cmd *tv_cmd;
 	u32 exp_data_len, data_first, data_num, data_direction;
 	unsigned out, in, i;
@@ -981,10 +981,10 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 
 		/* Extract the tpgt */
 		target = v_req.lun[1];
-		tv_tpg = ACCESS_ONCE(vs_tpg[target]);
+		tpg = ACCESS_ONCE(vs_tpg[target]);
 
 		/* Target does not exist, fail the request */
-		if (unlikely(!tv_tpg)) {
+		if (unlikely(!tpg)) {
 			vhost_scsi_send_bad_target(vs, vq, head, out);
 			continue;
 		}
@@ -993,7 +993,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		for (i = 0; i < data_num; i++)
 			exp_data_len += vq->iov[data_first + i].iov_len;
 
-		tv_cmd = vhost_scsi_allocate_cmd(vq, tv_tpg, &v_req,
+		tv_cmd = vhost_scsi_allocate_cmd(vq, tpg, &v_req,
 					exp_data_len, data_direction);
 		if (IS_ERR(tv_cmd)) {
 			vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n",
@@ -1172,7 +1172,7 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
 			struct vhost_scsi_target *t)
 {
 	struct tcm_vhost_tport *tv_tport;
-	struct tcm_vhost_tpg *tv_tpg;
+	struct tcm_vhost_tpg *tpg;
 	struct tcm_vhost_tpg **vs_tpg;
 	struct vhost_virtqueue *vq;
 	int index, ret, i, len;
@@ -1199,32 +1199,32 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
 	if (vs->vs_tpg)
 		memcpy(vs_tpg, vs->vs_tpg, len);
 
-	list_for_each_entry(tv_tpg, &tcm_vhost_list, tv_tpg_list) {
-		mutex_lock(&tv_tpg->tv_tpg_mutex);
-		if (!tv_tpg->tpg_nexus) {
-			mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	list_for_each_entry(tpg, &tcm_vhost_list, tv_tpg_list) {
+		mutex_lock(&tpg->tv_tpg_mutex);
+		if (!tpg->tpg_nexus) {
+			mutex_unlock(&tpg->tv_tpg_mutex);
 			continue;
 		}
-		if (tv_tpg->tv_tpg_vhost_count != 0) {
-			mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		if (tpg->tv_tpg_vhost_count != 0) {
+			mutex_unlock(&tpg->tv_tpg_mutex);
 			continue;
 		}
-		tv_tport = tv_tpg->tport;
+		tv_tport = tpg->tport;
 
 		if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
-			if (vs->vs_tpg && vs->vs_tpg[tv_tpg->tport_tpgt]) {
+			if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
 				kfree(vs_tpg);
-				mutex_unlock(&tv_tpg->tv_tpg_mutex);
+				mutex_unlock(&tpg->tv_tpg_mutex);
 				ret = -EEXIST;
 				goto out;
 			}
-			tv_tpg->tv_tpg_vhost_count++;
-			tv_tpg->vhost_scsi = vs;
-			vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
+			tpg->tv_tpg_vhost_count++;
+			tpg->vhost_scsi = vs;
+			vs_tpg[tpg->tport_tpgt] = tpg;
 			smp_mb__after_atomic_inc();
 			match = true;
 		}
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 	}
 
 	if (match) {
@@ -1262,7 +1262,7 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
 			  struct vhost_scsi_target *t)
 {
 	struct tcm_vhost_tport *tv_tport;
-	struct tcm_vhost_tpg *tv_tpg;
+	struct tcm_vhost_tpg *tpg;
 	struct vhost_virtqueue *vq;
 	bool match = false;
 	int index, ret, i;
@@ -1285,30 +1285,30 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
 
 	for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
 		target = i;
-		tv_tpg = vs->vs_tpg[target];
-		if (!tv_tpg)
+		tpg = vs->vs_tpg[target];
+		if (!tpg)
 			continue;
 
-		mutex_lock(&tv_tpg->tv_tpg_mutex);
-		tv_tport = tv_tpg->tport;
+		mutex_lock(&tpg->tv_tpg_mutex);
+		tv_tport = tpg->tport;
 		if (!tv_tport) {
 			ret = -ENODEV;
 			goto err_tpg;
 		}
 
 		if (strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
-			pr_warn("tv_tport->tport_name: %s, tv_tpg->tport_tpgt: %hu"
+			pr_warn("tv_tport->tport_name: %s, tpg->tport_tpgt: %hu"
 				" does not match t->vhost_wwpn: %s, t->vhost_tpgt: %hu\n",
-				tv_tport->tport_name, tv_tpg->tport_tpgt,
+				tv_tport->tport_name, tpg->tport_tpgt,
 				t->vhost_wwpn, t->vhost_tpgt);
 			ret = -EINVAL;
 			goto err_tpg;
 		}
-		tv_tpg->tv_tpg_vhost_count--;
-		tv_tpg->vhost_scsi = NULL;
+		tpg->tv_tpg_vhost_count--;
+		tpg->vhost_scsi = NULL;
 		vs->vs_tpg[target] = NULL;
 		match = true;
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 	}
 	if (match) {
 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
@@ -1332,7 +1332,7 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
 	return 0;
 
 err_tpg:
-	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	mutex_unlock(&tpg->tv_tpg_mutex);
 err_dev:
 	mutex_unlock(&vs->dev.mutex);
 	mutex_unlock(&tcm_vhost_mutex);
@@ -1582,16 +1582,16 @@ static void tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
 static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 			       struct se_lun *lun)
 {
-	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
+	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
 
 	mutex_lock(&tcm_vhost_mutex);
 
-	mutex_lock(&tv_tpg->tv_tpg_mutex);
-	tv_tpg->tv_tpg_port_count++;
-	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	mutex_lock(&tpg->tv_tpg_mutex);
+	tpg->tv_tpg_port_count++;
+	mutex_unlock(&tpg->tv_tpg_mutex);
 
-	tcm_vhost_hotplug(tv_tpg, lun);
+	tcm_vhost_hotplug(tpg, lun);
 
 	mutex_unlock(&tcm_vhost_mutex);
 
@@ -1601,16 +1601,16 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
 				  struct se_lun *lun)
 {
-	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
+	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
 
 	mutex_lock(&tcm_vhost_mutex);
 
-	mutex_lock(&tv_tpg->tv_tpg_mutex);
-	tv_tpg->tv_tpg_port_count--;
-	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	mutex_lock(&tpg->tv_tpg_mutex);
+	tpg->tv_tpg_port_count--;
+	mutex_unlock(&tpg->tv_tpg_mutex);
 
-	tcm_vhost_hotunplug(tv_tpg, lun);
+	tcm_vhost_hotunplug(tpg, lun);
 
 	mutex_unlock(&tcm_vhost_mutex);
 }
@@ -1659,23 +1659,23 @@ static void tcm_vhost_drop_nodeacl(struct se_node_acl *se_acl)
 	kfree(nacl);
 }
 
-static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tv_tpg,
+static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tpg,
 				const char *name)
 {
 	struct se_portal_group *se_tpg;
 	struct tcm_vhost_nexus *tv_nexus;
 
-	mutex_lock(&tv_tpg->tv_tpg_mutex);
-	if (tv_tpg->tpg_nexus) {
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
-		pr_debug("tv_tpg->tpg_nexus already exists\n");
+	mutex_lock(&tpg->tv_tpg_mutex);
+	if (tpg->tpg_nexus) {
+		mutex_unlock(&tpg->tv_tpg_mutex);
+		pr_debug("tpg->tpg_nexus already exists\n");
 		return -EEXIST;
 	}
-	se_tpg = &tv_tpg->se_tpg;
+	se_tpg = &tpg->se_tpg;
 
 	tv_nexus = kzalloc(sizeof(struct tcm_vhost_nexus), GFP_KERNEL);
 	if (!tv_nexus) {
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 		pr_err("Unable to allocate struct tcm_vhost_nexus\n");
 		return -ENOMEM;
 	}
@@ -1684,7 +1684,7 @@ static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tv_tpg,
 	 */
 	tv_nexus->tvn_se_sess = transport_init_session();
 	if (IS_ERR(tv_nexus->tvn_se_sess)) {
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 		kfree(tv_nexus);
 		return -ENOMEM;
 	}
@@ -1696,7 +1696,7 @@ static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tv_tpg,
 	tv_nexus->tvn_se_sess->se_node_acl = core_tpg_check_initiator_node_acl(
 				se_tpg, (unsigned char *)name);
 	if (!tv_nexus->tvn_se_sess->se_node_acl) {
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 		pr_debug("core_tpg_check_initiator_node_acl() failed"
 				" for %s\n", name);
 		transport_free_session(tv_nexus->tvn_se_sess);
@@ -1709,9 +1709,9 @@ static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tv_tpg,
 	 */
 	__transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
 			tv_nexus->tvn_se_sess, tv_nexus);
-	tv_tpg->tpg_nexus = tv_nexus;
+	tpg->tpg_nexus = tv_nexus;
 
-	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	mutex_unlock(&tpg->tv_tpg_mutex);
 	return 0;
 }
 
@@ -1766,20 +1766,20 @@ static int tcm_vhost_drop_nexus(struct tcm_vhost_tpg *tpg)
 static ssize_t tcm_vhost_tpg_show_nexus(struct se_portal_group *se_tpg,
 					char *page)
 {
-	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
+	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
 	struct tcm_vhost_nexus *tv_nexus;
 	ssize_t ret;
 
-	mutex_lock(&tv_tpg->tv_tpg_mutex);
-	tv_nexus = tv_tpg->tpg_nexus;
+	mutex_lock(&tpg->tv_tpg_mutex);
+	tv_nexus = tpg->tpg_nexus;
 	if (!tv_nexus) {
-		mutex_unlock(&tv_tpg->tv_tpg_mutex);
+		mutex_unlock(&tpg->tv_tpg_mutex);
 		return -ENODEV;
 	}
 	ret = snprintf(page, PAGE_SIZE, "%s\n",
 			tv_nexus->tvn_se_sess->se_node_acl->initiatorname);
-	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+	mutex_unlock(&tpg->tv_tpg_mutex);
 
 	return ret;
 }
@@ -1788,16 +1788,16 @@ static ssize_t tcm_vhost_tpg_store_nexus(struct se_portal_group *se_tpg,
 					 const char *page,
 					 size_t count)
 {
-	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
+	struct tcm_vhost_tpg *tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
-	struct tcm_vhost_tport *tport_wwn = tv_tpg->tport;
+	struct tcm_vhost_tport *tport_wwn = tpg->tport;
 	unsigned char i_port[TCM_VHOST_NAMELEN], *ptr, *port_ptr;
 	int ret;
 	/*
 	 * Shutdown the active I_T nexus if 'NULL' is passed..
 	 */
 	if (!strncmp(page, "NULL", 4)) {
-		ret = tcm_vhost_drop_nexus(tv_tpg);
+		ret = tcm_vhost_drop_nexus(tpg);
 		return (!ret) ? count : ret;
 	}
 	/*
@@ -1855,7 +1855,7 @@ check_newline:
 	if (i_port[strlen(i_port)-1] == '\n')
 		i_port[strlen(i_port)-1] = '\0';
 
-	ret = tcm_vhost_make_nexus(tv_tpg, port_ptr);
+	ret = tcm_vhost_make_nexus(tpg, port_ptr);
 	if (ret < 0)
 		return ret;
 
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread
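
Nearly every hunk above begins by recovering the driver-private tpg from the
generic se_tpg that the fabric core passes in. A minimal, self-contained
userspace sketch of that container_of() idiom follows; the struct layouts are
re-declared here only for illustration and are not the driver's real
definitions:

	#include <stddef.h>
	#include <stdio.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct se_portal_group { int tpgt; };

	struct tcm_vhost_tpg {
		int tv_tpg_port_count;
		struct se_portal_group se_tpg;	/* embedded, not a pointer */
	};

	int main(void)
	{
		struct tcm_vhost_tpg tpg = { .tv_tpg_port_count = 1 };
		struct se_portal_group *se_tpg = &tpg.se_tpg;

		/* What each fabric callback does on entry: step back from
		 * the embedded member to the enclosing structure. */
		struct tcm_vhost_tpg *back =
			container_of(se_tpg, struct tcm_vhost_tpg, se_tpg);

		printf("port count: %d\n", back->tv_tpg_port_count);
		return 0;
	}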

* [PATCH v2 11/11] vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (18 preceding siblings ...)
  2013-05-06  8:38 ` [PATCH v2 11/11] vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd Asias He
@ 2013-05-06  8:38 ` Asias He
  2013-05-06  8:56 ` [PATCH v2 00/11] vhost cleanups Michael S. Tsirkin
  2013-05-06 10:07 ` Michael S. Tsirkin
  21 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06  8:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Nicholas Bellinger, Rusty Russell, kvm, virtualization,
	target-devel, Asias He

This way, we use cmd for struct tcm_vhost_cmd and evt for struct
tcm_vhost_evt.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/scsi.c | 142 +++++++++++++++++++++++++--------------------------
 1 file changed, 71 insertions(+), 71 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 353145f..d860b58 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -499,28 +499,28 @@ static int tcm_vhost_get_cmd_state(struct se_cmd *se_cmd)
 	return 0;
 }
 
-static void vhost_scsi_complete_cmd(struct tcm_vhost_cmd *tv_cmd)
+static void vhost_scsi_complete_cmd(struct tcm_vhost_cmd *cmd)
 {
-	struct vhost_scsi *vs = tv_cmd->tvc_vhost;
+	struct vhost_scsi *vs = cmd->tvc_vhost;
 
-	llist_add(&tv_cmd->tvc_completion_list, &vs->vs_completion_list);
+	llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
 
 	vhost_work_queue(&vs->dev, &vs->vs_completion_work);
 }
 
 static int tcm_vhost_queue_data_in(struct se_cmd *se_cmd)
 {
-	struct tcm_vhost_cmd *tv_cmd = container_of(se_cmd,
+	struct tcm_vhost_cmd *cmd = container_of(se_cmd,
 				struct tcm_vhost_cmd, tvc_se_cmd);
-	vhost_scsi_complete_cmd(tv_cmd);
+	vhost_scsi_complete_cmd(cmd);
 	return 0;
 }
 
 static int tcm_vhost_queue_status(struct se_cmd *se_cmd)
 {
-	struct tcm_vhost_cmd *tv_cmd = container_of(se_cmd,
+	struct tcm_vhost_cmd *cmd = container_of(se_cmd,
 				struct tcm_vhost_cmd, tvc_se_cmd);
-	vhost_scsi_complete_cmd(tv_cmd);
+	vhost_scsi_complete_cmd(cmd);
 	return 0;
 }
 
@@ -561,24 +561,24 @@ tcm_vhost_allocate_evt(struct vhost_scsi *vs,
 	return evt;
 }
 
-static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
+static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *cmd)
 {
-	struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
+	struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
 
 	/* TODO locking against target/backend threads? */
 	transport_generic_free_cmd(se_cmd, 1);
 
-	if (tv_cmd->tvc_sgl_count) {
+	if (cmd->tvc_sgl_count) {
 		u32 i;
-		for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
-			put_page(sg_page(&tv_cmd->tvc_sgl[i]));
+		for (i = 0; i < cmd->tvc_sgl_count; i++)
+			put_page(sg_page(&cmd->tvc_sgl[i]));
 
-		kfree(tv_cmd->tvc_sgl);
+		kfree(cmd->tvc_sgl);
 	}
 
-	tcm_vhost_put_inflight(tv_cmd->inflight);
+	tcm_vhost_put_inflight(cmd->inflight);
 
-	kfree(tv_cmd);
+	kfree(cmd);
 }
 
 static void
@@ -661,7 +661,7 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 					vs_completion_work);
 	DECLARE_BITMAP(signal, VHOST_SCSI_MAX_VQ);
 	struct virtio_scsi_cmd_resp v_rsp;
-	struct tcm_vhost_cmd *tv_cmd;
+	struct tcm_vhost_cmd *cmd;
 	struct llist_node *llnode;
 	struct se_cmd *se_cmd;
 	int ret, vq;
@@ -669,32 +669,32 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 	bitmap_zero(signal, VHOST_SCSI_MAX_VQ);
 	llnode = llist_del_all(&vs->vs_completion_list);
 	while (llnode) {
-		tv_cmd = llist_entry(llnode, struct tcm_vhost_cmd,
+		cmd = llist_entry(llnode, struct tcm_vhost_cmd,
 				     tvc_completion_list);
 		llnode = llist_next(llnode);
-		se_cmd = &tv_cmd->tvc_se_cmd;
+		se_cmd = &cmd->tvc_se_cmd;
 
 		pr_debug("%s tv_cmd %p resid %u status %#02x\n", __func__,
-			tv_cmd, se_cmd->residual_count, se_cmd->scsi_status);
+			cmd, se_cmd->residual_count, se_cmd->scsi_status);
 
 		memset(&v_rsp, 0, sizeof(v_rsp));
 		v_rsp.resid = se_cmd->residual_count;
 		/* TODO is status_qualifier field needed? */
 		v_rsp.status = se_cmd->scsi_status;
 		v_rsp.sense_len = se_cmd->scsi_sense_length;
-		memcpy(v_rsp.sense, tv_cmd->tvc_sense_buf,
+		memcpy(v_rsp.sense, cmd->tvc_sense_buf,
 		       v_rsp.sense_len);
-		ret = copy_to_user(tv_cmd->tvc_resp, &v_rsp, sizeof(v_rsp));
+		ret = copy_to_user(cmd->tvc_resp, &v_rsp, sizeof(v_rsp));
 		if (likely(ret == 0)) {
 			struct vhost_scsi_virtqueue *q;
-			vhost_add_used(tv_cmd->tvc_vq, tv_cmd->tvc_vq_desc, 0);
-			q = container_of(tv_cmd->tvc_vq, struct vhost_scsi_virtqueue, vq);
+			vhost_add_used(cmd->tvc_vq, cmd->tvc_vq_desc, 0);
+			q = container_of(cmd->tvc_vq, struct vhost_scsi_virtqueue, vq);
 			vq = q - vs->vqs;
 			__set_bit(vq, signal);
 		} else
 			pr_err("Faulted on virtio_scsi_cmd_resp\n");
 
-		vhost_scsi_free_cmd(tv_cmd);
+		vhost_scsi_free_cmd(cmd);
 	}
 
 	vq = -1;
@@ -710,7 +710,7 @@ vhost_scsi_allocate_cmd(struct vhost_virtqueue *vq,
 			u32 exp_data_len,
 			int data_direction)
 {
-	struct tcm_vhost_cmd *tv_cmd;
+	struct tcm_vhost_cmd *cmd;
 	struct tcm_vhost_nexus *tv_nexus;
 
 	tv_nexus = tpg->tpg_nexus;
@@ -719,19 +719,19 @@ vhost_scsi_allocate_cmd(struct vhost_virtqueue *vq,
 		return ERR_PTR(-EIO);
 	}
 
-	tv_cmd = kzalloc(sizeof(struct tcm_vhost_cmd), GFP_ATOMIC);
-	if (!tv_cmd) {
+	cmd = kzalloc(sizeof(struct tcm_vhost_cmd), GFP_ATOMIC);
+	if (!cmd) {
 		pr_err("Unable to allocate struct tcm_vhost_cmd\n");
 		return ERR_PTR(-ENOMEM);
 	}
-	tv_cmd->tvc_tag = v_req->tag;
-	tv_cmd->tvc_task_attr = v_req->task_attr;
-	tv_cmd->tvc_exp_data_len = exp_data_len;
-	tv_cmd->tvc_data_direction = data_direction;
-	tv_cmd->tvc_nexus = tv_nexus;
-	tv_cmd->inflight = tcm_vhost_get_inflight(vq);
+	cmd->tvc_tag = v_req->tag;
+	cmd->tvc_task_attr = v_req->task_attr;
+	cmd->tvc_exp_data_len = exp_data_len;
+	cmd->tvc_data_direction = data_direction;
+	cmd->tvc_nexus = tv_nexus;
+	cmd->inflight = tcm_vhost_get_inflight(vq);
 
-	return tv_cmd;
+	return cmd;
 }
 
 /*
@@ -788,7 +788,7 @@ out:
 }
 
 static int
-vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *tv_cmd,
+vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *cmd,
 			  struct iovec *iov,
 			  unsigned int niov,
 			  int write)
@@ -807,25 +807,25 @@ vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *tv_cmd,
 
 	/* TODO overflow checking */
 
-	sg = kmalloc(sizeof(tv_cmd->tvc_sgl[0]) * sgl_count, GFP_ATOMIC);
+	sg = kmalloc(sizeof(cmd->tvc_sgl[0]) * sgl_count, GFP_ATOMIC);
 	if (!sg)
 		return -ENOMEM;
 	pr_debug("%s sg %p sgl_count %u is_err %d\n", __func__,
 	       sg, sgl_count, !sg);
 	sg_init_table(sg, sgl_count);
 
-	tv_cmd->tvc_sgl = sg;
-	tv_cmd->tvc_sgl_count = sgl_count;
+	cmd->tvc_sgl = sg;
+	cmd->tvc_sgl_count = sgl_count;
 
 	pr_debug("Mapping %u iovecs for %u pages\n", niov, sgl_count);
 	for (i = 0; i < niov; i++) {
 		ret = vhost_scsi_map_to_sgl(sg, sgl_count, &iov[i], write);
 		if (ret < 0) {
-			for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
-				put_page(sg_page(&tv_cmd->tvc_sgl[i]));
-			kfree(tv_cmd->tvc_sgl);
-			tv_cmd->tvc_sgl = NULL;
-			tv_cmd->tvc_sgl_count = 0;
+			for (i = 0; i < cmd->tvc_sgl_count; i++)
+				put_page(sg_page(&cmd->tvc_sgl[i]));
+			kfree(cmd->tvc_sgl);
+			cmd->tvc_sgl = NULL;
+			cmd->tvc_sgl_count = 0;
 			return ret;
 		}
 
@@ -837,15 +837,15 @@ vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *tv_cmd,
 
 static void tcm_vhost_submission_work(struct work_struct *work)
 {
-	struct tcm_vhost_cmd *tv_cmd =
+	struct tcm_vhost_cmd *cmd =
 		container_of(work, struct tcm_vhost_cmd, work);
 	struct tcm_vhost_nexus *tv_nexus;
-	struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
+	struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
 	struct scatterlist *sg_ptr, *sg_bidi_ptr = NULL;
 	int rc, sg_no_bidi = 0;
 
-	if (tv_cmd->tvc_sgl_count) {
-		sg_ptr = tv_cmd->tvc_sgl;
+	if (cmd->tvc_sgl_count) {
+		sg_ptr = cmd->tvc_sgl;
 /* FIXME: Fix BIDI operation in tcm_vhost_submission_work() */
 #if 0
 		if (se_cmd->se_cmd_flags & SCF_BIDI) {
@@ -856,13 +856,13 @@ static void tcm_vhost_submission_work(struct work_struct *work)
 	} else {
 		sg_ptr = NULL;
 	}
-	tv_nexus = tv_cmd->tvc_nexus;
+	tv_nexus = cmd->tvc_nexus;
 
 	rc = target_submit_cmd_map_sgls(se_cmd, tv_nexus->tvn_se_sess,
-			tv_cmd->tvc_cdb, &tv_cmd->tvc_sense_buf[0],
-			tv_cmd->tvc_lun, tv_cmd->tvc_exp_data_len,
-			tv_cmd->tvc_task_attr, tv_cmd->tvc_data_direction,
-			0, sg_ptr, tv_cmd->tvc_sgl_count,
+			cmd->tvc_cdb, &cmd->tvc_sense_buf[0],
+			cmd->tvc_lun, cmd->tvc_exp_data_len,
+			cmd->tvc_task_attr, cmd->tvc_data_direction,
+			0, sg_ptr, cmd->tvc_sgl_count,
 			sg_bidi_ptr, sg_no_bidi);
 	if (rc < 0) {
 		transport_send_check_condition_and_sense(se_cmd,
@@ -896,7 +896,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	struct tcm_vhost_tpg **vs_tpg;
 	struct virtio_scsi_cmd_req v_req;
 	struct tcm_vhost_tpg *tpg;
-	struct tcm_vhost_cmd *tv_cmd;
+	struct tcm_vhost_cmd *cmd;
 	u32 exp_data_len, data_first, data_num, data_direction;
 	unsigned out, in, i;
 	int head, ret;
@@ -993,46 +993,46 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		for (i = 0; i < data_num; i++)
 			exp_data_len += vq->iov[data_first + i].iov_len;
 
-		tv_cmd = vhost_scsi_allocate_cmd(vq, tpg, &v_req,
+		cmd = vhost_scsi_allocate_cmd(vq, tpg, &v_req,
 					exp_data_len, data_direction);
-		if (IS_ERR(tv_cmd)) {
+		if (IS_ERR(cmd)) {
 			vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n",
-					PTR_ERR(tv_cmd));
+					PTR_ERR(cmd));
 			goto err_cmd;
 		}
 		pr_debug("Allocated tv_cmd: %p exp_data_len: %d, data_direction"
-			": %d\n", tv_cmd, exp_data_len, data_direction);
+			": %d\n", cmd, exp_data_len, data_direction);
 
-		tv_cmd->tvc_vhost = vs;
-		tv_cmd->tvc_vq = vq;
-		tv_cmd->tvc_resp = vq->iov[out].iov_base;
+		cmd->tvc_vhost = vs;
+		cmd->tvc_vq = vq;
+		cmd->tvc_resp = vq->iov[out].iov_base;
 
 		/*
-		 * Copy in the recieved CDB descriptor into tv_cmd->tvc_cdb
+		 * Copy in the recieved CDB descriptor into cmd->tvc_cdb
 		 * that will be used by tcm_vhost_new_cmd_map() and down into
 		 * target_setup_cmd_from_cdb()
 		 */
-		memcpy(tv_cmd->tvc_cdb, v_req.cdb, TCM_VHOST_MAX_CDB_SIZE);
+		memcpy(cmd->tvc_cdb, v_req.cdb, TCM_VHOST_MAX_CDB_SIZE);
 		/*
 		 * Check that the recieved CDB size does not exceeded our
 		 * hardcoded max for tcm_vhost
 		 */
 		/* TODO what if cdb was too small for varlen cdb header? */
-		if (unlikely(scsi_command_size(tv_cmd->tvc_cdb) >
+		if (unlikely(scsi_command_size(cmd->tvc_cdb) >
 					TCM_VHOST_MAX_CDB_SIZE)) {
 			vq_err(vq, "Received SCSI CDB with command_size: %d that"
 				" exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
-				scsi_command_size(tv_cmd->tvc_cdb),
+				scsi_command_size(cmd->tvc_cdb),
 				TCM_VHOST_MAX_CDB_SIZE);
 			goto err_free;
 		}
-		tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
+		cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
 
 		pr_debug("vhost_scsi got command opcode: %#02x, lun: %d\n",
-			tv_cmd->tvc_cdb[0], tv_cmd->tvc_lun);
+			cmd->tvc_cdb[0], cmd->tvc_lun);
 
 		if (data_direction != DMA_NONE) {
-			ret = vhost_scsi_map_iov_to_sgl(tv_cmd,
+			ret = vhost_scsi_map_iov_to_sgl(cmd,
 					&vq->iov[data_first], data_num,
 					data_direction == DMA_TO_DEVICE);
 			if (unlikely(ret)) {
@@ -1046,22 +1046,22 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		 * complete the virtio-scsi request in TCM callback context via
 		 * tcm_vhost_queue_data_in() and tcm_vhost_queue_status()
 		 */
-		tv_cmd->tvc_vq_desc = head;
+		cmd->tvc_vq_desc = head;
 		/*
 		 * Dispatch tv_cmd descriptor for cmwq execution in process
 		 * context provided by tcm_vhost_workqueue.  This also ensures
 		 * tv_cmd is executed on the same kworker CPU as this vhost
 		 * thread to gain positive L2 cache locality effects..
 		 */
-		INIT_WORK(&tv_cmd->work, tcm_vhost_submission_work);
-		queue_work(tcm_vhost_workqueue, &tv_cmd->work);
+		INIT_WORK(&cmd->work, tcm_vhost_submission_work);
+		queue_work(tcm_vhost_workqueue, &cmd->work);
 	}
 
 	mutex_unlock(&vq->mutex);
 	return;
 
 err_free:
-	vhost_scsi_free_cmd(tv_cmd);
+	vhost_scsi_free_cmd(cmd);
 err_cmd:
 	vhost_scsi_send_bad_target(vs, vq, head, out);
 	mutex_unlock(&vq->mutex);
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread
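
The vhost_scsi_complete_cmd()/vhost_scsi_complete_cmd_work() pair renamed
above relies on the kernel's lock-free llist: completions push cmds onto
vs_completion_list from any context, and the worker detaches the whole list
with llist_del_all() before draining it. A rough userspace approximation
using C11 atomics; the node type and payload are invented for the sketch:

	#include <stdatomic.h>
	#include <stdio.h>

	struct node {
		struct node *next;
		int id;			/* stand-in for the command */
	};

	static _Atomic(struct node *) completion_list;

	/* Producer side: O(1) push, like llist_add(). */
	static void complete_push(struct node *n)
	{
		struct node *old = atomic_load(&completion_list);
		do {
			n->next = old;
		} while (!atomic_compare_exchange_weak(&completion_list,
						       &old, n));
	}

	int main(void)
	{
		struct node a = { .id = 1 }, b = { .id = 2 };

		complete_push(&a);
		complete_push(&b);

		/* Consumer side: detach everything at once, like
		 * llist_del_all(), then drain (LIFO order). */
		for (struct node *n = atomic_exchange(&completion_list, NULL);
		     n; n = n->next)
			printf("completing cmd %d\n", n->id);
		return 0;
	}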

 
 static void tcm_vhost_submission_work(struct work_struct *work)
 {
-	struct tcm_vhost_cmd *tv_cmd =
+	struct tcm_vhost_cmd *cmd =
 		container_of(work, struct tcm_vhost_cmd, work);
 	struct tcm_vhost_nexus *tv_nexus;
-	struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
+	struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
 	struct scatterlist *sg_ptr, *sg_bidi_ptr = NULL;
 	int rc, sg_no_bidi = 0;
 
-	if (tv_cmd->tvc_sgl_count) {
-		sg_ptr = tv_cmd->tvc_sgl;
+	if (cmd->tvc_sgl_count) {
+		sg_ptr = cmd->tvc_sgl;
 /* FIXME: Fix BIDI operation in tcm_vhost_submission_work() */
 #if 0
 		if (se_cmd->se_cmd_flags & SCF_BIDI) {
@@ -856,13 +856,13 @@ static void tcm_vhost_submission_work(struct work_struct *work)
 	} else {
 		sg_ptr = NULL;
 	}
-	tv_nexus = tv_cmd->tvc_nexus;
+	tv_nexus = cmd->tvc_nexus;
 
 	rc = target_submit_cmd_map_sgls(se_cmd, tv_nexus->tvn_se_sess,
-			tv_cmd->tvc_cdb, &tv_cmd->tvc_sense_buf[0],
-			tv_cmd->tvc_lun, tv_cmd->tvc_exp_data_len,
-			tv_cmd->tvc_task_attr, tv_cmd->tvc_data_direction,
-			0, sg_ptr, tv_cmd->tvc_sgl_count,
+			cmd->tvc_cdb, &cmd->tvc_sense_buf[0],
+			cmd->tvc_lun, cmd->tvc_exp_data_len,
+			cmd->tvc_task_attr, cmd->tvc_data_direction,
+			0, sg_ptr, cmd->tvc_sgl_count,
 			sg_bidi_ptr, sg_no_bidi);
 	if (rc < 0) {
 		transport_send_check_condition_and_sense(se_cmd,
@@ -896,7 +896,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	struct tcm_vhost_tpg **vs_tpg;
 	struct virtio_scsi_cmd_req v_req;
 	struct tcm_vhost_tpg *tpg;
-	struct tcm_vhost_cmd *tv_cmd;
+	struct tcm_vhost_cmd *cmd;
 	u32 exp_data_len, data_first, data_num, data_direction;
 	unsigned out, in, i;
 	int head, ret;
@@ -993,46 +993,46 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		for (i = 0; i < data_num; i++)
 			exp_data_len += vq->iov[data_first + i].iov_len;
 
-		tv_cmd = vhost_scsi_allocate_cmd(vq, tpg, &v_req,
+		cmd = vhost_scsi_allocate_cmd(vq, tpg, &v_req,
 					exp_data_len, data_direction);
-		if (IS_ERR(tv_cmd)) {
+		if (IS_ERR(cmd)) {
 			vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n",
-					PTR_ERR(tv_cmd));
+					PTR_ERR(cmd));
 			goto err_cmd;
 		}
 		pr_debug("Allocated tv_cmd: %p exp_data_len: %d, data_direction"
-			": %d\n", tv_cmd, exp_data_len, data_direction);
+			": %d\n", cmd, exp_data_len, data_direction);
 
-		tv_cmd->tvc_vhost = vs;
-		tv_cmd->tvc_vq = vq;
-		tv_cmd->tvc_resp = vq->iov[out].iov_base;
+		cmd->tvc_vhost = vs;
+		cmd->tvc_vq = vq;
+		cmd->tvc_resp = vq->iov[out].iov_base;
 
 		/*
-		 * Copy in the received CDB descriptor into tv_cmd->tvc_cdb
+		 * Copy in the received CDB descriptor into cmd->tvc_cdb
 		 * that will be used by tcm_vhost_new_cmd_map() and down into
 		 * target_setup_cmd_from_cdb()
 		 */
-		memcpy(tv_cmd->tvc_cdb, v_req.cdb, TCM_VHOST_MAX_CDB_SIZE);
+		memcpy(cmd->tvc_cdb, v_req.cdb, TCM_VHOST_MAX_CDB_SIZE);
 		/*
 		 * Check that the received CDB size does not exceed our
 		 * hardcoded max for tcm_vhost
 		 */
 		/* TODO what if cdb was too small for varlen cdb header? */
-		if (unlikely(scsi_command_size(tv_cmd->tvc_cdb) >
+		if (unlikely(scsi_command_size(cmd->tvc_cdb) >
 					TCM_VHOST_MAX_CDB_SIZE)) {
 			vq_err(vq, "Received SCSI CDB with command_size: %d that"
 				" exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
-				scsi_command_size(tv_cmd->tvc_cdb),
+				scsi_command_size(cmd->tvc_cdb),
 				TCM_VHOST_MAX_CDB_SIZE);
 			goto err_free;
 		}
-		tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
+		cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
 
 		pr_debug("vhost_scsi got command opcode: %#02x, lun: %d\n",
-			tv_cmd->tvc_cdb[0], tv_cmd->tvc_lun);
+			cmd->tvc_cdb[0], cmd->tvc_lun);
 
 		if (data_direction != DMA_NONE) {
-			ret = vhost_scsi_map_iov_to_sgl(tv_cmd,
+			ret = vhost_scsi_map_iov_to_sgl(cmd,
 					&vq->iov[data_first], data_num,
 					data_direction == DMA_TO_DEVICE);
 			if (unlikely(ret)) {
@@ -1046,22 +1046,22 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		 * complete the virtio-scsi request in TCM callback context via
 		 * tcm_vhost_queue_data_in() and tcm_vhost_queue_status()
 		 */
-		tv_cmd->tvc_vq_desc = head;
+		cmd->tvc_vq_desc = head;
 		/*
 		 * Dispatch tv_cmd descriptor for cmwq execution in process
 		 * context provided by tcm_vhost_workqueue.  This also ensures
 		 * tv_cmd is executed on the same kworker CPU as this vhost
 		 * thread to gain positive L2 cache locality effects..
 		 */
-		INIT_WORK(&tv_cmd->work, tcm_vhost_submission_work);
-		queue_work(tcm_vhost_workqueue, &tv_cmd->work);
+		INIT_WORK(&cmd->work, tcm_vhost_submission_work);
+		queue_work(tcm_vhost_workqueue, &cmd->work);
 	}
 
 	mutex_unlock(&vq->mutex);
 	return;
 
 err_free:
-	vhost_scsi_free_cmd(tv_cmd);
+	vhost_scsi_free_cmd(cmd);
 err_cmd:
 	vhost_scsi_send_bad_target(vs, vq, head, out);
 	mutex_unlock(&vq->mutex);
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 00/11] vhost cleanups
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (19 preceding siblings ...)
  2013-05-06  8:38 ` Asias He
@ 2013-05-06  8:56 ` Michael S. Tsirkin
  2013-05-06 10:07 ` Michael S. Tsirkin
  21 siblings, 0 replies; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06  8:56 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:38:18PM +0800, Asias He wrote:
> MST, This is on top of [PATCH 0/2] vhost-net fix ubuf.

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Once -rc1 is out I'll fork -next and apply them.
Thanks a lot!

Nicholas, recent attempts to push patches through both the net and
target trees resulted in a bit of a mess, so let's stick to the common
tree (unless there's a dependency that forces us to do otherwise) until
the rate of change in the common code calms down a bit.  OK?

> Asias He (11):
>   vhost: Remove vhost_enable_zcopy in vhost.h
>   vhost: Move VHOST_NET_FEATURES to net.c
>   vhost: Make vhost a separate module
>   vhost: Remove comments for hdr in vhost.h
>   vhost: Simplify dev->vqs[i] access
>   vhost-net: Cleanup vhost_ubuf and vhost_zcopy
>   vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
>   vhost-scsi: Rename struct vhost_scsi *s to *vs
>   vhost-scsi: Make func indention more consistent
>   vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
>   vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
> 
>  drivers/vhost/Kconfig  |   8 +
>  drivers/vhost/Makefile |   3 +-
>  drivers/vhost/net.c    |  64 ++++---
>  drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
>  drivers/vhost/vhost.c  |  86 +++++++--
>  drivers/vhost/vhost.h  |  11 +-
>  6 files changed, 361 insertions(+), 281 deletions(-)
> 
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-05-06  8:38 ` [PATCH v2 03/11] vhost: Make vhost a separate module Asias He
@ 2013-05-06  9:53   ` Michael S. Tsirkin
  2013-05-06 10:03   ` Michael S. Tsirkin
  1 sibling, 0 replies; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06  9:53 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> However, vhost-scsi shares the code by including the vhost.c file
> directly.
> 
> Making vhost a separate module makes it easier to share code with
> other vhost devices.
> 
> Signed-off-by: Asias He <asias@redhat.com>
> ---
>  drivers/vhost/Kconfig  |  8 ++++++++
>  drivers/vhost/Makefile |  3 ++-
>  drivers/vhost/scsi.c   |  1 -
>  drivers/vhost/vhost.c  | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
>  drivers/vhost/vhost.h  |  2 ++
>  5 files changed, 62 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> index 8b9226d..017a1e8 100644
> --- a/drivers/vhost/Kconfig
> +++ b/drivers/vhost/Kconfig
> @@ -1,6 +1,7 @@
>  config VHOST_NET
>  	tristate "Host kernel accelerator for virtio net"
>  	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
> +	select VHOST
>  	select VHOST_RING
>  	---help---
>  	  This kernel module can be loaded in host kernel to accelerate
> @@ -13,6 +14,7 @@ config VHOST_NET
>  config VHOST_SCSI
>  	tristate "VHOST_SCSI TCM fabric driver"
>  	depends on TARGET_CORE && EVENTFD && m
> +	select VHOST
>  	select VHOST_RING
>  	default n
>  	---help---
> @@ -24,3 +26,9 @@ config VHOST_RING
>  	---help---
>  	  This option is selected by any driver which needs to access
>  	  the host side of a virtio ring.
> +
> +config VHOST
> +	tristate
> +	---help---
> +	  This option is selected by any driver which needs to access
> +	  the core of vhost.
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index 654e9afb..e0441c3 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -1,7 +1,8 @@
>  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> -vhost_net-y := vhost.o net.o
> +vhost_net-y := net.o
>  
>  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
>  vhost_scsi-y := scsi.o
>  
>  obj-$(CONFIG_VHOST_RING) += vringh.o
> +obj-$(CONFIG_VHOST)	+= vhost.o
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 5179f7a..2dcb94a 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -49,7 +49,6 @@
>  #include <linux/llist.h>
>  #include <linux/bitmap.h>
>  
> -#include "vhost.c"
>  #include "vhost.h"
>  
>  #define TCM_VHOST_VERSION  "v0.1"
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index de9441a..e406d5f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -25,6 +25,7 @@
>  #include <linux/slab.h>
>  #include <linux/kthread.h>
>  #include <linux/cgroup.h>
> +#include <linux/module.h>
>  
>  #include "vhost.h"
>  
> @@ -66,6 +67,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
>  	work->flushing = 0;
>  	work->queue_seq = work->done_seq = 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_init);
>  
>  /* Init poll structure */
>  void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> @@ -79,6 +81,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
>  
>  	vhost_work_init(&poll->work, fn);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_init);
>  
>  /* Start polling a file. We add ourselves to file's wait queue. The caller must
>   * keep a reference to a file until after vhost_poll_stop is called. */
> @@ -101,6 +104,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
>  
>  	return ret;
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_start);
>  
>  /* Stop polling a file. After this function returns, it becomes safe to drop the
>   * file reference. You must also flush afterwards. */
> @@ -111,6 +115,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
>  		poll->wqh = NULL;
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_stop);
>  
>  static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
>  				unsigned seq)
> @@ -123,7 +128,7 @@ static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
>  	return left <= 0;
>  }
>  
> -static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
>  {
>  	unsigned seq;
>  	int flushing;
> @@ -138,6 +143,7 @@ static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
>  	spin_unlock_irq(&dev->work_lock);
>  	BUG_ON(flushing < 0);
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_flush);
>  
>  /* Flush any work that has been scheduled. When calling this, don't hold any
>   * locks that are also used by the callback. */
> @@ -145,6 +151,7 @@ void vhost_poll_flush(struct vhost_poll *poll)
>  {
>  	vhost_work_flush(poll->dev, &poll->work);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_flush);
>  
>  void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>  {
> @@ -158,11 +165,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>  	}
>  	spin_unlock_irqrestore(&dev->work_lock, flags);
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_queue);
>  
>  void vhost_poll_queue(struct vhost_poll *poll)
>  {
>  	vhost_work_queue(poll->dev, &poll->work);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_queue);
>  
>  static void vhost_vq_reset(struct vhost_dev *dev,
>  			   struct vhost_virtqueue *vq)
> @@ -310,6 +319,7 @@ long vhost_dev_init(struct vhost_dev *dev,
>  
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_init);
>  
>  /* Caller should have device mutex */
>  long vhost_dev_check_owner(struct vhost_dev *dev)
> @@ -317,6 +327,7 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
>  	/* Are you the owner? If not, I don't think you mean to do that */
>  	return dev->mm == current->mm ? 0 : -EPERM;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
>  
>  struct vhost_attach_cgroups_struct {
>  	struct vhost_work work;
> @@ -385,11 +396,13 @@ err_worker:
>  err_mm:
>  	return err;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_set_owner);
>  
>  struct vhost_memory *vhost_dev_reset_owner_prepare(void)
>  {
>  	return kmalloc(offsetof(struct vhost_memory, regions), GFP_KERNEL);
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner_prepare);
>  
>  /* Caller should have device mutex */
>  void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> @@ -400,6 +413,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
>  	memory->nregions = 0;
>  	RCU_INIT_POINTER(dev->memory, memory);
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner);
>  
>  void vhost_dev_stop(struct vhost_dev *dev)
>  {
> @@ -412,6 +426,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
>  		}
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_stop);
>  
>  /* Caller should have device mutex if and only if locked is set */
>  void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> @@ -452,6 +467,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
>  		mmput(dev->mm);
>  	dev->mm = NULL;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
>  
>  static int log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
>  {
> @@ -537,6 +553,7 @@ int vhost_log_access_ok(struct vhost_dev *dev)
>  				       lockdep_is_held(&dev->mutex));
>  	return memory_access_ok(dev, mp, 1);
>  }
> +EXPORT_SYMBOL_GPL(vhost_log_access_ok);
>  
>  /* Verify access for write logging. */
>  /* Caller should have vq mutex and device mutex */
> @@ -562,6 +579,7 @@ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
>  	return vq_access_ok(vq->dev, vq->num, vq->desc, vq->avail, vq->used) &&
>  		vq_log_access_ok(vq->dev, vq, vq->log_base);
>  }
> +EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
>  
>  static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
>  {
> @@ -791,6 +809,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
>  		vhost_poll_flush(&vq->poll);
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
>  
>  /* Caller must have device mutex */
>  long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> @@ -871,6 +890,7 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
>  done:
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
>  
>  static const struct vhost_memory_region *find_region(struct vhost_memory *mem,
>  						     __u64 addr, __u32 len)
> @@ -962,6 +982,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
>  	BUG();
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_log_write);
>  
>  static int vhost_update_used_flags(struct vhost_virtqueue *vq)
>  {
> @@ -1013,6 +1034,7 @@ int vhost_init_used(struct vhost_virtqueue *vq)
>  	vq->signalled_used_valid = false;
>  	return get_user(vq->last_used_idx, &vq->used->idx);
>  }
> +EXPORT_SYMBOL_GPL(vhost_init_used);
>  
>  static int translate_desc(struct vhost_dev *dev, u64 addr, u32 len,
>  			  struct iovec iov[], int iov_size)
> @@ -1289,12 +1311,14 @@ int vhost_get_vq_desc(struct vhost_dev *dev, struct vhost_virtqueue *vq,
>  	BUG_ON(!(vq->used_flags & VRING_USED_F_NO_NOTIFY));
>  	return head;
>  }
> +EXPORT_SYMBOL_GPL(vhost_get_vq_desc);
>  
>  /* Reverse the effect of vhost_get_vq_desc. Useful for error handling. */
>  void vhost_discard_vq_desc(struct vhost_virtqueue *vq, int n)
>  {
>  	vq->last_avail_idx -= n;
>  }
> +EXPORT_SYMBOL_GPL(vhost_discard_vq_desc);
>  
>  /* After we've used one of their buffers, we tell them about it.  We'll then
>   * want to notify the guest, using eventfd. */
> @@ -1343,6 +1367,7 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
>  		vq->signalled_used_valid = false;
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used);
>  
>  static int __vhost_add_used_n(struct vhost_virtqueue *vq,
>  			    struct vring_used_elem *heads,
> @@ -1412,6 +1437,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
>  	}
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_n);
>  
>  static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  {
> @@ -1456,6 +1482,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  	if (vq->call_ctx && vhost_notify(dev, vq))
>  		eventfd_signal(vq->call_ctx, 1);
>  }
> +EXPORT_SYMBOL_GPL(vhost_signal);
>  
>  /* And here's the combo meal deal.  Supersize me! */
>  void vhost_add_used_and_signal(struct vhost_dev *dev,
> @@ -1465,6 +1492,7 @@ void vhost_add_used_and_signal(struct vhost_dev *dev,
>  	vhost_add_used(vq, head, len);
>  	vhost_signal(dev, vq);
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
>  
>  /* multi-buffer version of vhost_add_used_and_signal */
>  void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> @@ -1474,6 +1502,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
>  	vhost_add_used_n(vq, heads, count);
>  	vhost_signal(dev, vq);
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
>  
>  /* OK, now we need to know about added descriptors. */
>  bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> @@ -1511,6 +1540,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  
>  	return avail_idx != vq->avail_idx;
>  }
> +EXPORT_SYMBOL_GPL(vhost_enable_notify);
>  
>  /* We don't need to be notified again. */
>  void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> @@ -1527,3 +1557,22 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  			       &vq->used->flags, r);
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_disable_notify);
> +
> +static int __init vhost_init(void)
> +{
> +	return 0;
> +}
> +
> +static void __exit vhost_exit(void)
> +{
> +	return;

No need for return here.

> +}
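
I.e. the empty exit handler can just be (a minimal sketch):

	static void __exit vhost_exit(void)
	{
	}
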
> +
> +module_init(vhost_init);
> +module_exit(vhost_exit);
> +
> +MODULE_VERSION("0.0.1");
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Michael S. Tsirkin");
> +MODULE_DESCRIPTION("Host kernel accelerator for virtio");
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 6bf81a9..94a80eb 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -46,6 +46,8 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file);
>  void vhost_poll_stop(struct vhost_poll *poll);
>  void vhost_poll_flush(struct vhost_poll *poll);
>  void vhost_poll_queue(struct vhost_poll *poll);
> +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> +long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
>  
>  struct vhost_log {
>  	u64 addr;
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-05-06  8:38 ` [PATCH v2 03/11] vhost: Make vhost a separate module Asias He
  2013-05-06  9:53   ` Michael S. Tsirkin
@ 2013-05-06 10:03   ` Michael S. Tsirkin
  2013-05-06 12:10     ` Asias He
  1 sibling, 1 reply; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06 10:03 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> However, vhost-scsi shares the code by including the vhost.c file
> directly.
> 
> Making vhost a separate module makes it easier to share code with
> other vhost devices.
> 
> Signed-off-by: Asias He <asias@redhat.com>

Also this will break test.c, right? Let's fix it in the same
commit too.
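
For reference, drivers/vhost/test.c pulls in the core the same way
scsi.c did, so the companion fix is presumably the same one-liner
(a sketch, not a tested patch):

	--- a/drivers/vhost/test.c
	+++ b/drivers/vhost/test.c
	@@
	-#include "vhost.c"
	 #include "vhost.h"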

> ---
>  drivers/vhost/Kconfig  |  8 ++++++++
>  drivers/vhost/Makefile |  3 ++-
>  drivers/vhost/scsi.c   |  1 -
>  drivers/vhost/vhost.c  | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
>  drivers/vhost/vhost.h  |  2 ++
>  5 files changed, 62 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> index 8b9226d..017a1e8 100644
> --- a/drivers/vhost/Kconfig
> +++ b/drivers/vhost/Kconfig
> @@ -1,6 +1,7 @@
>  config VHOST_NET
>  	tristate "Host kernel accelerator for virtio net"
>  	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
> +	select VHOST
>  	select VHOST_RING
>  	---help---
>  	  This kernel module can be loaded in host kernel to accelerate
> @@ -13,6 +14,7 @@ config VHOST_NET
>  config VHOST_SCSI
>  	tristate "VHOST_SCSI TCM fabric driver"
>  	depends on TARGET_CORE && EVENTFD && m
> +	select VHOST
>  	select VHOST_RING
>  	default n
>  	---help---
> @@ -24,3 +26,9 @@ config VHOST_RING
>  	---help---
>  	  This option is selected by any driver which needs to access
>  	  the host side of a virtio ring.
> +
> +config VHOST
> +	tristate
> +	---help---
> +	  This option is selected by any driver which needs to access
> +	  the core of vhost.
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index 654e9afb..e0441c3 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -1,7 +1,8 @@
>  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> -vhost_net-y := vhost.o net.o
> +vhost_net-y := net.o
>  
>  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
>  vhost_scsi-y := scsi.o
>  
>  obj-$(CONFIG_VHOST_RING) += vringh.o
> +obj-$(CONFIG_VHOST)	+= vhost.o
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 5179f7a..2dcb94a 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -49,7 +49,6 @@
>  #include <linux/llist.h>
>  #include <linux/bitmap.h>
>  
> -#include "vhost.c"
>  #include "vhost.h"
>  
>  #define TCM_VHOST_VERSION  "v0.1"
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index de9441a..e406d5f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -25,6 +25,7 @@
>  #include <linux/slab.h>
>  #include <linux/kthread.h>
>  #include <linux/cgroup.h>
> +#include <linux/module.h>
>  
>  #include "vhost.h"
>  
> @@ -66,6 +67,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
>  	work->flushing = 0;
>  	work->queue_seq = work->done_seq = 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_init);
>  
>  /* Init poll structure */
>  void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> @@ -79,6 +81,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
>  
>  	vhost_work_init(&poll->work, fn);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_init);
>  
>  /* Start polling a file. We add ourselves to file's wait queue. The caller must
>   * keep a reference to a file until after vhost_poll_stop is called. */
> @@ -101,6 +104,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
>  
>  	return ret;
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_start);
>  
>  /* Stop polling a file. After this function returns, it becomes safe to drop the
>   * file reference. You must also flush afterwards. */
> @@ -111,6 +115,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
>  		poll->wqh = NULL;
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_stop);
>  
>  static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
>  				unsigned seq)
> @@ -123,7 +128,7 @@ static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
>  	return left <= 0;
>  }
>  
> -static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
>  {
>  	unsigned seq;
>  	int flushing;
> @@ -138,6 +143,7 @@ static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
>  	spin_unlock_irq(&dev->work_lock);
>  	BUG_ON(flushing < 0);
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_flush);
>  
>  /* Flush any work that has been scheduled. When calling this, don't hold any
>   * locks that are also used by the callback. */
> @@ -145,6 +151,7 @@ void vhost_poll_flush(struct vhost_poll *poll)
>  {
>  	vhost_work_flush(poll->dev, &poll->work);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_flush);
>  
>  void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>  {
> @@ -158,11 +165,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>  	}
>  	spin_unlock_irqrestore(&dev->work_lock, flags);
>  }
> +EXPORT_SYMBOL_GPL(vhost_work_queue);
>  
>  void vhost_poll_queue(struct vhost_poll *poll)
>  {
>  	vhost_work_queue(poll->dev, &poll->work);
>  }
> +EXPORT_SYMBOL_GPL(vhost_poll_queue);
>  
>  static void vhost_vq_reset(struct vhost_dev *dev,
>  			   struct vhost_virtqueue *vq)
> @@ -310,6 +319,7 @@ long vhost_dev_init(struct vhost_dev *dev,
>  
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_init);
>  
>  /* Caller should have device mutex */
>  long vhost_dev_check_owner(struct vhost_dev *dev)
> @@ -317,6 +327,7 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
>  	/* Are you the owner? If not, I don't think you mean to do that */
>  	return dev->mm == current->mm ? 0 : -EPERM;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
>  
>  struct vhost_attach_cgroups_struct {
>  	struct vhost_work work;
> @@ -385,11 +396,13 @@ err_worker:
>  err_mm:
>  	return err;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_set_owner);
>  
>  struct vhost_memory *vhost_dev_reset_owner_prepare(void)
>  {
>  	return kmalloc(offsetof(struct vhost_memory, regions), GFP_KERNEL);
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner_prepare);
>  
>  /* Caller should have device mutex */
>  void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> @@ -400,6 +413,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
>  	memory->nregions = 0;
>  	RCU_INIT_POINTER(dev->memory, memory);
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner);
>  
>  void vhost_dev_stop(struct vhost_dev *dev)
>  {
> @@ -412,6 +426,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
>  		}
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_stop);
>  
>  /* Caller should have device mutex if and only if locked is set */
>  void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> @@ -452,6 +467,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
>  		mmput(dev->mm);
>  	dev->mm = NULL;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
>  
>  static int log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
>  {
> @@ -537,6 +553,7 @@ int vhost_log_access_ok(struct vhost_dev *dev)
>  				       lockdep_is_held(&dev->mutex));
>  	return memory_access_ok(dev, mp, 1);
>  }
> +EXPORT_SYMBOL_GPL(vhost_log_access_ok);
>  
>  /* Verify access for write logging. */
>  /* Caller should have vq mutex and device mutex */
> @@ -562,6 +579,7 @@ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
>  	return vq_access_ok(vq->dev, vq->num, vq->desc, vq->avail, vq->used) &&
>  		vq_log_access_ok(vq->dev, vq, vq->log_base);
>  }
> +EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
>  
>  static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
>  {
> @@ -791,6 +809,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
>  		vhost_poll_flush(&vq->poll);
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
>  
>  /* Caller must have device mutex */
>  long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> @@ -871,6 +890,7 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
>  done:
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
>  
>  static const struct vhost_memory_region *find_region(struct vhost_memory *mem,
>  						     __u64 addr, __u32 len)
> @@ -962,6 +982,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
>  	BUG();
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_log_write);
>  
>  static int vhost_update_used_flags(struct vhost_virtqueue *vq)
>  {
> @@ -1013,6 +1034,7 @@ int vhost_init_used(struct vhost_virtqueue *vq)
>  	vq->signalled_used_valid = false;
>  	return get_user(vq->last_used_idx, &vq->used->idx);
>  }
> +EXPORT_SYMBOL_GPL(vhost_init_used);
>  
>  static int translate_desc(struct vhost_dev *dev, u64 addr, u32 len,
>  			  struct iovec iov[], int iov_size)
> @@ -1289,12 +1311,14 @@ int vhost_get_vq_desc(struct vhost_dev *dev, struct vhost_virtqueue *vq,
>  	BUG_ON(!(vq->used_flags & VRING_USED_F_NO_NOTIFY));
>  	return head;
>  }
> +EXPORT_SYMBOL_GPL(vhost_get_vq_desc);
>  
>  /* Reverse the effect of vhost_get_vq_desc. Useful for error handling. */
>  void vhost_discard_vq_desc(struct vhost_virtqueue *vq, int n)
>  {
>  	vq->last_avail_idx -= n;
>  }
> +EXPORT_SYMBOL_GPL(vhost_discard_vq_desc);
>  
>  /* After we've used one of their buffers, we tell them about it.  We'll then
>   * want to notify the guest, using eventfd. */
> @@ -1343,6 +1367,7 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
>  		vq->signalled_used_valid = false;
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used);
>  
>  static int __vhost_add_used_n(struct vhost_virtqueue *vq,
>  			    struct vring_used_elem *heads,
> @@ -1412,6 +1437,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
>  	}
>  	return r;
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_n);
>  
>  static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  {
> @@ -1456,6 +1482,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  	if (vq->call_ctx && vhost_notify(dev, vq))
>  		eventfd_signal(vq->call_ctx, 1);
>  }
> +EXPORT_SYMBOL_GPL(vhost_signal);
>  
>  /* And here's the combo meal deal.  Supersize me! */
>  void vhost_add_used_and_signal(struct vhost_dev *dev,
> @@ -1465,6 +1492,7 @@ void vhost_add_used_and_signal(struct vhost_dev *dev,
>  	vhost_add_used(vq, head, len);
>  	vhost_signal(dev, vq);
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
>  
>  /* multi-buffer version of vhost_add_used_and_signal */
>  void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> @@ -1474,6 +1502,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
>  	vhost_add_used_n(vq, heads, count);
>  	vhost_signal(dev, vq);
>  }
> +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
>  
>  /* OK, now we need to know about added descriptors. */
>  bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> @@ -1511,6 +1540,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  
>  	return avail_idx != vq->avail_idx;
>  }
> +EXPORT_SYMBOL_GPL(vhost_enable_notify);
>  
>  /* We don't need to be notified again. */
>  void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> @@ -1527,3 +1557,22 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  			       &vq->used->flags, r);
>  	}
>  }
> +EXPORT_SYMBOL_GPL(vhost_disable_notify);
> +
> +static int __init vhost_init(void)
> +{
> +	return 0;
> +}
> +
> +static void __exit vhost_exit(void)
> +{
> +	return;
> +}
> +
> +module_init(vhost_init);
> +module_exit(vhost_exit);
> +
> +MODULE_VERSION("0.0.1");
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Michael S. Tsirkin");
> +MODULE_DESCRIPTION("Host kernel accelerator for virtio");
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 6bf81a9..94a80eb 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -46,6 +46,8 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file);
>  void vhost_poll_stop(struct vhost_poll *poll);
>  void vhost_poll_flush(struct vhost_poll *poll);
>  void vhost_poll_queue(struct vhost_poll *poll);
> +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> +long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
>  
>  struct vhost_log {
>  	u64 addr;
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 00/11] vhost cleanups
  2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
                   ` (20 preceding siblings ...)
  2013-05-06  8:56 ` [PATCH v2 00/11] vhost cleanups Michael S. Tsirkin
@ 2013-05-06 10:07 ` Michael S. Tsirkin
  2013-05-06 12:05   ` Asias He
  21 siblings, 1 reply; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06 10:07 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:38:18PM +0800, Asias He wrote:
> MST, This is on top of [PATCH 0/2] vhost-net fix ubuf.

Okay, how about making EVENT_IDX work for virtio-scsi?
I'm guessing it's some mess-up with feature negotiation;
that's what all event-idx bugs have come down to so far.
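
If it is feature negotiation again, the first thing to check is what
the guest actually acked, along these lines (a debugging sketch, not a
fix):

	/* e.g. in a vhost-scsi handler: was EVENT_IDX negotiated? */
	if (vhost_has_feature(&vs->dev, VIRTIO_RING_F_EVENT_IDX))
		pr_debug("vhost-scsi: EVENT_IDX acked by guest\n");
	else
		pr_debug("vhost-scsi: EVENT_IDX not acked\n");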

> Asias He (11):
>   vhost: Remove vhost_enable_zcopy in vhost.h
>   vhost: Move VHOST_NET_FEATURES to net.c
>   vhost: Make vhost a separate module
>   vhost: Remove comments for hdr in vhost.h
>   vhost: Simplify dev->vqs[i] access
>   vhost-net: Cleanup vhost_ubuf and vhost_zcopy
>   vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
>   vhost-scsi: Rename struct vhost_scsi *s to *vs
>   vhost-scsi: Make func indention more consistent
>   vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
>   vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
> 
>  drivers/vhost/Kconfig  |   8 +
>  drivers/vhost/Makefile |   3 +-
>  drivers/vhost/net.c    |  64 ++++---
>  drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
>  drivers/vhost/vhost.c  |  86 +++++++--
>  drivers/vhost/vhost.h  |  11 +-
>  6 files changed, 361 insertions(+), 281 deletions(-)
> 
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 06/11] vhost-net: Cleanup vhost_ubuf and vhost_zcopy
  2013-05-06  8:38 ` [PATCH v2 06/11] vhost-net: Cleanup vhost_ubuf and vhost_zcopy Asias He
@ 2013-05-06 10:25   ` Michael S. Tsirkin
  0 siblings, 0 replies; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06 10:25 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:38:24PM +0800, Asias He wrote:
> - Rename vhost_ubuf to vhost_net_ubuf
> - Rename vhost_zcopy_mask to vhost_net_zcopy_mask
> - Make funcs static
> 
> Signed-off-by: Asias He <asias@redhat.com>

OK, this actually fixes a warning introduced by patch 1,
so I'll pull this in too (I don't like builds with warnings).
Then your patch 1 can go in as is (the warnings this might
trigger during bisect builds don't worry me).
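
For the record, the warning is presumably the missing-prototype kind:
patch 1 removes the declaration from vhost.h while net.c keeps a global
definition, and making the function static, as this patch does, is what
silences it. A sketch of the before/after:

	/* after patch 1: global definition, no prototype in scope */
	void vhost_enable_zcopy(int vq)
	{
		vhost_zcopy_mask |= 0x1 << vq;
	}

	/* after this patch: static, no external prototype needed */
	static void vhost_net_enable_zcopy(int vq)
	{
		vhost_net_zcopy_mask |= 0x1 << vq;
	}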

> ---
>  drivers/vhost/net.c | 58 +++++++++++++++++++++++++++--------------------------
>  1 file changed, 30 insertions(+), 28 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 06b2447..2b51e23 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -70,7 +70,7 @@ enum {
>  	VHOST_NET_VQ_MAX = 2,
>  };
>  
> -struct vhost_ubuf_ref {
> +struct vhost_net_ubuf_ref {
>  	struct kref kref;
>  	wait_queue_head_t wait;
>  	struct vhost_virtqueue *vq;
> @@ -93,7 +93,7 @@ struct vhost_net_virtqueue {
>  	struct ubuf_info *ubuf_info;
>  	/* Reference counting for outstanding ubufs.
>  	 * Protected by vq mutex. Writers must also take device mutex. */
> -	struct vhost_ubuf_ref *ubufs;
> +	struct vhost_net_ubuf_ref *ubufs;
>  };
>  
>  struct vhost_net {
> @@ -110,24 +110,25 @@ struct vhost_net {
>  	bool tx_flush;
>  };
>  
> -static unsigned vhost_zcopy_mask __read_mostly;
> +static unsigned vhost_net_zcopy_mask __read_mostly;
>  
> -void vhost_enable_zcopy(int vq)
> +static void vhost_net_enable_zcopy(int vq)
>  {
> -	vhost_zcopy_mask |= 0x1 << vq;
> +	vhost_net_zcopy_mask |= 0x1 << vq;
>  }
>  
> -static void vhost_zerocopy_done_signal(struct kref *kref)
> +static void vhost_net_zerocopy_done_signal(struct kref *kref)
>  {
> -	struct vhost_ubuf_ref *ubufs = container_of(kref, struct vhost_ubuf_ref,
> -						    kref);
> +	struct vhost_net_ubuf_ref *ubufs;
> +
> +	ubufs = container_of(kref, struct vhost_net_ubuf_ref, kref);
>  	wake_up(&ubufs->wait);
>  }
>  
> -struct vhost_ubuf_ref *vhost_ubuf_alloc(struct vhost_virtqueue *vq,
> -					bool zcopy)
> +static struct vhost_net_ubuf_ref *
> +vhost_net_ubuf_alloc(struct vhost_virtqueue *vq, bool zcopy)
>  {
> -	struct vhost_ubuf_ref *ubufs;
> +	struct vhost_net_ubuf_ref *ubufs;
>  	/* No zero copy backend? Nothing to count. */
>  	if (!zcopy)
>  		return NULL;
> @@ -140,14 +141,14 @@ struct vhost_ubuf_ref *vhost_ubuf_alloc(struct vhost_virtqueue *vq,
>  	return ubufs;
>  }
>  
> -void vhost_ubuf_put(struct vhost_ubuf_ref *ubufs)
> +static void vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs)
>  {
> -	kref_put(&ubufs->kref, vhost_zerocopy_done_signal);
> +	kref_put(&ubufs->kref, vhost_net_zerocopy_done_signal);
>  }
>  
> -void vhost_ubuf_put_and_wait(struct vhost_ubuf_ref *ubufs)
> +static void vhost_net_ubuf_put_and_wait(struct vhost_net_ubuf_ref *ubufs)
>  {
> -	kref_put(&ubufs->kref, vhost_zerocopy_done_signal);
> +	kref_put(&ubufs->kref, vhost_net_zerocopy_done_signal);
>  	wait_event(ubufs->wait, !atomic_read(&ubufs->kref.refcount));
>  	kfree(ubufs);
>  }
> @@ -159,7 +160,7 @@ static void vhost_net_clear_ubuf_info(struct vhost_net *n)
>  	int i;
>  
>  	for (i = 0; i < n->dev.nvqs; ++i) {
> -		zcopy = vhost_zcopy_mask & (0x1 << i);
> +		zcopy = vhost_net_zcopy_mask & (0x1 << i);
>  		if (zcopy)
>  			kfree(n->vqs[i].ubuf_info);
>  	}
> @@ -171,7 +172,7 @@ int vhost_net_set_ubuf_info(struct vhost_net *n)
>  	int i;
>  
>  	for (i = 0; i < n->dev.nvqs; ++i) {
> -		zcopy = vhost_zcopy_mask & (0x1 << i);
> +		zcopy = vhost_net_zcopy_mask & (0x1 << i);
>  		if (!zcopy)
>  			continue;
>  		n->vqs[i].ubuf_info = kmalloc(sizeof(*n->vqs[i].ubuf_info) *
> @@ -183,7 +184,7 @@ int vhost_net_set_ubuf_info(struct vhost_net *n)
>  
>  err:
>  	while (i--) {
> -		zcopy = vhost_zcopy_mask & (0x1 << i);
> +		zcopy = vhost_net_zcopy_mask & (0x1 << i);
>  		if (!zcopy)
>  			continue;
>  		kfree(n->vqs[i].ubuf_info);
> @@ -305,7 +306,7 @@ static int vhost_zerocopy_signal_used(struct vhost_net *net,
>  
>  static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>  {
> -	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
> +	struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;
>  	struct vhost_virtqueue *vq = ubufs->vq;
>  	int cnt = atomic_read(&ubufs->kref.refcount);
>  
> @@ -322,7 +323,7 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>  	/* set len to mark this desc buffers done DMA */
>  	vq->heads[ubuf->desc].len = success ?
>  		VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
> -	vhost_ubuf_put(ubufs);
> +	vhost_net_ubuf_put(ubufs);
>  }
>  
>  /* Expects to be always run from workqueue - which acts as
> @@ -345,7 +346,7 @@ static void handle_tx(struct vhost_net *net)
>  	int err;
>  	size_t hdr_size;
>  	struct socket *sock;
> -	struct vhost_ubuf_ref *uninitialized_var(ubufs);
> +	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
>  	bool zcopy, zcopy_used;
>  
>  	/* TODO: check that we are running from vhost_worker? */
> @@ -441,7 +442,7 @@ static void handle_tx(struct vhost_net *net)
>  		if (unlikely(err < 0)) {
>  			if (zcopy_used) {
>  				if (ubufs)
> -					vhost_ubuf_put(ubufs);
> +					vhost_net_ubuf_put(ubufs);
>  				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
>  					% UIO_MAXIOV;
>  			}
> @@ -795,7 +796,7 @@ static void vhost_net_flush(struct vhost_net *n)
>  		n->tx_flush = true;
>  		mutex_unlock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
>  		/* Wait for all lower device DMAs done. */
> -		vhost_ubuf_put_and_wait(n->vqs[VHOST_NET_VQ_TX].ubufs);
> +		vhost_net_ubuf_put_and_wait(n->vqs[VHOST_NET_VQ_TX].ubufs);
>  		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
>  		n->tx_flush = false;
>  		kref_init(&n->vqs[VHOST_NET_VQ_TX].ubufs->kref);
> @@ -896,7 +897,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>  	struct socket *sock, *oldsock;
>  	struct vhost_virtqueue *vq;
>  	struct vhost_net_virtqueue *nvq;
> -	struct vhost_ubuf_ref *ubufs, *oldubufs = NULL;
> +	struct vhost_net_ubuf_ref *ubufs, *oldubufs = NULL;
>  	int r;
>  
>  	mutex_lock(&n->dev.mutex);
> @@ -927,7 +928,8 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>  	oldsock = rcu_dereference_protected(vq->private_data,
>  					    lockdep_is_held(&vq->mutex));
>  	if (sock != oldsock) {
> -		ubufs = vhost_ubuf_alloc(vq, sock && vhost_sock_zcopy(sock));
> +		ubufs = vhost_net_ubuf_alloc(vq,
> +					     sock && vhost_sock_zcopy(sock));
>  		if (IS_ERR(ubufs)) {
>  			r = PTR_ERR(ubufs);
>  			goto err_ubufs;
> @@ -953,7 +955,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>  	mutex_unlock(&vq->mutex);
>  
>  	if (oldubufs) {
> -		vhost_ubuf_put_and_wait(oldubufs);
> +		vhost_net_ubuf_put_and_wait(oldubufs);
>  		mutex_lock(&vq->mutex);
>  		vhost_zerocopy_signal_used(n, vq);
>  		mutex_unlock(&vq->mutex);
> @@ -971,7 +973,7 @@ err_used:
>  	rcu_assign_pointer(vq->private_data, oldsock);
>  	vhost_net_enable_vq(n, vq);
>  	if (ubufs)
> -		vhost_ubuf_put_and_wait(ubufs);
> +		vhost_net_ubuf_put_and_wait(ubufs);
>  err_ubufs:
>  	fput(sock->file);
>  err_vq:
> @@ -1133,7 +1135,7 @@ static struct miscdevice vhost_net_misc = {
>  static int vhost_net_init(void)
>  {
>  	if (experimental_zcopytx)
> -		vhost_enable_zcopy(VHOST_NET_VQ_TX);
> +		vhost_net_enable_zcopy(VHOST_NET_VQ_TX);
>  	return misc_register(&vhost_net_misc);
>  }
>  module_init(vhost_net_init);
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 00/11] vhost cleanups
  2013-05-06 10:07 ` Michael S. Tsirkin
@ 2013-05-06 12:05   ` Asias He
  2013-05-06 13:15     ` Michael S. Tsirkin
  0 siblings, 1 reply; 33+ messages in thread
From: Asias He @ 2013-05-06 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 01:07:46PM +0300, Michael S. Tsirkin wrote:
> On Mon, May 06, 2013 at 04:38:18PM +0800, Asias He wrote:
> > MST, This is on top of [PATCH 0/2] vhost-net fix ubuf.
> 
> Okay, how about making EVENT_IDX work for virtio-scsi?
> I'm guessing it's some messup with feature negotiation,
> that's what all event-idx bugs came down to so far.

Yes, IIRC, EVENT_IDX works for vhost-scsi now. Will cook a patch to
enable it. It should go into 3.10, right?
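
For context, the EVENT_IDX bit itself already sits in the core feature
mask in vhost.h, roughly (a sketch from memory, worth double-checking
against the tree):

	enum {
		VHOST_FEATURES = (1ULL << VIRTIO_F_NOTIFY_ON_EMPTY) |
				 (1ULL << VIRTIO_RING_F_INDIRECT_DESC) |
				 (1ULL << VIRTIO_RING_F_EVENT_IDX) |
				 (1ULL << VHOST_F_LOG_ALL),
	};

so the enabling patch is mostly about making sure the scsi side and
userspace actually negotiate it.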

> > Asias He (11):
> >   vhost: Remove vhost_enable_zcopy in vhost.h
> >   vhost: Move VHOST_NET_FEATURES to net.c
> >   vhost: Make vhost a separate module
> >   vhost: Remove comments for hdr in vhost.h
> >   vhost: Simplify dev->vqs[i] access
> >   vhost-net: Cleanup vhost_ubuf and vhost_zcopy
> >   vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
> >   vhost-scsi: Rename struct vhost_scsi *s to *vs
> >   vhost-scsi: Make func indention more consistent
> >   vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
> >   vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
> > 
> >  drivers/vhost/Kconfig  |   8 +
> >  drivers/vhost/Makefile |   3 +-
> >  drivers/vhost/net.c    |  64 ++++---
> >  drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
> >  drivers/vhost/vhost.c  |  86 +++++++--
> >  drivers/vhost/vhost.h  |  11 +-
> >  6 files changed, 361 insertions(+), 281 deletions(-)
> > 
> > -- 
> > 1.8.1.4

-- 
Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-05-06 10:03   ` Michael S. Tsirkin
@ 2013-05-06 12:10     ` Asias He
  2013-07-07 11:37       ` Michael S. Tsirkin
  0 siblings, 1 reply; 33+ messages in thread
From: Asias He @ 2013-05-06 12:10 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 01:03:42PM +0300, Michael S. Tsirkin wrote:
> On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> > Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> > However, vhost-scsi shares the code by including the vhost.c file
> > directly.
> > 
> > Making vhost a separate module makes it easier to share code with
> > other vhost devices.
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> 
> Also this will break test.c, right? Let's fix it in the same
> commit too.

I will fix it up and remove the useless 'return'.

> > ---
> >  drivers/vhost/Kconfig  |  8 ++++++++
> >  drivers/vhost/Makefile |  3 ++-
> >  drivers/vhost/scsi.c   |  1 -
> >  drivers/vhost/vhost.c  | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
> >  drivers/vhost/vhost.h  |  2 ++
> >  5 files changed, 62 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > index 8b9226d..017a1e8 100644
> > --- a/drivers/vhost/Kconfig
> > +++ b/drivers/vhost/Kconfig
> > @@ -1,6 +1,7 @@
> >  config VHOST_NET
> >  	tristate "Host kernel accelerator for virtio net"
> >  	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
> > +	select VHOST
> >  	select VHOST_RING
> >  	---help---
> >  	  This kernel module can be loaded in host kernel to accelerate
> > @@ -13,6 +14,7 @@ config VHOST_NET
> >  config VHOST_SCSI
> >  	tristate "VHOST_SCSI TCM fabric driver"
> >  	depends on TARGET_CORE && EVENTFD && m
> > +	select VHOST
> >  	select VHOST_RING
> >  	default n
> >  	---help---
> > @@ -24,3 +26,9 @@ config VHOST_RING
> >  	---help---
> >  	  This option is selected by any driver which needs to access
> >  	  the host side of a virtio ring.
> > +
> > +config VHOST
> > +	tristate
> > +	---help---
> > +	  This option is selected by any driver which needs to access
> > +	  the core of vhost.
> > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > index 654e9afb..e0441c3 100644
> > --- a/drivers/vhost/Makefile
> > +++ b/drivers/vhost/Makefile
> > @@ -1,7 +1,8 @@
> >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > -vhost_net-y := vhost.o net.o
> > +vhost_net-y := net.o
> >  
> >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> >  vhost_scsi-y := scsi.o
> >  
> >  obj-$(CONFIG_VHOST_RING) += vringh.o
> > +obj-$(CONFIG_VHOST)	+= vhost.o
> > diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> > index 5179f7a..2dcb94a 100644
> > --- a/drivers/vhost/scsi.c
> > +++ b/drivers/vhost/scsi.c
> > @@ -49,7 +49,6 @@
> >  #include <linux/llist.h>
> >  #include <linux/bitmap.h>
> >  
> > -#include "vhost.c"
> >  #include "vhost.h"
> >  
> >  #define TCM_VHOST_VERSION  "v0.1"
> > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > index de9441a..e406d5f 100644
> > --- a/drivers/vhost/vhost.c
> > +++ b/drivers/vhost/vhost.c
> > @@ -25,6 +25,7 @@
> >  #include <linux/slab.h>
> >  #include <linux/kthread.h>
> >  #include <linux/cgroup.h>
> > +#include <linux/module.h>
> >  
> >  #include "vhost.h"
> >  
> > @@ -66,6 +67,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
> >  	work->flushing = 0;
> >  	work->queue_seq = work->done_seq = 0;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_work_init);
> >  
> >  /* Init poll structure */
> >  void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> > @@ -79,6 +81,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> >  
> >  	vhost_work_init(&poll->work, fn);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_poll_init);
> >  
> >  /* Start polling a file. We add ourselves to file's wait queue. The caller must
> >   * keep a reference to a file until after vhost_poll_stop is called. */
> > @@ -101,6 +104,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
> >  
> >  	return ret;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_poll_start);
> >  
> >  /* Stop polling a file. After this function returns, it becomes safe to drop the
> >   * file reference. You must also flush afterwards. */
> > @@ -111,6 +115,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
> >  		poll->wqh = NULL;
> >  	}
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_poll_stop);
> >  
> >  static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
> >  				unsigned seq)
> > @@ -123,7 +128,7 @@ static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
> >  	return left <= 0;
> >  }
> >  
> > -static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> > +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> >  {
> >  	unsigned seq;
> >  	int flushing;
> > @@ -138,6 +143,7 @@ static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> >  	spin_unlock_irq(&dev->work_lock);
> >  	BUG_ON(flushing < 0);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_work_flush);
> >  
> >  /* Flush any work that has been scheduled. When calling this, don't hold any
> >   * locks that are also used by the callback. */
> > @@ -145,6 +151,7 @@ void vhost_poll_flush(struct vhost_poll *poll)
> >  {
> >  	vhost_work_flush(poll->dev, &poll->work);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_poll_flush);
> >  
> >  void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> >  {
> > @@ -158,11 +165,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> >  	}
> >  	spin_unlock_irqrestore(&dev->work_lock, flags);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_work_queue);
> >  
> >  void vhost_poll_queue(struct vhost_poll *poll)
> >  {
> >  	vhost_work_queue(poll->dev, &poll->work);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_poll_queue);
> >  
> >  static void vhost_vq_reset(struct vhost_dev *dev,
> >  			   struct vhost_virtqueue *vq)
> > @@ -310,6 +319,7 @@ long vhost_dev_init(struct vhost_dev *dev,
> >  
> >  	return 0;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_init);
> >  
> >  /* Caller should have device mutex */
> >  long vhost_dev_check_owner(struct vhost_dev *dev)
> > @@ -317,6 +327,7 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
> >  	/* Are you the owner? If not, I don't think you mean to do that */
> >  	return dev->mm == current->mm ? 0 : -EPERM;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
> >  
> >  struct vhost_attach_cgroups_struct {
> >  	struct vhost_work work;
> > @@ -385,11 +396,13 @@ err_worker:
> >  err_mm:
> >  	return err;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_set_owner);
> >  
> >  struct vhost_memory *vhost_dev_reset_owner_prepare(void)
> >  {
> >  	return kmalloc(offsetof(struct vhost_memory, regions), GFP_KERNEL);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner_prepare);
> >  
> >  /* Caller should have device mutex */
> >  void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> > @@ -400,6 +413,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> >  	memory->nregions = 0;
> >  	RCU_INIT_POINTER(dev->memory, memory);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner);
> >  
> >  void vhost_dev_stop(struct vhost_dev *dev)
> >  {
> > @@ -412,6 +426,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
> >  		}
> >  	}
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_stop);
> >  
> >  /* Caller should have device mutex if and only if locked is set */
> >  void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> > @@ -452,6 +467,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> >  		mmput(dev->mm);
> >  	dev->mm = NULL;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
> >  
> >  static int log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
> >  {
> > @@ -537,6 +553,7 @@ int vhost_log_access_ok(struct vhost_dev *dev)
> >  				       lockdep_is_held(&dev->mutex));
> >  	return memory_access_ok(dev, mp, 1);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_log_access_ok);
> >  
> >  /* Verify access for write logging. */
> >  /* Caller should have vq mutex and device mutex */
> > @@ -562,6 +579,7 @@ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
> >  	return vq_access_ok(vq->dev, vq->num, vq->desc, vq->avail, vq->used) &&
> >  		vq_log_access_ok(vq->dev, vq, vq->log_base);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
> >  
> >  static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
> >  {
> > @@ -791,6 +809,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
> >  		vhost_poll_flush(&vq->poll);
> >  	return r;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
> >  
> >  /* Caller must have device mutex */
> >  long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> > @@ -871,6 +890,7 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> >  done:
> >  	return r;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
> >  
> >  static const struct vhost_memory_region *find_region(struct vhost_memory *mem,
> >  						     __u64 addr, __u32 len)
> > @@ -962,6 +982,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
> >  	BUG();
> >  	return 0;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_log_write);
> >  
> >  static int vhost_update_used_flags(struct vhost_virtqueue *vq)
> >  {
> > @@ -1013,6 +1034,7 @@ int vhost_init_used(struct vhost_virtqueue *vq)
> >  	vq->signalled_used_valid = false;
> >  	return get_user(vq->last_used_idx, &vq->used->idx);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_init_used);
> >  
> >  static int translate_desc(struct vhost_dev *dev, u64 addr, u32 len,
> >  			  struct iovec iov[], int iov_size)
> > @@ -1289,12 +1311,14 @@ int vhost_get_vq_desc(struct vhost_dev *dev, struct vhost_virtqueue *vq,
> >  	BUG_ON(!(vq->used_flags & VRING_USED_F_NO_NOTIFY));
> >  	return head;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_get_vq_desc);
> >  
> >  /* Reverse the effect of vhost_get_vq_desc. Useful for error handling. */
> >  void vhost_discard_vq_desc(struct vhost_virtqueue *vq, int n)
> >  {
> >  	vq->last_avail_idx -= n;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_discard_vq_desc);
> >  
> >  /* After we've used one of their buffers, we tell them about it.  We'll then
> >   * want to notify the guest, using eventfd. */
> > @@ -1343,6 +1367,7 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
> >  		vq->signalled_used_valid = false;
> >  	return 0;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_add_used);
> >  
> >  static int __vhost_add_used_n(struct vhost_virtqueue *vq,
> >  			    struct vring_used_elem *heads,
> > @@ -1412,6 +1437,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
> >  	}
> >  	return r;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_add_used_n);
> >  
> >  static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> >  {
> > @@ -1456,6 +1482,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> >  	if (vq->call_ctx && vhost_notify(dev, vq))
> >  		eventfd_signal(vq->call_ctx, 1);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_signal);
> >  
> >  /* And here's the combo meal deal.  Supersize me! */
> >  void vhost_add_used_and_signal(struct vhost_dev *dev,
> > @@ -1465,6 +1492,7 @@ void vhost_add_used_and_signal(struct vhost_dev *dev,
> >  	vhost_add_used(vq, head, len);
> >  	vhost_signal(dev, vq);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
> >  
> >  /* multi-buffer version of vhost_add_used_and_signal */
> >  void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> > @@ -1474,6 +1502,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> >  	vhost_add_used_n(vq, heads, count);
> >  	vhost_signal(dev, vq);
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
> >  
> >  /* OK, now we need to know about added descriptors. */
> >  bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > @@ -1511,6 +1540,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> >  
> >  	return avail_idx != vq->avail_idx;
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_enable_notify);
> >  
> >  /* We don't need to be notified again. */
> >  void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > @@ -1527,3 +1557,22 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> >  			       &vq->used->flags, r);
> >  	}
> >  }
> > +EXPORT_SYMBOL_GPL(vhost_disable_notify);
> > +
> > +static int __init vhost_init(void)
> > +{
> > +	return 0;
> > +}
> > +
> > +static void __exit vhost_exit(void)
> > +{
> > +	return;
> > +}
> > +
> > +module_init(vhost_init);
> > +module_exit(vhost_exit);
> > +
> > +MODULE_VERSION("0.0.1");
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_AUTHOR("Michael S. Tsirkin");
> > +MODULE_DESCRIPTION("Host kernel accelerator for virtio");
> > diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> > index 6bf81a9..94a80eb 100644
> > --- a/drivers/vhost/vhost.h
> > +++ b/drivers/vhost/vhost.h
> > @@ -46,6 +46,8 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file);
> >  void vhost_poll_stop(struct vhost_poll *poll);
> >  void vhost_poll_flush(struct vhost_poll *poll);
> >  void vhost_poll_queue(struct vhost_poll *poll);
> > +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> > +long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
> >  
> >  struct vhost_log {
> >  	u64 addr;
> > -- 
> > 1.8.1.4

-- 
Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 00/11] vhost cleanups
  2013-05-06 12:05   ` Asias He
@ 2013-05-06 13:15     ` Michael S. Tsirkin
  2013-05-06 13:19       ` Asias He
  0 siblings, 1 reply; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-05-06 13:15 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 08:05:26PM +0800, Asias He wrote:
> On Mon, May 06, 2013 at 01:07:46PM +0300, Michael S. Tsirkin wrote:
> > On Mon, May 06, 2013 at 04:38:18PM +0800, Asias He wrote:
> > > MST, This is on top of [PATCH 0/2] vhost-net fix ubuf.
> > 
> > Okay, how about making EVENT_IDX work for virtio-scsi?
> > I'm guessing it's some messup with feature negotiation,
> > that's what all event-idx bugs came down to so far.
> 
> Yes, IIRC, EVENT_IDX works for vhost-scsi now. Will cook a patch to
> enable it. It should go into 3.10, right?

If it's early in the cycle, I think it can.
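
(For context on the event-idx point above: whether vhost signals the
guest comes down to the negotiated feature bits. A simplified sketch of
the decision, modelled on vhost_notify() in drivers/vhost/vhost.c; this
is not the verbatim code, and error handling is reduced to "signal
anyway".)

	/* old/new: used index before and after the batch just completed */
	static bool notify_sketch(struct vhost_dev *dev,
				  struct vhost_virtqueue *vq,
				  __u16 old, __u16 new)
	{
		__u16 event, flags;

		if (!vhost_has_feature(dev, VIRTIO_RING_F_EVENT_IDX)) {
			/* legacy path: guest sets a flag to suppress interrupts */
			if (__get_user(flags, &vq->avail->flags))
				return true;
			return !(flags & VRING_AVAIL_F_NO_INTERRUPT);
		}
		/* EVENT_IDX: guest publishes the used index at which it wants
		 * the next interrupt; signal only when we cross it */
		if (__get_user(event, vhost_used_event(vq)))
			return true;
		return vring_need_event(event, new, old);
	}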

> > > Asias He (11):
> > >   vhost: Remove vhost_enable_zcopy in vhost.h
> > >   vhost: Move VHOST_NET_FEATURES to net.c
> > >   vhost: Make vhost a separate module
> > >   vhost: Remove comments for hdr in vhost.h
> > >   vhost: Simplify dev->vqs[i] access
> > >   vhost-net: Cleanup vhost_ubuf and vhost_zcopy
> > >   vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
> > >   vhost-scsi: Rename struct vhost_scsi *s to *vs
> > >   vhost-scsi: Make func indention more consistent
> > >   vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
> > >   vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
> > > 
> > >  drivers/vhost/Kconfig  |   8 +
> > >  drivers/vhost/Makefile |   3 +-
> > >  drivers/vhost/net.c    |  64 ++++---
> > >  drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
> > >  drivers/vhost/vhost.c  |  86 +++++++--
> > >  drivers/vhost/vhost.h  |  11 +-
> > >  6 files changed, 361 insertions(+), 281 deletions(-)
> > > 
> > > -- 
> > > 1.8.1.4
> 
> -- 
> Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 00/11] vhost cleanups
  2013-05-06 13:15     ` Michael S. Tsirkin
@ 2013-05-06 13:19       ` Asias He
  0 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-05-06 13:19 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 04:15:35PM +0300, Michael S. Tsirkin wrote:
> On Mon, May 06, 2013 at 08:05:26PM +0800, Asias He wrote:
> > On Mon, May 06, 2013 at 01:07:46PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, May 06, 2013 at 04:38:18PM +0800, Asias He wrote:
> > > > MST, This is on top of [PATCH 0/2] vhost-net fix ubuf.
> > > 
> > > Okay, how about making EVENT_IDX work for virtio-scsi?
> > > I'm guessing it's some mess-up with feature negotiation,
> > > that's what all event-idx bugs came down to so far.
> > 
> > Yes, IIRC, EVENT_IDX works for vhost-scsi now. Will cook a patch to
> > enable it. It should go into 3.10, right?
> 
> If it's early in the cycle, I think it can.

Well, let's queue it for 3.11.
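
(At the API level, "enable it" means the device module advertises the
bit in its feature mask and userspace acks it through the usual
handshake. A simplified sketch of the device side of that handshake,
modelled on the ioctl handler in drivers/vhost/scsi.c; this is a
fragment only, with 'vs' and 'argp' as in that file:)

	u64 features;
	u64 __user *featurep = argp;

	switch (ioctl) {
	case VHOST_GET_FEATURES:
		/* the advertised mask would carry VIRTIO_RING_F_EVENT_IDX */
		features = VHOST_SCSI_FEATURES;
		if (copy_to_user(featurep, &features, sizeof features))
			return -EFAULT;
		return 0;
	case VHOST_SET_FEATURES:
		if (copy_from_user(&features, featurep, sizeof features))
			return -EFAULT;
		if (features & ~VHOST_SCSI_FEATURES)
			return -EOPNOTSUPP;
		return vhost_scsi_set_features(vs, features);
	}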

> > > > Asias He (11):
> > > >   vhost: Remove vhost_enable_zcopy in vhost.h
> > > >   vhost: Move VHOST_NET_FEATURES to net.c
> > > >   vhost: Make vhost a separate module
> > > >   vhost: Remove comments for hdr in vhost.h
> > > >   vhost: Simplify dev->vqs[i] access
> > > >   vhost-net: Cleanup vhost_ubuf and vhost_zcopy
> > > >   vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
> > > >   vhost-scsi: Rename struct vhost_scsi *s to *vs
> > > >   vhost-scsi: Make func indention more consistent
> > > >   vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg
> > > >   vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd
> > > > 
> > > >  drivers/vhost/Kconfig  |   8 +
> > > >  drivers/vhost/Makefile |   3 +-
> > > >  drivers/vhost/net.c    |  64 ++++---
> > > >  drivers/vhost/scsi.c   | 470 ++++++++++++++++++++++++++-----------------------
> > > >  drivers/vhost/vhost.c  |  86 +++++++--
> > > >  drivers/vhost/vhost.h  |  11 +-
> > > >  6 files changed, 361 insertions(+), 281 deletions(-)
> > > > 
> > > > -- 
> > > > 1.8.1.4
> > 
> > -- 
> > Asias

-- 
Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-05-06 12:10     ` Asias He
@ 2013-07-07 11:37       ` Michael S. Tsirkin
  2013-07-07 14:40         ` Michael S. Tsirkin
  0 siblings, 1 reply; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-07-07 11:37 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Mon, May 06, 2013 at 08:10:03PM +0800, Asias He wrote:
> On Mon, May 06, 2013 at 01:03:42PM +0300, Michael S. Tsirkin wrote:
> > On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> > > Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> > > However, vhost-scsi shares the code by including the vhost.c file
> > > directly.
> > > 
> > > Making vhost a separate module makes it easier to share code with
> > > other vhost devices.
> > > 
> > > Signed-off-by: Asias He <asias@redhat.com>
> > 
> > Also this will break test.c, right? Let's fix it in the same
> > commit too.
> 
> I will fix it up and remove the useless 'return'.

Don't see v3 anywhere?
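
(To make the split concrete: once vhost.c is built as its own vhost.ko
with the EXPORT_SYMBOL_GPL markers below, a device module simply calls
the exported helpers and modpost records the module dependency, while
"select VHOST" in Kconfig pulls the core in automatically. A
hypothetical minimal consumer, using only symbols this patch exports;
the demo_* names are made up for illustration:)

	#include <linux/module.h>
	#include "vhost.h"

	static struct vhost_work demo_work;

	static void demo_work_fn(struct vhost_work *work)
	{
		/* device-specific processing runs in the vhost worker thread */
	}

	/* e.g. called from the device's kick or ioctl path */
	static void demo_kick(struct vhost_dev *dev)
	{
		vhost_work_init(&demo_work, demo_work_fn); /* exported by vhost.ko */
		vhost_work_queue(dev, &demo_work);	/* wakes dev's worker */
	}

	MODULE_LICENSE("GPL");	/* needed to resolve the _GPL exports */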

> > > ---
> > >  drivers/vhost/Kconfig  |  8 ++++++++
> > >  drivers/vhost/Makefile |  3 ++-
> > >  drivers/vhost/scsi.c   |  1 -
> > >  drivers/vhost/vhost.c  | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
> > >  drivers/vhost/vhost.h  |  2 ++
> > >  5 files changed, 62 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > index 8b9226d..017a1e8 100644
> > > --- a/drivers/vhost/Kconfig
> > > +++ b/drivers/vhost/Kconfig
> > > @@ -1,6 +1,7 @@
> > >  config VHOST_NET
> > >  	tristate "Host kernel accelerator for virtio net"
> > >  	depends on NET && EVENTFD && (TUN || !TUN) && (MACVTAP || !MACVTAP)
> > > +	select VHOST
> > >  	select VHOST_RING
> > >  	---help---
> > >  	  This kernel module can be loaded in host kernel to accelerate
> > > @@ -13,6 +14,7 @@ config VHOST_NET
> > >  config VHOST_SCSI
> > >  	tristate "VHOST_SCSI TCM fabric driver"
> > >  	depends on TARGET_CORE && EVENTFD && m
> > > +	select VHOST
> > >  	select VHOST_RING
> > >  	default n
> > >  	---help---
> > > @@ -24,3 +26,9 @@ config VHOST_RING
> > >  	---help---
> > >  	  This option is selected by any driver which needs to access
> > >  	  the host side of a virtio ring.
> > > +
> > > +config VHOST
> > > +	tristate
> > > +	---help---
> > > +	  This option is selected by any driver which needs to access
> > > +	  the core of vhost.
> > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > index 654e9afb..e0441c3 100644
> > > --- a/drivers/vhost/Makefile
> > > +++ b/drivers/vhost/Makefile
> > > @@ -1,7 +1,8 @@
> > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > -vhost_net-y := vhost.o net.o
> > > +vhost_net-y := net.o
> > >  
> > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > >  vhost_scsi-y := scsi.o
> > >  
> > >  obj-$(CONFIG_VHOST_RING) += vringh.o
> > > +obj-$(CONFIG_VHOST)	+= vhost.o
> > > diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> > > index 5179f7a..2dcb94a 100644
> > > --- a/drivers/vhost/scsi.c
> > > +++ b/drivers/vhost/scsi.c
> > > @@ -49,7 +49,6 @@
> > >  #include <linux/llist.h>
> > >  #include <linux/bitmap.h>
> > >  
> > > -#include "vhost.c"
> > >  #include "vhost.h"
> > >  
> > >  #define TCM_VHOST_VERSION  "v0.1"
> > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > index de9441a..e406d5f 100644
> > > --- a/drivers/vhost/vhost.c
> > > +++ b/drivers/vhost/vhost.c
> > > @@ -25,6 +25,7 @@
> > >  #include <linux/slab.h>
> > >  #include <linux/kthread.h>
> > >  #include <linux/cgroup.h>
> > > +#include <linux/module.h>
> > >  
> > >  #include "vhost.h"
> > >  
> > > @@ -66,6 +67,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
> > >  	work->flushing = 0;
> > >  	work->queue_seq = work->done_seq = 0;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_work_init);
> > >  
> > >  /* Init poll structure */
> > >  void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> > > @@ -79,6 +81,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> > >  
> > >  	vhost_work_init(&poll->work, fn);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_poll_init);
> > >  
> > >  /* Start polling a file. We add ourselves to file's wait queue. The caller must
> > >   * keep a reference to a file until after vhost_poll_stop is called. */
> > > @@ -101,6 +104,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
> > >  
> > >  	return ret;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_poll_start);
> > >  
> > >  /* Stop polling a file. After this function returns, it becomes safe to drop the
> > >   * file reference. You must also flush afterwards. */
> > > @@ -111,6 +115,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
> > >  		poll->wqh = NULL;
> > >  	}
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_poll_stop);
> > >  
> > >  static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
> > >  				unsigned seq)
> > > @@ -123,7 +128,7 @@ static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
> > >  	return left <= 0;
> > >  }
> > >  
> > > -static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> > > +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> > >  {
> > >  	unsigned seq;
> > >  	int flushing;
> > > @@ -138,6 +143,7 @@ static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> > >  	spin_unlock_irq(&dev->work_lock);
> > >  	BUG_ON(flushing < 0);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_work_flush);
> > >  
> > >  /* Flush any work that has been scheduled. When calling this, don't hold any
> > >   * locks that are also used by the callback. */
> > > @@ -145,6 +151,7 @@ void vhost_poll_flush(struct vhost_poll *poll)
> > >  {
> > >  	vhost_work_flush(poll->dev, &poll->work);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_poll_flush);
> > >  
> > >  void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> > >  {
> > > @@ -158,11 +165,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> > >  	}
> > >  	spin_unlock_irqrestore(&dev->work_lock, flags);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_work_queue);
> > >  
> > >  void vhost_poll_queue(struct vhost_poll *poll)
> > >  {
> > >  	vhost_work_queue(poll->dev, &poll->work);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_poll_queue);
> > >  
> > >  static void vhost_vq_reset(struct vhost_dev *dev,
> > >  			   struct vhost_virtqueue *vq)
> > > @@ -310,6 +319,7 @@ long vhost_dev_init(struct vhost_dev *dev,
> > >  
> > >  	return 0;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_init);
> > >  
> > >  /* Caller should have device mutex */
> > >  long vhost_dev_check_owner(struct vhost_dev *dev)
> > > @@ -317,6 +327,7 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
> > >  	/* Are you the owner? If not, I don't think you mean to do that */
> > >  	return dev->mm == current->mm ? 0 : -EPERM;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
> > >  
> > >  struct vhost_attach_cgroups_struct {
> > >  	struct vhost_work work;
> > > @@ -385,11 +396,13 @@ err_worker:
> > >  err_mm:
> > >  	return err;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_set_owner);
> > >  
> > >  struct vhost_memory *vhost_dev_reset_owner_prepare(void)
> > >  {
> > >  	return kmalloc(offsetof(struct vhost_memory, regions), GFP_KERNEL);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner_prepare);
> > >  
> > >  /* Caller should have device mutex */
> > >  void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> > > @@ -400,6 +413,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_memory *memory)
> > >  	memory->nregions = 0;
> > >  	RCU_INIT_POINTER(dev->memory, memory);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_reset_owner);
> > >  
> > >  void vhost_dev_stop(struct vhost_dev *dev)
> > >  {
> > > @@ -412,6 +426,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
> > >  		}
> > >  	}
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_stop);
> > >  
> > >  /* Caller should have device mutex if and only if locked is set */
> > >  void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> > > @@ -452,6 +467,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev, bool locked)
> > >  		mmput(dev->mm);
> > >  	dev->mm = NULL;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
> > >  
> > >  static int log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
> > >  {
> > > @@ -537,6 +553,7 @@ int vhost_log_access_ok(struct vhost_dev *dev)
> > >  				       lockdep_is_held(&dev->mutex));
> > >  	return memory_access_ok(dev, mp, 1);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_log_access_ok);
> > >  
> > >  /* Verify access for write logging. */
> > >  /* Caller should have vq mutex and device mutex */
> > > @@ -562,6 +579,7 @@ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
> > >  	return vq_access_ok(vq->dev, vq->num, vq->desc, vq->avail, vq->used) &&
> > >  		vq_log_access_ok(vq->dev, vq, vq->log_base);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
> > >  
> > >  static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
> > >  {
> > > @@ -791,6 +809,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
> > >  		vhost_poll_flush(&vq->poll);
> > >  	return r;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
> > >  
> > >  /* Caller must have device mutex */
> > >  long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> > > @@ -871,6 +890,7 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> > >  done:
> > >  	return r;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
> > >  
> > >  static const struct vhost_memory_region *find_region(struct vhost_memory *mem,
> > >  						     __u64 addr, __u32 len)
> > > @@ -962,6 +982,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
> > >  	BUG();
> > >  	return 0;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_log_write);
> > >  
> > >  static int vhost_update_used_flags(struct vhost_virtqueue *vq)
> > >  {
> > > @@ -1013,6 +1034,7 @@ int vhost_init_used(struct vhost_virtqueue *vq)
> > >  	vq->signalled_used_valid = false;
> > >  	return get_user(vq->last_used_idx, &vq->used->idx);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_init_used);
> > >  
> > >  static int translate_desc(struct vhost_dev *dev, u64 addr, u32 len,
> > >  			  struct iovec iov[], int iov_size)
> > > @@ -1289,12 +1311,14 @@ int vhost_get_vq_desc(struct vhost_dev *dev, struct vhost_virtqueue *vq,
> > >  	BUG_ON(!(vq->used_flags & VRING_USED_F_NO_NOTIFY));
> > >  	return head;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_get_vq_desc);
> > >  
> > >  /* Reverse the effect of vhost_get_vq_desc. Useful for error handling. */
> > >  void vhost_discard_vq_desc(struct vhost_virtqueue *vq, int n)
> > >  {
> > >  	vq->last_avail_idx -= n;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_discard_vq_desc);
> > >  
> > >  /* After we've used one of their buffers, we tell them about it.  We'll then
> > >   * want to notify the guest, using eventfd. */
> > > @@ -1343,6 +1367,7 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
> > >  		vq->signalled_used_valid = false;
> > >  	return 0;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_add_used);
> > >  
> > >  static int __vhost_add_used_n(struct vhost_virtqueue *vq,
> > >  			    struct vring_used_elem *heads,
> > > @@ -1412,6 +1437,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
> > >  	}
> > >  	return r;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_add_used_n);
> > >  
> > >  static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > >  {
> > > @@ -1456,6 +1482,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > >  	if (vq->call_ctx && vhost_notify(dev, vq))
> > >  		eventfd_signal(vq->call_ctx, 1);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_signal);
> > >  
> > >  /* And here's the combo meal deal.  Supersize me! */
> > >  void vhost_add_used_and_signal(struct vhost_dev *dev,
> > > @@ -1465,6 +1492,7 @@ void vhost_add_used_and_signal(struct vhost_dev *dev,
> > >  	vhost_add_used(vq, head, len);
> > >  	vhost_signal(dev, vq);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
> > >  
> > >  /* multi-buffer version of vhost_add_used_and_signal */
> > >  void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> > > @@ -1474,6 +1502,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> > >  	vhost_add_used_n(vq, heads, count);
> > >  	vhost_signal(dev, vq);
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
> > >  
> > >  /* OK, now we need to know about added descriptors. */
> > >  bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > > @@ -1511,6 +1540,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > >  
> > >  	return avail_idx != vq->avail_idx;
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_enable_notify);
> > >  
> > >  /* We don't need to be notified again. */
> > >  void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > > @@ -1527,3 +1557,22 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > >  			       &vq->used->flags, r);
> > >  	}
> > >  }
> > > +EXPORT_SYMBOL_GPL(vhost_disable_notify);
> > > +
> > > +static int __init vhost_init(void)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +static void __exit vhost_exit(void)
> > > +{
> > > +	return;
> > > +}
> > > +
> > > +module_init(vhost_init);
> > > +module_exit(vhost_exit);
> > > +
> > > +MODULE_VERSION("0.0.1");
> > > +MODULE_LICENSE("GPL v2");
> > > +MODULE_AUTHOR("Michael S. Tsirkin");
> > > +MODULE_DESCRIPTION("Host kernel accelerator for virtio");
> > > diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> > > index 6bf81a9..94a80eb 100644
> > > --- a/drivers/vhost/vhost.h
> > > +++ b/drivers/vhost/vhost.h
> > > @@ -46,6 +46,8 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file);
> > >  void vhost_poll_stop(struct vhost_poll *poll);
> > >  void vhost_poll_flush(struct vhost_poll *poll);
> > >  void vhost_poll_queue(struct vhost_poll *poll);
> > > +void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> > > +long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
> > >  
> > >  struct vhost_log {
> > >  	u64 addr;
> > > -- 
> > > 1.8.1.4
> 
> -- 
> Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-07-07 11:37       ` Michael S. Tsirkin
@ 2013-07-07 14:40         ` Michael S. Tsirkin
  2013-07-10  2:02           ` Asias He
  0 siblings, 1 reply; 33+ messages in thread
From: Michael S. Tsirkin @ 2013-07-07 14:40 UTC (permalink / raw)
  To: Asias He; +Cc: target-devel, kvm, virtualization

On Sun, Jul 07, 2013 at 02:37:10PM +0300, Michael S. Tsirkin wrote:
> On Mon, May 06, 2013 at 08:10:03PM +0800, Asias He wrote:
> > On Mon, May 06, 2013 at 01:03:42PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> > > > Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> > > > However, vhost-scsi shares the code by including the vhost.c file
> > > > directly.
> > > > 
> > > > Making vhost a separate module makes it easier to share code with
> > > > other vhost devices.
> > > > 
> > > > Signed-off-by: Asias He <asias@redhat.com>
> > > 
> > > Also this will break test.c, right? Let's fix it in the same
> > > commit too.
> > 
> > I will fix it up and remove the useless 'return'.
> 
> Don't see v3 anywhere?

I did these tweaks; you can see the result on the vhost
branch in my tree.

> > > > [full patch quoted verbatim upthread; snipped]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/11] vhost: Make vhost a separate module
  2013-07-07 14:40         ` Michael S. Tsirkin
@ 2013-07-10  2:02           ` Asias He
  0 siblings, 0 replies; 33+ messages in thread
From: Asias He @ 2013-07-10  2:02 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: target-devel, kvm, virtualization

On Sun, Jul 07, 2013 at 05:40:51PM +0300, Michael S. Tsirkin wrote:
> On Sun, Jul 07, 2013 at 02:37:10PM +0300, Michael S. Tsirkin wrote:
> > On Mon, May 06, 2013 at 08:10:03PM +0800, Asias He wrote:
> > > On Mon, May 06, 2013 at 01:03:42PM +0300, Michael S. Tsirkin wrote:
> > > > On Mon, May 06, 2013 at 04:38:21PM +0800, Asias He wrote:
> > > > > Currently, vhost-net and vhost-scsi are sharing the vhost core code.
> > > > > However, vhost-scsi shares the code by including the vhost.c file
> > > > > directly.
> > > > > 
> > > > > Making vhost a separate module makes it easier to share code with
> > > > > other vhost devices.
> > > > > 
> > > > > Signed-off-by: Asias He <asias@redhat.com>
> > > > 
> > > > Also this will break test.c, right? Let's fix it in the same
> > > > commit too.
> > > 
> > > I will fix it up and remove the useless 'return'.
> > 
> > Don't see v3 anywhere?

The fix for vhost/test.c was in flight; see '[PATCH v2] vhost-test: Make
vhost/test.c work'.
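
(A hypothetical rendering of that fix's core, by analogy with the
scsi.c hunk in this patch and assuming test.c used the same textual
inclusion; the posting referenced above is the authority for the real
change:)

	--- a/drivers/vhost/test.c
	+++ b/drivers/vhost/test.c
	@@
	-#include "vhost.c"
	 #include "vhost.h"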

> I did these tweaks; you can see the result on the vhost
> branch in my tree.

Thanks! /me just came back from vacation.

> > > > > [full patch quoted verbatim upthread; snipped]

-- 
Asias

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2013-07-10  2:02 UTC | newest]

Thread overview: 33+ messages
2013-05-06  8:38 [PATCH v2 00/11] vhost cleanups Asias He
2013-05-06  8:38 ` [PATCH v2 01/11] vhost: Remove vhost_enable_zcopy in vhost.h Asias He
2013-05-06  8:38 ` [PATCH v2 02/11] vhost: Move VHOST_NET_FEATURES to net.c Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 03/11] vhost: Make vhost a separate module Asias He
2013-05-06  9:53   ` Michael S. Tsirkin
2013-05-06 10:03   ` Michael S. Tsirkin
2013-05-06 12:10     ` Asias He
2013-07-07 11:37       ` Michael S. Tsirkin
2013-07-07 14:40         ` Michael S. Tsirkin
2013-07-10  2:02           ` Asias He
2013-05-06  8:38 ` [PATCH v2 04/11] vhost: Remove comments for hdr in vhost.h Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 05/11] vhost: Simplify dev->vqs[i] access Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 06/11] vhost-net: Cleanup vhost_ubuf and vhost_zcopy Asias He
2013-05-06 10:25   ` Michael S. Tsirkin
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 07/11] vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 08/11] vhost-scsi: Rename struct vhost_scsi *s to *vs Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 09/11] vhost-scsi: Make func indention more consistent Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 10/11] vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:38 ` [PATCH v2 11/11] vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd Asias He
2013-05-06  8:38 ` Asias He
2013-05-06  8:56 ` [PATCH v2 00/11] vhost cleanups Michael S. Tsirkin
2013-05-06 10:07 ` Michael S. Tsirkin
2013-05-06 12:05   ` Asias He
2013-05-06 13:15     ` Michael S. Tsirkin
2013-05-06 13:19       ` Asias He
