* [RFC PATCH] New Xen netback implementation
@ 2012-01-13 16:59 Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 1/6] netback: page pool version 1 Wei Liu
                   ` (5 more replies)
  0 siblings, 6 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev

A new netback implementation which includes three major features:

 - Global page pool support
 - NAPI + kthread 1:1 model
 - Netback internal name changes

This patch series is the foundation for future work, so it is better
to get it right first. Patches 1 and 3 contain the real meat.

The first benefit of the 1:1 model is scheduling fairness.

The rationale behind a global page pool is the need to limit the
overall RAM consumed by all vifs.

Using NAPI opens up the possibility of mitigating interrupts/events,
but this is not done yet.

The netback internal changes clean up the code structure after the
switch to the 1:1 model and prepare netback for further code layout
changes.


---
 drivers/net/xen-netback/Makefile    |    2 +-
 drivers/net/xen-netback/common.h    |   94 +++--
 drivers/net/xen-netback/interface.c |  115 ++++--
 drivers/net/xen-netback/netback.c   |  743 +++++++++++------------------------
 drivers/net/xen-netback/page_pool.c |  183 +++++++++
 drivers/net/xen-netback/page_pool.h |   61 +++
 drivers/net/xen-netback/xenbus.c    |    6 +-
 7 files changed, 620 insertions(+), 584 deletions(-)


* [RFC PATCH 1/6] netback: page pool version 1
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  2012-01-13 17:37   ` Konrad Rzeszutek Wilk
  2012-01-13 16:59 ` [RFC PATCH 2/6] netback: add module unload function Wei Liu
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

Add a global page pool. Since we are moving to the 1:1 model netback, it
is better to limit the total RAM consumed by all the vifs.

With this patch, each vif gets pages from the pool and puts them back
when it is finished with them.

The pool is only meant to be accessed via the exported interfaces.
Internals are subject to change as we discover new requirements for
the pool.

Current exported interfaces include:

page_pool_init: pool init
page_pool_destroy: pool destruction
page_pool_get: get a page from pool
page_pool_put: put page back to pool
is_in_pool: tell whether a page belongs to the pool
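
A minimal usage sketch, mirroring how netback.c uses the pool in the
hunks below (error handling trimmed; pending_idx and netbk come from the
surrounding netback code):

	int idx;
	struct page *page;

	/* One-time setup / teardown, done at module init / exit. */
	if (page_pool_init())
		return -ENOMEM;
	...
	page_pool_destroy();

	/* Allocate a page for a pending TX request and remember its index. */
	page = page_pool_get(netbk, &idx);
	if (!page)
		return NULL;
	netbk->mmap_pages[pending_idx] = idx;

	/* Release it once the request has completed. */
	page_pool_put(netbk->mmap_pages[pending_idx]);
	netbk->mmap_pages[pending_idx] = INVALID_ENTRY;

	/* On the RX path, detect a foreign (pool) page and find its owner. */
	if (is_in_pool(page, &idx))
		netbk = to_netbk(idx);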

The current implementation has the following defects:
 - Global locking
 - No starve prevention mechanism / reservation logic

Global locking tends to cause contention on the pool, and the lack of
reservation logic may allow a vif to starve. A possible solution to both
problems is for each vif to maintain a local cache and claim a portion
of the pool (a rough sketch of this idea follows). However, the
implementation gets tricky when it comes to pool management, so let's
worry about that later.
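
For illustration only, one possible shape of the per-vif cache mentioned
above (not part of this series; the names vif->cache, VIF_CACHE_SIZE and
vif_cache_get are made up for the sketch):

	struct vif_page_cache {
		int   reserved;               /* entries guaranteed to this vif   */
		int   count;                  /* entries currently cached locally */
		idx_t entries[VIF_CACHE_SIZE];
	};

	/* Fast path: take an entry from the local cache, and only fall
	 * back to the lock-protected global pool when the cache is empty. */
	static struct page *vif_cache_get(struct xenvif *vif, int *pidx)
	{
		struct vif_page_cache *cache = &vif->cache;

		if (cache->count > 0) {
			*pidx = cache->entries[--cache->count];
			return to_page(*pidx);
		}

		return page_pool_get(vif->netbk, pidx);
	}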

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/Makefile    |    2 +-
 drivers/net/xen-netback/netback.c   |   93 ++++--------------
 drivers/net/xen-netback/page_pool.c |  183 +++++++++++++++++++++++++++++++++++
 drivers/net/xen-netback/page_pool.h |   61 ++++++++++++
 4 files changed, 266 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/xen-netback/page_pool.c
 create mode 100644 drivers/net/xen-netback/page_pool.h

diff --git a/drivers/net/xen-netback/Makefile b/drivers/net/xen-netback/Makefile
index e346e81..dc4b8b1 100644
--- a/drivers/net/xen-netback/Makefile
+++ b/drivers/net/xen-netback/Makefile
@@ -1,3 +1,3 @@
 obj-$(CONFIG_XEN_NETDEV_BACKEND) := xen-netback.o
 
-xen-netback-y := netback.o xenbus.o interface.o
+xen-netback-y := netback.o xenbus.o interface.o page_pool.o
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 59effac..26af7b7 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -33,6 +33,7 @@
  */
 
 #include "common.h"
+#include "page_pool.h"
 
 #include <linux/kthread.h>
 #include <linux/if_vlan.h>
@@ -65,21 +66,6 @@ struct netbk_rx_meta {
 
 #define MAX_BUFFER_OFFSET PAGE_SIZE
 
-/* extra field used in struct page */
-union page_ext {
-	struct {
-#if BITS_PER_LONG < 64
-#define IDX_WIDTH   8
-#define GROUP_WIDTH (BITS_PER_LONG - IDX_WIDTH)
-		unsigned int group:GROUP_WIDTH;
-		unsigned int idx:IDX_WIDTH;
-#else
-		unsigned int group, idx;
-#endif
-	} e;
-	void *mapping;
-};
-
 struct xen_netbk {
 	wait_queue_head_t wq;
 	struct task_struct *task;
@@ -89,7 +75,7 @@ struct xen_netbk {
 
 	struct timer_list net_timer;
 
-	struct page *mmap_pages[MAX_PENDING_REQS];
+	idx_t mmap_pages[MAX_PENDING_REQS];
 
 	pending_ring_idx_t pending_prod;
 	pending_ring_idx_t pending_cons;
@@ -160,7 +146,7 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 static inline unsigned long idx_to_pfn(struct xen_netbk *netbk,
 				       u16 idx)
 {
-	return page_to_pfn(netbk->mmap_pages[idx]);
+	return page_to_pfn(to_page(netbk->mmap_pages[idx]));
 }
 
 static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
@@ -169,45 +155,6 @@ static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
 	return (unsigned long)pfn_to_kaddr(idx_to_pfn(netbk, idx));
 }
 
-/* extra field used in struct page */
-static inline void set_page_ext(struct page *pg, struct xen_netbk *netbk,
-				unsigned int idx)
-{
-	unsigned int group = netbk - xen_netbk;
-	union page_ext ext = { .e = { .group = group + 1, .idx = idx } };
-
-	BUILD_BUG_ON(sizeof(ext) > sizeof(ext.mapping));
-	pg->mapping = ext.mapping;
-}
-
-static int get_page_ext(struct page *pg,
-			unsigned int *pgroup, unsigned int *pidx)
-{
-	union page_ext ext = { .mapping = pg->mapping };
-	struct xen_netbk *netbk;
-	unsigned int group, idx;
-
-	group = ext.e.group - 1;
-
-	if (group < 0 || group >= xen_netbk_group_nr)
-		return 0;
-
-	netbk = &xen_netbk[group];
-
-	idx = ext.e.idx;
-
-	if ((idx < 0) || (idx >= MAX_PENDING_REQS))
-		return 0;
-
-	if (netbk->mmap_pages[idx] != pg)
-		return 0;
-
-	*pgroup = group;
-	*pidx = idx;
-
-	return 1;
-}
-
 /*
  * This is the amount of packet we copy rather than map, so that the
  * guest can't fiddle with the contents of the headers while we do
@@ -398,8 +345,8 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 	 * These variables are used iff get_page_ext returns true,
 	 * in which case they are guaranteed to be initialized.
 	 */
-	unsigned int uninitialized_var(group), uninitialized_var(idx);
-	int foreign = get_page_ext(page, &group, &idx);
+	unsigned int uninitialized_var(idx);
+	int foreign = is_in_pool(page, &idx);
 	unsigned long bytes;
 
 	/* Data must not cross a page boundary. */
@@ -427,7 +374,7 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop = npo->copy + npo->copy_prod++;
 		copy_gop->flags = GNTCOPY_dest_gref;
 		if (foreign) {
-			struct xen_netbk *netbk = &xen_netbk[group];
+			struct xen_netbk *netbk = to_netbk(idx);
 			struct pending_tx_info *src_pend;
 
 			src_pend = &netbk->pending_tx_info[idx];
@@ -906,11 +853,11 @@ static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk,
 					 u16 pending_idx)
 {
 	struct page *page;
-	page = alloc_page(GFP_KERNEL|__GFP_COLD);
+	int idx;
+	page = page_pool_get(netbk, &idx);
 	if (!page)
 		return NULL;
-	set_page_ext(page, netbk, pending_idx);
-	netbk->mmap_pages[pending_idx] = page;
+	netbk->mmap_pages[pending_idx] = idx;
 	return page;
 }
 
@@ -1053,7 +1000,7 @@ static void xen_netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb)
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xen_netbk_idx_release */
-		get_page(netbk->mmap_pages[pending_idx]);
+		get_page(to_page(netbk->mmap_pages[pending_idx]));
 		xen_netbk_idx_release(netbk, pending_idx);
 	}
 }
@@ -1482,7 +1429,7 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
 	pending_ring_idx_t index;
 
 	/* Already complete? */
-	if (netbk->mmap_pages[pending_idx] == NULL)
+	if (netbk->mmap_pages[pending_idx] == INVALID_ENTRY)
 		return;
 
 	pending_tx_info = &netbk->pending_tx_info[pending_idx];
@@ -1496,9 +1443,9 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
 
 	xenvif_put(vif);
 
-	netbk->mmap_pages[pending_idx]->mapping = 0;
-	put_page(netbk->mmap_pages[pending_idx]);
-	netbk->mmap_pages[pending_idx] = NULL;
+	page_pool_put(netbk->mmap_pages[pending_idx]);
+
+	netbk->mmap_pages[pending_idx] = INVALID_ENTRY;
 }
 
 static void make_tx_response(struct xenvif *vif,
@@ -1681,19 +1628,21 @@ static int __init netback_init(void)
 		wake_up_process(netbk->task);
 	}
 
-	rc = xenvif_xenbus_init();
+	rc = page_pool_init();
 	if (rc)
 		goto failed_init;
 
+	rc = xenvif_xenbus_init();
+	if (rc)
+		goto pool_failed_init;
+
 	return 0;
 
+pool_failed_init:
+	page_pool_destroy();
 failed_init:
 	while (--group >= 0) {
 		struct xen_netbk *netbk = &xen_netbk[group];
-		for (i = 0; i < MAX_PENDING_REQS; i++) {
-			if (netbk->mmap_pages[i])
-				__free_page(netbk->mmap_pages[i]);
-		}
 		del_timer(&netbk->net_timer);
 		kthread_stop(netbk->task);
 	}
diff --git a/drivers/net/xen-netback/page_pool.c b/drivers/net/xen-netback/page_pool.c
new file mode 100644
index 0000000..8904869
--- /dev/null
+++ b/drivers/net/xen-netback/page_pool.c
@@ -0,0 +1,183 @@
+/*
+ * Global page pool for netback.
+ *
+ * Wei Liu <wei.liu2@citrix.com>
+ * Copyright (c) Citrix Systems
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include "common.h"
+#include "page_pool.h"
+#include <asm/xen/page.h>
+
+static idx_t free_head;
+static int free_count;
+static unsigned long pool_size;
+static DEFINE_SPINLOCK(pool_lock);
+static struct page_pool_entry *pool;
+
+static int get_free_entry(void)
+{
+	unsigned long flag;
+	int idx;
+
+	spin_lock_irqsave(&pool_lock, flag);
+
+	if (free_count == 0) {
+		spin_unlock_irqrestore(&pool_lock, flag);
+		return -ENOSPC;
+	}
+
+	idx = free_head;
+	free_count--;
+	free_head = pool[idx].u.fl;
+	pool[idx].u.fl = INVALID_ENTRY;
+
+	spin_unlock_irqrestore(&pool_lock, flag);
+
+	return idx;
+}
+
+static void put_free_entry(idx_t idx)
+{
+	unsigned long flag;
+
+	spin_lock_irqsave(&pool_lock, flag);
+
+	pool[idx].u.fl = free_head;
+	free_head = idx;
+	free_count++;
+
+	spin_unlock_irqrestore(&pool_lock, flag);
+}
+
+static inline void set_page_ext(struct page *pg, unsigned int idx)
+{
+	union page_ext ext = { .idx = idx };
+
+	BUILD_BUG_ON(sizeof(ext) > sizeof(ext.mapping));
+	pg->mapping = ext.mapping;
+}
+
+static int get_page_ext(struct page *pg, unsigned int *pidx)
+{
+	union page_ext ext = { .mapping = pg->mapping };
+	int idx;
+
+	idx = ext.idx;
+
+	if ((idx < 0) || (idx >= pool_size))
+		return 0;
+
+	if (pool[idx].page != pg)
+		return 0;
+
+	*pidx = idx;
+
+	return 1;
+}
+
+int is_in_pool(struct page *page, int *pidx)
+{
+	return get_page_ext(page, pidx);
+}
+
+struct page *page_pool_get(struct xen_netbk *netbk, int *pidx)
+{
+	int idx;
+	struct page *page;
+
+	idx = get_free_entry();
+	if (idx < 0)
+		return NULL;
+	page = alloc_page(GFP_ATOMIC);
+
+	if (page == NULL) {
+		put_free_entry(idx);
+		return NULL;
+	}
+
+	set_page_ext(page, idx);
+	pool[idx].u.netbk = netbk;
+	pool[idx].page = page;
+
+	*pidx = idx;
+
+	return page;
+}
+
+void page_pool_put(int idx)
+{
+	struct page *page = pool[idx].page;
+
+	pool[idx].page = NULL;
+	pool[idx].u.netbk = NULL;
+	page->mapping = 0;
+	put_page(page);
+	put_free_entry(idx);
+}
+
+int page_pool_init()
+{
+	int cpus = 0;
+	int i;
+
+	cpus = num_online_cpus();
+	pool_size = cpus * ENTRIES_PER_CPU;
+
+	pool = vzalloc(sizeof(struct page_pool_entry) * pool_size);
+
+	if (!pool)
+		return -ENOMEM;
+
+	for (i = 0; i < pool_size - 1; i++)
+		pool[i].u.fl = i+1;
+	pool[pool_size-1].u.fl = INVALID_ENTRY;
+	free_count = pool_size;
+	free_head = 0;
+
+	return 0;
+}
+
+void page_pool_destroy()
+{
+	int i;
+	for (i = 0; i < pool_size; i++)
+		if (pool[i].page)
+			put_page(pool[i].page);
+
+	vfree(pool);
+}
+
+struct page *to_page(int idx)
+{
+	return pool[idx].page;
+}
+
+struct xen_netbk *to_netbk(int idx)
+{
+	return pool[idx].u.netbk;
+}
diff --git a/drivers/net/xen-netback/page_pool.h b/drivers/net/xen-netback/page_pool.h
new file mode 100644
index 0000000..52a6fc7
--- /dev/null
+++ b/drivers/net/xen-netback/page_pool.h
@@ -0,0 +1,61 @@
+/*
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef __PAGE_POOL_H__
+#define __PAGE_POOL_H__
+
+#include "common.h"
+
+typedef uint32_t idx_t;
+
+#define ENTRIES_PER_CPU (1024)
+#define INVALID_ENTRY 0xffffffff
+
+struct page_pool_entry {
+	struct page *page;
+	union {
+		struct xen_netbk *netbk;
+		idx_t             fl;
+	} u;
+};
+
+union page_ext {
+	idx_t idx;
+	void *mapping;
+};
+
+int  page_pool_init(void);
+void page_pool_destroy(void);
+
+
+struct page *page_pool_get(struct xen_netbk *netbk, int *pidx);
+void         page_pool_put(int idx);
+int          is_in_pool(struct page *page, int *pidx);
+
+struct page      *to_page(int idx);
+struct xen_netbk *to_netbk(int idx);
+
+#endif /* __PAGE_POOL_H__ */
-- 
1.7.2.5


* [RFC PATCH 2/6] netback: add module unload function.
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 1/6] netback: page pool version 1 Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  2012-01-13 17:57   ` [Xen-devel] " David Vrabel
  2012-01-13 18:47   ` David Vrabel
  2012-01-13 16:59 ` [RFC PATCH 3/6] netback: switch to NAPI + kthread model Wei Liu
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

Enable users to unload the netback module.
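
With this applied the module can be removed in the usual way, e.g.:

	# rmmod xen-netback

Note that nothing here prevents unloading while a vif is still connected;
that is addressed by the module get/put patch later in this series.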

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/common.h  |    1 +
 drivers/net/xen-netback/netback.c |   14 ++++++++++++++
 drivers/net/xen-netback/xenbus.c  |    5 +++++
 3 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 94b79c3..263df73 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -120,6 +120,7 @@ void xenvif_get(struct xenvif *vif);
 void xenvif_put(struct xenvif *vif);
 
 int xenvif_xenbus_init(void);
+void xenvif_xenbus_exit(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 26af7b7..dd10c0d 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1653,5 +1653,19 @@ failed_init:
 
 module_init(netback_init);
 
+static void __exit netback_exit(void)
+{
+	int i;
+	for (i = 0; i < xen_netbk_group_nr; i++) {
+		struct xen_netbk *netbk = &xen_netbk[i];
+		del_timer(&netbk->net_timer);
+		kthread_stop(netbk->task);
+	}
+	vfree(xen_netbk);
+	page_pool_destroy();
+	xenvif_xenbus_exit();
+}
+module_exit(netback_exit);
+
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("xen-backend:vif");
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 410018c..65d14f2 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -485,3 +485,8 @@ int xenvif_xenbus_init(void)
 {
 	return xenbus_register_backend(&netback_driver);
 }
+
+void xenvif_xenbus_exit(void)
+{
+	return xenbus_unregister_driver(&netback_driver);
+}
-- 
1.7.2.5


* [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 1/6] netback: page pool version 1 Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 2/6] netback: add module unload function Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  2012-01-13 18:21     ` David Vrabel
  2012-01-16 10:14     ` Paul Durrant
  2012-01-13 16:59 ` [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect Wei Liu
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

This patch implements the 1:1 model netback. We utilize NAPI and a
kernel thread to do the heavy lifting:

  - NAPI is used for guest side TX (host side RX)
  - kthread is used for guest side RX (host side TX)

This model provides better scheduling fairness among vifs. It also
lays the foundation for future work.

The major defect of the current implementation is that the NAPI poll
handler does not actually disable the interrupt. The Xen machinery is
different from real hardware and requires some additional tuning of the
ring macros.
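
For orientation, the core of the split looks as follows (simplified from
the full implementation in the diff below):

	/* Guest TX (host RX) is driven by a per-vif NAPI instance. */
	static int xenvif_poll(struct napi_struct *napi, int budget)
	{
		struct xenvif *vif = container_of(napi, struct xenvif, napi);
		int work_done = 0;

		xen_netbk_tx_action(vif->netbk, &work_done, budget);

		if (work_done < budget) {
			/* complete NAPI unless the ring has more requests */
		}

		return work_done;
	}

	/* Guest RX (host TX) is handled by a dedicated "vif%d.%d" kthread. */
	int xen_netbk_kthread(void *data)
	{
		struct xenvif *vif = data;

		while (!kthread_should_stop()) {
			wait_event_interruptible(vif->wq,
						 rx_work_todo(vif->netbk) ||
						 kthread_should_stop());

			if (rx_work_todo(vif->netbk))
				xen_netbk_rx_action(vif->netbk);
		}

		return 0;
	}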

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/common.h    |   33 ++--
 drivers/net/xen-netback/interface.c |   92 +++++++---
 drivers/net/xen-netback/netback.c   |  363 ++++++++++-------------------------
 drivers/net/xen-netback/xenbus.c    |    1 -
 4 files changed, 183 insertions(+), 306 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 263df73..1f6156d 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -55,14 +55,17 @@ struct xenvif {
 	/* Reference to netback processing backend. */
 	struct xen_netbk *netbk;
 
+	/* Use NAPI for guest TX */
+	struct napi_struct napi;
+	/* Use kthread for guest RX */
+	struct task_struct *task;
+	wait_queue_head_t wq;
+
 	u8               fe_dev_addr[6];
 
 	/* Physical parameters of the comms window. */
 	unsigned int     irq;
 
-	/* List of frontends to notify after a batch of frames sent. */
-	struct list_head notify_list;
-
 	/* The shared rings and indexes. */
 	struct xen_netif_tx_back_ring tx;
 	struct xen_netif_rx_back_ring rx;
@@ -93,11 +96,7 @@ struct xenvif {
 	unsigned long rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
-	struct list_head schedule_list;
-	atomic_t         refcnt;
 	struct net_device *dev;
-
-	wait_queue_head_t waiting_to_free;
 };
 
 static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
@@ -116,9 +115,6 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int evtchn);
 void xenvif_disconnect(struct xenvif *vif);
 
-void xenvif_get(struct xenvif *vif);
-void xenvif_put(struct xenvif *vif);
-
 int xenvif_xenbus_init(void);
 void xenvif_xenbus_exit(void);
 
@@ -134,14 +130,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
 				 grant_ref_t tx_ring_ref,
 				 grant_ref_t rx_ring_ref);
 
-/* (De)Register a xenvif with the netback backend. */
-void xen_netbk_add_xenvif(struct xenvif *vif);
-void xen_netbk_remove_xenvif(struct xenvif *vif);
-
-/* (De)Schedule backend processing for a xenvif */
-void xen_netbk_schedule_xenvif(struct xenvif *vif);
-void xen_netbk_deschedule_xenvif(struct xenvif *vif);
-
 /* Check for SKBs from frontend and schedule backend processing */
 void xen_netbk_check_rx_xenvif(struct xenvif *vif);
 /* Receive an SKB from the frontend */
@@ -155,4 +143,13 @@ void xenvif_notify_tx_completion(struct xenvif *vif);
 /* Returns number of ring slots required to send an skb to the frontend */
 unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
 
+/* Allocate and free xen_netbk structure */
+struct xen_netbk *xen_netbk_alloc_netbk(struct xenvif *vif);
+void xen_netbk_free_netbk(struct xen_netbk *netbk);
+
+void xen_netbk_tx_action(struct xen_netbk *netbk, int *work_done, int budget);
+void xen_netbk_rx_action(struct xen_netbk *netbk);
+
+int xen_netbk_kthread(void *data);
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 1825629..93cb212 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -30,6 +30,7 @@
 
 #include "common.h"
 
+#include <linux/kthread.h>
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
@@ -38,17 +39,7 @@
 #include <asm/xen/hypercall.h>
 
 #define XENVIF_QUEUE_LENGTH 32
-
-void xenvif_get(struct xenvif *vif)
-{
-	atomic_inc(&vif->refcnt);
-}
-
-void xenvif_put(struct xenvif *vif)
-{
-	if (atomic_dec_and_test(&vif->refcnt))
-		wake_up(&vif->waiting_to_free);
-}
+#define XENVIF_NAPI_WEIGHT  XENVIF_QUEUE_LENGTH
 
 int xenvif_schedulable(struct xenvif *vif)
 {
@@ -67,14 +58,37 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	if (vif->netbk == NULL)
 		return IRQ_NONE;
 
-	xen_netbk_schedule_xenvif(vif);
-
 	if (xenvif_rx_schedulable(vif))
 		netif_wake_queue(vif->dev);
 
+	if (likely(napi_schedule_prep(&vif->napi)))
+		__napi_schedule(&vif->napi);
+
 	return IRQ_HANDLED;
 }
 
+static int xenvif_poll(struct napi_struct *napi, int budget)
+{
+	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	int work_done = 0;
+
+	xen_netbk_tx_action(vif->netbk, &work_done, budget);
+
+	if (work_done < budget) {
+		int more_to_do = 0;
+		unsigned long flag;
+		local_irq_save(flag);
+
+		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		if (!more_to_do)
+			__napi_complete(napi);
+
+		local_irq_restore(flag);
+	}
+
+	return work_done;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
@@ -90,7 +104,6 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	/* Reserve ring slots for the worst-case number of fragments. */
 	vif->rx_req_cons_peek += xen_netbk_count_skb_slots(vif, skb);
-	xenvif_get(vif);
 
 	if (vif->can_queue && xen_netbk_must_stop_queue(vif))
 		netif_stop_queue(dev);
@@ -107,7 +120,7 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 void xenvif_receive_skb(struct xenvif *vif, struct sk_buff *skb)
 {
-	netif_rx_ni(skb);
+	netif_receive_skb(skb);
 }
 
 void xenvif_notify_tx_completion(struct xenvif *vif)
@@ -124,16 +137,15 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 
 static void xenvif_up(struct xenvif *vif)
 {
-	xen_netbk_add_xenvif(vif);
+	napi_enable(&vif->napi);
 	enable_irq(vif->irq);
 	xen_netbk_check_rx_xenvif(vif);
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
+	napi_disable(&vif->napi);
 	disable_irq(vif->irq);
-	xen_netbk_deschedule_xenvif(vif);
-	xen_netbk_remove_xenvif(vif);
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -259,14 +271,11 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif = netdev_priv(dev);
 	vif->domid  = domid;
 	vif->handle = handle;
-	vif->netbk  = NULL;
+	vif->netbk = NULL;
+
 	vif->can_sg = 1;
 	vif->csum = 1;
-	atomic_set(&vif->refcnt, 1);
-	init_waitqueue_head(&vif->waiting_to_free);
 	vif->dev = dev;
-	INIT_LIST_HEAD(&vif->schedule_list);
-	INIT_LIST_HEAD(&vif->notify_list);
 
 	vif->credit_bytes = vif->remaining_credit = ~0UL;
 	vif->credit_usec  = 0UL;
@@ -290,6 +299,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
+	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
+
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -324,7 +335,23 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	vif->irq = err;
 	disable_irq(vif->irq);
 
-	xenvif_get(vif);
+	vif->netbk = xen_netbk_alloc_netbk(vif);
+	if (!vif->netbk) {
+		pr_warn("Could not allocate xen_netbk\n");
+		err = -ENOMEM;
+		goto err_unbind;
+	}
+
+
+	init_waitqueue_head(&vif->wq);
+	vif->task = kthread_create(xen_netbk_kthread,
+				   (void *)vif,
+				   "vif%d.%d", vif->domid, vif->handle);
+	if (IS_ERR(vif->task)) {
+		pr_warn("Could not create kthread\n");
+		err = PTR_ERR(vif->task);
+		goto err_free_netbk;
+	}
 
 	rtnl_lock();
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
@@ -335,7 +362,13 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		xenvif_up(vif);
 	rtnl_unlock();
 
+	wake_up_process(vif->task);
+
 	return 0;
+err_free_netbk:
+	xen_netbk_free_netbk(vif->netbk);
+err_unbind:
+	unbind_from_irqhandler(vif->irq, vif);
 err_unmap:
 	xen_netbk_unmap_frontend_rings(vif);
 err:
@@ -345,17 +378,22 @@ err:
 void xenvif_disconnect(struct xenvif *vif)
 {
 	struct net_device *dev = vif->dev;
+
 	if (netif_carrier_ok(dev)) {
 		rtnl_lock();
 		netif_carrier_off(dev); /* discard queued packets */
 		if (netif_running(dev))
 			xenvif_down(vif);
 		rtnl_unlock();
-		xenvif_put(vif);
 	}
 
-	atomic_dec(&vif->refcnt);
-	wait_event(vif->waiting_to_free, atomic_read(&vif->refcnt) == 0);
+	if (vif->task)
+		kthread_stop(vif->task);
+
+	if (vif->netbk)
+		xen_netbk_free_netbk(vif->netbk);
+
+	netif_napi_del(&vif->napi);
 
 	del_timer_sync(&vif->credit_timeout);
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index dd10c0d..e486fd6 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -49,7 +49,6 @@
 
 struct pending_tx_info {
 	struct xen_netif_tx_request req;
-	struct xenvif *vif;
 };
 typedef unsigned int pending_ring_idx_t;
 
@@ -67,24 +66,16 @@ struct netbk_rx_meta {
 #define MAX_BUFFER_OFFSET PAGE_SIZE
 
 struct xen_netbk {
-	wait_queue_head_t wq;
-	struct task_struct *task;
-
 	struct sk_buff_head rx_queue;
 	struct sk_buff_head tx_queue;
 
-	struct timer_list net_timer;
-
 	idx_t mmap_pages[MAX_PENDING_REQS];
 
 	pending_ring_idx_t pending_prod;
 	pending_ring_idx_t pending_cons;
 	struct list_head net_schedule_list;
 
-	/* Protect the net_schedule_list in netif. */
-	spinlock_t net_schedule_list_lock;
-
-	atomic_t netfront_count;
+	struct xenvif *vif;
 
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
@@ -100,42 +91,14 @@ struct xen_netbk {
 	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
 };
 
-static struct xen_netbk *xen_netbk;
-static int xen_netbk_group_nr;
-
-void xen_netbk_add_xenvif(struct xenvif *vif)
-{
-	int i;
-	int min_netfront_count;
-	int min_group = 0;
-	struct xen_netbk *netbk;
-
-	min_netfront_count = atomic_read(&xen_netbk[0].netfront_count);
-	for (i = 0; i < xen_netbk_group_nr; i++) {
-		int netfront_count = atomic_read(&xen_netbk[i].netfront_count);
-		if (netfront_count < min_netfront_count) {
-			min_group = i;
-			min_netfront_count = netfront_count;
-		}
-	}
-
-	netbk = &xen_netbk[min_group];
-
-	vif->netbk = netbk;
-	atomic_inc(&netbk->netfront_count);
-}
-
-void xen_netbk_remove_xenvif(struct xenvif *vif)
-{
-	struct xen_netbk *netbk = vif->netbk;
-	vif->netbk = NULL;
-	atomic_dec(&netbk->netfront_count);
-}
-
 static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx);
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
+
+static inline int tx_work_todo(struct xen_netbk *netbk);
+static inline int rx_work_todo(struct xen_netbk *netbk);
+
 static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 					     u16      id,
 					     s8       st,
@@ -186,11 +149,6 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xen_netbk *netbk)
 		netbk->pending_prod + netbk->pending_cons;
 }
 
-static void xen_netbk_kick_thread(struct xen_netbk *netbk)
-{
-	wake_up(&netbk->wq);
-}
-
 static int max_required_rx_slots(struct xenvif *vif)
 {
 	int max = DIV_ROUND_UP(vif->dev->mtu, PAGE_SIZE);
@@ -379,7 +337,7 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 
 			src_pend = &netbk->pending_tx_info[idx];
 
-			copy_gop->source.domid = src_pend->vif->domid;
+			copy_gop->source.domid = netbk->vif->domid;
 			copy_gop->source.u.ref = src_pend->req.gref;
 			copy_gop->flags |= GNTCOPY_source_gref;
 		} else {
@@ -537,11 +495,18 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-static void xen_netbk_rx_action(struct xen_netbk *netbk)
+static void xen_netbk_kick_thread(struct xen_netbk *netbk)
 {
-	struct xenvif *vif = NULL, *tmp;
+	struct xenvif *vif = netbk->vif;
+
+	wake_up(&vif->wq);
+}
+
+void xen_netbk_rx_action(struct xen_netbk *netbk)
+{
+	struct xenvif *vif = NULL;
 	s8 status;
-	u16 irq, flags;
+	u16 flags;
 	struct xen_netif_rx_response *resp;
 	struct sk_buff_head rxq;
 	struct sk_buff *skb;
@@ -551,6 +516,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 	int count;
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
+	int need_to_notify = 0;
 
 	struct netrx_pending_operations npo = {
 		.copy  = netbk->grant_copy_op,
@@ -651,25 +617,19 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 					 sco->meta_slots_used);
 
 		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
-		irq = vif->irq;
-		if (ret && list_empty(&vif->notify_list))
-			list_add_tail(&vif->notify_list, &notify);
+		if (ret)
+			need_to_notify = 1;
 
 		xenvif_notify_tx_completion(vif);
 
-		xenvif_put(vif);
 		npo.meta_cons += sco->meta_slots_used;
 		dev_kfree_skb(skb);
 	}
 
-	list_for_each_entry_safe(vif, tmp, &notify, notify_list) {
+	if (need_to_notify)
 		notify_remote_via_irq(vif->irq);
-		list_del_init(&vif->notify_list);
-	}
 
-	/* More work to do? */
-	if (!skb_queue_empty(&netbk->rx_queue) &&
-			!timer_pending(&netbk->net_timer))
+	if (!skb_queue_empty(&netbk->rx_queue))
 		xen_netbk_kick_thread(netbk);
 }
 
@@ -682,86 +642,17 @@ void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
 	xen_netbk_kick_thread(netbk);
 }
 
-static void xen_netbk_alarm(unsigned long data)
-{
-	struct xen_netbk *netbk = (struct xen_netbk *)data;
-	xen_netbk_kick_thread(netbk);
-}
-
-static int __on_net_schedule_list(struct xenvif *vif)
-{
-	return !list_empty(&vif->schedule_list);
-}
-
-/* Must be called with net_schedule_list_lock held */
-static void remove_from_net_schedule_list(struct xenvif *vif)
-{
-	if (likely(__on_net_schedule_list(vif))) {
-		list_del_init(&vif->schedule_list);
-		xenvif_put(vif);
-	}
-}
-
-static struct xenvif *poll_net_schedule_list(struct xen_netbk *netbk)
-{
-	struct xenvif *vif = NULL;
-
-	spin_lock_irq(&netbk->net_schedule_list_lock);
-	if (list_empty(&netbk->net_schedule_list))
-		goto out;
-
-	vif = list_first_entry(&netbk->net_schedule_list,
-			       struct xenvif, schedule_list);
-	if (!vif)
-		goto out;
-
-	xenvif_get(vif);
-
-	remove_from_net_schedule_list(vif);
-out:
-	spin_unlock_irq(&netbk->net_schedule_list_lock);
-	return vif;
-}
-
-void xen_netbk_schedule_xenvif(struct xenvif *vif)
-{
-	unsigned long flags;
-	struct xen_netbk *netbk = vif->netbk;
-
-	if (__on_net_schedule_list(vif))
-		goto kick;
-
-	spin_lock_irqsave(&netbk->net_schedule_list_lock, flags);
-	if (!__on_net_schedule_list(vif) &&
-	    likely(xenvif_schedulable(vif))) {
-		list_add_tail(&vif->schedule_list, &netbk->net_schedule_list);
-		xenvif_get(vif);
-	}
-	spin_unlock_irqrestore(&netbk->net_schedule_list_lock, flags);
-
-kick:
-	smp_mb();
-	if ((nr_pending_reqs(netbk) < (MAX_PENDING_REQS/2)) &&
-	    !list_empty(&netbk->net_schedule_list))
-		xen_netbk_kick_thread(netbk);
-}
-
-void xen_netbk_deschedule_xenvif(struct xenvif *vif)
-{
-	struct xen_netbk *netbk = vif->netbk;
-	spin_lock_irq(&netbk->net_schedule_list_lock);
-	remove_from_net_schedule_list(vif);
-	spin_unlock_irq(&netbk->net_schedule_list_lock);
-}
-
 void xen_netbk_check_rx_xenvif(struct xenvif *vif)
 {
 	int more_to_do;
 
 	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
 
+	/* In this check function, we are supposed to do fe's rx,
+	 * which means be's tx */
+
 	if (more_to_do)
-		xen_netbk_schedule_xenvif(vif);
+		napi_schedule(&vif->napi);
 }
 
 static void tx_add_credit(struct xenvif *vif)
@@ -804,7 +695,6 @@ static void netbk_tx_err(struct xenvif *vif,
 	} while (1);
 	vif->tx.req_cons = cons;
 	xen_netbk_check_rx_xenvif(vif);
-	xenvif_put(vif);
 }
 
 static int netbk_count_requests(struct xenvif *vif,
@@ -901,8 +791,6 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
 		gop++;
 
 		memcpy(&pending_tx_info[pending_idx].req, txp, sizeof(*txp));
-		xenvif_get(vif);
-		pending_tx_info[pending_idx].vif = vif;
 		frag_set_pending_idx(&frags[i], pending_idx);
 	}
 
@@ -916,7 +804,7 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 	struct gnttab_copy *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
-	struct xenvif *vif = pending_tx_info[pending_idx].vif;
+	struct xenvif *vif = netbk->vif;
 	struct xen_netif_tx_request *txp;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -930,7 +818,6 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 		txp = &pending_tx_info[pending_idx].req;
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
 		netbk->pending_ring[index] = pending_idx;
-		xenvif_put(vif);
 	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -956,7 +843,6 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
 		index = pending_index(netbk->pending_prod++);
 		netbk->pending_ring[index] = pending_idx;
-		xenvif_put(vif);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -1167,10 +1053,9 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 	struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
+	struct xenvif *vif = netbk->vif;
 
-	while (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
-		!list_empty(&netbk->net_schedule_list)) {
-		struct xenvif *vif;
+	while ((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
 		struct page *page;
@@ -1181,26 +1066,19 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		/* Get a netif from the list with work to do. */
-		vif = poll_net_schedule_list(netbk);
-		if (!vif)
-			continue;
-
 		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
 		if (!work_to_do) {
-			xenvif_put(vif);
-			continue;
+			break;
 		}
 
 		idx = vif->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
 		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
 
-		/* Credit-based scheduling. */
+		/* Credit-based traffic shaping. */
 		if (txreq.size > vif->remaining_credit &&
 		    tx_credit_exceeded(vif, txreq.size)) {
-			xenvif_put(vif);
-			continue;
+			break;
 		}
 
 		vif->remaining_credit -= txreq.size;
@@ -1215,14 +1093,14 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 			idx = vif->tx.req_cons;
 			if (unlikely(work_to_do < 0)) {
 				netbk_tx_err(vif, &txreq, idx);
-				continue;
+				break;
 			}
 		}
 
 		ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0)) {
 			netbk_tx_err(vif, &txreq, idx - ret);
-			continue;
+			break;
 		}
 		idx += ret;
 
@@ -1230,7 +1108,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 			netdev_dbg(vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
 			netbk_tx_err(vif, &txreq, idx);
-			continue;
+			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
@@ -1240,7 +1118,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
 			netbk_tx_err(vif, &txreq, idx);
-			continue;
+			break;
 		}
 
 		index = pending_index(netbk->pending_cons);
@@ -1269,7 +1147,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 			if (netbk_set_skb_gso(vif, skb, gso)) {
 				kfree_skb(skb);
 				netbk_tx_err(vif, &txreq, idx);
-				continue;
+				break;
 			}
 		}
 
@@ -1278,7 +1156,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		if (!page) {
 			kfree_skb(skb);
 			netbk_tx_err(vif, &txreq, idx);
-			continue;
+			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
@@ -1296,7 +1174,6 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 
 		memcpy(&netbk->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		netbk->pending_tx_info[pending_idx].vif = vif;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1320,7 +1197,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		if (request_gop == NULL) {
 			kfree_skb(skb);
 			netbk_tx_err(vif, &txreq, idx);
-			continue;
+			break;
 		}
 		gop = request_gop;
 
@@ -1334,19 +1211,20 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 	return gop - netbk->tx_copy_ops;
 }
 
-static void xen_netbk_tx_submit(struct xen_netbk *netbk)
+static void xen_netbk_tx_submit(struct xen_netbk *netbk,
+				int *work_done, int budget)
 {
 	struct gnttab_copy *gop = netbk->tx_copy_ops;
 	struct sk_buff *skb;
+	struct xenvif *vif = netbk->vif;
 
-	while ((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) {
+	while ((*work_done < budget) &&
+	       (skb = __skb_dequeue(&netbk->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
-		struct xenvif *vif;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		vif = netbk->pending_tx_info[pending_idx].vif;
 		txp = &netbk->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
@@ -1398,18 +1276,23 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
 		}
 
 		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		vif->dev->stats.rx_packets++;
+
+		(*work_done)++;
 
 		xenvif_receive_skb(vif, skb);
 	}
 }
 
 /* Called after netfront has transmitted */
-static void xen_netbk_tx_action(struct xen_netbk *netbk)
+void xen_netbk_tx_action(struct xen_netbk *netbk, int *work_done, int budget)
 {
 	unsigned nr_gops;
 	int ret;
 
+	if (unlikely(!tx_work_todo(netbk)))
+		return;
+
 	nr_gops = xen_netbk_tx_build_gops(netbk);
 
 	if (nr_gops == 0)
@@ -1418,13 +1301,12 @@ static void xen_netbk_tx_action(struct xen_netbk *netbk)
 					netbk->tx_copy_ops, nr_gops);
 	BUG_ON(ret);
 
-	xen_netbk_tx_submit(netbk);
-
+	xen_netbk_tx_submit(netbk, work_done, budget);
 }
 
 static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
 {
-	struct xenvif *vif;
+	struct xenvif *vif = netbk->vif;
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
 
@@ -1434,15 +1316,11 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
 
 	pending_tx_info = &netbk->pending_tx_info[pending_idx];
 
-	vif = pending_tx_info->vif;
-
 	make_tx_response(vif, &pending_tx_info->req, XEN_NETIF_RSP_OKAY);
 
 	index = pending_index(netbk->pending_prod++);
 	netbk->pending_ring[index] = pending_idx;
 
-	xenvif_put(vif);
-
 	page_pool_put(netbk->mmap_pages[pending_idx]);
 
 	netbk->mmap_pages[pending_idx] = INVALID_ENTRY;
@@ -1499,37 +1377,13 @@ static inline int rx_work_todo(struct xen_netbk *netbk)
 
 static inline int tx_work_todo(struct xen_netbk *netbk)
 {
-
-	if (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
-			!list_empty(&netbk->net_schedule_list))
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&netbk->vif->tx)) &&
+	    (nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS)
 		return 1;
 
 	return 0;
 }
 
-static int xen_netbk_kthread(void *data)
-{
-	struct xen_netbk *netbk = data;
-	while (!kthread_should_stop()) {
-		wait_event_interruptible(netbk->wq,
-				rx_work_todo(netbk) ||
-				tx_work_todo(netbk) ||
-				kthread_should_stop());
-		cond_resched();
-
-		if (kthread_should_stop())
-			break;
-
-		if (rx_work_todo(netbk))
-			xen_netbk_rx_action(netbk);
-
-		if (tx_work_todo(netbk))
-			xen_netbk_tx_action(netbk);
-	}
-
-	return 0;
-}
-
 void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1575,78 +1429,74 @@ err:
 	return err;
 }
 
-static int __init netback_init(void)
+struct xen_netbk *xen_netbk_alloc_netbk(struct xenvif *vif)
 {
 	int i;
-	int rc = 0;
-	int group;
-
-	if (!xen_domain())
-		return -ENODEV;
+	struct xen_netbk *netbk;
 
-	xen_netbk_group_nr = num_online_cpus();
-	xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr);
-	if (!xen_netbk) {
+	netbk = vzalloc(sizeof(struct xen_netbk));
+	if (!netbk) {
 		printk(KERN_ALERT "%s: out of memory\n", __func__);
-		return -ENOMEM;
+		return NULL;
 	}
 
-	for (group = 0; group < xen_netbk_group_nr; group++) {
-		struct xen_netbk *netbk = &xen_netbk[group];
-		skb_queue_head_init(&netbk->rx_queue);
-		skb_queue_head_init(&netbk->tx_queue);
-
-		init_timer(&netbk->net_timer);
-		netbk->net_timer.data = (unsigned long)netbk;
-		netbk->net_timer.function = xen_netbk_alarm;
-
-		netbk->pending_cons = 0;
-		netbk->pending_prod = MAX_PENDING_REQS;
-		for (i = 0; i < MAX_PENDING_REQS; i++)
-			netbk->pending_ring[i] = i;
-
-		init_waitqueue_head(&netbk->wq);
-		netbk->task = kthread_create(xen_netbk_kthread,
-					     (void *)netbk,
-					     "netback/%u", group);
-
-		if (IS_ERR(netbk->task)) {
-			printk(KERN_ALERT "kthread_create() fails at netback\n");
-			del_timer(&netbk->net_timer);
-			rc = PTR_ERR(netbk->task);
-			goto failed_init;
-		}
+	netbk->vif = vif;
+
+	skb_queue_head_init(&netbk->rx_queue);
+	skb_queue_head_init(&netbk->tx_queue);
 
-		kthread_bind(netbk->task, group);
+	netbk->pending_cons = 0;
+	netbk->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		netbk->pending_ring[i] = i;
 
-		INIT_LIST_HEAD(&netbk->net_schedule_list);
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		netbk->mmap_pages[i] = INVALID_ENTRY;
 
-		spin_lock_init(&netbk->net_schedule_list_lock);
+	return netbk;
+}
 
-		atomic_set(&netbk->netfront_count, 0);
+void xen_netbk_free_netbk(struct xen_netbk *netbk)
+{
+	vfree(netbk);
+}
 
-		wake_up_process(netbk->task);
+int xen_netbk_kthread(void *data)
+{
+	struct xenvif *vif = data;
+	struct xen_netbk *netbk = vif->netbk;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->wq,
+					 rx_work_todo(netbk) ||
+					 kthread_should_stop());
+		cond_resched();
+
+		if (kthread_should_stop())
+			break;
+
+		if (rx_work_todo(netbk))
+			xen_netbk_rx_action(netbk);
 	}
 
+	return 0;
+}
+
+
+static int __init netback_init(void)
+{
+	int rc = 0;
+
+	if (!xen_domain())
+		return -ENODEV;
+
 	rc = page_pool_init();
 	if (rc)
 		goto failed_init;
 
-	rc = xenvif_xenbus_init();
-	if (rc)
-		goto pool_failed_init;
-
-	return 0;
+	return xenvif_xenbus_init();
 
-pool_failed_init:
-	page_pool_destroy();
 failed_init:
-	while (--group >= 0) {
-		struct xen_netbk *netbk = &xen_netbk[group];
-		del_timer(&netbk->net_timer);
-		kthread_stop(netbk->task);
-	}
-	vfree(xen_netbk);
 	return rc;
 
 }
@@ -1655,13 +1505,6 @@ module_init(netback_init);
 
 static void __exit netback_exit(void)
 {
-	int i;
-	for (i = 0; i < xen_netbk_group_nr; i++) {
-		struct xen_netbk *netbk = &xen_netbk[i];
-		del_timer(&netbk->net_timer);
-		kthread_stop(netbk->task);
-	}
-	vfree(xen_netbk);
 	page_pool_destroy();
 	xenvif_xenbus_exit();
 }
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 65d14f2..f1e89ca 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -387,7 +387,6 @@ static void connect(struct backend_info *be)
 	netif_wake_queue(be->vif->dev);
 }
 
-
 static int connect_rings(struct backend_info *be)
 {
 	struct xenvif *vif = be->vif;
-- 
1.7.2.5


* [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect.
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
                   ` (2 preceding siblings ...)
  2012-01-13 16:59 ` [RFC PATCH 3/6] netback: switch to NAPI + kthread model Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  2012-01-13 18:44   ` [Xen-devel] " David Vrabel
  2012-01-13 16:59 ` [RFC PATCH 5/6] netback: melt xen_netbk into xenvif Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 6/6] netback: alter internal function/structure names Wei Liu
  5 siblings, 1 reply; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

If there is a vif running and the user unloads netback, it will certainly
cause problems, so take a module reference while a vif is connected and
drop it when the vif disconnects.
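
A note on the API used below: __module_get() takes a module reference
unconditionally, while try_module_get() can fail if the module is already
on its way out; which one is appropriate depends on whether the caller can
race with unload. A minimal sketch of the pattern:

	/* Unconditional reference; pairs with module_put() on disconnect. */
	__module_get(THIS_MODULE);
	...
	module_put(THIS_MODULE);

	/* Conditional variant, for callers that may race with unload. */
	if (!try_module_get(THIS_MODULE))
		return -ENODEV;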

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/interface.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 93cb212..3126028 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -323,6 +323,8 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	if (vif->irq)
 		return 0;
 
+	__module_get(THIS_MODULE);
+
 	err = xen_netbk_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
@@ -405,4 +407,6 @@ void xenvif_disconnect(struct xenvif *vif)
 	xen_netbk_unmap_frontend_rings(vif);
 
 	free_netdev(vif->dev);
+
+	module_put(THIS_MODULE);
 }
-- 
1.7.2.5


* [RFC PATCH 5/6] netback: melt xen_netbk into xenvif
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
                   ` (3 preceding siblings ...)
  2012-01-13 16:59 ` [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  2012-01-13 16:59 ` [RFC PATCH 6/6] netback: alter internal function/structure names Wei Liu
  5 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

In the 1:1 model there is no need to keep xen_netbk and xenvif separate,
so merge the xen_netbk fields into struct xenvif.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/common.h    |   58 ++++++--
 drivers/net/xen-netback/interface.c |   35 ++---
 drivers/net/xen-netback/netback.c   |  279 ++++++++++++-----------------------
 drivers/net/xen-netback/page_pool.c |   10 +-
 drivers/net/xen-netback/page_pool.h |   10 +-
 5 files changed, 166 insertions(+), 226 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 1f6156d..6b99246 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -45,16 +45,34 @@
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>
 
-struct xen_netbk;
+#include "page_pool.h"
+
+struct pending_tx_info {
+	struct xen_netif_tx_request req;
+};
+typedef unsigned int pending_ring_idx_t;
+
+struct netbk_rx_meta {
+	int id;
+	int size;
+	int gso_size;
+};
+
+#define MAX_PENDING_REQS 256
+
+/* Discriminate from any valid pending_idx value. */
+#define INVALID_PENDING_IDX 0xFFFF
+
+#define MAX_BUFFER_OFFSET PAGE_SIZE
+
+#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
+#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
 	unsigned int     handle;
 
-	/* Reference to netback processing backend. */
-	struct xen_netbk *netbk;
-
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* Use kthread for guest RX */
@@ -97,6 +115,27 @@ struct xenvif {
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
+
+	struct sk_buff_head rx_queue;
+	struct sk_buff_head tx_queue;
+
+	idx_t mmap_pages[MAX_PENDING_REQS];
+
+	pending_ring_idx_t pending_prod;
+	pending_ring_idx_t pending_cons;
+
+	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+
+	u16 pending_ring[MAX_PENDING_REQS];
+
+	/*
+	 * Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
+	 * head/fragment page uses 2 copy operations because it
+	 * straddles two buffers in the frontend.
+	 */
+	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
+	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
 };
 
 static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
@@ -104,9 +143,6 @@ static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
 	return to_xenbus_device(vif->dev->dev.parent);
 }
 
-#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
-
 struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
@@ -143,12 +179,8 @@ void xenvif_notify_tx_completion(struct xenvif *vif);
 /* Returns number of ring slots required to send an skb to the frontend */
 unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
 
-/* Allocate and free xen_netbk structure */
-struct xen_netbk *xen_netbk_alloc_netbk(struct xenvif *vif);
-void xen_netbk_free_netbk(struct xen_netbk *netbk);
-
-void xen_netbk_tx_action(struct xen_netbk *netbk, int *work_done, int budget);
-void xen_netbk_rx_action(struct xen_netbk *netbk);
+void xen_netbk_tx_action(struct xenvif *vif, int *work_done, int budget);
+void xen_netbk_rx_action(struct xenvif *vif);
 
 int xen_netbk_kthread(void *data);
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 3126028..69184d1 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -55,7 +55,7 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;
 
-	if (vif->netbk == NULL)
+	if (vif->task == NULL)
 		return IRQ_NONE;
 
 	if (xenvif_rx_schedulable(vif))
@@ -72,7 +72,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 	struct xenvif *vif = container_of(napi, struct xenvif, napi);
 	int work_done = 0;
 
-	xen_netbk_tx_action(vif->netbk, &work_done, budget);
+	xen_netbk_tx_action(vif, &work_done, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -95,9 +95,6 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	BUG_ON(skb->dev != dev);
 
-	if (vif->netbk == NULL)
-		goto drop;
-
 	/* Drop the packet if the target domain has no receive buffers. */
 	if (!xenvif_rx_schedulable(vif))
 		goto drop;
@@ -257,6 +254,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	int err;
 	struct net_device *dev;
 	struct xenvif *vif;
+	int i;
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
@@ -271,7 +269,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif = netdev_priv(dev);
 	vif->domid  = domid;
 	vif->handle = handle;
-	vif->netbk = NULL;
 
 	vif->can_sg = 1;
 	vif->csum = 1;
@@ -290,6 +287,17 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
+	skb_queue_head_init(&vif->rx_queue);
+	skb_queue_head_init(&vif->tx_queue);
+
+	vif->pending_cons = 0;
+	vif->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		vif->pending_ring[i] = i;
+
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		vif->mmap_pages[i] = INVALID_ENTRY;
+
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -337,14 +345,6 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	vif->irq = err;
 	disable_irq(vif->irq);
 
-	vif->netbk = xen_netbk_alloc_netbk(vif);
-	if (!vif->netbk) {
-		pr_warn("Could not allocate xen_netbk\n");
-		err = -ENOMEM;
-		goto err_unbind;
-	}
-
-
 	init_waitqueue_head(&vif->wq);
 	vif->task = kthread_create(xen_netbk_kthread,
 				   (void *)vif,
@@ -352,7 +352,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	if (IS_ERR(vif->task)) {
 		pr_warn("Could not create kthread\n");
 		err = PTR_ERR(vif->task);
-		goto err_free_netbk;
+		goto err_unbind;
 	}
 
 	rtnl_lock();
@@ -367,8 +367,6 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	wake_up_process(vif->task);
 
 	return 0;
-err_free_netbk:
-	xen_netbk_free_netbk(vif->netbk);
 err_unbind:
 	unbind_from_irqhandler(vif->irq, vif);
 err_unmap:
@@ -392,9 +390,6 @@ void xenvif_disconnect(struct xenvif *vif)
 	if (vif->task)
 		kthread_stop(vif->task);
 
-	if (vif->netbk)
-		xen_netbk_free_netbk(vif->netbk);
-
 	netif_napi_del(&vif->napi);
 
 	del_timer_sync(&vif->credit_timeout);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e486fd6..133ebb3 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -47,57 +47,13 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 
-struct pending_tx_info {
-	struct xen_netif_tx_request req;
-};
-typedef unsigned int pending_ring_idx_t;
-
-struct netbk_rx_meta {
-	int id;
-	int size;
-	int gso_size;
-};
-
-#define MAX_PENDING_REQS 256
-
-/* Discriminate from any valid pending_idx value. */
-#define INVALID_PENDING_IDX 0xFFFF
-
-#define MAX_BUFFER_OFFSET PAGE_SIZE
-
-struct xen_netbk {
-	struct sk_buff_head rx_queue;
-	struct sk_buff_head tx_queue;
-
-	idx_t mmap_pages[MAX_PENDING_REQS];
-
-	pending_ring_idx_t pending_prod;
-	pending_ring_idx_t pending_cons;
-	struct list_head net_schedule_list;
-
-	struct xenvif *vif;
-
-	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
-	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
-
-	u16 pending_ring[MAX_PENDING_REQS];
-
-	/*
-	 * Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
-	 * head/fragment page uses 2 copy operations because it
-	 * straddles two buffers in the frontend.
-	 */
-	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
-	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
-};
-
-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx);
+static void xen_netbk_idx_release(struct xenvif *vif, u16 pending_idx);
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xen_netbk *netbk);
-static inline int rx_work_todo(struct xen_netbk *netbk);
+static inline int tx_work_todo(struct xenvif *vif);
+static inline int rx_work_todo(struct xenvif *vif);
 
 static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 					     u16      id,
@@ -106,16 +62,16 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xen_netbk *netbk,
+static inline unsigned long idx_to_pfn(struct xenvif *vif,
 				       u16 idx)
 {
-	return page_to_pfn(to_page(netbk->mmap_pages[idx]));
+	return page_to_pfn(to_page(vif->mmap_pages[idx]));
 }
 
-static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
+static inline unsigned long idx_to_kaddr(struct xenvif *vif,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(netbk, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
 }
 
 /*
@@ -143,10 +99,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xen_netbk *netbk)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 {
 	return MAX_PENDING_REQS -
-		netbk->pending_prod + netbk->pending_cons;
+		vif->pending_prod + vif->pending_cons;
 }
 
 static int max_required_rx_slots(struct xenvif *vif)
@@ -332,12 +288,12 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop = npo->copy + npo->copy_prod++;
 		copy_gop->flags = GNTCOPY_dest_gref;
 		if (foreign) {
-			struct xen_netbk *netbk = to_netbk(idx);
+			struct xenvif *vif = to_vif(idx);
 			struct pending_tx_info *src_pend;
 
-			src_pend = &netbk->pending_tx_info[idx];
+			src_pend = &vif->pending_tx_info[idx];
 
-			copy_gop->source.domid = netbk->vif->domid;
+			copy_gop->source.domid = vif->domid;
 			copy_gop->source.u.ref = src_pend->req.gref;
 			copy_gop->flags |= GNTCOPY_source_gref;
 		} else {
@@ -495,16 +451,13 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-static void xen_netbk_kick_thread(struct xen_netbk *netbk)
+static void xen_netbk_kick_thread(struct xenvif *vif)
 {
-	struct xenvif *vif = netbk->vif;
-
 	wake_up(&vif->wq);
 }
 
-void xen_netbk_rx_action(struct xen_netbk *netbk)
+void xen_netbk_rx_action(struct xenvif *vif)
 {
-	struct xenvif *vif = NULL;
 	s8 status;
 	u16 flags;
 	struct xen_netif_rx_response *resp;
@@ -519,15 +472,15 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 	int need_to_notify = 0;
 
 	struct netrx_pending_operations npo = {
-		.copy  = netbk->grant_copy_op,
-		.meta  = netbk->meta,
+		.copy  = vif->grant_copy_op,
+		.meta  = vif->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
 	count = 0;
 
-	while ((skb = skb_dequeue(&netbk->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
 		vif = netdev_priv(skb->dev);
 		nr_frags = skb_shinfo(skb)->nr_frags;
 
@@ -543,29 +496,29 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 			break;
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
 	if (!npo.copy_prod)
 		return;
 
-	BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
-	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &netbk->grant_copy_op,
+	BUG_ON(npo.copy_prod > ARRAY_SIZE(vif->grant_copy_op));
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &vif->grant_copy_op,
 					npo.copy_prod);
 	BUG_ON(ret != 0);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		vif = netdev_priv(skb->dev);
+		/* vif = netdev_priv(skb->dev); */
 
-		if (netbk->meta[npo.meta_cons].gso_size && vif->gso_prefix) {
+		if (vif->meta[npo.meta_cons].gso_size && vif->gso_prefix) {
 			resp = RING_GET_RESPONSE(&vif->rx,
 						vif->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = netbk->meta[npo.meta_cons].gso_size;
-			resp->id = netbk->meta[npo.meta_cons].id;
+			resp->offset = vif->meta[npo.meta_cons].gso_size;
+			resp->id = vif->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -590,12 +543,12 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, netbk->meta[npo.meta_cons].id,
+		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
 					status, offset,
-					netbk->meta[npo.meta_cons].size,
+					vif->meta[npo.meta_cons].size,
 					flags);
 
-		if (netbk->meta[npo.meta_cons].gso_size && !vif->gso_prefix) {
+		if (vif->meta[npo.meta_cons].gso_size && !vif->gso_prefix) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
 				RING_GET_RESPONSE(&vif->rx,
@@ -603,7 +556,7 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.size = netbk->meta[npo.meta_cons].gso_size;
+			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
 			gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
@@ -613,7 +566,7 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 		}
 
 		netbk_add_frag_responses(vif, status,
-					 netbk->meta + npo.meta_cons + 1,
+					 vif->meta + npo.meta_cons + 1,
 					 sco->meta_slots_used);
 
 		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
@@ -629,17 +582,15 @@ void xen_netbk_rx_action(struct xen_netbk *netbk)
 	if (need_to_notify)
 		notify_remote_via_irq(vif->irq);
 
-	if (!skb_queue_empty(&netbk->rx_queue))
-		xen_netbk_kick_thread(netbk);
+	if (!skb_queue_empty(&vif->rx_queue))
+		xen_netbk_kick_thread(vif);
 }
 
 void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
 {
-	struct xen_netbk *netbk = vif->netbk;
-
-	skb_queue_tail(&netbk->rx_queue, skb);
+	skb_queue_tail(&vif->rx_queue, skb);
 
-	xen_netbk_kick_thread(netbk);
+	xen_netbk_kick_thread(vif);
 }
 
 void xen_netbk_check_rx_xenvif(struct xenvif *vif)
@@ -738,21 +689,20 @@ static int netbk_count_requests(struct xenvif *vif,
 	return frags;
 }
 
-static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk,
+static struct page *xen_netbk_alloc_page(struct xenvif *vif,
 					 struct sk_buff *skb,
 					 u16 pending_idx)
 {
 	struct page *page;
 	int idx;
-	page = page_pool_get(netbk, &idx);
+	page = page_pool_get(vif, &idx);
 	if (!page)
 		return NULL;
-	netbk->mmap_pages[pending_idx] = idx;
+	vif->mmap_pages[pending_idx] = idx;
 	return page;
 }
 
-static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
-						  struct xenvif *vif,
+static struct gnttab_copy *xen_netbk_get_requests(struct xenvif *vif,
 						  struct sk_buff *skb,
 						  struct xen_netif_tx_request *txp,
 						  struct gnttab_copy *gop)
@@ -769,11 +719,11 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
 		struct page *page;
 		pending_ring_idx_t index;
 		struct pending_tx_info *pending_tx_info =
-			netbk->pending_tx_info;
+			vif->pending_tx_info;
 
-		index = pending_index(netbk->pending_cons++);
-		pending_idx = netbk->pending_ring[index];
-		page = xen_netbk_alloc_page(netbk, skb, pending_idx);
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
+		page = xen_netbk_alloc_page(vif, skb, pending_idx);
 		if (!page)
 			return NULL;
 
@@ -797,14 +747,13 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
 	return gop;
 }
 
-static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
+static int xen_netbk_tx_check_gop(struct xenvif *vif,
 				  struct sk_buff *skb,
 				  struct gnttab_copy **gopp)
 {
 	struct gnttab_copy *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
-	struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
-	struct xenvif *vif = netbk->vif;
+	struct pending_tx_info *pending_tx_info = vif->pending_tx_info;
 	struct xen_netif_tx_request *txp;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -814,10 +763,10 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 	err = gop->status;
 	if (unlikely(err)) {
 		pending_ring_idx_t index;
-		index = pending_index(netbk->pending_prod++);
+		index = pending_index(vif->pending_prod++);
 		txp = &pending_tx_info[pending_idx].req;
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-		netbk->pending_ring[index] = pending_idx;
+		vif->pending_ring[index] = pending_idx;
 	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -834,15 +783,15 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xen_netbk_idx_release(netbk, pending_idx);
+				xen_netbk_idx_release(vif, pending_idx);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		txp = &netbk->pending_tx_info[pending_idx].req;
+		txp = &vif->pending_tx_info[pending_idx].req;
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-		index = pending_index(netbk->pending_prod++);
-		netbk->pending_ring[index] = pending_idx;
+		index = pending_index(vif->pending_prod++);
+		vif->pending_ring[index] = pending_idx;
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -850,10 +799,10 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xen_netbk_idx_release(netbk, pending_idx);
+		xen_netbk_idx_release(vif, pending_idx);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xen_netbk_idx_release(netbk, pending_idx);
+			xen_netbk_idx_release(vif, pending_idx);
 		}
 
 		/* Remember the error: invalidate all subsequent fragments. */
@@ -864,7 +813,7 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 	return err;
 }
 
-static void xen_netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb)
+static void xen_netbk_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -878,16 +827,16 @@ static void xen_netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &netbk->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(netbk, pending_idx));
+		txp = &vif->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xen_netbk_idx_release */
-		get_page(to_page(netbk->mmap_pages[pending_idx]));
-		xen_netbk_idx_release(netbk, pending_idx);
+		get_page(to_page(vif->mmap_pages[pending_idx]));
+		xen_netbk_idx_release(vif, pending_idx);
 	}
 }
 
@@ -1048,14 +997,13 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
+static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
-	struct xenvif *vif = netbk->vif;
 
-	while ((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) {
+	while ((nr_pending_reqs(vif) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
 		struct page *page;
@@ -1121,8 +1069,8 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 			break;
 		}
 
-		index = pending_index(netbk->pending_cons);
-		pending_idx = netbk->pending_ring[index];
+		index = pending_index(vif->pending_cons);
+		pending_idx = vif->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < MAX_SKB_FRAGS) ?
@@ -1152,7 +1100,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		}
 
 		/* XXX could copy straight to head */
-		page = xen_netbk_alloc_page(netbk, skb, pending_idx);
+		page = xen_netbk_alloc_page(vif, skb, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
 			netbk_tx_err(vif, &txreq, idx);
@@ -1172,7 +1120,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 
 		gop++;
 
-		memcpy(&netbk->pending_tx_info[pending_idx].req,
+		memcpy(&vif->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
 		*((u16 *)skb->data) = pending_idx;
 
@@ -1188,11 +1136,11 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 					     INVALID_PENDING_IDX);
 		}
 
-		__skb_queue_tail(&netbk->tx_queue, skb);
+		__skb_queue_tail(&vif->tx_queue, skb);
 
-		netbk->pending_cons++;
+		vif->pending_cons++;
 
-		request_gop = xen_netbk_get_requests(netbk, vif,
+		request_gop = xen_netbk_get_requests(vif,
 						     skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
@@ -1204,31 +1152,30 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		vif->tx.req_cons = idx;
 		xen_netbk_check_rx_xenvif(vif);
 
-		if ((gop-netbk->tx_copy_ops) >= ARRAY_SIZE(netbk->tx_copy_ops))
+		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
 			break;
 	}
 
-	return gop - netbk->tx_copy_ops;
+	return gop - vif->tx_copy_ops;
 }
 
-static void xen_netbk_tx_submit(struct xen_netbk *netbk,
+static void xen_netbk_tx_submit(struct xenvif *vif,
 				int *work_done, int budget)
 {
-	struct gnttab_copy *gop = netbk->tx_copy_ops;
+	struct gnttab_copy *gop = vif->tx_copy_ops;
 	struct sk_buff *skb;
-	struct xenvif *vif = netbk->vif;
 
 	while ((*work_done < budget) &&
-	       (skb = __skb_dequeue(&netbk->tx_queue)) != NULL) {
+	       (skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &netbk->pending_tx_info[pending_idx].req;
+		txp = &vif->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xen_netbk_tx_check_gop(netbk, skb, &gop))) {
+		if (unlikely(xen_netbk_tx_check_gop(vif, skb, &gop))) {
 			netdev_dbg(vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
@@ -1237,7 +1184,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk,
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(netbk, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1245,7 +1192,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk,
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xen_netbk_idx_release(netbk, pending_idx);
+			xen_netbk_idx_release(vif, pending_idx);
 		}
 
 		if (txp->flags & XEN_NETTXF_csum_blank)
@@ -1253,7 +1200,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk,
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xen_netbk_fill_frags(netbk, skb);
+		xen_netbk_fill_frags(vif, skb);
 
 		/*
 		 * If the initial fragment was < PKT_PROT_LEN then
@@ -1285,45 +1232,44 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk,
 }
 
 /* Called after netfront has transmitted */
-void xen_netbk_tx_action(struct xen_netbk *netbk, int *work_done, int budget)
+void xen_netbk_tx_action(struct xenvif *vif, int *work_done, int budget)
 {
 	unsigned nr_gops;
 	int ret;
 
-	if (unlikely(!tx_work_todo(netbk)))
+	if (unlikely(!tx_work_todo(vif)))
 		return;
 
-	nr_gops = xen_netbk_tx_build_gops(netbk);
+	nr_gops = xen_netbk_tx_build_gops(vif);
 
 	if (nr_gops == 0)
 		return;
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
-					netbk->tx_copy_ops, nr_gops);
+					vif->tx_copy_ops, nr_gops);
 	BUG_ON(ret);
 
-	xen_netbk_tx_submit(netbk, work_done, budget);
+	xen_netbk_tx_submit(vif, work_done, budget);
 }
 
-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
+static void xen_netbk_idx_release(struct xenvif *vif, u16 pending_idx)
 {
-	struct xenvif *vif = netbk->vif;
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
 
 	/* Already complete? */
-	if (netbk->mmap_pages[pending_idx] == INVALID_ENTRY)
+	if (vif->mmap_pages[pending_idx] == INVALID_ENTRY)
 		return;
 
-	pending_tx_info = &netbk->pending_tx_info[pending_idx];
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
 
 	make_tx_response(vif, &pending_tx_info->req, XEN_NETIF_RSP_OKAY);
 
-	index = pending_index(netbk->pending_prod++);
-	netbk->pending_ring[index] = pending_idx;
+	index = pending_index(vif->pending_prod++);
+	vif->pending_ring[index] = pending_idx;
 
-	page_pool_put(netbk->mmap_pages[pending_idx]);
+	page_pool_put(vif->mmap_pages[pending_idx]);
 
-	netbk->mmap_pages[pending_idx] = INVALID_ENTRY;
+	vif->mmap_pages[pending_idx] = INVALID_ENTRY;
 }
 
 static void make_tx_response(struct xenvif *vif,
@@ -1370,15 +1316,15 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	return resp;
 }
 
-static inline int rx_work_todo(struct xen_netbk *netbk)
+static inline int rx_work_todo(struct xenvif *vif)
 {
-	return !skb_queue_empty(&netbk->rx_queue);
+	return !skb_queue_empty(&vif->rx_queue);
 }
 
-static inline int tx_work_todo(struct xen_netbk *netbk)
+static inline int tx_work_todo(struct xenvif *vif)
 {
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&netbk->vif->tx)) &&
-	    (nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS)
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
+	    (nr_pending_reqs(vif) + MAX_SKB_FRAGS) < MAX_PENDING_REQS)
 		return 1;
 
 	return 0;
@@ -1429,54 +1375,21 @@ err:
 	return err;
 }
 
-struct xen_netbk *xen_netbk_alloc_netbk(struct xenvif *vif)
-{
-	int i;
-	struct xen_netbk *netbk;
-
-	netbk = vzalloc(sizeof(struct xen_netbk));
-	if (!netbk) {
-		printk(KERN_ALERT "%s: out of memory\n", __func__);
-		return NULL;
-	}
-
-	netbk->vif = vif;
-
-	skb_queue_head_init(&netbk->rx_queue);
-	skb_queue_head_init(&netbk->tx_queue);
-
-	netbk->pending_cons = 0;
-	netbk->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		netbk->pending_ring[i] = i;
-
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		netbk->mmap_pages[i] = INVALID_ENTRY;
-
-	return netbk;
-}
-
-void xen_netbk_free_netbk(struct xen_netbk *netbk)
-{
-	vfree(netbk);
-}
-
 int xen_netbk_kthread(void *data)
 {
 	struct xenvif *vif = data;
-	struct xen_netbk *netbk = vif->netbk;
 
 	while (!kthread_should_stop()) {
 		wait_event_interruptible(vif->wq,
-					 rx_work_todo(netbk) ||
+					 rx_work_todo(vif) ||
 					 kthread_should_stop());
 		cond_resched();
 
 		if (kthread_should_stop())
 			break;
 
-		if (rx_work_todo(netbk))
-			xen_netbk_rx_action(netbk);
+		if (rx_work_todo(vif))
+			xen_netbk_rx_action(vif);
 	}
 
 	return 0;
diff --git a/drivers/net/xen-netback/page_pool.c b/drivers/net/xen-netback/page_pool.c
index 8904869..19f2a21 100644
--- a/drivers/net/xen-netback/page_pool.c
+++ b/drivers/net/xen-netback/page_pool.c
@@ -105,7 +105,7 @@ int is_in_pool(struct page *page, int *pidx)
 	return get_page_ext(page, pidx);
 }
 
-struct page *page_pool_get(struct xen_netbk *netbk, int *pidx)
+struct page *page_pool_get(struct xenvif *vif, int *pidx)
 {
 	int idx;
 	struct page *page;
@@ -121,7 +121,7 @@ struct page *page_pool_get(struct xen_netbk *netbk, int *pidx)
 	}
 
 	set_page_ext(page, idx);
-	pool[idx].u.netbk = netbk;
+	pool[idx].u.vif = vif;
 	pool[idx].page = page;
 
 	*pidx = idx;
@@ -134,7 +134,7 @@ void page_pool_put(int idx)
 	struct page *page = pool[idx].page;
 
 	pool[idx].page = NULL;
-	pool[idx].u.netbk = NULL;
+	pool[idx].u.vif = NULL;
 	page->mapping = 0;
 	put_page(page);
 	put_free_entry(idx);
@@ -177,7 +177,7 @@ struct page *to_page(int idx)
 	return pool[idx].page;
 }
 
-struct xen_netbk *to_netbk(int idx)
+struct xenvif *to_vif(int idx)
 {
-	return pool[idx].u.netbk;
+	return pool[idx].u.vif;
 }
diff --git a/drivers/net/xen-netback/page_pool.h b/drivers/net/xen-netback/page_pool.h
index 52a6fc7..9bd7c55 100644
--- a/drivers/net/xen-netback/page_pool.h
+++ b/drivers/net/xen-netback/page_pool.h
@@ -37,8 +37,8 @@ typedef uint32_t idx_t;
 struct page_pool_entry {
 	struct page *page;
 	union {
-		struct xen_netbk *netbk;
-		idx_t             fl;
+		struct xenvif *vif;
+		idx_t          fl;
 	} u;
 };
 
@@ -51,11 +51,11 @@ int  page_pool_init(void);
 void page_pool_destroy(void);
 
 
-struct page *page_pool_get(struct xen_netbk *netbk, int *pidx);
+struct page *page_pool_get(struct xenvif *vif, int *pidx);
 void         page_pool_put(int idx);
 int          is_in_pool(struct page *page, int *pidx);
 
-struct page      *to_page(int idx);
-struct xen_netbk *to_netbk(int idx);
+struct page   *to_page(int idx);
+struct xenvif *to_vif(int idx);
 
 #endif /* __PAGE_POOL_H__ */
-- 
1.7.2.5

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [RFC PATCH 6/6] netback: alter internal function/structure names.
  2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
                   ` (4 preceding siblings ...)
  2012-01-13 16:59 ` [RFC PATCH 5/6] netback: melt xen_netbk into xenvif Wei Liu
@ 2012-01-13 16:59 ` Wei Liu
  5 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-13 16:59 UTC (permalink / raw)
  To: ian.campbell, konrad.wilk, xen-devel, netdev; +Cc: Wei Liu

Since we've melted xen_netbk into xenvif, it is better to give
functions clearer names.

Also alter the NAPI poll handler function prototypes a bit.
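
In particular, xenvif_tx_action() now returns the amount of work done
instead of filling in a work_done out-parameter.  A rough sketch of the
intended NAPI usage (simplified; the real handler in the interface.c
hunk below also re-checks the ring for more requests before completing):

	static int xenvif_poll(struct napi_struct *napi, int budget)
	{
		struct xenvif *vif = container_of(napi, struct xenvif, napi);
		int work_done = xenvif_tx_action(vif, budget);

		/* Under budget means the TX ring is drained for now,
		 * so leave polling mode. */
		if (work_done < budget)
			napi_complete(napi);

		return work_done;
	}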

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/common.h    |   28 +++---
 drivers/net/xen-netback/interface.c |   20 ++--
 drivers/net/xen-netback/netback.c   |  210 ++++++++++++++++++-----------------
 3 files changed, 130 insertions(+), 128 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 6b99246..f7ec35c 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -52,7 +52,7 @@ struct pending_tx_info {
 };
 typedef unsigned int pending_ring_idx_t;
 
-struct netbk_rx_meta {
+struct xenvif_rx_meta {
 	int id;
 	int size;
 	int gso_size;
@@ -135,7 +135,7 @@ struct xenvif {
 	 * straddles two buffers in the frontend.
 	 */
 	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
-	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
+	struct xenvif_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
 };
 
 static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
@@ -156,32 +156,32 @@ void xenvif_xenbus_exit(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xen_netbk_rx_ring_full(struct xenvif *vif);
+int xenvif_rx_ring_full(struct xenvif *vif);
 
-int xen_netbk_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif *vif);
 
 /* (Un)Map communication rings. */
-void xen_netbk_unmap_frontend_rings(struct xenvif *vif);
-int xen_netbk_map_frontend_rings(struct xenvif *vif,
-				 grant_ref_t tx_ring_ref,
-				 grant_ref_t rx_ring_ref);
+void xenvif_unmap_frontend_rings(struct xenvif *vif);
+int xenvif_map_frontend_rings(struct xenvif *vif,
+			      grant_ref_t tx_ring_ref,
+			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xen_netbk_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif *vif);
 /* Receive an SKB from the frontend */
 void xenvif_receive_skb(struct xenvif *vif, struct sk_buff *skb);
 
 /* Queue an SKB for transmission to the frontend */
-void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
+void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
 /* Notify xenvif that ring now has space to send an skb to the frontend */
 void xenvif_notify_tx_completion(struct xenvif *vif);
 
 /* Returns number of ring slots required to send an skb to the frontend */
-unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
+unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
 
-void xen_netbk_tx_action(struct xenvif *vif, int *work_done, int budget);
-void xen_netbk_rx_action(struct xenvif *vif);
+int xenvif_tx_action(struct xenvif *vif, int budget);
+void xenvif_rx_action(struct xenvif *vif);
 
-int xen_netbk_kthread(void *data);
+int xenvif_kthread(void *data);
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 69184d1..a71039e 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -48,7 +48,7 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static int xenvif_rx_schedulable(struct xenvif *vif)
 {
-	return xenvif_schedulable(vif) && !xen_netbk_rx_ring_full(vif);
+	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
 }
 
 static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
@@ -72,7 +72,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 	struct xenvif *vif = container_of(napi, struct xenvif, napi);
 	int work_done = 0;
 
-	xen_netbk_tx_action(vif, &work_done, budget);
+	work_done = xenvif_tx_action(vif, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -100,12 +100,12 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 
 	/* Reserve ring slots for the worst-case number of fragments. */
-	vif->rx_req_cons_peek += xen_netbk_count_skb_slots(vif, skb);
+	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
 
-	if (vif->can_queue && xen_netbk_must_stop_queue(vif))
+	if (vif->can_queue && xenvif_must_stop_queue(vif))
 		netif_stop_queue(dev);
 
-	xen_netbk_queue_tx_skb(vif, skb);
+	xenvif_queue_tx_skb(vif, skb);
 
 	return NETDEV_TX_OK;
 
@@ -136,7 +136,7 @@ static void xenvif_up(struct xenvif *vif)
 {
 	napi_enable(&vif->napi);
 	enable_irq(vif->irq);
-	xen_netbk_check_rx_xenvif(vif);
+	xenvif_check_rx_xenvif(vif);
 }
 
 static void xenvif_down(struct xenvif *vif)
@@ -333,7 +333,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 
 	__module_get(THIS_MODULE);
 
-	err = xen_netbk_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
@@ -346,7 +346,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	disable_irq(vif->irq);
 
 	init_waitqueue_head(&vif->wq);
-	vif->task = kthread_create(xen_netbk_kthread,
+	vif->task = kthread_create(xenvif_kthread,
 				   (void *)vif,
 				   "vif%d.%d", vif->domid, vif->handle);
 	if (IS_ERR(vif->task)) {
@@ -370,7 +370,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 err_unbind:
 	unbind_from_irqhandler(vif->irq, vif);
 err_unmap:
-	xen_netbk_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(vif);
 err:
 	return err;
 }
@@ -399,7 +399,7 @@ void xenvif_disconnect(struct xenvif *vif)
 
 	unregister_netdev(vif->dev);
 
-	xen_netbk_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(vif);
 
 	free_netdev(vif->dev);
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 133ebb3..6a9b412 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -47,7 +47,7 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 
-static void xen_netbk_idx_release(struct xenvif *vif, u16 pending_idx);
+static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx);
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
@@ -115,7 +115,7 @@ static int max_required_rx_slots(struct xenvif *vif)
 	return max;
 }
 
-int xen_netbk_rx_ring_full(struct xenvif *vif)
+int xenvif_rx_ring_full(struct xenvif *vif)
 {
 	RING_IDX peek   = vif->rx_req_cons_peek;
 	RING_IDX needed = max_required_rx_slots(vif);
@@ -124,16 +124,16 @@ int xen_netbk_rx_ring_full(struct xenvif *vif)
 	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
 }
 
-int xen_netbk_must_stop_queue(struct xenvif *vif)
+int xenvif_must_stop_queue(struct xenvif *vif)
 {
-	if (!xen_netbk_rx_ring_full(vif))
+	if (!xenvif_rx_ring_full(vif))
 		return 0;
 
 	vif->rx.sring->req_event = vif->rx_req_cons_peek +
 		max_required_rx_slots(vif);
 	mb(); /* request notification /then/ check the queue */
 
-	return xen_netbk_rx_ring_full(vif);
+	return xenvif_rx_ring_full(vif);
 }
 
 /*
@@ -179,9 +179,9 @@ static bool start_new_rx_buffer(int offset, unsigned long size, int head)
 /*
  * Figure out how many ring slots we're going to need to send @skb to
  * the guest. This function is essentially a dry run of
- * netbk_gop_frag_copy.
+ * xenvif_gop_frag_copy.
  */
-unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
+unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
 {
 	unsigned int count;
 	int i, copy_off;
@@ -220,15 +220,15 @@ struct netrx_pending_operations {
 	unsigned copy_prod, copy_cons;
 	unsigned meta_prod, meta_cons;
 	struct gnttab_copy *copy;
-	struct netbk_rx_meta *meta;
+	struct xenvif_rx_meta *meta;
 	int copy_off;
 	grant_ref_t copy_gref;
 };
 
-static struct netbk_rx_meta *get_next_rx_buffer(struct xenvif *vif,
-						struct netrx_pending_operations *npo)
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+					struct netrx_pending_operations *npo)
 {
-	struct netbk_rx_meta *meta;
+	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
 	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
@@ -248,13 +248,13 @@ static struct netbk_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
-				struct netrx_pending_operations *npo,
-				struct page *page, unsigned long size,
-				unsigned long offset, int *head)
+static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+				 struct netrx_pending_operations *npo,
+				 struct page *page, unsigned long size,
+				 unsigned long offset, int *head)
 {
 	struct gnttab_copy *copy_gop;
-	struct netbk_rx_meta *meta;
+	struct xenvif_rx_meta *meta;
 	/*
 	 * These variables are used iff get_page_ext returns true,
 	 * in which case they are guaranteed to be initialized.
@@ -335,14 +335,14 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * zero GSO descriptors (for non-GSO packets) or one descriptor (for
  * frontend-side LRO).
  */
-static int netbk_gop_skb(struct sk_buff *skb,
-			 struct netrx_pending_operations *npo)
+static int xenvif_gop_skb(struct sk_buff *skb,
+			  struct netrx_pending_operations *npo)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
 	int i;
 	struct xen_netif_rx_request *req;
-	struct netbk_rx_meta *meta;
+	struct xenvif_rx_meta *meta;
 	unsigned char *data;
 	int head = 1;
 	int old_meta_prod;
@@ -379,30 +379,30 @@ static int netbk_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		netbk_gop_frag_copy(vif, skb, npo,
-				    virt_to_page(data), len, offset, &head);
+		xenvif_gop_frag_copy(vif, skb, npo,
+				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		netbk_gop_frag_copy(vif, skb, npo,
-				    skb_frag_page(&skb_shinfo(skb)->frags[i]),
-				    skb_frag_size(&skb_shinfo(skb)->frags[i]),
-				    skb_shinfo(skb)->frags[i].page_offset,
-				    &head);
+		xenvif_gop_frag_copy(vif, skb, npo,
+				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
+				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
+				     skb_shinfo(skb)->frags[i].page_offset,
+				     &head);
 	}
 
 	return npo->meta_prod - old_meta_prod;
 }
 
 /*
- * This is a twin to netbk_gop_skb.  Assume that netbk_gop_skb was
+ * This is a twin to xenvif_gop_skb.  Assume that xenvif_gop_skb was
  * used to set up the operations on the top of
  * netrx_pending_operations, which have since been done.  Check that
  * they didn't give any errors and advance over them.
  */
-static int netbk_check_gop(struct xenvif *vif, int nr_meta_slots,
-			   struct netrx_pending_operations *npo)
+static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
+			    struct netrx_pending_operations *npo)
 {
 	struct gnttab_copy     *copy_op;
 	int status = XEN_NETIF_RSP_OKAY;
@@ -421,9 +421,9 @@ static int netbk_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void netbk_add_frag_responses(struct xenvif *vif, int status,
-				     struct netbk_rx_meta *meta,
-				     int nr_meta_slots)
+static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+				      struct xenvif_rx_meta *meta,
+				      int nr_meta_slots)
 {
 	int i;
 	unsigned long offset;
@@ -451,12 +451,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-static void xen_netbk_kick_thread(struct xenvif *vif)
+static void xenvif_kick_thread(struct xenvif *vif)
 {
 	wake_up(&vif->wq);
 }
 
-void xen_netbk_rx_action(struct xenvif *vif)
+void xenvif_rx_action(struct xenvif *vif)
 {
 	s8 status;
 	u16 flags;
@@ -485,7 +485,7 @@ void xen_netbk_rx_action(struct xenvif *vif)
 		nr_frags = skb_shinfo(skb)->nr_frags;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
 
 		count += nr_frags + 1;
 
@@ -529,7 +529,7 @@ void xen_netbk_rx_action(struct xenvif *vif)
 		vif->dev->stats.tx_bytes += skb->len;
 		vif->dev->stats.tx_packets++;
 
-		status = netbk_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -565,7 +565,7 @@ void xen_netbk_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		netbk_add_frag_responses(vif, status,
+		xenvif_add_frag_responses(vif, status,
 					 vif->meta + npo.meta_cons + 1,
 					 sco->meta_slots_used);
 
@@ -583,17 +583,17 @@ void xen_netbk_rx_action(struct xenvif *vif)
 		notify_remote_via_irq(vif->irq);
 
 	if (!skb_queue_empty(&vif->rx_queue))
-		xen_netbk_kick_thread(vif);
+		xenvif_kick_thread(vif);
 }
 
-void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
+void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
 {
 	skb_queue_tail(&vif->rx_queue, skb);
 
-	xen_netbk_kick_thread(vif);
+	xenvif_kick_thread(vif);
 }
 
-void xen_netbk_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif *vif)
 {
 	int more_to_do;
 
@@ -630,11 +630,11 @@ static void tx_credit_callback(unsigned long data)
 {
 	struct xenvif *vif = (struct xenvif *)data;
 	tx_add_credit(vif);
-	xen_netbk_check_rx_xenvif(vif);
+	xenvif_check_rx_xenvif(vif);
 }
 
-static void netbk_tx_err(struct xenvif *vif,
-			 struct xen_netif_tx_request *txp, RING_IDX end)
+static void xenvif_tx_err(struct xenvif *vif,
+			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
 
@@ -645,10 +645,10 @@ static void netbk_tx_err(struct xenvif *vif,
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
 	} while (1);
 	vif->tx.req_cons = cons;
-	xen_netbk_check_rx_xenvif(vif);
+	xenvif_check_rx_xenvif(vif);
 }
 
-static int netbk_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif *vif,
 				struct xen_netif_tx_request *first,
 				struct xen_netif_tx_request *txp,
 				int work_to_do)
@@ -689,9 +689,9 @@ static int netbk_count_requests(struct xenvif *vif,
 	return frags;
 }
 
-static struct page *xen_netbk_alloc_page(struct xenvif *vif,
-					 struct sk_buff *skb,
-					 u16 pending_idx)
+static struct page *xenvif_alloc_page(struct xenvif *vif,
+				      struct sk_buff *skb,
+				      u16 pending_idx)
 {
 	struct page *page;
 	int idx;
@@ -702,10 +702,10 @@ static struct page *xen_netbk_alloc_page(struct xenvif *vif,
 	return page;
 }
 
-static struct gnttab_copy *xen_netbk_get_requests(struct xenvif *vif,
-						  struct sk_buff *skb,
-						  struct xen_netif_tx_request *txp,
-						  struct gnttab_copy *gop)
+static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+					       struct sk_buff *skb,
+					       struct xen_netif_tx_request *txp,
+					       struct gnttab_copy *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -723,7 +723,7 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xenvif *vif,
 
 		index = pending_index(vif->pending_cons++);
 		pending_idx = vif->pending_ring[index];
-		page = xen_netbk_alloc_page(vif, skb, pending_idx);
+		page = xenvif_alloc_page(vif, skb, pending_idx);
 		if (!page)
 			return NULL;
 
@@ -747,9 +747,9 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xenvif *vif,
 	return gop;
 }
 
-static int xen_netbk_tx_check_gop(struct xenvif *vif,
-				  struct sk_buff *skb,
-				  struct gnttab_copy **gopp)
+static int xenvif_tx_check_gop(struct xenvif *vif,
+			       struct sk_buff *skb,
+			       struct gnttab_copy **gopp)
 {
 	struct gnttab_copy *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
@@ -783,7 +783,7 @@ static int xen_netbk_tx_check_gop(struct xenvif *vif,
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xen_netbk_idx_release(vif, pending_idx);
+				xenvif_idx_release(vif, pending_idx);
 			continue;
 		}
 
@@ -799,10 +799,10 @@ static int xen_netbk_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xen_netbk_idx_release(vif, pending_idx);
+		xenvif_idx_release(vif, pending_idx);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xen_netbk_idx_release(vif, pending_idx);
+			xenvif_idx_release(vif, pending_idx);
 		}
 
 		/* Remember the error: invalidate all subsequent fragments. */
@@ -813,7 +813,7 @@ static int xen_netbk_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xen_netbk_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -834,15 +834,15 @@ static void xen_netbk_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xen_netbk_idx_release */
+		/* Take an extra reference to offset xenvif_idx_release */
 		get_page(to_page(vif->mmap_pages[pending_idx]));
-		xen_netbk_idx_release(vif, pending_idx);
+		xenvif_idx_release(vif, pending_idx);
 	}
 }
 
-static int xen_netbk_get_extras(struct xenvif *vif,
-				struct xen_netif_extra_info *extras,
-				int work_to_do)
+static int xenvif_get_extras(struct xenvif *vif,
+			     struct xen_netif_extra_info *extras,
+			     int work_to_do)
 {
 	struct xen_netif_extra_info extra;
 	RING_IDX cons = vif->tx.req_cons;
@@ -870,9 +870,9 @@ static int xen_netbk_get_extras(struct xenvif *vif,
 	return work_to_do;
 }
 
-static int netbk_set_skb_gso(struct xenvif *vif,
-			     struct sk_buff *skb,
-			     struct xen_netif_extra_info *gso)
+static int xenvif_set_skb_gso(struct xenvif *vif,
+			      struct sk_buff *skb,
+			      struct xen_netif_extra_info *gso)
 {
 	if (!gso->u.gso.size) {
 		netdev_dbg(vif->dev, "GSO size must not be zero.\n");
@@ -997,7 +997,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
+static unsigned xenvif_tx_build_gops(struct xenvif *vif)
 {
 	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
@@ -1036,18 +1036,18 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xen_netbk_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(vif, extras,
 							  work_to_do);
 			idx = vif->tx.req_cons;
 			if (unlikely(work_to_do < 0)) {
-				netbk_tx_err(vif, &txreq, idx);
+				xenvif_tx_err(vif, &txreq, idx);
 				break;
 			}
 		}
 
-		ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0)) {
-			netbk_tx_err(vif, &txreq, idx - ret);
+			xenvif_tx_err(vif, &txreq, idx - ret);
 			break;
 		}
 		idx += ret;
@@ -1055,7 +1055,7 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 		if (unlikely(txreq.size < ETH_HLEN)) {
 			netdev_dbg(vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			netbk_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(vif, &txreq, idx);
 			break;
 		}
 
@@ -1065,7 +1065,7 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			netbk_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(vif, &txreq, idx);
 			break;
 		}
 
@@ -1081,7 +1081,7 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 		if (unlikely(skb == NULL)) {
 			netdev_dbg(vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			netbk_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(vif, &txreq, idx);
 			break;
 		}
 
@@ -1092,18 +1092,18 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (netbk_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(vif, skb, gso)) {
 				kfree_skb(skb);
-				netbk_tx_err(vif, &txreq, idx);
+				xenvif_tx_err(vif, &txreq, idx);
 				break;
 			}
 		}
 
 		/* XXX could copy straight to head */
-		page = xen_netbk_alloc_page(vif, skb, pending_idx);
+		page = xenvif_alloc_page(vif, skb, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			netbk_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(vif, &txreq, idx);
 			break;
 		}
 
@@ -1140,17 +1140,17 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 
 		vif->pending_cons++;
 
-		request_gop = xen_netbk_get_requests(vif,
+		request_gop = xenvif_get_requests(vif,
 						     skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			netbk_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(vif, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
 		vif->tx.req_cons = idx;
-		xen_netbk_check_rx_xenvif(vif);
+		xenvif_check_rx_xenvif(vif);
 
 		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
 			break;
@@ -1159,13 +1159,13 @@ static unsigned xen_netbk_tx_build_gops(struct xenvif *vif)
 	return gop - vif->tx_copy_ops;
 }
 
-static void xen_netbk_tx_submit(struct xenvif *vif,
-				int *work_done, int budget)
+static int xenvif_tx_submit(struct xenvif *vif, int budget)
 {
 	struct gnttab_copy *gop = vif->tx_copy_ops;
 	struct sk_buff *skb;
+	int work_done = 0;
 
-	while ((*work_done < budget) &&
+	while ((work_done < budget) &&
 	       (skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
@@ -1175,7 +1175,7 @@ static void xen_netbk_tx_submit(struct xenvif *vif,
 		txp = &vif->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xen_netbk_tx_check_gop(vif, skb, &gop))) {
+		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
 			netdev_dbg(vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
@@ -1192,7 +1192,7 @@ static void xen_netbk_tx_submit(struct xenvif *vif,
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xen_netbk_idx_release(vif, pending_idx);
+			xenvif_idx_release(vif, pending_idx);
 		}
 
 		if (txp->flags & XEN_NETTXF_csum_blank)
@@ -1200,7 +1200,7 @@ static void xen_netbk_tx_submit(struct xenvif *vif,
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xen_netbk_fill_frags(vif, skb);
+		xenvif_fill_frags(vif, skb);
 
 		/*
 		 * If the initial fragment was < PKT_PROT_LEN then
@@ -1225,33 +1225,35 @@ static void xen_netbk_tx_submit(struct xenvif *vif,
 		vif->dev->stats.rx_bytes += skb->len;
 		vif->dev->stats.rx_packets++;
 
-		(*work_done)++;
+		work_done++;
 
 		xenvif_receive_skb(vif, skb);
 	}
+
+	return work_done;
 }
 
 /* Called after netfront has transmitted */
-void xen_netbk_tx_action(struct xenvif *vif, int *work_done, int budget)
+int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
 	int ret;
 
 	if (unlikely(!tx_work_todo(vif)))
-		return;
+		return 0;
 
-	nr_gops = xen_netbk_tx_build_gops(vif);
+	nr_gops = xenvif_tx_build_gops(vif);
 
 	if (nr_gops == 0)
-		return;
+		return 0;
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
 					vif->tx_copy_ops, nr_gops);
 	BUG_ON(ret);
 
-	xen_netbk_tx_submit(vif, work_done, budget);
+	return xenvif_tx_submit(vif, budget);
 }
 
-static void xen_netbk_idx_release(struct xenvif *vif, u16 pending_idx)
+static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
@@ -1330,7 +1332,7 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
-void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
 		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
@@ -1340,9 +1342,9 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
 					vif->rx.sring);
 }
 
-int xen_netbk_map_frontend_rings(struct xenvif *vif,
-				 grant_ref_t tx_ring_ref,
-				 grant_ref_t rx_ring_ref)
+int xenvif_map_frontend_rings(struct xenvif *vif,
+			      grant_ref_t tx_ring_ref,
+			      grant_ref_t rx_ring_ref)
 {
 	void *addr;
 	struct xen_netif_tx_sring *txs;
@@ -1371,11 +1373,11 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
 	return 0;
 
 err:
-	xen_netbk_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(vif);
 	return err;
 }
 
-int xen_netbk_kthread(void *data)
+int xenvif_kthread(void *data)
 {
 	struct xenvif *vif = data;
 
@@ -1389,7 +1391,7 @@ int xen_netbk_kthread(void *data)
 			break;
 
 		if (rx_work_todo(vif))
-			xen_netbk_rx_action(vif);
+			xenvif_rx_action(vif);
 	}
 
 	return 0;
-- 
1.7.2.5

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 1/6] netback: page pool version 1
  2012-01-13 16:59 ` [RFC PATCH 1/6] netback: page pool version 1 Wei Liu
@ 2012-01-13 17:37   ` Konrad Rzeszutek Wilk
  2012-01-16  9:31       ` Wei Liu
  0 siblings, 1 reply; 30+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-01-13 17:37 UTC (permalink / raw)
  To: Wei Liu; +Cc: ian.campbell, xen-devel, netdev

> +static idx_t free_head;
> +static int free_count;
> +static unsigned long pool_size;
> +static DEFINE_SPINLOCK(pool_lock);
> +static struct page_pool_entry *pool;
> +
> +static int get_free_entry(void)
> +{
> +	unsigned long flag;
> +	int idx;
> +
> +	spin_lock_irqsave(&pool_lock, flag);

What is the benefit of using the irq version of the spinlock instead
of the normal one??

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Xen-devel] [RFC PATCH 2/6] netback: add module unload function.
  2012-01-13 16:59 ` [RFC PATCH 2/6] netback: add module unload function Wei Liu
@ 2012-01-13 17:57   ` David Vrabel
  2012-01-16  9:31       ` Wei Liu
  2012-01-13 18:47   ` David Vrabel
  1 sibling, 1 reply; 30+ messages in thread
From: David Vrabel @ 2012-01-13 17:57 UTC (permalink / raw)
  To: Wei Liu; +Cc: ian.campbell, konrad.wilk, xen-devel, netdev

On 13/01/12 16:59, Wei Liu wrote:
> Enables users to unload netback module.
[...]
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 26af7b7..dd10c0d 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -1653,5 +1653,19 @@ failed_init:
>  
>  module_init(netback_init);
>  
> +static void __exit netback_exit(void)
> +{
> +	int i;
> +	for (i = 0; i < xen_netbk_group_nr; i++) {
> +		struct xen_netbk *netbk = &xen_netbk[i];
> +		del_timer(&netbk->net_timer);

This needs to be del_timer_sync().

> +		kthread_stop(netbk->task);
> +	}
> +	vfree(xen_netbk);
> +	page_pool_destroy();
> +	xenvif_xenbus_exit();
> +}

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-13 16:59 ` [RFC PATCH 3/6] netback: switch to NAPI + kthread model Wei Liu
@ 2012-01-13 18:21     ` David Vrabel
  2012-01-16 10:14     ` Paul Durrant
  1 sibling, 0 replies; 30+ messages in thread
From: David Vrabel @ 2012-01-13 18:21 UTC (permalink / raw)
  To: Wei Liu; +Cc: ian.campbell, konrad.wilk, xen-devel, netdev

On 13/01/12 16:59, Wei Liu wrote:
> This patch implements 1:1 model netback. We utilizes NAPI and kthread
> to do the weight-lifting job:
> 
>   - NAPI is used for guest side TX (host side RX)
>   - kthread is used for guest side RX (host side TX)
> 
> This model provides better scheduling fairness among vifs. It also
> lays the foundation for future work.
> 
> The major defect for the current implementation is that in the NAPI
> poll handler we don't actually disable interrupt. Xen stuff is
> different from real hardware, it requires some other tuning of ring
> macros.

RING_FINAL_CHECK_FOR_REQUESTS() looks like it does the correct thing to me.

David

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Xen-devel] [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect.
  2012-01-13 16:59 ` [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect Wei Liu
@ 2012-01-13 18:44   ` David Vrabel
  2012-01-16  9:43       ` Wei Liu
  0 siblings, 1 reply; 30+ messages in thread
From: David Vrabel @ 2012-01-13 18:44 UTC (permalink / raw)
  To: Wei Liu; +Cc: ian.campbell, konrad.wilk, xen-devel, netdev

On 13/01/12 16:59, Wei Liu wrote:
> If there is a vif running and the user unloads netback, it will certainly
> cause problems.

Is this necessary?  As part of module unload netback_remove() will be
called and this will clean everything correctly, yes?

> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |    4 ++++
>  1 files changed, 4 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 93cb212..3126028 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -323,6 +323,8 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  	if (vif->irq)
>  		return 0;
>  
> +	__module_get(THIS_MODULE);
> +
>  	err = xen_netbk_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
> @@ -405,4 +407,6 @@ void xenvif_disconnect(struct xenvif *vif)
>  	xen_netbk_unmap_frontend_rings(vif);
>  
>  	free_netdev(vif->dev);
> +
> +	module_put(THIS_MODULE);
>  }

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Xen-devel] [RFC PATCH 2/6] netback: add module unload function.
  2012-01-13 16:59 ` [RFC PATCH 2/6] netback: add module unload function Wei Liu
  2012-01-13 17:57   ` [Xen-devel] " David Vrabel
@ 2012-01-13 18:47   ` David Vrabel
  1 sibling, 0 replies; 30+ messages in thread
From: David Vrabel @ 2012-01-13 18:47 UTC (permalink / raw)
  To: Wei Liu; +Cc: ian.campbell, konrad.wilk, xen-devel, netdev

On 13/01/12 16:59, Wei Liu wrote:
> Enables users to unload netback module.
[...]
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 26af7b7..dd10c0d 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -1653,5 +1653,19 @@ failed_init:
>  
>  module_init(netback_init);
>  
> +static void __exit netback_exit(void)
> +{
> +	int i;
> +	for (i = 0; i < xen_netbk_group_nr; i++) {
> +		struct xen_netbk *netbk = &xen_netbk[i];
> +		del_timer(&netbk->net_timer);
> +		kthread_stop(netbk->task);
> +	}
> +	vfree(xen_netbk);
> +	page_pool_destroy();
> +	xenvif_xenbus_exit();

I think you need to call xenvif_xenbus_exit() first, before cleaning up
all the other bits and pieces.

> +}
> +module_exit(netback_exit);
> +
>  MODULE_LICENSE("Dual BSD/GPL");
>  MODULE_ALIAS("xen-backend:vif");
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index 410018c..65d14f2 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -485,3 +485,8 @@ int xenvif_xenbus_init(void)
>  {
>  	return xenbus_register_backend(&netback_driver);
>  }
> +
> +void xenvif_xenbus_exit(void)
> +{
> +	return xenbus_unregister_driver(&netback_driver);
> +}

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 1/6] netback: page pool version 1
  2012-01-13 17:37   ` Konrad Rzeszutek Wilk
@ 2012-01-16  9:31       ` Wei Liu
  0 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-16  9:31 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: wei.liu2, Ian Campbell, xen-devel, netdev

On Fri, 2012-01-13 at 17:37 +0000, Konrad Rzeszutek Wilk wrote:
> > +static idx_t free_head;
> > +static int free_count;
> > +static unsigned long pool_size;
> > +static DEFINE_SPINLOCK(pool_lock);
> > +static struct page_pool_entry *pool;
> > +
> > +static int get_free_entry(void)
> > +{
> > +	unsigned long flag;
> > +	int idx;
> > +
> > +	spin_lock_irqsave(&pool_lock, flag);
> 
> What is the benefit of using the irq version of the spinlock instead
> of the normal one??
> 

This should be a vestige of earlier iterations; fixed.
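
For reference, the difference only matters if the lock can also be taken
from interrupt context; a hypothetical illustration (not the pool code):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	/* Needed if example_lock may also be taken from an interrupt
	 * handler: saves and disables local interrupts around the
	 * critical section. */
	static void take_lock_any_context(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&example_lock, flags);
		/* ... critical section ... */
		spin_unlock_irqrestore(&example_lock, flags);
	}

	/* Sufficient when the lock is only ever taken from process
	 * context, as appears to be the case for the page pool here. */
	static void take_lock_process_context(void)
	{
		spin_lock(&example_lock);
		/* ... critical section ... */
		spin_unlock(&example_lock);
	}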

Thanks
Wei.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Xen-devel] [RFC PATCH 2/6] netback: add module unload function.
  2012-01-13 17:57   ` [Xen-devel] " David Vrabel
@ 2012-01-16  9:31       ` Wei Liu
  0 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-16  9:31 UTC (permalink / raw)
  To: David Vrabel; +Cc: wei.liu2, Ian Campbell, konrad.wilk, xen-devel, netdev

On Fri, 2012-01-13 at 17:57 +0000, David Vrabel wrote:
> On 13/01/12 16:59, Wei Liu wrote:
> > Enables users to unload netback module.
> [...]
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index 26af7b7..dd10c0d 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -1653,5 +1653,19 @@ failed_init:
> >  
> >  module_init(netback_init);
> >  
> > +static void __exit netback_exit(void)
> > +{
> > +	int i;
> > +	for (i = 0; i < xen_netbk_group_nr; i++) {
> > +		struct xen_netbk *netbk = &xen_netbk[i];
> > +		del_timer(&netbk->net_timer);
> 
> This needs to be del_timer_sync().
> 
> > +		kthread_stop(netbk->task);
> > +	}
> > +	vfree(xen_netbk);
> > +	page_pool_destroy();
> > +	xenvif_xenbus_exit();
> > +}

Both fixed.
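
For illustration, the reworked exit path would look roughly like this
(sketch only; it also folds in the suggestion elsewhere in the thread
to unregister the xenbus driver before tearing down the rest):

static void __exit netback_exit(void)
{
	int i;

	/* Unregister the xenbus backend first so no new vifs can
	 * appear while the rest is being torn down. */
	xenvif_xenbus_exit();

	for (i = 0; i < xen_netbk_group_nr; i++) {
		struct xen_netbk *netbk = &xen_netbk[i];
		del_timer_sync(&netbk->net_timer);
		kthread_stop(netbk->task);
	}

	vfree(xen_netbk);
	page_pool_destroy();
}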

Thanks
Wei.

* Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-13 18:21     ` David Vrabel
@ 2012-01-16  9:33       ` Wei Liu
  -1 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-16  9:33 UTC (permalink / raw)
  To: David Vrabel; +Cc: wei.liu2, Ian Campbell, konrad.wilk, xen-devel, netdev

On Fri, 2012-01-13 at 18:21 +0000, David Vrabel wrote:
> On 13/01/12 16:59, Wei Liu wrote:
> > This patch implements 1:1 model netback. We utilizes NAPI and kthread
> > to do the weight-lifting job:
> > 
> >   - NAPI is used for guest side TX (host side RX)
> >   - kthread is used for guest side RX (host side TX)
> > 
> > This model provides better scheduling fairness among vifs. It also
> > lays the foundation for future work.
> > 
> > The major defect for the current implementation is that in the NAPI
> > poll handler we don't actually disable interrupt. Xen stuff is
> > different from real hardware, it requires some other tuning of ring
> > macros.
> 
> RING_FINAL_CHECK_FOR_REQUESTS() looks it does the correct thing to me.
> 
> David

I need to stop the other end from generating events, so I don't think
RING_FINAL_CHECK_FOR_REQUESTS is the right answer.


Wei.

* Re: [Xen-devel] [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect.
  2012-01-13 18:44   ` [Xen-devel] " David Vrabel
@ 2012-01-16  9:43       ` Wei Liu
  0 siblings, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-16  9:43 UTC (permalink / raw)
  To: David Vrabel; +Cc: wei.liu2, Ian Campbell, konrad.wilk, xen-devel, netdev

On Fri, 2012-01-13 at 18:44 +0000, David Vrabel wrote:
> On 13/01/12 16:59, Wei Liu wrote:
> > If there is vif running and user unloads netback, it will certainly
> > cause problems.
> 
> Is this necessary?  As part of module unload netback_remove() will be
> called and this will clean everything correctly, yes?
> 

You're right from the host's perspective: everything gets cleaned up.

But from the guest's perspective, its network interface just
mysteriously stops working.

So the "problems" in the statement above refer to the guest side; I
will make the commit message clearer.
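
For reference, the mechanism in this patch is simply to pin the module
for as long as a vif is connected. A minimal sketch of the idea, with
hypothetical helper names:

#include <linux/module.h>

/* called from the vif connect path */
static int xenvif_pin_module(void)
{
	/* refuse the connect if the module is already being unloaded */
	return try_module_get(THIS_MODULE) ? 0 : -ENODEV;
}

/* called from the vif disconnect path */
static void xenvif_unpin_module(void)
{
	module_put(THIS_MODULE);
}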


Wei.

* RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-13 16:59 ` [RFC PATCH 3/6] netback: switch to NAPI + kthread model Wei Liu
@ 2012-01-16 10:14     ` Paul Durrant
  2012-01-16 10:14     ` Paul Durrant
  1 sibling, 0 replies; 30+ messages in thread
From: Paul Durrant @ 2012-01-16 10:14 UTC (permalink / raw)
  To: Wei Liu (Intern), Ian Campbell, konrad.wilk, xen-devel, netdev
  Cc: Wei Liu (Intern)

> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-
> bounces@lists.xensource.com] On Behalf Of Wei Liu
> Sent: 13 January 2012 16:59
> To: Ian Campbell; konrad.wilk@oracle.com; xen-
> devel@lists.xensource.com; netdev@vger.kernel.org
> Cc: Wei Liu (Intern)
> Subject: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> model
> 
> This patch implements 1:1 model netback. We utilizes NAPI and kthread to
> do the weight-lifting job:
> 
>   - NAPI is used for guest side TX (host side RX)
>   - kthread is used for guest side RX (host side TX)
> 
> This model provides better scheduling fairness among vifs. It also lays the
> foundation for future work.
> 
> The major defect for the current implementation is that in the NAPI poll
> handler we don't actually disable interrupt. Xen stuff is different from real
> hardware, it requires some other tuning of ring macros.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
[snip]
> 
>  	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
>  	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS]; @@ -100,42
> +91,14 @@ struct xen_netbk {
>  	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];  };
> 

Keeping these big inline arrays might cause scalability issues. pending_tx_info should arguably be more closely tied in and possibly implemented within your page pool code.
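
Something along these lines, perhaps (the existing page_pool_entry
layout isn't visible in this hunk, so the fields shown are purely
illustrative):

struct page_pool_entry {
	struct page *page;
	struct pending_tx_info tx_info;	/* 1:1 with the pooled page */
	/* free-list link, owning vif, etc. */
};

i.e. netback would reach the pending_tx_info via the pool index
associated with the page, rather than via a big per-group array.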

  Paul

* RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16 10:14     ` Paul Durrant
  (?)
@ 2012-01-16 10:31     ` Ian Campbell
  -1 siblings, 0 replies; 30+ messages in thread
From: Ian Campbell @ 2012-01-16 10:31 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Wei Liu (Intern), konrad.wilk, xen-devel, netdev

On Mon, 2012-01-16 at 10:14 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-
> > bounces@lists.xensource.com] On Behalf Of Wei Liu
> > Sent: 13 January 2012 16:59
> > To: Ian Campbell; konrad.wilk@oracle.com; xen-
> > devel@lists.xensource.com; netdev@vger.kernel.org
> > Cc: Wei Liu (Intern)
> > Subject: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> > model
> > 
> > This patch implements 1:1 model netback. We utilizes NAPI and kthread to
> > do the weight-lifting job:
> > 
> >   - NAPI is used for guest side TX (host side RX)
> >   - kthread is used for guest side RX (host side TX)
> > 
> > This model provides better scheduling fairness among vifs. It also lays the
> > foundation for future work.
> > 
> > The major defect for the current implementation is that in the NAPI poll
> > handler we don't actually disable interrupt. Xen stuff is different from real
> > hardware, it requires some other tuning of ring macros.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> [snip]
> > 
> >  	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
> >  	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS]; @@ -100,42
> > +91,14 @@ struct xen_netbk {
> >  	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];  };
> > 
> 
> Keeping these big inline arrays might cause scalability issues.
> pending_tx_info should arguably me more closely tied in and possibly
> implemented within your page pool code.

For pending_tx_info that probably makes sense since there is a 1:1
mapping between page pool entries and pending_tx_info.

For some of the others the arrays are the runtime scratch space used by
the netback during each processing pass. Since, regardless of the number
of VIFs, there can only ever be nr_online_cpus netbacks active at once,
perhaps per-CPU scratch space (with appropriate locking etc.) is the way
to go.
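
Something like this, perhaps (rough sketch only; the struct and
variable names are made up):

struct xen_netbk_scratch {
	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
	/* ... and the other per-pass arrays ... */
};

static DEFINE_PER_CPU(struct xen_netbk_scratch, netbk_scratch);

with each processing pass bracketing its use of the arrays in
get_cpu_var(netbk_scratch) / put_cpu_var(netbk_scratch), or taking an
explicit per-CPU lock for the duration of the pass.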

Ian.


* Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16  9:33       ` Wei Liu
  (?)
@ 2012-01-16 10:45       ` Ian Campbell
  2012-01-16 10:49           ` Wei Liu
  2012-01-16 10:56         ` Paul Durrant
  -1 siblings, 2 replies; 30+ messages in thread
From: Ian Campbell @ 2012-01-16 10:45 UTC (permalink / raw)
  To: Wei Liu (Intern); +Cc: David Vrabel, konrad.wilk, xen-devel, netdev

On Mon, 2012-01-16 at 09:33 +0000, Wei Liu (Intern) wrote:
> On Fri, 2012-01-13 at 18:21 +0000, David Vrabel wrote:
> > On 13/01/12 16:59, Wei Liu wrote:
> > > This patch implements 1:1 model netback. We utilizes NAPI and kthread
> > > to do the weight-lifting job:
> > > 
> > >   - NAPI is used for guest side TX (host side RX)
> > >   - kthread is used for guest side RX (host side TX)
> > > 
> > > This model provides better scheduling fairness among vifs. It also
> > > lays the foundation for future work.
> > > 
> > > The major defect for the current implementation is that in the NAPI
> > > poll handler we don't actually disable interrupt. Xen stuff is
> > > different from real hardware, it requires some other tuning of ring
> > > macros.
> > 
> > RING_FINAL_CHECK_FOR_REQUESTS() looks it does the correct thing to me.
> > 
> > David
> 
> I need to stop the other end from generating events, so
> RING_FINAL_CHECK_FOR_REQUESTS is not the right answer I think.

What you need is a variant which sets req_event some large distance into
the future instead of to just req_cons + 1. Or possibly it should be set
to just in the past (e.g. req_cons - 1). Call it something like
RING_POLL_FOR_REQUESTS().
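
Roughly (sketch only, following the style of the existing ring.h
macros):

#define RING_POLL_FOR_REQUESTS(_r, _work_to_do) do {			\
	/* Park req_event just behind req_cons so the frontend		\
	 * never crosses it and therefore never sends an event. */	\
	(_r)->sring->req_event = (_r)->req_cons - 1;			\
	mb();								\
	(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);		\
} while (0)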

Ian.

> 
> 
> Wei.
> 


* Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16 10:45       ` Ian Campbell
@ 2012-01-16 10:49           ` Wei Liu
  2012-01-16 10:56         ` Paul Durrant
  1 sibling, 0 replies; 30+ messages in thread
From: Wei Liu @ 2012-01-16 10:49 UTC (permalink / raw)
  To: Ian Campbell; +Cc: wei.liu2, David Vrabel, konrad.wilk, xen-devel, netdev

On Mon, 2012-01-16 at 10:45 +0000, Ian Campbell wrote:
> On Mon, 2012-01-16 at 09:33 +0000, Wei Liu (Intern) wrote:
> > On Fri, 2012-01-13 at 18:21 +0000, David Vrabel wrote:
> > > On 13/01/12 16:59, Wei Liu wrote:
> > > > This patch implements 1:1 model netback. We utilizes NAPI and kthread
> > > > to do the weight-lifting job:
> > > > 
> > > >   - NAPI is used for guest side TX (host side RX)
> > > >   - kthread is used for guest side RX (host side TX)
> > > > 
> > > > This model provides better scheduling fairness among vifs. It also
> > > > lays the foundation for future work.
> > > > 
> > > > The major defect for the current implementation is that in the NAPI
> > > > poll handler we don't actually disable interrupt. Xen stuff is
> > > > different from real hardware, it requires some other tuning of ring
> > > > macros.
> > > 
> > > RING_FINAL_CHECK_FOR_REQUESTS() looks it does the correct thing to me.
> > > 
> > > David
> > 
> > I need to stop the other end from generating events, so
> > RING_FINAL_CHECK_FOR_REQUESTS is not the right answer I think.
> 
> What you need is a variant which sets req_event some large distance into
> the future instead of to just req_cons + 1. Or possibly it should be set
> to just in the past (e.g. req_cons - 1). Call it something like
> RING_POLL_FOR_REQUESTS().
> 

Seems like the right direction; I will try this.


Wei.

> Ian.
> 
> > 
> > 
> > Wei.
> > 
> 
> 

* RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16 10:45       ` Ian Campbell
  2012-01-16 10:49           ` Wei Liu
@ 2012-01-16 10:56         ` Paul Durrant
  2012-01-16 11:09           ` Ian Campbell
  1 sibling, 1 reply; 30+ messages in thread
From: Paul Durrant @ 2012-01-16 10:56 UTC (permalink / raw)
  To: Ian Campbell, Wei Liu (Intern)
  Cc: netdev, xen-devel, David Vrabel, konrad.wilk

> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-
> bounces@lists.xensource.com] On Behalf Of Ian Campbell
> Sent: 16 January 2012 10:45
> To: Wei Liu (Intern)
> Cc: netdev@vger.kernel.org; xen-devel@lists.xensource.com; David Vrabel;
> konrad.wilk@oracle.com
> Subject: Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> model
> 
> On Mon, 2012-01-16 at 09:33 +0000, Wei Liu (Intern) wrote:
> > On Fri, 2012-01-13 at 18:21 +0000, David Vrabel wrote:
> > > On 13/01/12 16:59, Wei Liu wrote:
> > > > This patch implements 1:1 model netback. We utilizes NAPI and
> > > > kthread to do the weight-lifting job:
> > > >
> > > >   - NAPI is used for guest side TX (host side RX)
> > > >   - kthread is used for guest side RX (host side TX)
> > > >
> > > > This model provides better scheduling fairness among vifs. It also
> > > > lays the foundation for future work.
> > > >
> > > > The major defect for the current implementation is that in the
> > > > NAPI poll handler we don't actually disable interrupt. Xen stuff
> > > > is different from real hardware, it requires some other tuning of
> > > > ring macros.
> > >
> > > RING_FINAL_CHECK_FOR_REQUESTS() looks it does the correct thing to
> me.
> > >
> > > David
> >
> > I need to stop the other end from generating events, so
> > RING_FINAL_CHECK_FOR_REQUESTS is not the right answer I think.
> 
> What you need is a variant which sets req_event some large distance into
> the future instead of to just req_cons + 1. Or possibly it should be set to just
> in the past (e.g. req_cons - 1). Call it something like
> RING_POLL_FOR_REQUESTS().
> 

Can you simply avoid calling RING_FINAL_CHECK_FOR_REQUESTS() unless you actually want to re-enable 'interrupts'? All it does is manipulate the event pointer and tell you whether there are still unconsumed requests.

  Paul


* RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16 10:56         ` Paul Durrant
@ 2012-01-16 11:09           ` Ian Campbell
  2012-01-16 11:46             ` David Vrabel
  0 siblings, 1 reply; 30+ messages in thread
From: Ian Campbell @ 2012-01-16 11:09 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu (Intern), netdev, xen-devel, David Vrabel, konrad.wilk

On Mon, 2012-01-16 at 10:56 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-
> > bounces@lists.xensource.com] On Behalf Of Ian Campbell
> > Sent: 16 January 2012 10:45
> > To: Wei Liu (Intern)
> > Cc: netdev@vger.kernel.org; xen-devel@lists.xensource.com; David Vrabel;
> > konrad.wilk@oracle.com
> > Subject: Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> > model
> > 
> > On Mon, 2012-01-16 at 09:33 +0000, Wei Liu (Intern) wrote:
> > > On Fri, 2012-01-13 at 18:21 +0000, David Vrabel wrote:
> > > > On 13/01/12 16:59, Wei Liu wrote:
> > > > > This patch implements 1:1 model netback. We utilizes NAPI and
> > > > > kthread to do the weight-lifting job:
> > > > >
> > > > >   - NAPI is used for guest side TX (host side RX)
> > > > >   - kthread is used for guest side RX (host side TX)
> > > > >
> > > > > This model provides better scheduling fairness among vifs. It also
> > > > > lays the foundation for future work.
> > > > >
> > > > > The major defect for the current implementation is that in the
> > > > > NAPI poll handler we don't actually disable interrupt. Xen stuff
> > > > > is different from real hardware, it requires some other tuning of
> > > > > ring macros.
> > > >
> > > > RING_FINAL_CHECK_FOR_REQUESTS() looks it does the correct thing to
> > me.
> > > >
> > > > David
> > >
> > > I need to stop the other end from generating events, so
> > > RING_FINAL_CHECK_FOR_REQUESTS is not the right answer I think.
> > 
> > What you need is a variant which sets req_event some large distance into
> > the future instead of to just req_cons + 1. Or possibly it should be set to just
> > in the past (e.g. req_cons - 1). Call it something like
> > RING_POLL_FOR_REQUESTS().
> > 
> 
> Can you just simply avoid calling RING_FINAL_CHECK_FOR_REQUESTS()
> unless you actually want to re-enable 'interrupts'? All it does is
> manipulate the event pointer and tell you whether there are still
> unconsumed requests.

Perhaps, but I think you'd want to keep moving the event pointer to
handle wrap-around, i.e. by keeping it always either far enough away or
right behind. (I think "req_cons - 1" is probably the correct option,
BTW.)

Ian.


* Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model
  2012-01-16 11:09           ` Ian Campbell
@ 2012-01-16 11:46             ` David Vrabel
  0 siblings, 0 replies; 30+ messages in thread
From: David Vrabel @ 2012-01-16 11:46 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Paul Durrant, Wei Liu (Intern), netdev, xen-devel, konrad.wilk

On 16/01/12 11:09, Ian Campbell wrote:
> I think you'd want to keep moving the event pointer to
> handle wrap around, i.e. by keeping it always either far enough away or
> right behind. (I think "req_cons - 1" is probably the correct option
> BTW).

When using RING_FINAL_CHECK_FOR_REQUESTS() as-is you will get an
additional spurious event every 4 billion events.

Something like this would fix it.

#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {		\
    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
    if (_work_to_do) {							\
        /* ensure req_event is always in the past to avoid spurious	\
           interrupt on wrap-around. */					\
        (_r)->sring->req_event = (_r)->req_cons;			\
        break;								\
    }									\
    (_r)->sring->req_event = (_r)->req_cons + 1;			\
    mb();								\
    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
} while (0)

And similarly for RING_FINAL_CHECK_FOR_RESPONSES().

David

Thread overview: 30+ messages
2012-01-13 16:59 [RFC PATCH] New Xen netback implementation Wei Liu
2012-01-13 16:59 ` [RFC PATCH 1/6] netback: page pool version 1 Wei Liu
2012-01-13 17:37   ` Konrad Rzeszutek Wilk
2012-01-16  9:31     ` Wei Liu
2012-01-16  9:31       ` Wei Liu
2012-01-13 16:59 ` [RFC PATCH 2/6] netback: add module unload function Wei Liu
2012-01-13 17:57   ` [Xen-devel] " David Vrabel
2012-01-16  9:31     ` Wei Liu
2012-01-16  9:31       ` Wei Liu
2012-01-13 18:47   ` David Vrabel
2012-01-13 16:59 ` [RFC PATCH 3/6] netback: switch to NAPI + kthread model Wei Liu
2012-01-13 18:21   ` [Xen-devel] " David Vrabel
2012-01-13 18:21     ` David Vrabel
2012-01-16  9:33     ` Wei Liu
2012-01-16  9:33       ` Wei Liu
2012-01-16 10:45       ` Ian Campbell
2012-01-16 10:49         ` Wei Liu
2012-01-16 10:49           ` Wei Liu
2012-01-16 10:56         ` Paul Durrant
2012-01-16 11:09           ` Ian Campbell
2012-01-16 11:46             ` David Vrabel
2012-01-16 10:14   ` Paul Durrant
2012-01-16 10:14     ` Paul Durrant
2012-01-16 10:31     ` Ian Campbell
2012-01-13 16:59 ` [RFC PATCH 4/6] netback: add module get/put operations along with vif connect/disconnect Wei Liu
2012-01-13 18:44   ` [Xen-devel] " David Vrabel
2012-01-16  9:43     ` Wei Liu
2012-01-16  9:43       ` Wei Liu
2012-01-13 16:59 ` [RFC PATCH 5/6] netback: melt xen_netbk into xenvif Wei Liu
2012-01-13 16:59 ` [RFC PATCH 6/6] netback: alter internal function/structure names Wei Liu
