* [PATCH v6 net-next 0/5] net: low latency Ethernet device polling
@ 2013-05-29  6:39 Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                   ` (4 more replies)
  0 siblings, 5 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

One more round of fixes.
Thank you all for your input.

-Eliezer

change log:
v6
- many small fixes suggested by Eric Dumazet:
  data locality, typos, documentation
- protect napi_hash insert/delete with a spinlock (napi_gen_id is no
  longer an atomic_t since it is only accessed with the spinlock held)
- added IPv6 TCP and UDP support (only minimally tested)

v5
- corrections suggested by Ben Hutchings:
  fixed typos, moved the config option and sysctl value from IPv4 to net
- moved sk_mark_ll() to the protocol handlers
- removed the global id mechanism, replaced it with a hashed napi_id
  based on a code sample from Eric Dumazet.
  Note that ixgbe_free_q_vector() already waits an RCU grace period
  before freeing the q_vector, so nothing additional needs to be done
  when adding a call to napi_hash_del().
- added simple poll/select support

v4
- removed the separate config option for TCP, as suggested by Eric Dumazet.
- added a Linux MIB counter for packets received through the low latency path,
  as suggested by Andi Kleen.
- re-allowed module unloading: removed the module param and instead used a
  global generation id to prevent the use of a stale napi pointer,
  as suggested by Eric Dumazet.
- updated Documentation/networking/ip-sysctl.txt text

v3
- coding style changes suggested by Dave Miller

v2
- the sysctl knob is now in microseconds. The default value is now 0 (off).
- for now the code depends at configure time on CONFIG_X86_TSC
- the napi reference in struct skb is now a union with the dma cookie
  since the former is only used on RX and the latter on TX,
  as suggested by Eric Dumazet.
- we now do a better job of honoring non-blocking operations.
- removed busy-polling support for tcp_read_sock()
- removed dynamic disabling of GRO
- coding style fixes
- disallowed unloading the device module after the feature has been used

Credit:
Jesse Brandeburg, Arun Chekhov Ilango, Julie Cummings,
Alexander Duyck, Eric Geisler, Jason Neighbors, Yadong Li,
Mike Polehn, Anil Vasudevan, Don Wood
Special thanks for finding bugs in earlier versions:
Willem de Bruijn and Andi Kleen

Thanks,
Eliezer

^ permalink raw reply	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
@ 2013-05-29  6:39   ` Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

Adds a napi_id and a hashing mechanism to look up a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
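
For driver authors, the intended pairing looks roughly like this (a sketch
only; the qv/netdev/my_poll names are placeholders and error handling is
omitted):

	/* setup, e.g. when allocating a queue vector */
	netif_napi_add(netdev, &qv->napi, my_poll, 64);
	napi_hash_add(&qv->napi);	/* generates qv->napi.napi_id and hashes it */

	/* teardown: unhash first, then let an RCU grace period pass before
	 * freeing the memory that contains the napi, e.g. with kfree_rcu()
	 * if the container embeds a struct rcu_head named rcu
	 */
	napi_hash_del(&qv->napi);
	netif_napi_del(&qv->napi);
	kfree_rcu(qv, rcu);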

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 include/linux/netdevice.h |   29 ++++++++++++++++++++++++
 net/core/dev.c            |   54 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 83 insertions(+), 0 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8f967e3..964648e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -324,12 +324,15 @@ struct napi_struct {
 	struct sk_buff		*gro_list;
 	struct sk_buff		*skb;
 	struct list_head	dev_list;
+	struct hlist_node	napi_hash_node;
+	unsigned int		napi_id;
 };
 
 enum {
 	NAPI_STATE_SCHED,	/* Poll is scheduled */
 	NAPI_STATE_DISABLE,	/* Disable pending */
 	NAPI_STATE_NPSVC,	/* Netpoll - don't dequeue from poll_list */
+	NAPI_STATE_HASHED,	/* In NAPI hash */
 };
 
 enum gro_result {
@@ -446,6 +449,32 @@ extern void __napi_complete(struct napi_struct *n);
 extern void napi_complete(struct napi_struct *n);
 
 /**
+ *	napi_hash_add - add a NAPI to global hashtable
+ *	@napi: napi context
+ *
+ * generate a new napi_id and store a @napi under it in napi_hash
+ */
+extern void napi_hash_add(struct napi_struct *napi);
+
+/**
+ *	napi_hash_del - remove a NAPI from global table
+ *	@napi: napi context
+ *
+ * Warning: caller must observe rcu grace period
+ * before freeing memory containing @napi
+ */
+extern void napi_hash_del(struct napi_struct *napi);
+
+/**
+ *	napi_by_id - lookup a NAPI by napi_id
+ *	@napi_id: hashed napi_id
+ *
+ * lookup @napi_id in napi_hash table
+ * must be called under rcu_read_lock()
+ */
+extern struct napi_struct *napi_by_id(int napi_id);
+
+/**
  *	napi_disable - prevent NAPI from scheduling
  *	@n: napi context
  *
diff --git a/net/core/dev.c b/net/core/dev.c
index b2e9057..0f39481 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -129,6 +129,7 @@
 #include <linux/inetdevice.h>
 #include <linux/cpu_rmap.h>
 #include <linux/static_key.h>
+#include <linux/hashtable.h>
 
 #include "net-sysfs.h"
 
@@ -166,6 +167,12 @@ static struct list_head offload_base __read_mostly;
 DEFINE_RWLOCK(dev_base_lock);
 EXPORT_SYMBOL(dev_base_lock);
 
+/* protects napi_hash addition/deletion and napi_gen_id */
+DEFINE_SPINLOCK(napi_hash_lock);
+
+unsigned int napi_gen_id;
+DEFINE_HASHTABLE(napi_hash, 8);
+
 seqcount_t devnet_rename_seq;
 
 static inline void dev_base_seq_inc(struct net *net)
@@ -4136,6 +4143,53 @@ void napi_complete(struct napi_struct *n)
 }
 EXPORT_SYMBOL(napi_complete);
 
+void napi_hash_add(struct napi_struct *napi)
+{
+	if (!test_and_set_bit(NAPI_STATE_HASHED, &napi->state)) {
+
+		spin_lock(&napi_hash_lock);
+
+		/* 0 is not a valid id */
+		napi->napi_id = 0;
+		while (!napi->napi_id)
+			napi->napi_id = ++napi_gen_id;
+
+		hlist_add_head_rcu(&napi->napi_hash_node,
+			&napi_hash[napi->napi_id % HASH_SIZE(napi_hash)]);
+
+		spin_unlock(&napi_hash_lock);
+	}
+}
+EXPORT_SYMBOL_GPL(napi_hash_add);
+
+/* Warning : caller is responsible to make sure rcu grace period
+ * is respected before freeing memory containing @napi
+ */
+void napi_hash_del(struct napi_struct *napi)
+{
+	spin_lock(&napi_hash_lock);
+
+	if (test_and_clear_bit(NAPI_STATE_HASHED, &napi->state))
+		hlist_del_rcu(&napi->napi_hash_node);
+
+	spin_unlock(&napi_hash_lock);
+}
+EXPORT_SYMBOL_GPL(napi_hash_del);
+
+/* must be called under rcu_read_lock(), as we dont take a reference */
+struct napi_struct *napi_by_id(int napi_id)
+{
+	unsigned int hash = napi_id % HASH_SIZE(napi_hash);
+	struct napi_struct *napi;
+
+	hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
+		if (napi->napi_id == napi_id)
+			return napi;
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(napi_by_id);
+
 void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
 		    int (*poll)(struct napi_struct *, int), int weight)
 {


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
@ 2013-05-29  6:39   ` Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

Adds a new ndo_ll_poll method and the code that supports and uses it.
This method can be used by low latency applications to busy poll Ethernet
device queues directly from the socket code. The value of sysctl_net_ll_poll
controls how many microseconds to poll. Set to zero to disable.
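
For reference, this is roughly the contract a driver's ndo_ll_poll is expected
to follow, sketched with hypothetical mydrv_* helpers (the real implementation
for ixgbe is in patch 4/5):

	static int mydrv_ll_poll(struct napi_struct *napi)
	{
		struct mydrv_queue *q = container_of(napi, struct mydrv_queue, napi);
		int found;

		if (mydrv_device_is_down(q))
			return LL_FLUSH_FAILED;	/* permanent failure, caller stops polling */

		if (!mydrv_try_lock_queue(q))
			return LL_FLUSH_BUSY;	/* napi currently owns the queue */

		found = mydrv_clean_rx(q, 4);	/* flush a small budget into the stack */
		mydrv_unlock_queue(q);

		return found;			/* >0 is accounted in LowLatencyRxPackets */
	}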

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 Documentation/sysctl/net.txt |    7 ++
 fs/select.c                  |    7 ++
 include/linux/netdevice.h    |    3 +
 include/linux/skbuff.h       |    8 ++-
 include/net/ll_poll.h        |  126 ++++++++++++++++++++++++++++++++++++++++++
 include/net/sock.h           |    4 +
 include/uapi/linux/snmp.h    |    1 
 net/Kconfig                  |   12 ++++
 net/core/datagram.c          |    4 +
 net/core/skbuff.c            |    4 +
 net/core/sock.c              |    6 ++
 net/core/sysctl_net_core.c   |   10 +++
 net/ipv4/proc.c              |    1 
 net/ipv4/udp.c               |    6 ++
 net/ipv6/udp.c               |    6 ++
 net/socket.c                 |   16 +++++
 16 files changed, 216 insertions(+), 5 deletions(-)
 create mode 100644 include/net/ll_poll.h

diff --git a/Documentation/sysctl/net.txt b/Documentation/sysctl/net.txt
index c1f8640..85ab72d 100644
--- a/Documentation/sysctl/net.txt
+++ b/Documentation/sysctl/net.txt
@@ -50,6 +50,13 @@ The maximum number of packets that kernel can handle on a NAPI interrupt,
 it's a Per-CPU variable.
 Default: 64
 
+low_latency_poll
+----------------
+Low latency busy poll timeout. (needs CONFIG_NET_LL_RX_POLL)
+Approximate time in us to spin waiting for packets on the device queue.
+Recommended value is 50. May increase power usage.
+Default: 0 (off)
+
 rmem_default
 ------------
 
diff --git a/fs/select.c b/fs/select.c
index 8c1c96c..0ef246d 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -27,6 +27,7 @@
 #include <linux/rcupdate.h>
 #include <linux/hrtimer.h>
 #include <linux/sched/rt.h>
+#include <net/ll_poll.h>
 
 #include <asm/uaccess.h>
 
@@ -400,6 +401,7 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
 	poll_table *wait;
 	int retval, i, timed_out = 0;
 	unsigned long slack = 0;
+	unsigned long ll_time = ll_end_time();
 
 	rcu_read_lock();
 	retval = max_select_fd(n, fds);
@@ -486,6 +488,8 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
 			break;
 		}
 
+		if (can_poll_ll(ll_time))
+			continue;
 		/*
 		 * If this is the first loop and we have a timeout
 		 * given, then we convert to ktime_t and set the to
@@ -750,6 +754,7 @@ static int do_poll(unsigned int nfds,  struct poll_list *list,
 	ktime_t expire, *to = NULL;
 	int timed_out = 0, count = 0;
 	unsigned long slack = 0;
+	unsigned long ll_time = ll_end_time();
 
 	/* Optimise the no-wait case */
 	if (end_time && !end_time->tv_sec && !end_time->tv_nsec) {
@@ -795,6 +800,8 @@ static int do_poll(unsigned int nfds,  struct poll_list *list,
 		if (count || timed_out)
 			break;
 
+		if (can_poll_ll(ll_time))
+			continue;
 		/*
 		 * If this is the first loop and we have a timeout
 		 * given, then we convert to ktime_t and set the to
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 964648e..7acea42 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -972,6 +972,9 @@ struct net_device_ops {
 						     gfp_t gfp);
 	void			(*ndo_netpoll_cleanup)(struct net_device *dev);
 #endif
+#ifdef CONFIG_NET_LL_RX_POLL
+	int			(*ndo_ll_poll)(struct napi_struct *dev);
+#endif
 	int			(*ndo_set_vf_mac)(struct net_device *dev,
 						  int queue, u8 *mac);
 	int			(*ndo_set_vf_vlan)(struct net_device *dev,
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 8f2b830..77f0a14 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -386,6 +386,7 @@ typedef unsigned char *sk_buff_data_t;
  *	@no_fcs:  Request NIC to treat last 4 bytes as Ethernet FCS
  *	@dma_cookie: a cookie to one of several possible DMA operations
  *		done by skb DMA functions
+  *	@napi_id: id of the NAPI struct this skb came from
  *	@secmark: security marking
  *	@mark: Generic packet mark
  *	@dropcount: total number of sk_receive_queue overflows
@@ -500,8 +501,11 @@ struct sk_buff {
 	/* 7/9 bit hole (depending on ndisc_nodetype presence) */
 	kmemcheck_bitfield_end(flags2);
 
-#ifdef CONFIG_NET_DMA
-	dma_cookie_t		dma_cookie;
+#if defined CONFIG_NET_DMA || defined CONFIG_NET_LL_RX_POLL
+	union {
+		unsigned int	napi_id;
+		dma_cookie_t	dma_cookie;
+	};
 #endif
 #ifdef CONFIG_NETWORK_SECMARK
 	__u32			secmark;
diff --git a/include/net/ll_poll.h b/include/net/ll_poll.h
new file mode 100644
index 0000000..9e1c972
--- /dev/null
+++ b/include/net/ll_poll.h
@@ -0,0 +1,126 @@
+/*
+ * low latency network device queue flush
+ * Copyright(c) 2013 Intel Corporation.
+ * Author: Eliezer Tamir
+ *
+ * For now this depends on CONFIG_X86_TSC
+ */
+
+#ifndef _LINUX_NET_LL_POLL_H
+#define _LINUX_NET_LL_POLL_H
+
+#include <linux/netdevice.h>
+#include <net/ip.h>
+
+#ifdef CONFIG_NET_LL_RX_POLL
+
+struct napi_struct;
+extern int sysctl_net_ll_poll __read_mostly;
+
+/* return values from ndo_ll_poll */
+#define LL_FLUSH_FAILED		-1
+#define LL_FLUSH_BUSY		-2
+
+/* we don't mind a ~2.5% imprecision */
+#define TSC_MHZ (tsc_khz >> 10)
+
+static inline unsigned long ll_end_time(void)
+{
+	return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
+}
+
+static inline bool sk_valid_ll(struct sock *sk)
+{
+	return sysctl_net_ll_poll && sk->sk_napi_id &&
+	       !need_resched() && !signal_pending(current);
+}
+
+static inline bool can_poll_ll(unsigned long end_time)
+{
+	return !time_after((unsigned long)get_cycles(), end_time);
+}
+
+static inline bool sk_poll_ll(struct sock *sk, int nonblock)
+{
+	unsigned long end_time = ll_end_time();
+	const struct net_device_ops *ops;
+	struct napi_struct *napi;
+	int rc = false;
+
+	/*
+	 * rcu read lock for napi hash
+	 * bh so we don't race with net_rx_action
+	 */
+	rcu_read_lock_bh();
+
+	napi = napi_by_id(sk->sk_napi_id);
+	if (!napi)
+		goto out;
+
+	ops = napi->dev->netdev_ops;
+	if (!ops->ndo_ll_poll)
+		goto out;
+
+	do {
+
+		rc = ops->ndo_ll_poll(napi);
+
+		if (rc == LL_FLUSH_FAILED)
+			break; /* permanent failure */
+
+		if (rc > 0)
+			/* local bh are disabled so it is ok to use _BH */
+			NET_ADD_STATS_BH(sock_net(sk),
+					 LINUX_MIB_LOWLATENCYRXPACKETS, rc);
+
+	} while (skb_queue_empty(&sk->sk_receive_queue)
+			&& can_poll_ll(end_time) && !nonblock);
+
+	rc = !skb_queue_empty(&sk->sk_receive_queue);
+out:
+	rcu_read_unlock_bh();
+	return rc;
+}
+
+static inline void skb_mark_ll(struct sk_buff *skb, struct napi_struct *napi)
+{
+	skb->napi_id = napi->napi_id;
+}
+
+static inline void sk_mark_ll(struct sock *sk, struct sk_buff *skb)
+{
+	sk->sk_napi_id = skb->napi_id;
+}
+
+#else /* CONFIG_NET_LL_RX_POLL */
+
+static inline unsigned long ll_end_time(void)
+{
+	return 0;
+}
+
+static inline bool sk_valid_ll(struct sock *sk)
+{
+	return 0;
+}
+
+static inline bool sk_poll_ll(struct sock *sk, int nonblock)
+{
+	return 0;
+}
+
+static inline void skb_mark_ll(struct sk_buff *skb, struct napi_struct *napi)
+{
+}
+
+static inline void sk_mark_ll(struct sock *sk, struct sk_buff *skb)
+{
+}
+
+static inline bool can_poll_ll(unsigned long end_time)
+{
+	return false;
+}
+
+#endif /* CONFIG_NET_LL_RX_POLL */
+#endif /* _LINUX_NET_LL_POLL_H */
diff --git a/include/net/sock.h b/include/net/sock.h
index 66772cf..ac8e181 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -229,6 +229,7 @@ struct cg_proto;
   *	@sk_omem_alloc: "o" is "option" or "other"
   *	@sk_wmem_queued: persistent queue size
   *	@sk_forward_alloc: space allocated forward
+  *	@sk_napi_id: id of the last napi context to receive data for sk
   *	@sk_allocation: allocation mode
   *	@sk_sndbuf: size of send buffer in bytes
   *	@sk_flags: %SO_LINGER (l_onoff), %SO_BROADCAST, %SO_KEEPALIVE,
@@ -325,6 +326,9 @@ struct sock {
 #ifdef CONFIG_RPS
 	__u32			sk_rxhash;
 #endif
+#ifdef CONFIG_NET_LL_RX_POLL
+	unsigned int		sk_napi_id;
+#endif
 	atomic_t		sk_drops;
 	int			sk_rcvbuf;
 
diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h
index df2e8b4..26cbf76 100644
--- a/include/uapi/linux/snmp.h
+++ b/include/uapi/linux/snmp.h
@@ -253,6 +253,7 @@ enum
 	LINUX_MIB_TCPFASTOPENLISTENOVERFLOW,	/* TCPFastOpenListenOverflow */
 	LINUX_MIB_TCPFASTOPENCOOKIEREQD,	/* TCPFastOpenCookieReqd */
 	LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES, /* TCPSpuriousRtxHostQueues */
+	LINUX_MIB_LOWLATENCYRXPACKETS,		/* LowLatencyRxPackets */
 	__LINUX_MIB_MAX
 };
 
diff --git a/net/Kconfig b/net/Kconfig
index 523e43e..d6a9ce6 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -243,6 +243,18 @@ config NETPRIO_CGROUP
 	  Cgroup subsystem for use in assigning processes to network priorities on
 	  a per-interface basis
 
+config NET_LL_RX_POLL
+	bool "Low Latency Receive Poll"
+	depends on X86_TSC
+	default n
+	---help---
+	  Support Low Latency Receive Queue Poll.
+	  (For network card drivers which support this option.)
+	  When waiting for data in read or poll call directly into the device driver
+	  to flush packets which may be pending on the device queues into the stack.
+
+	  If unsure, say N.
+
 config BQL
 	boolean
 	depends on SYSFS
diff --git a/net/core/datagram.c b/net/core/datagram.c
index b71423d..9cbaba9 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -56,6 +56,7 @@
 #include <net/sock.h>
 #include <net/tcp_states.h>
 #include <trace/events/skb.h>
+#include <net/ll_poll.h>
 
 /*
  *	Is a socket 'connection oriented' ?
@@ -207,6 +208,9 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
 		}
 		spin_unlock_irqrestore(&queue->lock, cpu_flags);
 
+		if (sk_valid_ll(sk) && sk_poll_ll(sk, flags & MSG_DONTWAIT))
+			continue;
+
 		/* User doesn't want to wait */
 		error = -EAGAIN;
 		if (!timeo)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f45de07..674bcde 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -739,6 +739,10 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 	new->vlan_tci		= old->vlan_tci;
 
 	skb_copy_secmark(new, old);
+
+#ifdef CONFIG_NET_LL_RX_POLL
+	new->napi_id	= old->napi_id;
+#endif
 }
 
 /*
diff --git a/net/core/sock.c b/net/core/sock.c
index 6ba327d..804fd5b 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -139,6 +139,8 @@
 #include <net/tcp.h>
 #endif
 
+#include <net/ll_poll.h>
+
 static DEFINE_MUTEX(proto_list_mutex);
 static LIST_HEAD(proto_list);
 
@@ -2284,6 +2286,10 @@ void sock_init_data(struct socket *sock, struct sock *sk)
 
 	sk->sk_stamp = ktime_set(-1L, 0);
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	sk->sk_napi_id		=	0;
+#endif
+
 	/*
 	 * Before updating sk_refcnt, we must commit prior changes to memory
 	 * (Documentation/RCU/rculist_nulls.txt for details)
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index 741db5fc..4ca5702 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -19,6 +19,7 @@
 #include <net/ip.h>
 #include <net/sock.h>
 #include <net/net_ratelimit.h>
+#include <net/ll_poll.h>
 
 static int one = 1;
 
@@ -284,6 +285,15 @@ static struct ctl_table net_core_table[] = {
 		.proc_handler	= flow_limit_table_len_sysctl
 	},
 #endif /* CONFIG_NET_FLOW_LIMIT */
+#ifdef CONFIG_NET_LL_RX_POLL
+	{
+		.procname	= "low_latency_poll",
+		.data		= &sysctl_net_ll_poll,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec
+	},
+#endif
 #endif /* CONFIG_NET */
 	{
 		.procname	= "netdev_budget",
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 2a5bf86..6577a11 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -273,6 +273,7 @@ static const struct snmp_mib snmp4_net_list[] = {
 	SNMP_MIB_ITEM("TCPFastOpenListenOverflow", LINUX_MIB_TCPFASTOPENLISTENOVERFLOW),
 	SNMP_MIB_ITEM("TCPFastOpenCookieReqd", LINUX_MIB_TCPFASTOPENCOOKIEREQD),
 	SNMP_MIB_ITEM("TCPSpuriousRtxHostQueues", LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES),
+	SNMP_MIB_ITEM("LowLatencyRxPackets", LINUX_MIB_LOWLATENCYRXPACKETS),
 	SNMP_MIB_SENTINEL
 };
 
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index aa5eff4..e7ab9c8 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -109,6 +109,7 @@
 #include <trace/events/udp.h>
 #include <linux/static_key.h>
 #include <trace/events/skb.h>
+#include <net/ll_poll.h>
 #include "udp_impl.h"
 
 struct udp_table udp_table __read_mostly;
@@ -1709,7 +1710,10 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
 
 	if (sk != NULL) {
-		int ret = udp_queue_rcv_skb(sk, skb);
+		int ret;
+
+		sk_mark_ll(sk, skb);
+		ret = udp_queue_rcv_skb(sk, skb);
 		sock_put(sk);
 
 		/* a return value > 0 means to resubmit the input, but
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 42923b1..4035997 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -46,6 +46,7 @@
 #include <net/ip6_checksum.h>
 #include <net/xfrm.h>
 #include <net/inet6_hashtables.h>
+#include <net/ll_poll.h>
 
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
@@ -841,7 +842,10 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	 */
 	sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
 	if (sk != NULL) {
-		int ret = udpv6_queue_rcv_skb(sk, skb);
+		int ret;
+
+		sk_mark_ll(sk, skb);
+		ret = udpv6_queue_rcv_skb(sk, skb);
 		sock_put(sk);
 
 		/* a return value > 0 means to resubmit the input, but
diff --git a/net/socket.c b/net/socket.c
index 6b94633..c3725eb 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -104,6 +104,12 @@
 #include <linux/route.h>
 #include <linux/sockios.h>
 #include <linux/atalk.h>
+#include <net/ll_poll.h>
+
+#ifdef CONFIG_NET_LL_RX_POLL
+int sysctl_net_ll_poll __read_mostly;
+EXPORT_SYMBOL_GPL(sysctl_net_ll_poll);
+#endif
 
 static int sock_no_open(struct inode *irrelevant, struct file *dontcare);
 static ssize_t sock_aio_read(struct kiocb *iocb, const struct iovec *iov,
@@ -1142,13 +1148,21 @@ EXPORT_SYMBOL(sock_create_lite);
 /* No kernel lock held - perfect */
 static unsigned int sock_poll(struct file *file, poll_table *wait)
 {
+	unsigned int poll_result;
 	struct socket *sock;
 
 	/*
 	 *      We can't return errors to poll, so it's either yes or no.
 	 */
 	sock = file->private_data;
-	return sock->ops->poll(file, sock, wait);
+
+	poll_result = sock->ops->poll(file, sock, wait);
+
+	if (!(poll_result & (POLLRDNORM | POLLERR | POLLRDHUP | POLLHUP)) &&
+		sk_valid_ll(sock->sk) && sk_poll_ll(sock->sk, 1))
+			poll_result = sock->ops->poll(file, sock, NULL);
+
+	return poll_result;
 }
 
 static int sock_mmap(struct file *file, struct vm_area_struct *vma)


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 3/5] tcp: add TCP support for low latency receive poll.
  2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
@ 2013-05-29  6:39   ` Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

Adds busy-poll support for TCP.
Copy the napi_id from an incoming skb to the sk in tcp_v[46]_rcv().
When there is no data in the socket, we busy-poll in tcp_recvmsg().
This is a good example of how to add busy-poll support to new protocols.
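
The hooks themselves are small; in outline (simplified from the diff below):

	/* receive path (softirq context): remember which napi the skb came from */
	sk_mark_ll(sk, skb);

	/* recvmsg path: before blocking, busy-poll the device queue */
	if (sk_valid_ll(sk) && skb_queue_empty(&sk->sk_receive_queue))
		sk_poll_ll(sk, nonblock);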

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 net/ipv4/tcp.c      |    5 +++++
 net/ipv4/tcp_ipv4.c |    2 ++
 net/ipv6/tcp_ipv6.c |    2 ++
 3 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ba4186e..becd105 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -279,6 +279,7 @@
 
 #include <asm/uaccess.h>
 #include <asm/ioctls.h>
+#include <net/ll_poll.h>
 
 int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT;
 
@@ -1551,6 +1552,10 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 	struct sk_buff *skb;
 	u32 urg_hole = 0;
 
+	if (sk_valid_ll(sk) && skb_queue_empty(&sk->sk_receive_queue)
+	    && (sk->sk_state == TCP_ESTABLISHED))
+		sk_poll_ll(sk, nonblock);
+
 	lock_sock(sk);
 
 	err = -ENOTCONN;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index d20ede0..35fd8bc 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -75,6 +75,7 @@
 #include <net/netdma.h>
 #include <net/secure_seq.h>
 #include <net/tcp_memcontrol.h>
+#include <net/ll_poll.h>
 
 #include <linux/inet.h>
 #include <linux/ipv6.h>
@@ -2011,6 +2012,7 @@ process:
 	if (sk_filter(sk, skb))
 		goto discard_and_relse;
 
+	sk_mark_ll(sk, skb);
 	skb->dev = NULL;
 
 	bh_lock_sock_nested(sk);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 0a17ed9..5cffa5c 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -63,6 +63,7 @@
 #include <net/inet_common.h>
 #include <net/secure_seq.h>
 #include <net/tcp_memcontrol.h>
+#include <net/ll_poll.h>
 
 #include <asm/uaccess.h>
 
@@ -1498,6 +1499,7 @@ process:
 	if (sk_filter(sk, skb))
 		goto discard_and_relse;
 
+	sk_mark_ll(sk, skb);
 	skb->dev = NULL;
 
 	bh_lock_sock_nested(sk);


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 4/5] ixgbe: Add support for ndo_ll_poll
  2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
@ 2013-05-29  6:39   ` Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

Add the ixgbe driver code implementing ndo_ll_poll.
It should be easy for other drivers to do something similar
in order to enable support for CONFIG_NET_LL_RX_POLL.
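
The per-q_vector lock added here arbitrates between the NAPI poll routine and
the socket busy-poll path; in rough outline (simplified from the diff below):

	/* napi context, ixgbe_poll() */
	if (!ixgbe_qv_lock_napi(q_vector))
		return budget;		/* a socket owns the queue, try again later */
	/* ... clean the rx rings ... */
	ixgbe_qv_unlock_napi(q_vector);

	/* socket context, ixgbe_low_latency_recv(), bh disabled */
	if (!ixgbe_qv_lock_poll(q_vector))
		return LL_FLUSH_BUSY;	/* napi owns the queue */
	/* ... clean a small rx budget ... */
	ixgbe_qv_unlock_poll(q_vector);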

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |  120 +++++++++++++++++++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    2 
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   63 +++++++++++--
 3 files changed, 177 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index ca93238..04fdbf6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -52,6 +52,8 @@
 #include <linux/dca.h>
 #endif
 
+#include <net/ll_poll.h>
+
 /* common prefix used by pr_<> macros */
 #undef pr_fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -356,9 +358,127 @@ struct ixgbe_q_vector {
 	struct rcu_head rcu;	/* to avoid race with update stats on free */
 	char name[IFNAMSIZ + 9];
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	unsigned int state;
+#define IXGBE_QV_STATE_IDLE        0
+#define IXGBE_QV_STATE_NAPI	   1    /* NAPI owns this QV */
+#define IXGBE_QV_STATE_POLL	   2    /* poll owns this QV */
+#define IXGBE_QV_LOCKED (IXGBE_QV_STATE_NAPI | IXGBE_QV_STATE_POLL)
+#define IXGBE_QV_STATE_NAPI_YIELD  4    /* NAPI yielded this QV */
+#define IXGBE_QV_STATE_POLL_YIELD  8    /* poll yielded this QV */
+#define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD)
+#define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD)
+	spinlock_t lock;
+#endif  /* CONFIG_NET_LL_RX_POLL */
+
 	/* for dynamic allocation of rings associated with this q_vector */
 	struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
 };
+#ifdef CONFIG_NET_LL_RX_POLL
+static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
+{
+
+	spin_lock_init(&q_vector->lock);
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+}
+
+/* called from the device poll routine to get ownership of a q_vector */
+static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
+{
+	int rc = true;
+	spin_lock(&q_vector->lock);
+	if (q_vector->state & IXGBE_QV_LOCKED) {
+		WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
+		q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
+		rc = false;
+	} else
+		/* we don't care if someone yielded */
+		q_vector->state = IXGBE_QV_STATE_NAPI;
+	spin_unlock(&q_vector->lock);
+	return rc;
+}
+
+/* returns true if someone tried to get the qv while napi had it */
+static inline int ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+{
+	int rc = false;
+	spin_lock(&q_vector->lock);
+	WARN_ON(q_vector->state & (IXGBE_QV_STATE_POLL |
+			       IXGBE_QV_STATE_NAPI_YIELD));
+
+	if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
+		rc = true;
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+	spin_unlock(&q_vector->lock);
+	return rc;
+}
+
+/* called from ixgbe_low_latency_poll() */
+static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
+{
+	int rc = true;
+	spin_lock_bh(&q_vector->lock);
+	if ((q_vector->state & IXGBE_QV_LOCKED)) {
+		q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
+		rc = false;
+	} else
+		/* preserve yield marks */
+		q_vector->state |= IXGBE_QV_STATE_POLL;
+	spin_unlock_bh(&q_vector->lock);
+	return rc;
+}
+
+/* returns true if someone tried to get the qv while it was locked */
+static inline int ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+{
+	int rc = false;
+	spin_lock_bh(&q_vector->lock);
+	WARN_ON(q_vector->state & (IXGBE_QV_STATE_NAPI));
+
+	if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
+		rc = true;
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+	spin_unlock_bh(&q_vector->lock);
+	return rc;
+}
+
+/* true if a socket is polling, even if it did not get the lock */
+static inline int ixgbe_qv_ll_polling(struct ixgbe_q_vector *q_vector)
+{
+	WARN_ON(!(q_vector->state & IXGBE_QV_LOCKED));
+	return q_vector->state & IXGBE_QV_USER_PEND;
+}
+#else /* CONFIG_NET_LL_RX_POLL */
+static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
+{
+}
+
+static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
+{
+	return true;
+}
+
+static inline int ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_ll_polling(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+#endif /* CONFIG_NET_LL_RX_POLL */
+
 #ifdef CONFIG_IXGBE_HWMON
 
 #define IXGBE_HWMON_TYPE_LOC		0
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index ef5f7a6..90b4e10 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -811,6 +811,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
 	/* initialize NAPI */
 	netif_napi_add(adapter->netdev, &q_vector->napi,
 		       ixgbe_poll, 64);
+	napi_hash_add(&q_vector->napi);
 
 	/* tie q_vector and adapter together */
 	adapter->q_vector[v_idx] = q_vector;
@@ -931,6 +932,7 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
 		adapter->rx_ring[ring->queue_index] = NULL;
 
 	adapter->q_vector[v_idx] = NULL;
+	napi_hash_del(&q_vector->napi);
 	netif_napi_del(&q_vector->napi);
 
 	/*
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index d30fbdd..9a7dc40 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1504,7 +1504,9 @@ static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,
 {
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 
-	if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
+	if (ixgbe_qv_ll_polling(q_vector))
+		netif_receive_skb(skb);
+	else if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
 		napi_gro_receive(&q_vector->napi, skb);
 	else
 		netif_rx(skb);
@@ -1892,9 +1894,9 @@ dma_sync:
  * expensive overhead for IOMMU access this provides a means of avoiding
 * it by maintaining the mapping of the page to the system.
  *
- * Returns true if all work is completed without reaching budget
+ * Returns amount of work completed
  **/
-static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			       struct ixgbe_ring *rx_ring,
 			       const int budget)
 {
@@ -1976,6 +1978,7 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		}
 
 #endif /* IXGBE_FCOE */
+		skb_mark_ll(skb, &q_vector->napi);
 		ixgbe_rx_skb(q_vector, skb);
 
 		/* update budget accounting */
@@ -1992,9 +1995,37 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	if (cleaned_count)
 		ixgbe_alloc_rx_buffers(rx_ring, cleaned_count);
 
-	return (total_rx_packets < budget);
+	return total_rx_packets;
 }
 
+#ifdef CONFIG_NET_LL_RX_POLL
+/* must be called with local_bh_disable()d */
+static int ixgbe_low_latency_recv(struct napi_struct *napi)
+{
+	struct ixgbe_q_vector *q_vector =
+			container_of(napi, struct ixgbe_q_vector, napi);
+	struct ixgbe_adapter *adapter = q_vector->adapter;
+	struct ixgbe_ring  *ring;
+	int found = 0;
+
+	if (test_bit(__IXGBE_DOWN, &adapter->state))
+		return LL_FLUSH_FAILED;
+
+	if (!ixgbe_qv_lock_poll(q_vector))
+		return LL_FLUSH_BUSY;
+
+	ixgbe_for_each_ring(ring, q_vector->rx) {
+		found = ixgbe_clean_rx_irq(q_vector, ring, 4);
+		if (found)
+			break;
+	}
+
+	ixgbe_qv_unlock_poll(q_vector);
+
+	return found;
+}
+#endif	/* CONFIG_NET_LL_RX_POLL */
+
 /**
  * ixgbe_configure_msix - Configure MSI-X hardware
  * @adapter: board private structure
@@ -2550,6 +2581,9 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
 	ixgbe_for_each_ring(ring, q_vector->tx)
 		clean_complete &= !!ixgbe_clean_tx_irq(q_vector, ring);
 
+	if (!ixgbe_qv_lock_napi(q_vector))
+		return budget;
+
 	/* attempt to distribute budget to each queue fairly, but don't allow
 	 * the budget to go below 1 because we'll exit polling */
 	if (q_vector->rx.count > 1)
@@ -2558,9 +2592,10 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
 		per_ring_budget = budget;
 
 	ixgbe_for_each_ring(ring, q_vector->rx)
-		clean_complete &= ixgbe_clean_rx_irq(q_vector, ring,
-						     per_ring_budget);
+		clean_complete &= (ixgbe_clean_rx_irq(q_vector, ring,
+				   per_ring_budget) < per_ring_budget);
 
+	ixgbe_qv_unlock_napi(q_vector);
 	/* If all work not completed, return budget and keep polling */
 	if (!clean_complete)
 		return budget;
@@ -3747,16 +3782,25 @@ static void ixgbe_napi_enable_all(struct ixgbe_adapter *adapter)
 {
 	int q_idx;
 
-	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++)
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+		ixgbe_qv_init_lock(adapter->q_vector[q_idx]);
 		napi_enable(&adapter->q_vector[q_idx]->napi);
+	}
 }
 
 static void ixgbe_napi_disable_all(struct ixgbe_adapter *adapter)
 {
 	int q_idx;
 
-	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++)
+	local_bh_disable(); /* for ixgbe_qv_lock_napi() */
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
 		napi_disable(&adapter->q_vector[q_idx]->napi);
+		while (!ixgbe_qv_lock_napi(adapter->q_vector[q_idx])) {
+			pr_info("QV %d locked\n", q_idx);
+			mdelay(1);
+		}
+	}
+	local_bh_enable();
 }
 
 #ifdef CONFIG_IXGBE_DCB
@@ -7177,6 +7221,9 @@ static const struct net_device_ops ixgbe_netdev_ops = {
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller	= ixgbe_netpoll,
 #endif
+#ifdef CONFIG_NET_LL_RX_POLL
+	.ndo_ll_poll		= ixgbe_low_latency_recv,
+#endif
 #ifdef IXGBE_FCOE
 	.ndo_fcoe_ddp_setup = ixgbe_fcoe_ddp_get,
 	.ndo_fcoe_ddp_target = ixgbe_fcoe_ddp_target,


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 4/5] ixgbe: Add support for ndo_ll_poll
@ 2013-05-29  6:39   ` Eliezer Tamir
  0 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:39 UTC (permalink / raw)
  To: David Miller
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	linux-kernel, Alex Rosenbaum, Jesse Brandeburg, Eliezer Tamir,
	Andi Kleen, Eric Dumazet, Eilon Greenstien

Add the ixgbe driver code implementing ndo_ll_poll.
It should be easy for other drivers to do something similar
in order to enable support for CONFIG_NET_LL_RX_POLL.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---
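
In rough outline, the per-driver work is small; the sketch below shows the
shape of the callback using hypothetical drv_* names (not a real driver API),
with the ixgbe version in the diff below as the reference.  Besides the
callback itself, a driver calls napi_hash_add()/napi_hash_del() when its
queue vectors are set up and torn down, and tags received skbs with
skb_mark_ll() so sockets learn which queue to poll.

/* Sketch of an ndo_ll_poll handler: take the per-queue poll lock, clean a
 * small budget of RX descriptors, drop the lock, report how many were found.
 */
static int drv_busy_poll(struct napi_struct *napi)
{
	struct drv_q_vector *qv = container_of(napi, struct drv_q_vector, napi);
	int found;

	if (!drv_qv_lock_poll(qv))	/* NAPI context currently owns this queue */
		return LL_FLUSH_BUSY;

	found = drv_clean_rx_irq(qv, qv->rx_ring, 4 /* small budget */);
	drv_qv_unlock_poll(qv);

	return found;
}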

 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |  120 +++++++++++++++++++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    2 
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   63 +++++++++++--
 3 files changed, 177 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index ca93238..04fdbf6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -52,6 +52,8 @@
 #include <linux/dca.h>
 #endif
 
+#include <net/ll_poll.h>
+
 /* common prefix used by pr_<> macros */
 #undef pr_fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -356,9 +358,127 @@ struct ixgbe_q_vector {
 	struct rcu_head rcu;	/* to avoid race with update stats on free */
 	char name[IFNAMSIZ + 9];
 
+#ifdef CONFIG_NET_LL_RX_POLL
+	unsigned int state;
+#define IXGBE_QV_STATE_IDLE        0
+#define IXGBE_QV_STATE_NAPI	   1    /* NAPI owns this QV */
+#define IXGBE_QV_STATE_POLL	   2    /* poll owns this QV */
+#define IXGBE_QV_LOCKED (IXGBE_QV_STATE_NAPI | IXGBE_QV_STATE_POLL)
+#define IXGBE_QV_STATE_NAPI_YIELD  4    /* NAPI yielded this QV */
+#define IXGBE_QV_STATE_POLL_YIELD  8    /* poll yielded this QV */
+#define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD)
+#define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD)
+	spinlock_t lock;
+#endif  /* CONFIG_NET_LL_RX_POLL */
+
 	/* for dynamic allocation of rings associated with this q_vector */
 	struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
 };
+#ifdef CONFIG_NET_LL_RX_POLL
+static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
+{
+
+	spin_lock_init(&q_vector->lock);
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+}
+
+/* called from the device poll routine to get ownership of a q_vector */
+static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
+{
+	int rc = true;
+	spin_lock(&q_vector->lock);
+	if (q_vector->state & IXGBE_QV_LOCKED) {
+		WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
+		q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
+		rc = false;
+	} else
+		/* we don't care if someone yielded */
+		q_vector->state = IXGBE_QV_STATE_NAPI;
+	spin_unlock(&q_vector->lock);
+	return rc;
+}
+
+/* returns true if someone tried to get the qv while napi had it */
+static inline int ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+{
+	int rc = false;
+	spin_lock(&q_vector->lock);
+	WARN_ON(q_vector->state & (IXGBE_QV_STATE_POLL |
+			       IXGBE_QV_STATE_NAPI_YIELD));
+
+	if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
+		rc = true;
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+	spin_unlock(&q_vector->lock);
+	return rc;
+}
+
+/* called from ixgbe_low_latency_recv() */
+static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
+{
+	int rc = true;
+	spin_lock_bh(&q_vector->lock);
+	if ((q_vector->state & IXGBE_QV_LOCKED)) {
+		q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
+		rc = false;
+	} else
+		/* preserve yield marks */
+		q_vector->state |= IXGBE_QV_STATE_POLL;
+	spin_unlock_bh(&q_vector->lock);
+	return rc;
+}
+
+/* returns true if someone tried to get the qv while it was locked */
+static inline int ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+{
+	int rc = false;
+	spin_lock_bh(&q_vector->lock);
+	WARN_ON(q_vector->state & (IXGBE_QV_STATE_NAPI));
+
+	if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
+		rc = true;
+	q_vector->state = IXGBE_QV_STATE_IDLE;
+	spin_unlock_bh(&q_vector->lock);
+	return rc;
+}
+
+/* true if a socket is polling, even if it did not get the lock */
+static inline int ixgbe_qv_ll_polling(struct ixgbe_q_vector *q_vector)
+{
+	WARN_ON(!(q_vector->state & IXGBE_QV_LOCKED));
+	return q_vector->state & IXGBE_QV_USER_PEND;
+}
+#else /* CONFIG_NET_LL_RX_POLL */
+static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
+{
+}
+
+static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
+{
+	return true;
+}
+
+static inline int ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+
+static inline int ixgbe_qv_ll_polling(struct ixgbe_q_vector *q_vector)
+{
+	return false;
+}
+#endif /* CONFIG_NET_LL_RX_POLL */
+
 #ifdef CONFIG_IXGBE_HWMON
 
 #define IXGBE_HWMON_TYPE_LOC		0
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index ef5f7a6..90b4e10 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -811,6 +811,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
 	/* initialize NAPI */
 	netif_napi_add(adapter->netdev, &q_vector->napi,
 		       ixgbe_poll, 64);
+	napi_hash_add(&q_vector->napi);
 
 	/* tie q_vector and adapter together */
 	adapter->q_vector[v_idx] = q_vector;
@@ -931,6 +932,7 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
 		adapter->rx_ring[ring->queue_index] = NULL;
 
 	adapter->q_vector[v_idx] = NULL;
+	napi_hash_del(&q_vector->napi);
 	netif_napi_del(&q_vector->napi);
 
 	/*
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index d30fbdd..9a7dc40 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1504,7 +1504,9 @@ static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,
 {
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 
-	if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
+	if (ixgbe_qv_ll_polling(q_vector))
+		netif_receive_skb(skb);
+	else if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
 		napi_gro_receive(&q_vector->napi, skb);
 	else
 		netif_rx(skb);
@@ -1892,9 +1894,9 @@ dma_sync:
  * expensive overhead for IOMMU access this provides a means of avoiding
 * it by maintaining the mapping of the page to the system.
  *
- * Returns true if all work is completed without reaching budget
+ * Returns amount of work completed
  **/
-static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			       struct ixgbe_ring *rx_ring,
 			       const int budget)
 {
@@ -1976,6 +1978,7 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		}
 
 #endif /* IXGBE_FCOE */
+		skb_mark_ll(skb, &q_vector->napi);
 		ixgbe_rx_skb(q_vector, skb);
 
 		/* update budget accounting */
@@ -1992,9 +1995,37 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	if (cleaned_count)
 		ixgbe_alloc_rx_buffers(rx_ring, cleaned_count);
 
-	return (total_rx_packets < budget);
+	return total_rx_packets;
 }
 
+#ifdef CONFIG_NET_LL_RX_POLL
+/* must be called with local_bh_disable()d */
+static int ixgbe_low_latency_recv(struct napi_struct *napi)
+{
+	struct ixgbe_q_vector *q_vector =
+			container_of(napi, struct ixgbe_q_vector, napi);
+	struct ixgbe_adapter *adapter = q_vector->adapter;
+	struct ixgbe_ring  *ring;
+	int found = 0;
+
+	if (test_bit(__IXGBE_DOWN, &adapter->state))
+		return LL_FLUSH_FAILED;
+
+	if (!ixgbe_qv_lock_poll(q_vector))
+		return LL_FLUSH_BUSY;
+
+	ixgbe_for_each_ring(ring, q_vector->rx) {
+		found = ixgbe_clean_rx_irq(q_vector, ring, 4);
+		if (found)
+			break;
+	}
+
+	ixgbe_qv_unlock_poll(q_vector);
+
+	return found;
+}
+#endif	/* CONFIG_NET_LL_RX_POLL */
+
 /**
  * ixgbe_configure_msix - Configure MSI-X hardware
  * @adapter: board private structure
@@ -2550,6 +2581,9 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
 	ixgbe_for_each_ring(ring, q_vector->tx)
 		clean_complete &= !!ixgbe_clean_tx_irq(q_vector, ring);
 
+	if (!ixgbe_qv_lock_napi(q_vector))
+		return budget;
+
 	/* attempt to distribute budget to each queue fairly, but don't allow
 	 * the budget to go below 1 because we'll exit polling */
 	if (q_vector->rx.count > 1)
@@ -2558,9 +2592,10 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
 		per_ring_budget = budget;
 
 	ixgbe_for_each_ring(ring, q_vector->rx)
-		clean_complete &= ixgbe_clean_rx_irq(q_vector, ring,
-						     per_ring_budget);
+		clean_complete &= (ixgbe_clean_rx_irq(q_vector, ring,
+				   per_ring_budget) < per_ring_budget);
 
+	ixgbe_qv_unlock_napi(q_vector);
 	/* If all work not completed, return budget and keep polling */
 	if (!clean_complete)
 		return budget;
@@ -3747,16 +3782,25 @@ static void ixgbe_napi_enable_all(struct ixgbe_adapter *adapter)
 {
 	int q_idx;
 
-	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++)
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+		ixgbe_qv_init_lock(adapter->q_vector[q_idx]);
 		napi_enable(&adapter->q_vector[q_idx]->napi);
+	}
 }
 
 static void ixgbe_napi_disable_all(struct ixgbe_adapter *adapter)
 {
 	int q_idx;
 
-	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++)
+	local_bh_disable(); /* for ixgbe_qv_lock_napi() */
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
 		napi_disable(&adapter->q_vector[q_idx]->napi);
+		while (!ixgbe_qv_lock_napi(adapter->q_vector[q_idx])) {
+			pr_info("QV %d locked\n", q_idx);
+			mdelay(1);
+		}
+	}
+	local_bh_enable();
 }
 
 #ifdef CONFIG_IXGBE_DCB
@@ -7177,6 +7221,9 @@ static const struct net_device_ops ixgbe_netdev_ops = {
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller	= ixgbe_netpoll,
 #endif
+#ifdef CONFIG_NET_LL_RX_POLL
+	.ndo_ll_poll		= ixgbe_low_latency_recv,
+#endif
 #ifdef IXGBE_FCOE
 	.ndo_fcoe_ddp_setup = ixgbe_fcoe_ddp_get,
 	.ndo_fcoe_ddp_target = ixgbe_fcoe_ddp_target,



^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 5/5] ixgbe: add extra stats for ndo_ll_poll
  2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
@ 2013-05-29  6:40   ` Eliezer Tamir
  2013-05-29  6:39   ` Eliezer Tamir
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:40 UTC (permalink / raw)
  To: David Miller
  Cc: linux-kernel, netdev, Jesse Brandeburg, Don Skidmore,
	e1000-devel, Willem de Bruijn, Eric Dumazet, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

Add additional statistics to the ixgbe driver for ndo_ll_poll,
defined under LL_EXTENDED_STATS.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 drivers/net/ethernet/intel/ixgbe/ixgbe.h         |   14 ++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |   40 ++++++++++++++++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |    6 +++
 3 files changed, 60 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 04fdbf6..9765772 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -54,6 +54,9 @@
 
 #include <net/ll_poll.h>
 
+#ifdef CONFIG_NET_LL_RX_POLL
+#define LL_EXTENDED_STATS
+#endif
 /* common prefix used by pr_<> macros */
 #undef pr_fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -184,6 +187,11 @@ struct ixgbe_rx_buffer {
 struct ixgbe_queue_stats {
 	u64 packets;
 	u64 bytes;
+#ifdef LL_EXTENDED_STATS
+	u64 yields;
+	u64 misses;
+	u64 cleaned;
+#endif  /* LL_EXTENDED_STATS */
 };
 
 struct ixgbe_tx_queue_stats {
@@ -391,6 +399,9 @@ static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
 		WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
 		q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
 		rc = false;
+#ifdef LL_EXTENDED_STATS
+		q_vector->tx.ring->stats.yields++;
+#endif
 	} else
 		/* we don't care if someone yielded */
 		q_vector->state = IXGBE_QV_STATE_NAPI;
@@ -421,6 +432,9 @@ static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
 	if ((q_vector->state & IXGBE_QV_LOCKED)) {
 		q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
 		rc = false;
+#ifdef LL_EXTENDED_STATS
+		q_vector->rx.ring->stats.yields++;
+#endif
 	} else
 		/* preserve yield marks */
 		q_vector->state |= IXGBE_QV_STATE_POLL;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index d375472..24e2e7a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1054,6 +1054,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i] = 0;
 			data[i+1] = 0;
 			i += 2;
+#ifdef LL_EXTENDED_STATS
+			data[i] = 0;
+			data[i+1] = 0;
+			data[i+2] = 0;
+			i += 3;
+#endif
 			continue;
 		}
 
@@ -1063,6 +1069,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i+1] = ring->stats.bytes;
 		} while (u64_stats_fetch_retry_bh(&ring->syncp, start));
 		i += 2;
+#ifdef LL_EXTENDED_STATS
+		data[i] = ring->stats.yields;
+		data[i+1] = ring->stats.misses;
+		data[i+2] = ring->stats.cleaned;
+		i += 3;
+#endif
 	}
 	for (j = 0; j < IXGBE_NUM_RX_QUEUES; j++) {
 		ring = adapter->rx_ring[j];
@@ -1070,6 +1082,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i] = 0;
 			data[i+1] = 0;
 			i += 2;
+#ifdef LL_EXTENDED_STATS
+			data[i] = 0;
+			data[i+1] = 0;
+			data[i+2] = 0;
+			i += 3;
+#endif
 			continue;
 		}
 
@@ -1079,6 +1097,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i+1] = ring->stats.bytes;
 		} while (u64_stats_fetch_retry_bh(&ring->syncp, start));
 		i += 2;
+#ifdef LL_EXTENDED_STATS
+		data[i] = ring->stats.yields;
+		data[i+1] = ring->stats.misses;
+		data[i+2] = ring->stats.cleaned;
+		i += 3;
+#endif
 	}
 
 	for (j = 0; j < IXGBE_MAX_PACKET_BUFFERS; j++) {
@@ -1115,12 +1139,28 @@ static void ixgbe_get_strings(struct net_device *netdev, u32 stringset,
 			p += ETH_GSTRING_LEN;
 			sprintf(p, "tx_queue_%u_bytes", i);
 			p += ETH_GSTRING_LEN;
+#ifdef LL_EXTENDED_STATS
+			sprintf(p, "tx_q_%u_napi_yield", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_q_%u_misses", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_q_%u_cleaned", i);
+			p += ETH_GSTRING_LEN;
+#endif /* LL_EXTENDED_STATS */
 		}
 		for (i = 0; i < IXGBE_NUM_RX_QUEUES; i++) {
 			sprintf(p, "rx_queue_%u_packets", i);
 			p += ETH_GSTRING_LEN;
 			sprintf(p, "rx_queue_%u_bytes", i);
 			p += ETH_GSTRING_LEN;
+#ifdef LL_EXTENDED_STATS
+			sprintf(p, "rx_q_%u_ll_poll_yield", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_q_%u_misses", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_q_%u_cleaned", i);
+			p += ETH_GSTRING_LEN;
+#endif /* LL_EXTENDED_STATS */
 		}
 		for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++) {
 			sprintf(p, "tx_pb_%u_pxon", i);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 9a7dc40..047ebaa 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2016,6 +2016,12 @@ static int ixgbe_low_latency_recv(struct napi_struct *napi)
 
 	ixgbe_for_each_ring(ring, q_vector->rx) {
 		found = ixgbe_clean_rx_irq(q_vector, ring, 4);
+#ifdef LL_EXTENDED_STATS
+		if (found)
+			ring->stats.cleaned += found;
+		else
+			ring->stats.misses++;
+#endif
 		if (found)
 			break;
 	}


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 net-next 5/5] ixgbe: add extra stats for ndo_ll_poll
@ 2013-05-29  6:40   ` Eliezer Tamir
  0 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29  6:40 UTC (permalink / raw)
  To: David Miller
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	linux-kernel, Alex Rosenbaum, Jesse Brandeburg, Eliezer Tamir,
	Andi Kleen, Eric Dumazet, Eilon Greenstien

Add additional statistics to the ixgbe driver for ndo_ll_poll,
defined under LL_EXTENDED_STATS.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
---

 drivers/net/ethernet/intel/ixgbe/ixgbe.h         |   14 ++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |   40 ++++++++++++++++++++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |    6 +++
 3 files changed, 60 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 04fdbf6..9765772 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -54,6 +54,9 @@
 
 #include <net/ll_poll.h>
 
+#ifdef CONFIG_NET_LL_RX_POLL
+#define LL_EXTENDED_STATS
+#endif
 /* common prefix used by pr_<> macros */
 #undef pr_fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -184,6 +187,11 @@ struct ixgbe_rx_buffer {
 struct ixgbe_queue_stats {
 	u64 packets;
 	u64 bytes;
+#ifdef LL_EXTENDED_STATS
+	u64 yields;
+	u64 misses;
+	u64 cleaned;
+#endif  /* LL_EXTENDED_STATS */
 };
 
 struct ixgbe_tx_queue_stats {
@@ -391,6 +399,9 @@ static inline int ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
 		WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
 		q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
 		rc = false;
+#ifdef LL_EXTENDED_STATS
+		q_vector->tx.ring->stats.yields++;
+#endif
 	} else
 		/* we don't care if someone yielded */
 		q_vector->state = IXGBE_QV_STATE_NAPI;
@@ -421,6 +432,9 @@ static inline int ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
 	if ((q_vector->state & IXGBE_QV_LOCKED)) {
 		q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
 		rc = false;
+#ifdef LL_EXTENDED_STATS
+		q_vector->rx.ring->stats.yields++;
+#endif
 	} else
 		/* preserve yield marks */
 		q_vector->state |= IXGBE_QV_STATE_POLL;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index d375472..24e2e7a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1054,6 +1054,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i] = 0;
 			data[i+1] = 0;
 			i += 2;
+#ifdef LL_EXTENDED_STATS
+			data[i] = 0;
+			data[i+1] = 0;
+			data[i+2] = 0;
+			i += 3;
+#endif
 			continue;
 		}
 
@@ -1063,6 +1069,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i+1] = ring->stats.bytes;
 		} while (u64_stats_fetch_retry_bh(&ring->syncp, start));
 		i += 2;
+#ifdef LL_EXTENDED_STATS
+		data[i] = ring->stats.yields;
+		data[i+1] = ring->stats.misses;
+		data[i+2] = ring->stats.cleaned;
+		i += 3;
+#endif
 	}
 	for (j = 0; j < IXGBE_NUM_RX_QUEUES; j++) {
 		ring = adapter->rx_ring[j];
@@ -1070,6 +1082,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i] = 0;
 			data[i+1] = 0;
 			i += 2;
+#ifdef LL_EXTENDED_STATS
+			data[i] = 0;
+			data[i+1] = 0;
+			data[i+2] = 0;
+			i += 3;
+#endif
 			continue;
 		}
 
@@ -1079,6 +1097,12 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
 			data[i+1] = ring->stats.bytes;
 		} while (u64_stats_fetch_retry_bh(&ring->syncp, start));
 		i += 2;
+#ifdef LL_EXTENDED_STATS
+		data[i] = ring->stats.yields;
+		data[i+1] = ring->stats.misses;
+		data[i+2] = ring->stats.cleaned;
+		i += 3;
+#endif
 	}
 
 	for (j = 0; j < IXGBE_MAX_PACKET_BUFFERS; j++) {
@@ -1115,12 +1139,28 @@ static void ixgbe_get_strings(struct net_device *netdev, u32 stringset,
 			p += ETH_GSTRING_LEN;
 			sprintf(p, "tx_queue_%u_bytes", i);
 			p += ETH_GSTRING_LEN;
+#ifdef LL_EXTENDED_STATS
+			sprintf(p, "tx_q_%u_napi_yield", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_q_%u_misses", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_q_%u_cleaned", i);
+			p += ETH_GSTRING_LEN;
+#endif /* LL_EXTENDED_STATS */
 		}
 		for (i = 0; i < IXGBE_NUM_RX_QUEUES; i++) {
 			sprintf(p, "rx_queue_%u_packets", i);
 			p += ETH_GSTRING_LEN;
 			sprintf(p, "rx_queue_%u_bytes", i);
 			p += ETH_GSTRING_LEN;
+#ifdef LL_EXTENDED_STATS
+			sprintf(p, "rx_q_%u_ll_poll_yield", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_q_%u_misses", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_q_%u_cleaned", i);
+			p += ETH_GSTRING_LEN;
+#endif /* LL_EXTENDED_STATS */
 		}
 		for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++) {
 			sprintf(p, "tx_pb_%u_pxon", i);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 9a7dc40..047ebaa 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2016,6 +2016,12 @@ static int ixgbe_low_latency_recv(struct napi_struct *napi)
 
 	ixgbe_for_each_ring(ring, q_vector->rx) {
 		found = ixgbe_clean_rx_irq(q_vector, ring, 4);
+#ifdef LL_EXTENDED_STATS
+		if (found)
+			ring->stats.cleaned += found;
+		else
+			ring->stats.misses++;
+#endif
 		if (found)
 			break;
 	}



^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29  6:39   ` Eliezer Tamir
  (?)
@ 2013-05-29 12:56   ` Eric Dumazet
  2013-05-29 13:09       ` David Laight
  2013-05-29 15:04     ` Eliezer Tamir
  -1 siblings, 2 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 12:56 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
> Adds a napi_id and a hashing mechanism to lookup a napi by id.
> This will be used by subsequent patches to implement low latency
> Ethernet device polling.
> Based on a code sample by Eric Dumazet.
> 
> Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
> ---

OK this looks good enough for inclusion.

If a v7 ever is submitted, please add a 'static' for

static DEFINE_SPINLOCK(napi_hash_lock);
static unsigned int napi_gen_id;
static DEFINE_HASHTABLE(napi_hash, 8);

If David chose to apply v6, I'll submit a patch for this.

Signed-off-by: Eric Dumazet <edumazet@google.com>




^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29 12:56   ` Eric Dumazet
@ 2013-05-29 13:09       ` David Laight
  2013-05-29 15:04     ` Eliezer Tamir
  1 sibling, 0 replies; 48+ messages in thread
From: David Laight @ 2013-05-29 13:09 UTC (permalink / raw)
  To: Eric Dumazet, Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir


> > Adds a napi_id and a hashing mechanism to lookup a napi by id.

Is this one of the places where the 'id' can be selected
so that the 'hash' lookup never collides?

	David


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
@ 2013-05-29 13:09       ` David Laight
  0 siblings, 0 replies; 48+ messages in thread
From: David Laight @ 2013-05-29 13:09 UTC (permalink / raw)
  To: Eric Dumazet, Eliezer Tamir
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller

> > Adds a napi_id and a hashing mechanism to lookup a napi by id.

Is this one of the places where the 'id' can be selected
so that the 'hash' lookup never collides?

	David


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29  6:39   ` Eliezer Tamir
@ 2013-05-29 13:37     ` Eric Dumazet
  -1 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:37 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:

> +/* we don't mind a ~2.5% imprecision */
> +#define TSC_MHZ (tsc_khz >> 10)
> +
> +static inline unsigned long ll_end_time(void)
> +{
> +	return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
> +}

This can overflow.

Multiply is giving 32bits, as tsc_khz is an int, and sysctl_net_ll_poll
is an int.

unsigned long sysctl_net_ll_poll ?

Also, if we want this to work on i386, the correct type to use for
ll_end_time(void) would be cycles_t
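
Something along these lines would cover both points; a sketch only, not
necessarily the final code, assuming sysctl_net_ll_poll is widened to
unsigned long and TSC_MHZ stays as defined in the patch:

extern unsigned long sysctl_net_ll_poll;	/* widened from int */

static inline cycles_t ll_end_time(void)
{
	/* the cast forces a 64-bit multiply even on i386, where
	 * unsigned long is 32 bits but cycles_t is 64 bits wide
	 */
	return (cycles_t)TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
}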




^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
@ 2013-05-29 13:37     ` Eric Dumazet
  0 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:37 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: Willem de Bruijn, Don, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:

> +/* we don't mind a ~2.5% imprecision */
> +#define TSC_MHZ (tsc_khz >> 10)
> +
> +static inline unsigned long ll_end_time(void)
> +{
> +	return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
> +}

This can overflow.

Multiply is giving 32bits, as tsc_khz is an int, and sysctl_net_ll_poll
is an int.

unsigned long sysctl_net_ll_poll ?

Also, if we want this to work on i386, the correct type to use for
ll_end_time(void) would be cycles_t





^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 3/5] tcp: add TCP support for low latency receive poll.
  2013-05-29  6:39   ` Eliezer Tamir
@ 2013-05-29 13:38     ` Eric Dumazet
  -1 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:38 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
> Adds busy-poll support for TCP.
> Copy the napi_id from an incoming skb to the sk in tcp_v[46]_rcv().
> When there is no data in the socket we busy-poll in tcp_recvmsg().
> This is a good example of how to add busy-poll support to new protocols.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Tested-by: Willem de Bruijn <willemb@google.com>
> Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
> ---
> 
>  net/ipv4/tcp.c      |    5 +++++
>  net/ipv4/tcp_ipv4.c |    2 ++
>  net/ipv6/tcp_ipv6.c |    2 ++
>  3 files changed, 9 insertions(+), 0 deletions(-)

Acked-by: Eric Dumazet <edumazet@google.com>



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 3/5] tcp: add TCP support for low latency receive poll.
@ 2013-05-29 13:38     ` Eric Dumazet
  0 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:38 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: Willem de Bruijn, Don, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
> Adds busy-poll support for TCP.
> Copy the napi_id from an incoming skb to the sk in tcp_v[46]_rcv().
> When there is no data in the socket we busy-poll in tcp_recvmsg().
> This is a good example of how to add busy-poll support to new protocols.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Tested-by: Willem de Bruijn <willemb@google.com>
> Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
> ---
> 
>  net/ipv4/tcp.c      |    5 +++++
>  net/ipv4/tcp_ipv4.c |    2 ++
>  net/ipv6/tcp_ipv6.c |    2 ++
>  3 files changed, 9 insertions(+), 0 deletions(-)

Acked-by: Eric Dumazet <edumazet@google.com>




^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 13:37     ` Eric Dumazet
@ 2013-05-29 13:42       ` David Laight
  -1 siblings, 0 replies; 48+ messages in thread
From: David Laight @ 2013-05-29 13:42 UTC (permalink / raw)
  To: Eric Dumazet, Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir


> > +/* we don't mind a ~2.5% imprecision */
> > +#define TSC_MHZ (tsc_khz >> 10)

Wouldn't (tsc_khz << 10) be better?

	David


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
@ 2013-05-29 13:42       ` David Laight
  0 siblings, 0 replies; 48+ messages in thread
From: David Laight @ 2013-05-29 13:42 UTC (permalink / raw)
  To: Eric Dumazet, Eliezer Tamir
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller

> > +/* we don't mind a ~2.5% imprecision */
> > +#define TSC_MHZ (tsc_khz >> 10)

Wouldn't (tsc_khz << 10) be better?

	David


^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29 13:09       ` David Laight
@ 2013-05-29 13:43         ` Eric Dumazet
  -1 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:43 UTC (permalink / raw)
  To: David Laight
  Cc: Eliezer Tamir, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Or Gerlitz, Alex Rosenbaum,
	Eliezer Tamir

On Wed, 2013-05-29 at 14:09 +0100, David Laight wrote:
> > > Adds a napi_id and a hashing mechanism to lookup a napi by id.
> 
> Is this one of the places where the 'id' can be selected
> so that the 'hash' lookup never collides?

Very few devices will ever call napi_hash_add()

[ Real NIC RX queues, not virtual devices ]

We use a hash table with 256 slots, so the chance of collision is about 0%.

Let's not over-engineer the thing before it's even used.
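
For reference, a rough sketch of the scheme being described: the bucket count
and the static declarations match the snippet quoted earlier in the thread,
but the field and helper bodies here are illustrative, not necessarily the
exact code in patch 1/5.

static DEFINE_SPINLOCK(napi_hash_lock);
static unsigned int napi_gen_id;
static DEFINE_HASHTABLE(napi_hash, 8);		/* 2^8 = 256 buckets */

void napi_hash_add(struct napi_struct *napi)
{
	spin_lock(&napi_hash_lock);
	napi->napi_id = ++napi_gen_id;		/* ids come from a simple counter */
	hash_add_rcu(napi_hash, &napi->napi_hash_node, napi->napi_id);
	spin_unlock(&napi_hash_lock);
}

/* caller holds rcu_read_lock(); a rare bucket collision is resolved here */
struct napi_struct *napi_by_id(unsigned int napi_id)
{
	struct napi_struct *napi;

	hash_for_each_possible_rcu(napi_hash, napi, napi_hash_node, napi_id)
		if (napi->napi_id == napi_id)
			return napi;
	return NULL;
}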



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
@ 2013-05-29 13:43         ` Eric Dumazet
  0 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:43 UTC (permalink / raw)
  To: David Laight
  Cc: Willem de Bruijn, Don, Eliezer Tamir, Or Gerlitz, e1000-devel,
	netdev, HPA, Jesse Brandeburg, Alex Rosenbaum, linux-kernel,
	Eliezer Tamir, Andi Kleen, Eilon Greenstien, David Miller

On Wed, 2013-05-29 at 14:09 +0100, David Laight wrote:
> > > Adds a napi_id and a hashing mechanism to lookup a napi by id.
> 
> Is this one of the places where the 'id' can be selected
> so that the 'hash' lookup never collides?

Very few devices will ever call napi_hash_add()

[ Real NIC RX queues, not virtual devices ]

We use a hash table with 256 slots, so the chance of collision is about 0%.

Let's not over-engineer the thing before it's even used.




^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 13:42       ` David Laight
@ 2013-05-29 13:48         ` Eric Dumazet
  -1 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:48 UTC (permalink / raw)
  To: David Laight
  Cc: Eliezer Tamir, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Or Gerlitz, Alex Rosenbaum,
	Eliezer Tamir

On Wed, 2013-05-29 at 14:42 +0100, David Laight wrote:
> > > +/* we don't mind a ~2.5% imprecision */
> > > +#define TSC_MHZ (tsc_khz >> 10)
> 
> Wouldn't (tsc_khz << 10) be better?

We want the number of cycles per usec.

Your formula gives the number of cycles per 1.024 seconds.
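
A quick check of the units, using a hypothetical 3 GHz TSC (tsc_khz == 3,000,000):

	tsc_khz / 1000 = 3000		exact cycles per usec
	tsc_khz >> 10  = 2929		cycles per usec, ~2.4% low (the "~2.5% imprecision")
	tsc_khz << 10  = 3,072,000,000	cycles in 1.024 seconds, not in a usec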



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
@ 2013-05-29 13:48         ` Eric Dumazet
  0 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 13:48 UTC (permalink / raw)
  To: David Laight
  Cc: Willem de Bruijn, Don, Eliezer Tamir, Or Gerlitz, e1000-devel,
	netdev, HPA, Jesse Brandeburg, Alex Rosenbaum, linux-kernel,
	Eliezer Tamir, Andi Kleen, Eilon Greenstien, David Miller

On Wed, 2013-05-29 at 14:42 +0100, David Laight wrote:
> > > +/* we don't mind a ~2.5% imprecision */
> > > +#define TSC_MHZ (tsc_khz >> 10)
> 
> Wouldn't (tsc_khz << 10) be better?

We want the number of cycles per usec.

Your formula gives the number of cycles per 1.024 seconds.




^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 13:37     ` Eric Dumazet
@ 2013-05-29 14:01       ` Eliezer Tamir
  -1 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29 14:01 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

On 29/05/2013 16:37, Eric Dumazet wrote:
> On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:

>> +static inline unsigned long ll_end_time(void)
>> +{
>> +	return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
>> +}
>
> This can overflow.
>
> Multiply is giving 32bits, as tsc_khz is an int, and sysctl_net_ll_poll
> is an int.
>
> unsigned long sysctl_net_ll_poll ?
OK

> Also, if we want this to work on i386, the correct type to use for
> ll_end_time(void) would be cycles_t

OK
I would be really surprised if someone uses this on an i386, but I guess 
you never know.

Thanks!
-Eliezer

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
@ 2013-05-29 14:01       ` Eliezer Tamir
  0 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29 14:01 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller

On 29/05/2013 16:37, Eric Dumazet wrote:
> On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:

>> +static inline unsigned long ll_end_time(void)
>> +{
>> +	return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
>> +}
>
> This can overflow.
>
> Multiply is giving 32bits, as tsc_khz is an int, and sysctl_net_ll_poll
> is an int.
>
> unsigned long sysctl_net_ll_poll ?
OK

> Also, if we want this to work on i386, the correct type to use for
> ll_end_time(void) would be cycles_t

OK
I would be really surprised if someone uses this on an i386, but I guess 
you never know.

Thanks!
-Eliezer


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29  6:39   ` Eliezer Tamir
@ 2013-05-29 14:14     ` Or Gerlitz
  -1 siblings, 0 replies; 48+ messages in thread
From: Or Gerlitz @ 2013-05-29 14:14 UTC (permalink / raw)
  To: Eliezer Tamir, Eric Dumazet
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On Wed, May 29, 2013 at 9:39 AM, Eliezer Tamir
<eliezer.tamir@linux.intel.com> wrote:
> Adds a new ndo_ll_poll method and the code that supports and uses it.
> This method can be used by low latency applications to busy poll Ethernet
> device queues directly from the socket code. The value of sysctl_net_ll_poll
> controls how many microseconds to poll. Set to zero to disable.

Unlike with TCP sockets, UDP sockets may receive packets from multiple
sources and hence the receiving context may be steered to be executed
on different cores through RSS or other Flow-Steering HW mechanisms
which could mean different napi contexts for the same socket, is that
a problem here? what's the severity?

Or.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
@ 2013-05-29 14:14     ` Or Gerlitz
  0 siblings, 0 replies; 48+ messages in thread
From: Or Gerlitz @ 2013-05-29 14:14 UTC (permalink / raw)
  To: Eliezer Tamir, Eric Dumazet
  Cc: Willem de Bruijn, e1000-devel, netdev, HPA, Jesse Brandeburg,
	Alex Rosenbaum, linux-kernel, Eliezer Tamir, Andi Kleen,
	Eilon Greenstien, David Miller

On Wed, May 29, 2013 at 9:39 AM, Eliezer Tamir
<eliezer.tamir@linux.intel.com> wrote:
> Adds a new ndo_ll_poll method and the code that supports and uses it.
> This method can be used by low latency applications to busy poll Ethernet
> device queues directly from the socket code. The value of sysctl_net_ll_poll
> controls how many microseconds to poll. Set to zero to disable.

Unlike with TCP sockets, UDP sockets may receive packets from multiple
sources and hence the receiving context may be steered to be executed
on different cores through RSS or other Flow-Steering HW mechanisms
which could mean different napi contexts for the same socket, is that
a problem here? what's the severity?

Or.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:01       ` Eliezer Tamir
  (?)
@ 2013-05-29 14:20       ` yaniv saar
  2013-05-29 15:01         ` Eliezer Tamir
  -1 siblings, 1 reply; 48+ messages in thread
From: yaniv saar @ 2013-05-29 14:20 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: Willem de Bruijn, Or Gerlitz, e1000-devel, netdev, HPA,
	Jesse Brandeburg, Alex Rosenbaum, linux-kernel, Eliezer Tamir,
	Andi Kleen, Eilon Greenstien, David Miller



Hi Eliezer,

(If I'm too late, then consider this a note for the future...)
Why make polling a system-wide configuration?
Wouldn't it make more sense to implement a sock option?
An even better solution might be aggregation/combination of both types of
configurations.

-- Yaniv Sa'ar



On Wed, May 29, 2013 at 5:01 PM, Eliezer Tamir <
eliezer.tamir@linux.intel.com> wrote:

> On 29/05/2013 16:37, Eric Dumazet wrote:
>
>> On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
>>
>
>>> +static inline unsigned long ll_end_time(void)
>>> +{
>>> +       return TSC_MHZ * ACCESS_ONCE(sysctl_net_ll_poll) + get_cycles();
>>> +}
>>>
>>
>> This can overflow.
>>
>> Multiply is giving 32bits, as tsc_khz is an int, and sysctl_net_ll_poll
>> is an int.
>>
>> unsigned long sysctl_net_ll_poll ?
>>
> OK
>
>
>  Also, if we want this to work on i386, the correct type to use for
>> ll_end_time(void) would be cycles_t
>>
>
> OK
> I would be really surprised if someone uses this on an i386, but I guess
> you never know.
>
> Thanks!
> -Eliezer
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:14     ` Or Gerlitz
  (?)
@ 2013-05-29 14:40     ` yaniv saar
  -1 siblings, 0 replies; 48+ messages in thread
From: yaniv saar @ 2013-05-29 14:40 UTC (permalink / raw)
  To: linux-kernel, netdev

Hi Eliezer,

(If I'm too late, then consider this a note for the future...)
Why make polling a system-wide configuration?
Wouldn't it make more sense to implement a sock option?
An even better solution might be aggregation/combination of both types
of configurations.

-- Yaniv Sa'ar


On Wed, May 29, 2013 at 5:14 PM, Or Gerlitz <or.gerlitz@gmail.com> wrote:
> On Wed, May 29, 2013 at 9:39 AM, Eliezer Tamir
> <eliezer.tamir@linux.intel.com> wrote:
>> Adds a new ndo_ll_poll method and the code that supports and uses it.
>> This method can be used by low latency applications to busy poll Ethernet
>> device queues directly from the socket code. The value of sysctl_net_ll_poll
>> controls how many microseconds to poll. Set to zero to disable.
>
> Unlike with TCP sockets, UDP sockets may receive packets from multiple
> sources and hence the receiving context may be steered to be executed
> on different cores through RSS or other Flow-Steering HW mechanisms
> which could mean different napi contexts for the same socket, is that
> a problem here? what's the severity?
>
> Or.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:14     ` Or Gerlitz
@ 2013-05-29 14:59       ` Eliezer Tamir
  -1 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29 14:59 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Eric Dumazet, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On 29/05/2013 17:14, Or Gerlitz wrote:
> On Wed, May 29, 2013 at 9:39 AM, Eliezer Tamir
> <eliezer.tamir@linux.intel.com> wrote:
>> Adds a new ndo_ll_poll method and the code that supports and uses it.
>> This method can be used by low latency applications to busy poll Ethernet
>> device queues directly from the socket code. The value of sysctl_net_ll_poll
>> controls how many microseconds to poll. Set to zero to disable.
>
> Unlike with TCP sockets, UDP sockets may receive packets from multiple
> sources and hence the receiving context may be steered to be executed
> on different cores through RSS or other Flow-Steering HW mechanisms
> which could mean different napi contexts for the same socket, is that
> a problem here? what's the severity?

Nothing will break if you poll on the wrong queue.
Your data will come through normal NAPI processing of the right queue.

One of the things we plan on adding in the next version is a more
fine-grained control over which sockets get to busy poll.

-Eliezer
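
The reason a stale queue is harmless, roughly: the socket only remembers the
napi_id carried by the last skb it received, so after an RSS or flow-steering
move the very next packet re-points it at the new queue.  A sketch of that
bookkeeping, shown here only for illustration of the idea:

	/* driver RX path, via skb_mark_ll() (see patch 4/5) */
	skb->napi_id = napi->napi_id;

	/* protocol receive path, e.g. tcp_v[46]_rcv() (see patch 3/5) */
	sk->sk_napi_id = skb->napi_id;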

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:20       ` yaniv saar
@ 2013-05-29 15:01         ` Eliezer Tamir
  0 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29 15:01 UTC (permalink / raw)
  To: yaniv saar
  Cc: Eric Dumazet, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Or Gerlitz, Alex Rosenbaum,
	Eliezer Tamir

On 29/05/2013 17:20, yaniv saar wrote:
> Hi Eliezer,
>
> (If I'm too late for this round, then consider this a note for the future...)
> Why make polling a system-wide configuration?
> Wouldn't it make more sense to implement a socket option?
> An even better solution might be a combination of both types of
> configuration.
>
> -- Yaniv Sa'ar

We plan on adding a socket option in the future.

-Eliezer

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29 12:56   ` Eric Dumazet
  2013-05-29 13:09       ` David Laight
@ 2013-05-29 15:04     ` Eliezer Tamir
  1 sibling, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-29 15:04 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Or Gerlitz, Alex Rosenbaum, Eliezer Tamir

On 29/05/2013 15:56, Eric Dumazet wrote:
> On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
>> Adds a napi_id and a hashing mechanism to lookup a napi by id.
>> This will be used by subsequent patches to implement low latency
>> Ethernet device polling.
>> Based on a code sample by Eric Dumazet.
>>
>> Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
>> ---
>
> OK this looks good enough for inclusion.
>
> If a v7 ever is submitted, please add a 'static' for
>
> static DEFINE_SPINLOCK(napi_hash_lock);
> static unsigned int napi_gen_id;
> static DEFINE_HASHTABLE(napi_hash, 8);
>

I will post a v7 along with the changes you suggested to 2/5; I will
wait a bit to see if there are other things to fix.

Thanks,
Eliezer

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:59       ` Eliezer Tamir
@ 2013-05-29 18:52         ` Or Gerlitz
  -1 siblings, 0 replies; 48+ messages in thread
From: Or Gerlitz @ 2013-05-29 18:52 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: Eric Dumazet, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

Eliezer Tamir <eliezer.tamir@linux.intel.com> wrote:
> Or Gerlitz wrote:

>> Unlike with TCP sockets, UDP sockets may receive packets from multiple
>> sources and hence the receiving context may be steered to be executed
>> on different cores through RSS or other Flow-Steering HW mechanisms
>> which could mean different napi contexts for the same socket, is that
>> a problem here? what's the severity?

> Nothing will break if you poll on the wrong queue.
> Your data will come through normal NAPI processing of the right queue.

Can you elaborate a little further, why you call this "wrong" and "right"?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 18:52         ` Or Gerlitz
@ 2013-05-29 19:08           ` Eric Dumazet
  -1 siblings, 0 replies; 48+ messages in thread
From: Eric Dumazet @ 2013-05-29 19:08 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Eliezer Tamir, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On Wed, 2013-05-29 at 21:52 +0300, Or Gerlitz wrote:
> Eliezer Tamir <eliezer.tamir@linux.intel.com> wrote:
> > Or Gerlitz wrote:
> 
> >> Unlike with TCP sockets, UDP sockets may receive packets from multiple
> >> sources and hence the receiving context may be steered to be executed
> >> on different cores through RSS or other Flow-Steering HW mechanisms
> >> which could mean different napi contexts for the same socket, is that
> >> a problem here? what's the severity?
> 
> > Nothing will break if you poll on the wrong queue.
> > Your data will come through normal NAPI processing of the right queue.
> 
> Can you elaborate a little further, why you call this "wrong" and "right"?
> --

This definitely needs some documentation, because before llpoll, the
device RX path was serviced by the CPU receiving the hardware interrupt.

So polling the "wrong" queue could add false sharing and wrong NUMA
allocations.




^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29  6:39   ` Eliezer Tamir
@ 2013-05-29 20:09     ` Ben Hutchings
  -1 siblings, 0 replies; 48+ messages in thread
From: Ben Hutchings @ 2013-05-29 20:09 UTC (permalink / raw)
  To: Eliezer Tamir
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Eric Dumazet,
	Andi Kleen, HPA, Eilon Greenstien, Or Gerlitz, Alex Rosenbaum,
	Eliezer Tamir

On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
> Adds a napi_id and a hashing mechanism to lookup a napi by id.
> This will be used by subsequent patches to implement low latency
> Ethernet device polling.
> Based on a code sample by Eric Dumazet.
> 
> Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
[...]
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
[...]
> @@ -4136,6 +4143,53 @@ void napi_complete(struct napi_struct *n)
>  }
>  EXPORT_SYMBOL(napi_complete);
>  
> +void napi_hash_add(struct napi_struct *napi)
> +{
> +	if (!test_and_set_bit(NAPI_STATE_HASHED, &napi->state)) {
> +
> +		spin_lock(&napi_hash_lock);
> +
> +		/* 0 is not a valid id */
> +		napi->napi_id = 0;
> +		while (!napi->napi_id)
> +			napi->napi_id = ++napi_gen_id;

Suppose we're loading/unloading one driver repeatedly while another one
remains loaded the whole time.  Then once napi_gen_id wraps around, the
same ID can be assigned to multiple contexts.

So far as I can see, assigning the same ID twice will just make polling
stop working for one of the NAPI contexts; I don't think it causes a
crash.  And it is exceedingly unlikely to happen in production.  But if
you're going to the trouble of handling wrap-around at all, you'd better
handle this.
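
One way to handle that, sketched against the quoted v6 code rather than
taken from any later version: keep looping until an id is found that is
both non-zero and not already present in the hash, reusing the
napi_by_id() lookup this patch adds (the loop runs with napi_hash_lock
held):

		/* 0 is not a valid id; also skip an id that is still
		 * in the hash.  Both cases should be extremely rare.
		 */
		napi->napi_id = 0;
		while (!napi->napi_id) {
			napi->napi_id = ++napi_gen_id;
			if (napi_by_id(napi->napi_id))
				napi->napi_id = 0;
		}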

[...]
> +/* must be called under rcu_read_lock(), as we dont take a reference */
> +struct napi_struct *napi_by_id(int napi_id)
> +{
> +	unsigned int hash = napi_id % HASH_SIZE(napi_hash);
[...]

napi_id should be declared unsigned int here, as elsewhere.  The
division can't actually yield a negative result because HASH_SIZE() has
type size_t and napi_id is promoted to match, but I had to go and look
at hashtable.h to check that.
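
In other words, a sketch of the prototype with the suggested type:

/* must be called under rcu_read_lock(), as we don't take a reference */
struct napi_struct *napi_by_id(unsigned int napi_id);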

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 14:14     ` Or Gerlitz
@ 2013-05-29 20:20       ` Ben Hutchings
  -1 siblings, 0 replies; 48+ messages in thread
From: Ben Hutchings @ 2013-05-29 20:20 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Eliezer Tamir, Eric Dumazet, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On Wed, 2013-05-29 at 17:14 +0300, Or Gerlitz wrote:
> On Wed, May 29, 2013 at 9:39 AM, Eliezer Tamir
> <eliezer.tamir@linux.intel.com> wrote:
> > Adds a new ndo_ll_poll method and the code that supports and uses it.
> > This method can be used by low latency applications to busy poll Ethernet
> > device queues directly from the socket code. The value of sysctl_net_ll_poll
> > controls how many microseconds to poll. Set to zero to disable.
> 
> Unlike with TCP sockets, UDP sockets may receive packets from multiple
> sources and hence the receiving context may be steered to be executed
> on different cores through RSS or other Flow-Steering HW mechanisms
> which could mean different napi contexts for the same socket, is that
> a problem here? what's the severity?

Maybe ARFS could be extended so the driver can tell whether a UDP socket
it's steering for is connected or not.  Then for disconnected sockets
the driver can use a filter that only matches destination address.
(Though that's probably undesirable if the socket has SO_REUSEPORT set.)
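
For what it's worth, a rough sketch of the check such a driver-side
helper could make; the helper name is made up, but a connected UDP
socket does have sk_state set to TCP_ESTABLISHED:

#include <net/sock.h>
#include <net/tcp_states.h>

/* Illustrative only: an ARFS-style steering path could branch on
 * whether the UDP socket is connected and, for unconnected sockets,
 * fall back to a destination-only filter as suggested above.
 */
static bool udp_sk_is_connected(const struct sock *sk)
{
	return sk->sk_state == TCP_ESTABLISHED;
}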

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 19:08           ` Eric Dumazet
@ 2013-05-30  5:58             ` Eliezer Tamir
  -1 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-30  5:58 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Or Gerlitz, David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Andi Kleen, HPA,
	Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On 29/05/2013 22:08, Eric Dumazet wrote:
> On Wed, 2013-05-29 at 21:52 +0300, Or Gerlitz wrote:
>> Eliezer Tamir <eliezer.tamir@linux.intel.com> wrote:
>>> Or Gerlitz wrote:
>>
>>>> Unlike with TCP sockets, UDP sockets may receive packets from multiple
>>>> sources and hence the receiving context may be steered to be executed
>>>> on different cores through RSS or other Flow-Steering HW mechanisms
>>>> which could mean different napi contexts for the same socket, is that
>>>> a problem here? what's the severity?
>>
>>> Nothing will break if you poll on the wrong queue.
>>> Your data will come through normal NAPI processing of the right queue.
>>
>> Can you elaborate a little further, why you call this "wrong" and "right"?
>> --
>
> This definitely needs some documentation, because before llpoll, the
> device RX path was serviced by the CPU receiving the hardware interrupt.
>
> So polling the "wrong" queue could add false sharing and wrong NUMA
> allocations.

Yes.
To work properly when you have more than one NUMA node, you need to have
packet steering set up, either by your NIC or by HW-accelerated RFS.

I would like to add a short write-up of the design and suggested
configuration. Where should it go?

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 2/5] net: implement support for low latency socket polling
  2013-05-29 18:52         ` Or Gerlitz
@ 2013-05-30  6:04           ` Eliezer Tamir
  -1 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-30  6:04 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Eric Dumazet, David Miller, linux-kernel, netdev,
	Jesse Brandeburg, Don Skidmore, e1000-devel, Willem de Bruijn,
	Andi Kleen, HPA, Eilon Greenstien, Alex Rosenbaum, Eliezer Tamir

On 29/05/2013 21:52, Or Gerlitz wrote:
> Eliezer Tamir <eliezer.tamir@linux.intel.com> wrote:
>> Or Gerlitz wrote:
>
>>> Unlike with TCP sockets, UDP sockets may receive packets from multiple
>>> sources and hence the receiving context may be steered to be executed
>>> on different cores through RSS or other Flow-Steering HW mechanisms
>>> which could mean different napi contexts for the same socket, is that
>>> a problem here? what's the severity?
>
>> Nothing will break if you poll on the wrong queue.
>> Your data will come through normal NAPI processing of the right queue.
>
> Can you elaborate a little further, why you call this "wrong" and "right"?

Right == the queue the packets arrive on.
Wrong == any other queue.

BTW, if you have an application that receives UDP data to an unbound 
socket, wouldn't it be better in any case to steer all of the incoming 
packets for this UDP socket to a single queue disregarding the source 
address? (Can't your hardware do that?)

The general approach is that userspace needs to make sure that threads, 
connections and IRQs are bound to the right CPUs.

-Eliezer
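
To illustrate that last point from the userspace side, a minimal sketch
of pinning the polling thread to one CPU; the CPU number is arbitrary,
and matching it to the NIC queue's IRQ affinity is up to the
administrator:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Sketch: pin the calling thread to a given CPU so that the
 * application thread, the socket and the NIC queue's interrupt can be
 * kept on the same core (or at least the same NUMA node).
 */
static int pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}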


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 net-next 1/5] net: add napi_id and hash
  2013-05-29 20:09     ` Ben Hutchings
@ 2013-05-30  6:51       ` Eliezer Tamir
  -1 siblings, 0 replies; 48+ messages in thread
From: Eliezer Tamir @ 2013-05-30  6:51 UTC (permalink / raw)
  To: Ben Hutchings
  Cc: David Miller, linux-kernel, netdev, Jesse Brandeburg,
	Don Skidmore, e1000-devel, Willem de Bruijn, Eric Dumazet,
	Andi Kleen, HPA, Eilon Greenstien, Or Gerlitz, Alex Rosenbaum,
	Eliezer Tamir

On 29/05/2013 23:09, Ben Hutchings wrote:
> On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
>> +void napi_hash_add(struct napi_struct *napi)
>> +{
>> +	if (!test_and_set_bit(NAPI_STATE_HASHED, &napi->state)) {
>> +
>> +		spin_lock(&napi_hash_lock);
>> +
>> +		/* 0 is not a valid id */
>> +		napi->napi_id = 0;
>> +		while (!napi->napi_id)
>> +			napi->napi_id = ++napi_gen_id;
>
> Suppose we're loading/unloading one driver repeatedly while another one
> remains loaded the whole time.  Then once napi_gen_id wraps around, the
> same ID can be assigned to multiple contexts.
>
> So far as I can see, assigning the same ID twice will just make polling
> stop working for one of the NAPI contexts; I don't think it causes a
> crash.  And it is exceedingly unlikely to happen in production.  But if
> you're going to the trouble of handling wrap-around at all, you'd better
> handle this.

OK


> [...]
>> +/* must be called under rcu_read_lock(), as we dont take a reference */
>> +struct napi_struct *napi_by_id(int napi_id)
>> +{
>> +	unsigned int hash = napi_id % HASH_SIZE(napi_hash);
> [...]
>
> napi_id should be declared unsigned int here, as elsewhere.  The
> division can't actually yield a negative result because HASH_SIZE() has
> type size_t and napi_id is promoted to match, but I had to go and look
> at hashtable.h to check that.

Good catch,

Thanks,
Eliezer

^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2013-05-30  6:51 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-05-29  6:39 [PATCH v6 net-next 0/5] net: low latency Ethernet device polling Eliezer Tamir
2013-05-29  6:39 ` [PATCH v6 net-next 1/5] net: add napi_id and hash Eliezer Tamir
2013-05-29  6:39   ` Eliezer Tamir
2013-05-29 12:56   ` Eric Dumazet
2013-05-29 13:09     ` David Laight
2013-05-29 13:09       ` David Laight
2013-05-29 13:43       ` Eric Dumazet
2013-05-29 13:43         ` Eric Dumazet
2013-05-29 15:04     ` Eliezer Tamir
2013-05-29 20:09   ` Ben Hutchings
2013-05-29 20:09     ` Ben Hutchings
2013-05-30  6:51     ` Eliezer Tamir
2013-05-30  6:51       ` Eliezer Tamir
2013-05-29  6:39 ` [PATCH v6 net-next 2/5] net: implement support for low latency socket polling Eliezer Tamir
2013-05-29  6:39   ` Eliezer Tamir
2013-05-29 13:37   ` Eric Dumazet
2013-05-29 13:37     ` Eric Dumazet
2013-05-29 13:42     ` David Laight
2013-05-29 13:42       ` David Laight
2013-05-29 13:48       ` Eric Dumazet
2013-05-29 13:48         ` Eric Dumazet
2013-05-29 14:01     ` Eliezer Tamir
2013-05-29 14:01       ` Eliezer Tamir
2013-05-29 14:20       ` yaniv saar
2013-05-29 15:01         ` Eliezer Tamir
2013-05-29 14:14   ` Or Gerlitz
2013-05-29 14:14     ` Or Gerlitz
2013-05-29 14:40     ` yaniv saar
2013-05-29 14:59     ` Eliezer Tamir
2013-05-29 14:59       ` Eliezer Tamir
2013-05-29 18:52       ` Or Gerlitz
2013-05-29 18:52         ` Or Gerlitz
2013-05-29 19:08         ` Eric Dumazet
2013-05-29 19:08           ` Eric Dumazet
2013-05-30  5:58           ` Eliezer Tamir
2013-05-30  5:58             ` Eliezer Tamir
2013-05-30  6:04         ` Eliezer Tamir
2013-05-30  6:04           ` Eliezer Tamir
2013-05-29 20:20     ` Ben Hutchings
2013-05-29 20:20       ` Ben Hutchings
2013-05-29  6:39 ` [PATCH v6 net-next 3/5] tcp: add TCP support for low latency receive poll Eliezer Tamir
2013-05-29  6:39   ` Eliezer Tamir
2013-05-29 13:38   ` Eric Dumazet
2013-05-29 13:38     ` Eric Dumazet
2013-05-29  6:39 ` [PATCH v6 net-next 4/5] ixgbe: Add support for ndo_ll_poll Eliezer Tamir
2013-05-29  6:39   ` Eliezer Tamir
2013-05-29  6:40 ` [PATCH v6 net-next 5/5] ixgbe: add extra stats " Eliezer Tamir
2013-05-29  6:40   ` Eliezer Tamir
