* [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates
@ 2014-09-02 14:35 Jesper Dangaard Brouer
  2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
                   ` (4 more replies)
  0 siblings, 5 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-02 14:35 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David S. Miller, netdev
  Cc: Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

This patchset uses DaveM's recent API changes to dev_hard_start_xmit()
to implement dequeue bulking from the qdisc layer.

Open questions:

- For now the bulk limit is set to 8 packets; we don't want to stress
  the driver's available ring-buffer space.

- We are not doing proper accounting for weight_p/quota in __qdisc_run(). Do we care?

- Is the (!skb->next) check in dequeue necessary?

- Do we need some checks in dev_requeue_skb() as we could be requeuing a SKB list?

- Should we rename dequeue_skb() to dequeue_skb_list()?


Based on top of:
 commit 364a9e93243d ("sock: deduplicate errqueue dequeue")

---

Jesper Dangaard Brouer (3):
      qdisc: sysctl to adjust bulk dequeue limit
      qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
      qdisc: adjustments for API allowing skb list xmits


 include/net/sch_generic.h  |    2 ++
 net/core/sysctl_net_core.c |    9 +++++++++
 net/sched/sch_generic.c    |   33 ++++++++++++++++++++++++++++-----
 3 files changed, 39 insertions(+), 5 deletions(-)

-- 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits
  2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
@ 2014-09-02 14:35 ` Jesper Dangaard Brouer
  2014-09-02 15:25   ` Eric Dumazet
  2014-09-02 21:06   ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits David Miller
  2014-09-02 14:35 ` [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE Jesper Dangaard Brouer
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-02 14:35 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David S. Miller, netdev
  Cc: Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

Minor adjustments for merge commit 53fda7f7f9e (Merge branch 'xmit_list'),
which allows us to work with a list of SKBs.

Update the code documentation for sch_direct_xmit().

In handle_dev_cpu_collision(), use kfree_skb_list() in the error handling.
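As an aside, the behavioral difference between kfree_skb() and
kfree_skb_list() on a chained SKB can be pictured with a plain
user-space sketch (struct node, free_one() and free_list() are
illustrative stand-ins, not kernel code):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct sk_buff: only the ->next link matters. */
struct node {
	struct node *next;
};

/* Like kfree_skb(): frees only the node it is handed.  If that node is
 * the head of a list, every node reachable via ->next is leaked. */
static int free_one(struct node *n)
{
	free(n);
	return 1;
}

/* Like kfree_skb_list(): walks ->next and frees every node. */
static int free_list(struct node *n)
{
	int freed = 0;

	while (n) {
		struct node *next = n->next;

		free(n);
		n = next;
		freed++;
	}
	return freed;
}
```

With sch_direct_xmit() now handling SKB lists, the error path can be
handed a list, which is why freeing only the head would leak the
remaining segments.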

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---

 net/sched/sch_generic.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index a8bf9f9..5b261e9 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -93,7 +93,7 @@ static inline int handle_dev_cpu_collision(struct sk_buff *skb,
 		 * detect it by checking xmit owner and drop the packet when
 		 * deadloop is detected. Return OK to try the next skb.
 		 */
-		kfree_skb(skb);
+		kfree_skb_list(skb);
 		net_warn_ratelimited("Dead loop on netdevice %s, fix it urgently!\n",
 				     dev_queue->dev->name);
 		ret = qdisc_qlen(q);
@@ -110,9 +110,9 @@ static inline int handle_dev_cpu_collision(struct sk_buff *skb,
 }
 
 /*
- * Transmit one skb, and handle the return status as required. Holding the
- * __QDISC___STATE_RUNNING bit guarantees that only one CPU can execute this
- * function.
+ * Transmit possibly several skbs, and handle the return status as
+ * required. Holding the __QDISC___STATE_RUNNING bit guarantees that
+ * only one CPU can execute this function.
  *
  * Returns to the caller:
  *				0  - queue is empty or throttled.

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
  2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
@ 2014-09-02 14:35 ` Jesper Dangaard Brouer
  2014-09-02 15:22   ` Eric Dumazet
  2014-09-02 16:14   ` Daniel Borkmann
  2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-02 14:35 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David S. Miller, netdev
  Cc: Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

Based on DaveM's recent API work on dev_hard_start_xmit(), that allows
sending/processing an entire skb list.

This patch implements qdisc bulk dequeue, by allowing multiple packets
to be dequeued in dequeue_skb().

One restriction of the new API is that every SKB must belong to the
same TXQ.  This patch takes the easy way out, by restricting bulk
dequeue to qdiscs with the TCQ_F_ONETXQUEUE flag, which specifies that
the qdisc only has a single TXQ attached.


Testing whether this has the desired effect is the challenging part.
Generating enough packets for a backlog queue to form at the qdisc is
a challenge in itself (because overhead elsewhere is a limiting factor;
e.g. I've measured the pure skb_alloc/free cycle to cost 80ns).

After trying many qdisc setups, I figured out that the easiest way to
make a backlog form is to fully load the system, i.e. all CPUs.  And I
can even demonstrate this with the default MQ qdisc.

This is a 12 core CPU (without HT) running trafgen on all 12 cores,
via qdisc-path using sendto():
 * trafgen --cpp --dev $DEV --conf udp_example02_const.trafgen --qdisc-path -t0 --cpus 12

Measuring TX pps:
 * Baseline  : 12,815,925 pps
 * This patch: 14,892,001 pps

This is crazy fast. This measurement is actually "too high", as
10Gbit/s wirespeed is 14,880,952 pps (11,049 pps too fast).
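The core of this patch is the list-building loop added to
dequeue_skb().  Its behaviour can be exercised in isolation with a
user-space sketch, where struct pkt and mock_dequeue() are
hypothetical stand-ins for struct sk_buff and q->dequeue() (note that
limit = 7 allows up to 8 packets in total, counting the initial
dequeue):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct sk_buff: only ->next matters here. */
struct pkt {
	struct pkt *next;
};

static int backlog;	/* packets waiting in the mock qdisc */

/* Mock of q->dequeue(): hands out packets while the backlog lasts. */
static struct pkt *mock_dequeue(void)
{
	if (backlog <= 0)
		return NULL;
	backlog--;
	return calloc(1, sizeof(struct pkt));
}

/* Mirrors the loop in the patch: one initial dequeue, then up to
 * 'limit' additional packets chained via ->next. */
static struct pkt *bulk_dequeue(int limit, int *count)
{
	struct pkt *head, *skb, *new;

	*count = 0;
	head = skb = mock_dequeue();
	if (!head)
		return NULL;
	(*count)++;
	do {
		new = mock_dequeue();
		if (new) {
			skb->next = new;
			skb = new;
			(*count)++;
		}
	} while (new && --limit);
	return head;
}
```

With a deep backlog this chains 8 packets per call; with a shallow
backlog it stops as soon as the mock dequeue returns NULL, just like
the real loop.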

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---

 net/sched/sch_generic.c |   23 ++++++++++++++++++++++-
 1 files changed, 22 insertions(+), 1 deletions(-)

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5b261e9..30814ef 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -56,6 +56,9 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
 	return 0;
 }
 
+/* Note that dequeue_skb can possibly return a SKB list (via skb->next).
+ * A requeued skb (via q->gso_skb) can also be a SKB list.
+ */
 static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
 {
 	struct sk_buff *skb = q->gso_skb;
@@ -70,10 +73,28 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
 		} else
 			skb = NULL;
 	} else {
-		if (!(q->flags & TCQ_F_ONETXQUEUE) || !netif_xmit_frozen_or_stopped(txq)) {
+		if (!(q->flags & TCQ_F_ONETXQUEUE)
+		    || !netif_xmit_frozen_or_stopped(txq)) {
 			skb = q->dequeue(q);
 			if (skb)
 				skb = validate_xmit_skb(skb, qdisc_dev(q));
+			/* bulk dequeue */
+			if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {
+				struct sk_buff *new, *head = skb;
+				int limit = 7;
+
+				do {
+					new = q->dequeue(q);
+					if (new)
+						new = validate_xmit_skb(
+							new, qdisc_dev(q));
+					if (new) {
+						skb->next = new;
+						skb = new;
+					}
+				} while (new && --limit);
+				skb = head;
+			}
 		}
 	}
 

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit
  2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
  2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
  2014-09-02 14:35 ` [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE Jesper Dangaard Brouer
@ 2014-09-02 14:36 ` Jesper Dangaard Brouer
  2014-09-02 15:23   ` Jesper Dangaard Brouer
                     ` (2 more replies)
  2014-09-02 18:04 ` [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Tom Herbert
  2014-09-02 21:05 ` David Miller
  4 siblings, 3 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-02 14:36 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David S. Miller, netdev
  Cc: Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

Allow userspace to adjust how many packets the qdisc is allowed to
bulk dequeue.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

---
Question: should we allow this to be adjusted?

 include/net/sch_generic.h  |    2 ++
 net/core/sysctl_net_core.c |    9 +++++++++
 net/sched/sch_generic.c    |    4 +++-
 3 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index a3cfb8e..b0ac7b5 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -14,6 +14,8 @@ struct qdisc_walker;
 struct tcf_walker;
 struct module;
 
+extern int qdisc_bulk_dequeue_limit;
+
 struct qdisc_rate_table {
 	struct tc_ratespec rate;
 	u32		data[256];
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index cf9cd13..5505841 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -361,6 +361,15 @@ static struct ctl_table net_core_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec
 	},
+	{
+		.procname	= "qdisc_bulk_dequeue_limit",
+		.data		= &qdisc_bulk_dequeue_limit,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.extra1		= &zero,
+		.extra2		= &ushort_max,
+		.proc_handler	= proc_dointvec_minmax
+	},
 	{ }
 };
 
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 30814ef..9cb08c0 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -34,6 +34,8 @@
 const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops;
 EXPORT_SYMBOL(default_qdisc_ops);
 
+int qdisc_bulk_dequeue_limit __read_mostly = 7;
+
 /* Main transmission queue. */
 
 /* Modifications to data participating in scheduling must be protected with
@@ -81,7 +83,7 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
 			/* bulk dequeue */
 			if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {
 				struct sk_buff *new, *head = skb;
-				int limit = 7;
+				int limit = qdisc_bulk_dequeue_limit;
 
 				do {
 					new = q->dequeue(q);

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-02 14:35 ` [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE Jesper Dangaard Brouer
@ 2014-09-02 15:22   ` Eric Dumazet
  2014-09-03  9:31     ` Jesper Dangaard Brouer
  2014-09-02 16:14   ` Daniel Borkmann
  1 sibling, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2014-09-02 15:22 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa,
	Daniel Borkmann

On Tue, 2014-09-02 at 16:35 +0200, Jesper Dangaard Brouer wrote:

> This is crazy fast. This measurement is actually "too high", as
> 10Gbit/s wirespeed is 14,880,952 pps (11,049 pps too fast).
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---

This looks buggy, you forgot about GSO.

(You did the test only for the first dequeued packet, not the following ones.)

Make sure you test your patch with something other than pktgen.

Also, our idea was to use a byte count limit (aka BQL).

If we dequeue 8 64KB packets, this patch adds head-of-line blocking,
which we fought hard to avoid.
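A byte-budget cutoff along the lines Eric suggests could look roughly
like the following user-space sketch (bulk_budget_take() and struct
fake_pkt are made-up names, not kernel API; a real implementation
would consult BQL via the netdev queue):

```c
#include <assert.h>

/* Made-up stand-in carrying only the packet length. */
struct fake_pkt {
	unsigned int len;
};

/* Take packets from 'pkts' until 'byte_budget' would be exceeded.
 * Always take at least one packet, so a single large GSO frame can
 * still be transmitted.  Returns how many packets were taken. */
static int bulk_budget_take(const struct fake_pkt *pkts, int npkts,
			    unsigned int byte_budget)
{
	unsigned int bytes = 0;
	int taken = 0;

	while (taken < npkts) {
		if (taken > 0 && bytes + pkts[taken].len > byte_budget)
			break;	/* budget spent: stop bulking here */
		bytes += pkts[taken].len;
		taken++;
	}
	return taken;
}
```

With a budget near the BQL limit, eight 64KB GSO packets would no
longer be bulked in one go, which addresses the head-of-line blocking
concern.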

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit
  2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
@ 2014-09-02 15:23   ` Jesper Dangaard Brouer
  2014-09-02 15:33   ` Daniel Borkmann
  2014-09-02 21:20   ` Cong Wang
  2 siblings, 0 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-02 15:23 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa,
	Daniel Borkmann

On Tue, 02 Sep 2014 16:36:09 +0200
Jesper Dangaard Brouer <brouer@redhat.com> wrote:

> Allow userspace to adjust how many packets the qdisc is allowed to
> bulk dequeue.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> ---
> Question: should we allow this to be adjusted?
> 
>  include/net/sch_generic.h  |    2 ++
>  net/core/sysctl_net_core.c |    9 +++++++++
>  net/sched/sch_generic.c    |    4 +++-
>  3 files changed, 14 insertions(+), 1 deletions(-)
> 
> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> index a3cfb8e..b0ac7b5 100644
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -14,6 +14,8 @@ struct qdisc_walker;
>  struct tcf_walker;
>  struct module;
>  
> +extern int qdisc_bulk_dequeue_limit;
> +
>  struct qdisc_rate_table {
>  	struct tc_ratespec rate;
>  	u32		data[256];
> diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
> index cf9cd13..5505841 100644
> --- a/net/core/sysctl_net_core.c
> +++ b/net/core/sysctl_net_core.c
> @@ -361,6 +361,15 @@ static struct ctl_table net_core_table[] = {
>  		.mode		= 0644,
>  		.proc_handler	= proc_dointvec
>  	},
> +	{
> +		.procname	= "qdisc_bulk_dequeue_limit",
> +		.data		= &qdisc_bulk_dequeue_limit,
> +		.maxlen		= sizeof(int),
> +		.mode		= 0644,
> +		.extra1		= &zero,
> +		.extra2		= &ushort_max,
> +		.proc_handler	= proc_dointvec_minmax
> +	},
>  	{ }
>  };
>  
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 30814ef..9cb08c0 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -34,6 +34,8 @@
>  const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops;
>  EXPORT_SYMBOL(default_qdisc_ops);
>  
> +int qdisc_bulk_dequeue_limit __read_mostly = 7;
> +
>  /* Main transmission queue. */
>  
>  /* Modifications to data participating in scheduling must be protected with
> @@ -81,7 +83,7 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
>  			/* bulk dequeue */
>  			if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {

Will update the patch with:
-                       if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {
+                       if (qdisc_bulk_dequeue_limit && skb &&
+                           !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {

To allow qdisc_bulk_dequeue_limit==0 to actually disable bulking.



>  				struct sk_buff *new, *head = skb;
> -				int limit = 7;
> +				int limit = qdisc_bulk_dequeue_limit;
>  
>  				do {
>  					new = q->dequeue(q);
> 



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits
  2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
@ 2014-09-02 15:25   ` Eric Dumazet
  2014-09-03 10:12     ` [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer Jesper Dangaard Brouer
  2014-09-02 21:06   ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits David Miller
  1 sibling, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2014-09-02 15:25 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa,
	Daniel Borkmann

On Tue, 2014-09-02 at 16:35 +0200, Jesper Dangaard Brouer wrote:
> Minor adjustments for merge commit 53fda7f7f9e (Merge branch 'xmit_list'),
> which allows us to work with a list of SKBs.
> 
> Update the code documentation for sch_direct_xmit().
> 
> In handle_dev_cpu_collision(), use kfree_skb_list() in the error handling.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---

Please fix all other similar issues in a single patch.

qdisc_reset() and qdisc_destroy() are buggy as well.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit
  2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
  2014-09-02 15:23   ` Jesper Dangaard Brouer
@ 2014-09-02 15:33   ` Daniel Borkmann
  2014-09-02 21:20   ` Cong Wang
  2 siblings, 0 replies; 24+ messages in thread
From: Daniel Borkmann @ 2014-09-02 15:33 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa

On 09/02/2014 04:36 PM, Jesper Dangaard Brouer wrote:
> Allow userspace to adjust how many packets the qdisc is allowed to
> bulk dequeue.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>
> ---
> Question: should we allow this to be adjusted?
>
>   include/net/sch_generic.h  |    2 ++
>   net/core/sysctl_net_core.c |    9 +++++++++
>   net/sched/sch_generic.c    |    4 +++-
>   3 files changed, 14 insertions(+), 1 deletions(-)
>
> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> index a3cfb8e..b0ac7b5 100644
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -14,6 +14,8 @@ struct qdisc_walker;
>   struct tcf_walker;
>   struct module;
>
> +extern int qdisc_bulk_dequeue_limit;
> +
>   struct qdisc_rate_table {
>   	struct tc_ratespec rate;
>   	u32		data[256];
> diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
> index cf9cd13..5505841 100644
> --- a/net/core/sysctl_net_core.c
> +++ b/net/core/sysctl_net_core.c
> @@ -361,6 +361,15 @@ static struct ctl_table net_core_table[] = {
>   		.mode		= 0644,
>   		.proc_handler	= proc_dointvec
>   	},
> +	{
> +		.procname	= "qdisc_bulk_dequeue_limit",
> +		.data		= &qdisc_bulk_dequeue_limit,
> +		.maxlen		= sizeof(int),
> +		.mode		= 0644,
> +		.extra1		= &zero,

If zero, then this would bulk INT_MAX due to a wrap around in patch 2.

> +		.extra2		= &ushort_max,
> +		.proc_handler	= proc_dointvec_minmax
> +	},
>   	{ }
>   };
>
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 30814ef..9cb08c0 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -34,6 +34,8 @@
>   const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops;
>   EXPORT_SYMBOL(default_qdisc_ops);
>
> +int qdisc_bulk_dequeue_limit __read_mostly = 7;
> +

Just a nit, but this default, plus the exit condition in dequeue_skb(),
should be adjusted to work with 8, since you mention a max limit of 8
packets for the bulk dequeue.

It would be great if we could go with 0 knobs first, though.
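Both observations can be checked with a small counter simulation of
the 'do { ... } while (new && --limit)' exit condition (chained() is a
made-up helper counting how many packets the loop takes for a given
limit and backlog):

```c
#include <assert.h>

/* Simulates the exit condition of the bulk-dequeue loop: counts how
 * many packets 'do { ... } while (new && --limit);' takes when the
 * mock queue holds 'backlog' packets. */
static long chained(int limit, long backlog)
{
	long taken = 0;
	int new;

	do {
		new = (backlog > 0);	/* mock q->dequeue() success */
		if (new) {
			backlog--;
			taken++;
		}
	} while (new && --limit);
	return taken;
}
```

With limit = 7 the loop takes 7 packets here (8 in the patch, counting
the initial dequeue before the loop), and with limit = 0 the first
--limit yields -1, which is truthy, so bulking does not stop until the
queue drains: exactly the wrap-around problem noted above.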

>   /* Main transmission queue. */
>
>   /* Modifications to data participating in scheduling must be protected with
> @@ -81,7 +83,7 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
>   			/* bulk dequeue */
>   			if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {
>   				struct sk_buff *new, *head = skb;
> -				int limit = 7;
> +				int limit = qdisc_bulk_dequeue_limit;
>
>   				do {
>   					new = q->dequeue(q);
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-02 14:35 ` [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE Jesper Dangaard Brouer
  2014-09-02 15:22   ` Eric Dumazet
@ 2014-09-02 16:14   ` Daniel Borkmann
  2014-09-03 12:27     ` Jesper Dangaard Brouer
  1 sibling, 1 reply; 24+ messages in thread
From: Daniel Borkmann @ 2014-09-02 16:14 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa

On 09/02/2014 04:35 PM, Jesper Dangaard Brouer wrote:
> Based on DaveM's recent API work on dev_hard_start_xmit(), that allows
> sending/processing an entire skb list.
>
> This patch implements qdisc bulk dequeue, by allowing multiple packets
> to be dequeued in dequeue_skb().
>
> One restriction of the new API is that every SKB must belong to the
> same TXQ.  This patch takes the easy way out, by restricting bulk
> dequeue to qdiscs with the TCQ_F_ONETXQUEUE flag, which specifies that
> the qdisc only has a single TXQ attached.
>
> Testing whether this has the desired effect is the challenging part.
> Generating enough packets for a backlog queue to form at the qdisc is
> a challenge in itself (because overhead elsewhere is a limiting factor;
> e.g. I've measured the pure skb_alloc/free cycle to cost 80ns).
>
> After trying many qdisc setups, I figured out that the easiest way to
> make a backlog form is to fully load the system, i.e. all CPUs.  And I
> can even demonstrate this with the default MQ qdisc.
>
> This is a 12 core CPU (without HT) running trafgen on all 12 cores,
> via qdisc-path using sendto():
>   * trafgen --cpp --dev $DEV --conf udp_example02_const.trafgen --qdisc-path -t0 --cpus 12
>
> Measuring TX pps:
>   * Baseline  : 12,815,925 pps
>   * This patch: 14,892,001 pps

I'm curious about *_RR tests and *_STREAM results e.g. from super_netperf.

One thing we might want to be careful about when comparing before and
after numbers though is that now we still have the old quota limit, but
don't adjust it here. It probably depends on how you interpret quota,
but we now do more work within our quota than before.

> This is crazy fast. This measurement is actually "too high", as
> 10Gbit/s wirespeed is 14,880,952 pps (11,049 pps too fast).
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
...
>   net/sched/sch_generic.c |   23 ++++++++++++++++++++++-
>   1 files changed, 22 insertions(+), 1 deletions(-)
>
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 5b261e9..30814ef 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -56,6 +56,9 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>   	return 0;
>   }
>
> +/* Note that dequeue_skb can possibly return a SKB list (via skb->next).
> + * A requeued skb (via q->gso_skb) can also be a SKB list.
> + */
>   static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
>   {
>   	struct sk_buff *skb = q->gso_skb;
> @@ -70,10 +73,28 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
>   		} else
>   			skb = NULL;
>   	} else {
> -		if (!(q->flags & TCQ_F_ONETXQUEUE) || !netif_xmit_frozen_or_stopped(txq)) {
> +		if (!(q->flags & TCQ_F_ONETXQUEUE)
> +		    || !netif_xmit_frozen_or_stopped(txq)) {
>   			skb = q->dequeue(q);
>   			if (skb)
>   				skb = validate_xmit_skb(skb, qdisc_dev(q));
> +			/* bulk dequeue */
> +			if (skb && !skb->next && (q->flags & TCQ_F_ONETXQUEUE)) {

This check should better be an inline for sch_generic.h, e.g. ...

static inline bool qdisc_may_bulk(const struct Qdisc *qdisc,
				  const struct sk_buff *skb)
{
	return (qdisc->flags & TCQ_F_ONETXQUEUE) && !skb->next;
}

> +				struct sk_buff *new, *head = skb;
> +				int limit = 7;
> +
> +				do {
> +					new = q->dequeue(q);
> +					if (new)
> +						new = validate_xmit_skb(
> +							new, qdisc_dev(q));

This and above dequeue() + validate_xmit_skb() code should probably also go
into a helper if you're at it, e.g. ...

static inline struct sk_buff *qdisc_dequeue_validate(struct Qdisc *qdisc)
{
	struct sk_buff *skb = qdisc->dequeue(qdisc);

	if (skb != NULL)
		skb = validate_xmit_skb(skb, qdisc_dev(qdisc));

	return skb;
}

> +					if (new) {
> +						skb->next = new;
> +						skb = new;
> +					}
> +				} while (new && --limit);
> +				skb = head;
> +			}
>   		}
>   	}
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates
  2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
                   ` (2 preceding siblings ...)
  2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
@ 2014-09-02 18:04 ` Tom Herbert
  2014-09-03 12:47   ` Jesper Dangaard Brouer
  2014-09-02 21:05 ` David Miller
  4 siblings, 1 reply; 24+ messages in thread
From: Tom Herbert @ 2014-09-02 18:04 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, Linux Netdev List, Florian Westphal,
	Hannes Frederic Sowa, Daniel Borkmann

On Tue, Sep 2, 2014 at 7:35 AM, Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
> This patchset uses DaveM's recent API changes to dev_hard_start_xmit()
> to implement dequeue bulking from the qdisc layer.
>
> Open questions:
>
> - For now the bulk limit is set to 8 packets; we don't want to stress
>   the driver's available ring-buffer space.
>
Please get the limit from BQL also; see:
[PATCH net-next] net: Functions to report space available in device TX queues

> - We are not doing proper accounting for weight_p/quota in __qdisc_run(). Do we care?
>
> - Is the (!skb->next) check in dequeue necessary?
>
> - Do we need some checks in dev_requeue_skb() as we could be requeuing a SKB list?
>
> - Should we rename dequeue_skb() to dequeue_skb_list()?
>
>
> Based on top of:
>  commit 364a9e93243d ("sock: deduplicate errqueue dequeue")
>
> ---
>
> Jesper Dangaard Brouer (3):
>       qdisc: sysctl to adjust bulk dequeue limit
>       qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
>       qdisc: adjustments for API allowing skb list xmits
>
>
>  include/net/sch_generic.h  |    2 ++
>  net/core/sysctl_net_core.c |    9 +++++++++
>  net/sched/sch_generic.c    |   33 ++++++++++++++++++++++++++++-----
>  3 files changed, 39 insertions(+), 5 deletions(-)
>
> --

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates
  2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
                   ` (3 preceding siblings ...)
  2014-09-02 18:04 ` [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Tom Herbert
@ 2014-09-02 21:05 ` David Miller
  4 siblings, 0 replies; 24+ messages in thread
From: David Miller @ 2014-09-02 21:05 UTC (permalink / raw)
  To: brouer; +Cc: netdev, fw, hannes, dborkman

From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Tue, 02 Sep 2014 16:35:19 +0200

> Open questions:
> 
> - For now the bulk limit is set to 8 packets; we don't want to stress
>   the driver's available ring-buffer space.

Maybe we can start out with something even lower, like 5.

> - Is the (!skb->next) check in dequeue necessary?
> 
> - Do we need some checks in dev_requeue_skb() as we could be requeuing a SKB list?

The skb->next stuff is to handle GSO software segmented frames.

dev_requeue_skb() naturally handles any list of SKBs already,
GSO software segmented or multi-dequeue; it shouldn't care.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits
  2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
  2014-09-02 15:25   ` Eric Dumazet
@ 2014-09-02 21:06   ` David Miller
  1 sibling, 0 replies; 24+ messages in thread
From: David Miller @ 2014-09-02 21:06 UTC (permalink / raw)
  To: brouer; +Cc: netdev, fw, hannes, dborkman

From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Tue, 02 Sep 2014 16:35:33 +0200

> Minor adjustments for merge commit 53fda7f7f9e (Merge branch 'xmit_list'),
> which allows us to work with a list of SKBs.
> 
> Update the code documentation for sch_direct_xmit().
> 
> In handle_dev_cpu_collision(), use kfree_skb_list() in the error handling.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

Applied, thanks for catching the kfree_skb_list() issue.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit
  2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
  2014-09-02 15:23   ` Jesper Dangaard Brouer
  2014-09-02 15:33   ` Daniel Borkmann
@ 2014-09-02 21:20   ` Cong Wang
  2014-09-03  0:12     ` Tom Herbert
  2 siblings, 1 reply; 24+ messages in thread
From: Cong Wang @ 2014-09-02 21:20 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa,
	Daniel Borkmann

On Tue, Sep 2, 2014 at 7:36 AM, Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
> Allow userspace to adjust how many packets the qdisc is allowed to
> bulk dequeue.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>
> ---
> Question: should we allow this to be adjusted?
>

A sysctl is ugly and does not seem to fit well with Qdisc, which always
uses netlink, so I think a netlink flag might be better if we can find
a generic one.

Also, you forgot to document it.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit
  2014-09-02 21:20   ` Cong Wang
@ 2014-09-03  0:12     ` Tom Herbert
  0 siblings, 0 replies; 24+ messages in thread
From: Tom Herbert @ 2014-09-03  0:12 UTC (permalink / raw)
  To: Cong Wang
  Cc: Jesper Dangaard Brouer, David S. Miller, netdev,
	Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

On Tue, Sep 2, 2014 at 2:20 PM, Cong Wang <cwang@twopensource.com> wrote:
>
> On Tue, Sep 2, 2014 at 7:36 AM, Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> > Allow userspace to adjust how many packets the qdisc is allowed to
> > bulk dequeue.
> >
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> >
> > ---
> > Question: should we allow this to be adjusted?
> >
>
> A sysctl is ugly and seems not fit well with Qdisc which always uses netlink,
> so I think a netlink flag might be better if we can find a generic one.
>
You could also make this a device specific attribute easily enough.

> Also, you forgot to document it.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-02 15:22   ` Eric Dumazet
@ 2014-09-03  9:31     ` Jesper Dangaard Brouer
  2014-09-03 10:23       ` Florian Westphal
  0 siblings, 1 reply; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-03  9:31 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa,
	Daniel Borkmann, brouer

On Tue, 02 Sep 2014 08:22:42 -0700
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Tue, 2014-09-02 at 16:35 +0200, Jesper Dangaard Brouer wrote:
> 
> > This is crazy fast. This measurement is actually "too high", as
> > 10Gbit/s wirespeed is 14,880,952 pps (11,049 pps too fast).
> > 
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > ---
> 
> This looks buggy, you forgot about GSO.

The GSO check is (!skb->next), right(?)

> (You did the test only for the first dequeued packet, not the following ones.)

Ah! yes, for the following packets!


> Make sure you test your patch with something other than pktgen.

I was not using pktgen, but trafgen (raw/af_packet) ;-)
And I've also been using netperf and super_netperf for testing.

pktgen is difficult to use in this scenario, because it bypasses the
qdisc layer.  But I have actually (with help from John Fastabend)
constructed a setup where I add a VLAN interface, which pktgen can
xmit on, and which makes packets travel the qdisc layer of the
underlying real device (pktgen's clone_skb must be 0 in this setup).
So, I'm not currently using pktgen...


> Also, our idea was to use a byte count limit (aka BQL)

It makes sense. I will look into using the BQL limits.


> If we dequeue 8 64KB packets, this patch adds head of line blocking,
> which we fought hard.

I should not be able to add GSO packets to the skb list in
dequeue_skb(), or did I miss something?

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
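
[Editorial note: the per-packet GSO concern in this exchange can be sketched in
plain C.  This is a hypothetical userspace mock -- `struct skb` and the FIFO
are stand-ins for the kernel's sk_buff and qdisc queue, and the `gso` flag
stands in for skb_is_gso() -- not the actual dequeue_skb() implementation.
The point is that every packet appended to the bulk list must pass the check,
not only the first dequeued one, and a GSO packet is handed over on its own.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace mock -- 'struct skb' only stands in for the
 * kernel's sk_buff, and 'gso' for skb_is_gso(skb). */
struct skb {
	struct skb *next;
	bool gso;
};

/* Bulk-dequeue sketch: pull up to 'limit' packets off a FIFO into an
 * skb list, checking EVERY packet (not only the first) so that a GSO
 * packet is never chained behind other packets. */
struct skb *bulk_dequeue(struct skb **fifo, int limit)
{
	struct skb *head = NULL, **tail = &head;
	int i;

	for (i = 0; i < limit && *fifo; i++) {
		struct skb *skb = *fifo;

		/* GSO packet mid-list: stop here, leave it queued */
		if (skb->gso && head)
			break;

		*fifo = skb->next;
		skb->next = NULL;
		*tail = skb;
		tail = &skb->next;

		/* GSO packet as the first dequeue: send it alone */
		if (skb->gso)
			break;
	}
	return head;
}
```

With a FIFO of non-GSO, GSO, non-GSO packets, this dequeues the first packet
alone, then the GSO packet alone -- mirroring the "check the following packets
too" point made above.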

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-02 15:25   ` Eric Dumazet
@ 2014-09-03 10:12     ` Jesper Dangaard Brouer
  2014-09-04  3:42       ` David Miller
  0 siblings, 1 reply; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-03 10:12 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David S. Miller, netdev
  Cc: Florian Westphal, Hannes Frederic Sowa, Daniel Borkmann

More minor fixes to merge commit 53fda7f7f9e (Merge branch 'xmit_list')
that allows us to work with a list of SKBs.

Fixing exit cases in qdisc_reset() and qdisc_destroy(), where a
leftover requeued SKB (qdisc->gso_skb) can potentially be an skb
list, thus use kfree_skb_list().

This is a followup to commit 10770bc2d1 ("qdisc: adjustments for
API allowing skb list xmits").

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---

 net/sched/sch_generic.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5b261e9..19696eb 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -621,7 +621,7 @@ void qdisc_reset(struct Qdisc *qdisc)
 		ops->reset(qdisc);
 
 	if (qdisc->gso_skb) {
-		kfree_skb(qdisc->gso_skb);
+		kfree_skb_list(qdisc->gso_skb);
 		qdisc->gso_skb = NULL;
 		qdisc->q.qlen = 0;
 	}
@@ -657,7 +657,7 @@ void qdisc_destroy(struct Qdisc *qdisc)
 	module_put(ops->owner);
 	dev_put(qdisc_dev(qdisc));
 
-	kfree_skb(qdisc->gso_skb);
+	kfree_skb_list(qdisc->gso_skb);
 	/*
 	 * gen_estimator est_timer() might access qdisc->q.lock,
 	 * wait a RCU grace period before freeing qdisc.
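
[Editorial note: the effect of the two-line change above can be illustrated
with a small userspace mock -- the types and the `freed` counter are
illustrative, not kernel code.  kfree_skb() would free only the head of a
->next-chained list, while kfree_skb_list() walks and frees the whole chain,
and is safe on a NULL pointer, as the qdisc exit paths require.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative mock -- 'struct skb' stands in for sk_buff, and 'freed'
 * just counts frees so the behaviour is observable in a test. */
struct skb {
	struct skb *next;
};

static int freed;

static void mock_kfree_skb(struct skb *skb)
{
	freed++;
	free(skb);
}

/* Like kfree_skb_list(): free every skb chained via ->next, not just
 * the head.  Calling it with NULL is a harmless no-op. */
void mock_kfree_skb_list(struct skb *head)
{
	while (head) {
		struct skb *next = head->next;

		mock_kfree_skb(head);
		head = next;
	}
}
```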

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-03  9:31     ` Jesper Dangaard Brouer
@ 2014-09-03 10:23       ` Florian Westphal
  0 siblings, 0 replies; 24+ messages in thread
From: Florian Westphal @ 2014-09-03 10:23 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Eric Dumazet, David S. Miller, netdev, Florian Westphal,
	Hannes Frederic Sowa, Daniel Borkmann

Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> On Tue, 02 Sep 2014 08:22:42 -0700
> Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> > On Tue, 2014-09-02 at 16:35 +0200, Jesper Dangaard Brouer wrote:
> > 
> > > This is crazy fast. This measurement is actually "too-high" as
> > > 10Gbit/s wirespeed is 14,880,952 (11049 pps too fast).
> > > 
> > > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > > ---
> > 
> > This looks buggy, you forgot about GSO.
> 
> The GSO check is (!skb->next), right(?)

No, you'd want to use skb_is_gso() helper.

> > Also, our idea was to use a byte count limit (aka BQL)
> 
> It makes sense. I will look into using the BQL limits.

Thanks Jesper.

> > If we dequeue 8 64KB packets, this patch adds head of line blocking,
> > which we fought hard.

Just to make sure: using BQL limits would resolve your HOL blocking
concerns, right?
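
[Editorial note: a rough sketch of the byte-budget idea being discussed.
This is a hypothetical helper, not the kernel's BQL API: bulk dequeue
accumulates packet sizes against the bytes the TX queue can currently absorb,
so a handful of 64KB GSO packets cannot be chained head-of-line, while small
packets still bulk up to the packet limit.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of a BQL-style byte budget: return how many of
 * the next 'npkts' packets may be bulk-dequeued, bounded both by a
 * packet limit and by the bytes the TX queue can absorb.  Always
 * allows at least one packet so an oversized packet still moves. */
int bulk_budget_pkts(const unsigned int *pkt_len, int npkts,
		     unsigned int bytes_avail, int pkt_limit)
{
	unsigned int bytes = 0;
	int n = 0;

	while (n < npkts && n < pkt_limit) {
		/* Next packet would blow the budget: leave it queued
		 * (unless it is the very first packet). */
		if (bytes + pkt_len[n] > bytes_avail && n > 0)
			break;
		bytes += pkt_len[n];
		n++;
		if (bytes >= bytes_avail)
			break;
	}
	return n;
}
```

With 64KB GSO packets only one is dequeued per round, while 1500-byte packets
fill the byte budget or hit the packet limit, whichever comes first.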

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE
  2014-09-02 16:14   ` Daniel Borkmann
@ 2014-09-03 12:27     ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-03 12:27 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: David S. Miller, netdev, Florian Westphal, Hannes Frederic Sowa, brouer

On Tue, 02 Sep 2014 18:14:36 +0200
Daniel Borkmann <dborkman@redhat.com> wrote:

> On 09/02/2014 04:35 PM, Jesper Dangaard Brouer wrote:
>
> > This patch implements qdisc bulk dequeue, by allowing multiple packets
> > to be dequeued in dequeue_skb().
> >
[...]
 
> This check should better be an inline for sch_generic.h, e.g. ...
> 
> static inline bool qdisc_may_bulk(const struct Qdisc *qdisc,
> 				  const struct sk_buff *skb)
> {
> 	return (qdisc->flags & TCQ_F_ONETXQUEUE) && !skb->next;
> }

Good idea to inline these functions; it should help readability of
the code, thanks.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates
  2014-09-02 18:04 ` [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Tom Herbert
@ 2014-09-03 12:47   ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-03 12:47 UTC (permalink / raw)
  To: Tom Herbert
  Cc: David S. Miller, Linux Netdev List, Florian Westphal,
	Hannes Frederic Sowa, Daniel Borkmann, brouer

On Tue, 2 Sep 2014 11:04:06 -0700
Tom Herbert <therbert@google.com> wrote:

> On Tue, Sep 2, 2014 at 7:35 AM, Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> > This patchset uses DaveM's recent API changes to dev_hard_start_xmit(),
> > from the qdisc layer, to implement dequeue bulking.
> >
> > Open questions:
> >
> > - For now set bulk limit to 8 packets, don't want to stress the driver
> >   avail ring_buffer space.
> >
> Please get limit from BQL also, see:
> [PATCH net-next] net: Functions to report space available in device TX queues

Thank you Tom. I will include that patch in my stack (keeping you as
author), and use the functions you are proposing.
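
[Editorial note: the space-reporting idea can be sketched as follows.  This
is a simplified mock whose field names only loosely mirror the kernel's
struct dql in include/linux/dynamic_queue_limits.h; treat it as an
illustration, not the kernel implementation.  Available space is the current
byte limit minus the bytes queued but not yet completed, and it can go
negative when the queue is over its limit.]

```c
#include <assert.h>

/* Simplified mock of the BQL bookkeeping: a queue tracks bytes handed
 * to the device against bytes the device has completed, and compares
 * the in-flight amount to a dynamically adjusted limit. */
struct mock_dql {
	unsigned int limit;		/* current byte limit */
	unsigned int num_queued;	/* bytes handed to the device */
	unsigned int num_completed;	/* bytes the device finished */
};

/* Bytes still allowed onto the hardware queue; negative means the
 * queue is currently over its limit and bulking should stop. */
int mock_dql_avail(const struct mock_dql *dql)
{
	return (int)dql->limit - (int)(dql->num_queued - dql->num_completed);
}
```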


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-03 10:12     ` [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer Jesper Dangaard Brouer
@ 2014-09-04  3:42       ` David Miller
  2014-09-04  5:24         ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: David Miller @ 2014-09-04  3:42 UTC (permalink / raw)
  To: brouer; +Cc: netdev, fw, hannes, dborkman

From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Wed, 03 Sep 2014 12:12:50 +0200

> More minor fixes to merge commit 53fda7f7f9e (Merge branch 'xmit_list')
> that allows us to work with a list of SKBs.
> 
> Fixing exit cases in qdisc_reset() and qdisc_destroy(), where a
> leftover requeued SKB (qdisc->gso_skb) can have the potential of
> being a skb list, thus use kfree_skb_list().
> 
> This is a followup to commit 10770bc2d1 ("qdisc: adjustments for
> API allowing skb list xmits").
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

Applied.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-04  3:42       ` David Miller
@ 2014-09-04  5:24         ` Eric Dumazet
  2014-09-04  5:39           ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2014-09-04  5:24 UTC (permalink / raw)
  To: David Miller; +Cc: brouer, netdev, fw, hannes, dborkman

On Wed, 2014-09-03 at 20:42 -0700, David Miller wrote:
> From: Jesper Dangaard Brouer <brouer@redhat.com>
> Date: Wed, 03 Sep 2014 12:12:50 +0200
> 
> > More minor fixes to merge commit 53fda7f7f9e (Merge branch 'xmit_list')
> > that allows us to work with a list of SKBs.
> > 
> > Fixing exit cases in qdisc_reset() and qdisc_destroy(), where a
> > leftover requeued SKB (qdisc->gso_skb) can have the potential of
> > being a skb list, thus use kfree_skb_list().
> > 
> > This is a followup to commit 10770bc2d1 ("qdisc: adjustments for
> > API allowing skb list xmits").
> > 
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> Applied.

Strange, I sent the same patch a little earlier ;)

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-04  5:24         ` Eric Dumazet
@ 2014-09-04  5:39           ` Jesper Dangaard Brouer
  2014-09-05  5:41             ` David Miller
  0 siblings, 1 reply; 24+ messages in thread
From: Jesper Dangaard Brouer @ 2014-09-04  5:39 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, fw, hannes, dborkman, brouer

On Wed, 03 Sep 2014 22:24:19 -0700
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Wed, 2014-09-03 at 20:42 -0700, David Miller wrote:
> > From: Jesper Dangaard Brouer <brouer@redhat.com>
> > Date: Wed, 03 Sep 2014 12:12:50 +0200
> > 
> > > More minor fixes to merge commit 53fda7f7f9e (Merge branch 'xmit_list')
> > > that allows us to work with a list of SKBs.
> > > 
> > > Fixing exit cases in qdisc_reset() and qdisc_destroy(), where a
> > > leftover requeued SKB (qdisc->gso_skb) can have the potential of
> > > being a skb list, thus use kfree_skb_list().
> > > 
> > > This is a followup to commit 10770bc2d1 ("qdisc: adjustments for
> > > API allowing skb list xmits").
> > > 
> > > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > 
> > Applied.
> 
> Strange, I sent same patch a little earlier ;)

Yes, very strange, especially as Dave also said he had applied your
patch too (I did comment on your post, apologizing for not noticing
your patch, before sending this one).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-04  5:39           ` Jesper Dangaard Brouer
@ 2014-09-05  5:41             ` David Miller
  2014-09-05 13:42               ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: David Miller @ 2014-09-05  5:41 UTC (permalink / raw)
  To: brouer; +Cc: eric.dumazet, netdev, fw, hannes, dborkman

From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Thu, 4 Sep 2014 07:39:12 +0200

> On Wed, 03 Sep 2014 22:24:19 -0700
> Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
>> On Wed, 2014-09-03 at 20:42 -0700, David Miller wrote:
>> > From: Jesper Dangaard Brouer <brouer@redhat.com>
>> > Date: Wed, 03 Sep 2014 12:12:50 +0200
>> > 
>> > > More minor fixes to merge commit 53fda7f7f9e (Merge branch 'xmit_list')
>> > > that allows us to work with a list of SKBs.
>> > > 
>> > > Fixing exit cases in qdisc_reset() and qdisc_destroy(), where a
>> > > leftover requeued SKB (qdisc->gso_skb) can have the potential of
>> > > being a skb list, thus use kfree_skb_list().
>> > > 
>> > > This is a followup to commit 10770bc2d1 ("qdisc: adjustments for
>> > > API allowing skb list xmits").
>> > > 
>> > > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>> > 
>> > Applied.
>> 
>> Strange, I sent same patch a little earlier ;)
> 
> Yes, very strange, especially as Dave also said he had applied your
> patch too (I did comment on your post, apologizing for not noticing
> your patch, before sending this one).

This is my fault.

I put Eric's copy into my GIT tree on my workstation at home,
build-tested it, but did not push it out.

Then I did GIT work from my laptop for the following two days. :-/

Sorry about that.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer
  2014-09-05  5:41             ` David Miller
@ 2014-09-05 13:42               ` Eric Dumazet
  0 siblings, 0 replies; 24+ messages in thread
From: Eric Dumazet @ 2014-09-05 13:42 UTC (permalink / raw)
  To: David Miller; +Cc: brouer, netdev, fw, hannes, dborkman

On Thu, 2014-09-04 at 22:41 -0700, David Miller wrote:

> I put Eric's copy into my GIT tree on my workstation at home, build
> tested it, but did not push it out.
> 
> Then I did GIT work from my laptop for the following two days. :-/
> 
> Sorry about that.

No worries, I knew it was something like that ;)

Thanks !

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2014-09-05 13:42 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-02 14:35 [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Jesper Dangaard Brouer
2014-09-02 14:35 ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits Jesper Dangaard Brouer
2014-09-02 15:25   ` Eric Dumazet
2014-09-03 10:12     ` [net-next PATCH] qdisc: exit case fixes for skb list handling in qdisc layer Jesper Dangaard Brouer
2014-09-04  3:42       ` David Miller
2014-09-04  5:24         ` Eric Dumazet
2014-09-04  5:39           ` Jesper Dangaard Brouer
2014-09-05  5:41             ` David Miller
2014-09-05 13:42               ` Eric Dumazet
2014-09-02 21:06   ` [net-next PATCH 1/3] qdisc: adjustments for API allowing skb list xmits David Miller
2014-09-02 14:35 ` [net-next PATCH 2/3] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE Jesper Dangaard Brouer
2014-09-02 15:22   ` Eric Dumazet
2014-09-03  9:31     ` Jesper Dangaard Brouer
2014-09-03 10:23       ` Florian Westphal
2014-09-02 16:14   ` Daniel Borkmann
2014-09-03 12:27     ` Jesper Dangaard Brouer
2014-09-02 14:36 ` [net-next PATCH 3/3] qdisc: sysctl to adjust bulk dequeue limit Jesper Dangaard Brouer
2014-09-02 15:23   ` Jesper Dangaard Brouer
2014-09-02 15:33   ` Daniel Borkmann
2014-09-02 21:20   ` Cong Wang
2014-09-03  0:12     ` Tom Herbert
2014-09-02 18:04 ` [net-next PATCH 0/3] qdisc bulk dequeuing and utilizing delayed tailptr updates Tom Herbert
2014-09-03 12:47   ` Jesper Dangaard Brouer
2014-09-02 21:05 ` David Miller
