netdev.vger.kernel.org archive mirror
* [patch net-next v5 00/11] couple of net/sched fixes+improvements
@ 2013-02-12 10:11 Jiri Pirko
  2013-02-12 10:11 ` [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS() Jiri Pirko
                   ` (10 more replies)
  0 siblings, 11 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:11 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

v4->v5:
 - added patch "sch_api: introduce qdisc_watchdog_schedule_ns()"
 - removed bogus patch "tbf: fix value set for q->ptokens"
 - fixed watchdog scheduling
   (patch "tbf: improved accuracy at high rates")
 - fixed q->mtu handling
   (patch "tbf: improved accuracy at high rates")
 - added gso skb checks to "peak" branches
   (patch "tbf: take into account gso skbs" and
    patch "act_police: remove <=mtu check for gso skbs")

v3->v4:
 - cache the mtu_ptokens value instead of computing it via psched_l2t_ns in the fast path
   (patch "act_police: improved accuracy at high rates" and
    patch "tbf: fix value set for q->ptokens")

v2->v3:
 - fixed a schedule-while-atomic issue
   (patch "act_police: improved accuracy at high rates")

v1->v2:
 - made struct psched_ratecfg const in the params of a couple of inline functions
   (patch "sch: make htb_rate_cfg and functions around that generic")
 - fixed misspelled "peak"
   (patch "tbf: improved accuracy at high rates")
 - added last 4 patches to this set

Jiri Pirko (11):
  htb: use PSCHED_TICKS2NS()
  htb: fix values in opt dump
  htb: remove pointless first initialization of buffer and cbuffer
  htb: initialize cl->tokens and cl->ctokens correctly
  sch: make htb_rate_cfg and functions around that generic
  sch_api: introduce qdisc_watchdog_schedule_ns()
  tbf: improved accuracy at high rates
  act_police: move struct tcf_police to act_police.c
  act_police: improved accuracy at high rates
  tbf: take into account gso skbs
  act_police: remove <=mtu check for gso skbs

 include/net/act_api.h     |  15 -------
 include/net/pkt_sched.h   |  10 ++++-
 include/net/sch_generic.h |  19 +++++++++
 net/sched/act_police.c    | 102 ++++++++++++++++++++++++++++------------------
 net/sched/sch_api.c       |   6 +--
 net/sched/sch_generic.c   |  37 +++++++++++++++++
 net/sched/sch_htb.c       |  80 +++++++-----------------------------
 net/sched/sch_tbf.c       |  78 +++++++++++++++++------------------
 8 files changed, 182 insertions(+), 165 deletions(-)

-- 
1.8.1.2


* [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS()
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
@ 2013-02-12 10:11 ` Jiri Pirko
  2013-02-13  0:00   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 02/11] htb: fix values in opt dump Jiri Pirko
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:11 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 net/sched/sch_htb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 51561ea..476992c 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1512,8 +1512,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 	htb_precompute_ratedata(&cl->rate);
 	htb_precompute_ratedata(&cl->ceil);
 
-	cl->buffer = hopt->buffer << PSCHED_SHIFT;
-	cl->cbuffer = hopt->buffer << PSCHED_SHIFT;
+	cl->buffer = PSCHED_TICKS2NS(hopt->buffer);
+	cl->cbuffer = PSCHED_TICKS2NS(hopt->buffer);
 
 	sch_tree_unlock(sch);
 
-- 
1.8.1.2

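For reference, the macros involved here are, in kernels of this era,
essentially the following (a sketch from include/net/pkt_sched.h; treat
the exact definitions as assumed rather than quoted):

	#define PSCHED_SHIFT		6
	#define PSCHED_TICKS2NS(x)	((s64)(x) << PSCHED_SHIFT)
	#define PSCHED_NS2TICKS(x)	((x) >> PSCHED_SHIFT)

so PSCHED_TICKS2NS() is exactly the << PSCHED_SHIFT it replaces, and the
patch is a readability cleanup rather than a behavior change.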

* [patch net-next v5 02/11] htb: fix values in opt dump
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
  2013-02-12 10:11 ` [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS() Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 23:51   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer Jiri Pirko
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

in htb_change_class() cl->buffer and cl->cbuffer are stored in ns.
So in dump, convert them back to psched ticks.

Note this was introduced by:
commit 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd
    htb: improved accuracy at high rates

Please consider this for -net/-stable.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 net/sched/sch_htb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 476992c..14a83dc 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1135,9 +1135,9 @@ static int htb_dump_class(struct Qdisc *sch, unsigned long arg,
 	memset(&opt, 0, sizeof(opt));
 
 	opt.rate.rate = cl->rate.rate_bps >> 3;
-	opt.buffer = cl->buffer;
+	opt.buffer = PSCHED_NS2TICKS(cl->buffer);
 	opt.ceil.rate = cl->ceil.rate_bps >> 3;
-	opt.cbuffer = cl->cbuffer;
+	opt.cbuffer = PSCHED_NS2TICKS(cl->cbuffer);
 	opt.quantum = cl->quantum;
 	opt.prio = cl->prio;
 	opt.level = cl->level;
-- 
1.8.1.2


* [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
  2013-02-12 10:11 ` [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS() Jiri Pirko
  2013-02-12 10:12 ` [patch net-next v5 02/11] htb: fix values in opt dump Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-13  0:00   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly Jiri Pirko
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

These are initialized correctly a couple of lines later in the function.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 net/sched/sch_htb.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 14a83dc..547912e9 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1503,9 +1503,6 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			cl->prio = TC_HTB_NUMPRIO - 1;
 	}
 
-	cl->buffer = hopt->buffer;
-	cl->cbuffer = hopt->cbuffer;
-
 	cl->rate.rate_bps = (u64)hopt->rate.rate << 3;
 	cl->ceil.rate_bps = (u64)hopt->ceil.rate << 3;
 
-- 
1.8.1.2


* [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (2 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-13  0:00   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic Jiri Pirko
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

These are in ns, so convert from ticks to ns.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 net/sched/sch_htb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 547912e9..2b22544 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1459,8 +1459,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 		cl->parent = parent;
 
 		/* set class to be in HTB_CAN_SEND state */
-		cl->tokens = hopt->buffer;
-		cl->ctokens = hopt->cbuffer;
+		cl->tokens = PSCHED_TICKS2NS(hopt->buffer);
+		cl->ctokens = PSCHED_TICKS2NS(hopt->cbuffer);
 		cl->mbuffer = 60 * PSCHED_TICKS_PER_SEC;	/* 1min */
 		cl->t_c = psched_get_time();
 		cl->cmode = HTB_CAN_SEND;
-- 
1.8.1.2


* [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (3 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-13  0:00   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns() Jiri Pirko
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

As it is going to be used in tbf as well, push these to generic code.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 include/net/sch_generic.h | 19 ++++++++++++++
 net/sched/sch_generic.c   | 37 +++++++++++++++++++++++++++
 net/sched/sch_htb.c       | 65 +++++++----------------------------------------
 3 files changed, 65 insertions(+), 56 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 2d06c2a..2761c90 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -679,4 +679,23 @@ static inline struct sk_buff *skb_act_clone(struct sk_buff *skb, gfp_t gfp_mask,
 }
 #endif
 
+struct psched_ratecfg {
+	u64 rate_bps;
+	u32 mult;
+	u32 shift;
+};
+
+static inline u64 psched_l2t_ns(const struct psched_ratecfg *r,
+				unsigned int len)
+{
+	return ((u64)len * r->mult) >> r->shift;
+}
+
+extern void psched_ratecfg_precompute(struct psched_ratecfg *r, u32 rate);
+
+static inline u32 psched_ratecfg_getrate(const struct psched_ratecfg *r)
+{
+	return r->rate_bps >> 3;
+}
+
 #endif
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5d81a44..ffad481 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -25,6 +25,7 @@
 #include <linux/rcupdate.h>
 #include <linux/list.h>
 #include <linux/slab.h>
+#include <net/sch_generic.h>
 #include <net/pkt_sched.h>
 #include <net/dst.h>
 
@@ -896,3 +897,39 @@ void dev_shutdown(struct net_device *dev)
 
 	WARN_ON(timer_pending(&dev->watchdog_timer));
 }
+
+void psched_ratecfg_precompute(struct psched_ratecfg *r, u32 rate)
+{
+	u64 factor;
+	u64 mult;
+	int shift;
+
+	r->rate_bps = rate << 3;
+	r->shift = 0;
+	r->mult = 1;
+	/*
+	 * Calibrate mult, shift so that token counting is accurate
+	 * for smallest packet size (64 bytes).  Token (time in ns) is
+	 * computed as (bytes * 8) * NSEC_PER_SEC / rate_bps.  It will
+	 * work as long as the smallest packet transfer time can be
+	 * accurately represented in nanosec.
+	 */
+	if (r->rate_bps > 0) {
+		/*
+		 * Higher shift gives better accuracy.  Find the largest
+		 * shift such that mult fits in 32 bits.
+		 */
+		for (shift = 0; shift < 16; shift++) {
+			r->shift = shift;
+			factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
+			mult = div64_u64(factor, r->rate_bps);
+			if (mult > UINT_MAX)
+				break;
+		}
+
+		r->shift = shift - 1;
+		factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
+		r->mult = div64_u64(factor, r->rate_bps);
+	}
+}
+EXPORT_SYMBOL(psched_ratecfg_precompute);
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 2b22544..03c2692 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -38,6 +38,7 @@
 #include <linux/workqueue.h>
 #include <linux/slab.h>
 #include <net/netlink.h>
+#include <net/sch_generic.h>
 #include <net/pkt_sched.h>
 
 /* HTB algorithm.
@@ -71,12 +72,6 @@ enum htb_cmode {
 	HTB_CAN_SEND		/* class can send */
 };
 
-struct htb_rate_cfg {
-	u64 rate_bps;
-	u32 mult;
-	u32 shift;
-};
-
 /* interior & leaf nodes; props specific to leaves are marked L: */
 struct htb_class {
 	struct Qdisc_class_common common;
@@ -124,8 +119,8 @@ struct htb_class {
 	int filter_cnt;
 
 	/* token bucket parameters */
-	struct htb_rate_cfg rate;
-	struct htb_rate_cfg ceil;
+	struct psched_ratecfg rate;
+	struct psched_ratecfg ceil;
 	s64 buffer, cbuffer;	/* token bucket depth/rate */
 	psched_tdiff_t mbuffer;	/* max wait time */
 	s64 tokens, ctokens;	/* current number of tokens */
@@ -168,45 +163,6 @@ struct htb_sched {
 	struct work_struct work;
 };
 
-static u64 l2t_ns(struct htb_rate_cfg *r, unsigned int len)
-{
-	return ((u64)len * r->mult) >> r->shift;
-}
-
-static void htb_precompute_ratedata(struct htb_rate_cfg *r)
-{
-	u64 factor;
-	u64 mult;
-	int shift;
-
-	r->shift = 0;
-	r->mult = 1;
-	/*
-	 * Calibrate mult, shift so that token counting is accurate
-	 * for smallest packet size (64 bytes).  Token (time in ns) is
-	 * computed as (bytes * 8) * NSEC_PER_SEC / rate_bps.  It will
-	 * work as long as the smallest packet transfer time can be
-	 * accurately represented in nanosec.
-	 */
-	if (r->rate_bps > 0) {
-		/*
-		 * Higher shift gives better accuracy.  Find the largest
-		 * shift such that mult fits in 32 bits.
-		 */
-		for (shift = 0; shift < 16; shift++) {
-			r->shift = shift;
-			factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
-			mult = div64_u64(factor, r->rate_bps);
-			if (mult > UINT_MAX)
-				break;
-		}
-
-		r->shift = shift - 1;
-		factor = 8LLU * NSEC_PER_SEC * (1 << r->shift);
-		r->mult = div64_u64(factor, r->rate_bps);
-	}
-}
-
 /* find class in global hash table using given handle */
 static inline struct htb_class *htb_find(u32 handle, struct Qdisc *sch)
 {
@@ -632,7 +588,7 @@ static inline void htb_accnt_tokens(struct htb_class *cl, int bytes, s64 diff)
 
 	if (toks > cl->buffer)
 		toks = cl->buffer;
-	toks -= (s64) l2t_ns(&cl->rate, bytes);
+	toks -= (s64) psched_l2t_ns(&cl->rate, bytes);
 	if (toks <= -cl->mbuffer)
 		toks = 1 - cl->mbuffer;
 
@@ -645,7 +601,7 @@ static inline void htb_accnt_ctokens(struct htb_class *cl, int bytes, s64 diff)
 
 	if (toks > cl->cbuffer)
 		toks = cl->cbuffer;
-	toks -= (s64) l2t_ns(&cl->ceil, bytes);
+	toks -= (s64) psched_l2t_ns(&cl->ceil, bytes);
 	if (toks <= -cl->mbuffer)
 		toks = 1 - cl->mbuffer;
 
@@ -1134,9 +1090,9 @@ static int htb_dump_class(struct Qdisc *sch, unsigned long arg,
 
 	memset(&opt, 0, sizeof(opt));
 
-	opt.rate.rate = cl->rate.rate_bps >> 3;
+	opt.rate.rate = psched_ratecfg_getrate(&cl->rate);
 	opt.buffer = PSCHED_NS2TICKS(cl->buffer);
-	opt.ceil.rate = cl->ceil.rate_bps >> 3;
+	opt.ceil.rate = psched_ratecfg_getrate(&cl->ceil);
 	opt.cbuffer = PSCHED_NS2TICKS(cl->cbuffer);
 	opt.quantum = cl->quantum;
 	opt.prio = cl->prio;
@@ -1503,11 +1459,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			cl->prio = TC_HTB_NUMPRIO - 1;
 	}
 
-	cl->rate.rate_bps = (u64)hopt->rate.rate << 3;
-	cl->ceil.rate_bps = (u64)hopt->ceil.rate << 3;
-
-	htb_precompute_ratedata(&cl->rate);
-	htb_precompute_ratedata(&cl->ceil);
+	psched_ratecfg_precompute(&cl->rate, hopt->rate.rate);
+	psched_ratecfg_precompute(&cl->ceil, hopt->ceil.rate);
 
 	cl->buffer = PSCHED_TICKS2NS(hopt->buffer);
 	cl->cbuffer = PSCHED_TICKS2NS(hopt->buffer);
-- 
1.8.1.2

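To see what psched_ratecfg_precompute() buys, here is a minimal
standalone userspace sketch of the same mult/shift search (the
10 Gbit/s rate and the packet lengths are illustrative assumptions,
not values from the patch):

	#include <stdio.h>
	#include <stdint.h>
	#include <limits.h>

	#define NSEC_PER_SEC 1000000000ULL

	int main(void)
	{
		uint64_t rate_bps = 10ULL * 1000 * 1000 * 1000; /* 10 Gbit/s */
		uint64_t mult;
		unsigned int len;
		int shift;

		/* essentially the same search as the kernel code: the largest
		 * shift for which mult = 8 * NSEC_PER_SEC * 2^shift / rate_bps
		 * still fits in 32 bits
		 */
		for (shift = 0; shift < 16; shift++)
			if (8 * NSEC_PER_SEC * (1ULL << shift) / rate_bps > UINT_MAX)
				break;
		shift--;
		mult = 8 * NSEC_PER_SEC * (1ULL << shift) / rate_bps;

		/* fast path: transmit time in ns with no division */
		for (len = 64; len <= 65536; len <<= 2) {
			uint64_t approx = ((uint64_t)len * mult) >> shift;
			uint64_t exact = (uint64_t)len * 8 * NSEC_PER_SEC / rate_bps;
			printf("len %6u: %llu ns (exact %llu ns)\n", len,
			       (unsigned long long)approx,
			       (unsigned long long)exact);
		}
		return 0;
	}

The point of the scheme is that the per-packet cost in psched_l2t_ns()
is one multiply and one shift, while the div64_u64() happens only once,
at configuration time.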

* [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns()
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (4 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 16:32   ` Eric Dumazet
  2013-02-13  0:00   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 07/11] tbf: improved accuracy at high rates Jiri Pirko
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

tbf will need to schedule the watchdog in ns. No need to convert it twice.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 include/net/pkt_sched.h | 10 ++++++++--
 net/sched/sch_api.c     |  6 +++---
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 66f5ac3..388bf8b 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -65,8 +65,14 @@ struct qdisc_watchdog {
 };
 
 extern void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc);
-extern void qdisc_watchdog_schedule(struct qdisc_watchdog *wd,
-				    psched_time_t expires);
+extern void qdisc_watchdog_schedule_ns(struct qdisc_watchdog *wd, u64 expires);
+
+static inline void qdisc_watchdog_schedule(struct qdisc_watchdog *wd,
+					   psched_time_t expires)
+{
+	qdisc_watchdog_schedule_ns(wd, PSCHED_TICKS2NS(expires));
+}
+
 extern void qdisc_watchdog_cancel(struct qdisc_watchdog *wd);
 
 extern struct Qdisc_ops pfifo_qdisc_ops;
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index d84f7e7..fe1ba54 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -493,7 +493,7 @@ void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc)
 }
 EXPORT_SYMBOL(qdisc_watchdog_init);
 
-void qdisc_watchdog_schedule(struct qdisc_watchdog *wd, psched_time_t expires)
+void qdisc_watchdog_schedule_ns(struct qdisc_watchdog *wd, u64 expires)
 {
 	if (test_bit(__QDISC_STATE_DEACTIVATED,
 		     &qdisc_root_sleeping(wd->qdisc)->state))
@@ -502,10 +502,10 @@ void qdisc_watchdog_schedule(struct qdisc_watchdog *wd, psched_time_t expires)
 	qdisc_throttled(wd->qdisc);
 
 	hrtimer_start(&wd->timer,
-		      ns_to_ktime(PSCHED_TICKS2NS(expires)),
+		      ns_to_ktime(expires),
 		      HRTIMER_MODE_ABS);
 }
-EXPORT_SYMBOL(qdisc_watchdog_schedule);
+EXPORT_SYMBOL(qdisc_watchdog_schedule_ns);
 
 void qdisc_watchdog_cancel(struct qdisc_watchdog *wd)
 {
-- 
1.8.1.2

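The double conversion being avoided looks like this (expires_ns here is
a hypothetical local standing in for the ns value tbf computes in the
next patch):

	/* without the new helper: ns -> ticks -> ns on every schedule */
	qdisc_watchdog_schedule(&q->watchdog, PSCHED_NS2TICKS(expires_ns));

	/* with it: the ns value goes straight to the hrtimer */
	qdisc_watchdog_schedule_ns(&q->watchdog, expires_ns);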

* [patch net-next v5 07/11] tbf: improved accuracy at high rates
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (5 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns() Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 16:34   ` Eric Dumazet
  2013-02-13  0:01   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
                   ` (3 subsequent siblings)
  10 siblings, 2 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

Current TBF uses a rate table computed by the "tc" userspace program,
which has the following issue:

The rate table has 256 entries to map packet lengths to
tokens (time units).  With TSO-sized packets, the 256-entry granularity
leads to loss/gain of rate, making the token bucket inaccurate.

Thus, instead of relying on the rate table, this patch explicitly
computes the time and accounts for packet transmission times with
nanosecond granularity.

This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 net/sched/sch_tbf.c | 76 ++++++++++++++++++++++++++---------------------------
 1 file changed, 37 insertions(+), 39 deletions(-)

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 4b056c15..c8388f3 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -19,6 +19,7 @@
 #include <linux/errno.h>
 #include <linux/skbuff.h>
 #include <net/netlink.h>
+#include <net/sch_generic.h>
 #include <net/pkt_sched.h>
 
 
@@ -100,23 +101,21 @@
 struct tbf_sched_data {
 /* Parameters */
 	u32		limit;		/* Maximal length of backlog: bytes */
-	u32		buffer;		/* Token bucket depth/rate: MUST BE >= MTU/B */
-	u32		mtu;
+	s64		buffer;		/* Token bucket depth/rate: MUST BE >= MTU/B */
+	s64		mtu;
 	u32		max_size;
-	struct qdisc_rate_table	*R_tab;
-	struct qdisc_rate_table	*P_tab;
+	struct psched_ratecfg rate;
+	struct psched_ratecfg peak;
+	bool peak_present;
 
 /* Variables */
-	long	tokens;			/* Current number of B tokens */
-	long	ptokens;		/* Current number of P tokens */
-	psched_time_t	t_c;		/* Time check-point */
+	s64	tokens;			/* Current number of B tokens */
+	s64	ptokens;		/* Current number of P tokens */
+	s64	t_c;			/* Time check-point */
 	struct Qdisc	*qdisc;		/* Inner qdisc, default - bfifo queue */
 	struct qdisc_watchdog watchdog;	/* Watchdog timer */
 };
 
-#define L2T(q, L)   qdisc_l2t((q)->R_tab, L)
-#define L2T_P(q, L) qdisc_l2t((q)->P_tab, L)
-
 static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct tbf_sched_data *q = qdisc_priv(sch);
@@ -156,24 +155,24 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 	skb = q->qdisc->ops->peek(q->qdisc);
 
 	if (skb) {
-		psched_time_t now;
-		long toks;
-		long ptoks = 0;
+		s64 now;
+		s64 toks;
+		s64 ptoks = 0;
 		unsigned int len = qdisc_pkt_len(skb);
 
-		now = psched_get_time();
-		toks = psched_tdiff_bounded(now, q->t_c, q->buffer);
+		now = ktime_to_ns(ktime_get());
+		toks = min_t(s64, now - q->t_c, q->buffer);
 
-		if (q->P_tab) {
+		if (q->peak_present) {
 			ptoks = toks + q->ptokens;
-			if (ptoks > (long)q->mtu)
+			if (ptoks > q->mtu)
 				ptoks = q->mtu;
-			ptoks -= L2T_P(q, len);
+			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
 		}
 		toks += q->tokens;
-		if (toks > (long)q->buffer)
+		if (toks > q->buffer)
 			toks = q->buffer;
-		toks -= L2T(q, len);
+		toks -= (s64) psched_l2t_ns(&q->rate, len);
 
 		if ((toks|ptoks) >= 0) {
 			skb = qdisc_dequeue_peeked(q->qdisc);
@@ -189,8 +188,8 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 			return skb;
 		}
 
-		qdisc_watchdog_schedule(&q->watchdog,
-					now + max_t(long, -toks, -ptoks));
+		qdisc_watchdog_schedule_ns(&q->watchdog,
+					   now + max_t(long, -toks, -ptoks));
 
 		/* Maybe we have a shorter packet in the queue,
 		   which can be sent now. It sounds cool,
@@ -214,7 +213,7 @@ static void tbf_reset(struct Qdisc *sch)
 
 	qdisc_reset(q->qdisc);
 	sch->q.qlen = 0;
-	q->t_c = psched_get_time();
+	q->t_c = ktime_to_ns(ktime_get());
 	q->tokens = q->buffer;
 	q->ptokens = q->mtu;
 	qdisc_watchdog_cancel(&q->watchdog);
@@ -293,14 +292,19 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt)
 		q->qdisc = child;
 	}
 	q->limit = qopt->limit;
-	q->mtu = qopt->mtu;
+	q->mtu = PSCHED_TICKS2NS(qopt->mtu);
 	q->max_size = max_size;
-	q->buffer = qopt->buffer;
+	q->buffer = PSCHED_TICKS2NS(qopt->buffer);
 	q->tokens = q->buffer;
 	q->ptokens = q->mtu;
 
-	swap(q->R_tab, rtab);
-	swap(q->P_tab, ptab);
+	psched_ratecfg_precompute(&q->rate, rtab->rate.rate);
+	if (ptab) {
+		psched_ratecfg_precompute(&q->peak, ptab->rate.rate);
+		q->peak_present = true;
+	} else {
+		q->peak_present = false;
+	}
 
 	sch_tree_unlock(sch);
 	err = 0;
@@ -319,7 +323,7 @@ static int tbf_init(struct Qdisc *sch, struct nlattr *opt)
 	if (opt == NULL)
 		return -EINVAL;
 
-	q->t_c = psched_get_time();
+	q->t_c = ktime_to_ns(ktime_get());
 	qdisc_watchdog_init(&q->watchdog, sch);
 	q->qdisc = &noop_qdisc;
 
@@ -331,12 +335,6 @@ static void tbf_destroy(struct Qdisc *sch)
 	struct tbf_sched_data *q = qdisc_priv(sch);
 
 	qdisc_watchdog_cancel(&q->watchdog);
-
-	if (q->P_tab)
-		qdisc_put_rtab(q->P_tab);
-	if (q->R_tab)
-		qdisc_put_rtab(q->R_tab);
-
 	qdisc_destroy(q->qdisc);
 }
 
@@ -352,13 +350,13 @@ static int tbf_dump(struct Qdisc *sch, struct sk_buff *skb)
 		goto nla_put_failure;
 
 	opt.limit = q->limit;
-	opt.rate = q->R_tab->rate;
-	if (q->P_tab)
-		opt.peakrate = q->P_tab->rate;
+	opt.rate.rate = psched_ratecfg_getrate(&q->rate);
+	if (q->peak_present)
+		opt.peakrate.rate = psched_ratecfg_getrate(&q->peak);
 	else
 		memset(&opt.peakrate, 0, sizeof(opt.peakrate));
-	opt.mtu = q->mtu;
-	opt.buffer = q->buffer;
+	opt.mtu = PSCHED_NS2TICKS(q->mtu);
+	opt.buffer = PSCHED_NS2TICKS(q->buffer);
 	if (nla_put(skb, TCA_TBF_PARMS, sizeof(opt), &opt))
 		goto nla_put_failure;
 
-- 
1.8.1.2

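To make the granularity problem concrete, a rough worked example
(cell_log and the packet length are illustrative assumptions, not
numbers from the patch):

	/* The table is indexed by pktlen >> cell_log, so every length
	 * inside one 2^cell_log-byte cell is charged the same time.
	 * Covering ~64 KB TSO packets with 256 entries needs
	 * cell_log = 8, i.e. 256-byte cells:
	 */
	unsigned int cell_log   = 8;
	unsigned int len        = 65226;		/* one TSO skb */
	unsigned int slot       = len >> cell_log;	/* 254 */
	unsigned int quantized  = slot << cell_log;	/* 65024 */

Up to 255 bytes per packet are mis-accounted (in which direction
depends on how userspace built the table), and at TSO packet rates the
quantization error accumulates into a visible rate offset.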

* [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (6 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 07/11] tbf: improved accuracy at high rates Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 12:08   ` Jamal Hadi Salim
                     ` (2 more replies)
  2013-02-12 10:12 ` [patch net-next v5 09/11] act_police: improved accuracy at high rates Jiri Pirko
                   ` (2 subsequent siblings)
  10 siblings, 3 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

It's not used anywhere else, so move it.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 include/net/act_api.h  | 15 ---------------
 net/sched/act_police.c | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/net/act_api.h b/include/net/act_api.h
index 112c25c..06ef7e9 100644
--- a/include/net/act_api.h
+++ b/include/net/act_api.h
@@ -35,21 +35,6 @@ struct tcf_common {
 #define tcf_lock	common.tcfc_lock
 #define tcf_rcu		common.tcfc_rcu
 
-struct tcf_police {
-	struct tcf_common	common;
-	int			tcfp_result;
-	u32			tcfp_ewma_rate;
-	u32			tcfp_burst;
-	u32			tcfp_mtu;
-	u32			tcfp_toks;
-	u32			tcfp_ptoks;
-	psched_time_t		tcfp_t_c;
-	struct qdisc_rate_table	*tcfp_R_tab;
-	struct qdisc_rate_table	*tcfp_P_tab;
-};
-#define to_police(pc)	\
-	container_of(pc, struct tcf_police, common)
-
 struct tcf_hashinfo {
 	struct tcf_common	**htab;
 	unsigned int		hmask;
diff --git a/net/sched/act_police.c b/net/sched/act_police.c
index 8dbd695..378a649 100644
--- a/net/sched/act_police.c
+++ b/net/sched/act_police.c
@@ -22,6 +22,21 @@
 #include <net/act_api.h>
 #include <net/netlink.h>
 
+struct tcf_police {
+	struct tcf_common	common;
+	int			tcfp_result;
+	u32			tcfp_ewma_rate;
+	u32			tcfp_burst;
+	u32			tcfp_mtu;
+	u32			tcfp_toks;
+	u32			tcfp_ptoks;
+	psched_time_t		tcfp_t_c;
+	struct qdisc_rate_table	*tcfp_R_tab;
+	struct qdisc_rate_table	*tcfp_P_tab;
+};
+#define to_police(pc)	\
+	container_of(pc, struct tcf_police, common)
+
 #define L2T(p, L)   qdisc_l2t((p)->tcfp_R_tab, L)
 #define L2T_P(p, L) qdisc_l2t((p)->tcfp_P_tab, L)
 
-- 
1.8.1.2


* [patch net-next v5 09/11] act_police: improved accuracy at high rates
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (7 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 13:31   ` Jamal Hadi Salim
  2013-02-13  0:01   ` David Miller
  2013-02-12 10:12 ` [patch net-next v5 10/11] tbf: take into account gso skbs Jiri Pirko
  2013-02-12 10:12 ` [patch net-next v5 11/11] act_police: remove <=mtu check for " Jiri Pirko
  10 siblings, 2 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

Current act_police uses a rate table computed by the "tc" userspace program,
which has the following issue:

The rate table has 256 entries to map packet lengths to
tokens (time units).  With TSO-sized packets, the 256-entry granularity
leads to loss/gain of rate, making the token bucket inaccurate.

Thus, instead of relying on the rate table, this patch explicitly
computes the time and accounts for packet transmission times with
nanosecond granularity.

This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 net/sched/act_police.c | 99 +++++++++++++++++++++++++++-----------------------
 1 file changed, 53 insertions(+), 46 deletions(-)

diff --git a/net/sched/act_police.c b/net/sched/act_police.c
index 378a649..823463a 100644
--- a/net/sched/act_police.c
+++ b/net/sched/act_police.c
@@ -26,20 +26,20 @@ struct tcf_police {
 	struct tcf_common	common;
 	int			tcfp_result;
 	u32			tcfp_ewma_rate;
-	u32			tcfp_burst;
+	s64			tcfp_burst;
 	u32			tcfp_mtu;
-	u32			tcfp_toks;
-	u32			tcfp_ptoks;
-	psched_time_t		tcfp_t_c;
-	struct qdisc_rate_table	*tcfp_R_tab;
-	struct qdisc_rate_table	*tcfp_P_tab;
+	s64			tcfp_toks;
+	s64			tcfp_ptoks;
+	s64			tcfp_mtu_ptoks;
+	s64			tcfp_t_c;
+	struct psched_ratecfg	rate;
+	bool			rate_present;
+	struct psched_ratecfg	peak;
+	bool			peak_present;
 };
 #define to_police(pc)	\
 	container_of(pc, struct tcf_police, common)
 
-#define L2T(p, L)   qdisc_l2t((p)->tcfp_R_tab, L)
-#define L2T_P(p, L) qdisc_l2t((p)->tcfp_P_tab, L)
-
 #define POL_TAB_MASK     15
 static struct tcf_common *tcf_police_ht[POL_TAB_MASK + 1];
 static u32 police_idx_gen;
@@ -123,10 +123,6 @@ static void tcf_police_destroy(struct tcf_police *p)
 			write_unlock_bh(&police_lock);
 			gen_kill_estimator(&p->tcf_bstats,
 					   &p->tcf_rate_est);
-			if (p->tcfp_R_tab)
-				qdisc_put_rtab(p->tcfp_R_tab);
-			if (p->tcfp_P_tab)
-				qdisc_put_rtab(p->tcfp_P_tab);
 			/*
 			 * gen_estimator est_timer() might access p->tcf_lock
 			 * or bstats, wait a RCU grace period before freeing p
@@ -227,26 +223,36 @@ override:
 	}
 
 	/* No failure allowed after this point */
-	if (R_tab != NULL) {
-		qdisc_put_rtab(police->tcfp_R_tab);
-		police->tcfp_R_tab = R_tab;
+	police->tcfp_mtu = parm->mtu;
+	if (police->tcfp_mtu == 0) {
+		police->tcfp_mtu = ~0;
+		if (R_tab)
+			police->tcfp_mtu = 255 << R_tab->rate.cell_log;
+	}
+	if (R_tab) {
+		police->rate_present = true;
+		psched_ratecfg_precompute(&police->rate, R_tab->rate.rate);
+		qdisc_put_rtab(R_tab);
+	} else {
+		police->rate_present = false;
 	}
-	if (P_tab != NULL) {
-		qdisc_put_rtab(police->tcfp_P_tab);
-		police->tcfp_P_tab = P_tab;
+	if (P_tab) {
+		police->peak_present = true;
+		psched_ratecfg_precompute(&police->peak, P_tab->rate.rate);
+		qdisc_put_rtab(P_tab);
+	} else {
+		police->peak_present = false;
 	}
 
 	if (tb[TCA_POLICE_RESULT])
 		police->tcfp_result = nla_get_u32(tb[TCA_POLICE_RESULT]);
-	police->tcfp_toks = police->tcfp_burst = parm->burst;
-	police->tcfp_mtu = parm->mtu;
-	if (police->tcfp_mtu == 0) {
-		police->tcfp_mtu = ~0;
-		if (police->tcfp_R_tab)
-			police->tcfp_mtu = 255<<police->tcfp_R_tab->rate.cell_log;
+	police->tcfp_burst = PSCHED_TICKS2NS(parm->burst);
+	police->tcfp_toks = police->tcfp_burst;
+	if (police->peak_present) {
+		police->tcfp_mtu_ptoks = (s64) psched_l2t_ns(&police->peak,
+							     police->tcfp_mtu);
+		police->tcfp_ptoks = police->tcfp_mtu_ptoks;
 	}
-	if (police->tcfp_P_tab)
-		police->tcfp_ptoks = L2T_P(police, police->tcfp_mtu);
 	police->tcf_action = parm->action;
 
 	if (tb[TCA_POLICE_AVRATE])
@@ -256,7 +262,7 @@ override:
 	if (ret != ACT_P_CREATED)
 		return ret;
 
-	police->tcfp_t_c = psched_get_time();
+	police->tcfp_t_c = ktime_to_ns(ktime_get());
 	police->tcf_index = parm->index ? parm->index :
 		tcf_hash_new_index(&police_idx_gen, &police_hash_info);
 	h = tcf_hash(police->tcf_index, POL_TAB_MASK);
@@ -302,9 +308,9 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
 			  struct tcf_result *res)
 {
 	struct tcf_police *police = a->priv;
-	psched_time_t now;
-	long toks;
-	long ptoks = 0;
+	s64 now;
+	s64 toks;
+	s64 ptoks = 0;
 
 	spin_lock(&police->tcf_lock);
 
@@ -320,24 +326,25 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
 	}
 
 	if (qdisc_pkt_len(skb) <= police->tcfp_mtu) {
-		if (police->tcfp_R_tab == NULL) {
+		if (!police->rate_present) {
 			spin_unlock(&police->tcf_lock);
 			return police->tcfp_result;
 		}
 
-		now = psched_get_time();
-		toks = psched_tdiff_bounded(now, police->tcfp_t_c,
-					    police->tcfp_burst);
-		if (police->tcfp_P_tab) {
+		now = ktime_to_ns(ktime_get());
+		toks = min_t(s64, now - police->tcfp_t_c,
+			     police->tcfp_burst);
+		if (police->peak_present) {
 			ptoks = toks + police->tcfp_ptoks;
-			if (ptoks > (long)L2T_P(police, police->tcfp_mtu))
-				ptoks = (long)L2T_P(police, police->tcfp_mtu);
-			ptoks -= L2T_P(police, qdisc_pkt_len(skb));
+			if (ptoks > police->tcfp_mtu_ptoks)
+				ptoks = police->tcfp_mtu_ptoks;
+			ptoks -= (s64) psched_l2t_ns(&police->peak,
+						     qdisc_pkt_len(skb));
 		}
 		toks += police->tcfp_toks;
-		if (toks > (long)police->tcfp_burst)
+		if (toks > police->tcfp_burst)
 			toks = police->tcfp_burst;
-		toks -= L2T(police, qdisc_pkt_len(skb));
+		toks -= (s64) psched_l2t_ns(&police->rate, qdisc_pkt_len(skb));
 		if ((toks|ptoks) >= 0) {
 			police->tcfp_t_c = now;
 			police->tcfp_toks = toks;
@@ -363,15 +370,15 @@ tcf_act_police_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
 		.index = police->tcf_index,
 		.action = police->tcf_action,
 		.mtu = police->tcfp_mtu,
-		.burst = police->tcfp_burst,
+		.burst = PSCHED_NS2TICKS(police->tcfp_burst),
 		.refcnt = police->tcf_refcnt - ref,
 		.bindcnt = police->tcf_bindcnt - bind,
 	};
 
-	if (police->tcfp_R_tab)
-		opt.rate = police->tcfp_R_tab->rate;
-	if (police->tcfp_P_tab)
-		opt.peakrate = police->tcfp_P_tab->rate;
+	if (police->rate_present)
+		opt.rate.rate = psched_ratecfg_getrate(&police->rate);
+	if (police->peak_present)
+		opt.peakrate.rate = psched_ratecfg_getrate(&police->peak);
 	if (nla_put(skb, TCA_POLICE_TBF, sizeof(opt), &opt))
 		goto nla_put_failure;
 	if (police->tcfp_result &&
-- 
1.8.1.2


* [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (8 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 09/11] act_police: improved accuracy at high rates Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 16:39   ` Eric Dumazet
  2013-02-12 10:12 ` [patch net-next v5 11/11] act_police: remove <=mtu check for " Jiri Pirko
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

Ignore the max_size check for gso skbs. This check caused bigger
packets to be incorrectly dropped. Remove this limitation for gso skbs.

Also for peaks, ignore the mtu for gso skbs.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 net/sched/sch_tbf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index c8388f3..8973e93 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -121,7 +121,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	int ret;
 
-	if (qdisc_pkt_len(skb) > q->max_size)
+	if (qdisc_pkt_len(skb) > q->max_size && !skb_is_gso(skb))
 		return qdisc_reshape_fail(skb, sch);
 
 	ret = qdisc_enqueue(skb, q->qdisc);
@@ -165,7 +165,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 
 		if (q->peak_present) {
 			ptoks = toks + q->ptokens;
-			if (ptoks > q->mtu)
+			if (ptoks > q->mtu && !skb_is_gso(skb))
 				ptoks = q->mtu;
 			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
 		}
-- 
1.8.1.2


* [patch net-next v5 11/11] act_police: remove <=mtu check for gso skbs
  2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
                   ` (9 preceding siblings ...)
  2013-02-12 10:12 ` [patch net-next v5 10/11] tbf: take into account gso skbs Jiri Pirko
@ 2013-02-12 10:12 ` Jiri Pirko
  2013-02-12 16:40   ` Eric Dumazet
  10 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 10:12 UTC (permalink / raw)
  To: netdev; +Cc: davem, edumazet, jhs, kuznet, j.vimal

This check caused bigger packets to be incorrectly dropped. Remove this
limitation for gso skbs.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 net/sched/act_police.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sched/act_police.c b/net/sched/act_police.c
index 823463a..2dba297 100644
--- a/net/sched/act_police.c
+++ b/net/sched/act_police.c
@@ -325,7 +325,7 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
 		return police->tcf_action;
 	}
 
-	if (qdisc_pkt_len(skb) <= police->tcfp_mtu) {
+	if (qdisc_pkt_len(skb) <= police->tcfp_mtu || skb_is_gso(skb)) {
 		if (!police->rate_present) {
 			spin_unlock(&police->tcf_lock);
 			return police->tcfp_result;
@@ -336,7 +336,7 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
 			     police->tcfp_burst);
 		if (police->peak_present) {
 			ptoks = toks + police->tcfp_ptoks;
-			if (ptoks > police->tcfp_mtu_ptoks)
+			if (ptoks > police->tcfp_mtu_ptoks && !skb_is_gso(skb))
 				ptoks = police->tcfp_mtu_ptoks;
 			ptoks -= (s64) psched_l2t_ns(&police->peak,
 						     qdisc_pkt_len(skb));
-- 
1.8.1.2


* Re: [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c
  2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
@ 2013-02-12 12:08   ` Jamal Hadi Salim
  2013-02-12 16:34   ` Eric Dumazet
  2013-02-13  0:01   ` David Miller
  2 siblings, 0 replies; 39+ messages in thread
From: Jamal Hadi Salim @ 2013-02-12 12:08 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, kuznet, j.vimal

On 13-02-12 05:12 AM, Jiri Pirko wrote:
> It's not used anywhere else, so move it.
>
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>


Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>


* Re: [patch net-next v5 09/11] act_police: improved accuracy at high rates
  2013-02-12 10:12 ` [patch net-next v5 09/11] act_police: improved accuracy at high rates Jiri Pirko
@ 2013-02-12 13:31   ` Jamal Hadi Salim
  2013-02-12 13:39     ` Jiri Pirko
  2013-02-13  0:01   ` David Miller
  1 sibling, 1 reply; 39+ messages in thread
From: Jamal Hadi Salim @ 2013-02-12 13:31 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, kuznet, j.vimal

Jiri,

Does this work with your new kernel changes + old tc?
Some of the kernel fields you have changed are updated from
user space with matching 32-bit values (e.g. the pair tcfp_burst, which
you made 64 bits vs the user space u32 burst).
It probably will work - it doesn't harm to check on _all_ the fields you
upgraded to 64 bits.

cheers,
jamal


* Re: [patch net-next v5 09/11] act_police: improved accuracy at high rates
  2013-02-12 13:31   ` Jamal Hadi Salim
@ 2013-02-12 13:39     ` Jiri Pirko
  0 siblings, 0 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 13:39 UTC (permalink / raw)
  To: Jamal Hadi Salim; +Cc: netdev, davem, edumazet, kuznet, j.vimal

Tue, Feb 12, 2013 at 02:31:21PM CET, jhs@mojatatu.com wrote:
>Jiri,
>
>Does this work with your new kernel changes + old tc?
>Some of the kernel fields you have changed are updated from
>user space with matching 32-bit values (e.g. the pair tcfp_burst,
>which you made 64 bits vs the user space u32 burst).
>It probably will work - it doesn't harm to check on _all_ the fields
>you upgraded to 64 bits.

Yes, this shouldn't be a problem. Userspace passes these values in
struct tc_police *parm and they are converted into the new 64-bit
(ticks->ns) ones. Also, in the dump functions, they are converted back.

Similar changes were made to htb some time ago, I believe without
issues.

>
>cheers,
>jamal

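A sketch of the round trip being described, assuming PSCHED_SHIFT is 6
(64 ns per tick), as in kernels of this era:

	u32 burst = parm->burst;		/* ticks, from (old) tc */
	s64 burst_ns = PSCHED_TICKS2NS(burst);	/* (s64)burst << 6 */
	u32 again = PSCHED_NS2TICKS(burst_ns);	/* == burst, lossless */

The widening to s64 happens only inside the kernel; the netlink ABI
stays u32 ticks in both directions, which is why old tc binaries keep
working.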

* Re: [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns()
  2013-02-12 10:12 ` [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns() Jiri Pirko
@ 2013-02-12 16:32   ` Eric Dumazet
  2013-02-13  0:00   ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 16:32 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
> tbf will need to schedule the watchdog in ns. No need to convert it twice.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---
>  include/net/pkt_sched.h | 10 ++++++++--
>  net/sched/sch_api.c     |  6 +++---
>  2 files changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
> index 66f5ac3..388bf8b 100644
> --- a/include/net/pkt_sched.h
> +++ b/include/net/pkt_sched.h
> @@ -65,8 +65,14 @@ struct qdisc_watchdog {
>  };
>  
>  extern void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc);
> -extern void qdisc_watchdog_schedule(struct qdisc_watchdog *wd,
> -				    psched_time_t expires);
> +extern void qdisc_watchdog_schedule_ns(struct qdisc_watchdog *wd, u64 expires);
> +
> +static inline void qdisc_watchdog_schedule(struct qdisc_watchdog *wd,
> +					   psched_time_t expires)
> +{
> +	qdisc_watchdog_schedule_ns(wd, PSCHED_TICKS2NS(expires));
> +}

Acked-by: Eric Dumazet <edumazet@google.com>


* Re: [patch net-next v5 07/11] tbf: improved accuracy at high rates
  2013-02-12 10:12 ` [patch net-next v5 07/11] tbf: improved accuracy at high rates Jiri Pirko
@ 2013-02-12 16:34   ` Eric Dumazet
  2013-02-13  0:01   ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 16:34 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
> Current TBF uses a rate table computed by the "tc" userspace program,
> which has the following issue:
> 
> The rate table has 256 entries to map packet lengths to
> tokens (time units).  With TSO-sized packets, the 256-entry granularity
> leads to loss/gain of rate, making the token bucket inaccurate.
> 
> Thus, instead of relying on the rate table, this patch explicitly
> computes the time and accounts for packet transmission times with
> nanosecond granularity.
> 
> This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>

Acked-by: Eric Dumazet <edumazet@google.com>


* Re: [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c
  2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
  2013-02-12 12:08   ` Jamal Hadi Salim
@ 2013-02-12 16:34   ` Eric Dumazet
  2013-02-13  0:01   ` David Miller
  2 siblings, 0 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 16:34 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
> It's not used anywhere else, so move it.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---

Acked-by: Eric Dumazet <edumazet@google.com>


* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-12 10:12 ` [patch net-next v5 10/11] tbf: take into account gso skbs Jiri Pirko
@ 2013-02-12 16:39   ` Eric Dumazet
  2013-02-12 17:31     ` Jiri Pirko
  2013-02-17 16:18     ` Jiri Pirko
  0 siblings, 2 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 16:39 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
> Ignore the max_size check for gso skbs. This check caused bigger
> packets to be incorrectly dropped. Remove this limitation for gso skbs.
> 
> Also for peaks, ignore the mtu for gso skbs.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---
>  net/sched/sch_tbf.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
> index c8388f3..8973e93 100644
> --- a/net/sched/sch_tbf.c
> +++ b/net/sched/sch_tbf.c
> @@ -121,7 +121,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
>  	struct tbf_sched_data *q = qdisc_priv(sch);
>  	int ret;
>  
> -	if (qdisc_pkt_len(skb) > q->max_size)
> +	if (qdisc_pkt_len(skb) > q->max_size && !skb_is_gso(skb))
>  		return qdisc_reshape_fail(skb, sch);
>  
>  	ret = qdisc_enqueue(skb, q->qdisc);
> @@ -165,7 +165,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
>  
>  		if (q->peak_present) {
>  			ptoks = toks + q->ptokens;
> -			if (ptoks > q->mtu)
> +			if (ptoks > q->mtu && !skb_is_gso(skb))
>  				ptoks = q->mtu;
>  			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
>  		}


I guess this part is wrong.

If we don't cap ptoks to q->mtu we allow bigger bursts.

Ideally we could re-segment the skb if psched_l2t_ns(&q->peak, len) is
bigger than q->mtu

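Rough numbers behind the burst concern (the rate, mtu and skb size are
illustrative assumptions, not values from the thread):

	/* peak 1 Gbit/s => 8 ns per byte
	 *   mtu worth of ptokens:  1500 * 8 ns  =    12 us
	 *   64 KB gso skb needs:   65536 * 8 ns = ~ 524 us
	 * With the cap, ptoks never exceeds 12 us, so a gso skb is
	 * released only after the peak meter has gone ~512 us into
	 * debt, pacing successive big packets at the peak rate.
	 * Without the cap, ptoks accumulated while idle can cover the
	 * whole 524 us at once - a burst ~43x the configured mtu.
	 */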

* Re: [patch net-next v5 11/11] act_police: remove <=mtu check for gso skbs
  2013-02-12 10:12 ` [patch net-next v5 11/11] act_police: remove <=mtu check for " Jiri Pirko
@ 2013-02-12 16:40   ` Eric Dumazet
  0 siblings, 0 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 16:40 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
> This check caused bigger packets to be incorrectly dropped. Remove this
> limitation for gso skbs.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---
>  net/sched/act_police.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/sched/act_police.c b/net/sched/act_police.c
> index 823463a..2dba297 100644
> --- a/net/sched/act_police.c
> +++ b/net/sched/act_police.c
> @@ -325,7 +325,7 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
>  		return police->tcf_action;
>  	}
>  
> -	if (qdisc_pkt_len(skb) <= police->tcfp_mtu) {
> +	if (qdisc_pkt_len(skb) <= police->tcfp_mtu || skb_is_gso(skb)) {
>  		if (!police->rate_present) {
>  			spin_unlock(&police->tcf_lock);
>  			return police->tcfp_result;
> @@ -336,7 +336,7 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
>  			     police->tcfp_burst);
>  		if (police->peak_present) {
>  			ptoks = toks + police->tcfp_ptoks;
> -			if (ptoks > police->tcfp_mtu_ptoks)
> +			if (ptoks > police->tcfp_mtu_ptoks && !skb_is_gso(skb))
>  				ptoks = police->tcfp_mtu_ptoks;
>  			ptoks -= (s64) psched_l2t_ns(&police->peak,
>  						     qdisc_pkt_len(skb));


Same remark here : This chunks looks wrong to me.


* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-12 16:39   ` Eric Dumazet
@ 2013-02-12 17:31     ` Jiri Pirko
  2013-02-12 17:54       ` Eric Dumazet
  2013-02-17 16:18     ` Jiri Pirko
  1 sibling, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-12 17:31 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Tue, Feb 12, 2013 at 05:39:42PM CET, eric.dumazet@gmail.com wrote:
>On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
>> Ignore the max_size check for gso skbs. This check caused bigger
>> packets to be incorrectly dropped. Remove this limitation for gso skbs.
>> 
>> Also for peaks, ignore the mtu for gso skbs.
>> 
>> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
>> ---
>>  net/sched/sch_tbf.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
>> index c8388f3..8973e93 100644
>> --- a/net/sched/sch_tbf.c
>> +++ b/net/sched/sch_tbf.c
>> @@ -121,7 +121,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
>>  	struct tbf_sched_data *q = qdisc_priv(sch);
>>  	int ret;
>>  
>> -	if (qdisc_pkt_len(skb) > q->max_size)
>> +	if (qdisc_pkt_len(skb) > q->max_size && !skb_is_gso(skb))
>>  		return qdisc_reshape_fail(skb, sch);
>>  
>>  	ret = qdisc_enqueue(skb, q->qdisc);
>> @@ -165,7 +165,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
>>  
>>  		if (q->peak_present) {
>>  			ptoks = toks + q->ptokens;
>> -			if (ptoks > q->mtu)
>> +			if (ptoks > q->mtu && !skb_is_gso(skb))
>>  				ptoks = q->mtu;
>>  			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
>>  		}
>
>
>I guess this part is wrong.
>
>If we don't cap ptoks to q->mtu we allow bigger bursts.
>
>Ideally we could re-segment the skb if psched_l2t_ns(&q->peak, len) is
>bigger than q->mtu

Okay - that sounds reasonable. Can you give me some hint as to how you
would imagine doing this?

Thanks.

Jiri



* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-12 17:31     ` Jiri Pirko
@ 2013-02-12 17:54       ` Eric Dumazet
  0 siblings, 0 replies; 39+ messages in thread
From: Eric Dumazet @ 2013-02-12 17:54 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-12 at 18:31 +0100, Jiri Pirko wrote:
> Tue, Feb 12, 2013 at 05:39:42PM CET, eric.dumazet@gmail.com wrote:

> >Ideally we could re-segment the skb if psched_l2t_ns(&q->peak, len) is
> >bigger than q->mtu
> 
> Okay - that sounds reasonable. Can you give me some hint as to how you
> would imagine doing this?
> 

This should be a generic helper, and we could use it in sch_codel /
sch_fq_codel / netem as well.

The trick in a qdisc is that we have to call qdisc_tree_decrease_qlen()
to alert parents that the packet count changed.

If a GSO packet with 10 segments is segmented, we have to
qdisc_tree_decrease_qlen(sch, 1 - 10);

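A minimal sketch of such a helper in tbf's enqueue path (hypothetical
code assuming the GSO APIs of this era, with error handling kept to a
minimum - an illustration of the suggestion, not code from this series):

	static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch)
	{
		struct tbf_sched_data *q = qdisc_priv(sch);
		netdev_features_t features = netif_skb_features(skb);
		struct sk_buff *segs, *nskb;
		int ret, nb = 0;

		/* software-segment the gso skb into mtu-sized pieces */
		segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
		if (IS_ERR_OR_NULL(segs))
			return qdisc_reshape_fail(skb, sch);

		while (segs) {
			nskb = segs->next;
			segs->next = NULL;
			qdisc_skb_cb(segs)->pkt_len = segs->len;
			ret = qdisc_enqueue(segs, q->qdisc);
			if (ret != NET_XMIT_SUCCESS) {
				if (net_xmit_drop_count(ret))
					sch->qstats.drops++;
			} else {
				nb++;
			}
			segs = nskb;
		}
		sch->q.qlen += nb;
		if (nb > 1)
			/* one skb became nb: tell the parent qdiscs */
			qdisc_tree_decrease_qlen(sch, 1 - nb);
		consume_skb(skb);
		return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
	}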

* Re: [patch net-next v5 02/11] htb: fix values in opt dump
  2013-02-12 10:12 ` [patch net-next v5 02/11] htb: fix values in opt dump Jiri Pirko
@ 2013-02-12 23:51   ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-12 23:51 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:00 +0100

> in htb_change_class() cl->buffer and cl->cbuffer are stored in ns.
> So in dump, convert them back to psched ticks.
> 
> Note this was introduced by:
> commit 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd
>     htb: improved accuracy at high rates
> 
> Please consider this for -net/-stable.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> Acked-by: Eric Dumazet <edumazet@google.com>

Applied to 'net' and queued up for -stable, thanks.


* Re: [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS()
  2013-02-12 10:11 ` [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS() Jiri Pirko
@ 2013-02-13  0:00   ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:00 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:11:59 +0100

> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> Acked-by: Eric Dumazet <edumazet@google.com>

Applied.


* Re: [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer
  2013-02-12 10:12 ` [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer Jiri Pirko
@ 2013-02-13  0:00   ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:00 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:01 +0100

> These are initialized correctly a couple of lines later in the function.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> Acked-by: Eric Dumazet <edumazet@google.com>

Applied.


* Re: [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly
  2013-02-12 10:12 ` [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly Jiri Pirko
@ 2013-02-13  0:00   ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:00 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:02 +0100

> These are in ns, so convert from ticks to ns.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> Acked-by: Eric Dumazet <edumazet@google.com>

Applied.


* Re: [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic
  2013-02-12 10:12 ` [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic Jiri Pirko
@ 2013-02-13  0:00   ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:00 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:03 +0100

> As it is going to be used in tbf as well, push these to generic code.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> Acked-by: Eric Dumazet <edumazet@google.com>

Applied.


* Re: [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns()
  2013-02-12 10:12 ` [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns() Jiri Pirko
  2013-02-12 16:32   ` Eric Dumazet
@ 2013-02-13  0:00   ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:00 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:04 +0100

> tbf will need to schedule the watchdog in ns. No need to convert it twice.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>

Applied.


* Re: [patch net-next v5 07/11] tbf: improved accuracy at high rates
  2013-02-12 10:12 ` [patch net-next v5 07/11] tbf: improved accuracy at high rates Jiri Pirko
  2013-02-12 16:34   ` Eric Dumazet
@ 2013-02-13  0:01   ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:01 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:05 +0100

> Current TBF uses a rate table computed by the "tc" userspace program,
> which has the following issue:
> 
> The rate table has 256 entries to map packet lengths to
> tokens (time units).  With TSO-sized packets, the 256-entry granularity
> leads to loss/gain of rate, making the token bucket inaccurate.
> 
> Thus, instead of relying on the rate table, this patch explicitly
> computes the time and accounts for packet transmission times with
> nanosecond granularity.
> 
> This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd

Please reference commits by commit ID and also the commit log header
text (in parentheses and double quotes), in order to remove ambiguity
when changes are applied into multiple trees.

> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>

Applied.


* Re: [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c
  2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
  2013-02-12 12:08   ` Jamal Hadi Salim
  2013-02-12 16:34   ` Eric Dumazet
@ 2013-02-13  0:01   ` David Miller
  2 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:01 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:06 +0100

> It's not used anywhere else, so move it.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>

Applied.


* Re: [patch net-next v5 09/11] act_police: improved accuracy at high rates
  2013-02-12 10:12 ` [patch net-next v5 09/11] act_police: improved accuracy at high rates Jiri Pirko
  2013-02-12 13:31   ` Jamal Hadi Salim
@ 2013-02-13  0:01   ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: David Miller @ 2013-02-13  0:01 UTC (permalink / raw)
  To: jiri; +Cc: netdev, edumazet, jhs, kuznet, j.vimal

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 12 Feb 2013 11:12:07 +0100

> Current act_police uses rate table computed by the "tc" userspace program,
> which has the following issue:
> 
> The rate table has 256 entries to map packet lengths to
> token (time units).  With TSO sized packets, the 256 entry granularity
> leads to loss/gain of rate, making the token bucket inaccurate.
> 
> Thus, instead of relying on rate table, this patch explicitly computes
> the time and accounts for packet transmission times with nanosecond
> granularity.
> 
> This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd

Same comment here about referencing commits properly.

> Signed-off-by: Jiri Pirko <jiri@resnulli.us>

Applied, thanks.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-12 16:39   ` Eric Dumazet
  2013-02-12 17:31     ` Jiri Pirko
@ 2013-02-17 16:18     ` Jiri Pirko
  2013-02-17 17:54       ` Eric Dumazet
  1 sibling, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-17 16:18 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Tue, Feb 12, 2013 at 05:39:42PM CET, eric.dumazet@gmail.com wrote:
>On Tue, 2013-02-12 at 11:12 +0100, Jiri Pirko wrote:
>> Ignore the max_size check for gso skbs. This check caused bigger
>> packets to be incorrectly dropped. Remove this limitation for gso skbs.
>> 
>> Also for peaks, ignore mtu for gso skbs.
>> 
>> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
>> ---
>>  net/sched/sch_tbf.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
>> index c8388f3..8973e93 100644
>> --- a/net/sched/sch_tbf.c
>> +++ b/net/sched/sch_tbf.c
>> @@ -121,7 +121,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
>>  	struct tbf_sched_data *q = qdisc_priv(sch);
>>  	int ret;
>>  
>> -	if (qdisc_pkt_len(skb) > q->max_size)
>> +	if (qdisc_pkt_len(skb) > q->max_size && !skb_is_gso(skb))
>>  		return qdisc_reshape_fail(skb, sch);
>>  
>>  	ret = qdisc_enqueue(skb, q->qdisc);
>> @@ -165,7 +165,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
>>  
>>  		if (q->peak_present) {
>>  			ptoks = toks + q->ptokens;
>> -			if (ptoks > q->mtu)
>> +			if (ptoks > q->mtu && !skb_is_gso(skb))
>>  				ptoks = q->mtu;
>>  			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
>>  		}
>
>
>I guess this part is wrong.
>
>If we don't cap ptoks to q->mtu we allow bigger bursts.


I've been going through this issue back and forth and, on second thought,
I think this patch might not be so wrong after all.

"Accumulating" time in ptoks would effectively cause the skb to be sent
only once enough time for the whole skb is available (accumulated).

Re-segmenting would merely cause the skb's fragments to be sent one per time frame.

I can't see how the bigger bursts you are referring to can happen.

Or am I missing something?

>
>Ideally we could re-segment the skb if psched_l2t_ns(&q->peak, len) is
>bigger than q->mtu
>
>
>
>
>
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-17 16:18     ` Jiri Pirko
@ 2013-02-17 17:54       ` Eric Dumazet
  2013-02-18  9:58         ` Jiri Pirko
  0 siblings, 1 reply; 39+ messages in thread
From: Eric Dumazet @ 2013-02-17 17:54 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Sun, 2013-02-17 at 17:18 +0100, Jiri Pirko wrote:

> I've been going through this issue back and forth and, on second thought,
> I think this patch might not be so wrong after all.
> 
> "Accumulating" time in ptoks would effectively cause the skb to be sent
> only once enough time for the whole skb is available (accumulated).
> 
> Re-segmenting would merely cause the skb's fragments to be sent one per time frame.
> 
> I can't see how the bigger bursts you are referring to can happen.
> 
> Or am I missing something?

Token Bucket Filter doesn't allow tokens to accumulate above a given
threshold. That's the whole point of the algo.

After one hour of idle time, you don't want to allow your device to send
a burst exceeding the constraint.

This is all about avoiding packet drops in a device with a very small
queue.
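
To illustrate, here is a minimal sketch of the invariant (made-up names,
not the actual sch_tbf.c code):

	/* Tokens earned while idle are capped at the bucket depth, so a
	 * long idle period can never fund an oversized burst.
	 */
	static bool tbf_may_send(s64 now, s64 *t_c, s64 *tokens,
				 s64 buffer, s64 pkt_cost)
	{
		s64 toks = min_t(s64, now - *t_c, buffer) + *tokens;

		if (toks > buffer)
			toks = buffer;	/* the cap in question */
		if (toks < pkt_cost)
			return false;	/* throttle */
		*t_c = now;
		*tokens = toks - pkt_cost;
		return true;
	}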

Your patch was pretty close to solving the problem.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-17 17:54       ` Eric Dumazet
@ 2013-02-18  9:58         ` Jiri Pirko
  2013-02-19 16:15           ` Eric Dumazet
  0 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-18  9:58 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Sun, Feb 17, 2013 at 06:54:23PM CET, eric.dumazet@gmail.com wrote:
>On Sun, 2013-02-17 at 17:18 +0100, Jiri Pirko wrote:
>
>> I've been going through this issue back and forth and, on second thought,
>> I think this patch might not be so wrong after all.
>> 
>> "Accumulating" time in ptoks would effectively cause the skb to be sent
>> only once enough time for the whole skb is available (accumulated).
>> 
>> Re-segmenting would merely cause the skb's fragments to be sent one per time frame.
>> 
>> I can't see how the bigger bursts you are referring to can happen.
>> 
>> Or am I missing something?
>
>Token Bucket Filter doesn't allow tokens to accumulate above a given
>threshold. That's the whole point of the algo.
>
>After one hour of idle time, you don't want to allow your device to send
>a burst exceeding the constraint.

You are right; therefore I said "not so wrong". Let me illustrate my
thoughts. Here is a patch:

Subject: [patch net-next RFC] tbf: take into account gso skbs

Ignore the max_size check for gso skbs. This check caused bigger
packets to be incorrectly dropped. Remove this limitation for gso skbs.

Also for peaks, accumulate time for big gso skbs.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 net/sched/sch_tbf.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index c8388f3..bd36977 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -114,6 +114,8 @@ struct tbf_sched_data {
 	s64	t_c;			/* Time check-point */
 	struct Qdisc	*qdisc;		/* Inner qdisc, default - bfifo queue */
 	struct qdisc_watchdog watchdog;	/* Watchdog timer */
+	bool	last_dequeued;		/* Flag to indicate that a skb was
+					   returned by last dequeue */
 };
 
 static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
@@ -121,7 +123,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	int ret;
 
-	if (qdisc_pkt_len(skb) > q->max_size)
+	if (qdisc_pkt_len(skb) > q->max_size && !skb_is_gso(skb))
 		return qdisc_reshape_fail(skb, sch);
 
 	ret = qdisc_enqueue(skb, q->qdisc);
@@ -164,10 +166,18 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 		toks = min_t(s64, now - q->t_c, q->buffer);
 
 		if (q->peak_present) {
+			s64 skb_ptoks = (s64) psched_l2t_ns(&q->peak, len);
+			bool big_gso = skb_is_gso(skb) && skb_ptoks > q->mtu;
+
 			ptoks = toks + q->ptokens;
-			if (ptoks > q->mtu)
+			/* In case we hit a big GSO packet, don't cap to MTU
+			 * when the skb is seen the >= 2nd time; rather
+			 * accumulate time over calls to send it as a whole.
+			 */
+			if (ptoks > q->mtu &&
+			    (!big_gso || q->last_dequeued))
 				ptoks = q->mtu;
-			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
+			ptoks -= skb_ptoks;
 		}
 		toks += q->tokens;
 		if (toks > q->buffer)
@@ -177,7 +187,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 		if ((toks|ptoks) >= 0) {
 			skb = qdisc_dequeue_peeked(q->qdisc);
 			if (unlikely(!skb))
-				return NULL;
+				goto null_out;
 
 			q->t_c = now;
 			q->tokens = toks;
@@ -185,6 +195,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 			sch->q.qlen--;
 			qdisc_unthrottled(sch);
 			qdisc_bstats_update(sch, skb);
+			q->last_dequeued = true;
 			return skb;
 		}
 
@@ -204,6 +215,8 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
 
 		sch->qstats.overlimits++;
 	}
+null_out:
+	q->last_dequeued = false;
 	return NULL;
 }
 
@@ -212,6 +225,7 @@ static void tbf_reset(struct Qdisc *sch)
 	struct tbf_sched_data *q = qdisc_priv(sch);
 
 	qdisc_reset(q->qdisc);
+	q->last_dequeued = false;
 	sch->q.qlen = 0;
 	q->t_c = ktime_to_ns(ktime_get());
 	q->tokens = q->buffer;
@@ -290,6 +304,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt)
 		qdisc_tree_decrease_qlen(q->qdisc, q->qdisc->q.qlen);
 		qdisc_destroy(q->qdisc);
 		q->qdisc = child;
+		q->last_dequeued = false;
 	}
 	q->limit = qopt->limit;
 	q->mtu = PSCHED_TICKS2NS(qopt->mtu);
@@ -392,6 +407,7 @@ static int tbf_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
 	q->qdisc = new;
 	qdisc_tree_decrease_qlen(*old, (*old)->q.qlen);
 	qdisc_reset(*old);
+	q->last_dequeued = false;
 	sch_tree_unlock(sch);
 
 	return 0;
-- 
1.8.1.2


This allows the whole gso skb to be sent at once when enough time is accumulated:

let's say one gso skb would segment into seg1, seg2, seg3, seg4.
comparison table:
time  segs         gso skb
1     deq seg1     accumulate this time
2     deq seg2     accumulate this time
3     deq seg3     accumulate this time
4     deq seg4     deq skb

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-18  9:58         ` Jiri Pirko
@ 2013-02-19 16:15           ` Eric Dumazet
  2013-02-19 16:46             ` Jiri Pirko
  0 siblings, 1 reply; 39+ messages in thread
From: Eric Dumazet @ 2013-02-19 16:15 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Mon, 2013-02-18 at 10:58 +0100, Jiri Pirko wrote:
> Sun, Feb 17, 2013 at 06:54:23PM CET, eric.dumazet@gmail.com wrote:
> >On Sun, 2013-02-17 at 17:18 +0100, Jiri Pirko wrote:
> >
> >> I've been going through this issue back and forth and, on second thought,
> >> I think this patch might not be so wrong after all.
> >> 
> >> "Accumulating" time in ptoks would effectively cause the skb to be sent
> >> only once enough time for the whole skb is available (accumulated).
> >> 
> >> Re-segmenting would merely cause the skb's fragments to be sent one per time frame.
> >> 
> >> I can't see how the bigger bursts you are referring to can happen.
> >> 
> >> Or am I missing something?
> >
> >Token Bucket Filter doesn't allow tokens to accumulate above a given
> >threshold. That's the whole point of the algo.
> >
> >After one hour of idle time, you don't want to allow your device to send
> >a burst exceeding the constraint.
> 
> You are right; therefore I said "not so wrong". Let me illustrate my
> thoughts. Here is a patch:
> 
> Subject: [patch net-next RFC] tbf: take into account gso skbs
> 
> Ignore the max_size check for gso skbs. This check caused bigger
> packets to be incorrectly dropped. Remove this limitation for gso skbs.
> 
> Also for peaks, accumulate time for big gso skbs.
> 
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---

I am sorry, we cannot do this accumulation.

If we are allowed to send 1k per second, we are not allowed to send 10k
after 10 seconds of idle.

Either we are able to split the GSO packet and respect the TBF
constraints, or we must drop it.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-19 16:15           ` Eric Dumazet
@ 2013-02-19 16:46             ` Jiri Pirko
  2013-02-19 17:01               ` Eric Dumazet
  0 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-02-19 16:46 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Tue, Feb 19, 2013 at 05:15:02PM CET, eric.dumazet@gmail.com wrote:
>On Mon, 2013-02-18 at 10:58 +0100, Jiri Pirko wrote:
>> Sun, Feb 17, 2013 at 06:54:23PM CET, eric.dumazet@gmail.com wrote:
>> >On Sun, 2013-02-17 at 17:18 +0100, Jiri Pirko wrote:
>> >
>> >> I've been going through this issue back and forth and, on second thought,
>> >> I think this patch might not be so wrong after all.
>> >> 
>> >> "Accumulating" time in ptoks would effectively cause the skb to be sent
>> >> only once enough time for the whole skb is available (accumulated).
>> >> 
>> >> Re-segmenting would merely cause the skb's fragments to be sent one per time frame.
>> >> 
>> >> I can't see how the bigger bursts you are referring to can happen.
>> >> 
>> >> Or am I missing something?
>> >
>> >Token Bucket Filter doesn't allow tokens to accumulate above a given
>> >threshold. That's the whole point of the algo.
>> >
>> >After one hour of idle time, you don't want to allow your device to send
>> >a burst exceeding the constraint.
>> 
>> You are right; therefore I said "not so wrong". Let me illustrate my
>> thoughts. Here is a patch:
>> 
>> Subject: [patch net-next RFC] tbf: take into account gso skbs
>> 
>> Ignore the max_size check for gso skbs. This check caused bigger
>> packets to be incorrectly dropped. Remove this limitation for gso skbs.
>> 
>> Also for peaks, accumulate time for big gso skbs.
>> 
>> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
>> ---
>
>I am sorry, we cannot do this accumulation.
>
>If we are allowed to send 1k per second, we are not allowed to send 10k
>after 10 seconds of idle.
>
>Either we are able to split the GSO packet and respect the TBF
>constraints, or we must drop it.


That's a shame. It would be easy this way, and also applicable to act_police :/

About the gso_segment, do you see any cons to doing that on the enqueue
path rather than on dequeue?
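
Something along these lines is what I have in mind (untested sketch only,
stats/backlog accounting and error paths omitted):

	static int tbf_enqueue_segs(struct sk_buff *skb, struct Qdisc *sch)
	{
		struct tbf_sched_data *q = qdisc_priv(sch);
		netdev_features_t features = netif_skb_features(skb);
		struct sk_buff *segs, *nskb;
		unsigned int nb = 0;

		/* software-segment the GSO skb ... */
		segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
		if (IS_ERR_OR_NULL(segs))
			return qdisc_reshape_fail(skb, sch);

		/* ... and enqueue the segments one by one, so each of
		 * them passes the max_size check on its own
		 */
		while (segs) {
			nskb = segs->next;
			segs->next = NULL;
			if (qdisc_enqueue(segs, q->qdisc) == NET_XMIT_SUCCESS)
				nb++;
			segs = nskb;
		}
		sch->q.qlen += nb;
		consume_skb(skb);
		return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
	}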

Thanks.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-19 16:46             ` Jiri Pirko
@ 2013-02-19 17:01               ` Eric Dumazet
  2013-03-08 15:23                 ` Jiri Pirko
  0 siblings, 1 reply; 39+ messages in thread
From: Eric Dumazet @ 2013-02-19 17:01 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

On Tue, 2013-02-19 at 17:46 +0100, Jiri Pirko wrote:

> About the gso_segment, do you see any cons to doing that on the enqueue
> path rather than on dequeue?
> 

It would be fine, and could be done in core stack instead of qdisc.

netif_skb_features(), for example, has the following (incomplete) check:

if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
    features &= ~NETIF_F_GSO_MASK;

We do have a dev->gso_max_size, but it's currently used in the TCP stack to
size the skbs built in tcp_sendmsg().

In a forwarding workload, it seems we don't use/check gso_max_size.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-02-19 17:01               ` Eric Dumazet
@ 2013-03-08 15:23                 ` Jiri Pirko
  2013-03-22 10:02                   ` Jiri Pirko
  0 siblings, 1 reply; 39+ messages in thread
From: Jiri Pirko @ 2013-03-08 15:23 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Tue, Feb 19, 2013 at 06:01:27PM CET, eric.dumazet@gmail.com wrote:
>On Tue, 2013-02-19 at 17:46 +0100, Jiri Pirko wrote:
>
>> About the gso_segment, do you see any cons to doing that on the enqueue
>> path rather than on dequeue?
>> 
>
>It would be fine, and could be done in core stack instead of qdisc.
>

So you mean, for example, in the tcp code? The maximum possible size
would be propagated from the configured qdiscs up to the tcp code?

I'm not sure how exactly to do that.

>netif_skb_features(), for example, has the following (incomplete) check:
>
>if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
>    features &= ~NETIF_F_GSO_MASK;

Why is this incomplete?

>
>We do have a dev->gso_max_size, but it's currently used in the TCP stack to
>size the skbs built in tcp_sendmsg().

Where exactly in tcp_sendmsg() is this? I found that dev->gso_max_size is copied to
sk_gso_max_size in tcp_v4_connect->sk_setup_caps.
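
i.e., roughly this (quoting from memory, trimmed):

	/* net/core/sock.c, sk_setup_caps() */
	sk->sk_gso_max_size = dst->dev->gso_max_size;
	sk->sk_gso_max_segs = dst->dev->gso_max_segs;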

>
>In a forwarding workload, it seems we don't use/check gso_max_size.

Yep, that would require doing the segmentation in enqueue anyway. Maybe
I can implement segmentation in the enqueue path first and provide the
tcp optimization after that. What do you think?


Thanks!

Jiri

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [patch net-next v5 10/11] tbf: take into account gso skbs
  2013-03-08 15:23                 ` Jiri Pirko
@ 2013-03-22 10:02                   ` Jiri Pirko
  0 siblings, 0 replies; 39+ messages in thread
From: Jiri Pirko @ 2013-03-22 10:02 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, edumazet, jhs, kuznet, j.vimal

Fri, Mar 08, 2013 at 04:23:38PM CET, jiri@resnulli.us wrote:
>Tue, Feb 19, 2013 at 06:01:27PM CET, eric.dumazet@gmail.com wrote:
>>On Tue, 2013-02-19 at 17:46 +0100, Jiri Pirko wrote:
>>
>>> About the gso_segment, do you see any cons to doing that on the enqueue
>>> path rather than on dequeue?
>>> 
>>
>>It would be fine, and could be done in core stack instead of qdisc.
>>
>
>So you mean, for example, in the tcp code? The maximum possible size
>would be propagated from the configured qdiscs up to the tcp code?
>
>I'm not sure how exactly to do that.
>
>>netif_skb_features(), for example, has the following (incomplete) check:
>>
>>if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
>>    features &= ~NETIF_F_GSO_MASK;
>
>Why is this incomplete?
>
>>
>>We do have a dev->gso_max_size, but it's currently used in the TCP stack to
>>size the skbs built in tcp_sendmsg().
>
>Where exactly in tcp_sendmsg() is this? I found that dev->gso_max_size is copied to
>sk_gso_max_size in tcp_v4_connect->sk_setup_caps.
>
>>
>>In a forwarding workload, it seems we don't use/check gso_max_size.
>
>Yep, that would require doing the segmentation in enqueue anyway. Maybe
>I can implement segmentation in the enqueue path first and provide the
>tcp optimization after that. What do you think?


Reminding myself with this...

Thanks.

Jiri

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2013-03-22 10:02 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-02-12 10:11 [patch net-next v5 00/11] couple of net/sched fixes+improvements Jiri Pirko
2013-02-12 10:11 ` [patch net-next v5 01/11] htb: use PSCHED_TICKS2NS() Jiri Pirko
2013-02-13  0:00   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 02/11] htb: fix values in opt dump Jiri Pirko
2013-02-12 23:51   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 03/11] htb: remove pointless first initialization of buffer and cbuffer Jiri Pirko
2013-02-13  0:00   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 04/11] htb: initialize cl->tokens and cl->ctokens correctly Jiri Pirko
2013-02-13  0:00   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 05/11] sch: make htb_rate_cfg and functions around that generic Jiri Pirko
2013-02-13  0:00   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 06/11] sch_api: introduce qdisc_watchdog_schedule_ns() Jiri Pirko
2013-02-12 16:32   ` Eric Dumazet
2013-02-13  0:00   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 07/11] tbf: improved accuracy at high rates Jiri Pirko
2013-02-12 16:34   ` Eric Dumazet
2013-02-13  0:01   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 08/11] act_police: move struct tcf_police to act_police.c Jiri Pirko
2013-02-12 12:08   ` Jamal Hadi Salim
2013-02-12 16:34   ` Eric Dumazet
2013-02-13  0:01   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 09/11] act_police: improved accuracy at high rates Jiri Pirko
2013-02-12 13:31   ` Jamal Hadi Salim
2013-02-12 13:39     ` Jiri Pirko
2013-02-13  0:01   ` David Miller
2013-02-12 10:12 ` [patch net-next v5 10/11] tbf: take into account gso skbs Jiri Pirko
2013-02-12 16:39   ` Eric Dumazet
2013-02-12 17:31     ` Jiri Pirko
2013-02-12 17:54       ` Eric Dumazet
2013-02-17 16:18     ` Jiri Pirko
2013-02-17 17:54       ` Eric Dumazet
2013-02-18  9:58         ` Jiri Pirko
2013-02-19 16:15           ` Eric Dumazet
2013-02-19 16:46             ` Jiri Pirko
2013-02-19 17:01               ` Eric Dumazet
2013-03-08 15:23                 ` Jiri Pirko
2013-03-22 10:02                   ` Jiri Pirko
2013-02-12 10:12 ` [patch net-next v5 11/11] act_police: remove <=mtu check for " Jiri Pirko
2013-02-12 16:40   ` Eric Dumazet
