* [PATCH net-next v1 0/3] sched: Patches from out-of-tree version of sch_cake
From: Toke Høiland-Jørgensen @ 2019-03-01 15:04 UTC
  To: David Miller; +Cc: netdev, cake

This series includes three patches with updates from the out-of-tree version
of sch_cake. The first patch fixes the fairness scheduling when dual-mode
fairness is enabled. The second patch adds a feature flag that allows using
the firewall mark (fwmark) as a tin selector, as a convenience for people who
want to customise tin selection. The third patch is a cleanup of the tin
selection logic.

---

George Amanakis (1):
      sch_cake: Make the dual modes fairer

Kevin Darbyshire-Bryant (1):
      sch_cake: Permit use of connmarks as tin classifiers

Toke Høiland-Jørgensen (1):
      sch_cake: Simplify logic in cake_select_tin()


 include/uapi/linux/pkt_sched.h |    1 
 net/sched/sch_cake.c           |  155 +++++++++++++++++++++++++---------------
 2 files changed, 98 insertions(+), 58 deletions(-)


* [PATCH net-next v1 2/3] sch_cake: Permit use of connmarks as tin classifiers
From: Toke Høiland-Jørgensen @ 2019-03-01 15:04 UTC
  To: David Miller; +Cc: netdev, cake

From: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>

Add flag 'FWMARK' to enable use of the firewall connmark as a tin selector.
The connmark (skb->mark) needs to be in the range 1..tin_cnt, i.e.
for diffserv3 the mark needs to be 1..3.

Background

Typically CAKE uses DSCP as the basis for tin selection.  DSCP values
are relatively easily changed as part of the egress path, usually with
iptables and the mangle table; ingress is more challenging.  CAKE is often
used on the WAN interface of a residential gateway where passthrough of
DSCP from the ISP is either missing or set to unhelpful values, so using
ingress DSCP values for tin selection isn't helpful in that environment.
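
As an aside, and purely as an illustration (not part of this patch), egress
DSCP remarking is typically a single mangle-table rule; the interface, match
and DSCP class below are arbitrary examples:

# hypothetical egress example: remark outbound DNS traffic as CS5
iptables -t mangle -A POSTROUTING -o $DEV -p udp --dport 53 -j DSCP --set-dscp-class CS5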

An approach to solving the ingress tin selection problem is to use
CAKE's understanding of tc filters.  Naive tc filters could match on
source/destination port numbers and force tin selection that way, but
multiple filters don't scale particularly well, as each filter must be
traversed whether it matches or not.  For example, a simple way to map 3
firewall marks to tins:

MAJOR=$( tc qdisc show dev $DEV | head -1 | awk '{print $3}' )
tc filter add dev $DEV parent $MAJOR protocol all handle 0x01 fw action skbedit priority ${MAJOR}1
tc filter add dev $DEV parent $MAJOR protocol all handle 0x02 fw action skbedit priority ${MAJOR}2
tc filter add dev $DEV parent $MAJOR protocol all handle 0x03 fw action skbedit priority ${MAJOR}3

Another option is to use an eBPF classifier (cls_bpf) with tc filters, e.g.

MAJOR=$( tc qdisc show dev $DEV | head -1 | awk '{print $3}' )
tc filter add dev $DEV parent $MAJOR bpf da obj my-bpf-fwmark-to-class.o

This has the disadvantages of a) needing someone to write and maintain
the bpf program, b) needing a bpf toolchain to compile it, and c) needing to
hardcode the major number in the bpf program so it matches the cake
instance (or forcing the cake instance to a particular major number),
since the major number cannot be passed to the bpf program via the tc
command line.

As already hinted at by the previous examples, it would be helpful
to associate tins with something that survives the Internet path and
ideally allows tin selection on both egress and ingress.  Netfilter's
conntrack permits setting an identifying mark on a connection, which
can also be restored to an ingress packet with the tc connmark action, e.g.

tc filter add dev eth0 parent ffff: protocol all prio 10 u32 \
	match u32 0 0 flowid 1:1 action connmark action mirred egress redirect dev ifb1

Once tc's connmark action has restored the connmark into skb->mark,
the previous solutions all build upon it: in one form or another they
copy that mark to the skb->priority field, where CAKE again picks it
up.

This change cuts out at least one of the (less intuitive and
non-scalable) middlemen and permits direct access to skb->mark.
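
For illustration, a minimal end-to-end sketch of the new option might look as
follows.  This assumes the matching userspace (iproute2) change exposes the
flag as a 'fwmark' keyword and that ifb1 already exists and is up; interface
names, matches and mark values are arbitrary examples:

# mark a connection class on egress and save the mark to conntrack
iptables -t mangle -A POSTROUTING -o $DEV -p tcp --dport 443 -j MARK --set-mark 2
iptables -t mangle -A POSTROUTING -o $DEV -j CONNMARK --save-mark

# egress: cake reads skb->mark directly (marks 1..3 select tins for diffserv3)
tc qdisc replace dev $DEV root cake diffserv3 fwmark

# ingress: restore the connmark onto the packet, then redirect to cake on ifb1
tc qdisc add dev $DEV handle ffff: ingress
tc filter add dev $DEV parent ffff: protocol all prio 10 u32 \
	match u32 0 0 flowid 1:1 action connmark action mirred egress redirect dev ifb1
tc qdisc replace dev ifb1 root cake diffserv3 fwmark ingress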

Signed-off-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 include/uapi/linux/pkt_sched.h |    1 +
 net/sched/sch_cake.c           |   34 +++++++++++++++++++++++++++-------
 2 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 1eb572ef3f27..7ee74c3474bf 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -1021,6 +1021,7 @@ enum {
 	TCA_CAKE_INGRESS,
 	TCA_CAKE_ACK_FILTER,
 	TCA_CAKE_SPLIT_GSO,
+	TCA_CAKE_FWMARK,
 	__TCA_CAKE_MAX
 };
 #define TCA_CAKE_MAX	(__TCA_CAKE_MAX - 1)
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 4d688b3b471b..f1cc4779699b 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -258,7 +258,8 @@ enum {
 	CAKE_FLAG_AUTORATE_INGRESS = BIT(1),
 	CAKE_FLAG_INGRESS	   = BIT(2),
 	CAKE_FLAG_WASH		   = BIT(3),
-	CAKE_FLAG_SPLIT_GSO	   = BIT(4)
+	CAKE_FLAG_SPLIT_GSO	   = BIT(4),
+	CAKE_FLAG_FWMARK	   = BIT(5)
 };
 
 /* COBALT operates the Codel and BLUE algorithms in parallel, in order to
@@ -1566,12 +1567,20 @@ static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
 		if (q->rate_flags & CAKE_FLAG_WASH)
 			cake_wash_diffserv(skb);
 	} else if (q->tin_mode != CAKE_DIFFSERV_BESTEFFORT) {
-		/* extract the Diffserv Precedence field, if it exists */
-		/* and clear DSCP bits if washing */
-		tin = q->tin_index[cake_handle_diffserv(skb,
-				q->rate_flags & CAKE_FLAG_WASH)];
-		if (unlikely(tin >= q->tin_cnt))
-			tin = 0;
+		if (q->rate_flags & CAKE_FLAG_FWMARK && /* use fw mark */
+		    skb->mark &&
+		    skb->mark <= q->tin_cnt) {
+			tin = q->tin_order[skb->mark - 1];
+			if (q->rate_flags & CAKE_FLAG_WASH)
+				cake_wash_diffserv(skb);
+		} else {
+			/* extract the Diffserv Precedence field, if it exists */
+			/* and clear DSCP bits if washing */
+			tin = q->tin_index[cake_handle_diffserv(skb,
+					q->rate_flags & CAKE_FLAG_WASH)];
+			if (unlikely(tin >= q->tin_cnt))
+				tin = 0;
+		}
 	} else {
 		tin = 0;
 		if (q->rate_flags & CAKE_FLAG_WASH)
@@ -2624,6 +2633,13 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 			q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
 	}
 
+	if (tb[TCA_CAKE_FWMARK]) {
+		if (!!nla_get_u32(tb[TCA_CAKE_FWMARK]))
+			q->rate_flags |= CAKE_FLAG_FWMARK;
+		else
+			q->rate_flags &= ~CAKE_FLAG_FWMARK;
+	}
+
 	if (q->tins) {
 		sch_tree_lock(sch);
 		cake_reconfigure(sch);
@@ -2783,6 +2799,10 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
 			!!(q->rate_flags & CAKE_FLAG_SPLIT_GSO)))
 		goto nla_put_failure;
 
+	if (nla_put_u32(skb, TCA_CAKE_FWMARK,
+			!!(q->rate_flags & CAKE_FLAG_FWMARK)))
+		goto nla_put_failure;
+
 	return nla_nest_end(skb, opts);
 
 nla_put_failure:


* [PATCH net-next v1 1/3] sch_cake: Make the dual modes fairer
From: Toke Høiland-Jørgensen @ 2019-03-01 15:04 UTC
  To: David Miller; +Cc: netdev, cake

From: George Amanakis <gamanakis@gmail.com>

CAKE host fairness does not work well with TCP flows in dual-srchost and
dual-dsthost setups. The reason is that ACKs generated by TCP flows are
classified as sparse flows and affect flow isolation from other hosts. Fix
this by calculating host_load based only on the bulk flows a host
generates. On a hash collision, the host_bulk_flow_count values must be
decremented on the old hosts and incremented on the new ones *if* the queue
is in the bulk set.
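
For reference, the dual modes in question are enabled with the dual-srchost
and dual-dsthost keywords, typically on the upload and download directions
respectively; a sketch, with interface names as arbitrary examples:

tc qdisc replace dev $DEV root cake dual-srchost
tc qdisc replace dev ifb1 root cake dual-dsthost ingress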

Reported-by: Pete Heist <peteheist@gmail.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 net/sched/sch_cake.c |   92 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 29 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 73940293700d..4d688b3b471b 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -138,8 +138,8 @@ struct cake_flow {
 struct cake_host {
 	u32 srchost_tag;
 	u32 dsthost_tag;
-	u16 srchost_refcnt;
-	u16 dsthost_refcnt;
+	u16 srchost_bulk_flow_count;
+	u16 dsthost_bulk_flow_count;
 };
 
 struct cake_heap_entry {
@@ -746,8 +746,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 		 * queue, accept the collision, update the host tags.
 		 */
 		q->way_collisions++;
-		q->hosts[q->flows[reduced_hash].srchost].srchost_refcnt--;
-		q->hosts[q->flows[reduced_hash].dsthost].dsthost_refcnt--;
+		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+			q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+			q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+		}
 		allocate_src = cake_dsrc(flow_mode);
 		allocate_dst = cake_ddst(flow_mode);
 found:
@@ -767,13 +769,14 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			}
 			for (i = 0; i < CAKE_SET_WAYS;
 				i++, k = (k + 1) % CAKE_SET_WAYS) {
-				if (!q->hosts[outer_hash + k].srchost_refcnt)
+				if (!q->hosts[outer_hash + k].srchost_bulk_flow_count)
 					break;
 			}
 			q->hosts[outer_hash + k].srchost_tag = srchost_hash;
 found_src:
 			srchost_idx = outer_hash + k;
-			q->hosts[srchost_idx].srchost_refcnt++;
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				q->hosts[srchost_idx].srchost_bulk_flow_count++;
 			q->flows[reduced_hash].srchost = srchost_idx;
 		}
 
@@ -789,13 +792,14 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			}
 			for (i = 0; i < CAKE_SET_WAYS;
 			     i++, k = (k + 1) % CAKE_SET_WAYS) {
-				if (!q->hosts[outer_hash + k].dsthost_refcnt)
+				if (!q->hosts[outer_hash + k].dsthost_bulk_flow_count)
 					break;
 			}
 			q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
 found_dst:
 			dsthost_idx = outer_hash + k;
-			q->hosts[dsthost_idx].dsthost_refcnt++;
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
 			q->flows[reduced_hash].dsthost = dsthost_idx;
 		}
 	}
@@ -1794,20 +1798,30 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		b->sparse_flow_count++;
 
 		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_refcnt);
+			host_load = max(host_load, srchost->srchost_bulk_flow_count);
 
 		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_refcnt);
+			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
 
 		flow->deficit = (b->flow_quantum *
 				 quantum_div[host_load]) >> 16;
 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+		struct cake_host *srchost = &b->hosts[flow->srchost];
+		struct cake_host *dsthost = &b->hosts[flow->dsthost];
+
 		/* this flow was empty, accounted as a sparse flow, but actually
 		 * in the bulk rotation.
 		 */
 		flow->set = CAKE_SET_BULK;
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
+
+		if (cake_dsrc(q->flow_mode))
+			srchost->srchost_bulk_flow_count++;
+
+		if (cake_ddst(q->flow_mode))
+			dsthost->dsthost_bulk_flow_count++;
+
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -1975,23 +1989,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	dsthost = &b->hosts[flow->dsthost];
 	host_load = 1;
 
-	if (cake_dsrc(q->flow_mode))
-		host_load = max(host_load, srchost->srchost_refcnt);
-
-	if (cake_ddst(q->flow_mode))
-		host_load = max(host_load, dsthost->dsthost_refcnt);
-
-	WARN_ON(host_load > CAKE_QUEUES);
-
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
-		/* The shifted prandom_u32() is a way to apply dithering to
-		 * avoid accumulating roundoff errors
-		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  (prandom_u32() >> 16)) >> 16;
-		list_move_tail(&flow->flowchain, &b->old_flows);
-
 		/* Keep all flows with deficits out of the sparse and decaying
 		 * rotations.  No non-empty flow can go into the decaying
 		 * rotation, so they can't get deficits
@@ -2000,6 +1999,13 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 			if (flow->head) {
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->srchost_bulk_flow_count++;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->dsthost_bulk_flow_count++;
+
 				flow->set = CAKE_SET_BULK;
 			} else {
 				/* we've moved it to the bulk rotation for
@@ -2009,6 +2015,22 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				flow->set = CAKE_SET_SPARSE_WAIT;
 			}
 		}
+
+		if (cake_dsrc(q->flow_mode))
+			host_load = max(host_load, srchost->srchost_bulk_flow_count);
+
+		if (cake_ddst(q->flow_mode))
+			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+
+		WARN_ON(host_load > CAKE_QUEUES);
+
+		/* The shifted prandom_u32() is a way to apply dithering to
+		 * avoid accumulating roundoff errors
+		 */
+		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+				  (prandom_u32() >> 16)) >> 16;
+		list_move_tail(&flow->flowchain, &b->old_flows);
+
 		goto retry;
 	}
 
@@ -2029,6 +2051,13 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 					       &b->decaying_flows);
 				if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
+
+					if (cake_dsrc(q->flow_mode))
+						srchost->srchost_bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->dsthost_bulk_flow_count--;
+
 					b->decaying_flow_count++;
 				} else if (flow->set == CAKE_SET_SPARSE ||
 					   flow->set == CAKE_SET_SPARSE_WAIT) {
@@ -2042,14 +2071,19 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				if (flow->set == CAKE_SET_SPARSE ||
 				    flow->set == CAKE_SET_SPARSE_WAIT)
 					b->sparse_flow_count--;
-				else if (flow->set == CAKE_SET_BULK)
+				else if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
-				else
+
+					if (cake_dsrc(q->flow_mode))
+						srchost->srchost_bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->dsthost_bulk_flow_count--;
+
+				} else
 					b->decaying_flow_count--;
 
 				flow->set = CAKE_SET_NONE;
-				srchost->srchost_refcnt--;
-				dsthost->dsthost_refcnt--;
 			}
 			goto begin;
 		}


* [PATCH net-next v1 3/3] sch_cake: Simplify logic in cake_select_tin()
From: Toke Høiland-Jørgensen @ 2019-03-01 15:04 UTC
  To: David Miller; +Cc: netdev, cake

From: Toke Høiland-Jørgensen <toke@toke.dk>

With more modes added, the logic in cake_select_tin() was getting a bit
hairy, and it turns out we can simplify it quite a bit. This also allows
us to get rid of one of the two diffserv parsing functions, with the added
benefit that already-zeroed DSCP fields won't get rewritten.

Suggested-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 net/sched/sch_cake.c |   61 +++++++++++++++++++-------------------------------
 1 file changed, 23 insertions(+), 38 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index f1cc4779699b..1d2a12132abc 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -1513,20 +1513,6 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
 	return idx + (tin << 16);
 }
 
-static void cake_wash_diffserv(struct sk_buff *skb)
-{
-	switch (skb->protocol) {
-	case htons(ETH_P_IP):
-		ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
-		break;
-	case htons(ETH_P_IPV6):
-		ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
-		break;
-	default:
-		break;
-	}
-}
-
 static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash)
 {
 	u8 dscp;
@@ -1558,33 +1544,32 @@ static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
 {
 	struct cake_sched_data *q = qdisc_priv(sch);
 	u32 tin;
+	u8 dscp;
 
-	if (TC_H_MAJ(skb->priority) == sch->handle &&
-	    TC_H_MIN(skb->priority) > 0 &&
-	    TC_H_MIN(skb->priority) <= q->tin_cnt) {
-		tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
+	/* Tin selection: Default to diffserv-based selection, allow overriding
+	 * using firewall marks or skb->priority.
+	 */
+	dscp = cake_handle_diffserv(skb,
+				    q->rate_flags & CAKE_FLAG_WASH);
 
-		if (q->rate_flags & CAKE_FLAG_WASH)
-			cake_wash_diffserv(skb);
-	} else if (q->tin_mode != CAKE_DIFFSERV_BESTEFFORT) {
-		if (q->rate_flags & CAKE_FLAG_FWMARK && /* use fw mark */
-		    skb->mark &&
-		    skb->mark <= q->tin_cnt) {
-			tin = q->tin_order[skb->mark - 1];
-			if (q->rate_flags & CAKE_FLAG_WASH)
-				cake_wash_diffserv(skb);
-		} else {
-			/* extract the Diffserv Precedence field, if it exists */
-			/* and clear DSCP bits if washing */
-			tin = q->tin_index[cake_handle_diffserv(skb,
-					q->rate_flags & CAKE_FLAG_WASH)];
-			if (unlikely(tin >= q->tin_cnt))
-				tin = 0;
-		}
-	} else {
+	if (q->tin_mode == CAKE_DIFFSERV_BESTEFFORT)
 		tin = 0;
-		if (q->rate_flags & CAKE_FLAG_WASH)
-			cake_wash_diffserv(skb);
+
+	else if (q->rate_flags & CAKE_FLAG_FWMARK && /* use fw mark */
+		 skb->mark &&
+		 skb->mark <= q->tin_cnt)
+		tin = q->tin_order[skb->mark - 1];
+
+	else if (TC_H_MAJ(skb->priority) == sch->handle &&
+		 TC_H_MIN(skb->priority) > 0 &&
+		 TC_H_MIN(skb->priority) <= q->tin_cnt)
+		tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
+
+	else {
+		tin = q->tin_index[dscp];
+
+		if (unlikely(tin >= q->tin_cnt))
+			tin = 0;
 	}
 
 	return &q->tins[tin];


* Re: [PATCH net-next v1 0/3] sched: Patches from out-of-tree version of sch_cake
From: David Miller @ 2019-03-04  4:38 UTC
  To: toke; +Cc: netdev, cake

From: Toke Høiland-Jørgensen <toke@redhat.com>
Date: Fri, 01 Mar 2019 16:04:05 +0100

> This series includes three patches with updates from the out-of-tree version
> of sch_cake. The first patch fixes the fairness scheduling when dual-mode
> fairness is enabled. The second patch adds a feature flag that allows using
> the firewall mark (fwmark) as a tin selector, as a convenience for people who
> want to customise tin selection. The third patch is a cleanup of the tin
> selection logic.

Series applied, thanks Toke.
