* [RFC PATCH v2 1/3] net: implement mechanism for HW based QOS
@ 2010-12-01 18:22 John Fastabend
  2010-12-01 18:22 ` [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS John Fastabend
  2010-12-01 18:23 ` [RFC PATCH v2 3/3] ixgbe: add multiple txqs per tc John Fastabend
  0 siblings, 2 replies; 5+ messages in thread
From: John Fastabend @ 2010-12-01 18:22 UTC (permalink / raw)
  To: davem; +Cc: john.r.fastabend, netdev, tgraf, eric.dumazet

This patch provides a mechanism for lower layer devices to
steer traffic to tx queues using skb->priority. This allows
hardware based QOS schemes to use the default qdisc without
incurring the penalties related to global state and the qdisc
lock, while still reliably placing skbs on the correct tx ring
and avoiding the head of line blocking that results from
shuffling in the LLD. Finally, all the goodness from txq caching
and xps/rps can still be leveraged.

Many drivers and devices are capable of implementing QOS
schemes in hardware, but currently these drivers tend to rely
on firmware to reroute specific traffic, a driver specific
select_queue, or the queue_mapping action in the qdisc.

None of these solutions is ideal or generic, so we end up with
driver specific solutions that special-case traffic types; for
example, FCoE traffic is steered in ixgbe with the select_queue
routine. By using select_queue for this, drivers need to be
updated for each and every traffic type, and we lose the
goodness of much of the upstream work.

Firmware solutions are inherently inflexible. And finally, if
admins are expected to build a qdisc and filter rules to steer
traffic, this requires knowledge of how the hardware is
currently configured; the number of tx queues and the queue
offsets may change depending on resources. This approach also
incurs all the overhead of a qdisc with filters.

With this mechanism users can set the skb priority using the
expected methods, either through socket options or directly
from the stack. The skb is then steered to the correct tx
queues aligned with the hardware QOS traffic classes. In the
normal case, with a single traffic class and all queues in this
class, everything works as-is until the LLD enables multiple
tcs.
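
As an illustration (a minimal userspace sketch, not part of this
patch), an application could tag its traffic like so and rely on
the stack to copy the value into skb->priority:

#include <sys/socket.h>

/* sketch: mark traffic from this socket with priority 4; the
 * stack copies the value into skb->priority, which the
 * prio_tc_map below translates into a traffic class */
static int set_prio(int sock)
{
	int prio = 4;

	return setsockopt(sock, SOL_SOCKET, SO_PRIORITY,
			  &prio, sizeof(prio));
}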

To steer the skb we mask the priority down to its low four bits
(prio & 15) and allow the hardware to configure up to 15
distinct classes of traffic. This is expected to be sufficient
for most applications; at any rate it is more than the 802.1Q
spec designates and is equal to the number of prio bands
currently implemented in the default qdisc.
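
As a worked example with illustrative numbers: if priority 3 is
mapped to a traffic class whose queue range has offset 8 and
count 4, the updated skb_tx_hash() computes ((u64)hash * 4) >> 32
and adds the offset, so the flow lands on one of queues 8-11.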

This, in conjunction with a userspace application such as
lldpad, can be used to implement the 802.1Q transmission
selection algorithms, one of these being the enhanced
transmission selection (ETS) algorithm currently used for DCB.
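
For driver authors, a minimal sketch of the intended setup
sequence (the queue counts and mappings below are illustrative
only; patch 3/3 shows the real ixgbe usage):

/* sketch: advertise up to 8 tcs, enable 4, give each tc two
 * contiguous tx queues, and map the first four priorities 1:1 */
static int example_setup_tcs(struct net_device *dev)
{
	int i, err;

	err = netdev_alloc_max_tcs(dev, 8);
	if (err < 0)
		return err;

	err = netdev_set_num_tc(dev, 4);
	if (err)
		return err;

	for (i = 0; i < 4; i++) {
		netdev_set_tc_queue(dev, i, 2, i * 2);
		netdev_set_prio_tc_map(dev, i, i);
	}

	return 0;
}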

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---

 include/linux/netdevice.h |   64 +++++++++++++++++++++++++++++++++++++++++++++
 net/core/dev.c            |   41 ++++++++++++++++++++++++++++-
 2 files changed, 104 insertions(+), 1 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 4b0c7f3..3307979 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -628,6 +628,12 @@ struct xps_dev_maps {
     (nr_cpu_ids * sizeof(struct xps_map *)))
 #endif /* CONFIG_XPS */
 
+/* HW offloaded queuing disciplines txq count and offset maps */
+struct netdev_tc_txq {
+	u16 count;
+	u16 offset;
+};
+
 /*
  * This structure defines the management hooks for network devices.
  * The following hooks can be defined; unless noted otherwise, they are
@@ -1128,6 +1134,10 @@ struct net_device {
 	/* Data Center Bridging netlink ops */
 	const struct dcbnl_rtnl_ops *dcbnl_ops;
 #endif
+	u8 max_tcs;
+	u8 num_tcs;
+	struct netdev_tc_txq *_tc_to_txq;
+	u8 prio_tc_map[16];
 
 #if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
 	/* max exchange id for FCoE LRO by ddp */
@@ -1144,6 +1154,57 @@ struct net_device {
 #define	NETDEV_ALIGN		32
 
 static inline
+int netdev_get_prio_tc_map(const struct net_device *dev, u32 prio)
+{
+	return dev->prio_tc_map[prio & 15];
+}
+
+static inline
+int netdev_set_prio_tc_map(struct net_device *dev, u8 prio, u8 tc)
+{
+	if (tc >= dev->num_tcs)
+		return -EINVAL;
+
+	return dev->prio_tc_map[prio & 15] = tc & 15;
+}
+
+static inline
+int netdev_set_tc_queue(struct net_device *dev, u8 tc, u16 count, u16 offset)
+{
+	struct netdev_tc_txq *tcp;
+
+	if (tc >= dev->num_tcs)
+		return -EINVAL;
+
+	tcp = &dev->_tc_to_txq[tc];
+	tcp->count = count;
+	tcp->offset = offset;
+	return 0;
+}
+
+static inline
+struct netdev_tc_txq *netdev_get_tc_queue(const struct net_device *dev, u8 tc)
+{
+	return &dev->_tc_to_txq[tc];
+}
+
+static inline
+int netdev_set_num_tc(struct net_device *dev, u8 num_tc)
+{
+	if (num_tc > dev->max_tcs)
+		return -EINVAL;
+
+	dev->num_tcs = num_tc;
+	return 0;
+}
+
+static inline
+u8 netdev_get_num_tc(const struct net_device *dev)
+{
+	return dev->num_tcs;
+}
+
+static inline
 struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev,
 					 unsigned int index)
 {
@@ -1368,6 +1429,9 @@ static inline void unregister_netdevice(struct net_device *dev)
 	unregister_netdevice_queue(dev, NULL);
 }
 
+extern int		netdev_alloc_max_tcs(struct net_device *dev, u8 tcs);
+extern void		netdev_free_tcs(struct net_device *dev);
+
 extern int 		netdev_refcnt_read(const struct net_device *dev);
 extern void		free_netdev(struct net_device *dev);
 extern void		synchronize_net(void);
diff --git a/net/core/dev.c b/net/core/dev.c
index 3259d2c..66c3af8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2118,6 +2118,8 @@ static u32 hashrnd __read_mostly;
 u16 skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb)
 {
 	u32 hash;
+	u16 qoffset = 0;
+	u16 qcount = dev->real_num_tx_queues;
 
 	if (skb_rx_queue_recorded(skb)) {
 		hash = skb_get_rx_queue(skb);
@@ -2126,13 +2128,20 @@ u16 skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb)
 		return hash;
 	}
 
+	if (dev->num_tcs) {
+		u8 tc = netdev_get_prio_tc_map(dev, skb->priority);
+		struct netdev_tc_txq *tcp = netdev_get_tc_queue(dev, tc);
+		qoffset = tcp->offset;
+		qcount = tcp->count;
+	}
+
 	if (skb->sk && skb->sk->sk_hash)
 		hash = skb->sk->sk_hash;
 	else
 		hash = (__force u16) skb->protocol ^ skb->rxhash;
 	hash = jhash_1word(hash, hashrnd);
 
-	return (u16) (((u64) hash * dev->real_num_tx_queues) >> 32);
+	return (u16) ((((u64) hash * qcount)) >> 32) + qoffset;
 }
 EXPORT_SYMBOL(skb_tx_hash);
 
@@ -5088,6 +5097,35 @@ void netif_stacked_transfer_operstate(const struct net_device *rootdev,
 }
 EXPORT_SYMBOL(netif_stacked_transfer_operstate);
 
+int netdev_alloc_max_tcs(struct net_device *dev, u8 tcs)
+{
+	struct netdev_tc_txq *tcp;
+
+	if (tcs > 15)
+		return -EINVAL;
+
+	tcp = kcalloc(tcs, sizeof(*tcp), GFP_KERNEL);
+	if (!tcp)
+		return -ENOMEM;
+
+	dev->_tc_to_txq = tcp;
+	dev->max_tcs = tcs;
+	return tcs;
+}
+EXPORT_SYMBOL(netdev_alloc_max_tcs);
+
+void netdev_free_tcs(struct net_device *dev)
+{
+	u8 *prio_map = dev->prio_tc_map;
+
+	dev->max_tcs = 0;
+	dev->num_tcs = 0;
+	memset(prio_map, 0, sizeof(*prio_map) * 16);
+	kfree(dev->_tc_to_txq);
+	dev->_tc_to_txq = NULL;
+}
+EXPORT_SYMBOL(netdev_free_tcs);
+
 #ifdef CONFIG_RPS
 static int netif_alloc_rx_queues(struct net_device *dev)
 {
@@ -5695,6 +5733,7 @@ void free_netdev(struct net_device *dev)
 #ifdef CONFIG_RPS
 	kfree(dev->_rx);
 #endif
+	netdev_free_tcs(dev);
 
 	kfree(rcu_dereference_raw(dev->ingress_queue));
 



* [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS
  2010-12-01 18:22 [RFC PATCH v2 1/3] net: implement mechanism for HW based QOS John Fastabend
@ 2010-12-01 18:22 ` John Fastabend
  2010-12-02 10:20   ` Thomas Graf
  2010-12-01 18:23 ` [RFC PATCH v2 3/3] ixgbe: add multiple txqs per tc John Fastabend
  1 sibling, 1 reply; 5+ messages in thread
From: John Fastabend @ 2010-12-01 18:22 UTC (permalink / raw)
  To: davem; +Cc: john.r.fastabend, netdev, tgraf, eric.dumazet

Implement nla_policy hooks to get/set HW offloaded QOS policies.
The following types are added to RTM_{GET|SET}LINK.


 [IFLA_TC]
	[IFLA_TC_MAX_TC]
	[IFLA_TC_NUM_TC]
	[IFLA_TC_TXQS]
		[IFLA_TC_TXQ]
		...
	[IFLA_TC_MAPS]
		[IFLA_TC_MAP]
		...

The following are read only:

IFLA_TC_MAX_TC
IFLA_TC_TXQS

The IFLA_TC_MAX_TC attribute can only be set by the lower layer
drivers because it is a hardware limit. The IFLA_TC_TXQS/IFLA_TC_TXQ
values provide insight into how the hardware has aligned the tx
queues with traffic classes but cannot be modified.

This adds a net_device op, ndo_set_num_tc(), to call back into
drivers to change the number of traffic classes. Lower layer
drivers may need to move resources around or reconfigure the HW
to support a changing number of traffic classes.
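
A minimal sketch of such a callback (the real ixgbe_set_num_tc()
in patch 3/3 also checks DCB state; any HW reconfiguration is
driver specific and omitted here):

/* sketch only: accept the new tc count; a real driver would
 * first verify HW state and reallocate rings/queues as needed */
static int example_ndo_set_num_tc(struct net_device *dev, u8 tcs)
{
	return netdev_set_num_tc(dev, tcs);
}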

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---

 include/linux/if_link.h   |   50 ++++++++++++++++++++++
 include/linux/netdevice.h |    4 ++
 net/core/rtnetlink.c      |  103 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 156 insertions(+), 1 deletions(-)

diff --git a/include/linux/if_link.h b/include/linux/if_link.h
index 6485d2a..ebe13a0 100644
--- a/include/linux/if_link.h
+++ b/include/linux/if_link.h
@@ -135,6 +135,7 @@ enum {
 	IFLA_VF_PORTS,
 	IFLA_PORT_SELF,
 	IFLA_AF_SPEC,
+	IFLA_TC,
 	__IFLA_MAX
 };
 
@@ -378,4 +379,53 @@ struct ifla_port_vsi {
 	__u8 pad[3];
 };
 
+/* HW QOS management section
+ *
+ *	Nested layout of set/get msg is:
+ *
+ *		[IFLA_TC]
+ *			[IFLA_TC_MAX_TC]
+ *			[IFLA_TC_NUM_TC]
+ *			[IFLA_TC_TXQS]
+ *				[IFLA_TC_TXQ]
+ *				...
+ *			[IFLA_TC_MAPS]
+ *				[IFLA_TC_MAP]
+ *				...
+ */
+enum {
+	IFLA_TC_UNSPEC,
+	IFLA_TC_TXMAX,
+	IFLA_TC_TXNUM,
+	IFLA_TC_TXQS,
+	IFLA_TC_MAPS,
+	__IFLA_TC_MAX,
+};
+#define IFLA_TC_MAX (__IFLA_TC_MAX - 1)
+
+struct ifla_tc_txq {
+	__u8 tc;
+	__u16 count;
+	__u16 offset;
+};
+
+enum {
+	IFLA_TC_TXQ_UNSPEC,
+	IFLA_TC_TXQ,
+	__IFLA_TC_TCQ_MAX,
+};
+#define IFLA_TC_TXQS_MAX (__IFLA_TC_TCQ_MAX - 1)
+
+struct ifla_tc_map {
+	__u8 prio;
+	__u8 tc;
+};
+
+enum {
+	IFLA_TC_MAP_UNSPEC,
+	IFLA_TC_MAP,
+	__IFLA_TC_MAP_MAX,
+};
+#define IFLA_TC_MAPS_MAX (__IFLA_TC_TCQ_MAX - 1)
+
 #endif /* _LINUX_IF_LINK_H */
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3307979..c44da29 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -744,6 +744,8 @@ struct netdev_tc_txq {
  * int (*ndo_set_vf_port)(struct net_device *dev, int vf,
  *			  struct nlattr *port[]);
  * int (*ndo_get_vf_port)(struct net_device *dev, int vf, struct sk_buff *skb);
+ *
+ * int (*ndo_set_num_tc)(struct net_device *dev, int tcs);
  */
 #define HAVE_NET_DEVICE_OPS
 struct net_device_ops {
@@ -802,6 +804,8 @@ struct net_device_ops {
 						   struct nlattr *port[]);
 	int			(*ndo_get_vf_port)(struct net_device *dev,
 						   int vf, struct sk_buff *skb);
+	int			(*ndo_set_num_tc)(struct net_device *dev,
+						  u8 tcs);
 #if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
 	int			(*ndo_fcoe_enable)(struct net_device *dev);
 	int			(*ndo_fcoe_disable)(struct net_device *dev);
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index 750db57..12bdff5 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -739,6 +739,21 @@ static size_t rtnl_port_size(const struct net_device *dev)
 		return port_self_size;
 }
 
+static size_t rtnl_tc_size(const struct net_device *dev)
+{
+	u8 num_tcs = netdev_get_num_tc(dev);
+	size_t table_size = nla_total_size(8)	/* IFLA_TC_TXMAX */
+		+ nla_total_size(8);		/* IFLA_TC_TXNUM */
+
+	table_size += nla_total_size(sizeof(struct nlattr));
+	table_size += num_tcs * nla_total_size(sizeof(struct ifla_tc_txq));
+
+	table_size += nla_total_size(sizeof(struct nlattr));
+	table_size += 16 * nla_total_size(sizeof(struct ifla_tc_map));
+
+	return table_size;
+}
+
 static noinline size_t if_nlmsg_size(const struct net_device *dev)
 {
 	return NLMSG_ALIGN(sizeof(struct ifinfomsg))
@@ -761,7 +776,8 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev)
 	       + rtnl_vfinfo_size(dev) /* IFLA_VFINFO_LIST */
 	       + rtnl_port_size(dev) /* IFLA_VF_PORTS + IFLA_PORT_SELF */
 	       + rtnl_link_get_size(dev) /* IFLA_LINKINFO */
-	       + rtnl_link_get_af_size(dev); /* IFLA_AF_SPEC */
+	       + rtnl_link_get_af_size(dev) /* IFLA_AF_SPEC */
+	       + rtnl_tc_size(dev); /* IFLA_TC */
 }
 
 static int rtnl_vf_ports_fill(struct sk_buff *skb, struct net_device *dev)
@@ -952,6 +968,41 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
 	if (rtnl_port_fill(skb, dev))
 		goto nla_put_failure;
 
+	if (dev->max_tcs) {
+		struct nlattr *tc_tbl, *tc_txq, *tc_map;
+		struct netdev_tc_txq *tcq;
+		struct ifla_tc_txq ifla_tcq;
+		struct ifla_tc_map ifla_map;
+		u8 i;
+
+		tc_tbl = nla_nest_start(skb, IFLA_TC);
+		if (!tc_tbl)
+			goto nla_put_failure;
+
+		NLA_PUT_U8(skb, IFLA_TC_TXMAX, dev->max_tcs);
+		NLA_PUT_U8(skb, IFLA_TC_TXNUM, dev->num_tcs);
+
+		tc_txq = nla_nest_start(skb, IFLA_TC_TXQS);
+		for (i = 0; i < dev->num_tcs; i++) {
+			tcq = netdev_get_tc_queue(dev, i);
+			ifla_tcq.tc = i;
+			ifla_tcq.count = tcq->count;
+			ifla_tcq.offset = tcq->offset;
+
+			NLA_PUT(skb, IFLA_TC_TXQ, sizeof(ifla_tcq), &ifla_tcq);
+		}
+		nla_nest_end(skb, tc_txq);
+
+		tc_map = nla_nest_start(skb, IFLA_TC_MAPS);
+		for (i = 0; i < 16; i++) {
+			ifla_map.prio = i;
+			ifla_map.tc = netdev_get_prio_tc_map(dev, i);
+			NLA_PUT(skb, IFLA_TC_MAP, sizeof(ifla_map), &ifla_map);
+		}
+		nla_nest_end(skb, tc_map);
+		nla_nest_end(skb, tc_tbl);
+	}
+
 	if (dev->rtnl_link_ops) {
 		if (rtnl_link_fill(skb, dev) < 0)
 			goto nla_put_failure;
@@ -1046,6 +1097,7 @@ const struct nla_policy ifla_policy[IFLA_MAX+1] = {
 	[IFLA_VF_PORTS]		= { .type = NLA_NESTED },
 	[IFLA_PORT_SELF]	= { .type = NLA_NESTED },
 	[IFLA_AF_SPEC]		= { .type = NLA_NESTED },
+	[IFLA_TC]		= { .type = NLA_NESTED },
 };
 EXPORT_SYMBOL(ifla_policy);
 
@@ -1081,6 +1133,23 @@ static const struct nla_policy ifla_port_policy[IFLA_PORT_MAX+1] = {
 	[IFLA_PORT_RESPONSE]	= { .type = NLA_U16, },
 };
 
+static const struct nla_policy ifla_tc_policy[IFLA_TC_MAX+1] = {
+	[IFLA_TC_TXMAX]		= { .type = NLA_U8 },
+	[IFLA_TC_TXNUM]		= { .type = NLA_U8 },
+	[IFLA_TC_TXQS]		= { .type = NLA_NESTED },
+	[IFLA_TC_MAPS]		= { .type = NLA_NESTED },
+};
+
+static const struct nla_policy ifla_tc_txq[IFLA_TC_TXQS_MAX+1] = {
+	[IFLA_TC_TXQ]		= { .type = NLA_BINARY,
+				    .len = sizeof(struct ifla_tc_txq)},
+};
+
+static const struct nla_policy ifla_tc_map[IFLA_TC_MAPS_MAX+1] = {
+	[IFLA_TC_MAP]		= { .type = NLA_BINARY,
+				    .len = sizeof(struct ifla_tc_map)},
+};
+
 struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[])
 {
 	struct net *net;
@@ -1389,6 +1458,38 @@ static int do_setlink(struct net_device *dev, struct ifinfomsg *ifm,
 	}
 	err = 0;
 
+	if (tb[IFLA_TC]) {
+		struct nlattr *table[IFLA_TC_MAX+1];
+		struct nlattr *tc_maps;
+		int rem;
+
+		err = nla_parse_nested(table, IFLA_TC_MAX, tb[IFLA_TC],
+				       ifla_tc_policy);
+		if (err < 0)
+			goto errout;
+
+		if (table[IFLA_TC_TXNUM]) {
+			u8 tcs = nla_get_u8(table[IFLA_TC_TXNUM]);
+			err = -EOPNOTSUPP;
+			if (ops->ndo_set_num_tc)
+				err = ops->ndo_set_num_tc(dev, tcs);
+			if (err < 0)
+				goto errout;
+		}
+
+		if (table[IFLA_TC_MAPS]) {
+			nla_for_each_nested(tc_maps, table[IFLA_TC_MAPS], rem) {
+				struct ifla_tc_map *map;
+				map = nla_data(tc_maps);
+				err = netdev_set_prio_tc_map(dev, map->prio,
+							     map->tc);
+				if (err < 0)
+					goto errout;
+			}
+		}
+	}
+	err = 0;
+
 errout:
 	if (err < 0 && modified && net_ratelimit())
 		printk(KERN_WARNING "A link change request failed with "



* [RFC PATCH v2 3/3] ixgbe: add multiple txqs per tc
  2010-12-01 18:22 [RFC PATCH v2 1/3] net: implement mechanism for HW based QOS John Fastabend
  2010-12-01 18:22 ` [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS John Fastabend
@ 2010-12-01 18:23 ` John Fastabend
  1 sibling, 0 replies; 5+ messages in thread
From: John Fastabend @ 2010-12-01 18:23 UTC (permalink / raw)
  To: davem; +Cc: john.r.fastabend, netdev, tgraf, eric.dumazet

This is sample code to illustrate the usage model for hardware
QOS offloading. It needs some polishing, but should be good
enough to illustrate how the API can be used.

Currently, DCB only enables a single queue per tc, due to
complications with mapping tc filter rules to traffic classes
when multiple queues are enabled, and because previously there
was no mechanism to map flows to multiple queues by priority.

Using the QOS offloading API we allocate multiple queues per
tc and configure the stack to hash across these queues. The
hardware then offloads the DCB enhanced transmission selection
(ETS) algorithm. Sockets can set the priority using the
SO_PRIORITY socket option and expect ETS to work.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
---

 drivers/net/ixgbe/ixgbe.h        |    2 
 drivers/net/ixgbe/ixgbe_dcb_nl.c |    3 
 drivers/net/ixgbe/ixgbe_main.c   |  264 +++++++++++++++++++-------------------
 3 files changed, 134 insertions(+), 135 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe.h b/drivers/net/ixgbe/ixgbe.h
index 3ae30b8..860b1fa 100644
--- a/drivers/net/ixgbe/ixgbe.h
+++ b/drivers/net/ixgbe/ixgbe.h
@@ -243,7 +243,7 @@ enum ixgbe_ring_f_enum {
 	RING_F_ARRAY_SIZE      /* must be last in enum set */
 };
 
-#define IXGBE_MAX_DCB_INDICES   8
+#define IXGBE_MAX_DCB_INDICES  64
 #define IXGBE_MAX_RSS_INDICES  16
 #define IXGBE_MAX_VMDQ_INDICES 64
 #define IXGBE_MAX_FDIR_INDICES 64
diff --git a/drivers/net/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ixgbe/ixgbe_dcb_nl.c
index bf566e8..8e203ae 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
@@ -155,6 +155,7 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
 			if (netif_running(netdev))
 				netdev->netdev_ops->ndo_stop(netdev);
 			ixgbe_clear_interrupt_scheme(adapter);
+			netdev_set_num_tc(adapter->netdev, 0);
 
 			adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
 			adapter->temp_dcb_cfg.pfc_mode_enable = false;
@@ -359,7 +360,7 @@ static u8 ixgbe_dcbnl_set_all(struct net_device *netdev)
 		return DCB_NO_HW_CHG;
 
 	ret = ixgbe_copy_dcb_cfg(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
-				 adapter->ring_feature[RING_F_DCB].indices);
+				 netdev->num_tcs);
 
 	if (ret)
 		return DCB_NO_HW_CHG;
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 494cb57..1517e5c 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -634,6 +634,18 @@ void ixgbe_unmap_and_free_tx_resource(struct ixgbe_ring *tx_ring,
 	/* tx_buffer_info must be completely set up in the transmit path */
 }
 
+int ixgbe_set_num_tc(struct net_device *dev, u8 tcs)
+{
+	struct ixgbe_adapter *adapter = netdev_priv(dev);
+	int err = 0;
+
+	/* Do not allow change while DCB controlled */
+	if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
+		return -EINVAL;
+
+	return netdev_set_num_tc(dev, tcs);
+}
+
 /**
  * ixgbe_dcb_txq_to_tc - convert a reg index to a traffic class
  * @adapter: driver private struct
@@ -647,7 +659,7 @@ void ixgbe_unmap_and_free_tx_resource(struct ixgbe_ring *tx_ring,
 u8 ixgbe_dcb_txq_to_tc(struct ixgbe_adapter *adapter, u8 reg_idx)
 {
 	int tc = -1;
-	int dcb_i = adapter->ring_feature[RING_F_DCB].indices;
+	u8 num_tcs = netdev_get_num_tc(adapter->netdev);
 
 	/* if DCB is not enabled the queues have no TC */
 	if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
@@ -662,13 +674,13 @@ u8 ixgbe_dcb_txq_to_tc(struct ixgbe_adapter *adapter, u8 reg_idx)
 		tc = reg_idx >> 2;
 		break;
 	default:
-		if (dcb_i != 4 && dcb_i != 8)
+		if (num_tcs != 4 && num_tcs != 8)
 			break;
 
 		/* if VMDq is enabled the lowest order bits determine TC */
 		if (adapter->flags & (IXGBE_FLAG_SRIOV_ENABLED |
 				      IXGBE_FLAG_VMDQ_ENABLED)) {
-			tc = reg_idx & (dcb_i - 1);
+			tc = reg_idx & (num_tcs - 1);
 			break;
 		}
 
@@ -681,9 +693,9 @@ u8 ixgbe_dcb_txq_to_tc(struct ixgbe_adapter *adapter, u8 reg_idx)
 		 * will only ever be 8 or 4 and that reg_idx will never
 		 * be greater then 128. The code without the power of 2
 		 * optimizations would be:
-		 * (((reg_idx % 32) + 32) * dcb_i) >> (9 - reg_idx / 32)
+		 * (((reg_idx % 32) + 32) * num_tcs) >> (9 - reg_idx / 32)
 		 */
-		tc = ((reg_idx & 0X1F) + 0x20) * dcb_i;
+		tc = ((reg_idx & 0X1F) + 0x20) * num_tcs;
 		tc >>= 9 - (reg_idx >> 5);
 	}
 
@@ -4190,14 +4202,29 @@ static void ixgbe_reset_task(struct work_struct *work)
 }
 
 #ifdef CONFIG_IXGBE_DCB
+#define MAX_Q_PER_TC 8
+
 static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
 {
 	bool ret = false;
 	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_DCB];
+	int num_tcs;
+	int i, q;
 
 	if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
 		return ret;
 
+	num_tcs = 8;
+	netdev_set_num_tc(adapter->netdev, num_tcs);
+
+	f->indices = 0;
+	for (i = 0; i < num_tcs; i++) {
+		q = min((int)num_online_cpus(), MAX_Q_PER_TC);
+		netdev_set_prio_tc_map(adapter->netdev, i, i);
+		netdev_set_tc_queue(adapter->netdev, i, q, f->indices);
+		f->indices += q;
+	}
+
 	f->mask = 0x7 << 3;
 	adapter->num_rx_queues = f->indices;
 	adapter->num_tx_queues = f->indices;
@@ -4284,12 +4311,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
 	if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {
 		adapter->num_rx_queues = 1;
 		adapter->num_tx_queues = 1;
-#ifdef CONFIG_IXGBE_DCB
-		if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-			e_info(probe, "FCoE enabled with DCB\n");
-			ixgbe_set_dcb_queues(adapter);
-		}
-#endif
+
 		if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
 			e_info(probe, "FCoE enabled with RSS\n");
 			if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
@@ -4345,16 +4367,15 @@ static int ixgbe_set_num_queues(struct ixgbe_adapter *adapter)
 	if (ixgbe_set_sriov_queues(adapter))
 		goto done;
 
-#ifdef IXGBE_FCOE
-	if (ixgbe_set_fcoe_queues(adapter))
-		goto done;
-
-#endif /* IXGBE_FCOE */
 #ifdef CONFIG_IXGBE_DCB
 	if (ixgbe_set_dcb_queues(adapter))
 		goto done;
-
 #endif
+
+#ifdef IXGBE_FCOE
+	if (ixgbe_set_fcoe_queues(adapter))
+		goto done;
+#endif /* IXGBE_FCOE */
 	if (ixgbe_set_fdir_queues(adapter))
 		goto done;
 
@@ -4446,6 +4467,63 @@ static inline bool ixgbe_cache_ring_rss(struct ixgbe_adapter *adapter)
 }
 
 #ifdef CONFIG_IXGBE_DCB
+
+ /* ixgbe_get_first_reg_idx - Return first register index associated
+ *  with this traffic class
+ */
+void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
+				     unsigned int *tx, unsigned int *rx)
+{
+	struct net_device *dev = adapter->netdev;
+	struct ixgbe_hw *hw = &adapter->hw;
+	u8 num_tcs = netdev_get_num_tc(dev);
+
+	*tx = 0;
+	*rx = 0;
+
+	switch (hw->mac.type) {
+	case ixgbe_mac_82598EB:
+		*tx = tc << 3;
+		*rx = tc << 2;
+		break;
+	case ixgbe_mac_82599EB:
+	case ixgbe_mac_X540:
+		if (num_tcs == 8) {
+			if (tc < 3) {
+				*tx = tc << 5;
+				*rx = tc << 4;
+			} else if (tc <  5) {
+				*tx = ((tc + 2) << 4);
+				*rx = tc << 4;
+			} else if (tc < num_tcs) {
+				*tx = ((tc + 8) << 3);
+				*rx = tc << 4;
+			}
+		} else if (num_tcs == 4) {
+				*rx =  tc << 5;
+				switch (tc) {
+				case 0:
+					*tx =  0;
+					break;
+				case 1:
+					*tx = 64;
+					break;
+				case 2:
+					*tx = 96;
+					break;
+				case 3:
+					*tx = 112;
+					break;
+				default:
+					break;
+				}
+		}
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * ixgbe_cache_ring_dcb - Descriptor ring to register mapping for DCB
  * @adapter: board private structure to initialize
@@ -4455,72 +4533,26 @@ static inline bool ixgbe_cache_ring_rss(struct ixgbe_adapter *adapter)
  **/
 static inline bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter)
 {
-	int i;
-	bool ret = false;
-	int dcb_i = adapter->ring_feature[RING_F_DCB].indices;
+	struct net_device *dev = adapter->netdev;
+	int i, j, k;
+	u8 num_tcs = netdev_get_num_tc(dev);
+	unsigned int tx_s, rx_s;
 
 	if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
 		return false;
 
 	/* the number of queues is assumed to be symmetric */
-	switch (adapter->hw.mac.type) {
-	case ixgbe_mac_82598EB:
-		for (i = 0; i < dcb_i; i++) {
-			adapter->rx_ring[i]->reg_idx = i << 3;
-			adapter->tx_ring[i]->reg_idx = i << 2;
+	for (i = 0, k = 0; i < num_tcs; i++) {
+		struct netdev_tc_txq *tcp = netdev_get_tc_queue(dev, i);
+		u16 qcount = tcp->count;
+		ixgbe_get_first_reg_idx(adapter, i, &tx_s, &rx_s);
+		for (j = 0; j < qcount; j++, k++) {
+			adapter->tx_ring[k]->reg_idx = tx_s + j;
+			adapter->rx_ring[k]->reg_idx = rx_s + j;
 		}
-		ret = true;
-		break;
-	case ixgbe_mac_82599EB:
-	case ixgbe_mac_X540:
-		if (dcb_i == 8) {
-			/*
-			 * Tx TC0 starts at: descriptor queue 0
-			 * Tx TC1 starts at: descriptor queue 32
-			 * Tx TC2 starts at: descriptor queue 64
-			 * Tx TC3 starts at: descriptor queue 80
-			 * Tx TC4 starts at: descriptor queue 96
-			 * Tx TC5 starts at: descriptor queue 104
-			 * Tx TC6 starts at: descriptor queue 112
-			 * Tx TC7 starts at: descriptor queue 120
-			 *
-			 * Rx TC0-TC7 are offset by 16 queues each
-			 */
-			for (i = 0; i < 3; i++) {
-				adapter->tx_ring[i]->reg_idx = i << 5;
-				adapter->rx_ring[i]->reg_idx = i << 4;
-			}
-			for ( ; i < 5; i++) {
-				adapter->tx_ring[i]->reg_idx = ((i + 2) << 4);
-				adapter->rx_ring[i]->reg_idx = i << 4;
-			}
-			for ( ; i < dcb_i; i++) {
-				adapter->tx_ring[i]->reg_idx = ((i + 8) << 3);
-				adapter->rx_ring[i]->reg_idx = i << 4;
-			}
-			ret = true;
-		} else if (dcb_i == 4) {
-			/*
-			 * Tx TC0 starts at: descriptor queue 0
-			 * Tx TC1 starts at: descriptor queue 64
-			 * Tx TC2 starts at: descriptor queue 96
-			 * Tx TC3 starts at: descriptor queue 112
-			 *
-			 * Rx TC0-TC3 are offset by 32 queues each
-			 */
-			adapter->tx_ring[0]->reg_idx = 0;
-			adapter->tx_ring[1]->reg_idx = 64;
-			adapter->tx_ring[2]->reg_idx = 96;
-			adapter->tx_ring[3]->reg_idx = 112;
-			for (i = 0 ; i < dcb_i; i++)
-				adapter->rx_ring[i]->reg_idx = i << 5;
-			ret = true;
-		}
-		break;
-	default:
-		break;
 	}
-	return ret;
+
+	return true;
 }
 #endif
 
@@ -4648,17 +4680,15 @@ static void ixgbe_cache_ring_register(struct ixgbe_adapter *adapter)
 
 	if (ixgbe_cache_ring_sriov(adapter))
 		return;
-
+#ifdef CONFIG_IXGBE_DCB
+	if (ixgbe_cache_ring_dcb(adapter))
+		return;
+#endif /* IXGBE_DCB */
 #ifdef IXGBE_FCOE
 	if (ixgbe_cache_ring_fcoe(adapter))
 		return;
 
 #endif /* IXGBE_FCOE */
-#ifdef CONFIG_IXGBE_DCB
-	if (ixgbe_cache_ring_dcb(adapter))
-		return;
-
-#endif
 	if (ixgbe_cache_ring_fdir(adapter))
 		return;
 
@@ -5122,7 +5152,7 @@ static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter)
 	adapter->dcb_cfg.round_robin_enable = false;
 	adapter->dcb_set_bitmap = 0x00;
 	ixgbe_copy_dcb_cfg(&adapter->dcb_cfg, &adapter->temp_dcb_cfg,
-			   adapter->ring_feature[RING_F_DCB].indices);
+			   dev->max_tcs);
 
 #endif
 
@@ -5975,7 +6005,7 @@ static void ixgbe_watchdog_task(struct work_struct *work)
 		if (link_up) {
 #ifdef CONFIG_DCB
 			if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-				for (i = 0; i < MAX_TRAFFIC_CLASS; i++)
+				for (i = 0; i < netdev->max_tcs; i++)
 					hw->mac.ops.fc_enable(hw, i);
 			} else {
 				hw->mac.ops.fc_enable(hw, 0);
@@ -6500,25 +6530,6 @@ static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb)
 {
 	struct ixgbe_adapter *adapter = netdev_priv(dev);
 	int txq = smp_processor_id();
-#ifdef IXGBE_FCOE
-	__be16 protocol;
-
-	protocol = vlan_get_protocol(skb);
-
-	if ((protocol == htons(ETH_P_FCOE)) ||
-	    (protocol == htons(ETH_P_FIP))) {
-		if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {
-			txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1);
-			txq += adapter->ring_feature[RING_F_FCOE].mask;
-			return txq;
-#ifdef CONFIG_IXGBE_DCB
-		} else if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-			txq = adapter->fcoe.up;
-			return txq;
-#endif
-		}
-	}
-#endif
 
 	if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
 		while (unlikely(txq >= dev->real_num_tx_queues))
@@ -6526,14 +6537,20 @@ static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb)
 		return txq;
 	}
 
-	if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-		if (skb->priority == TC_PRIO_CONTROL)
-			txq = adapter->ring_feature[RING_F_DCB].indices-1;
-		else
-			txq = (skb->vlan_tci & IXGBE_TX_FLAGS_VLAN_PRIO_MASK)
-			       >> 13;
+#ifdef IXGBE_FCOE
+	/*
+	 * If DCB is not enabled to assign FCoE a priority mapping
+	 * we need to steer the skb to FCoE enabled tx rings.
+	 */
+	if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&
+	    !(adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
+	    ((skb->protocol == htons(ETH_P_FCOE)) ||
+	     (skb->protocol == htons(ETH_P_FIP)))) {
+		txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1);
+		txq += adapter->ring_feature[RING_F_FCOE].mask;
 		return txq;
 	}
+#endif
 
 	return skb_tx_hash(dev, skb);
 }
@@ -6556,33 +6573,12 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb,
 
 	if (vlan_tx_tag_present(skb)) {
 		tx_flags |= vlan_tx_tag_get(skb);
-		if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-			tx_flags &= ~IXGBE_TX_FLAGS_VLAN_PRIO_MASK;
-			tx_flags |= ((skb->queue_mapping & 0x7) << 13);
-		}
-		tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT;
-		tx_flags |= IXGBE_TX_FLAGS_VLAN;
-	} else if (adapter->flags & IXGBE_FLAG_DCB_ENABLED &&
-		   skb->priority != TC_PRIO_CONTROL) {
-		tx_flags |= ((skb->queue_mapping & 0x7) << 13);
 		tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT;
 		tx_flags |= IXGBE_TX_FLAGS_VLAN;
 	}
 
 #ifdef IXGBE_FCOE
-	/* for FCoE with DCB, we force the priority to what
-	 * was specified by the switch */
-	if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED &&
-	    (protocol == htons(ETH_P_FCOE) ||
-	     protocol == htons(ETH_P_FIP))) {
-#ifdef CONFIG_IXGBE_DCB
-		if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-			tx_flags &= ~(IXGBE_TX_FLAGS_VLAN_PRIO_MASK
-				      << IXGBE_TX_FLAGS_VLAN_SHIFT);
-			tx_flags |= ((adapter->fcoe.up << 13)
-				      << IXGBE_TX_FLAGS_VLAN_SHIFT);
-		}
-#endif
+	if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {
 		/* flag for FCoE offloads */
 		if (protocol == htons(ETH_P_FCOE))
 			tx_flags |= IXGBE_TX_FLAGS_FCOE;
@@ -6856,6 +6852,7 @@ static const struct net_device_ops ixgbe_netdev_ops = {
 	.ndo_set_vf_tx_rate	= ixgbe_ndo_set_vf_bw,
 	.ndo_get_vf_config	= ixgbe_ndo_get_vf_config,
 	.ndo_get_stats64	= ixgbe_get_stats64,
+	.ndo_set_num_tc		= ixgbe_set_num_tc,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller	= ixgbe_netpoll,
 #endif
@@ -6994,9 +6991,9 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
 		indices = min_t(unsigned int, indices, IXGBE_MAX_RSS_INDICES);
 	else
 		indices = min_t(unsigned int, indices, IXGBE_MAX_FDIR_INDICES);
-
+#if defined(CONFIG_IXGBE_DCB)
 	indices = max_t(unsigned int, indices, IXGBE_MAX_DCB_INDICES);
-#ifdef IXGBE_FCOE
+#elif defined(IXGBE_FCOE)
 	indices += min_t(unsigned int, num_possible_cpus(),
 			 IXGBE_MAX_FCOE_INDICES);
 #endif
@@ -7157,6 +7154,7 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
 
 #ifdef CONFIG_IXGBE_DCB
 	netdev->dcbnl_ops = &dcbnl_ops;
+	netdev_alloc_max_tcs(netdev, MAX_TRAFFIC_CLASS);
 #endif
 
 #ifdef IXGBE_FCOE



* Re: [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS
  2010-12-01 18:22 ` [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS John Fastabend
@ 2010-12-02 10:20   ` Thomas Graf
  2010-12-02 19:53     ` John Fastabend
  0 siblings, 1 reply; 5+ messages in thread
From: Thomas Graf @ 2010-12-02 10:20 UTC (permalink / raw)
  To: John Fastabend; +Cc: davem, netdev, eric.dumazet

On Wed, Dec 01, 2010 at 10:22:58AM -0800, John Fastabend wrote:
> +
> +		NLA_PUT_U8(skb, IFLA_TC_TXMAX, dev->max_tcs);
> +		NLA_PUT_U8(skb, IFLA_TC_TXNUM, dev->num_tcs);
> +
> +		tc_txq = nla_nest_start(skb, IFLA_TC_TXQS);

You have to check the return value here.
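
For example (sketch of the missing check):

	tc_txq = nla_nest_start(skb, IFLA_TC_TXQS);
	if (!tc_txq)
		goto nla_put_failure;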

> +		for (i = 0; i < dev->num_tcs; i++) {
> +			tcq = netdev_get_tc_queue(dev, i);
> +			ifla_tcq.tc = i;
> +			ifla_tcq.count = tcq->count;
> +			ifla_tcq.offset = tcq->offset;
> +
> +			NLA_PUT(skb, IFLA_TC_TXQ, sizeof(ifla_tcq), &ifla_tcq);
> +		}
> +		nla_nest_end(skb, tc_txq);
> +
> +		tc_map = nla_nest_start(skb, IFLA_TC_MAPS);

Same here

> +		for (i = 0; i < 16; i++) {
> +			ifla_map.prio = i;
> +			ifla_map.tc = netdev_get_prio_tc_map(dev, i);
> +			NLA_PUT(skb, IFLA_TC_MAP, sizeof(ifla_map), &ifla_map);
> +		}
>  
> +
> +static const struct nla_policy ifla_tc_txq[IFLA_TC_TXQS_MAX+1] = {
> +	[IFLA_TC_TXQ]		= { .type = NLA_BINARY,
> +				    .len = sizeof(struct ifla_tc_txq)},

This is probably not what you want. NLA_BINARY only enforces a maximum
payload length but no minimum payload length.

Omit the .type and let it fall back to NLA_UNSPEC and only specify a
.len. This enforces that the attribute payload is at least .len in
length. You should not worry about payload that exceeds your size
expectations. This allows ifla_tc_txq to be extended in the future.
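
That is, roughly (sketch of the suggested policy):

	static const struct nla_policy ifla_tc_txq[IFLA_TC_TXQS_MAX+1] = {
		[IFLA_TC_TXQ]	= { .len = sizeof(struct ifla_tc_txq) },
	};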

> +static const struct nla_policy ifla_tc_map[IFLA_TC_MAPS_MAX+1] = {
> +	[IFLA_TC_MAP]		= { .type = NLA_BINARY,
> +				    .len = sizeof(struct ifla_tc_map)},
> +};

Same here


* Re: [RFC PATCH v2 2/3] netlink: implement nla_policy for HW QOS
  2010-12-02 10:20   ` Thomas Graf
@ 2010-12-02 19:53     ` John Fastabend
  0 siblings, 0 replies; 5+ messages in thread
From: John Fastabend @ 2010-12-02 19:53 UTC (permalink / raw)
  To: davem, netdev, eric.dumazet

On 12/2/2010 2:20 AM, Thomas Graf wrote:
> On Wed, Dec 01, 2010 at 10:22:58AM -0800, John Fastabend wrote:
>> +
>> +		NLA_PUT_U8(skb, IFLA_TC_TXMAX, dev->max_tcs);
>> +		NLA_PUT_U8(skb, IFLA_TC_TXNUM, dev->num_tcs);
>> +
>> +		tc_txq = nla_nest_start(skb, IFLA_TC_TXQS);
> 
> You have to check the return value here.
> 
>> +		for (i = 0; i < dev->num_tcs; i++) {
>> +			tcq = netdev_get_tc_queue(dev, i);
>> +			ifla_tcq.tc = i;
>> +			ifla_tcq.count = tcq->count;
>> +			ifla_tcq.offset = tcq->offset;
>> +
>> +			NLA_PUT(skb, IFLA_TC_TXQ, sizeof(ifla_tcq), &ifla_tcq);
>> +		}
>> +		nla_nest_end(skb, tc_txq);
>> +
>> +		tc_map = nla_nest_start(skb, IFLA_TC_MAPS);
> 
> Same here
> 
>> +		for (i = 0; i < 16; i++) {
>> +			ifla_map.prio = i;
>> +			ifla_map.tc = netdev_get_prio_tc_map(dev, i);
>> +			NLA_PUT(skb, IFLA_TC_MAP, sizeof(ifla_map), &ifla_map);
>> +		}
>>  
>> +
>> +static const struct nla_policy ifla_tc_txq[IFLA_TC_TXQS_MAX+1] = {
>> +	[IFLA_TC_TXQ]		= { .type = NLA_BINARY,
>> +				    .len = sizeof(struct ifla_tc_txq)},
> 
> This is probably not what you want. NLA_BINARY only enforces a maximum
> payload length but no minimum payload length.
> 
> Omit the .type and let it fall back to NLA_UNSPEC and only specify a
> .len. This enforces that the attribute payload is at least .len in
> length. You should not worry about payload that exceeds your size
> expectations. This allows ifla_tc_txq to be extended in the future.
> 
>> +static const struct nla_policy ifla_tc_map[IFLA_TC_MAPS_MAX+1] = {
>> +	[IFLA_TC_MAP]		= { .type = NLA_BINARY,
>> +				    .len = sizeof(struct ifla_tc_map)},
>> +};
> 
> Same here


Errors noted. Thanks for the clarification, I'll fix this up. Also I'll look into Jamal's comment regarding moving this to use 'tc'.

-- John  

