* [PATCH] IB/ipoib: Add handling of skb with many frags
@ 2016-02-18  8:37 ` Hans Westgaard Ry
  0 siblings, 0 replies; 6+ messages in thread
From: Hans Westgaard Ry @ 2016-02-18  8:37 UTC (permalink / raw)
  Cc: Hans Westgaard Ry, Doug Ledford, Sean Hefty, Hal Rosenstock,
	Bart Van Assche, Yuval Shaia, Christian Marie, Jason Gunthorpe,
	Håkon Bugge, Wei Lin Guay, Or Gerlitz, Erez Shitrit,
	Haggai Eran, Chuck Lever, Matan Barak,
	open list:INFINIBAND SUBSYSTEM, open list

IPoIB converts skb fragments to sges, adding one extra sge when offloading
is enabled. The current code path assumes that the maximum number of sges a
device supports is at least MAX_SKB_FRAGS + 1; there is no interaction with
upper layers to limit the number of fragments in an skb if a device supports
fewer sges. The same assumption also leads to requesting a fixed number of
sges when IPoIB creates queue-pairs with scatter/gather enabled.

A fallback/slow path is implemented using skb_linearize() to handle cases
where the conversion would result in more sges than the device supports.

Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>
---
 drivers/infiniband/ulp/ipoib/ipoib_cm.c    | 4 +++-
 drivers/infiniband/ulp/ipoib/ipoib_ib.c    | 9 +++++++++
 drivers/infiniband/ulp/ipoib/ipoib_verbs.c | 4 +++-
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 917e46e..0a2bd43 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -1031,7 +1031,9 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
 	struct ib_qp *tx_qp;
 
 	if (dev->features & NETIF_F_SG)
-		attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+		attr.cap.max_send_sge = min_t(u32,
+					      priv->ca->attrs.max_sge,
+					      MAX_SKB_FRAGS + 1);
 
 	tx_qp = ib_create_qp(priv->pd, &attr);
 	if (PTR_ERR(tx_qp) == -EINVAL) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 5ea0c14..b4f2240 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -541,6 +541,15 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
 	int hlen, rc;
 	void *phead;
 
+	if (skb_shinfo(skb)->nr_frags >= priv->ca->attrs.max_sge) {
+		if (skb_linearize(skb) != 0) {
+			ipoib_warn(priv, "skb could not be linearized\n");
+			++dev->stats.tx_dropped;
+			++dev->stats.tx_errors;
+			dev_kfree_skb_any(skb);
+			return;
+		}
+	}
 	if (skb_is_gso(skb)) {
 		hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
 		phead = skb->data;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
index d48c5ba..62f8ec3 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
@@ -206,7 +206,9 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 		init_attr.create_flags |= IB_QP_CREATE_NETIF_QP;
 
 	if (dev->features & NETIF_F_SG)
-		init_attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+		init_attr.cap.max_send_sge = min_t(u32,
+						   priv->ca->attrs.max_sge,
+						   MAX_SKB_FRAGS + 1);
 
 	priv->qp = ib_create_qp(priv->pd, &init_attr);
 	if (IS_ERR(priv->qp)) {
-- 
2.4.3
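
As a side note for readers skimming the thread, the core idea of the patch can
be shown outside the driver: cap the number of send sges requested at QP
creation to what the device reports, and take a slow path (linearize or drop)
for an skb whose fragment list would not fit. The sketch below is illustrative
only; fake_device, needs_linearize() and the MAX_SKB_FRAGS value (17 with
4 KiB pages) are stand-ins, not the kernel or ipoib definitions.

#include <stdbool.h>
#include <stdio.h>

#define MAX_SKB_FRAGS 17	/* illustrative; depends on PAGE_SIZE */

struct fake_device { unsigned int max_sge; };	/* device sge limit */

/* What IPoIB asks for in cap.max_send_sge when SG is enabled (the v1 change). */
static unsigned int requested_send_sge(const struct fake_device *dev)
{
	unsigned int want = MAX_SKB_FRAGS + 1;

	return want < dev->max_sge ? want : dev->max_sge;
}

/* True when an skb must be linearized (or dropped) before posting,
 * mirroring the nr_frags >= max_sge test added to ipoib_send() in v1. */
static bool needs_linearize(unsigned int nr_frags, const struct fake_device *dev)
{
	return nr_frags >= dev->max_sge;
}

int main(void)
{
	struct fake_device dev = { .max_sge = 4 };	/* a low-sge device */

	printf("request %u sges at QP creation\n", requested_send_sge(&dev));
	printf("5-frag skb needs linearize: %d\n", needs_linearize(5, &dev));
	return 0;
}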


* [PATCH v2] IB/ipoib: Add handling for sending of skb with many frags
@ 2016-03-02 12:44     ` Hans Westgaard Ry
  0 siblings, 0 replies; 6+ messages in thread
From: Hans Westgaard Ry @ 2016-03-02 12:44 UTC (permalink / raw)
  Cc: Hans Westgaard Ry, Doug Ledford, Sean Hefty, Hal Rosenstock,
	Christoph Lameter, Erez Shitrit, Or Gerlitz, Bart Van Assche,
	Yuval Shaia, Haakon Bugge, Wei Lin Guay, Chuck Lever,
	Jason Gunthorpe, Haggai Eran, Matan Barak,
	open list:INFINIBAND SUBSYSTEM, open list

IPoIB converts skb fragments to sges, adding one extra sge when SG is enabled.
The current code path assumes that the maximum number of sges a device
supports is at least MAX_SKB_FRAGS + 1; there is no interaction with upper
layers to limit the number of fragments in an skb if a device supports fewer
sges. The same assumption also leads to requesting a fixed number of sges
when IPoIB creates queue-pairs with SG enabled.

A fallback/slow path is implemented using skb_linearize() to handle cases
where the conversion would result in more sges than the device supports.

Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>
---
 drivers/infiniband/ulp/ipoib/ipoib.h       |  2 ++
 drivers/infiniband/ulp/ipoib/ipoib_cm.c    | 23 +++++++++++++++++++++--
 drivers/infiniband/ulp/ipoib/ipoib_ib.c    | 18 ++++++++++++++++++
 drivers/infiniband/ulp/ipoib/ipoib_verbs.c |  5 ++++-
 4 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
index a6f3eab..85be0de 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib.h
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h
@@ -244,6 +244,7 @@ struct ipoib_cm_tx {
 	unsigned	     tx_tail;
 	unsigned long	     flags;
 	u32		     mtu;
+	unsigned             max_send_sge;
 };
 
 struct ipoib_cm_rx_buf {
@@ -390,6 +391,7 @@ struct ipoib_dev_priv {
 	int	hca_caps;
 	struct ipoib_ethtool_st ethtool;
 	struct timer_list poll_timer;
+	unsigned max_send_sge;
 };
 
 struct ipoib_ah {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 917e46e..c8ed535 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -710,6 +710,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 	struct ipoib_tx_buf *tx_req;
 	int rc;
+	unsigned usable_sge = tx->max_send_sge - !!skb_headlen(skb);
 
 	if (unlikely(skb->len > tx->mtu)) {
 		ipoib_warn(priv, "packet len %d (> %d) too long to send, dropping\n",
@@ -719,7 +720,23 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
 		ipoib_cm_skb_too_long(dev, skb, tx->mtu - IPOIB_ENCAP_LEN);
 		return;
 	}
-
+	if (skb_shinfo(skb)->nr_frags > usable_sge) {
+		if (skb_linearize(skb) < 0) {
+			ipoib_warn(priv, "skb could not be linearized\n");
+			++dev->stats.tx_dropped;
+			++dev->stats.tx_errors;
+			dev_kfree_skb_any(skb);
+			return;
+		}
+		/* Does skb_linearize return ok without reducing nr_frags? */
+		if (skb_shinfo(skb)->nr_frags > usable_sge) {
+			ipoib_warn(priv, "too many frags after skb linearize\n");
+			++dev->stats.tx_dropped;
+			++dev->stats.tx_errors;
+			dev_kfree_skb_any(skb);
+			return;
+		}
+	}
 	ipoib_dbg_data(priv, "sending packet: head 0x%x length %d connection 0x%x\n",
 		       tx->tx_head, skb->len, tx->qp->qp_num);
 
@@ -1031,7 +1048,8 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
 	struct ib_qp *tx_qp;
 
 	if (dev->features & NETIF_F_SG)
-		attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+		attr.cap.max_send_sge =
+			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
 
 	tx_qp = ib_create_qp(priv->pd, &attr);
 	if (PTR_ERR(tx_qp) == -EINVAL) {
@@ -1040,6 +1058,7 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
 		attr.create_flags &= ~IB_QP_CREATE_USE_GFP_NOIO;
 		tx_qp = ib_create_qp(priv->pd, &attr);
 	}
+	tx->max_send_sge = attr.cap.max_send_sge;
 	return tx_qp;
 }
 
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 5ea0c14..ee7a555 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -540,6 +540,7 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
 	struct ipoib_tx_buf *tx_req;
 	int hlen, rc;
 	void *phead;
+	unsigned usable_sge = priv->max_send_sge - !!skb_headlen(skb);
 
 	if (skb_is_gso(skb)) {
 		hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
@@ -563,6 +564,23 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
 		phead = NULL;
 		hlen  = 0;
 	}
+	if (skb_shinfo(skb)->nr_frags > usable_sge) {
+		if (skb_linearize(skb) < 0) {
+			ipoib_warn(priv, "skb could not be linearized\n");
+			++dev->stats.tx_dropped;
+			++dev->stats.tx_errors;
+			dev_kfree_skb_any(skb);
+			return;
+		}
+		/* Does skb_linearize return ok without reducing nr_frags? */
+		if (skb_shinfo(skb)->nr_frags > usable_sge) {
+			ipoib_warn(priv, "too many frags after skb linearize\n");
+			++dev->stats.tx_dropped;
+			++dev->stats.tx_errors;
+			dev_kfree_skb_any(skb);
+			return;
+		}
+	}
 
 	ipoib_dbg_data(priv, "sending packet, length=%d address=%p qpn=0x%06x\n",
 		       skb->len, address, qpn);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
index d48c5ba..b809c37 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
@@ -206,7 +206,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 		init_attr.create_flags |= IB_QP_CREATE_NETIF_QP;
 
 	if (dev->features & NETIF_F_SG)
-		init_attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+		init_attr.cap.max_send_sge =
+			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
 
 	priv->qp = ib_create_qp(priv->pd, &init_attr);
 	if (IS_ERR(priv->qp)) {
@@ -233,6 +234,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 	priv->rx_wr.next = NULL;
 	priv->rx_wr.sg_list = priv->rx_sge;
 
+	priv->max_send_sge = init_attr.cap.max_send_sge;
+
 	return 0;
 
 out_free_send_cq:
-- 
2.4.3
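
Compared with v1, the send-path guard in this version also reserves one sge
for the skb's linear head and uses the max_send_sge value actually granted by
ib_create_qp() (recorded in priv/tx) rather than the requested one. Stripped
of driver context, the decision reduces to the sketch below; the enum and
struct names are invented for illustration, and only max_send_sge corresponds
to the stored QP capability.

#include <stdbool.h>

enum tx_action { TX_SEND, TX_LINEARIZE_FIRST, TX_DROP };

struct skb_shape {
	unsigned int nr_frags;		/* paged fragments */
	bool has_linear_head;		/* skb_headlen(skb) != 0 */
};

static unsigned int usable_sge(const struct skb_shape *s, unsigned int max_send_sge)
{
	/* One sge is consumed by the linear head, if present. */
	return max_send_sge - (s->has_linear_head ? 1 : 0);
}

/* First pass: does the skb fit as-is? */
static enum tx_action classify(const struct skb_shape *s, unsigned int max_send_sge)
{
	return s->nr_frags <= usable_sge(s, max_send_sge) ? TX_SEND
							  : TX_LINEARIZE_FIRST;
}

/* Second pass, after a successful skb_linearize(): the patch re-checks the
 * fragment count defensively instead of assuming it dropped to zero. */
static enum tx_action recheck_after_linearize(const struct skb_shape *s,
					      unsigned int max_send_sge)
{
	return s->nr_frags <= usable_sge(s, max_send_sge) ? TX_SEND : TX_DROP;
}

For example, with max_send_sge = 4 granted by the QP, an skb with a non-empty
head has a budget of 3 sges for fragments, so a 5-fragment skb is linearized
and only sent if the re-check then passes.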


* Re: [PATCH v2] IB/ipoib: Add handling for sending of skb with many frags
       [not found]     ` <1456922668-24956-1-git-send-email-hans.westgaard.ry-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
@ 2016-03-02 17:03       ` Yuval Shaia
  0 siblings, 0 replies; 6+ messages in thread
From: Yuval Shaia @ 2016-03-02 17:03 UTC (permalink / raw)
  To: Hans Westgaard Ry
  Cc: Doug Ledford, Sean Hefty, Hal Rosenstock, Christoph Lameter,
	Erez Shitrit, Or Gerlitz, Bart Van Assche, Haakon Bugge,
	Wei Lin Guay, Chuck Lever, Jason Gunthorpe, Haggai Eran,
	Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA

Reviewed-by: Yuval Shaia <yuval.shaia-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>

On Wed, Mar 02, 2016 at 01:44:28PM +0100, Hans Westgaard Ry wrote:
> IPoIB converts skb fragments to sges, adding one extra sge when SG is enabled.
> The current code path assumes that the maximum number of sges a device
> supports is at least MAX_SKB_FRAGS + 1; there is no interaction with upper
> layers to limit the number of fragments in an skb if a device supports fewer
> sges. The same assumption also leads to requesting a fixed number of sges
> when IPoIB creates queue-pairs with SG enabled.
> 
> A fallback/slow path is implemented using skb_linearize() to handle cases
> where the conversion would result in more sges than the device supports.
> 
> Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Håkon Bugge <haakon.bugge-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Wei Lin Guay <wei.lin.guay-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> ---
>  drivers/infiniband/ulp/ipoib/ipoib.h       |  2 ++
>  drivers/infiniband/ulp/ipoib/ipoib_cm.c    | 23 +++++++++++++++++++++--
>  drivers/infiniband/ulp/ipoib/ipoib_ib.c    | 18 ++++++++++++++++++
>  drivers/infiniband/ulp/ipoib/ipoib_verbs.c |  5 ++++-
>  4 files changed, 45 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
> index a6f3eab..85be0de 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib.h
> +++ b/drivers/infiniband/ulp/ipoib/ipoib.h
> @@ -244,6 +244,7 @@ struct ipoib_cm_tx {
>  	unsigned	     tx_tail;
>  	unsigned long	     flags;
>  	u32		     mtu;
> +	unsigned             max_send_sge;
>  };
>  
>  struct ipoib_cm_rx_buf {
> @@ -390,6 +391,7 @@ struct ipoib_dev_priv {
>  	int	hca_caps;
>  	struct ipoib_ethtool_st ethtool;
>  	struct timer_list poll_timer;
> +	unsigned max_send_sge;
>  };
>  
>  struct ipoib_ah {
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> index 917e46e..c8ed535 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> @@ -710,6 +710,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
>  	struct ipoib_dev_priv *priv = netdev_priv(dev);
>  	struct ipoib_tx_buf *tx_req;
>  	int rc;
> +	unsigned usable_sge = tx->max_send_sge - !!skb_headlen(skb);
>  
>  	if (unlikely(skb->len > tx->mtu)) {
>  		ipoib_warn(priv, "packet len %d (> %d) too long to send, dropping\n",
> @@ -719,7 +720,23 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
>  		ipoib_cm_skb_too_long(dev, skb, tx->mtu - IPOIB_ENCAP_LEN);
>  		return;
>  	}
> -
> +	if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +		if (skb_linearize(skb) < 0) {
> +			ipoib_warn(priv, "skb could not be linearized\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +		/* Does skb_linearize return ok without reducing nr_frags? */
Per our offline chat, I think it might be that nr_frags will still be > 0
> +		if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +			ipoib_warn(priv, "too many frags after skb linearize\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +	}
>  	ipoib_dbg_data(priv, "sending packet: head 0x%x length %d connection 0x%x\n",
>  		       tx->tx_head, skb->len, tx->qp->qp_num);
>  
> @@ -1031,7 +1048,8 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
>  	struct ib_qp *tx_qp;
>  
>  	if (dev->features & NETIF_F_SG)
> -		attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
> +		attr.cap.max_send_sge =
> +			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
>  
>  	tx_qp = ib_create_qp(priv->pd, &attr);
>  	if (PTR_ERR(tx_qp) == -EINVAL) {
> @@ -1040,6 +1058,7 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
>  		attr.create_flags &= ~IB_QP_CREATE_USE_GFP_NOIO;
>  		tx_qp = ib_create_qp(priv->pd, &attr);
>  	}
> +	tx->max_send_sge = attr.cap.max_send_sge;
>  	return tx_qp;
>  }
>  
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> index 5ea0c14..ee7a555 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> @@ -540,6 +540,7 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
>  	struct ipoib_tx_buf *tx_req;
>  	int hlen, rc;
>  	void *phead;
> +	unsigned usable_sge = priv->max_send_sge - !!skb_headlen(skb);
>  
>  	if (skb_is_gso(skb)) {
>  		hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
> @@ -563,6 +564,23 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
>  		phead = NULL;
>  		hlen  = 0;
>  	}
> +	if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +		if (skb_linearize(skb) < 0) {
> +			ipoib_warn(priv, "skb could not be linearized\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +		/* Does skb_linearize return ok without reducing nr_frags? */
> +		if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +			ipoib_warn(priv, "too many frags after skb linearize\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +	}
>  
>  	ipoib_dbg_data(priv, "sending packet, length=%d address=%p qpn=0x%06x\n",
>  		       skb->len, address, qpn);
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> index d48c5ba..b809c37 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> @@ -206,7 +206,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
>  		init_attr.create_flags |= IB_QP_CREATE_NETIF_QP;
>  
>  	if (dev->features & NETIF_F_SG)
> -		init_attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
> +		init_attr.cap.max_send_sge =
> +			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
>  
>  	priv->qp = ib_create_qp(priv->pd, &init_attr);
>  	if (IS_ERR(priv->qp)) {
> @@ -233,6 +234,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
>  	priv->rx_wr.next = NULL;
>  	priv->rx_wr.sg_list = priv->rx_sge;
>  
> +	priv->max_send_sge = init_attr.cap.max_send_sge;
> +
>  	return 0;
>  
>  out_free_send_cq:
> -- 
> 2.4.3
> 
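
For context on the slow path being questioned here: linearizing an skb means
pulling the head and every paged fragment into one contiguous buffer, so the
packet can then be described by a single sge; that copy is why it is the
expensive fallback. A rough userspace model of the operation is sketched
below. It is not the kernel's skb_linearize(), which works in place on the
skb's own data area, and the frag struct here is invented for the sketch.

#include <stdlib.h>
#include <string.h>

struct frag { const void *data; size_t len; };	/* stand-in for skb_frag_t */

/* Return a newly allocated contiguous copy of head + all fragments, or NULL
 * on allocation failure (the analogue of skb_linearize() returning -ENOMEM). */
static void *linearize_copy(const void *head, size_t headlen,
			    const struct frag *frags, size_t nr_frags,
			    size_t *out_len)
{
	size_t total = headlen, i;
	char *buf, *p;

	for (i = 0; i < nr_frags; i++)
		total += frags[i].len;

	buf = malloc(total);
	if (!buf)
		return NULL;

	p = buf;
	if (headlen) {
		memcpy(p, head, headlen);
		p += headlen;
	}
	for (i = 0; i < nr_frags; i++) {
		memcpy(p, frags[i].data, frags[i].len);
		p += frags[i].len;
	}

	*out_len = total;
	return buf;
}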


* Re: [PATCH v2] IB/ipoib: Add handling for sending of skb with many frags
  2016-03-02 12:44     ` Hans Westgaard Ry
@ 2016-03-03 15:11     ` Doug Ledford
  -1 siblings, 0 replies; 6+ messages in thread
From: Doug Ledford @ 2016-03-03 15:11 UTC (permalink / raw)
  To: Hans Westgaard Ry
  Cc: Sean Hefty, Hal Rosenstock, Christoph Lameter, Erez Shitrit,
	Or Gerlitz, Bart Van Assche, Yuval Shaia, Haakon Bugge,
	Wei Lin Guay, Chuck Lever, Jason Gunthorpe, Haggai Eran,
	Matan Barak, open list:INFINIBAND SUBSYSTEM, open list

On 03/02/2016 07:44 AM, Hans Westgaard Ry wrote:
> IPoIB converts skb fragments to sges, adding one extra sge when SG is enabled.
> The current code path assumes that the maximum number of sges a device
> supports is at least MAX_SKB_FRAGS + 1; there is no interaction with upper
> layers to limit the number of fragments in an skb if a device supports fewer
> sges. The same assumption also leads to requesting a fixed number of sges
> when IPoIB creates queue-pairs with SG enabled.
> 
> A fallback/slow path is implemented using skb_linearize() to handle cases
> where the conversion would result in more sges than the device supports.
> 
> Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
> Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
> Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>

Thanks for the version 2 that handles both connected and disconnected
mode.  Applied.

> ---
>  drivers/infiniband/ulp/ipoib/ipoib.h       |  2 ++
>  drivers/infiniband/ulp/ipoib/ipoib_cm.c    | 23 +++++++++++++++++++++--
>  drivers/infiniband/ulp/ipoib/ipoib_ib.c    | 18 ++++++++++++++++++
>  drivers/infiniband/ulp/ipoib/ipoib_verbs.c |  5 ++++-
>  4 files changed, 45 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
> index a6f3eab..85be0de 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib.h
> +++ b/drivers/infiniband/ulp/ipoib/ipoib.h
> @@ -244,6 +244,7 @@ struct ipoib_cm_tx {
>  	unsigned	     tx_tail;
>  	unsigned long	     flags;
>  	u32		     mtu;
> +	unsigned             max_send_sge;
>  };
>  
>  struct ipoib_cm_rx_buf {
> @@ -390,6 +391,7 @@ struct ipoib_dev_priv {
>  	int	hca_caps;
>  	struct ipoib_ethtool_st ethtool;
>  	struct timer_list poll_timer;
> +	unsigned max_send_sge;
>  };
>  
>  struct ipoib_ah {
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> index 917e46e..c8ed535 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> @@ -710,6 +710,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
>  	struct ipoib_dev_priv *priv = netdev_priv(dev);
>  	struct ipoib_tx_buf *tx_req;
>  	int rc;
> +	unsigned usable_sge = tx->max_send_sge - !!skb_headlen(skb);
>  
>  	if (unlikely(skb->len > tx->mtu)) {
>  		ipoib_warn(priv, "packet len %d (> %d) too long to send, dropping\n",
> @@ -719,7 +720,23 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
>  		ipoib_cm_skb_too_long(dev, skb, tx->mtu - IPOIB_ENCAP_LEN);
>  		return;
>  	}
> -
> +	if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +		if (skb_linearize(skb) < 0) {
> +			ipoib_warn(priv, "skb could not be linearized\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +		/* Does skb_linearize return ok without reducing nr_frags? */
> +		if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +			ipoib_warn(priv, "too many frags after skb linearize\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +	}
>  	ipoib_dbg_data(priv, "sending packet: head 0x%x length %d connection 0x%x\n",
>  		       tx->tx_head, skb->len, tx->qp->qp_num);
>  
> @@ -1031,7 +1048,8 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
>  	struct ib_qp *tx_qp;
>  
>  	if (dev->features & NETIF_F_SG)
> -		attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
> +		attr.cap.max_send_sge =
> +			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
>  
>  	tx_qp = ib_create_qp(priv->pd, &attr);
>  	if (PTR_ERR(tx_qp) == -EINVAL) {
> @@ -1040,6 +1058,7 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
>  		attr.create_flags &= ~IB_QP_CREATE_USE_GFP_NOIO;
>  		tx_qp = ib_create_qp(priv->pd, &attr);
>  	}
> +	tx->max_send_sge = attr.cap.max_send_sge;
>  	return tx_qp;
>  }
>  
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> index 5ea0c14..ee7a555 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
> @@ -540,6 +540,7 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
>  	struct ipoib_tx_buf *tx_req;
>  	int hlen, rc;
>  	void *phead;
> +	unsigned usable_sge = priv->max_send_sge - !!skb_headlen(skb);
>  
>  	if (skb_is_gso(skb)) {
>  		hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
> @@ -563,6 +564,23 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
>  		phead = NULL;
>  		hlen  = 0;
>  	}
> +	if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +		if (skb_linearize(skb) < 0) {
> +			ipoib_warn(priv, "skb could not be linearized\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +		/* Does skb_linearize return ok without reducing nr_frags? */
> +		if (skb_shinfo(skb)->nr_frags > usable_sge) {
> +			ipoib_warn(priv, "too many frags after skb linearize\n");
> +			++dev->stats.tx_dropped;
> +			++dev->stats.tx_errors;
> +			dev_kfree_skb_any(skb);
> +			return;
> +		}
> +	}
>  
>  	ipoib_dbg_data(priv, "sending packet, length=%d address=%p qpn=0x%06x\n",
>  		       skb->len, address, qpn);
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> index d48c5ba..b809c37 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
> @@ -206,7 +206,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
>  		init_attr.create_flags |= IB_QP_CREATE_NETIF_QP;
>  
>  	if (dev->features & NETIF_F_SG)
> -		init_attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
> +		init_attr.cap.max_send_sge =
> +			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
>  
>  	priv->qp = ib_create_qp(priv->pd, &init_attr);
>  	if (IS_ERR(priv->qp)) {
> @@ -233,6 +234,8 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
>  	priv->rx_wr.next = NULL;
>  	priv->rx_wr.sg_list = priv->rx_sge;
>  
> +	priv->max_send_sge = init_attr.cap.max_send_sge;
> +
>  	return 0;
>  
>  out_free_send_cq:
> 


-- 
Doug Ledford <dledford@redhat.com>
              GPG KeyID: 0E572FDD






Thread overview: 6+ messages
2016-02-18  8:37 [PATCH] IB/ipoib: Add handling of skb with many frags Hans Westgaard Ry
2016-02-18  8:37 ` Hans Westgaard Ry
     [not found] ` <1455784674-8412-1-git-send-email-hans.westgaard.ry-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2016-03-02 12:44   ` [PATCH v2] IB/ipoib: Add handling for sending " Hans Westgaard Ry
2016-03-02 12:44     ` Hans Westgaard Ry
     [not found]     ` <1456922668-24956-1-git-send-email-hans.westgaard.ry-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2016-03-02 17:03       ` Yuval Shaia
2016-03-03 15:11     ` Doug Ledford
