* [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools
@ 2021-09-01  0:08 Sukadev Bhattiprolu
  2021-09-01  0:08 ` [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool() Sukadev Bhattiprolu
                   ` (10 more replies)
  0 siblings, 11 replies; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

It can take a long time to free and reallocate the rx and tx pools and the
long term buffers (LTBs) during each reset of the VNIC. This is especially
true when the partition (LPAR) is heavily loaded and going through a Logical
Partition Migration (LPM). The drawn-out reset causes the LPAR to lose
connectivity for extended periods of time, resulting in "RMC connection"
errors and the LPM failing.

Worse, the lost connectivity can trigger a failover during the LPM. At that
point, the vnic driver releases even the resources it has already allocated
and starts over.

As long as the resources we have already allocated are valid/applicable,
we might as well hold on to them while trying to allocate the remaining
resources. This patch set attempts to reuse the resources previously
allocated as long as they are valid. It seems to vastly improve the
time taken for the vnic reset. We have also not seen any RMC connection
issues during our testing with this patch set.

If the backing devices for a vnic adapter are not "matched" (see "pool
parameters" in patches 8 and 9), it is possible that we will still free
all the resources and reallocate them. If that becomes a common problem,
we will have to address it separately.
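
For a rough sense of what "matched" means, the rx-side check boils down
to the following (a simplified sketch of reuse_rx_pools() from patch 8;
the full version also validates the old values against the min/max limits
negotiated with the VIOS):

	/* Sketch: rx pools are reusable only if the pool parameters
	 * (number of pools, buffers per pool, buffer size) still match.
	 */
	static bool reuse_rx_pools(struct ibmvnic_adapter *adapter)
	{
		if (!adapter->rx_pool)
			return false;

		return adapter->num_active_rx_pools == adapter->req_rx_queues &&
		       adapter->prev_rx_pool_size ==
				adapter->req_rx_add_entries_per_subcrq &&
		       adapter->prev_rx_buf_sz == adapter->cur_rx_buf_sz;
	}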

Thanks to input and extensive testing from Brian King, Cris Forno,
Dany Madden, and Rick Lindsley.

Sukadev Bhattiprolu (9):
  ibmvnic: consolidate related code in replenish_rx_pool()
  ibmvnic: Fix up some comments and messages
  ibmvnic: Use/rename local vars in init_rx_pools
  ibmvnic: Use/rename local vars in init_tx_pools
  ibmvnic: init_tx_pools move loop-invariant code out
  ibmvnic: use bitmap for LTB map_ids
  ibmvnic: Reuse LTB when possible
  ibmvnic: Reuse rx pools when possible
  ibmvnic: Reuse tx pools when possible

 drivers/net/ethernet/ibm/ibmvnic.c | 592 ++++++++++++++++++-----------
 drivers/net/ethernet/ibm/ibmvnic.h |  10 +-
 2 files changed, 379 insertions(+), 223 deletions(-)

-- 
2.31.1



* [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool()
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:26   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages Sukadev Bhattiprolu
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

For better readability, consolidate related code in replenish_rx_pool()
and add some comments.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index a775c69e4fd7..e8b1231be485 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -371,6 +371,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		}
 
 		index = pool->free_map[pool->next_free];
+		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
+		pool->next_free = (pool->next_free + 1) % pool->size;
 
 		if (pool->rx_buff[index].skb)
 			dev_err(dev, "Inconsistent free_map!\n");
@@ -380,14 +382,15 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		dst = pool->long_term_buff.buff + offset;
 		memset(dst, 0, pool->buff_size);
 		dma_addr = pool->long_term_buff.addr + offset;
-		pool->rx_buff[index].data = dst;
 
-		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
+		/* add the skb to an rx_buff in the pool */
+		pool->rx_buff[index].data = dst;
 		pool->rx_buff[index].dma = dma_addr;
 		pool->rx_buff[index].skb = skb;
 		pool->rx_buff[index].pool_index = pool->index;
 		pool->rx_buff[index].size = pool->buff_size;
 
+		/* queue the rx_buff for the next send_subcrq_indirect */
 		sub_crq = &ind_bufp->indir_arr[ind_bufp->index++];
 		memset(sub_crq, 0, sizeof(*sub_crq));
 		sub_crq->rx_add.first = IBMVNIC_CRQ_CMD;
@@ -405,7 +408,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		shift = 8;
 #endif
 		sub_crq->rx_add.len = cpu_to_be32(pool->buff_size << shift);
-		pool->next_free = (pool->next_free + 1) % pool->size;
+
+		/* if send_subcrq_indirect queue is full, flush to VIOS */
 		if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS ||
 		    i == count - 1) {
 			lpar_rc =
-- 
2.31.1



* [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
  2021-09-01  0:08 ` [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool() Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:28   ` Dany Madden
  2021-09-01  8:58     ` kernel test robot
  2021-09-01  0:08 ` [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools Sukadev Bhattiprolu
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

Add/update some comments/function headers and fix up some messages.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 40 +++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index e8b1231be485..911315b10731 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -243,14 +243,13 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 
 	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
 	if (rc) {
-		dev_err(dev,
-			"Long term map request aborted or timed out,rc = %d\n",
+		dev_err(dev, "LTB map request aborted or timed out, rc = %d\n",
 			rc);
 		goto out;
 	}
 
 	if (adapter->fw_done_rc) {
-		dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
+		dev_err(dev, "Couldn't map LTB, rc = %d\n",
 			adapter->fw_done_rc);
 		rc = -1;
 		goto out;
@@ -281,7 +280,9 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 	    adapter->reset_reason != VNIC_RESET_MOBILITY &&
 	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
 		send_request_unmap(adapter, ltb->map_id);
+
 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+
 	ltb->buff = NULL;
 	ltb->map_id = 0;
 }
@@ -574,6 +575,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
 	return 0;
 }
 
+/**
+ * Release any rx_pools attached to @adapter.
+ * Safe to call this multiple times - even if no pools are attached.
+ */
 static void release_rx_pools(struct ibmvnic_adapter *adapter)
 {
 	struct ibmvnic_rx_pool *rx_pool;
@@ -628,6 +633,9 @@ static int init_rx_pools(struct net_device *netdev)
 		return -1;
 	}
 
+	/* Set num_active_rx_pools early. If we fail below after partial
+	 * allocation, release_rx_pools() will know how many to look for.
+	 */
 	adapter->num_active_rx_pools = rxadd_subcrqs;
 
 	for (i = 0; i < rxadd_subcrqs; i++) {
@@ -646,6 +654,7 @@ static int init_rx_pools(struct net_device *netdev)
 		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
 					    GFP_KERNEL);
 		if (!rx_pool->free_map) {
+			dev_err(dev, "Couldn't alloc free_map %d\n", i);
 			release_rx_pools(adapter);
 			return -1;
 		}
@@ -739,10 +748,17 @@ static void release_one_tx_pool(struct ibmvnic_adapter *adapter,
 	free_long_term_buff(adapter, &tx_pool->long_term_buff);
 }
 
+/**
+ * Release any tx and tso pools attached to @adapter.
+ * Safe to call this multiple times - even if no pools are attached.
+ */
 static void release_tx_pools(struct ibmvnic_adapter *adapter)
 {
 	int i;
 
+	/* init_tx_pools() ensures that ->tx_pool and ->tso_pool are
+	 * both NULL or both non-NULL. So we only need to check one.
+	 */
 	if (!adapter->tx_pool)
 		return;
 
@@ -793,6 +809,7 @@ static int init_one_tx_pool(struct net_device *netdev,
 static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+	struct device *dev = &adapter->vdev->dev;
 	int tx_subcrqs;
 	u64 buff_size;
 	int i, rc;
@@ -805,17 +822,27 @@ static int init_tx_pools(struct net_device *netdev)
 
 	adapter->tso_pool = kcalloc(tx_subcrqs,
 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+	/* To simplify release_tx_pools() ensure that ->tx_pool and
+	 * ->tso_pool are either both NULL or both non-NULL.
+	 */
 	if (!adapter->tso_pool) {
 		kfree(adapter->tx_pool);
 		adapter->tx_pool = NULL;
 		return -1;
 	}
 
+	/* Set num_active_tx_pools early. If we fail below after partial
+	 * allocation, release_tx_pools() will know how many to look for.
+	 */
 	adapter->num_active_tx_pools = tx_subcrqs;
 
 	for (i = 0; i < tx_subcrqs; i++) {
 		buff_size = adapter->req_mtu + VLAN_HLEN;
 		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
+
+		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
+			i, adapter->req_tx_entries_per_subcrq, buff_size);
+
 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
 				      adapter->req_tx_entries_per_subcrq,
 				      buff_size);
@@ -4774,9 +4801,10 @@ static void handle_query_map_rsp(union ibmvnic_crq *crq,
 		dev_err(dev, "Error %ld in QUERY_MAP_RSP\n", rc);
 		return;
 	}
-	netdev_dbg(netdev, "page_size = %d\ntot_pages = %d\nfree_pages = %d\n",
-		   crq->query_map_rsp.page_size, crq->query_map_rsp.tot_pages,
-		   crq->query_map_rsp.free_pages);
+	netdev_dbg(netdev, "page_size = %d\ntot_pages = %u\nfree_pages = %u\n",
+		   crq->query_map_rsp.page_size,
+		   __be32_to_cpu(crq->query_map_rsp.tot_pages),
+		   __be32_to_cpu(crq->query_map_rsp.free_pages));
 }
 
 static void handle_query_cap_rsp(union ibmvnic_crq *crq,
-- 
2.31.1



* [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
  2021-09-01  0:08 ` [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool() Sukadev Bhattiprolu
  2021-09-01  0:08 ` [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:28   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools Sukadev Bhattiprolu
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

To make the code more readable, use/rename some local variables.
Basically, we have a set of pools (num_pools); each pool has a set of
buffers (pool_size), and each buffer is buff_size bytes.

pool_size is a bit ambiguous (it could mean a size in bytes or a count
of buffers), so add a comment in the header file to make it explicit.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 17 +++++++++--------
 drivers/net/ethernet/ibm/ibmvnic.h |  2 +-
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 911315b10731..a611bd3f2539 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -618,14 +618,16 @@ static int init_rx_pools(struct net_device *netdev)
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	struct device *dev = &adapter->vdev->dev;
 	struct ibmvnic_rx_pool *rx_pool;
-	int rxadd_subcrqs;
+	u64 num_pools;
+	u64 pool_size;		/* # of buffers in one pool */
 	u64 buff_size;
 	int i, j;
 
-	rxadd_subcrqs = adapter->num_active_rx_scrqs;
+	num_pools = adapter->num_active_rx_scrqs;
+	pool_size = adapter->req_rx_add_entries_per_subcrq;
 	buff_size = adapter->cur_rx_buf_sz;
 
-	adapter->rx_pool = kcalloc(rxadd_subcrqs,
+	adapter->rx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_rx_pool),
 				   GFP_KERNEL);
 	if (!adapter->rx_pool) {
@@ -636,17 +638,16 @@ static int init_rx_pools(struct net_device *netdev)
 	/* Set num_active_rx_pools early. If we fail below after partial
 	 * allocation, release_rx_pools() will know how many to look for.
 	 */
-	adapter->num_active_rx_pools = rxadd_subcrqs;
+	adapter->num_active_rx_pools = num_pools;
 
-	for (i = 0; i < rxadd_subcrqs; i++) {
+	for (i = 0; i < num_pools; i++) {
 		rx_pool = &adapter->rx_pool[i];
 
 		netdev_dbg(adapter->netdev,
 			   "Initializing rx_pool[%d], %lld buffs, %lld bytes each\n",
-			   i, adapter->req_rx_add_entries_per_subcrq,
-			   buff_size);
+			   i, pool_size, buff_size);
 
-		rx_pool->size = adapter->req_rx_add_entries_per_subcrq;
+		rx_pool->size = pool_size;
 		rx_pool->index = i;
 		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 		rx_pool->active = 1;
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 22df602323bc..5652566818fb 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -827,7 +827,7 @@ struct ibmvnic_rx_buff {
 
 struct ibmvnic_rx_pool {
 	struct ibmvnic_rx_buff *rx_buff;
-	int size;
+	int size;			/* # of buffers in the pool */
 	int index;
 	int buff_size;
 	atomic_t available;
-- 
2.31.1



* [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (2 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:30   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out Sukadev Bhattiprolu
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

Use/rename local variables in init_tx_pools() for consistency with
init_rx_pools() and for readability. Also add some comments.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index a611bd3f2539..4c6739b250df 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -777,31 +777,31 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
 
 static int init_one_tx_pool(struct net_device *netdev,
 			    struct ibmvnic_tx_pool *tx_pool,
-			    int num_entries, int buf_size)
+			    int pool_size, int buf_size)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	int i;
 
-	tx_pool->tx_buff = kcalloc(num_entries,
+	tx_pool->tx_buff = kcalloc(pool_size,
 				   sizeof(struct ibmvnic_tx_buff),
 				   GFP_KERNEL);
 	if (!tx_pool->tx_buff)
 		return -1;
 
 	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
-				 num_entries * buf_size))
+				 pool_size * buf_size))
 		return -1;
 
-	tx_pool->free_map = kcalloc(num_entries, sizeof(int), GFP_KERNEL);
+	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
 	if (!tx_pool->free_map)
 		return -1;
 
-	for (i = 0; i < num_entries; i++)
+	for (i = 0; i < pool_size; i++)
 		tx_pool->free_map[i] = i;
 
 	tx_pool->consumer_index = 0;
 	tx_pool->producer_index = 0;
-	tx_pool->num_buffers = num_entries;
+	tx_pool->num_buffers = pool_size;
 	tx_pool->buf_size = buf_size;
 
 	return 0;
@@ -811,17 +811,20 @@ static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	struct device *dev = &adapter->vdev->dev;
-	int tx_subcrqs;
+	int num_pools;
+	u64 pool_size;		/* # of buffers in pool */
 	u64 buff_size;
 	int i, rc;
 
-	tx_subcrqs = adapter->num_active_tx_scrqs;
-	adapter->tx_pool = kcalloc(tx_subcrqs,
+	pool_size = adapter->req_tx_entries_per_subcrq;
+	num_pools = adapter->num_active_tx_scrqs;
+
+	adapter->tx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
 	if (!adapter->tx_pool)
 		return -1;
 
-	adapter->tso_pool = kcalloc(tx_subcrqs,
+	adapter->tso_pool = kcalloc(num_pools,
 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
 	/* To simplify release_tx_pools() ensure that ->tx_pool and
 	 * ->tso_pool are either both NULL or both non-NULL.
@@ -835,9 +838,9 @@ static int init_tx_pools(struct net_device *netdev)
 	/* Set num_active_tx_pools early. If we fail below after partial
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
-	adapter->num_active_tx_pools = tx_subcrqs;
+	adapter->num_active_tx_pools = num_pools;
 
-	for (i = 0; i < tx_subcrqs; i++) {
+	for (i = 0; i < num_pools; i++) {
 		buff_size = adapter->req_mtu + VLAN_HLEN;
 		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 
@@ -845,8 +848,7 @@ static int init_tx_pools(struct net_device *netdev)
 			i, adapter->req_tx_entries_per_subcrq, buff_size);
 
 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
-				      adapter->req_tx_entries_per_subcrq,
-				      buff_size);
+				      pool_size, buff_size);
 		if (rc) {
 			release_tx_pools(adapter);
 			return rc;
-- 
2.31.1



* [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (3 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:32   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids Sukadev Bhattiprolu
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

In init_tx_pools(), move some loop-invariant code out of the loop.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 4c6739b250df..8894afdb3cb3 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -839,11 +839,10 @@ static int init_tx_pools(struct net_device *netdev)
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
 	adapter->num_active_tx_pools = num_pools;
+	buff_size = adapter->req_mtu + VLAN_HLEN;
+	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 
 	for (i = 0; i < num_pools; i++) {
-		buff_size = adapter->req_mtu + VLAN_HLEN;
-		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-
 		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
 			i, adapter->req_tx_entries_per_subcrq, buff_size);
 
-- 
2.31.1



* [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (4 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:33   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible Sukadev Bhattiprolu
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

In a follow-on patch, we will reuse long term buffers when possible. When
doing so, we have to be careful to assign map ids properly: we can no
longer assign them sequentially, because a lower map id may be free while
the sequential counter wraps at 255 and collides with an in-use map id.

Instead, use a bitmap to track active map ids and to find a free map id.
We don't need to take locks here, since the map_id only changes during
reset, and at that time only the reset worker thread should be using the
adapter.
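
To illustrate, the allocation and free paths reduce to the following (a
sketch of the bitmap calls this patch adds; see the diff below for the
full context):

	/* Allocate: take the lowest free map_id instead of map_id++,
	 * which could wrap past 255 and collide with a live mapping.
	 */
	ltb->map_id = find_first_zero_bit(adapter->map_ids, MAX_MAP_ID);
	bitmap_set(adapter->map_ids, ltb->map_id, 1);

	/* Free: clear the bit so the map_id can be handed out again */
	bitmap_clear(adapter->map_ids, ltb->map_id, 1);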

We noticed this when analyzing an error Dany Madden ran into with the
patch set.

Reported-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 12 ++++++++----
 drivers/net/ethernet/ibm/ibmvnic.h |  3 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 8894afdb3cb3..30153a8bb5ec 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -228,8 +228,9 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 		dev_err(dev, "Couldn't alloc long term buffer\n");
 		return -ENOMEM;
 	}
-	ltb->map_id = adapter->map_id;
-	adapter->map_id++;
+	ltb->map_id = find_first_zero_bit(adapter->map_ids,
+					  MAX_MAP_ID);
+	bitmap_set(adapter->map_ids, ltb->map_id, 1);
 
 	mutex_lock(&adapter->fw_lock);
 	adapter->fw_done_rc = 0;
@@ -284,6 +285,8 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
 
 	ltb->buff = NULL;
+	/* mark this map_id free */
+	bitmap_clear(adapter->map_ids, ltb->map_id, 1);
 	ltb->map_id = 0;
 }
 
@@ -1231,8 +1234,6 @@ static int init_resources(struct ibmvnic_adapter *adapter)
 		return rc;
 	}
 
-	adapter->map_id = 1;
-
 	rc = init_napi(adapter);
 	if (rc)
 		return rc;
@@ -5553,6 +5554,9 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	adapter->vdev = dev;
 	adapter->netdev = netdev;
 	adapter->login_pending = false;
+	memset(&adapter->map_ids, 0, sizeof(adapter->map_ids));
+	/* map_ids start at 1, so ensure map_id 0 is always "in-use" */
+	bitmap_set(adapter->map_ids, 0, 1);
 
 	ether_addr_copy(adapter->mac_addr, mac_addr_p);
 	ether_addr_copy(netdev->dev_addr, adapter->mac_addr);
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 5652566818fb..e97f1aa98c05 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -979,7 +979,8 @@ struct ibmvnic_adapter {
 	u64 opt_tx_entries_per_subcrq;
 	u64 opt_rxba_entries_per_subcrq;
 	__be64 tx_rx_desc_req;
-	u8 map_id;
+#define MAX_MAP_ID	255
+	DECLARE_BITMAP(map_ids, MAX_MAP_ID);
 	u32 num_active_rx_scrqs;
 	u32 num_active_rx_pools;
 	u32 num_active_rx_napi;
-- 
2.31.1



* [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (5 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:34   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 8/9] ibmvnic: Reuse rx pools " Sukadev Bhattiprolu
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

Reuse the long term buffer during a reset as long as its size has
not changed. If the size has changed, free it and allocate a new
one of the appropriate size.

When we do this, alloc_long_term_buff() and reset_long_term_buff()
become identical. Drop reset_long_term_buff().
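
The resulting flow in alloc_long_term_buff() is roughly the following (a
condensed sketch of the code in the diff below, not a drop-in
replacement):

	if (!reuse_ltb(ltb, size))	/* i.e. !(ltb->buff && ltb->size == size) */
		free_long_term_buff(adapter, ltb);	/* no-op if ltb->buff is NULL */

	if (!ltb->buff) {
		/* allocate a new DMA buffer and pick a free map_id */
		ltb->buff = dma_alloc_coherent(dev, size, &ltb->addr, GFP_KERNEL);
		ltb->size = size;
	}

	memset(ltb->buff, 0, ltb->size);
	/* (re)send the map request to the VIOS in either case */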

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 122 ++++++++++++++---------------
 1 file changed, 59 insertions(+), 63 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 30153a8bb5ec..1bb5996c4313 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -108,6 +108,8 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter);
 static int send_query_phys_parms(struct ibmvnic_adapter *adapter);
 static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
 					 struct ibmvnic_sub_crq_queue *tx_scrq);
+static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+				struct ibmvnic_long_term_buff *ltb);
 
 struct ibmvnic_stat {
 	char name[ETH_GSTRING_LEN];
@@ -214,23 +216,62 @@ static int ibmvnic_wait_for_completion(struct ibmvnic_adapter *adapter,
 	return -ETIMEDOUT;
 }
 
+/**
+ * Reuse long term buffer unless size has changed.
+ */
+static bool reuse_ltb(struct ibmvnic_long_term_buff *ltb, int size)
+{
+	return (ltb->buff && ltb->size == size);
+}
+
+/**
+ * Allocate a long term buffer of the specified size and notify VIOS.
+ *
+ * If the given @ltb already has the correct size, reuse it. Otherwise, if
+ * it's non-NULL, free it. Then allocate a new one of the correct size.
+ * Notify the VIOS either way since we may now be working with a new VIOS.
+ *
+ * Allocating larger chunks of memory during resets, especially during LPM
+ * or in low-memory situations, can cause resets to fail/time out and the
+ * LPAR to lose connectivity. So hold onto the LTB even if we fail to talk
+ * to the VIOS, and reuse it on next open. Free it when the adapter closes.
+ */
 static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 				struct ibmvnic_long_term_buff *ltb, int size)
 {
 	struct device *dev = &adapter->vdev->dev;
 	int rc;
 
-	ltb->size = size;
-	ltb->buff = dma_alloc_coherent(dev, ltb->size, &ltb->addr,
-				       GFP_KERNEL);
+	if (!reuse_ltb(ltb, size)) {
+		dev_dbg(dev,
+			"LTB size changed from 0x%llx to 0x%x, reallocating\n",
+			 ltb->size, size);
+		free_long_term_buff(adapter, ltb);
+	}
 
-	if (!ltb->buff) {
-		dev_err(dev, "Couldn't alloc long term buffer\n");
-		return -ENOMEM;
+	if (ltb->buff) {
+		dev_dbg(dev, "Reusing LTB [map %d, size 0x%llx]\n",
+			ltb->map_id, ltb->size);
+	} else {
+		ltb->buff = dma_alloc_coherent(dev, size, &ltb->addr,
+					       GFP_KERNEL);
+		if (!ltb->buff) {
+			dev_err(dev, "Couldn't alloc long term buffer\n");
+			return -ENOMEM;
+		}
+		ltb->size = size;
+
+		ltb->map_id = find_first_zero_bit(adapter->map_ids,
+						  MAX_MAP_ID);
+		bitmap_set(adapter->map_ids, ltb->map_id, 1);
+
+		dev_dbg(dev,
+			"Allocated new LTB [map %d, size 0x%llx]\n",
+			 ltb->map_id, ltb->size);
 	}
-	ltb->map_id = find_first_zero_bit(adapter->map_ids,
-					  MAX_MAP_ID);
-	bitmap_set(adapter->map_ids, ltb->map_id, 1);
+
+	/* Ensure ltb is zeroed - especially when reusing it. */
+	memset(ltb->buff, 0, ltb->size);
 
 	mutex_lock(&adapter->fw_lock);
 	adapter->fw_done_rc = 0;
@@ -257,10 +298,7 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 	}
 	rc = 0;
 out:
-	if (rc) {
-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
-		ltb->buff = NULL;
-	}
+	/* don't free LTB on communication error - see function header */
 	mutex_unlock(&adapter->fw_lock);
 	return rc;
 }
@@ -290,43 +328,6 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 	ltb->map_id = 0;
 }
 
-static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
-				struct ibmvnic_long_term_buff *ltb)
-{
-	struct device *dev = &adapter->vdev->dev;
-	int rc;
-
-	memset(ltb->buff, 0, ltb->size);
-
-	mutex_lock(&adapter->fw_lock);
-	adapter->fw_done_rc = 0;
-
-	reinit_completion(&adapter->fw_done);
-	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
-	if (rc) {
-		mutex_unlock(&adapter->fw_lock);
-		return rc;
-	}
-
-	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
-	if (rc) {
-		dev_info(dev,
-			 "Reset failed, long term map request timed out or aborted\n");
-		mutex_unlock(&adapter->fw_lock);
-		return rc;
-	}
-
-	if (adapter->fw_done_rc) {
-		dev_info(dev,
-			 "Reset failed, attempting to free and reallocate buffer\n");
-		free_long_term_buff(adapter, ltb);
-		mutex_unlock(&adapter->fw_lock);
-		return alloc_long_term_buff(adapter, ltb, ltb->size);
-	}
-	mutex_unlock(&adapter->fw_lock);
-	return 0;
-}
-
 static void deactivate_rx_pools(struct ibmvnic_adapter *adapter)
 {
 	int i;
@@ -548,18 +549,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
 
 		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
 
-		if (rx_pool->buff_size != buff_size) {
-			free_long_term_buff(adapter, &rx_pool->long_term_buff);
-			rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-			rc = alloc_long_term_buff(adapter,
-						  &rx_pool->long_term_buff,
-						  rx_pool->size *
-						  rx_pool->buff_size);
-		} else {
-			rc = reset_long_term_buff(adapter,
-						  &rx_pool->long_term_buff);
-		}
-
+		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
+		rc = alloc_long_term_buff(adapter,
+					  &rx_pool->long_term_buff,
+					  rx_pool->size * rx_pool->buff_size);
 		if (rc)
 			return rc;
 
@@ -692,9 +685,12 @@ static int init_rx_pools(struct net_device *netdev)
 static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
 			     struct ibmvnic_tx_pool *tx_pool)
 {
+	struct ibmvnic_long_term_buff *ltb;
 	int rc, i;
 
-	rc = reset_long_term_buff(adapter, &tx_pool->long_term_buff);
+	ltb = &tx_pool->long_term_buff;
+
+	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
 	if (rc)
 		return rc;
 
-- 
2.31.1



* [PATCH net-next 8/9] ibmvnic: Reuse rx pools when possible
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (6 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:35   ` Dany Madden
  2021-09-01  0:08 ` [PATCH net-next 9/9] ibmvnic: Reuse tx " Sukadev Bhattiprolu
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

Rather than releasing the rx pools and reallocating them on every reset,
reuse them unless the pool parameters (number of pools, size of each pool,
or size of each buffer in a pool) have changed.

If the pool parameters changed, then release the old pools (if any)
and allocate new ones.

Specifically release rx pools, if:
	- adapter is removed,
	- pool parameters change during reset,
	- we encounter an error when opening the adapter in response
	  to a user request (in ibmvnic_open()).

and don't release them:
	- in __ibmvnic_close() or
	- on errors in __ibmvnic_open()

in the hope that we can reuse them on the next reset.

With these, reset_rx_pools() can be dropped because its optimization is
now included in init_rx_pools() itself.

cleanup_rx_pools() releases all the skbs associated with the pool and
is called from ibmvnic_cleanup(), which is called on every reset. Since
we want to reuse skbs across resets, move cleanup_rx_pools() out of
ibmvnic_cleanup() and call it only when user closes the adapter.

Add two new adapter fields, ->prev_rx_buf_sz and ->prev_rx_pool_size, to
keep track of the previous values and use them to decide whether to reuse
or reallocate the pools.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 183 +++++++++++++++++++----------
 drivers/net/ethernet/ibm/ibmvnic.h |   3 +
 2 files changed, 122 insertions(+), 64 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 1bb5996c4313..ebd525b6fc87 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -368,20 +368,27 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 	 * be 0.
 	 */
 	for (i = ind_bufp->index; i < count; ++i) {
-		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
+		index = pool->free_map[pool->next_free];
+
+		/* We may be reusing the skb from earlier resets. Allocate
+		 * only if necessary. But since the LTB may have changed
+		 * during reset (see init_rx_pools()), update LTB below
+		 * even if reusing skb.
+		 */
+		skb = pool->rx_buff[index].skb;
 		if (!skb) {
-			dev_err(dev, "Couldn't replenish rx buff\n");
-			adapter->replenish_no_mem++;
-			break;
+			skb = netdev_alloc_skb(adapter->netdev,
+					       pool->buff_size);
+			if (!skb) {
+				dev_err(dev, "Couldn't replenish rx buff\n");
+				adapter->replenish_no_mem++;
+				break;
+			}
 		}
 
-		index = pool->free_map[pool->next_free];
 		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
 		pool->next_free = (pool->next_free + 1) % pool->size;
 
-		if (pool->rx_buff[index].skb)
-			dev_err(dev, "Inconsistent free_map!\n");
-
 		/* Copy the skb to the long term mapped DMA buffer */
 		offset = index * pool->buff_size;
 		dst = pool->long_term_buff.buff + offset;
@@ -532,45 +539,6 @@ static int init_stats_token(struct ibmvnic_adapter *adapter)
 	return 0;
 }
 
-static int reset_rx_pools(struct ibmvnic_adapter *adapter)
-{
-	struct ibmvnic_rx_pool *rx_pool;
-	u64 buff_size;
-	int rx_scrqs;
-	int i, j, rc;
-
-	if (!adapter->rx_pool)
-		return -1;
-
-	buff_size = adapter->cur_rx_buf_sz;
-	rx_scrqs = adapter->num_active_rx_pools;
-	for (i = 0; i < rx_scrqs; i++) {
-		rx_pool = &adapter->rx_pool[i];
-
-		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
-
-		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-		rc = alloc_long_term_buff(adapter,
-					  &rx_pool->long_term_buff,
-					  rx_pool->size * rx_pool->buff_size);
-		if (rc)
-			return rc;
-
-		for (j = 0; j < rx_pool->size; j++)
-			rx_pool->free_map[j] = j;
-
-		memset(rx_pool->rx_buff, 0,
-		       rx_pool->size * sizeof(struct ibmvnic_rx_buff));
-
-		atomic_set(&rx_pool->available, 0);
-		rx_pool->next_alloc = 0;
-		rx_pool->next_free = 0;
-		rx_pool->active = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Release any rx_pools attached to @adapter.
  * Safe to call this multiple times - even if no pools are attached.
@@ -589,6 +557,7 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
 		netdev_dbg(adapter->netdev, "Releasing rx_pool[%d]\n", i);
 
 		kfree(rx_pool->free_map);
+
 		free_long_term_buff(adapter, &rx_pool->long_term_buff);
 
 		if (!rx_pool->rx_buff)
@@ -607,8 +576,53 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
 	kfree(adapter->rx_pool);
 	adapter->rx_pool = NULL;
 	adapter->num_active_rx_pools = 0;
+	adapter->prev_rx_pool_size = 0;
+}
+
+/**
+ * Return true if we can reuse the existing rx pools.
+ * NOTE: This assumes that all pools have the same number of buffers
+ *       which is the case currently. If that changes, we must fix this.
+ */
+static bool reuse_rx_pools(struct ibmvnic_adapter *adapter)
+{
+	u64 old_num_pools, new_num_pools;
+	u64 old_pool_size, new_pool_size;
+	u64 old_buff_size, new_buff_size;
+
+	if (!adapter->rx_pool)
+		return false;
+
+	old_num_pools = adapter->num_active_rx_pools;
+	new_num_pools = adapter->req_rx_queues;
+
+	old_pool_size = adapter->prev_rx_pool_size;
+	new_pool_size = adapter->req_rx_add_entries_per_subcrq;
+
+	old_buff_size = adapter->prev_rx_buf_sz;
+	new_buff_size = adapter->cur_rx_buf_sz;
+
+	/* Require buff size to be exactly same for now */
+	if (old_buff_size != new_buff_size)
+		return false;
+
+	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
+		return true;
+
+	if (old_num_pools < adapter->min_rx_queues ||
+	    old_num_pools > adapter->max_rx_queues ||
+	    old_pool_size < adapter->min_rx_add_entries_per_subcrq ||
+	    old_pool_size > adapter->max_rx_add_entries_per_subcrq)
+		return false;
+
+	return true;
 }
 
+/**
+ * Initialize the set of receiver pools in the adapter. Reuse existing
+ * pools if possible. Otherwise allocate a new set of pools before
+ * initializing them.
+ */
 static int init_rx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
@@ -619,10 +633,18 @@ static int init_rx_pools(struct net_device *netdev)
 	u64 buff_size;
 	int i, j;
 
-	num_pools = adapter->num_active_rx_scrqs;
 	pool_size = adapter->req_rx_add_entries_per_subcrq;
+	num_pools = adapter->req_rx_queues;
 	buff_size = adapter->cur_rx_buf_sz;
 
+	if (reuse_rx_pools(adapter)) {
+		dev_dbg(dev, "Reusing rx pools\n");
+		goto update_ltb;
+	}
+
+	/* Allocate/populate the pools. */
+	release_rx_pools(adapter);
+
 	adapter->rx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_rx_pool),
 				   GFP_KERNEL);
@@ -646,14 +668,12 @@ static int init_rx_pools(struct net_device *netdev)
 		rx_pool->size = pool_size;
 		rx_pool->index = i;
 		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-		rx_pool->active = 1;
 
 		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
 					    GFP_KERNEL);
 		if (!rx_pool->free_map) {
 			dev_err(dev, "Couldn't alloc free_map %d\n", i);
-			release_rx_pools(adapter);
-			return -1;
+			goto out_release;
 		}
 
 		rx_pool->rx_buff = kcalloc(rx_pool->size,
@@ -661,25 +681,58 @@ static int init_rx_pools(struct net_device *netdev)
 					   GFP_KERNEL);
 		if (!rx_pool->rx_buff) {
 			dev_err(dev, "Couldn't alloc rx buffers\n");
-			release_rx_pools(adapter);
-			return -1;
+			goto out_release;
 		}
+	}
+
+	adapter->prev_rx_pool_size = pool_size;
+	adapter->prev_rx_buf_sz = adapter->cur_rx_buf_sz;
+
+update_ltb:
+	for (i = 0; i < num_pools; i++) {
+		rx_pool = &adapter->rx_pool[i];
+		dev_dbg(dev, "Updating LTB for rx pool %d [%d, %d]\n",
+			i, rx_pool->size, rx_pool->buff_size);
 
 		if (alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
-					 rx_pool->size * rx_pool->buff_size)) {
-			release_rx_pools(adapter);
-			return -1;
-		}
+					 rx_pool->size * rx_pool->buff_size))
+			goto out;
+
+		for (j = 0; j < rx_pool->size; ++j) {
+			struct ibmvnic_rx_buff *rx_buff;
 
-		for (j = 0; j < rx_pool->size; ++j)
 			rx_pool->free_map[j] = j;
 
+			/* NOTE: Don't clear rx_buff->skb here - will leak
+			 * memory! replenish_rx_pool() will reuse skbs or
+			 * allocate as necessary.
+			 */
+			rx_buff = &rx_pool->rx_buff[j];
+			rx_buff->dma = 0;
+			rx_buff->data = 0;
+			rx_buff->size = 0;
+			rx_buff->pool_index = 0;
+		}
+
+		/* Mark pool "empty" so replenish_rx_pool() will
+		 * update the LTB info for each buffer
+		 */
 		atomic_set(&rx_pool->available, 0);
 		rx_pool->next_alloc = 0;
 		rx_pool->next_free = 0;
+		/* replenish_rx_pool() may have called deactivate_rx_pools()
+		 * on failover. Ensure pool is active now.
+		 */
+		rx_pool->active = 1;
 	}
-
 	return 0;
+out_release:
+	release_rx_pools(adapter);
+out:
+	/* We failed to allocate one or more LTBs or map them on the VIOS.
+	 * Hold onto the pools and any LTBs that we did allocate/map.
+	 */
+	return -1;
 }
 
 static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
@@ -1053,7 +1106,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
 	release_vpd_data(adapter);
 
 	release_tx_pools(adapter);
-	release_rx_pools(adapter);
 
 	release_napi(adapter);
 	release_login_buffer(adapter);
@@ -1326,6 +1378,7 @@ static int ibmvnic_open(struct net_device *netdev)
 		if (rc) {
 			netdev_err(netdev, "failed to initialize resources\n");
 			release_resources(adapter);
+			release_rx_pools(adapter);
 			goto out;
 		}
 	}
@@ -1455,7 +1508,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)
 	ibmvnic_napi_disable(adapter);
 	ibmvnic_disable_irqs(adapter);
 
-	clean_rx_pools(adapter);
 	clean_tx_pools(adapter);
 }
 
@@ -1490,6 +1542,7 @@ static int ibmvnic_close(struct net_device *netdev)
 
 	rc = __ibmvnic_close(netdev);
 	ibmvnic_cleanup(netdev);
+	clean_rx_pools(adapter);
 
 	return rc;
 }
@@ -2218,7 +2271,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 		    !adapter->rx_pool ||
 		    !adapter->tso_pool ||
 		    !adapter->tx_pool) {
-			release_rx_pools(adapter);
 			release_tx_pools(adapter);
 			release_napi(adapter);
 			release_vpd_data(adapter);
@@ -2235,9 +2287,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 				goto out;
 			}
 
-			rc = reset_rx_pools(adapter);
+			rc = init_rx_pools(netdev);
 			if (rc) {
-				netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
+				netdev_dbg(netdev,
+					   "init rx pools failed (%d)\n",
 					   rc);
 				goto out;
 			}
@@ -5573,6 +5626,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	init_completion(&adapter->reset_done);
 	init_completion(&adapter->stats_done);
 	clear_bit(0, &adapter->resetting);
+	adapter->prev_rx_buf_sz = 0;
 
 	init_success = false;
 	do {
@@ -5673,6 +5727,7 @@ static void ibmvnic_remove(struct vio_dev *dev)
 	unregister_netdevice(netdev);
 
 	release_resources(adapter);
+	release_rx_pools(adapter);
 	release_sub_crqs(adapter, 1);
 	release_crq_queue(adapter);
 
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index e97f1aa98c05..b73a1b812368 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -986,7 +986,10 @@ struct ibmvnic_adapter {
 	u32 num_active_rx_napi;
 	u32 num_active_tx_scrqs;
 	u32 num_active_tx_pools;
+
+	u32 prev_rx_pool_size;
 	u32 cur_rx_buf_sz;
+	u32 prev_rx_buf_sz;
 
 	struct tasklet_struct tasklet;
 	enum vnic_state state;
-- 
2.31.1



* [PATCH net-next 9/9] ibmvnic: Reuse tx pools when possible
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (7 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 8/9] ibmvnic: Reuse rx pools " Sukadev Bhattiprolu
@ 2021-09-01  0:08 ` Sukadev Bhattiprolu
  2021-09-01  1:36   ` Dany Madden
  2021-09-01  1:21 ` [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Rick Lindsley
  2021-09-01  2:35 ` Jakub Kicinski
  10 siblings, 1 reply; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01  0:08 UTC (permalink / raw)
  To: netdev; +Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

Rather than releasing the tx pools on every close and reallocating
them on open, reuse the tx pools unless the pool parameters (number
of pools, size of each pool or size of each buffer in a pool) have
changed.

If the pool parameters changed, then release the old pools (if
any) and allocate new ones.

Specifically release tx pools, if:
	- adapter is removed,
	- pool parameters change during reset,
	- we encounter an error when opening the adapter in response
	  to a user request (in ibmvnic_open()).

and don't release them:
	- in __ibmvnic_close() or
	- on errors in __ibmvnic_open()

in the hope that we can reuse them during this or next reset.

With these changes reset_tx_pools() can be dropped because its
optimization is now included in init_tx_pools() itself.

cleanup_tx_pools() releases all the skbs associated with the pool and
is called from ibmvnic_cleanup(), which is called on every reset. Since
we want to reuse skbs across resets, move cleanup_tx_pools() out of
ibmvnic_cleanup() and call it only when user closes the adapter.

Add two new adapter fields, ->prev_mtu and ->prev_tx_pool_size, to track
the previous values and use them to decide whether to reuse or reallocate
the pools.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 201 +++++++++++++++++++----------
 drivers/net/ethernet/ibm/ibmvnic.h |   2 +
 2 files changed, 133 insertions(+), 70 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index ebd525b6fc87..8c422a717e88 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -735,53 +735,6 @@ static int init_rx_pools(struct net_device *netdev)
 	return -1;
 }
 
-static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
-			     struct ibmvnic_tx_pool *tx_pool)
-{
-	struct ibmvnic_long_term_buff *ltb;
-	int rc, i;
-
-	ltb = &tx_pool->long_term_buff;
-
-	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
-	if (rc)
-		return rc;
-
-	memset(tx_pool->tx_buff, 0,
-	       tx_pool->num_buffers *
-	       sizeof(struct ibmvnic_tx_buff));
-
-	for (i = 0; i < tx_pool->num_buffers; i++)
-		tx_pool->free_map[i] = i;
-
-	tx_pool->consumer_index = 0;
-	tx_pool->producer_index = 0;
-
-	return 0;
-}
-
-static int reset_tx_pools(struct ibmvnic_adapter *adapter)
-{
-	int tx_scrqs;
-	int i, rc;
-
-	if (!adapter->tx_pool)
-		return -1;
-
-	tx_scrqs = adapter->num_active_tx_pools;
-	for (i = 0; i < tx_scrqs; i++) {
-		ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
-		rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
-		if (rc)
-			return rc;
-		rc = reset_one_tx_pool(adapter, &adapter->tx_pool[i]);
-		if (rc)
-			return rc;
-	}
-
-	return 0;
-}
-
 static void release_vpd_data(struct ibmvnic_adapter *adapter)
 {
 	if (!adapter->vpd)
@@ -825,13 +778,13 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
 	kfree(adapter->tso_pool);
 	adapter->tso_pool = NULL;
 	adapter->num_active_tx_pools = 0;
+	adapter->prev_tx_pool_size = 0;
 }
 
 static int init_one_tx_pool(struct net_device *netdev,
 			    struct ibmvnic_tx_pool *tx_pool,
 			    int pool_size, int buf_size)
 {
-	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	int i;
 
 	tx_pool->tx_buff = kcalloc(pool_size,
@@ -840,13 +793,12 @@ static int init_one_tx_pool(struct net_device *netdev,
 	if (!tx_pool->tx_buff)
 		return -1;
 
-	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
-				 pool_size * buf_size))
-		return -1;
-
 	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
-	if (!tx_pool->free_map)
+	if (!tx_pool->free_map) {
+		kfree(tx_pool->tx_buff);
+		tx_pool->tx_buff = NULL;
 		return -1;
+	}
 
 	for (i = 0; i < pool_size; i++)
 		tx_pool->free_map[i] = i;
@@ -859,6 +811,48 @@ static int init_one_tx_pool(struct net_device *netdev,
 	return 0;
 }
 
+/**
+ * Return true if we can reuse the existing tx pools, false otherwise
+ * NOTE: This assumes that all pools have the same number of buffers
+ *       which is the case currently. If that changes, we must fix this.
+ */
+static bool reuse_tx_pools(struct ibmvnic_adapter *adapter)
+{
+	u64 old_num_pools, new_num_pools;
+	u64 old_pool_size, new_pool_size;
+	u64 old_mtu, new_mtu;
+
+	if (!adapter->tx_pool)
+		return false;
+
+	old_num_pools = adapter->num_active_tx_pools;
+	new_num_pools = adapter->num_active_tx_scrqs;
+	old_pool_size = adapter->prev_tx_pool_size;
+	new_pool_size = adapter->req_tx_entries_per_subcrq;
+	old_mtu = adapter->prev_mtu;
+	new_mtu = adapter->req_mtu;
+
+	/* Require MTU to be exactly same to reuse pools for now */
+	if (old_mtu != new_mtu)
+		return false;
+
+	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
+		return true;
+
+	if (old_num_pools < adapter->min_tx_queues ||
+	    old_num_pools > adapter->max_tx_queues ||
+	    old_pool_size < adapter->min_tx_entries_per_subcrq ||
+	    old_pool_size > adapter->max_tx_entries_per_subcrq)
+		return false;
+
+	return true;
+}
+
+/**
+ * Initialize the set of transmit pools in the adapter. Reuse existing
+ * pools if possible. Otherwise allocate a new set of pools before
+ * initializing them.
+ */
 static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
@@ -866,7 +860,21 @@ static int init_tx_pools(struct net_device *netdev)
 	int num_pools;
 	u64 pool_size;		/* # of buffers in pool */
 	u64 buff_size;
-	int i, rc;
+	int i, j, rc;
+
+	num_pools = adapter->req_tx_queues;
+
+	/* We must notify the VIOS about the LTB on all resets - but we only
+	 * need to alloc/populate pools if either the number of buffers or
+	 * size of each buffer in the pool has changed.
+	 */
+	if (reuse_tx_pools(adapter)) {
+		netdev_dbg(netdev, "Reusing tx pools\n");
+		goto update_ltb;
+	}
+
+	/* Allocate/populate the pools. */
+	release_tx_pools(adapter);
 
 	pool_size = adapter->req_tx_entries_per_subcrq;
 	num_pools = adapter->num_active_tx_scrqs;
@@ -891,6 +899,7 @@ static int init_tx_pools(struct net_device *netdev)
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
 	adapter->num_active_tx_pools = num_pools;
+
 	buff_size = adapter->req_mtu + VLAN_HLEN;
 	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 
@@ -900,21 +909,73 @@ static int init_tx_pools(struct net_device *netdev)
 
 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
 				      pool_size, buff_size);
-		if (rc) {
-			release_tx_pools(adapter);
-			return rc;
-		}
+		if (rc)
+			goto out_release;
 
 		rc = init_one_tx_pool(netdev, &adapter->tso_pool[i],
 				      IBMVNIC_TSO_BUFS,
 				      IBMVNIC_TSO_BUF_SZ);
-		if (rc) {
-			release_tx_pools(adapter);
-			return rc;
-		}
+		if (rc)
+			goto out_release;
+	}
+
+	adapter->prev_tx_pool_size = pool_size;
+	adapter->prev_mtu = adapter->req_mtu;
+
+update_ltb:
+	/* NOTE: All tx_pools have the same number of buffers (which is
+	 *       the same as pool_size). All tso_pools have IBMVNIC_TSO_BUFS
+	 *       buffers (see the calls to init_one_tx_pool() for these).
+	 *       For consistency, we use tx_pool->num_buffers and
+	 *       tso_pool->num_buffers below.
+	 */
+	rc = -1;
+	for (i = 0; i < num_pools; i++) {
+		struct ibmvnic_tx_pool *tso_pool;
+		struct ibmvnic_tx_pool *tx_pool;
+		u32 ltb_size;
+
+		tx_pool = &adapter->tx_pool[i];
+		ltb_size = tx_pool->num_buffers * tx_pool->buf_size;
+		if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
+					 ltb_size))
+			goto out;
+
+		dev_dbg(dev, "Updated LTB for tx pool %d [%p, %d, %d]\n",
+			i, tx_pool->long_term_buff.buff,
+			tx_pool->num_buffers, tx_pool->buf_size);
+
+		tx_pool->consumer_index = 0;
+		tx_pool->producer_index = 0;
+
+		for (j = 0; j < tx_pool->num_buffers; j++)
+			tx_pool->free_map[j] = j;
+
+		tso_pool = &adapter->tso_pool[i];
+		ltb_size = tso_pool->num_buffers * tso_pool->buf_size;
+		if (alloc_long_term_buff(adapter, &tso_pool->long_term_buff,
+					 ltb_size))
+			goto out;
+
+		dev_dbg(dev, "Updated LTB for tso pool %d [%p, %d, %d]\n",
+			i, tso_pool->long_term_buff.buff,
+			tso_pool->num_buffers, tso_pool->buf_size);
+
+		tso_pool->consumer_index = 0;
+		tso_pool->producer_index = 0;
+
+		for (j = 0; j < tso_pool->num_buffers; j++)
+			tso_pool->free_map[j] = j;
 	}
 
 	return 0;
+out_release:
+	release_tx_pools(adapter);
+out:
+	/* We failed to allocate one or more LTBs or map them on the VIOS.
+	 * Hold onto the pools and any LTBs that we did allocate/map.
+	 */
+	return rc;
 }
 
 static void ibmvnic_napi_enable(struct ibmvnic_adapter *adapter)
@@ -1105,8 +1166,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
 {
 	release_vpd_data(adapter);
 
-	release_tx_pools(adapter);
-
 	release_napi(adapter);
 	release_login_buffer(adapter);
 	release_login_rsp_buffer(adapter);
@@ -1379,6 +1438,7 @@ static int ibmvnic_open(struct net_device *netdev)
 			netdev_err(netdev, "failed to initialize resources\n");
 			release_resources(adapter);
 			release_rx_pools(adapter);
+			release_tx_pools(adapter);
 			goto out;
 		}
 	}
@@ -1507,8 +1567,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)
 
 	ibmvnic_napi_disable(adapter);
 	ibmvnic_disable_irqs(adapter);
-
-	clean_tx_pools(adapter);
 }
 
 static int __ibmvnic_close(struct net_device *netdev)
@@ -1543,6 +1601,7 @@ static int ibmvnic_close(struct net_device *netdev)
 	rc = __ibmvnic_close(netdev);
 	ibmvnic_cleanup(netdev);
 	clean_rx_pools(adapter);
+	clean_tx_pools(adapter);
 
 	return rc;
 }
@@ -2119,9 +2178,9 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason)
 static int do_reset(struct ibmvnic_adapter *adapter,
 		    struct ibmvnic_rwi *rwi, u32 reset_state)
 {
+	struct net_device *netdev = adapter->netdev;
 	u64 old_num_rx_queues, old_num_tx_queues;
 	u64 old_num_rx_slots, old_num_tx_slots;
-	struct net_device *netdev = adapter->netdev;
 	int rc;
 
 	netdev_dbg(adapter->netdev,
@@ -2271,7 +2330,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 		    !adapter->rx_pool ||
 		    !adapter->tso_pool ||
 		    !adapter->tx_pool) {
-			release_tx_pools(adapter);
 			release_napi(adapter);
 			release_vpd_data(adapter);
 
@@ -2280,9 +2338,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 				goto out;
 
 		} else {
-			rc = reset_tx_pools(adapter);
+			rc = init_tx_pools(netdev);
 			if (rc) {
-				netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
+				netdev_dbg(netdev,
+					   "init tx pools failed (%d)\n",
 					   rc);
 				goto out;
 			}
@@ -5627,6 +5686,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	init_completion(&adapter->stats_done);
 	clear_bit(0, &adapter->resetting);
 	adapter->prev_rx_buf_sz = 0;
+	adapter->prev_mtu = 0;
 
 	init_success = false;
 	do {
@@ -5728,6 +5788,7 @@ static void ibmvnic_remove(struct vio_dev *dev)
 
 	release_resources(adapter);
 	release_rx_pools(adapter);
+	release_tx_pools(adapter);
 	release_sub_crqs(adapter, 1);
 	release_crq_queue(adapter);
 
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index b73a1b812368..b8e42f67d897 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -967,6 +967,7 @@ struct ibmvnic_adapter {
 	u64 min_mtu;
 	u64 max_mtu;
 	u64 req_mtu;
+	u64 prev_mtu;
 	u64 max_multicast_filters;
 	u64 vlan_header_insertion;
 	u64 rx_vlan_header_insertion;
@@ -988,6 +989,7 @@ struct ibmvnic_adapter {
 	u32 num_active_tx_pools;
 
 	u32 prev_rx_pool_size;
+	u32 prev_tx_pool_size;
 	u32 cur_rx_buf_sz;
 	u32 prev_rx_buf_sz;
 
-- 
2.26.2



* Re: [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (8 preceding siblings ...)
  2021-09-01  0:08 ` [PATCH net-next 9/9] ibmvnic: Reuse tx " Sukadev Bhattiprolu
@ 2021-09-01  1:21 ` Rick Lindsley
  2021-09-01  2:35 ` Jakub Kicinski
  10 siblings, 0 replies; 24+ messages in thread
From: Rick Lindsley @ 2021-09-01  1:21 UTC (permalink / raw)
  To: Sukadev Bhattiprolu, netdev
  Cc: Brian King, cforno12, Dany Madden, Rick Lindsley

On 8/31/21 5:08 PM, Sukadev Bhattiprolu wrote:
> It can take a long time to free and reallocate the rx and tx pools and the
> long term buffers (LTBs) during each reset of the VNIC. This is especially
> true when the partition (LPAR) is heavily loaded and going through a Logical
> Partition Migration (LPM). The drawn-out reset causes the LPAR to lose
> connectivity for extended periods of time, resulting in "RMC connection"
> errors and the LPM failing.
> 
> Worse, the lost connectivity can trigger a failover during the LPM. At that
> point, the vnic driver releases even the resources it has already allocated
> and starts over.
> 
> As long as the resources we have already allocated are valid/applicable,
> we might as well hold on to them while trying to allocate the remaining
> resources. This patch set attempts to reuse the resources previously
> allocated as long as they are valid. It seems to vastly improve the
> time taken for the vnic reset. We have also not seen any RMC connection
> issues during our testing with this patch set.
> 
> If the backing devices for a vnic adapter are not "matched" (see "pool
> parameters" in patches 8 and 9), it is possible that we will still free
> all the resources and reallocate them. If that becomes a common problem,
> we will have to address it separately.

We have spent a good amount of time going over this. Thanks for all the work.

Reviewed-by: Rick Lindsley <ricklind@linux.vnet.ibm.com>


* Re: [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool()
  2021-09-01  0:08 ` [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool() Sukadev Bhattiprolu
@ 2021-09-01  1:26   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:26 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> For better readability, consolidate related code in replenish_rx_pool()
> and add some comments.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index a775c69e4fd7..e8b1231be485 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -371,6 +371,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
>  		}
> 
>  		index = pool->free_map[pool->next_free];
> +		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
> +		pool->next_free = (pool->next_free + 1) % pool->size;
> 
>  		if (pool->rx_buff[index].skb)
>  			dev_err(dev, "Inconsistent free_map!\n");
> @@ -380,14 +382,15 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
>  		dst = pool->long_term_buff.buff + offset;
>  		memset(dst, 0, pool->buff_size);
>  		dma_addr = pool->long_term_buff.addr + offset;
> -		pool->rx_buff[index].data = dst;
> 
> -		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
> +		/* add the skb to an rx_buff in the pool */
> +		pool->rx_buff[index].data = dst;
>  		pool->rx_buff[index].dma = dma_addr;
>  		pool->rx_buff[index].skb = skb;
>  		pool->rx_buff[index].pool_index = pool->index;
>  		pool->rx_buff[index].size = pool->buff_size;
> 
> +		/* queue the rx_buff for the next send_subcrq_indirect */
>  		sub_crq = &ind_bufp->indir_arr[ind_bufp->index++];
>  		memset(sub_crq, 0, sizeof(*sub_crq));
>  		sub_crq->rx_add.first = IBMVNIC_CRQ_CMD;
> @@ -405,7 +408,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
>  		shift = 8;
>  #endif
>  		sub_crq->rx_add.len = cpu_to_be32(pool->buff_size << shift);
> -		pool->next_free = (pool->next_free + 1) % pool->size;
> +
> +		/* if send_subcrq_indirect queue is full, flush to VIOS */
>  		if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS ||
>  		    i == count - 1) {
>  			lpar_rc =

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages
  2021-09-01  0:08 ` [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages Sukadev Bhattiprolu
@ 2021-09-01  1:28   ` Dany Madden
  2021-09-01  8:58     ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:28 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> Add/update some comments/function headers and fix up some messages.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 40 +++++++++++++++++++++++++-----
>  1 file changed, 34 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index e8b1231be485..911315b10731 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -243,14 +243,13 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
> 
>  	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
>  	if (rc) {
> -		dev_err(dev,
> -			"Long term map request aborted or timed out,rc = %d\n",
> +		dev_err(dev, "LTB map request aborted or timed out, rc = %d\n",
>  			rc);
>  		goto out;
>  	}
> 
>  	if (adapter->fw_done_rc) {
> -		dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
> +		dev_err(dev, "Couldn't map LTB, rc = %d\n",
>  			adapter->fw_done_rc);
>  		rc = -1;
>  		goto out;
> @@ -281,7 +280,9 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
>  	    adapter->reset_reason != VNIC_RESET_MOBILITY &&
>  	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
>  		send_request_unmap(adapter, ltb->map_id);
> +
>  	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
> +
>  	ltb->buff = NULL;
>  	ltb->map_id = 0;
>  }
> @@ -574,6 +575,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
>  	return 0;
>  }
> 
> +/**
> + * Release any rx_pools attached to @adapter.
> + * Safe to call this multiple times - even if no pools are attached.
> + */
>  static void release_rx_pools(struct ibmvnic_adapter *adapter)
>  {
>  	struct ibmvnic_rx_pool *rx_pool;
> @@ -628,6 +633,9 @@ static int init_rx_pools(struct net_device *netdev)
>  		return -1;
>  	}
> 
> +	/* Set num_active_rx_pools early. If we fail below after partial
> +	 * allocation, release_rx_pools() will know how many to look for.
> +	 */
>  	adapter->num_active_rx_pools = rxadd_subcrqs;
> 
>  	for (i = 0; i < rxadd_subcrqs; i++) {
> @@ -646,6 +654,7 @@ static int init_rx_pools(struct net_device *netdev)
>  		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
>  					    GFP_KERNEL);
>  		if (!rx_pool->free_map) {
> +			dev_err(dev, "Couldn't alloc free_map %d\n", i);
>  			release_rx_pools(adapter);
>  			return -1;
>  		}
> @@ -739,10 +748,17 @@ static void release_one_tx_pool(struct ibmvnic_adapter *adapter,
>  	free_long_term_buff(adapter, &tx_pool->long_term_buff);
>  }
> 
> +/**
> + * Release any tx and tso pools attached to @adapter.
> + * Safe to call this multiple times - even if no pools are attached.
> + */
>  static void release_tx_pools(struct ibmvnic_adapter *adapter)
>  {
>  	int i;
> 
> +	/* init_tx_pools() ensures that ->tx_pool and ->tso_pool are
> +	 * both NULL or both non-NULL. So we only need to check one.
> +	 */
>  	if (!adapter->tx_pool)
>  		return;
> 
> @@ -793,6 +809,7 @@ static int init_one_tx_pool(struct net_device *netdev,
>  static int init_tx_pools(struct net_device *netdev)
>  {
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
> +	struct device *dev = &adapter->vdev->dev;
>  	int tx_subcrqs;
>  	u64 buff_size;
>  	int i, rc;
> @@ -805,17 +822,27 @@ static int init_tx_pools(struct net_device *netdev)
> 
>  	adapter->tso_pool = kcalloc(tx_subcrqs,
>  				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
> +	/* To simplify release_tx_pools() ensure that ->tx_pool and
> +	 * ->tso_pool are either both NULL or both non-NULL.
> +	 */
>  	if (!adapter->tso_pool) {
>  		kfree(adapter->tx_pool);
>  		adapter->tx_pool = NULL;
>  		return -1;
>  	}
> 
> +	/* Set num_active_tx_pools early. If we fail below after partial
> +	 * allocation, release_tx_pools() will know how many to look for.
> +	 */
>  	adapter->num_active_tx_pools = tx_subcrqs;
> 
>  	for (i = 0; i < tx_subcrqs; i++) {
>  		buff_size = adapter->req_mtu + VLAN_HLEN;
>  		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> +
> +		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
> +			i, adapter->req_tx_entries_per_subcrq, buff_size);
> +
>  		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
>  				      adapter->req_tx_entries_per_subcrq,
>  				      buff_size);
> @@ -4774,9 +4801,10 @@ static void handle_query_map_rsp(union ibmvnic_crq *crq,
>  		dev_err(dev, "Error %ld in QUERY_MAP_RSP\n", rc);
>  		return;
>  	}
> -	netdev_dbg(netdev, "page_size = %d\ntot_pages = %d\nfree_pages = %d\n",
> -		   crq->query_map_rsp.page_size, crq->query_map_rsp.tot_pages,
> -		   crq->query_map_rsp.free_pages);
> +	netdev_dbg(netdev, "page_size = %d\ntot_pages = %u\nfree_pages = %u\n",
> +		   crq->query_map_rsp.page_size,
> +		   __be32_to_cpu(crq->query_map_rsp.tot_pages),
> +		   __be32_to_cpu(crq->query_map_rsp.free_pages));
>  }
> 
>  static void handle_query_cap_rsp(union ibmvnic_crq *crq,

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools
  2021-09-01  0:08 ` [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools Sukadev Bhattiprolu
@ 2021-09-01  1:28   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:28 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> To make the code more readable, use/rename some local variables.
> Basically we have a set of pools, num_pools. Each pool has a set of
> buffers, pool_size and each buffer is of size buff_size.
> 
> pool_size is a bit ambiguous (whether size in bytes or buffers). Add
> a comment in the header file to make it explicit.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 17 +++++++++--------
>  drivers/net/ethernet/ibm/ibmvnic.h |  2 +-
>  2 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 911315b10731..a611bd3f2539 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -618,14 +618,16 @@ static int init_rx_pools(struct net_device *netdev)
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
>  	struct device *dev = &adapter->vdev->dev;
>  	struct ibmvnic_rx_pool *rx_pool;
> -	int rxadd_subcrqs;
> +	u64 num_pools;
> +	u64 pool_size;		/* # of buffers in one pool */
>  	u64 buff_size;
>  	int i, j;
> 
> -	rxadd_subcrqs = adapter->num_active_rx_scrqs;
> +	num_pools = adapter->num_active_rx_scrqs;
> +	pool_size = adapter->req_rx_add_entries_per_subcrq;
>  	buff_size = adapter->cur_rx_buf_sz;
> 
> -	adapter->rx_pool = kcalloc(rxadd_subcrqs,
> +	adapter->rx_pool = kcalloc(num_pools,
>  				   sizeof(struct ibmvnic_rx_pool),
>  				   GFP_KERNEL);
>  	if (!adapter->rx_pool) {
> @@ -636,17 +638,16 @@ static int init_rx_pools(struct net_device *netdev)
>  	/* Set num_active_rx_pools early. If we fail below after partial
>  	 * allocation, release_rx_pools() will know how many to look for.
>  	 */
> -	adapter->num_active_rx_pools = rxadd_subcrqs;
> +	adapter->num_active_rx_pools = num_pools;
> 
> -	for (i = 0; i < rxadd_subcrqs; i++) {
> +	for (i = 0; i < num_pools; i++) {
>  		rx_pool = &adapter->rx_pool[i];
> 
>  		netdev_dbg(adapter->netdev,
>  			   "Initializing rx_pool[%d], %lld buffs, %lld bytes each\n",
> -			   i, adapter->req_rx_add_entries_per_subcrq,
> -			   buff_size);
> +			   i, pool_size, buff_size);
> 
> -		rx_pool->size = adapter->req_rx_add_entries_per_subcrq;
> +		rx_pool->size = pool_size;
>  		rx_pool->index = i;
>  		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
>  		rx_pool->active = 1;
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
> index 22df602323bc..5652566818fb 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.h
> +++ b/drivers/net/ethernet/ibm/ibmvnic.h
> @@ -827,7 +827,7 @@ struct ibmvnic_rx_buff {
> 
>  struct ibmvnic_rx_pool {
>  	struct ibmvnic_rx_buff *rx_buff;
> -	int size;
> +	int size;			/* # of buffers in the pool */
>  	int index;
>  	int buff_size;
>  	atomic_t available;

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools
  2021-09-01  0:08 ` [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools Sukadev Bhattiprolu
@ 2021-09-01  1:30   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:30 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> Use/rename local variables in init_tx_pools() for consistency with
> init_rx_pools() and for readability. Also add some comments
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 30 ++++++++++++++++--------------
>  1 file changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index a611bd3f2539..4c6739b250df 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -777,31 +777,31 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
> 
>  static int init_one_tx_pool(struct net_device *netdev,
>  			    struct ibmvnic_tx_pool *tx_pool,
> -			    int num_entries, int buf_size)
> +			    int pool_size, int buf_size)
>  {
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
>  	int i;
> 
> -	tx_pool->tx_buff = kcalloc(num_entries,
> +	tx_pool->tx_buff = kcalloc(pool_size,
>  				   sizeof(struct ibmvnic_tx_buff),
>  				   GFP_KERNEL);
>  	if (!tx_pool->tx_buff)
>  		return -1;
> 
>  	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
> -				 num_entries * buf_size))
> +				 pool_size * buf_size))
>  		return -1;
> 
> -	tx_pool->free_map = kcalloc(num_entries, sizeof(int), GFP_KERNEL);
> +	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
>  	if (!tx_pool->free_map)
>  		return -1;
> 
> -	for (i = 0; i < num_entries; i++)
> +	for (i = 0; i < pool_size; i++)
>  		tx_pool->free_map[i] = i;
> 
>  	tx_pool->consumer_index = 0;
>  	tx_pool->producer_index = 0;
> -	tx_pool->num_buffers = num_entries;
> +	tx_pool->num_buffers = pool_size;
>  	tx_pool->buf_size = buf_size;
> 
>  	return 0;
> @@ -811,17 +811,20 @@ static int init_tx_pools(struct net_device *netdev)
>  {
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
>  	struct device *dev = &adapter->vdev->dev;
> -	int tx_subcrqs;
> +	int num_pools;
> +	u64 pool_size;		/* # of buffers in pool */
>  	u64 buff_size;
>  	int i, rc;
> 
> -	tx_subcrqs = adapter->num_active_tx_scrqs;
> -	adapter->tx_pool = kcalloc(tx_subcrqs,
> +	pool_size = adapter->req_tx_entries_per_subcrq;
> +	num_pools = adapter->num_active_tx_scrqs;
> +
> +	adapter->tx_pool = kcalloc(num_pools,
>  				   sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
>  	if (!adapter->tx_pool)
>  		return -1;
> 
> -	adapter->tso_pool = kcalloc(tx_subcrqs,
> +	adapter->tso_pool = kcalloc(num_pools,
>  				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
>  	/* To simplify release_tx_pools() ensure that ->tx_pool and
>  	 * ->tso_pool are either both NULL or both non-NULL.
> @@ -835,9 +838,9 @@ static int init_tx_pools(struct net_device *netdev)
>  	/* Set num_active_tx_pools early. If we fail below after partial
>  	 * allocation, release_tx_pools() will know how many to look for.
>  	 */
> -	adapter->num_active_tx_pools = tx_subcrqs;
> +	adapter->num_active_tx_pools = num_pools;
> 
> -	for (i = 0; i < tx_subcrqs; i++) {
> +	for (i = 0; i < num_pools; i++) {
>  		buff_size = adapter->req_mtu + VLAN_HLEN;
>  		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> 
> @@ -845,8 +848,7 @@ static int init_tx_pools(struct net_device *netdev)
>  			i, adapter->req_tx_entries_per_subcrq, buff_size);
> 
>  		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
> -				      adapter->req_tx_entries_per_subcrq,
> -				      buff_size);
> +				      pool_size, buff_size);
>  		if (rc) {
>  			release_tx_pools(adapter);
>  			return rc;

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out
  2021-09-01  0:08 ` [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out Sukadev Bhattiprolu
@ 2021-09-01  1:32   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:32 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> In init_tx_pools() move some loop-invariant code out of the loop.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 4c6739b250df..8894afdb3cb3 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -839,11 +839,10 @@ static int init_tx_pools(struct net_device *netdev)
>  	 * allocation, release_tx_pools() will know how many to look for.
>  	 */
>  	adapter->num_active_tx_pools = num_pools;
> +	buff_size = adapter->req_mtu + VLAN_HLEN;
> +	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> 
>  	for (i = 0; i < num_pools; i++) {
> -		buff_size = adapter->req_mtu + VLAN_HLEN;
> -		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> -
>  		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
>  			i, adapter->req_tx_entries_per_subcrq, buff_size);

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids
  2021-09-01  0:08 ` [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids Sukadev Bhattiprolu
@ 2021-09-01  1:33   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:33 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> In a follow-on patch, we will reuse long term buffers when possible.
> When doing so we have to be careful to properly assign map ids. We
> can no longer assign them sequentially because a lower map id may be
> available and we could wrap at 255 and collide with an in-use map id.
> 
> Instead, use a bitmap to track active map ids and to find a free map 
> id.
> Don't need to take locks here since the map_id only changes during 
> reset
> and at that time only the reset worker thread should be using the 
> adapter.
> 
> Noticed this when analyzing an error Dany Madden ran into with the
> patch set.
> 
> Reported-by: Dany Madden <drt@linux.ibm.com>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 12 ++++++++----
>  drivers/net/ethernet/ibm/ibmvnic.h |  3 ++-
>  2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 8894afdb3cb3..30153a8bb5ec 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -228,8 +228,9 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
>  		dev_err(dev, "Couldn't alloc long term buffer\n");
>  		return -ENOMEM;
>  	}
> -	ltb->map_id = adapter->map_id;
> -	adapter->map_id++;
> +	ltb->map_id = find_first_zero_bit(adapter->map_ids,
> +					  MAX_MAP_ID);
> +	bitmap_set(adapter->map_ids, ltb->map_id, 1);
> 
>  	mutex_lock(&adapter->fw_lock);
>  	adapter->fw_done_rc = 0;
> @@ -284,6 +285,8 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
>  	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
> 
>  	ltb->buff = NULL;
> +	/* mark this map_id free */
> +	bitmap_clear(adapter->map_ids, ltb->map_id, 1);
>  	ltb->map_id = 0;
>  }
> 
> @@ -1231,8 +1234,6 @@ static int init_resources(struct ibmvnic_adapter *adapter)
>  		return rc;
>  	}
> 
> -	adapter->map_id = 1;
> -
>  	rc = init_napi(adapter);
>  	if (rc)
>  		return rc;
> @@ -5553,6 +5554,9 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
>  	adapter->vdev = dev;
>  	adapter->netdev = netdev;
>  	adapter->login_pending = false;
> +	memset(&adapter->map_ids, 0, sizeof(adapter->map_ids));
> +	/* map_ids start at 1, so ensure map_id 0 is always "in-use" */
> +	bitmap_set(adapter->map_ids, 0, 1);
> 
>  	ether_addr_copy(adapter->mac_addr, mac_addr_p);
>  	ether_addr_copy(netdev->dev_addr, adapter->mac_addr);
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
> index 5652566818fb..e97f1aa98c05 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.h
> +++ b/drivers/net/ethernet/ibm/ibmvnic.h
> @@ -979,7 +979,8 @@ struct ibmvnic_adapter {
>  	u64 opt_tx_entries_per_subcrq;
>  	u64 opt_rxba_entries_per_subcrq;
>  	__be64 tx_rx_desc_req;
> -	u8 map_id;
> +#define MAX_MAP_ID	255
> +	DECLARE_BITMAP(map_ids, MAX_MAP_ID);
>  	u32 num_active_rx_scrqs;
>  	u32 num_active_rx_pools;
>  	u32 num_active_rx_napi;

^ permalink raw reply	[flat|nested] 24+ messages in thread
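
The bitmap logic in the patch above is spread over three hunks (probe,
alloc, free). A minimal user-space stand-in, useful for seeing why
lowest-free-id allocation avoids the wrap-at-255 collision the commit
message describes - get_map_id() and put_map_id() are made-up helpers
emulating find_first_zero_bit(), bitmap_set() and bitmap_clear():

#include <stdint.h>
#include <stdio.h>

#define MAX_MAP_ID 255

static uint8_t map_ids[(MAX_MAP_ID + 7) / 8];

/* claim the lowest clear bit, i.e. the lowest free map id */
static int get_map_id(void)
{
	int id;

	for (id = 0; id < MAX_MAP_ID; id++) {
		if (!(map_ids[id / 8] & (1u << (id % 8)))) {
			map_ids[id / 8] |= 1u << (id % 8);
			return id;
		}
	}
	return -1;	/* all map ids in use */
}

/* return an id to the pool so it can be handed out again */
static void put_map_id(int id)
{
	map_ids[id / 8] &= (uint8_t)~(1u << (id % 8));
}

int main(void)
{
	int a, b;

	map_ids[0] |= 1u;	/* reserve id 0, as the patch does in probe */
	a = get_map_id();	/* 1 */
	b = get_map_id();	/* 2 */
	put_map_id(a);		/* id 1 freed while id 2 is still live */
	printf("%d %d %d\n", a, b, get_map_id());	/* prints: 1 2 1 */
	return 0;
}

A sequential 8-bit counter would hand out 3 here and, after enough resets,
wrap past 255 into the still-live id 2; the bitmap reuses freed ids instead.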

* Re: [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible
  2021-09-01  0:08 ` [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible Sukadev Bhattiprolu
@ 2021-09-01  1:34   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:34 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> Reuse the long term buffer during a reset as long as its size has
> not changed. If the size has changed, free it and allocate a new
> one of the appropriate size.
> 
> When we do this, alloc_long_term_buff() and reset_long_term_buff()
> become identical. Drop reset_long_term_buff().
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 122 ++++++++++++++---------------
>  1 file changed, 59 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 30153a8bb5ec..1bb5996c4313 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
>  static int init_crq_queue(struct ibmvnic_adapter *adapter);
>  static int send_query_phys_parms(struct ibmvnic_adapter *adapter);
>  static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
>  					 struct ibmvnic_sub_crq_queue *tx_scrq);
> +static void free_long_term_buff(struct ibmvnic_adapter *adapter,
> +				struct ibmvnic_long_term_buff *ltb);
> 
>  struct ibmvnic_stat {
>  	char name[ETH_GSTRING_LEN];
> @@ -214,23 +216,62 @@ static int ibmvnic_wait_for_completion(struct ibmvnic_adapter *adapter,
>  	return -ETIMEDOUT;
>  }
> 
> +/**
> + * Reuse long term buffer unless size has changed.
> + */
> +static bool reuse_ltb(struct ibmvnic_long_term_buff *ltb, int size)
> +{
> +	return (ltb->buff && ltb->size == size);
> +}
> +
> +/**
> + * Allocate a long term buffer of the specified size and notify VIOS.
> + *
> + * If the given @ltb already has the correct size, reuse it. Otherwise if
> + * it's non-NULL, free it. Then allocate a new one of the correct size.
> + * Notify the VIOS either way since we may now be working with a new VIOS.
> + *
> + * Allocating larger chunks of memory during resets, especially during LPM
> + * or under low memory situations, can cause resets to fail/timeout and the
> + * LPAR to lose connectivity. So hold onto the LTB even if we fail to
> + * communicate with the VIOS and reuse it on the next open. Free the LTB
> + * when the adapter is closed.
> + */
>  static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
>  				struct ibmvnic_long_term_buff *ltb, int size)
>  {
>  	struct device *dev = &adapter->vdev->dev;
>  	int rc;
> 
> -	ltb->size = size;
> -	ltb->buff = dma_alloc_coherent(dev, ltb->size, &ltb->addr,
> -				       GFP_KERNEL);
> +	if (!reuse_ltb(ltb, size)) {
> +		dev_dbg(dev,
> +			"LTB size changed from 0x%llx to 0x%x, reallocating\n",
> +			 ltb->size, size);
> +		free_long_term_buff(adapter, ltb);
> +	}
> 
> -	if (!ltb->buff) {
> -		dev_err(dev, "Couldn't alloc long term buffer\n");
> -		return -ENOMEM;
> +	if (ltb->buff) {
> +		dev_dbg(dev, "Reusing LTB [map %d, size 0x%llx]\n",
> +			ltb->map_id, ltb->size);
> +	} else {
> +		ltb->buff = dma_alloc_coherent(dev, size, &ltb->addr,
> +					       GFP_KERNEL);
> +		if (!ltb->buff) {
> +			dev_err(dev, "Couldn't alloc long term buffer\n");
> +			return -ENOMEM;
> +		}
> +		ltb->size = size;
> +
> +		ltb->map_id = find_first_zero_bit(adapter->map_ids,
> +						  MAX_MAP_ID);
> +		bitmap_set(adapter->map_ids, ltb->map_id, 1);
> +
> +		dev_dbg(dev,
> +			"Allocated new LTB [map %d, size 0x%llx]\n",
> +			 ltb->map_id, ltb->size);
>  	}
> -	ltb->map_id = find_first_zero_bit(adapter->map_ids,
> -					  MAX_MAP_ID);
> -	bitmap_set(adapter->map_ids, ltb->map_id, 1);
> +
> +	/* Ensure ltb is zeroed - especially when reusing it. */
> +	memset(ltb->buff, 0, ltb->size);
> 
>  	mutex_lock(&adapter->fw_lock);
>  	adapter->fw_done_rc = 0;
> @@ -257,10 +298,7 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
>  	}
>  	rc = 0;
>  out:
> -	if (rc) {
> -		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
> -		ltb->buff = NULL;
> -	}
> +	/* don't free LTB on communication error - see function header */
>  	mutex_unlock(&adapter->fw_lock);
>  	return rc;
>  }
> @@ -290,43 +328,6 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
>  	ltb->map_id = 0;
>  }
> 
> -static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
> -				struct ibmvnic_long_term_buff *ltb)
> -{
> -	struct device *dev = &adapter->vdev->dev;
> -	int rc;
> -
> -	memset(ltb->buff, 0, ltb->size);
> -
> -	mutex_lock(&adapter->fw_lock);
> -	adapter->fw_done_rc = 0;
> -
> -	reinit_completion(&adapter->fw_done);
> -	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
> -	if (rc) {
> -		mutex_unlock(&adapter->fw_lock);
> -		return rc;
> -	}
> -
> -	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
> -	if (rc) {
> -		dev_info(dev,
> -			 "Reset failed, long term map request timed out or aborted\n");
> -		mutex_unlock(&adapter->fw_lock);
> -		return rc;
> -	}
> -
> -	if (adapter->fw_done_rc) {
> -		dev_info(dev,
> -			 "Reset failed, attempting to free and reallocate buffer\n");
> -		free_long_term_buff(adapter, ltb);
> -		mutex_unlock(&adapter->fw_lock);
> -		return alloc_long_term_buff(adapter, ltb, ltb->size);
> -	}
> -	mutex_unlock(&adapter->fw_lock);
> -	return 0;
> -}
> -
>  static void deactivate_rx_pools(struct ibmvnic_adapter *adapter)
>  {
>  	int i;
> @@ -548,18 +549,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
> 
>  		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
> 
> -		if (rx_pool->buff_size != buff_size) {
> -			free_long_term_buff(adapter, &rx_pool->long_term_buff);
> -			rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> -			rc = alloc_long_term_buff(adapter,
> -						  &rx_pool->long_term_buff,
> -						  rx_pool->size *
> -						  rx_pool->buff_size);
> -		} else {
> -			rc = reset_long_term_buff(adapter,
> -						  &rx_pool->long_term_buff);
> -		}
> -
> +		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> +		rc = alloc_long_term_buff(adapter,
> +					  &rx_pool->long_term_buff,
> +					  rx_pool->size * rx_pool->buff_size);
>  		if (rc)
>  			return rc;
> 
> @@ -692,9 +685,12 @@ static int init_rx_pools(struct net_device *netdev)
>  static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
>  			     struct ibmvnic_tx_pool *tx_pool)
>  {
> +	struct ibmvnic_long_term_buff *ltb;
>  	int rc, i;
> 
> -	rc = reset_long_term_buff(adapter, &tx_pool->long_term_buff);
> +	ltb = &tx_pool->long_term_buff;
> +
> +	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
>  	if (rc)
>  		return rc;

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 8/9] ibmvnic: Reuse rx pools when possible
  2021-09-01  0:08 ` [PATCH net-next 8/9] ibmvnic: Reuse rx pools " Sukadev Bhattiprolu
@ 2021-09-01  1:35   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:35 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> Rather than releasing the rx pools and reallocating them on every
> reset, reuse the rx pools unless the pool parameters (number of pools,
> size of each pool or size of each buffer in a pool) have changed.
> 
> If the pool parameters changed, then release the old pools (if any)
> and allocate new ones.
> 
> Specifically release rx pools, if:
> 	- adapter is removed,
> 	- pool parameters change during reset,
> 	- we encounter an error when opening the adapter in response
> 	  to a user request (in ibmvnic_open()).
> 
> and don't release them:
> 	- in __ibmvnic_close() or
> 	- on errors in __ibmvnic_open()
> 
> in the hope that we can reuse them on the next reset.
> 
> With these, reset_rx_pools() can be dropped because its optimization is
> now included in init_rx_pools() itself.
> 
> clean_rx_pools() releases all the skbs associated with the pool and
> is called from ibmvnic_cleanup(), which is called on every reset. Since
> we want to reuse skbs across resets, move clean_rx_pools() out of
> ibmvnic_cleanup() and call it only when the user closes the adapter.
> 
> Add two new adapter fields, ->prev_rx_buf_sz, ->prev_rx_pool_size to
> keep track of the previous values and use them to decide whether to
> reuse or realloc the pools.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 183 +++++++++++++++++++----------
>  drivers/net/ethernet/ibm/ibmvnic.h |   3 +
>  2 files changed, 122 insertions(+), 64 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 1bb5996c4313..ebd525b6fc87 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -368,20 +368,27 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
>  	 * be 0.
>  	 */
>  	for (i = ind_bufp->index; i < count; ++i) {
> -		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
> +		index = pool->free_map[pool->next_free];
> +
> +		/* We may be reusing the skb from earlier resets. Allocate
> +		 * only if necessary. But since the LTB may have changed
> +		 * during reset (see init_rx_pools()), update LTB below
> +		 * even if reusing skb.
> +		 */
> +		skb = pool->rx_buff[index].skb;
>  		if (!skb) {
> -			dev_err(dev, "Couldn't replenish rx buff\n");
> -			adapter->replenish_no_mem++;
> -			break;
> +			skb = netdev_alloc_skb(adapter->netdev,
> +					       pool->buff_size);
> +			if (!skb) {
> +				dev_err(dev, "Couldn't replenish rx buff\n");
> +				adapter->replenish_no_mem++;
> +				break;
> +			}
>  		}
> 
> -		index = pool->free_map[pool->next_free];
>  		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
>  		pool->next_free = (pool->next_free + 1) % pool->size;
> 
> -		if (pool->rx_buff[index].skb)
> -			dev_err(dev, "Inconsistent free_map!\n");
> -
>  		/* Copy the skb to the long term mapped DMA buffer */
>  		offset = index * pool->buff_size;
>  		dst = pool->long_term_buff.buff + offset;
> @@ -532,45 +539,6 @@ static int init_stats_token(struct ibmvnic_adapter *adapter)
>  	return 0;
>  }
> 
> -static int reset_rx_pools(struct ibmvnic_adapter *adapter)
> -{
> -	struct ibmvnic_rx_pool *rx_pool;
> -	u64 buff_size;
> -	int rx_scrqs;
> -	int i, j, rc;
> -
> -	if (!adapter->rx_pool)
> -		return -1;
> -
> -	buff_size = adapter->cur_rx_buf_sz;
> -	rx_scrqs = adapter->num_active_rx_pools;
> -	for (i = 0; i < rx_scrqs; i++) {
> -		rx_pool = &adapter->rx_pool[i];
> -
> -		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
> -
> -		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> -		rc = alloc_long_term_buff(adapter,
> -					  &rx_pool->long_term_buff,
> -					  rx_pool->size * rx_pool->buff_size);
> -		if (rc)
> -			return rc;
> -
> -		for (j = 0; j < rx_pool->size; j++)
> -			rx_pool->free_map[j] = j;
> -
> -		memset(rx_pool->rx_buff, 0,
> -		       rx_pool->size * sizeof(struct ibmvnic_rx_buff));
> -
> -		atomic_set(&rx_pool->available, 0);
> -		rx_pool->next_alloc = 0;
> -		rx_pool->next_free = 0;
> -		rx_pool->active = 1;
> -	}
> -
> -	return 0;
> -}
> -
>  /**
>   * Release any rx_pools attached to @adapter.
>   * Safe to call this multiple times - even if no pools are attached.
> @@ -589,6 +557,7 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
>  		netdev_dbg(adapter->netdev, "Releasing rx_pool[%d]\n", i);
> 
>  		kfree(rx_pool->free_map);
> +
>  		free_long_term_buff(adapter, &rx_pool->long_term_buff);
> 
>  		if (!rx_pool->rx_buff)
> @@ -607,8 +576,53 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
>  	kfree(adapter->rx_pool);
>  	adapter->rx_pool = NULL;
>  	adapter->num_active_rx_pools = 0;
> +	adapter->prev_rx_pool_size = 0;
> +}
> +
> +/**
> + * Return true if we can reuse the existing rx pools.
> + * NOTE: This assumes that all pools have the same number of buffers
> + *       which is the case currently. If that changes, we must fix this.
> + */
> +static bool reuse_rx_pools(struct ibmvnic_adapter *adapter)
> +{
> +	u64 old_num_pools, new_num_pools;
> +	u64 old_pool_size, new_pool_size;
> +	u64 old_buff_size, new_buff_size;
> +
> +	if (!adapter->rx_pool)
> +		return false;
> +
> +	old_num_pools = adapter->num_active_rx_pools;
> +	new_num_pools = adapter->req_rx_queues;
> +
> +	old_pool_size = adapter->prev_rx_pool_size;
> +	new_pool_size = adapter->req_rx_add_entries_per_subcrq;
> +
> +	old_buff_size = adapter->prev_rx_buf_sz;
> +	new_buff_size = adapter->cur_rx_buf_sz;
> +
> +	/* Require buff size to be exactly the same for now */
> +	if (old_buff_size != new_buff_size)
> +		return false;
> +
> +	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
> +		return true;
> +
> +	if (old_num_pools < adapter->min_rx_queues ||
> +	    old_num_pools > adapter->max_rx_queues ||
> +	    old_pool_size < adapter->min_rx_add_entries_per_subcrq ||
> +	    old_pool_size > adapter->max_rx_add_entries_per_subcrq)
> +		return false;
> +
> +	return true;
>  }
> 
> +/**
> + * Initialize the set of receiver pools in the adapter. Reuse existing
> + * pools if possible. Otherwise allocate a new set of pools before
> + * initializing them.
> + */
>  static int init_rx_pools(struct net_device *netdev)
>  {
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
> @@ -619,10 +633,18 @@ static int init_rx_pools(struct net_device *netdev)
>  	u64 buff_size;
>  	int i, j;
> 
> -	num_pools = adapter->num_active_rx_scrqs;
>  	pool_size = adapter->req_rx_add_entries_per_subcrq;
> +	num_pools = adapter->req_rx_queues;
>  	buff_size = adapter->cur_rx_buf_sz;
> 
> +	if (reuse_rx_pools(adapter)) {
> +		dev_dbg(dev, "Reusing rx pools\n");
> +		goto update_ltb;
> +	}
> +
> +	/* Allocate/populate the pools. */
> +	release_rx_pools(adapter);
> +
>  	adapter->rx_pool = kcalloc(num_pools,
>  				   sizeof(struct ibmvnic_rx_pool),
>  				   GFP_KERNEL);
> @@ -646,14 +668,12 @@ static int init_rx_pools(struct net_device *netdev)
>  		rx_pool->size = pool_size;
>  		rx_pool->index = i;
>  		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> -		rx_pool->active = 1;
> 
>  		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
>  					    GFP_KERNEL);
>  		if (!rx_pool->free_map) {
>  			dev_err(dev, "Couldn't alloc free_map %d\n", i);
> -			release_rx_pools(adapter);
> -			return -1;
> +			goto out_release;
>  		}
> 
>  		rx_pool->rx_buff = kcalloc(rx_pool->size,
> @@ -661,25 +681,58 @@ static int init_rx_pools(struct net_device *netdev)
>  					   GFP_KERNEL);
>  		if (!rx_pool->rx_buff) {
>  			dev_err(dev, "Couldn't alloc rx buffers\n");
> -			release_rx_pools(adapter);
> -			return -1;
> +			goto out_release;
>  		}
> +	}
> +
> +	adapter->prev_rx_pool_size = pool_size;
> +	adapter->prev_rx_buf_sz = adapter->cur_rx_buf_sz;
> +
> +update_ltb:
> +	for (i = 0; i < num_pools; i++) {
> +		rx_pool = &adapter->rx_pool[i];
> +		dev_dbg(dev, "Updating LTB for rx pool %d [%d, %d]\n",
> +			i, rx_pool->size, rx_pool->buff_size);
> 
>  		if (alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
> -					 rx_pool->size * rx_pool->buff_size)) {
> -			release_rx_pools(adapter);
> -			return -1;
> -		}
> +					 rx_pool->size * rx_pool->buff_size))
> +			goto out;
> +
> +		for (j = 0; j < rx_pool->size; ++j) {
> +			struct ibmvnic_rx_buff *rx_buff;
> 
> -		for (j = 0; j < rx_pool->size; ++j)
>  			rx_pool->free_map[j] = j;
> 
> +			/* NOTE: Don't clear rx_buff->skb here - will leak
> +			 * memory! replenish_rx_pool() will reuse skbs or
> +			 * allocate as necessary.
> +			 */
> +			rx_buff = &rx_pool->rx_buff[j];
> +			rx_buff->dma = 0;
> +			rx_buff->data = 0;
> +			rx_buff->size = 0;
> +			rx_buff->pool_index = 0;
> +		}
> +
> +		/* Mark pool "empty" so replenish_rx_pool() will
> +		 * update the LTB info for each buffer
> +		 */
>  		atomic_set(&rx_pool->available, 0);
>  		rx_pool->next_alloc = 0;
>  		rx_pool->next_free = 0;
> +		/* replenish_rx_pool() may have called deactivate_rx_pools()
> +		 * on failover. Ensure pool is active now.
> +		 */
> +		rx_pool->active = 1;
>  	}
> -
>  	return 0;
> +out_release:
> +	release_rx_pools(adapter);
> +out:
> +	/* We failed to allocate one or more LTBs or map them on the VIOS.
> +	 * Hold onto the pools and any LTBs that we did allocate/map.
> +	 */
> +	return -1;
>  }
> 
>  static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
> @@ -1053,7 +1106,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
>  	release_vpd_data(adapter);
> 
>  	release_tx_pools(adapter);
> -	release_rx_pools(adapter);
> 
>  	release_napi(adapter);
>  	release_login_buffer(adapter);
> @@ -1326,6 +1378,7 @@ static int ibmvnic_open(struct net_device *netdev)
>  		if (rc) {
>  			netdev_err(netdev, "failed to initialize resources\n");
>  			release_resources(adapter);
> +			release_rx_pools(adapter);
>  			goto out;
>  		}
>  	}
> @@ -1455,7 +1508,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)
>  	ibmvnic_napi_disable(adapter);
>  	ibmvnic_disable_irqs(adapter);
> 
> -	clean_rx_pools(adapter);
>  	clean_tx_pools(adapter);
>  }
> 
> @@ -1490,6 +1542,7 @@ static int ibmvnic_close(struct net_device *netdev)
> 
>  	rc = __ibmvnic_close(netdev);
>  	ibmvnic_cleanup(netdev);
> +	clean_rx_pools(adapter);
> 
>  	return rc;
>  }
> @@ -2218,7 +2271,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
>  		    !adapter->rx_pool ||
>  		    !adapter->tso_pool ||
>  		    !adapter->tx_pool) {
> -			release_rx_pools(adapter);
>  			release_tx_pools(adapter);
>  			release_napi(adapter);
>  			release_vpd_data(adapter);
> @@ -2235,9 +2287,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
>  				goto out;
>  			}
> 
> -			rc = reset_rx_pools(adapter);
> +			rc = init_rx_pools(netdev);
>  			if (rc) {
> -				netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
> +				netdev_dbg(netdev,
> +					   "init rx pools failed (%d)\n",
>  					   rc);
>  				goto out;
>  			}
> @@ -5573,6 +5626,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
>  	init_completion(&adapter->reset_done);
>  	init_completion(&adapter->stats_done);
>  	clear_bit(0, &adapter->resetting);
> +	adapter->prev_rx_buf_sz = 0;
> 
>  	init_success = false;
>  	do {
> @@ -5673,6 +5727,7 @@ static void ibmvnic_remove(struct vio_dev *dev)
>  	unregister_netdevice(netdev);
> 
>  	release_resources(adapter);
> +	release_rx_pools(adapter);
>  	release_sub_crqs(adapter, 1);
>  	release_crq_queue(adapter);
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
> index e97f1aa98c05..b73a1b812368 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.h
> +++ b/drivers/net/ethernet/ibm/ibmvnic.h
> @@ -986,7 +986,10 @@ struct ibmvnic_adapter {
>  	u32 num_active_rx_napi;
>  	u32 num_active_tx_scrqs;
>  	u32 num_active_tx_pools;
> +
> +	u32 prev_rx_pool_size;
>  	u32 cur_rx_buf_sz;
> +	u32 prev_rx_buf_sz;
> 
>  	struct tasklet_struct tasklet;
>  	enum vnic_state state;

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 9/9] ibmvnic: Reuse tx pools when possible
  2021-09-01  0:08 ` [PATCH net-next 9/9] ibmvnic: Reuse tx " Sukadev Bhattiprolu
@ 2021-09-01  1:36   ` Dany Madden
  0 siblings, 0 replies; 24+ messages in thread
From: Dany Madden @ 2021-09-01  1:36 UTC (permalink / raw)
  To: Sukadev Bhattiprolu; +Cc: netdev, Brian King, cforno12, Rick Lindsley

On 2021-08-31 17:08, Sukadev Bhattiprolu wrote:
> Rather than releasing the tx pools on every close and reallocating
> them on open, reuse the tx pools unless the pool parameters (number
> of pools, size of each pool or size of each buffer in a pool) have
> changed.
> 
> If the pool parameters changed, then release the old pools (if
> any) and allocate new ones.
> 
> Specifically release tx pools, if:
> 	- adapter is removed,
> 	- pool parameters change during reset,
> 	- we encounter an error when opening the adapter in response
> 	  to a user request (in ibmvnic_open()).
> 
> and don't release them:
> 	- in __ibmvnic_close() or
> 	- on errors in __ibmvnic_open()
> 
> in the hope that we can reuse them during this or next reset.
> 
> With these changes reset_tx_pools() can be dropped because its
> optimization is now included in init_tx_pools() itself.
> 
> clean_tx_pools() releases all the skbs associated with the pool and
> is called from ibmvnic_cleanup(), which is called on every reset. Since
> we want to reuse skbs across resets, move clean_tx_pools() out of
> ibmvnic_cleanup() and call it only when the user closes the adapter.
> 
> Add two new adapter fields, ->prev_mtu, ->prev_tx_pool_size to track 
> the
> previous values and use them to decide whether to reuse or realloc the
> pools.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 201 +++++++++++++++++++----------
>  drivers/net/ethernet/ibm/ibmvnic.h |   2 +
>  2 files changed, 133 insertions(+), 70 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index ebd525b6fc87..8c422a717e88 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -735,53 +735,6 @@ static int init_rx_pools(struct net_device *netdev)
>  	return -1;
>  }
> 
> -static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
> -			     struct ibmvnic_tx_pool *tx_pool)
> -{
> -	struct ibmvnic_long_term_buff *ltb;
> -	int rc, i;
> -
> -	ltb = &tx_pool->long_term_buff;
> -
> -	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
> -	if (rc)
> -		return rc;
> -
> -	memset(tx_pool->tx_buff, 0,
> -	       tx_pool->num_buffers *
> -	       sizeof(struct ibmvnic_tx_buff));
> -
> -	for (i = 0; i < tx_pool->num_buffers; i++)
> -		tx_pool->free_map[i] = i;
> -
> -	tx_pool->consumer_index = 0;
> -	tx_pool->producer_index = 0;
> -
> -	return 0;
> -}
> -
> -static int reset_tx_pools(struct ibmvnic_adapter *adapter)
> -{
> -	int tx_scrqs;
> -	int i, rc;
> -
> -	if (!adapter->tx_pool)
> -		return -1;
> -
> -	tx_scrqs = adapter->num_active_tx_pools;
> -	for (i = 0; i < tx_scrqs; i++) {
> -		ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
> -		rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
> -		if (rc)
> -			return rc;
> -		rc = reset_one_tx_pool(adapter, &adapter->tx_pool[i]);
> -		if (rc)
> -			return rc;
> -	}
> -
> -	return 0;
> -}
> -
>  static void release_vpd_data(struct ibmvnic_adapter *adapter)
>  {
>  	if (!adapter->vpd)
> @@ -825,13 +778,13 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
>  	kfree(adapter->tso_pool);
>  	adapter->tso_pool = NULL;
>  	adapter->num_active_tx_pools = 0;
> +	adapter->prev_tx_pool_size = 0;
>  }
> 
>  static int init_one_tx_pool(struct net_device *netdev,
>  			    struct ibmvnic_tx_pool *tx_pool,
>  			    int pool_size, int buf_size)
>  {
> -	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
>  	int i;
> 
>  	tx_pool->tx_buff = kcalloc(pool_size,
> @@ -840,13 +793,12 @@ static int init_one_tx_pool(struct net_device *netdev,
>  	if (!tx_pool->tx_buff)
>  		return -1;
> 
> -	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
> -				 pool_size * buf_size))
> -		return -1;
> -
>  	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
> -	if (!tx_pool->free_map)
> +	if (!tx_pool->free_map) {
> +		kfree(tx_pool->tx_buff);
> +		tx_pool->tx_buff = NULL;
>  		return -1;
> +	}
> 
>  	for (i = 0; i < pool_size; i++)
>  		tx_pool->free_map[i] = i;
> @@ -859,6 +811,48 @@ static int init_one_tx_pool(struct net_device *netdev,
>  	return 0;
>  }
> 
> +/**
> + * Return true if we can reuse the existing tx pools, false otherwise
> + * NOTE: This assumes that all pools have the same number of buffers
> + *       which is the case currently. If that changes, we must fix this.
> + */
> +static bool reuse_tx_pools(struct ibmvnic_adapter *adapter)
> +{
> +	u64 old_num_pools, new_num_pools;
> +	u64 old_pool_size, new_pool_size;
> +	u64 old_mtu, new_mtu;
> +
> +	if (!adapter->tx_pool)
> +		return false;
> +
> +	old_num_pools = adapter->num_active_tx_pools;
> +	new_num_pools = adapter->num_active_tx_scrqs;
> +	old_pool_size = adapter->prev_tx_pool_size;
> +	new_pool_size = adapter->req_tx_entries_per_subcrq;
> +	old_mtu = adapter->prev_mtu;
> +	new_mtu = adapter->req_mtu;
> +
> +	/* Require MTU to be exactly the same to reuse pools for now */
> +	if (old_mtu != new_mtu)
> +		return false;
> +
> +	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
> +		return true;
> +
> +	if (old_num_pools < adapter->min_tx_queues ||
> +	    old_num_pools > adapter->max_tx_queues ||
> +	    old_pool_size < adapter->min_tx_entries_per_subcrq ||
> +	    old_pool_size > adapter->max_tx_entries_per_subcrq)
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
> + * Initialize the set of transmit pools in the adapter. Reuse existing
> + * pools if possible. Otherwise allocate a new set of pools before
> + * initializing them.
> + */
>  static int init_tx_pools(struct net_device *netdev)
>  {
>  	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
> @@ -866,7 +860,21 @@ static int init_tx_pools(struct net_device *netdev)
>  	int num_pools;
>  	u64 pool_size;		/* # of buffers in pool */
>  	u64 buff_size;
> -	int i, rc;
> +	int i, j, rc;
> +
> +	num_pools = adapter->req_tx_queues;
> +
> +	/* We must notify the VIOS about the LTB on all resets - but we only
> +	 * need to alloc/populate pools if either the number of buffers or
> +	 * size of each buffer in the pool has changed.
> +	 */
> +	if (reuse_tx_pools(adapter)) {
> +		netdev_dbg(netdev, "Reusing tx pools\n");
> +		goto update_ltb;
> +	}
> +
> +	/* Allocate/populate the pools. */
> +	release_tx_pools(adapter);
> 
>  	pool_size = adapter->req_tx_entries_per_subcrq;
>  	num_pools = adapter->num_active_tx_scrqs;
> @@ -891,6 +899,7 @@ static int init_tx_pools(struct net_device *netdev)
>  	 * allocation, release_tx_pools() will know how many to look for.
>  	 */
>  	adapter->num_active_tx_pools = num_pools;
> +
>  	buff_size = adapter->req_mtu + VLAN_HLEN;
>  	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
> 
> @@ -900,21 +909,73 @@ static int init_tx_pools(struct net_device *netdev)
> 
>  		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
>  				      pool_size, buff_size);
> -		if (rc) {
> -			release_tx_pools(adapter);
> -			return rc;
> -		}
> +		if (rc)
> +			goto out_release;
> 
>  		rc = init_one_tx_pool(netdev, &adapter->tso_pool[i],
>  				      IBMVNIC_TSO_BUFS,
>  				      IBMVNIC_TSO_BUF_SZ);
> -		if (rc) {
> -			release_tx_pools(adapter);
> -			return rc;
> -		}
> +		if (rc)
> +			goto out_release;
> +	}
> +
> +	adapter->prev_tx_pool_size = pool_size;
> +	adapter->prev_mtu = adapter->req_mtu;
> +
> +update_ltb:
> +	/* NOTE: All tx_pools have the same number of buffers (which is
> +	 *       the same as pool_size). All tso_pools have IBMVNIC_TSO_BUFS
> +	 *       buffers (see the calls to init_one_tx_pool() for these).
> +	 *       For consistency, we use tx_pool->num_buffers and
> +	 *       tso_pool->num_buffers below.
> +	 */
> +	rc = -1;
> +	for (i = 0; i < num_pools; i++) {
> +		struct ibmvnic_tx_pool *tso_pool;
> +		struct ibmvnic_tx_pool *tx_pool;
> +		u32 ltb_size;
> +
> +		tx_pool = &adapter->tx_pool[i];
> +		ltb_size = tx_pool->num_buffers * tx_pool->buf_size;
> +		if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
> +					 ltb_size))
> +			goto out;
> +
> +		dev_dbg(dev, "Updated LTB for tx pool %d [%p, %d, %d]\n",
> +			i, tx_pool->long_term_buff.buff,
> +			tx_pool->num_buffers, tx_pool->buf_size);
> +
> +		tx_pool->consumer_index = 0;
> +		tx_pool->producer_index = 0;
> +
> +		for (j = 0; j < tx_pool->num_buffers; j++)
> +			tx_pool->free_map[j] = j;
> +
> +		tso_pool = &adapter->tso_pool[i];
> +		ltb_size = tso_pool->num_buffers * tso_pool->buf_size;
> +		if (alloc_long_term_buff(adapter, &tso_pool->long_term_buff,
> +					 ltb_size))
> +			goto out;
> +
> +		dev_dbg(dev, "Updated LTB for tso pool %d [%p, %d, %d]\n",
> +			i, tso_pool->long_term_buff.buff,
> +			tso_pool->num_buffers, tso_pool->buf_size);
> +
> +		tso_pool->consumer_index = 0;
> +		tso_pool->producer_index = 0;
> +
> +		for (j = 0; j < tso_pool->num_buffers; j++)
> +			tso_pool->free_map[j] = j;
>  	}
> 
>  	return 0;
> +out_release:
> +	release_tx_pools(adapter);
> +out:
> +	/* We failed to allocate one or more LTBs or map them on the VIOS.
> +	 * Hold onto the pools and any LTBs that we did allocate/map.
> +	 */
> +	return rc;
>  }
> 
>  static void ibmvnic_napi_enable(struct ibmvnic_adapter *adapter)
> @@ -1105,8 +1166,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
>  {
>  	release_vpd_data(adapter);
> 
> -	release_tx_pools(adapter);
> -
>  	release_napi(adapter);
>  	release_login_buffer(adapter);
>  	release_login_rsp_buffer(adapter);
> @@ -1379,6 +1438,7 @@ static int ibmvnic_open(struct net_device *netdev)
>  			netdev_err(netdev, "failed to initialize resources\n");
>  			release_resources(adapter);
>  			release_rx_pools(adapter);
> +			release_tx_pools(adapter);
>  			goto out;
>  		}
>  	}
> @@ -1507,8 +1567,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)
> 
>  	ibmvnic_napi_disable(adapter);
>  	ibmvnic_disable_irqs(adapter);
> -
> -	clean_tx_pools(adapter);
>  }
> 
>  static int __ibmvnic_close(struct net_device *netdev)
> @@ -1543,6 +1601,7 @@ static int ibmvnic_close(struct net_device 
> *netdev)
>  	rc = __ibmvnic_close(netdev);
>  	ibmvnic_cleanup(netdev);
>  	clean_rx_pools(adapter);
> +	clean_tx_pools(adapter);
> 
>  	return rc;
>  }
> @@ -2119,9 +2178,9 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason)
>  static int do_reset(struct ibmvnic_adapter *adapter,
>  		    struct ibmvnic_rwi *rwi, u32 reset_state)
>  {
> +	struct net_device *netdev = adapter->netdev;
>  	u64 old_num_rx_queues, old_num_tx_queues;
>  	u64 old_num_rx_slots, old_num_tx_slots;
> -	struct net_device *netdev = adapter->netdev;
>  	int rc;
> 
>  	netdev_dbg(adapter->netdev,
> @@ -2271,7 +2330,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
>  		    !adapter->rx_pool ||
>  		    !adapter->tso_pool ||
>  		    !adapter->tx_pool) {
> -			release_tx_pools(adapter);
>  			release_napi(adapter);
>  			release_vpd_data(adapter);
> 
> @@ -2280,9 +2338,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
>  				goto out;
> 
>  		} else {
> -			rc = reset_tx_pools(adapter);
> +			rc = init_tx_pools(netdev);
>  			if (rc) {
> -				netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
> +				netdev_dbg(netdev,
> +					   "init tx pools failed (%d)\n",
>  					   rc);
>  				goto out;
>  			}
> @@ -5627,6 +5686,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
>  	init_completion(&adapter->stats_done);
>  	clear_bit(0, &adapter->resetting);
>  	adapter->prev_rx_buf_sz = 0;
> +	adapter->prev_mtu = 0;
> 
>  	init_success = false;
>  	do {
> @@ -5728,6 +5788,7 @@ static void ibmvnic_remove(struct vio_dev *dev)
> 
>  	release_resources(adapter);
>  	release_rx_pools(adapter);
> +	release_tx_pools(adapter);
>  	release_sub_crqs(adapter, 1);
>  	release_crq_queue(adapter);
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
> index b73a1b812368..b8e42f67d897 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.h
> +++ b/drivers/net/ethernet/ibm/ibmvnic.h
> @@ -967,6 +967,7 @@ struct ibmvnic_adapter {
>  	u64 min_mtu;
>  	u64 max_mtu;
>  	u64 req_mtu;
> +	u64 prev_mtu;
>  	u64 max_multicast_filters;
>  	u64 vlan_header_insertion;
>  	u64 rx_vlan_header_insertion;
> @@ -988,6 +989,7 @@ struct ibmvnic_adapter {
>  	u32 num_active_tx_pools;
> 
>  	u32 prev_rx_pool_size;
> +	u32 prev_tx_pool_size;
>  	u32 cur_rx_buf_sz;
>  	u32 prev_rx_buf_sz;

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools
  2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
                   ` (9 preceding siblings ...)
  2021-09-01  1:21 ` [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Rick Lindsley
@ 2021-09-01  2:35 ` Jakub Kicinski
  2021-09-01 18:07   ` Sukadev Bhattiprolu
  10 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2021-09-01  2:35 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: netdev, Brian King, cforno12, Dany Madden, Rick Lindsley

On Tue, 31 Aug 2021 17:08:03 -0700 Sukadev Bhattiprolu wrote:
> It can take a long time to free and reallocate rx and tx pools and long
> term buffer (LTB) during each reset of the VNIC. This is specially true
> when the partition (LPAR) is heavily loaded and going through a Logical
> Partition Migration (LPM). The long drawn reset causes the LPAR to lose
> connectivity for extended periods of time and results in "RMC connection"
> errors and the LPM failing.
> 
> What is worse is that during the LPM we could get a failover because
> of the lost connectivity. At that point, the vnic driver releases
> even the resources it has already allocated and starts over.
> 
> As long as the resources we have already allocated are valid/applicable,
> we might as well hold on to them while trying to allocate the remaining
> resources. This patch set attempts to reuse the resources previously
> allocated as long as they are valid. It seems to vastly improve the
> time taken for the vnic reset. We have also not seen any RMC connection
> issues during our testing with this patch set.
> 
> If the backing devices for a vnic adapter are not "matched" (see "pool
> parameters" in patches 8 and 9) it is possible that we will still free
> all the resources and allocate them. If that becomes a common problem,
> we have to address it separately.
> 
> Thanks to input and extensive testing from Brian King, Cris Forno,
> Dany Madden, Rick Lindsley.

Please fix the kdoc issues in this submission. Kdoc script is your
friend:

./scripts/kernel-doc -none <files>
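
e.g. for this series:

./scripts/kernel-doc -none drivers/net/ethernet/ibm/ibmvnic.c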


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages
  2021-09-01  0:08 ` [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages Sukadev Bhattiprolu
@ 2021-09-01  8:58     ` kernel test robot
  2021-09-01  8:58     ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2021-09-01  8:58 UTC (permalink / raw)
  To: Sukadev Bhattiprolu, netdev
  Cc: kbuild-all, Brian King, cforno12, Dany Madden, Rick Lindsley

[-- Attachment #1: Type: text/plain, Size: 2927 bytes --]

Hi Sukadev,

I love your patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Sukadev-Bhattiprolu/ibmvnic-Reuse-ltb-rx-tx-pools/20210901-081123
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 29ce8f9701072fc221d9c38ad952de1a9578f95c
config: powerpc-allyesconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/a28141472dfbc71fb3b53b1ba9213b3450435588
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sukadev-Bhattiprolu/ibmvnic-Reuse-ltb-rx-tx-pools/20210901-081123
        git checkout a28141472dfbc71fb3b53b1ba9213b3450435588
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/net/ethernet/ibm/ibmvnic.c:579: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    * Release any rx_pools attached to @adapter.
   drivers/net/ethernet/ibm/ibmvnic.c:752: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    * Release any tx and tso pools attached to @adapter.


vim +579 drivers/net/ethernet/ibm/ibmvnic.c

   577	
   578	/**
 > 579	 * Release any rx_pools attached to @adapter.
   580	 * Safe to call this multiple times - even if no pools are attached.
   581	 */
   582	static void release_rx_pools(struct ibmvnic_adapter *adapter)
   583	{
   584		struct ibmvnic_rx_pool *rx_pool;
   585		int i, j;
   586	
   587		if (!adapter->rx_pool)
   588			return;
   589	
   590		for (i = 0; i < adapter->num_active_rx_pools; i++) {
   591			rx_pool = &adapter->rx_pool[i];
   592	
   593			netdev_dbg(adapter->netdev, "Releasing rx_pool[%d]\n", i);
   594	
   595			kfree(rx_pool->free_map);
   596			free_long_term_buff(adapter, &rx_pool->long_term_buff);
   597	
   598			if (!rx_pool->rx_buff)
   599				continue;
   600	
   601			for (j = 0; j < rx_pool->size; j++) {
   602				if (rx_pool->rx_buff[j].skb) {
   603					dev_kfree_skb_any(rx_pool->rx_buff[j].skb);
   604					rx_pool->rx_buff[j].skb = NULL;
   605				}
   606			}
   607	
   608			kfree(rx_pool->rx_buff);
   609		}
   610	
   611		kfree(adapter->rx_pool);
   612		adapter->rx_pool = NULL;
   613		adapter->num_active_rx_pools = 0;
   614	}
   615	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 73419 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools
  2021-09-01  2:35 ` Jakub Kicinski
@ 2021-09-01 18:07   ` Sukadev Bhattiprolu
  0 siblings, 0 replies; 24+ messages in thread
From: Sukadev Bhattiprolu @ 2021-09-01 18:07 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, Brian King, cforno12, Dany Madden, Rick Lindsley

Jakub Kicinski [kuba@kernel.org] wrote:
> 
> Please fix the kdoc issues in this submission. Kdoc script is your
> friend:
> 
> ./scripts/kernel-doc -none <files>

Yes, thanks! I fixed them in v2.
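
For reference, kernel-doc wants the comment to start with the function
name and to document each argument, so the fix is along these lines
(a sketch - the @adapter description below is illustrative and the
exact wording in v2 may differ):

/**
 * release_rx_pools() - Release any rx_pools attached to @adapter.
 * @adapter: ibmvnic adapter whose rx pools should be freed.
 *
 * Safe to call this multiple times - even if no pools are attached.
 */
static void release_rx_pools(struct ibmvnic_adapter *adapter)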

Sukadev

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2021-09-01 18:07 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-01  0:08 [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Sukadev Bhattiprolu
2021-09-01  0:08 ` [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool() Sukadev Bhattiprolu
2021-09-01  1:26   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages Sukadev Bhattiprolu
2021-09-01  1:28   ` Dany Madden
2021-09-01  8:58   ` kernel test robot
2021-09-01  0:08 ` [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools Sukadev Bhattiprolu
2021-09-01  1:28   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools Sukadev Bhattiprolu
2021-09-01  1:30   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out Sukadev Bhattiprolu
2021-09-01  1:32   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids Sukadev Bhattiprolu
2021-09-01  1:33   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible Sukadev Bhattiprolu
2021-09-01  1:34   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 8/9] ibmvnic: Reuse rx pools " Sukadev Bhattiprolu
2021-09-01  1:35   ` Dany Madden
2021-09-01  0:08 ` [PATCH net-next 9/9] ibmvnic: Reuse tx " Sukadev Bhattiprolu
2021-09-01  1:36   ` Dany Madden
2021-09-01  1:21 ` [PATCH net-next 0/9] ibmvnic: Reuse ltb, rx, tx pools Rick Lindsley
2021-09-01  2:35 ` Jakub Kicinski
2021-09-01 18:07   ` Sukadev Bhattiprolu
