linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next 00/10] net: ipa: improve RX buffer replenishing
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

This series revises the algorithm used for replenishing receive
buffers on RX endpoints.  Currently there are two atomic variables
that track how many receive buffers can be sent to the hardware.
The new algorithm obviates the need for those, by just assuming we
always want to provide the hardware with buffers until it can hold
no more.
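
For reference, the replenish loop the series arrives at (patch 10)
looks roughly like this; the ENABLED/ACTIVE flag handling is omitted,
and try_again_later (not shown) frees the transaction and schedules a
retry:

	while ((trans = ipa_endpoint_trans_alloc(endpoint, 1))) {
		bool doorbell;

		if (ipa_endpoint_replenish_one(endpoint, trans))
			goto try_again_later;

		/* Ring the doorbell if we've got a full batch */
		doorbell = !(++endpoint->replenish_count % IPA_REPLENISH_BATCH);
		gsi_trans_commit(trans, doorbell);
	}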

The first patch eliminates an atomic variable that's not required.
The next two reorder the replenish allocation steps and move some
code into the main replenish function's caller, making one of the
called function's arguments unnecessary.  The next five refactor
things a bit more, adding a new helper function that allows us to
eliminate an additional atomic variable.  And the final two
implement two more minor improvements.

					-Alex

Alex Elder (10):
  net: ipa: kill replenish_saved
  net: ipa: allocate transaction before pages when replenishing
  net: ipa: increment backlog in replenish caller
  net: ipa: decide on doorbell in replenish loop
  net: ipa: allocate transaction in replenish loop
  net: ipa: don't use replenish_backlog
  net: ipa: introduce gsi_channel_trans_idle()
  net: ipa: kill replenish_backlog
  net: ipa: replenish after delivering payload
  net: ipa: determine replenish doorbell differently

 drivers/net/ipa/gsi_trans.c    |  11 ++++
 drivers/net/ipa/gsi_trans.h    |  10 +++
 drivers/net/ipa/ipa_endpoint.c | 112 +++++++++++----------------------
 drivers/net/ipa/ipa_endpoint.h |   8 +--
 4 files changed, 60 insertions(+), 81 deletions(-)

-- 
2.32.0



* [PATCH net-next 01/10] net: ipa: kill replenish_saved
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

The replenish_saved field keeps track of the number of times a new
buffer is added to the backlog when replenishing is disabled.  We
don't really use it though, so there's no need for us to track it
separately.  Whether replenishing is enabled or not, we can simply
increment the backlog.

Get rid of replenish_saved, and initialize and increment the backlog
where it would have otherwise been used.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 17 ++++-------------
 drivers/net/ipa/ipa_endpoint.h |  2 --
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index fffd0a784ef2c..a9f6d4083f869 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1090,9 +1090,8 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
  * endpoint, based on the number of entries in the underlying channel ring
  * buffer.  If an endpoint's "backlog" is non-zero, it indicates how many
  * more receive buffers can be supplied to the hardware.  Replenishing for
- * an endpoint can be disabled, in which case requests to replenish a
- * buffer are "saved", and transferred to the backlog once it is re-enabled
- * again.
+ * an endpoint can be disabled, in which case buffers are not queued to
+ * the hardware.
  */
 static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 {
@@ -1102,7 +1101,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 
 	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) {
 		if (add_one)
-			atomic_inc(&endpoint->replenish_saved);
+			atomic_inc(&endpoint->replenish_backlog);
 		return;
 	}
 
@@ -1147,11 +1146,8 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 {
 	struct gsi *gsi = &endpoint->ipa->gsi;
 	u32 max_backlog;
-	u32 saved;
 
 	set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
-	while ((saved = atomic_xchg(&endpoint->replenish_saved, 0)))
-		atomic_add(saved, &endpoint->replenish_backlog);
 
 	/* Start replenishing if hardware currently has no buffers */
 	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
@@ -1161,11 +1157,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
 {
-	u32 backlog;
-
 	clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
-	while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0)))
-		atomic_add(backlog, &endpoint->replenish_saved);
 }
 
 static void ipa_endpoint_replenish_work(struct work_struct *work)
@@ -1727,9 +1719,8 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 		 */
 		clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
 		clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
-		atomic_set(&endpoint->replenish_saved,
+		atomic_set(&endpoint->replenish_backlog,
 			   gsi_channel_tre_max(gsi, endpoint->channel_id));
-		atomic_set(&endpoint->replenish_backlog, 0);
 		INIT_DELAYED_WORK(&endpoint->replenish_work,
 				  ipa_endpoint_replenish_work);
 	}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 0313cdc607de3..c95816d882a74 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -66,7 +66,6 @@ enum ipa_replenish_flag {
  * @netdev:		Network device pointer, if endpoint uses one
  * @replenish_flags:	Replenishing state flags
  * @replenish_ready:	Number of replenish transactions without doorbell
- * @replenish_saved:	Replenish requests held while disabled
  * @replenish_backlog:	Number of buffers needed to fill hardware queue
  * @replenish_work:	Work item used for repeated replenish failures
  */
@@ -87,7 +86,6 @@ struct ipa_endpoint {
 	/* Receive buffer replenishing for RX endpoints */
 	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
 	u32 replenish_ready;
-	atomic_t replenish_saved;
 	atomic_t replenish_backlog;
 	struct delayed_work replenish_work;		/* global wq */
 };
-- 
2.32.0



* [PATCH net-next 02/10] net: ipa: allocate transaction before pages when replenishing
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Allocating a transaction fails only if no more transactions are
available for an endpoint, and checking for that is very cheap.

When replenishing an RX endpoint buffer, there's no point in
allocating pages if transactions are exhausted.  So don't bother
doing so unless the transaction allocation succeeds.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index a9f6d4083f869..f8dbd43949e16 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1046,14 +1046,14 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 	u32 len;
 	int ret;
 
+	trans = ipa_endpoint_trans_alloc(endpoint, 1);
+	if (!trans)
+		return -ENOMEM;
+
 	buffer_size = endpoint->data->rx.buffer_size;
 	page = dev_alloc_pages(get_order(buffer_size));
 	if (!page)
-		return -ENOMEM;
-
-	trans = ipa_endpoint_trans_alloc(endpoint, 1);
-	if (!trans)
-		goto err_free_pages;
+		goto err_trans_free;
 
 	/* Offset the buffer to make space for skb headroom */
 	offset = NET_SKB_PAD;
@@ -1061,7 +1061,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 
 	ret = gsi_trans_page_add(trans, page, len, offset);
 	if (ret)
-		goto err_trans_free;
+		goto err_free_pages;
 	trans->data = page;	/* transaction owns page now */
 
 	if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) {
@@ -1073,10 +1073,10 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 
 	return 0;
 
-err_trans_free:
-	gsi_trans_free(trans);
 err_free_pages:
 	__free_pages(page, get_order(buffer_size));
+err_trans_free:
+	gsi_trans_free(trans);
 
 	return -ENOMEM;
 }
-- 
2.32.0



* [PATCH net-next 03/10] net: ipa: increment backlog in replenish caller
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Three spots call ipa_endpoint_replenish(), and just one of those
requests that the backlog be incremented after completing the
replenish operation.

Instead, have the caller increment the backlog, and get rid of the
add_one argument to ipa_endpoint_replenish().

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 29 +++++++++--------------------
 1 file changed, 9 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index f8dbd43949e16..060a025d70ec6 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1084,7 +1084,6 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 /**
  * ipa_endpoint_replenish() - Replenish endpoint receive buffers
  * @endpoint:	Endpoint to be replenished
- * @add_one:	Whether this is replacing a just-consumed buffer
  *
  * The IPA hardware can hold a fixed number of receive buffers for an RX
  * endpoint, based on the number of entries in the underlying channel ring
@@ -1093,24 +1092,17 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
  * an endpoint can be disabled, in which case buffers are not queued to
  * the hardware.
  */
-static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
+static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 {
 	struct gsi *gsi;
 	u32 backlog;
-	int delta;
 
-	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) {
-		if (add_one)
-			atomic_inc(&endpoint->replenish_backlog);
+	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags))
 		return;
-	}
 
-	/* If already active, just update the backlog */
-	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags)) {
-		if (add_one)
-			atomic_inc(&endpoint->replenish_backlog);
+	/* Skip it if it's already active */
+	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags))
 		return;
-	}
 
 	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
 		if (ipa_endpoint_replenish_one(endpoint))
@@ -1118,17 +1110,13 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
-	if (add_one)
-		atomic_inc(&endpoint->replenish_backlog);
-
 	return;
 
 try_again_later:
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
 	/* The last one didn't succeed, so fix the backlog */
-	delta = add_one ? 2 : 1;
-	backlog = atomic_add_return(delta, &endpoint->replenish_backlog);
+	backlog = atomic_inc_return(&endpoint->replenish_backlog);
 
 	/* Whenever a receive buffer transaction completes we'll try to
 	 * replenish again.  It's unlikely, but if we fail to supply even
@@ -1152,7 +1140,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 	/* Start replenishing if hardware currently has no buffers */
 	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
 	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
-		ipa_endpoint_replenish(endpoint, false);
+		ipa_endpoint_replenish(endpoint);
 }
 
 static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
@@ -1167,7 +1155,7 @@ static void ipa_endpoint_replenish_work(struct work_struct *work)
 
 	endpoint = container_of(dwork, struct ipa_endpoint, replenish_work);
 
-	ipa_endpoint_replenish(endpoint, false);
+	ipa_endpoint_replenish(endpoint);
 }
 
 static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint,
@@ -1372,7 +1360,8 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
 {
 	struct page *page;
 
-	ipa_endpoint_replenish(endpoint, true);
+	ipa_endpoint_replenish(endpoint);
+	atomic_inc(&endpoint->replenish_backlog);
 
 	if (trans->cancelled)
 		return;
-- 
2.32.0



* [PATCH net-next 04/10] net: ipa: decide on doorbell in replenish loop
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Decide whether the doorbell should be signaled when committing a
replenish transaction in the main replenish loop, rather than in
ipa_endpoint_replenish_one().  This is a step to facilitate the
next patch.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 060a025d70ec6..274cf1c30b593 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1036,10 +1036,10 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
 	iowrite32(val, ipa->reg_virt + offset);
 }
 
-static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
+static int
+ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint, bool doorbell)
 {
 	struct gsi_trans *trans;
-	bool doorbell = false;
 	struct page *page;
 	u32 buffer_size;
 	u32 offset;
@@ -1064,11 +1064,6 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 		goto err_free_pages;
 	trans->data = page;	/* transaction owns page now */
 
-	if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) {
-		doorbell = true;
-		endpoint->replenish_ready = 0;
-	}
-
 	gsi_trans_commit(trans, doorbell);
 
 	return 0;
@@ -1104,9 +1099,17 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags))
 		return;
 
-	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
-		if (ipa_endpoint_replenish_one(endpoint))
+	while (atomic_dec_not_zero(&endpoint->replenish_backlog)) {
+		bool doorbell;
+
+		if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH)
+			endpoint->replenish_ready = 0;
+
+		/* Ring the doorbell if we've got a full batch */
+		doorbell = !endpoint->replenish_ready;
+		if (ipa_endpoint_replenish_one(endpoint, doorbell))
 			goto try_again_later;
+	}
 
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
-- 
2.32.0



* [PATCH net-next 05/10] net: ipa: allocate transaction in replenish loop
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

When replenishing, have ipa_endpoint_replenish() allocate a
transaction, and pass that to ipa_endpoint_replenish_one() to fill.
Then, if that produces no error, commit the transaction within the
replenish loop as well.  In this way we can distinguish between
transaction failures and buffer allocation/mapping failures.

Failure to allocate a transaction simply means the hardware already
has as many receive buffers as it can hold.  In that case we can
break out of the replenish loop because there's nothing more to do.

If we fail to allocate or map pages for the receive buffer, just
try again later.
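
Condensed from the diff below, the loop body then handles the two
failure modes roughly like this:

	while (atomic_dec_not_zero(&endpoint->replenish_backlog)) {
		trans = ipa_endpoint_trans_alloc(endpoint, 1);
		if (!trans)
			break;			/* hardware is full */

		if (ipa_endpoint_replenish_one(endpoint, trans))
			goto try_again_later;	/* page alloc/map failed */

		if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH)
			endpoint->replenish_ready = 0;

		/* Ring the doorbell if we've got a full batch */
		gsi_trans_commit(trans, !endpoint->replenish_ready);
	}

and try_again_later now frees the unused transaction before clearing
the ACTIVE flag and fixing up the backlog.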

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 40 ++++++++++++++--------------------
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 274cf1c30b593..f5367b902c27c 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1036,24 +1036,19 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
 	iowrite32(val, ipa->reg_virt + offset);
 }
 
-static int
-ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint, bool doorbell)
+static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
+				      struct gsi_trans *trans)
 {
-	struct gsi_trans *trans;
 	struct page *page;
 	u32 buffer_size;
 	u32 offset;
 	u32 len;
 	int ret;
 
-	trans = ipa_endpoint_trans_alloc(endpoint, 1);
-	if (!trans)
-		return -ENOMEM;
-
 	buffer_size = endpoint->data->rx.buffer_size;
 	page = dev_alloc_pages(get_order(buffer_size));
 	if (!page)
-		goto err_trans_free;
+		return -ENOMEM;
 
 	/* Offset the buffer to make space for skb headroom */
 	offset = NET_SKB_PAD;
@@ -1061,19 +1056,11 @@ ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint, bool doorbell)
 
 	ret = gsi_trans_page_add(trans, page, len, offset);
 	if (ret)
-		goto err_free_pages;
-	trans->data = page;	/* transaction owns page now */
+		__free_pages(page, get_order(buffer_size));
+	else
+		trans->data = page;	/* transaction owns page now */
 
-	gsi_trans_commit(trans, doorbell);
-
-	return 0;
-
-err_free_pages:
-	__free_pages(page, get_order(buffer_size));
-err_trans_free:
-	gsi_trans_free(trans);
-
-	return -ENOMEM;
+	return ret;
 }
 
 /**
@@ -1089,6 +1076,7 @@ ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint, bool doorbell)
  */
 static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 {
+	struct gsi_trans *trans;
 	struct gsi *gsi;
 	u32 backlog;
 
@@ -1100,15 +1088,18 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 		return;
 
 	while (atomic_dec_not_zero(&endpoint->replenish_backlog)) {
-		bool doorbell;
+		trans = ipa_endpoint_trans_alloc(endpoint, 1);
+		if (!trans)
+			break;
+
+		if (ipa_endpoint_replenish_one(endpoint, trans))
+			goto try_again_later;
 
 		if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH)
 			endpoint->replenish_ready = 0;
 
 		/* Ring the doorbell if we've got a full batch */
-		doorbell = !endpoint->replenish_ready;
-		if (ipa_endpoint_replenish_one(endpoint, doorbell))
-			goto try_again_later;
+		gsi_trans_commit(trans, !endpoint->replenish_ready);
 	}
 
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
@@ -1116,6 +1107,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 	return;
 
 try_again_later:
+	gsi_trans_free(trans);
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
 	/* The last one didn't succeed, so fix the backlog */
-- 
2.32.0



* [PATCH net-next 06/10] net: ipa: don't use replenish_backlog
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Rather than determining when to stop replenishing using the
replenish backlog, just stop when we have exhausted all available
transactions.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index f5367b902c27c..fba8728ce12e3 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1087,11 +1087,8 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags))
 		return;
 
-	while (atomic_dec_not_zero(&endpoint->replenish_backlog)) {
-		trans = ipa_endpoint_trans_alloc(endpoint, 1);
-		if (!trans)
-			break;
-
+	while ((trans = ipa_endpoint_trans_alloc(endpoint, 1))) {
+		WARN_ON(!atomic_dec_not_zero(&endpoint->replenish_backlog));
 		if (ipa_endpoint_replenish_one(endpoint, trans))
 			goto try_again_later;
 
-- 
2.32.0



* [PATCH net-next 07/10] net: ipa: introduce gsi_channel_trans_idle()
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Create a new function that returns true if all transactions for a
channel are available for use.

Use it in ipa_endpoint_replenish_enable() to see whether to start
replenishing, and in ipa_endpoint_replenish() to determine whether,
after a failure, delayed work must be scheduled to ensure a future
replenish attempt occurs.
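
Condensed from the diff below, both call sites reduce to the same
one-line test:

	/* In ipa_endpoint_replenish_enable(): start replenishing if
	 * hardware currently has no buffers.
	 */
	if (gsi_channel_trans_idle(&endpoint->ipa->gsi, endpoint->channel_id))
		ipa_endpoint_replenish(endpoint);

	/* In the ipa_endpoint_replenish() failure path: */
	if (gsi_channel_trans_idle(&endpoint->ipa->gsi, endpoint->channel_id))
		schedule_delayed_work(&endpoint->replenish_work,
				      msecs_to_jiffies(1));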

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi_trans.c    | 11 +++++++++++
 drivers/net/ipa/gsi_trans.h    | 10 ++++++++++
 drivers/net/ipa/ipa_endpoint.c | 17 +++++------------
 3 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
index 1544564bc2835..87e1d43c118c1 100644
--- a/drivers/net/ipa/gsi_trans.c
+++ b/drivers/net/ipa/gsi_trans.c
@@ -320,6 +320,17 @@ gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count)
 	atomic_add(tre_count, &trans_info->tre_avail);
 }
 
+/* Return true if no transactions are allocated, false otherwise */
+bool gsi_channel_trans_idle(struct gsi *gsi, u32 channel_id)
+{
+	u32 tre_max = gsi_channel_tre_max(gsi, channel_id);
+	struct gsi_trans_info *trans_info;
+
+	trans_info = &gsi->channel[channel_id].trans_info;
+
+	return atomic_read(&trans_info->tre_avail) == tre_max;
+}
+
 /* Allocate a GSI transaction on a channel */
 struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
 					  u32 tre_count,
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
index 17fd1822d8a9f..af379b49299ee 100644
--- a/drivers/net/ipa/gsi_trans.h
+++ b/drivers/net/ipa/gsi_trans.h
@@ -129,6 +129,16 @@ void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
  */
 void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
 
+/**
+ * gsi_channel_trans_idle() - Return whether no transactions are allocated
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel the transaction is associated with
+ *
+ * Return:	True if no transactions are allocated, false otherwise
+ *
+ */
+bool gsi_channel_trans_idle(struct gsi *gsi, u32 channel_id);
+
 /**
  * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
  * @gsi:	GSI pointer
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index fba8728ce12e3..b854a39c69925 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1077,8 +1077,6 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
 static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 {
 	struct gsi_trans *trans;
-	struct gsi *gsi;
-	u32 backlog;
 
 	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags))
 		return;
@@ -1108,30 +1106,25 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
 	/* The last one didn't succeed, so fix the backlog */
-	backlog = atomic_inc_return(&endpoint->replenish_backlog);
+	atomic_inc(&endpoint->replenish_backlog);
 
 	/* Whenever a receive buffer transaction completes we'll try to
 	 * replenish again.  It's unlikely, but if we fail to supply even
 	 * one buffer, nothing will trigger another replenish attempt.
-	 * Receive buffer transactions use one TRE, so schedule work to
-	 * try replenishing again if our backlog is *all* available TREs.
+	 * If the hardware has no receive buffers queued, schedule work to
+	 * try replenishing again.
 	 */
-	gsi = &endpoint->ipa->gsi;
-	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
+	if (gsi_channel_trans_idle(&endpoint->ipa->gsi, endpoint->channel_id))
 		schedule_delayed_work(&endpoint->replenish_work,
 				      msecs_to_jiffies(1));
 }
 
 static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 {
-	struct gsi *gsi = &endpoint->ipa->gsi;
-	u32 max_backlog;
-
 	set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
 
 	/* Start replenishing if hardware currently has no buffers */
-	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
-	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
+	if (gsi_channel_trans_idle(&endpoint->ipa->gsi, endpoint->channel_id))
 		ipa_endpoint_replenish(endpoint);
 }
 
-- 
2.32.0



* [PATCH net-next 08/10] net: ipa: kill replenish_backlog
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

We no longer use the replenish_backlog atomic variable to decide
when we've got work to do providing receive buffers to hardware.
Basically, we try to keep the hardware as full as possible, all the
time.  We keep supplying buffers until the hardware has no more
space for them.

As a result, we can get rid of the replenish_backlog field and the
atomic operations performed on it.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 7 -------
 drivers/net/ipa/ipa_endpoint.h | 2 --
 2 files changed, 9 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index b854a39c69925..9d875126a360e 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1086,7 +1086,6 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 		return;
 
 	while ((trans = ipa_endpoint_trans_alloc(endpoint, 1))) {
-		WARN_ON(!atomic_dec_not_zero(&endpoint->replenish_backlog));
 		if (ipa_endpoint_replenish_one(endpoint, trans))
 			goto try_again_later;
 
@@ -1105,9 +1104,6 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 	gsi_trans_free(trans);
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 
-	/* The last one didn't succeed, so fix the backlog */
-	atomic_inc(&endpoint->replenish_backlog);
-
 	/* Whenever a receive buffer transaction completes we'll try to
 	 * replenish again.  It's unlikely, but if we fail to supply even
 	 * one buffer, nothing will trigger another replenish attempt.
@@ -1346,7 +1342,6 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
 	struct page *page;
 
 	ipa_endpoint_replenish(endpoint);
-	atomic_inc(&endpoint->replenish_backlog);
 
 	if (trans->cancelled)
 		return;
@@ -1693,8 +1688,6 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 		 */
 		clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
 		clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
-		atomic_set(&endpoint->replenish_backlog,
-			   gsi_channel_tre_max(gsi, endpoint->channel_id));
 		INIT_DELAYED_WORK(&endpoint->replenish_work,
 				  ipa_endpoint_replenish_work);
 	}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index c95816d882a74..9a37f9387f011 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -66,7 +66,6 @@ enum ipa_replenish_flag {
  * @netdev:		Network device pointer, if endpoint uses one
  * @replenish_flags:	Replenishing state flags
  * @replenish_ready:	Number of replenish transactions without doorbell
- * @replenish_backlog:	Number of buffers needed to fill hardware queue
  * @replenish_work:	Work item used for repeated replenish failures
  */
 struct ipa_endpoint {
@@ -86,7 +85,6 @@ struct ipa_endpoint {
 	/* Receive buffer replenishing for RX endpoints */
 	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
 	u32 replenish_ready;
-	atomic_t replenish_backlog;
 	struct delayed_work replenish_work;		/* global wq */
 };
 
-- 
2.32.0



* [PATCH net-next 09/10] net: ipa: replenish after delivering payload
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Replenishing is now solely driven by whether transactions are
available for a channel, and it doesn't really matter whether
we replenish before or after we deliver received packets to the
network stack.

Replenishing before delivering the payload adds a little latency.
Eliminate that by requesting a replenish after the payload is
delivered.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 9d875126a360e..a236edf5bf068 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1341,10 +1341,8 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
 {
 	struct page *page;
 
-	ipa_endpoint_replenish(endpoint);
-
 	if (trans->cancelled)
-		return;
+		goto done;
 
 	/* Parse or build a socket buffer using the actual received length */
 	page = trans->data;
@@ -1352,6 +1350,8 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
 		ipa_endpoint_status_parse(endpoint, page, trans->len);
 	else if (ipa_endpoint_skb_build(endpoint, page, trans->len))
 		trans->data = NULL;	/* Pages have been consumed */
+done:
+	ipa_endpoint_replenish(endpoint);
 }
 
 void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
-- 
2.32.0



* [PATCH net-next 10/10] net: ipa: determine replenish doorbell differently
From: Alex Elder @ 2022-02-03 17:09 UTC (permalink / raw)
  To: davem, kuba
  Cc: bjorn.andersson, mka, evgreen, cpratapa, avuyyuru, jponduru,
	subashab, elder, netdev, linux-arm-msm, linux-kernel

Rather than tracking the number of receive buffer transactions that
have been submitted without a doorbell, just track the total number
of transactions that have been issued.  Then ring the doorbell when
that number modulo the replenish batch size is 0.

The effect is roughly the same, but the new count is slightly more
interesting, and this approach will someday allow the replenish
batch size to be tuned at runtime.
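
With IPA_REPLENISH_BATCH still 16, the test added below rings the
doorbell on every 16th committed transaction:

	doorbell = !(++endpoint->replenish_count % IPA_REPLENISH_BATCH);

	/* replenish_count  1..15, 17..31, ...  ->  doorbell == false
	 * replenish_count  16, 32, 48, ...     ->  doorbell == true
	 */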

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 12 ++++++++----
 drivers/net/ipa/ipa_endpoint.h |  4 ++--
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index a236edf5bf068..888e94278a84f 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -25,7 +25,8 @@
 
 #define atomic_dec_not_zero(v)	atomic_add_unless((v), -1, 0)
 
-#define IPA_REPLENISH_BATCH	16
+/* Hardware is told about receive buffers once a "batch" has been queued */
+#define IPA_REPLENISH_BATCH	16		/* Must be non-zero */
 
 /* The amount of RX buffer space consumed by standard skb overhead */
 #define IPA_RX_BUFFER_OVERHEAD	(PAGE_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, 0))
@@ -1086,14 +1087,15 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
 		return;
 
 	while ((trans = ipa_endpoint_trans_alloc(endpoint, 1))) {
+		bool doorbell;
+
 		if (ipa_endpoint_replenish_one(endpoint, trans))
 			goto try_again_later;
 
-		if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH)
-			endpoint->replenish_ready = 0;
 
 		/* Ring the doorbell if we've got a full batch */
-		gsi_trans_commit(trans, !endpoint->replenish_ready);
+		doorbell = !(++endpoint->replenish_count % IPA_REPLENISH_BATCH);
+		gsi_trans_commit(trans, doorbell);
 	}
 
 	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
@@ -1863,6 +1865,8 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 	enum ipa_endpoint_name name;
 	u32 filter_map;
 
+	BUILD_BUG_ON(!IPA_REPLENISH_BATCH);
+
 	if (!ipa_endpoint_data_valid(ipa, count, data))
 		return 0;	/* Error */
 
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 9a37f9387f011..12fd5b16c18eb 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -65,7 +65,7 @@ enum ipa_replenish_flag {
  * @evt_ring_id:	GSI event ring used by the endpoint
  * @netdev:		Network device pointer, if endpoint uses one
  * @replenish_flags:	Replenishing state flags
- * @replenish_ready:	Number of replenish transactions without doorbell
+ * @replenish_count:	Total number of replenish transactions committed
  * @replenish_work:	Work item used for repeated replenish failures
  */
 struct ipa_endpoint {
@@ -84,7 +84,7 @@ struct ipa_endpoint {
 
 	/* Receive buffer replenishing for RX endpoints */
 	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
-	u32 replenish_ready;
+	u64 replenish_count;
 	struct delayed_work replenish_work;		/* global wq */
 };
 
-- 
2.32.0



* Re: [PATCH net-next 00/10] net: ipa: improve RX buffer replenishing
From: patchwork-bot+netdevbpf @ 2022-02-04 10:40 UTC (permalink / raw)
  To: Alex Elder
  Cc: davem, kuba, bjorn.andersson, mka, evgreen, cpratapa, avuyyuru,
	jponduru, subashab, elder, netdev, linux-arm-msm, linux-kernel

Hello:

This series was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Thu,  3 Feb 2022 11:09:17 -0600 you wrote:
> This series revises the algorithm used for replenishing receive
> buffers on RX endpoints.  Currently there are two atomic variables
> that track how many receive buffers can be sent to the hardware.
> The new algorithm obviates the need for those, by just assuming we
> always want to provide the hardware with buffers until it can hold
> no more.
> 
> [...]

Here is the summary with links:
  - [net-next,01/10] net: ipa: kill replenish_saved
    https://git.kernel.org/netdev/net-next/c/a9bec7ae70c1
  - [net-next,02/10] net: ipa: allocate transaction before pages when replenishing
    https://git.kernel.org/netdev/net-next/c/b4061c136b56
  - [net-next,03/10] net: ipa: increment backlog in replenish caller
    https://git.kernel.org/netdev/net-next/c/4b22d8419549
  - [net-next,04/10] net: ipa: decide on doorbell in replenish loop
    https://git.kernel.org/netdev/net-next/c/b9dbabc5ca84
  - [net-next,05/10] net: ipa: allocate transaction in replenish loop
    (no matching commit)
  - [net-next,06/10] net: ipa: don't use replenish_backlog
    https://git.kernel.org/netdev/net-next/c/d0ac30e74ea0
  - [net-next,07/10] net: ipa: introduce gsi_channel_trans_idle()
    https://git.kernel.org/netdev/net-next/c/5fc7f9ba2e51
  - [net-next,08/10] net: ipa: kill replenish_backlog
    https://git.kernel.org/netdev/net-next/c/09b337dedaca
  - [net-next,09/10] net: ipa: replenish after delivering payload
    https://git.kernel.org/netdev/net-next/c/5d6ac24fb10f
  - [net-next,10/10] net: ipa: determine replenish doorbell differently
    https://git.kernel.org/netdev/net-next/c/9654d8c462ce

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html


