* [PATCH net-next 0/4] s390/qeth: updates 2020-07-30
@ 2020-07-30 15:01 Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 1/4] s390/qeth: tolerate pre-filled RX buffer Julian Wiedmann
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-30 15:01 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski
  Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Karsten Graul,
	Julian Wiedmann

Hi Dave & Jakub,

please apply the following patch series for qeth to netdev's net-next tree.

This primarily brings some modernization to the RX path, laying the
groundwork for smarter RX refill policies.
Some of the patches are tagged as fixes, but really target only rare /
theoretical issues. So given where we are in the release cycle and that we
touch the main RX path, taking them through net-next seems more appropriate.

Thanks,
Julian

Julian Wiedmann (4):
  s390/qeth: tolerate pre-filled RX buffer
  s390/qeth: integrate RX refill worker with NAPI
  s390/qeth: don't process empty bridge port events
  s390/qeth: use all configured RX buffers

 drivers/s390/net/qeth_core.h      |  2 +-
 drivers/s390/net/qeth_core_main.c | 76 +++++++++++++++++--------------
 drivers/s390/net/qeth_l2_main.c   |  5 +-
 drivers/s390/net/qeth_l3_main.c   |  1 -
 4 files changed, 48 insertions(+), 36 deletions(-)

-- 
2.17.1



* [PATCH net-next 1/4] s390/qeth: tolerate pre-filled RX buffer
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
@ 2020-07-30 15:01 ` Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 2/4] s390/qeth: integrate RX refill worker with NAPI Julian Wiedmann
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-30 15:01 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski
  Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Karsten Graul,
	Julian Wiedmann

When preparing a buffer for RX refill, tolerate that it already has a
pool_entry attached. Otherwise we could easily leak such a pool_entry
when re-driving the RX refill after an error (e.g. from do_QDIO()).

This needs some minor adjustment in the code that drains RX buffer(s)
prior to RX refill and during teardown, so that ->pool_entry is NULLed
accordingly.
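
Roughly, the leak scenario looks like this (simplified pseudo-flow, not
verbatim driver code):

	/* refill attempt #1: */
	qeth_init_input_buffer(card, buf);	/* attaches pool_entry A */
	rc = do_QDIO(...);			/* error, buffer stays with us */

	/* refill attempt #2 re-drives the same buffer: */
	qeth_init_input_buffer(card, buf);	/* so far: grabbed a fresh
						 * pool_entry B and overwrote
						 * ->pool_entry, leaking A */

With this change the second call notices the already-attached
pool_entry and re-uses it instead of allocating a fresh one.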

Fixes: 4a71df50047f ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
---
 drivers/s390/net/qeth_core_main.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 8a76022fceda..b356ee5de9f8 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -204,12 +204,17 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
 void qeth_clear_working_pool_list(struct qeth_card *card)
 {
 	struct qeth_buffer_pool_entry *pool_entry, *tmp;
+	struct qeth_qdio_q *queue = card->qdio.in_q;
+	unsigned int i;
 
 	QETH_CARD_TEXT(card, 5, "clwrklst");
 	list_for_each_entry_safe(pool_entry, tmp,
 			    &card->qdio.in_buf_pool.entry_list, list){
 			list_del(&pool_entry->list);
 	}
+
+	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
+		queue->bufs[i].pool_entry = NULL;
 }
 EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list);
 
@@ -2965,7 +2970,7 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
 static int qeth_init_input_buffer(struct qeth_card *card,
 		struct qeth_qdio_buffer *buf)
 {
-	struct qeth_buffer_pool_entry *pool_entry;
+	struct qeth_buffer_pool_entry *pool_entry = buf->pool_entry;
 	int i;
 
 	if ((card->options.cq == QETH_CQ_ENABLED) && (!buf->rx_skb)) {
@@ -2976,9 +2981,13 @@ static int qeth_init_input_buffer(struct qeth_card *card,
 			return -ENOMEM;
 	}
 
-	pool_entry = qeth_find_free_buffer_pool_entry(card);
-	if (!pool_entry)
-		return -ENOBUFS;
+	if (!pool_entry) {
+		pool_entry = qeth_find_free_buffer_pool_entry(card);
+		if (!pool_entry)
+			return -ENOBUFS;
+
+		buf->pool_entry = pool_entry;
+	}
 
 	/*
 	 * since the buffer is accessed only from the input_tasklet
@@ -2986,8 +2995,6 @@ static int qeth_init_input_buffer(struct qeth_card *card,
 	 * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run  out off
 	 * buffers
 	 */
-
-	buf->pool_entry = pool_entry;
 	for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
 		buf->buffer->element[i].length = PAGE_SIZE;
 		buf->buffer->element[i].addr =
@@ -5771,6 +5778,7 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 		if (done) {
 			QETH_CARD_STAT_INC(card, rx_bufs);
 			qeth_put_buffer_pool_entry(card, buffer->pool_entry);
+			buffer->pool_entry = NULL;
 			qeth_queue_input_buffer(card, card->rx.b_index);
 			card->rx.b_count--;
 
-- 
2.17.1



* [PATCH net-next 2/4] s390/qeth: integrate RX refill worker with NAPI
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 1/4] s390/qeth: tolerate pre-filled RX buffer Julian Wiedmann
@ 2020-07-30 15:01 ` Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 3/4] s390/qeth: don't process empty bridge port events Julian Wiedmann
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-30 15:01 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski
  Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Karsten Graul,
	Julian Wiedmann

Running an RX refill outside of NAPI context is inherently racy, even
though the worker is only started for an entirely idle RX ring.
From the moment that the worker has replenished parts of the RX ring,
the HW can use those RX buffers, raise an IRQ and cause our NAPI code to
run concurrently to the RX refill worker.

Instead let the worker schedule our NAPI instance, and refill the RX
ring from there. Keeping accurate count of how many buffers still need
to be refilled also removes some quirky arithmetic from the low-level
code.
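
The race, roughly (simplified sequence, not a verbatim trace):

	worker:	qeth_init_input_buffer() on part of the idle ring
	worker:	do_QDIO()		/* HW now owns those buffers */
	HW:	fills a buffer, raises an IRQ
	NAPI:	qeth_poll() processes and requeues buffers
		-> now racing with the worker on the same ring state

After this patch the worker merely calls napi_schedule(), and the
actual refill runs in qeth_poll(), serialized with the rest of the RX
processing.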

Fixes: b333293058aa ("qeth: add support for af_iucv HiperSockets transport")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
---
 drivers/s390/net/qeth_core.h      |  2 +-
 drivers/s390/net/qeth_core_main.c | 40 +++++++++++++++++++------------
 drivers/s390/net/qeth_l2_main.c   |  1 -
 drivers/s390/net/qeth_l3_main.c   |  1 -
 4 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 3d54c8bbfa86..ecfd6d152e86 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -764,6 +764,7 @@ struct qeth_rx {
 	u8 buf_element;
 	int e_offset;
 	int qdio_err;
+	u8 bufs_refill;
 };
 
 struct carrier_info {
@@ -833,7 +834,6 @@ struct qeth_card {
 	struct napi_struct napi;
 	struct qeth_rx rx;
 	struct delayed_work buffer_reclaim_work;
-	int reclaim_index;
 	struct work_struct close_dev_work;
 };
 
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index b356ee5de9f8..c2e44853ac1a 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -3492,20 +3492,15 @@ static int qeth_check_qdio_errors(struct qeth_card *card,
 	return 0;
 }
 
-static void qeth_queue_input_buffer(struct qeth_card *card, int index)
+static unsigned int qeth_rx_refill_queue(struct qeth_card *card,
+					 unsigned int count)
 {
 	struct qeth_qdio_q *queue = card->qdio.in_q;
 	struct list_head *lh;
-	int count;
 	int i;
 	int rc;
 	int newcount = 0;
 
-	count = (index < queue->next_buf_to_init)?
-		card->qdio.in_buf_pool.buf_count -
-		(queue->next_buf_to_init - index) :
-		card->qdio.in_buf_pool.buf_count -
-		(queue->next_buf_to_init + QDIO_MAX_BUFFERS_PER_Q - index);
 	/* only requeue at a certain threshold to avoid SIGAs */
 	if (count >= QETH_IN_BUF_REQUEUE_THRESHOLD(card)) {
 		for (i = queue->next_buf_to_init;
@@ -3533,12 +3528,11 @@ static void qeth_queue_input_buffer(struct qeth_card *card, int index)
 				i++;
 			if (i == card->qdio.in_buf_pool.buf_count) {
 				QETH_CARD_TEXT(card, 2, "qsarbw");
-				card->reclaim_index = index;
 				schedule_delayed_work(
 					&card->buffer_reclaim_work,
 					QETH_RECLAIM_WORK_TIME);
 			}
-			return;
+			return 0;
 		}
 
 		/*
@@ -3555,7 +3549,10 @@ static void qeth_queue_input_buffer(struct qeth_card *card, int index)
 		}
 		queue->next_buf_to_init = QDIO_BUFNR(queue->next_buf_to_init +
 						     count);
+		return count;
 	}
+
+	return 0;
 }
 
 static void qeth_buffer_reclaim_work(struct work_struct *work)
@@ -3563,8 +3560,10 @@ static void qeth_buffer_reclaim_work(struct work_struct *work)
 	struct qeth_card *card = container_of(work, struct qeth_card,
 		buffer_reclaim_work.work);
 
-	QETH_CARD_TEXT_(card, 2, "brw:%x", card->reclaim_index);
-	qeth_queue_input_buffer(card, card->reclaim_index);
+	local_bh_disable();
+	napi_schedule(&card->napi);
+	/* kick-start the NAPI softirq: */
+	local_bh_enable();
 }
 
 static void qeth_handle_send_error(struct qeth_card *card,
@@ -5743,6 +5742,7 @@ static unsigned int qeth_extract_skbs(struct qeth_card *card, int budget,
 
 static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 {
+	struct qeth_rx *ctx = &card->rx;
 	unsigned int work_done = 0;
 
 	while (budget > 0) {
@@ -5779,8 +5779,10 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 			QETH_CARD_STAT_INC(card, rx_bufs);
 			qeth_put_buffer_pool_entry(card, buffer->pool_entry);
 			buffer->pool_entry = NULL;
-			qeth_queue_input_buffer(card, card->rx.b_index);
 			card->rx.b_count--;
+			ctx->bufs_refill++;
+			ctx->bufs_refill -= qeth_rx_refill_queue(card,
+								 ctx->bufs_refill);
 
 			/* Step forward to next buffer: */
 			card->rx.b_index = QDIO_BUFNR(card->rx.b_index + 1);
@@ -5820,9 +5822,16 @@ int qeth_poll(struct napi_struct *napi, int budget)
 	if (card->options.cq == QETH_CQ_ENABLED)
 		qeth_cq_poll(card);
 
-	/* Exhausted the RX budget. Keep IRQ disabled, we get called again. */
-	if (budget && work_done >= budget)
-		return work_done;
+	if (budget) {
+		struct qeth_rx *ctx = &card->rx;
+
+		/* Process any substantial refill backlog: */
+		ctx->bufs_refill -= qeth_rx_refill_queue(card, ctx->bufs_refill);
+
+		/* Exhausted the RX budget. Keep IRQ disabled, we get called again. */
+		if (work_done >= budget)
+			return work_done;
+	}
 
 	if (napi_complete_done(napi, work_done) &&
 	    qdio_start_irq(CARD_DDEV(card)))
@@ -7009,6 +7018,7 @@ int qeth_stop(struct net_device *dev)
 	}
 
 	napi_disable(&card->napi);
+	cancel_delayed_work_sync(&card->buffer_reclaim_work);
 	qdio_stop_irq(CARD_DDEV(card));
 
 	return 0;
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index ef7a2db7a724..38e97bbde9ed 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -285,7 +285,6 @@ static void qeth_l2_stop_card(struct qeth_card *card)
 	if (card->state == CARD_STATE_SOFTSETUP) {
 		qeth_clear_ipacmd_list(card);
 		qeth_drain_output_queues(card);
-		cancel_delayed_work_sync(&card->buffer_reclaim_work);
 		card->state = CARD_STATE_DOWN;
 	}
 
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 15a12487ff7a..fe44b0249e34 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -1169,7 +1169,6 @@ static void qeth_l3_stop_card(struct qeth_card *card)
 		qeth_l3_clear_ip_htable(card, 1);
 		qeth_clear_ipacmd_list(card);
 		qeth_drain_output_queues(card);
-		cancel_delayed_work_sync(&card->buffer_reclaim_work);
 		card->state = CARD_STATE_DOWN;
 	}
 
-- 
2.17.1



* [PATCH net-next 3/4] s390/qeth: don't process empty bridge port events
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 1/4] s390/qeth: tolerate pre-filled RX buffer Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 2/4] s390/qeth: integrate RX refill worker with NAPI Julian Wiedmann
@ 2020-07-30 15:01 ` Julian Wiedmann
  2020-07-30 15:01 ` [PATCH net-next 4/4] s390/qeth: use all configured RX buffers Julian Wiedmann
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-30 15:01 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski
  Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Karsten Graul,
	Julian Wiedmann

Discard events that don't contain any entries. This shouldn't happen,
but subsequent code relies on being able to use entry 0. So better
be safe than accessing garbage.

Fixes: b4d72c08b358 ("qeth: bridgeport support - basic control")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
---
 drivers/s390/net/qeth_l2_main.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index 38e97bbde9ed..8b342a88ff5c 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1140,6 +1140,10 @@ static void qeth_bridge_state_change(struct qeth_card *card,
 	int extrasize;
 
 	QETH_CARD_TEXT(card, 2, "brstchng");
+	if (qports->num_entries == 0) {
+		QETH_CARD_TEXT(card, 2, "BPempty");
+		return;
+	}
 	if (qports->entry_length != sizeof(struct qeth_sbp_port_entry)) {
 		QETH_CARD_TEXT_(card, 2, "BPsz%04x", qports->entry_length);
 		return;
-- 
2.17.1



* [PATCH net-next 4/4] s390/qeth: use all configured RX buffers
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
                   ` (2 preceding siblings ...)
  2020-07-30 15:01 ` [PATCH net-next 3/4] s390/qeth: don't process empty bridge port events Julian Wiedmann
@ 2020-07-30 15:01 ` Julian Wiedmann
  2020-07-30 23:37   ` Jakub Kicinski
  2020-07-30 23:39 ` [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Jakub Kicinski
  2020-07-31 23:44 ` David Miller
  5 siblings, 1 reply; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-30 15:01 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski
  Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Karsten Graul,
	Julian Wiedmann

The (misplaced) comment doesn't make any sense; keeping one RX buffer
uninitialized won't help with IRQ reduction.

So make the best use of all available RX buffers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
---
 drivers/s390/net/qeth_core_main.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index c2e44853ac1a..bba1b54b8aa3 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -3022,6 +3022,7 @@ static unsigned int qeth_tx_select_bulk_max(struct qeth_card *card,
 
 static int qeth_init_qdio_queues(struct qeth_card *card)
 {
+	unsigned int rx_bufs = card->qdio.in_buf_pool.buf_count;
 	unsigned int i;
 	int rc;
 
@@ -3033,16 +3034,14 @@ static int qeth_init_qdio_queues(struct qeth_card *card)
 
 	qeth_initialize_working_pool_list(card);
 	/*give only as many buffers to hardware as we have buffer pool entries*/
-	for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; i++) {
+	for (i = 0; i < rx_bufs; i++) {
 		rc = qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]);
 		if (rc)
 			return rc;
 	}
 
-	card->qdio.in_q->next_buf_to_init =
-		card->qdio.in_buf_pool.buf_count - 1;
-	rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0,
-		     card->qdio.in_buf_pool.buf_count - 1);
+	card->qdio.in_q->next_buf_to_init = QDIO_BUFNR(rx_bufs);
+	rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, rx_bufs);
 	if (rc) {
 		QETH_CARD_TEXT_(card, 2, "1err%d", rc);
 		return rc;
@@ -3535,13 +3534,6 @@ static unsigned int qeth_rx_refill_queue(struct qeth_card *card,
 			return 0;
 		}
 
-		/*
-		 * according to old code it should be avoided to requeue all
-		 * 128 buffers in order to benefit from PCI avoidance.
-		 * this function keeps at least one buffer (the buffer at
-		 * 'index') un-requeued -> this buffer is the first buffer that
-		 * will be requeued the next time
-		 */
 		rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0,
 			     queue->next_buf_to_init, count);
 		if (rc) {
-- 
2.17.1



* Re: [PATCH net-next 4/4] s390/qeth: use all configured RX buffers
  2020-07-30 15:01 ` [PATCH net-next 4/4] s390/qeth: use all configured RX buffers Julian Wiedmann
@ 2020-07-30 23:37   ` Jakub Kicinski
  2020-07-31  7:50     ` Julian Wiedmann
  0 siblings, 1 reply; 9+ messages in thread
From: Jakub Kicinski @ 2020-07-30 23:37 UTC (permalink / raw)
  To: Julian Wiedmann
  Cc: David Miller, netdev, linux-s390, Heiko Carstens, Ursula Braun,
	Karsten Graul

On Thu, 30 Jul 2020 17:01:21 +0200 Julian Wiedmann wrote:
> The (misplaced) comment doesn't make any sense; keeping one RX buffer
> uninitialized won't help with IRQ reduction.
> 
> So make the best use of all available RX buffers.

Often one entry in the ring is left free to make it easy to
differentiate between empty and full conditions. 

Is this not the reason here?
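
[For reference, that convention in its generic form - a plain ring
buffer sketch, nothing qeth-specific:

	#include <stdbool.h>

	#define RING_SIZE 128

	struct ring {
		unsigned int head;	/* next slot the producer fills  */
		unsigned int tail;	/* next slot the consumer drains */
	};

	static bool ring_empty(const struct ring *r)
	{
		return r->head == r->tail;
	}

	static bool ring_full(const struct ring *r)
	{
		/* one slot stays unused; otherwise full == empty */
		return (r->head + 1) % RING_SIZE == r->tail;
	}

The producer stops one slot short of the consumer, so head == tail can
only ever mean "empty".]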


* Re: [PATCH net-next 0/4] s390/qeth: updates 2020-07-30
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
                   ` (3 preceding siblings ...)
  2020-07-30 15:01 ` [PATCH net-next 4/4] s390/qeth: use all configured RX buffers Julian Wiedmann
@ 2020-07-30 23:39 ` Jakub Kicinski
  2020-07-31 23:44 ` David Miller
  5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2020-07-30 23:39 UTC (permalink / raw)
  To: Julian Wiedmann
  Cc: David Miller, netdev, linux-s390, Heiko Carstens, Ursula Braun,
	Karsten Graul

On Thu, 30 Jul 2020 17:01:17 +0200 Julian Wiedmann wrote:
> Hi Dave & Jakub,
> 
> please apply the following patch series for qeth to netdev's net-next tree.
> 
> This primarily brings some modernization to the RX path, laying the
> groundwork for smarter RX refill policies.
> Some of the patches are tagged as fixes, but really target only rare /
> theoretical issues. So given where we are in the release cycle and that we
> touch the main RX path, taking them through net-next seems more appropriate.

First 2 patches look good to me:

Reviewed-by: Jakub Kicinski <kuba@kernel.org>


* Re: [PATCH net-next 4/4] s390/qeth: use all configured RX buffers
  2020-07-30 23:37   ` Jakub Kicinski
@ 2020-07-31  7:50     ` Julian Wiedmann
  0 siblings, 0 replies; 9+ messages in thread
From: Julian Wiedmann @ 2020-07-31  7:50 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, netdev, linux-s390, Heiko Carstens, Ursula Braun,
	Karsten Graul

On 31.07.20 01:37, Jakub Kicinski wrote:
> On Thu, 30 Jul 2020 17:01:21 +0200 Julian Wiedmann wrote:
>> The (misplaced) comment doesn't make any sense; keeping one RX buffer
>> uninitialized won't help with IRQ reduction.
>>
>> So make the best use of all available RX buffers.
> 
> Often one entry in the ring is left free to make it easy to
> differentiate between empty and full conditions. 
> 
> Is this not the reason here?
> 

Hmm no, the HW architecture works slightly differently.

There's no index register that we could query for HW progress;
instead, each ring entry has an associated state byte that needs to be
inspected and indicates HW progress (among other things).

So this was more likely just a mis-interpretation of how the
(quirky) IRQ reduction mechanism works in HW, or maybe part of
a code path that got removed during the NAPI conversion.
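
To illustrate (hypothetical names and state values below - the real
QDIO state bytes and accessors differ):

	#define RING_SIZE 128

	enum buf_state {
		BUF_OWNED_BY_HW,	/* handed to HW, not filled yet  */
		BUF_FILLED_BY_HW,	/* HW stored a frame, ours again */
	};

	struct rx_ring {
		u8 state[RING_SIZE];	/* one state byte per ring entry */
		unsigned int next_to_check;
	};

	static unsigned int scan_ring(struct rx_ring *ring, unsigned int budget)
	{
		unsigned int done = 0;

		/* progress is discovered per buffer, not via a HW index: */
		while (done < budget &&
		       ring->state[ring->next_to_check] == BUF_FILLED_BY_HW) {
			/* ... extract skbs, refill, hand back to HW ... */
			ring->next_to_check = (ring->next_to_check + 1) % RING_SIZE;
			done++;
		}

		return done;
	}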


* Re: [PATCH net-next 0/4] s390/qeth: updates 2020-07-30
  2020-07-30 15:01 [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Julian Wiedmann
                   ` (4 preceding siblings ...)
  2020-07-30 23:39 ` [PATCH net-next 0/4] s390/qeth: updates 2020-07-30 Jakub Kicinski
@ 2020-07-31 23:44 ` David Miller
  5 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2020-07-31 23:44 UTC (permalink / raw)
  To: jwi; +Cc: kuba, netdev, linux-s390, hca, ubraun, kgraul

From: Julian Wiedmann <jwi@linux.ibm.com>
Date: Thu, 30 Jul 2020 17:01:17 +0200

> please apply the following patch series for qeth to netdev's net-next tree.
> 
> This primarily brings some modernization to the RX path, laying the
> groundwork for smarter RX refill policies.
> Some of the patches are tagged as fixes, but really target only rare /
> theoretical issues. So given where we are in the release cycle and that we
> touch the main RX path, taking them through net-next seems more appropriate.

Series applied to net-next, thanks.

