* [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock
@ 2019-06-24 16:14 Mike Marciniszyn
  2019-06-24 20:17   ` Sasha Levin
  0 siblings, 1 reply; 7+ messages in thread
From: Mike Marciniszyn @ 2019-06-24 16:14 UTC (permalink / raw)
  To: stable; +Cc: linux-rdma, stable-commits

commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.

Heavy contention of the sde flushlist_lock can cause hard lockups at
extreme scale when the flushing logic is under stress.

Mitigate by replacing the item-at-a-time copy to the local list with
an O(1) list_splice_init() and by using the high-priority work queue to
do the flushes.

Ported to linux-4.14.y.

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Cc: <stable@vger.kernel.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
 drivers/infiniband/hw/hfi1/sdma.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 6781bcd..ec613d5 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
 	sdma_flush_descq(sde);
 	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	/* copy flush list */
-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
-		list_del_init(&txp->list);
-		list_add_tail(&txp->list, &flushlist);
-	}
+	list_splice_init(&sde->flushlist, &flushlist);
 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	/* flush from flush list */
 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
@@ -2437,7 +2434,7 @@ int sdma_send_txreq(struct sdma_engine *sde,
 		wait->tx_count++;
 		wait->count += tx->num_desc;
 	}
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto unlock;
 nodesc:
@@ -2537,7 +2534,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait *wait,
 		}
 	}
 	spin_unlock(&sde->flushlist_lock);
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;
 nodesc:

* Re: [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock
  2019-06-24 16:14 [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock Mike Marciniszyn
@ 2019-06-24 20:17   ` Sasha Levin
  0 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-06-24 20:17 UTC (permalink / raw)
  To: Mike Marciniszyn; +Cc: stable, linux-rdma, stable-commits

On Mon, Jun 24, 2019 at 12:14:29PM -0400, Mike Marciniszyn wrote:
>commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.
>
>Heavy contention of the sde flushlist_lock can cause hard lockups at
>extreme scale when the flushing logic is under stress.
>
>Mitigate by replacing the item at a time copy to the local list with
>an O(1) list_splice_init() and using the high priority work queue to
>do the flushes.
>
>Ported to linux-4.14.y.

I've queued this one for 4.19 and 4.14, thank you.

* Re: [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock
  2019-06-24 15:56 Mike Marciniszyn
@ 2019-06-24 20:17   ` Sasha Levin
  0 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-06-24 20:17 UTC (permalink / raw)
  To: Mike Marciniszyn; +Cc: stable, linux-rdma, stable-commits

On Mon, Jun 24, 2019 at 11:56:02AM -0400, Mike Marciniszyn wrote:
>commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.
>
>Heavy contention of the sde flushlist_lock can cause hard lockups at
>extreme scale when the flushing logic is under stress.
>
>Mitigate by replacing the item at a time copy to the local list with
>an O(1) list_splice_init() and using the high priority work queue to
>do the flushes.
>
>Ported to linux-4.9.y.

I've queued this one for 4.9 and 4.4, thank you.

* [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock
@ 2019-06-24 16:26 Mike Marciniszyn
  0 siblings, 0 replies; 7+ messages in thread
From: Mike Marciniszyn @ 2019-06-24 16:26 UTC (permalink / raw)
  To: stable; +Cc: linux-rdma, stable-commits

commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.

Heavy contention of the sde flushlist_lock can cause hard lockups at
extreme scale when the flushing logic is under stress.

Mitigate by replacing the item-at-a-time copy to the local list with
an O(1) list_splice_init() and by using the high-priority work queue to
do the flushes.

Ported to linux-4.19.y.

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Cc: <stable@vger.kernel.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
 drivers/infiniband/hw/hfi1/sdma.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 88e326d..d648a41 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
 	sdma_flush_descq(sde);
 	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	/* copy flush list */
-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
-		list_del_init(&txp->list);
-		list_add_tail(&txp->list, &flushlist);
-	}
+	list_splice_init(&sde->flushlist, &flushlist);
 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	/* flush from flush list */
 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
@@ -2426,7 +2423,7 @@ int sdma_send_txreq(struct sdma_engine *sde,
 		wait->tx_count++;
 		wait->count += tx->num_desc;
 	}
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto unlock;
 nodesc:
@@ -2526,7 +2523,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait *wait,
 		}
 	}
 	spin_unlock(&sde->flushlist_lock);
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;
 nodesc:

* [PATCH] IB/hfi1: Avoid hardlockup with flushlist_lock
@ 2019-06-24 15:56 Mike Marciniszyn
  2019-06-24 20:17   ` Sasha Levin
  0 siblings, 1 reply; 7+ messages in thread
From: Mike Marciniszyn @ 2019-06-24 15:56 UTC (permalink / raw)
  To: stable; +Cc: linux-rdma, stable-commits

commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.

Heavy contention of the sde flushlist_lock can cause hard lockups at
extreme scale when the flushing logic is under stress.

Mitigate by replacing the item-at-a-time copy to the local list with
an O(1) list_splice_init() and by using the high-priority work queue to
do the flushes.

Ported to linux-4.9.y.

Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Cc: <stable@vger.kernel.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
 drivers/infiniband/hw/hfi1/sdma.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 9cbe52d..76e63c8 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
 	sdma_flush_descq(sde);
 	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	/* copy flush list */
-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
-		list_del_init(&txp->list);
-		list_add_tail(&txp->list, &flushlist);
-	}
+	list_splice_init(&sde->flushlist, &flushlist);
 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	/* flush from flush list */
 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
@@ -2406,7 +2403,7 @@ int sdma_send_txreq(struct sdma_engine *sde,
 		wait->tx_count++;
 		wait->count += tx->num_desc;
 	}
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto unlock;
 nodesc:
@@ -2504,7 +2501,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait *wait,
 		}
 	}
 	spin_unlock(&sde->flushlist_lock);
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;
 nodesc:
