All of lore.kernel.org
* [PATCH 00/11] SCSI smpboot thread conversion
@ 2016-03-11 15:28 Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean() Sebastian Andrzej Siewior
                   ` (10 more replies)
  0 siblings, 11 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi; +Cc: James E.J. Bottomley, Martin K. Petersen, rt

This series converts fcoe, bnx2i and bnx2fc to smpboot threads instead
of their own magic. The fcoe driver ended up with more patches than I
wanted, but that way it is easier to follow the individual code blocks
that were used in the final patch.

The overall diffstat:
  8 files changed, 253 insertions(+), 478 deletions(-)

Sebastian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 16:17   ` Christoph Hellwig
  2016-03-11 15:28 ` [PATCH 02/11] scsi/fcoe: remove CONFIG_SMP in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

for_each_possible_cpu() with a cpu_online() + `thread' check mostly does
the job. But there is a tiny race: say CPU5 is reported online but is
going down. After fcoe_percpu_clean() has seen that CPU5 is online, it
decides to enqueue a packet. By the time dev_alloc_skb() returns a skb,
that CPU is offline (or the notifier has already destroyed the kthread),
so we would oops because `thread' is NULL.
The alternative is to lock out CPU hotplug for the duration of the loop
(so no CPU can go away) and iterate over the online mask.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 0efe7112fc1f..2b0d207f4b2b 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -2461,12 +2461,10 @@ static void fcoe_percpu_clean(struct fc_lport *lport)
 	struct sk_buff *skb;
 	unsigned int cpu;
 
-	for_each_possible_cpu(cpu) {
+	get_online_cpus();
+	for_each_online_cpu(cpu) {
 		pp = &per_cpu(fcoe_percpu, cpu);
 
-		if (!pp->thread || !cpu_online(cpu))
-			continue;
-
 		skb = dev_alloc_skb(0);
 		if (!skb)
 			continue;
@@ -2481,6 +2479,7 @@ static void fcoe_percpu_clean(struct fc_lport *lport)
 
 		wait_for_completion(&fcoe_flush_completion);
 	}
+	put_online_cpus();
 }
 
 /**
-- 
2.7.0



* [PATCH 02/11] scsi/fcoe: remove CONFIG_SMP in fcoe_percpu_thread_destroy()
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean() Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 03/11] scsi/fcoe: drop locking in fcoe_percpu_thread_destroy() if cpu == targ_cpu Sebastian Andrzej Siewior
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

Ifdeffing out the !SMP code buys only a marginal win in code size; the
compiler should be able to optimize almost everything away on its own.

Remove the ifdeffery for readability's sake.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 2b0d207f4b2b..efbc8a1438ef 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1284,10 +1284,8 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 	struct task_struct *thread;
 	struct page *crc_eof;
 	struct sk_buff *skb;
-#ifdef CONFIG_SMP
 	struct fcoe_percpu_s *p0;
 	unsigned targ_cpu = get_cpu();
-#endif /* CONFIG_SMP */
 
 	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
 
@@ -1301,7 +1299,6 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 	p->crc_eof_offset = 0;
 	spin_unlock_bh(&p->fcoe_rx_list.lock);
 
-#ifdef CONFIG_SMP
 	/*
 	 * Don't bother moving the skb's if this context is running
 	 * on the same CPU that is having its thread destroyed. This
@@ -1343,16 +1340,6 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 		spin_unlock_bh(&p->fcoe_rx_list.lock);
 	}
 	put_cpu();
-#else
-	/*
-	 * This a non-SMP scenario where the singular Rx thread is
-	 * being removed. Free all skbs and stop the thread.
-	 */
-	spin_lock_bh(&p->fcoe_rx_list.lock);
-	while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-		kfree_skb(skb);
-	spin_unlock_bh(&p->fcoe_rx_list.lock);
-#endif
 
 	if (thread)
 		kthread_stop(thread);
-- 
2.7.0



* [PATCH 03/11] scsi/fcoe: drop locking in fcoe_percpu_thread_destroy() if cpu == targ_cpu
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean() Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 02/11] scsi/fcoe: remove CONFIG_SMP in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 04/11] scsi/fcoe: rename p0 to p_target in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

The locking here is not required. At the beginning of the function we
hold the lock and assign NULL to p->thread; other code paths check that
pointer to ensure that nobody adds new items to the list.
Also, in the cpu != targ_cpu case we don't hold p's lock either, so this
makes the two cases consistent.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index efbc8a1438ef..50e9e980563e 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1334,10 +1334,8 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 		 * will reach this case and we will drop all skbs and later
 		 * stop the thread.
 		 */
-		spin_lock_bh(&p->fcoe_rx_list.lock);
 		while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
 			kfree_skb(skb);
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
 	}
 	put_cpu();
 
-- 
2.7.0



* [PATCH 04/11] scsi/fcoe: rename p0 to p_target in fcoe_percpu_thread_destroy()
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (2 preceding siblings ...)
  2016-03-11 15:28 ` [PATCH 03/11] scsi/fcoe: drop locking in fcoe_percpu_thread_destroy() if cpu == targ_cpu Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 05/11] scsi/fcoe: drop the p_target lock earlier if there is no thread online Sebastian Andrzej Siewior
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

The `p' and `p0' variables have very short names and can easily get
mixed up. Thus I rename `p0' to `p_target' so it reads more clearly as
the target pointer than `p0' does.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 50e9e980563e..06f56b7f51c2 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1284,7 +1284,7 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 	struct task_struct *thread;
 	struct page *crc_eof;
 	struct sk_buff *skb;
-	struct fcoe_percpu_s *p0;
+	struct fcoe_percpu_s *p_target;
 	unsigned targ_cpu = get_cpu();
 
 	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
@@ -1305,15 +1305,15 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 	 * can easily happen when the module is removed.
 	 */
 	if (cpu != targ_cpu) {
-		p0 = &per_cpu(fcoe_percpu, targ_cpu);
-		spin_lock_bh(&p0->fcoe_rx_list.lock);
-		if (p0->thread) {
+		p_target = &per_cpu(fcoe_percpu, targ_cpu);
+		spin_lock_bh(&p_target->fcoe_rx_list.lock);
+		if (p_target->thread) {
 			FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
 				 cpu, targ_cpu);
 
 			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-				__skb_queue_tail(&p0->fcoe_rx_list, skb);
-			spin_unlock_bh(&p0->fcoe_rx_list.lock);
+				__skb_queue_tail(&p_target->fcoe_rx_list, skb);
+			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
 		} else {
 			/*
 			 * The targeted CPU is not initialized and cannot accept
@@ -1322,7 +1322,7 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 			 */
 			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
 				kfree_skb(skb);
-			spin_unlock_bh(&p0->fcoe_rx_list.lock);
+			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
 		}
 	} else {
 		/*
-- 
2.7.0



* [PATCH 05/11] scsi/fcoe: drop the p_target lock earlier if there is no thread online
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  2016-03-11 15:28 ` [PATCH 04/11] scsi/fcoe: rename p0 to p_target in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 06/11] scsi/fcoe: use skb_queue_splice_tail() instead of manual job Sebastian Andrzej Siewior
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

If the thread on the target CPU is not online then all skbs are freed.
There is no need to hold p_target's lock during that period.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 06f56b7f51c2..a065b31a7a02 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1320,9 +1320,10 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 			 * new	skbs. Unlock the targeted CPU and drop the skbs
 			 * on the CPU that is going offline.
 			 */
+			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
+
 			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
 				kfree_skb(skb);
-			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
 		}
 	} else {
 		/*
-- 
2.7.0



* [PATCH 06/11] scsi/fcoe: use skb_queue_splice_tail() instead of manual job
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (4 preceding siblings ...)
  2016-03-11 15:28 ` [PATCH 05/11] scsi/fcoe: drop the p_target lock earlier if there is no thread online Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:28 ` [PATCH 07/11] scsi/fcoe: drop the crc_eof page early Sebastian Andrzej Siewior
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

skb_queue_splice_tail() does the same thing as the loop with
__skb_dequeue() and __skb_queue_tail() that we open-code now.
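As a userspace analogue (this is not the kernel skb API; the linked-list
queue below is invented for illustration), a tail splice moves the whole
source queue to the destination's tail in O(1), yielding the same order
as the dequeue/enqueue loop. Note that skb_queue_splice_tail() does not
reinitialize the source list; the _init variant does, which matters only
when the source queue is reused afterwards:

```c
#include <stddef.h>

/* toy singly linked queue standing in for sk_buff_head */
struct node { struct node *next; int val; };
struct queue { struct node *head, *tail; };

static void enqueue_tail(struct queue *q, struct node *n)
{
	n->next = NULL;
	if (q->tail)
		q->tail->next = n;
	else
		q->head = n;
	q->tail = n;
}

/* move everything from src to dst's tail in O(1); here src is left
 * empty, mirroring the kernel's skb_queue_splice_tail_init() variant */
static void splice_tail(struct queue *src, struct queue *dst)
{
	if (!src->head)
		return;
	if (dst->tail)
		dst->tail->next = src->head;
	else
		dst->head = src->head;
	dst->tail = src->tail;
	src->head = src->tail = NULL;
}

/* build {1,2} and {3,4}, splice, verify order 1,2,3,4; 1 on success */
int splice_demo(void)
{
	struct node n1 = { .val = 1 }, n2 = { .val = 2 };
	struct node n3 = { .val = 3 }, n4 = { .val = 4 };
	struct queue dst = { 0 }, src = { 0 };
	struct node *n;
	int expect = 1;

	enqueue_tail(&dst, &n1);
	enqueue_tail(&dst, &n2);
	enqueue_tail(&src, &n3);
	enqueue_tail(&src, &n4);
	splice_tail(&src, &dst);

	if (src.head || src.tail)
		return 0;
	for (n = dst.head; n; n = n->next)
		if (n->val != expect++)
			return 0;
	return expect == 5;
}
```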

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index a065b31a7a02..15826094cc65 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1311,8 +1311,8 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 			FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
 				 cpu, targ_cpu);
 
-			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-				__skb_queue_tail(&p_target->fcoe_rx_list, skb);
+			skb_queue_splice_tail(&p->fcoe_rx_list,
+					      &p_target->fcoe_rx_list);
 			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
 		} else {
 			/*
-- 
2.7.0



* [PATCH 07/11] scsi/fcoe: drop the crc_eof page early
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (5 preceding siblings ...)
  2016-03-11 15:28 ` [PATCH 06/11] scsi/fcoe: use skb_queue_splice_tail() instead of manual job Sebastian Andrzej Siewior
@ 2016-03-11 15:28 ` Sebastian Andrzej Siewior
  2016-03-11 15:29 ` [PATCH 08/11] scsi/fcoe: convert to smpboot thread Sebastian Andrzej Siewior
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:28 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

On cleanup we free the crc_eof_page after all skbs are freed. There is
no reason why it can't be done earlier: we hold our own reference to
that page, and each skb holds its own as well, so the page stays alive
as long as any skb still needs it.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/fcoe/fcoe.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 15826094cc65..4a877ab95d44 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1299,6 +1299,8 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 	p->crc_eof_offset = 0;
 	spin_unlock_bh(&p->fcoe_rx_list.lock);
 
+	if (crc_eof)
+		put_page(crc_eof);
 	/*
 	 * Don't bother moving the skb's if this context is running
 	 * on the same CPU that is having its thread destroyed. This
@@ -1342,9 +1344,6 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 
 	if (thread)
 		kthread_stop(thread);
-
-	if (crc_eof)
-		put_page(crc_eof);
 }
 
 /**
-- 
2.7.0



* [PATCH 08/11] scsi/fcoe: convert to smpboot thread
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (6 preceding siblings ...)
  2016-03-11 15:28 ` [PATCH 07/11] scsi/fcoe: drop the crc_eof page early Sebastian Andrzej Siewior
@ 2016-03-11 15:29 ` Sebastian Andrzej Siewior
  2016-03-11 15:29 ` [PATCH 09/11] scsi: bnx2i: " Sebastian Andrzej Siewior
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:29 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, Vasu Dev, Christoph Hellwig,
	fcoe-devel

The driver creates its own per-CPU threads which are updated based on
CPU hotplug events. It is also possible to delegate this task to the
smpboot-thread infrastructure and get the same job done while saving a
few lines of code.

The code checked ->thread to decide if there is an active per-CPU
thread. With the smpboot infrastructure this is no longer possible, so I
replaced that logic with the new ->active member. The thread pointer is
now saved in `kthread' instead of `thread', so anything still trying to
use `thread' is caught by the compiler.

The ->park() callback cleans up the resources if a CPU is going down. At
least one CPU has to be online (and not parked), and the skbs are moved
to that CPU. On module removal ->cleanup() is invoked instead and all
skbs are purged.

The remaining part of the conversion is mostly straightforward.
This patch was only compile-tested due to -ENODEV.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/bnx2fc/bnx2fc_fcoe.c |   8 +-
 drivers/scsi/fcoe/fcoe.c          | 281 ++++++++++++++------------------------
 include/scsi/libfcoe.h            |   6 +-
 3 files changed, 112 insertions(+), 183 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index 67405c628864..f5bc11b2e884 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -457,7 +457,7 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,
 
 	__skb_queue_tail(&bg->fcoe_rx_list, skb);
 	if (bg->fcoe_rx_list.qlen == 1)
-		wake_up_process(bg->thread);
+		wake_up_process(bg->kthread);
 
 	spin_unlock(&bg->fcoe_rx_list.lock);
 
@@ -2654,7 +2654,7 @@ static int __init bnx2fc_mod_init(void)
 	}
 	wake_up_process(l2_thread);
 	spin_lock_bh(&bg->fcoe_rx_list.lock);
-	bg->thread = l2_thread;
+	bg->kthread = l2_thread;
 	spin_unlock_bh(&bg->fcoe_rx_list.lock);
 
 	for_each_possible_cpu(cpu) {
@@ -2727,8 +2727,8 @@ static void __exit bnx2fc_mod_exit(void)
 	/* Destroy global thread */
 	bg = &bnx2fc_global;
 	spin_lock_bh(&bg->fcoe_rx_list.lock);
-	l2_thread = bg->thread;
-	bg->thread = NULL;
+	l2_thread = bg->kthread;
+	bg->kthread = NULL;
 	while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL)
 		kfree_skb(skb);
 
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 4a877ab95d44..2bc570e96663 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -26,6 +26,7 @@
 #include <linux/if_vlan.h>
 #include <linux/crc32.h>
 #include <linux/slab.h>
+#include <linux/smpboot.h>
 #include <linux/cpu.h>
 #include <linux/fs.h>
 #include <linux/sysfs.h>
@@ -80,7 +81,6 @@ static int fcoe_reset(struct Scsi_Host *);
 static int fcoe_xmit(struct fc_lport *, struct fc_frame *);
 static int fcoe_rcv(struct sk_buff *, struct net_device *,
 		    struct packet_type *, struct net_device *);
-static int fcoe_percpu_receive_thread(void *);
 static void fcoe_percpu_clean(struct fc_lport *);
 static int fcoe_link_ok(struct fc_lport *);
 
@@ -107,7 +107,6 @@ static int fcoe_ddp_setup(struct fc_lport *, u16, struct scatterlist *,
 static int fcoe_ddp_done(struct fc_lport *, u16);
 static int fcoe_ddp_target(struct fc_lport *, u16, struct scatterlist *,
 			   unsigned int);
-static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *);
 static int fcoe_dcb_app_notification(struct notifier_block *notifier,
 				     ulong event, void *ptr);
 
@@ -136,11 +135,6 @@ static struct notifier_block fcoe_notifier = {
 	.notifier_call = fcoe_device_notification,
 };
 
-/* notification function for CPU hotplug events */
-static struct notifier_block fcoe_cpu_notifier = {
-	.notifier_call = fcoe_cpu_callback,
-};
-
 /* notification function for DCB events */
 static struct notifier_block dcb_notifier = {
 	.notifier_call = fcoe_dcb_app_notification,
@@ -1245,55 +1239,15 @@ static int __exit fcoe_if_exit(void)
 	return 0;
 }
 
-/**
- * fcoe_percpu_thread_create() - Create a receive thread for an online CPU
- * @cpu: The CPU index of the CPU to create a receive thread for
- */
-static void fcoe_percpu_thread_create(unsigned int cpu)
+static struct fcoe_percpu_s *fcoe_thread_cleanup_local(unsigned int cpu)
 {
-	struct fcoe_percpu_s *p;
-	struct task_struct *thread;
-
-	p = &per_cpu(fcoe_percpu, cpu);
-
-	thread = kthread_create_on_node(fcoe_percpu_receive_thread,
-					(void *)p, cpu_to_node(cpu),
-					"fcoethread/%d", cpu);
-
-	if (likely(!IS_ERR(thread))) {
-		kthread_bind(thread, cpu);
-		wake_up_process(thread);
-
-		spin_lock_bh(&p->fcoe_rx_list.lock);
-		p->thread = thread;
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
-	}
-}
-
-/**
- * fcoe_percpu_thread_destroy() - Remove the receive thread of a CPU
- * @cpu: The CPU index of the CPU whose receive thread is to be destroyed
- *
- * Destroys a per-CPU Rx thread. Any pending skbs are moved to the
- * current CPU's Rx thread. If the thread being destroyed is bound to
- * the CPU processing this context the skbs will be freed.
- */
-static void fcoe_percpu_thread_destroy(unsigned int cpu)
-{
-	struct fcoe_percpu_s *p;
-	struct task_struct *thread;
 	struct page *crc_eof;
-	struct sk_buff *skb;
-	struct fcoe_percpu_s *p_target;
-	unsigned targ_cpu = get_cpu();
-
-	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
+	struct fcoe_percpu_s *p;
 
 	/* Prevent any new skbs from being queued for this CPU. */
-	p = &per_cpu(fcoe_percpu, cpu);
+	p = per_cpu_ptr(&fcoe_percpu, cpu);
 	spin_lock_bh(&p->fcoe_rx_list.lock);
-	thread = p->thread;
-	p->thread = NULL;
+	p->active = false;
 	crc_eof = p->crc_eof_page;
 	p->crc_eof_page = NULL;
 	p->crc_eof_offset = 0;
@@ -1301,81 +1255,56 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
 
 	if (crc_eof)
 		put_page(crc_eof);
-	/*
-	 * Don't bother moving the skb's if this context is running
-	 * on the same CPU that is having its thread destroyed. This
-	 * can easily happen when the module is removed.
-	 */
-	if (cpu != targ_cpu) {
-		p_target = &per_cpu(fcoe_percpu, targ_cpu);
-		spin_lock_bh(&p_target->fcoe_rx_list.lock);
-		if (p_target->thread) {
-			FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
-				 cpu, targ_cpu);
-
-			skb_queue_splice_tail(&p->fcoe_rx_list,
-					      &p_target->fcoe_rx_list);
-			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
-		} else {
-			/*
-			 * The targeted CPU is not initialized and cannot accept
-			 * new	skbs. Unlock the targeted CPU and drop the skbs
-			 * on the CPU that is going offline.
-			 */
-			spin_unlock_bh(&p_target->fcoe_rx_list.lock);
-
-			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-				kfree_skb(skb);
-		}
-	} else {
-		/*
-		 * This scenario occurs when the module is being removed
-		 * and all threads are being destroyed. skbs will continue
-		 * to be shifted from the CPU thread that is being removed
-		 * to the CPU thread associated with the CPU that is processing
-		 * the module removal. Once there is only one CPU Rx thread it
-		 * will reach this case and we will drop all skbs and later
-		 * stop the thread.
-		 */
-		while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-			kfree_skb(skb);
-	}
-	put_cpu();
-
-	if (thread)
-		kthread_stop(thread);
+	return p;
 }
 
 /**
- * fcoe_cpu_callback() - Handler for CPU hotplug events
- * @nfb:    The callback data block
- * @action: The event triggering the callback
- * @hcpu:   The index of the CPU that the event is for
+ * fcoe_thread_park() - Park the receive thread of a CPU
+ * @cpu: The CPU index of the CPU whose receive thread is to be parked
  *
- * This creates or destroys per-CPU data for fcoe
- *
- * Returns NOTIFY_OK always.
+ * Parks the per-CPU Rx thread. Any pending skbs are moved to the
+ * first online CPU's Rx thread.
  */
-static int fcoe_cpu_callback(struct notifier_block *nfb,
-			     unsigned long action, void *hcpu)
+static void fcoe_thread_park(unsigned int cpu)
 {
-	unsigned cpu = (unsigned long)hcpu;
+	struct fcoe_percpu_s *p;
+	struct fcoe_percpu_s *p_target;
+	unsigned int targ_cpu = cpumask_any_but(cpu_online_mask, cpu);
 
-	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-		FCOE_DBG("CPU %x online: Create Rx thread\n", cpu);
-		fcoe_percpu_thread_create(cpu);
-		break;
-	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
-		FCOE_DBG("CPU %x offline: Remove Rx thread\n", cpu);
-		fcoe_percpu_thread_destroy(cpu);
-		break;
-	default:
-		break;
-	}
-	return NOTIFY_OK;
+	FCOE_DBG("Parking receive thread for CPU %d\n", cpu);
+
+	p = fcoe_thread_cleanup_local(cpu);
+
+	p_target = &per_cpu(fcoe_percpu, targ_cpu);
+	spin_lock_bh(&p_target->fcoe_rx_list.lock);
+	BUG_ON(!p_target->active);
+	FCOE_DBG("Moving frames from CPU %u to CPU %u\n", cpu, targ_cpu);
+
+	skb_queue_splice_tail(&p->fcoe_rx_list,
+			      &p_target->fcoe_rx_list);
+	spin_unlock_bh(&p_target->fcoe_rx_list.lock);
+}
+
+/**
+ * fcoe_thread_cleanup() - Cleanup the receive thread of a CPU
+ * @cpu: The CPU index of the CPU whose receive thread is to be cleaned up
+ * @online: true if the CPU is still online.
+ *
+ * Cleans up the per-CPU Rx thread. Any pending skbs are freed because this
+ * module will be removed. If the CPU is not online then it was parked and
+ * there are not resources bound to this per-CPU structure.
+ */
+static void fcoe_thread_cleanup(unsigned int cpu, bool online)
+{
+	struct fcoe_percpu_s *p;
+	struct sk_buff *skb;
+
+	if (!online)
+		return;
+	p = fcoe_thread_cleanup_local(cpu);
+
+	while ((skb = __skb_dequeue(&p->fcoe_rx_list)))
+		kfree_skb(skb);
 }
 
 /**
@@ -1494,7 +1423,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 
 	fps = &per_cpu(fcoe_percpu, cpu);
 	spin_lock(&fps->fcoe_rx_list.lock);
-	if (unlikely(!fps->thread)) {
+	if (unlikely(!fps->active)) {
 		/*
 		 * The targeted CPU is not ready, let's target
 		 * the first CPU now. For non-SMP systems this
@@ -1508,7 +1437,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 		cpu = cpumask_first(cpu_online_mask);
 		fps = &per_cpu(fcoe_percpu, cpu);
 		spin_lock(&fps->fcoe_rx_list.lock);
-		if (!fps->thread) {
+		if (!fps->active) {
 			spin_unlock(&fps->fcoe_rx_list.lock);
 			goto err;
 		}
@@ -1528,8 +1457,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	 * in softirq context.
 	 */
 	__skb_queue_tail(&fps->fcoe_rx_list, skb);
-	if (fps->thread->state == TASK_INTERRUPTIBLE)
-		wake_up_process(fps->thread);
+	wake_up_process(per_cpu_ptr(fcoe_percpu.kthread, cpu));
 	spin_unlock(&fps->fcoe_rx_list.lock);
 
 	return NET_RX_SUCCESS;
@@ -1842,40 +1770,42 @@ static void fcoe_recv_frame(struct sk_buff *skb)
 }
 
 /**
- * fcoe_percpu_receive_thread() - The per-CPU packet receive thread
- * @arg: The per-CPU context
+ * fcoe_thread_receive() - The per-CPU packet receive thread
+ * @arg: The CPU number
  *
- * Return: 0 for success
  */
-static int fcoe_percpu_receive_thread(void *arg)
+static void fcoe_thread_receive(unsigned int cpu)
 {
-	struct fcoe_percpu_s *p = arg;
+	struct fcoe_percpu_s *p = per_cpu_ptr(&fcoe_percpu, cpu);
 	struct sk_buff *skb;
 	struct sk_buff_head tmp;
 
 	skb_queue_head_init(&tmp);
 
-	set_user_nice(current, MIN_NICE);
+	spin_lock_bh(&p->fcoe_rx_list.lock);
+	skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
+	spin_unlock_bh(&p->fcoe_rx_list.lock);
 
-	while (!kthread_should_stop()) {
+	if (!skb_queue_len(&tmp))
+		return;
 
-		spin_lock_bh(&p->fcoe_rx_list.lock);
-		skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
+	while ((skb = __skb_dequeue(&tmp)))
+		fcoe_recv_frame(skb);
 
-		if (!skb_queue_len(&tmp)) {
-			set_current_state(TASK_INTERRUPTIBLE);
-			spin_unlock_bh(&p->fcoe_rx_list.lock);
-			schedule();
-			continue;
-		}
+	return;
+}
 
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
+static int fcoe_thread_should_run(unsigned int cpu)
+{
+	struct fcoe_percpu_s *p = per_cpu_ptr(&fcoe_percpu, cpu);
 
-		while ((skb = __skb_dequeue(&tmp)) != NULL)
-			fcoe_recv_frame(skb);
-
-	}
-	return 0;
+	/*
+	 * Lockless peek on the list to see if it is empty. Real check happens
+	 * in fcoe_thread_receive().
+	 */
+	if (skb_queue_empty(&p->fcoe_rx_list))
+		return 0;
+	return 1;
 }
 
 /**
@@ -2459,7 +2389,7 @@ static void fcoe_percpu_clean(struct fc_lport *lport)
 		spin_lock_bh(&pp->fcoe_rx_list.lock);
 		__skb_queue_tail(&pp->fcoe_rx_list, skb);
 		if (pp->fcoe_rx_list.qlen == 1)
-			wake_up_process(pp->thread);
+			wake_up_process(per_cpu_ptr(fcoe_percpu.kthread, cpu));
 		spin_unlock_bh(&pp->fcoe_rx_list.lock);
 
 		wait_for_completion(&fcoe_flush_completion);
@@ -2583,6 +2513,32 @@ static struct fcoe_transport fcoe_sw_transport = {
 	.disable = fcoe_disable,
 };
 
+static void fcoe_thread_setup(unsigned int cpu)
+{
+	struct fcoe_percpu_s *p = per_cpu_ptr(&fcoe_percpu, cpu);
+
+	set_user_nice(current, MIN_NICE);
+	skb_queue_head_init(&p->fcoe_rx_list);
+}
+
+static void fcoe_thread_unpark(unsigned int cpu)
+{
+	struct fcoe_percpu_s *p = per_cpu_ptr(&fcoe_percpu, cpu);
+
+	p->active = true;
+}
+
+static struct smp_hotplug_thread fcoe_threads = {
+	.store			= &fcoe_percpu.kthread,
+	.setup			= fcoe_thread_setup,
+	.cleanup		= fcoe_thread_cleanup,
+	.thread_should_run      = fcoe_thread_should_run,
+	.thread_fn              = fcoe_thread_receive,
+	.park			= fcoe_thread_park,
+	.unpark			= fcoe_thread_unpark,
+	.thread_comm            = "fcoethread/%u",
+};
+
 /**
  * fcoe_init() - Initialize fcoe.ko
  *
@@ -2590,8 +2546,6 @@ static struct fcoe_transport fcoe_sw_transport = {
  */
 static int __init fcoe_init(void)
 {
-	struct fcoe_percpu_s *p;
-	unsigned int cpu;
 	int rc = 0;
 
 	fcoe_wq = alloc_workqueue("fcoe", 0, 0);
@@ -2608,22 +2562,7 @@ static int __init fcoe_init(void)
 
 	mutex_lock(&fcoe_config_mutex);
 
-	for_each_possible_cpu(cpu) {
-		p = &per_cpu(fcoe_percpu, cpu);
-		skb_queue_head_init(&p->fcoe_rx_list);
-	}
-
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		fcoe_percpu_thread_create(cpu);
-
-	/* Initialize per CPU interrupt thread */
-	rc = __register_hotcpu_notifier(&fcoe_cpu_notifier);
-	if (rc)
-		goto out_free;
-
-	cpu_notifier_register_done();
+	smpboot_register_percpu_thread(&fcoe_threads);
 
 	/* Setup link change notification */
 	fcoe_dev_setup();
@@ -2636,11 +2575,7 @@ static int __init fcoe_init(void)
 	return 0;
 
 out_free:
-	for_each_online_cpu(cpu) {
-		fcoe_percpu_thread_destroy(cpu);
-	}
-
-	cpu_notifier_register_done();
+	smpboot_unregister_percpu_thread(&fcoe_threads);
 
 	mutex_unlock(&fcoe_config_mutex);
 	destroy_workqueue(fcoe_wq);
@@ -2658,7 +2593,6 @@ static void __exit fcoe_exit(void)
 	struct fcoe_interface *fcoe, *tmp;
 	struct fcoe_ctlr *ctlr;
 	struct fcoe_port *port;
-	unsigned int cpu;
 
 	mutex_lock(&fcoe_config_mutex);
 
@@ -2674,14 +2608,7 @@ static void __exit fcoe_exit(void)
 	}
 	rtnl_unlock();
 
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		fcoe_percpu_thread_destroy(cpu);
-
-	__unregister_hotcpu_notifier(&fcoe_cpu_notifier);
-
-	cpu_notifier_register_done();
+	smpboot_unregister_percpu_thread(&fcoe_threads);
 
 	mutex_unlock(&fcoe_config_mutex);
 
diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
index de7e3ee60f0c..74ea1ea9c1f6 100644
--- a/include/scsi/libfcoe.h
+++ b/include/scsi/libfcoe.h
@@ -319,17 +319,19 @@ struct fcoe_transport {
 
 /**
  * struct fcoe_percpu_s - The context for FCoE receive thread(s)
- * @thread:	    The thread context
+ * @kthread:	    The thread context of the smp_hotplug_thread
  * @fcoe_rx_list:   The queue of pending packets to process
  * @page:	    The memory page for calculating frame trailer CRCs
  * @crc_eof_offset: The offset into the CRC page pointing to available
  *		    memory for a new trailer
+ * @active:	    true if the queue is active and not being removed
  */
 struct fcoe_percpu_s {
-	struct task_struct *thread;
+	struct task_struct *kthread;
 	struct sk_buff_head fcoe_rx_list;
 	struct page *crc_eof_page;
 	int crc_eof_offset;
+	bool active;
 };
 
 /**
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 09/11] scsi: bnx2i: convert to smpboot thread
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (7 preceding siblings ...)
  2016-03-11 15:29 ` [PATCH 08/11] scsi/fcoe: convert to smpboot thread Sebastian Andrzej Siewior
@ 2016-03-11 15:29 ` Sebastian Andrzej Siewior
  2016-03-11 15:29 ` [PATCH 10/11] scsi: bnx2fc: fix hotplug race in bnx2fc_process_new_cqes() Sebastian Andrzej Siewior
  2016-03-11 15:29 ` [PATCH 11/11] scsi: bnx2fc: convert to smpboot thread Sebastian Andrzej Siewior
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:29 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, QLogic-Storage-Upstream,
	Christoph Hellwig

The driver creates its own per-CPU threads which are updated based on
CPU hotplug events. It is also possible to delegate this task to the
smpboot-thread infrastructure and get the same job done while saving a
few lines of code.
The code checked ->iothread to decide if there is an active per-CPU
thread. With the smpboot infrastructure this is no longer possible and I
replaced its logic with the ->active member. The thread pointer is saved
in `kthread' instead of `iothread' so anything still trying to use
`iothread' is caught by the compiler.

The remaining part of the conversion is mostly straightforward.
This patch was only compile-tested due to -ENODEV.

Cc: QLogic-Storage-Upstream@qlogic.com
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/bnx2i/bnx2i.h      |   4 +-
 drivers/scsi/bnx2i/bnx2i_hwi.c  |  45 +----------
 drivers/scsi/bnx2i/bnx2i_init.c | 162 ++++++++++++++++------------------------
 3 files changed, 69 insertions(+), 142 deletions(-)

diff --git a/drivers/scsi/bnx2i/bnx2i.h b/drivers/scsi/bnx2i/bnx2i.h
index ed7f3228e234..2bbccf571a18 100644
--- a/drivers/scsi/bnx2i/bnx2i.h
+++ b/drivers/scsi/bnx2i/bnx2i.h
@@ -775,9 +775,10 @@ struct bnx2i_work {
 };
 
 struct bnx2i_percpu_s {
-	struct task_struct *iothread;
+	struct task_struct *kthread;
 	struct list_head work_list;
 	spinlock_t p_work_lock;
+	bool active;
 };
 
 
@@ -875,7 +876,6 @@ extern void bnx2i_print_active_cmd_queue(struct bnx2i_conn *conn);
 extern void bnx2i_print_xmit_pdu_queue(struct bnx2i_conn *conn);
 extern void bnx2i_print_recv_state(struct bnx2i_conn *conn);
 
-extern int bnx2i_percpu_io_thread(void *arg);
 extern int bnx2i_process_scsi_cmd_resp(struct iscsi_session *session,
 				       struct bnx2i_conn *bnx2i_conn,
 				       struct cqe *cqe);
diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
index fb072cc5e9fd..ec3969732846 100644
--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
+++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
@@ -1860,47 +1860,6 @@ static void bnx2i_process_cmd_cleanup_resp(struct iscsi_session *session,
 	complete(&bnx2i_conn->cmd_cleanup_cmpl);
 }
 
-
-/**
- * bnx2i_percpu_io_thread - thread per cpu for ios
- *
- * @arg:	ptr to bnx2i_percpu_info structure
- */
-int bnx2i_percpu_io_thread(void *arg)
-{
-	struct bnx2i_percpu_s *p = arg;
-	struct bnx2i_work *work, *tmp;
-	LIST_HEAD(work_list);
-
-	set_user_nice(current, MIN_NICE);
-
-	while (!kthread_should_stop()) {
-		spin_lock_bh(&p->p_work_lock);
-		while (!list_empty(&p->work_list)) {
-			list_splice_init(&p->work_list, &work_list);
-			spin_unlock_bh(&p->p_work_lock);
-
-			list_for_each_entry_safe(work, tmp, &work_list, list) {
-				list_del_init(&work->list);
-				/* work allocated in the bh, freed here */
-				bnx2i_process_scsi_cmd_resp(work->session,
-							    work->bnx2i_conn,
-							    &work->cqe);
-				atomic_dec(&work->bnx2i_conn->work_cnt);
-				kfree(work);
-			}
-			spin_lock_bh(&p->p_work_lock);
-		}
-		set_current_state(TASK_INTERRUPTIBLE);
-		spin_unlock_bh(&p->p_work_lock);
-		schedule();
-	}
-	__set_current_state(TASK_RUNNING);
-
-	return 0;
-}
-
-
 /**
  * bnx2i_queue_scsi_cmd_resp - queue cmd completion to the percpu thread
  * @bnx2i_conn:		bnx2i connection
@@ -1941,7 +1900,7 @@ static int bnx2i_queue_scsi_cmd_resp(struct iscsi_session *session,
 
 	p = &per_cpu(bnx2i_percpu, cpu);
 	spin_lock(&p->p_work_lock);
-	if (unlikely(!p->iothread)) {
+	if (unlikely(!p->active)) {
 		rc = -EINVAL;
 		goto err;
 	}
@@ -1954,7 +1913,7 @@ static int bnx2i_queue_scsi_cmd_resp(struct iscsi_session *session,
 		memcpy(&bnx2i_work->cqe, cqe, sizeof(struct cqe));
 		list_add_tail(&bnx2i_work->list, &p->work_list);
 		atomic_inc(&bnx2i_conn->work_cnt);
-		wake_up_process(p->iothread);
+		wake_up_process(p->kthread);
 		spin_unlock(&p->p_work_lock);
 		goto done;
 	} else
diff --git a/drivers/scsi/bnx2i/bnx2i_init.c b/drivers/scsi/bnx2i/bnx2i_init.c
index c8b410c24cf0..4ec8c552cc95 100644
--- a/drivers/scsi/bnx2i/bnx2i_init.c
+++ b/drivers/scsi/bnx2i/bnx2i_init.c
@@ -13,8 +13,8 @@
  * Previously Maintained by: Eddie Wai (eddie.wai@broadcom.com)
  * Maintained by: QLogic-Storage-Upstream@qlogic.com
  */
-
 #include "bnx2i.h"
+#include <linux/smpboot.h>
 
 static struct list_head adapter_list = LIST_HEAD_INIT(adapter_list);
 static u32 adapter_count;
@@ -70,14 +70,6 @@ u64 iscsi_error_mask = 0x00;
 
 DEFINE_PER_CPU(struct bnx2i_percpu_s, bnx2i_percpu);
 
-static int bnx2i_cpu_callback(struct notifier_block *nfb,
-			      unsigned long action, void *hcpu);
-/* notification function for CPU hotplug events */
-static struct notifier_block bnx2i_cpu_notifier = {
-	.notifier_call = bnx2i_cpu_callback,
-};
-
-
 /**
  * bnx2i_identify_device - identifies NetXtreme II device type
  * @hba: 		Adapter structure pointer
@@ -410,92 +402,95 @@ int bnx2i_get_stats(void *handle)
 	return 0;
 }
 
-
-/**
- * bnx2i_percpu_thread_create - Create a receive thread for an
- *				online CPU
- *
- * @cpu:	cpu index for the online cpu
- */
-static void bnx2i_percpu_thread_create(unsigned int cpu)
+static void bnx2i_thread_io_process(unsigned int cpu)
 {
-	struct bnx2i_percpu_s *p;
-	struct task_struct *thread;
+	struct bnx2i_percpu_s *p = per_cpu_ptr(&bnx2i_percpu, cpu);
+	struct bnx2i_work *work, *tmp;
+	LIST_HEAD(work_list);
 
-	p = &per_cpu(bnx2i_percpu, cpu);
+	spin_lock_bh(&p->p_work_lock);
+	while (!list_empty(&p->work_list)) {
+		list_splice_init(&p->work_list, &work_list);
+		spin_unlock_bh(&p->p_work_lock);
 
-	thread = kthread_create_on_node(bnx2i_percpu_io_thread, (void *)p,
-					cpu_to_node(cpu),
-					"bnx2i_thread/%d", cpu);
-	/* bind thread to the cpu */
-	if (likely(!IS_ERR(thread))) {
-		kthread_bind(thread, cpu);
-		p->iothread = thread;
-		wake_up_process(thread);
+		list_for_each_entry_safe(work, tmp, &work_list, list) {
+			list_del_init(&work->list);
+			/* work allocated in the bh, freed here */
+			bnx2i_process_scsi_cmd_resp(work->session,
+						    work->bnx2i_conn,
+						    &work->cqe);
+			atomic_dec(&work->bnx2i_conn->work_cnt);
+			kfree(work);
+		}
+		spin_lock_bh(&p->p_work_lock);
 	}
+	spin_unlock_bh(&p->p_work_lock);
 }
 
+static void bnx2i_thread_setup(unsigned int cpu)
+{
+	struct bnx2i_percpu_s *p = per_cpu_ptr(&bnx2i_percpu, cpu);
 
-static void bnx2i_percpu_thread_destroy(unsigned int cpu)
+	set_user_nice(current, MIN_NICE);
+	INIT_LIST_HEAD(&p->work_list);
+	spin_lock_init(&p->p_work_lock);
+}
+
+static int bnx2i_thread_should_run(unsigned int cpu)
+{
+	struct bnx2i_percpu_s *p = per_cpu_ptr(&bnx2i_percpu, cpu);
+	/*
+	 * A lockless peek at the list. The real check is done in
+	 * bnx2i_thread_io_process()
+	 */
+	return !list_empty(&p->work_list);
+}
+
+static void bnx2i_thread_park(unsigned int cpu)
 {
 	struct bnx2i_percpu_s *p;
-	struct task_struct *thread;
 	struct bnx2i_work *work, *tmp;
 
 	/* Prevent any new work from being queued for this CPU */
 	p = &per_cpu(bnx2i_percpu, cpu);
 	spin_lock_bh(&p->p_work_lock);
-	thread = p->iothread;
-	p->iothread = NULL;
+	p->active = false;
 
 	/* Free all work in the list */
 	list_for_each_entry_safe(work, tmp, &p->work_list, list) {
 		list_del_init(&work->list);
-		bnx2i_process_scsi_cmd_resp(work->session,
-					    work->bnx2i_conn, &work->cqe);
+		bnx2i_process_scsi_cmd_resp(work->session, work->bnx2i_conn,
+					    &work->cqe);
 		kfree(work);
 	}
 
 	spin_unlock_bh(&p->p_work_lock);
-	if (thread)
-		kthread_stop(thread);
 }
 
-
-/**
- * bnx2i_cpu_callback - Handler for CPU hotplug events
- *
- * @nfb:	The callback data block
- * @action:	The event triggering the callback
- * @hcpu:	The index of the CPU that the event is for
- *
- * This creates or destroys per-CPU data for iSCSI
- *
- * Returns NOTIFY_OK always.
- */
-static int bnx2i_cpu_callback(struct notifier_block *nfb,
-			      unsigned long action, void *hcpu)
+static void bnx2i_thread_cleanup(unsigned int cpu, bool online)
 {
-	unsigned cpu = (unsigned long)hcpu;
-
-	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-		printk(KERN_INFO "bnx2i: CPU %x online: Create Rx thread\n",
-			cpu);
-		bnx2i_percpu_thread_create(cpu);
-		break;
-	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
-		printk(KERN_INFO "CPU %x offline: Remove Rx thread\n", cpu);
-		bnx2i_percpu_thread_destroy(cpu);
-		break;
-	default:
-		break;
-	}
-	return NOTIFY_OK;
+	if (!online)
+		return;
+	bnx2i_thread_park(cpu);
 }
 
+static void bnx2i_thread_unpark(unsigned int cpu)
+{
+	struct bnx2i_percpu_s *p = per_cpu_ptr(&bnx2i_percpu, cpu);
+
+	p->active = true;
+}
+
+static struct smp_hotplug_thread bnx2i_threads = {
+	.store                  = &bnx2i_percpu.kthread,
+	.setup                  = bnx2i_thread_setup,
+	.cleanup		= bnx2i_thread_cleanup,
+	.thread_should_run      = bnx2i_thread_should_run,
+	.thread_fn              = bnx2i_thread_io_process,
+	.park                   = bnx2i_thread_park,
+	.unpark                 = bnx2i_thread_unpark,
+	.thread_comm            = "bnx2i_thread/%u",
+};
 
 /**
  * bnx2i_mod_init - module init entry point
@@ -507,8 +502,6 @@ static int bnx2i_cpu_callback(struct notifier_block *nfb,
 static int __init bnx2i_mod_init(void)
 {
 	int err;
-	unsigned cpu = 0;
-	struct bnx2i_percpu_s *p;
 
 	printk(KERN_INFO "%s", version);
 
@@ -531,24 +524,7 @@ static int __init bnx2i_mod_init(void)
 		goto unreg_xport;
 	}
 
-	/* Create percpu kernel threads to handle iSCSI I/O completions */
-	for_each_possible_cpu(cpu) {
-		p = &per_cpu(bnx2i_percpu, cpu);
-		INIT_LIST_HEAD(&p->work_list);
-		spin_lock_init(&p->p_work_lock);
-		p->iothread = NULL;
-	}
-
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		bnx2i_percpu_thread_create(cpu);
-
-	/* Initialize per CPU interrupt thread */
-	__register_hotcpu_notifier(&bnx2i_cpu_notifier);
-
-	cpu_notifier_register_done();
-
+	smpboot_register_percpu_thread(&bnx2i_threads);
 	return 0;
 
 unreg_xport:
@@ -569,7 +545,6 @@ static int __init bnx2i_mod_init(void)
 static void __exit bnx2i_mod_exit(void)
 {
 	struct bnx2i_hba *hba;
-	unsigned cpu = 0;
 
 	mutex_lock(&bnx2i_dev_lock);
 	while (!list_empty(&adapter_list)) {
@@ -587,14 +562,7 @@ static void __exit bnx2i_mod_exit(void)
 	}
 	mutex_unlock(&bnx2i_dev_lock);
 
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		bnx2i_percpu_thread_destroy(cpu);
-
-	__unregister_hotcpu_notifier(&bnx2i_cpu_notifier);
-
-	cpu_notifier_register_done();
+	smpboot_unregister_percpu_thread(&bnx2i_threads);
 
 	iscsi_unregister_transport(&bnx2i_iscsi_transport);
 	cnic_unregister_driver(CNIC_ULP_ISCSI);
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 10/11] scsi: bnx2fc: fix hotplug race in bnx2fc_process_new_cqes()
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (8 preceding siblings ...)
  2016-03-11 15:29 ` [PATCH 09/11] scsi: bnx2i: " Sebastian Andrzej Siewior
@ 2016-03-11 15:29 ` Sebastian Andrzej Siewior
  2016-03-11 15:29 ` [PATCH 11/11] scsi: bnx2fc: convert to smpboot thread Sebastian Andrzej Siewior
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:29 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, QLogic-Storage-Upstream,
	Christoph Hellwig

The ->iothread is accessed without holding the lock. Take this:

 CPU A                                  CPU B
-------                                -------
bnx2fc_process_new_cqes()               bnx2fc_percpu_thread_destroy()
 spin_lock_bh(fp_work_lock);
 fps->iothread != NULL
 list_add_tail(work)
 spin_unlock_bh(&fps->fp_work_lock);     spin_lock_bh(&fps->fp_work_lock);
                                         fps->iothread = NULL
 if (fps->iothread && work)
	...
 else
  bnx2fc_process_cq_compl(work)          bnx2fc_process_cq_compl(work);

CPU A will process wqe despite having it added to the work list of CPU
B which will at the same time clean up the queued wqe.

The fix is to add the item to the list and wake up the thread while still
holding the lock. If the item was not added to the list then the
`process' variable remains true, in which case we have to process the
completion manually.

Cc: QLogic-Storage-Upstream@qlogic.com
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/bnx2fc/bnx2fc_hwi.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
index 28c671b609b2..1427062e86f0 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
@@ -1045,6 +1045,7 @@ int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt)
 			struct bnx2fc_work *work = NULL;
 			struct bnx2fc_percpu_s *fps = NULL;
 			unsigned int cpu = wqe % num_possible_cpus();
+			bool process = true;
 
 			fps = &per_cpu(bnx2fc_percpu, cpu);
 			spin_lock_bh(&fps->fp_work_lock);
@@ -1052,16 +1053,16 @@ int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt)
 				goto unlock;
 
 			work = bnx2fc_alloc_work(tgt, wqe);
-			if (work)
+			if (work) {
 				list_add_tail(&work->list,
 					      &fps->work_list);
+				wake_up_process(fps->iothread);
+				process = false;
+			}
 unlock:
 			spin_unlock_bh(&fps->fp_work_lock);
 
-			/* Pending work request completion */
-			if (fps->iothread && work)
-				wake_up_process(fps->iothread);
-			else
+			if (process)
 				bnx2fc_process_cq_compl(tgt, wqe);
 			num_free_sqes++;
 		}
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 11/11] scsi: bnx2fc: convert to smpboot thread
  2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
                   ` (9 preceding siblings ...)
  2016-03-11 15:29 ` [PATCH 10/11] scsi: bnx2fc: fix hotplug race in bnx2fc_process_new_cqes() Sebastian Andrzej Siewior
@ 2016-03-11 15:29 ` Sebastian Andrzej Siewior
  10 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 15:29 UTC (permalink / raw)
  To: linux-scsi
  Cc: James E.J. Bottomley, Martin K. Petersen, rt,
	Sebastian Andrzej Siewior, QLogic-Storage-Upstream,
	Christoph Hellwig

The driver creates its own per-CPU threads which are updated based on
CPU hotplug events. It is also possible to delegate this task to the
smpboot-thread infrastructure and get the same job done while saving a
few lines of code.
The code checked ->iothread to decide if there is an active per-CPU
thread. With the smpboot infrastructure this is no longer possible and I
replaced its logic with the ->active member. The thread pointer is saved
in `kthread' instead of `iothread' so anything still trying to use
`iothread' is caught by the compiler.

The remaining part of the conversion is mostly straightforward.
This patch was only compile-tested due to -ENODEV.

Cc: QLogic-Storage-Upstream@qlogic.com
Cc: "James E.J. Bottomley" <JBottomley@odin.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/scsi/bnx2fc/bnx2fc.h      |   3 +-
 drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 187 ++++++++++++--------------------------
 drivers/scsi/bnx2fc/bnx2fc_hwi.c  |   4 +-
 3 files changed, 64 insertions(+), 130 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc.h b/drivers/scsi/bnx2fc/bnx2fc.h
index 499e369eabf0..7bc5692bb493 100644
--- a/drivers/scsi/bnx2fc/bnx2fc.h
+++ b/drivers/scsi/bnx2fc/bnx2fc.h
@@ -168,9 +168,10 @@ extern struct fcoe_percpu_s bnx2fc_global;
 extern struct workqueue_struct *bnx2fc_wq;
 
 struct bnx2fc_percpu_s {
-	struct task_struct *iothread;
+	struct task_struct *kthread;
 	struct list_head work_list;
 	spinlock_t fp_work_lock;
+	bool active;
 };
 
 struct bnx2fc_fw_stats {
diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index f5bc11b2e884..12ee035e9c4c 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -14,6 +14,7 @@
  */
 
 #include "bnx2fc.h"
+#include <linux/smpboot.h>
 
 static struct list_head adapter_list;
 static struct list_head if_list;
@@ -98,13 +99,6 @@ static void __exit bnx2fc_mod_exit(void);
 unsigned int bnx2fc_debug_level;
 module_param_named(debug_logging, bnx2fc_debug_level, int, S_IRUGO|S_IWUSR);
 
-static int bnx2fc_cpu_callback(struct notifier_block *nfb,
-			     unsigned long action, void *hcpu);
-/* notification function for CPU hotplug events */
-static struct notifier_block bnx2fc_cpu_notifier = {
-	.notifier_call = bnx2fc_cpu_callback,
-};
-
 static inline struct net_device *bnx2fc_netdev(const struct fc_lport *lport)
 {
 	return ((struct bnx2fc_interface *)
@@ -591,40 +585,26 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
 	fc_exch_recv(lport, fp);
 }
 
-/**
- * bnx2fc_percpu_io_thread - thread per cpu for ios
- *
- * @arg:	ptr to bnx2fc_percpu_info structure
- */
-int bnx2fc_percpu_io_thread(void *arg)
+static void bnx2fc_thread_io_process(unsigned int cpu)
 {
-	struct bnx2fc_percpu_s *p = arg;
+	struct bnx2fc_percpu_s *p = per_cpu_ptr(&bnx2fc_percpu, cpu);
 	struct bnx2fc_work *work, *tmp;
 	LIST_HEAD(work_list);
 
-	set_user_nice(current, MIN_NICE);
-	set_current_state(TASK_INTERRUPTIBLE);
-	while (!kthread_should_stop()) {
-		schedule();
-		spin_lock_bh(&p->fp_work_lock);
-		while (!list_empty(&p->work_list)) {
-			list_splice_init(&p->work_list, &work_list);
-			spin_unlock_bh(&p->fp_work_lock);
-
-			list_for_each_entry_safe(work, tmp, &work_list, list) {
-				list_del_init(&work->list);
-				bnx2fc_process_cq_compl(work->tgt, work->wqe);
-				kfree(work);
-			}
-
-			spin_lock_bh(&p->fp_work_lock);
-		}
-		__set_current_state(TASK_INTERRUPTIBLE);
+	spin_lock_bh(&p->fp_work_lock);
+	while (!list_empty(&p->work_list)) {
+		list_splice_init(&p->work_list, &work_list);
 		spin_unlock_bh(&p->fp_work_lock);
-	}
-	__set_current_state(TASK_RUNNING);
 
-	return 0;
+		list_for_each_entry_safe(work, tmp, &work_list, list) {
+			list_del_init(&work->list);
+			bnx2fc_process_cq_compl(work->tgt, work->wqe);
+			kfree(work);
+		}
+
+		spin_lock_bh(&p->fp_work_lock);
+	}
+	spin_unlock_bh(&p->fp_work_lock);
 }
 
 static struct fc_host_statistics *bnx2fc_get_host_stats(struct Scsi_Host *shost)
@@ -2518,34 +2498,9 @@ static struct fcoe_transport bnx2fc_transport = {
 	.disable = bnx2fc_disable,
 };
 
-/**
- * bnx2fc_percpu_thread_create - Create a receive thread for an
- *				 online CPU
- *
- * @cpu: cpu index for the online cpu
- */
-static void bnx2fc_percpu_thread_create(unsigned int cpu)
+static void bnx2fc_thread_park(unsigned int cpu)
 {
 	struct bnx2fc_percpu_s *p;
-	struct task_struct *thread;
-
-	p = &per_cpu(bnx2fc_percpu, cpu);
-
-	thread = kthread_create_on_node(bnx2fc_percpu_io_thread,
-					(void *)p, cpu_to_node(cpu),
-					"bnx2fc_thread/%d", cpu);
-	/* bind thread to the cpu */
-	if (likely(!IS_ERR(thread))) {
-		kthread_bind(thread, cpu);
-		p->iothread = thread;
-		wake_up_process(thread);
-	}
-}
-
-static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
-{
-	struct bnx2fc_percpu_s *p;
-	struct task_struct *thread;
 	struct bnx2fc_work *work, *tmp;
 
 	BNX2FC_MISC_DBG("destroying io thread for CPU %d\n", cpu);
@@ -2553,9 +2508,7 @@ static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
 	/* Prevent any new work from being queued for this CPU */
 	p = &per_cpu(bnx2fc_percpu, cpu);
 	spin_lock_bh(&p->fp_work_lock);
-	thread = p->iothread;
-	p->iothread = NULL;
-
+	p->active = false;
 
 	/* Free all work in the list */
 	list_for_each_entry_safe(work, tmp, &p->work_list, list) {
@@ -2565,44 +2518,52 @@ static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
 	}
 
 	spin_unlock_bh(&p->fp_work_lock);
-
-	if (thread)
-		kthread_stop(thread);
 }
 
-/**
- * bnx2fc_cpu_callback - Handler for CPU hotplug events
- *
- * @nfb:    The callback data block
- * @action: The event triggering the callback
- * @hcpu:   The index of the CPU that the event is for
- *
- * This creates or destroys per-CPU data for fcoe
- *
- * Returns NOTIFY_OK always.
- */
-static int bnx2fc_cpu_callback(struct notifier_block *nfb,
-			     unsigned long action, void *hcpu)
+static void bnx2fc_thread_cleanup(unsigned int cpu, bool online)
 {
-	unsigned cpu = (unsigned long)hcpu;
-
-	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-		printk(PFX "CPU %x online: Create Rx thread\n", cpu);
-		bnx2fc_percpu_thread_create(cpu);
-		break;
-	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
-		printk(PFX "CPU %x offline: Remove Rx thread\n", cpu);
-		bnx2fc_percpu_thread_destroy(cpu);
-		break;
-	default:
-		break;
-	}
-	return NOTIFY_OK;
+	if (!online)
+		return;
+	bnx2fc_thread_park(cpu);
 }
 
+static void bnx2fc_thread_setup(unsigned int cpu)
+{
+	struct bnx2fc_percpu_s *p = per_cpu_ptr(&bnx2fc_percpu, cpu);
+
+	set_user_nice(current, MIN_NICE);
+	INIT_LIST_HEAD(&p->work_list);
+	spin_lock_init(&p->fp_work_lock);
+}
+
+static int bnx2fc_thread_should_run(unsigned int cpu)
+{
+	struct bnx2fc_percpu_s *p = per_cpu_ptr(&bnx2fc_percpu, cpu);
+	/*
+	 * lockless peek at the list. Real check is done in
+	 * bnx2fc_thread_io_process()
+	 */
+	return !list_empty(&p->work_list);
+}
+
+static void bnx2fc_thread_unpark(unsigned int cpu)
+{
+	struct bnx2fc_percpu_s *p = per_cpu_ptr(&bnx2fc_percpu, cpu);
+
+	p->active = true;
+}
+
+static struct smp_hotplug_thread bnx2fc_threads = {
+	.store			= &bnx2fc_percpu.kthread,
+	.setup			= bnx2fc_thread_setup,
+	.cleanup		= bnx2fc_thread_cleanup,
+	.thread_should_run	= bnx2fc_thread_should_run,
+	.thread_fn		= bnx2fc_thread_io_process,
+	.park			= bnx2fc_thread_park,
+	.unpark			= bnx2fc_thread_unpark,
+	.thread_comm		= "bnx2fc_thread/%u",
+};
+
 /**
  * bnx2fc_mod_init - module init entry point
  *
@@ -2614,8 +2575,6 @@ static int __init bnx2fc_mod_init(void)
 	struct fcoe_percpu_s *bg;
 	struct task_struct *l2_thread;
 	int rc = 0;
-	unsigned int cpu = 0;
-	struct bnx2fc_percpu_s *p;
 
 	printk(KERN_INFO PFX "%s", version);
 
@@ -2657,23 +2616,7 @@ static int __init bnx2fc_mod_init(void)
 	bg->kthread = l2_thread;
 	spin_unlock_bh(&bg->fcoe_rx_list.lock);
 
-	for_each_possible_cpu(cpu) {
-		p = &per_cpu(bnx2fc_percpu, cpu);
-		INIT_LIST_HEAD(&p->work_list);
-		spin_lock_init(&p->fp_work_lock);
-	}
-
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu) {
-		bnx2fc_percpu_thread_create(cpu);
-	}
-
-	/* Initialize per CPU interrupt thread */
-	__register_hotcpu_notifier(&bnx2fc_cpu_notifier);
-
-	cpu_notifier_register_done();
-
+	smpboot_register_percpu_thread(&bnx2fc_threads);
 	cnic_register_driver(CNIC_ULP_FCOE, &bnx2fc_cnic_cb);
 
 	return 0;
@@ -2695,7 +2638,6 @@ static void __exit bnx2fc_mod_exit(void)
 	struct fcoe_percpu_s *bg;
 	struct task_struct *l2_thread;
 	struct sk_buff *skb;
-	unsigned int cpu = 0;
 
 	/*
 	 * NOTE: Since cnic calls register_driver routine rtnl_lock,
@@ -2737,16 +2679,7 @@ static void __exit bnx2fc_mod_exit(void)
 	if (l2_thread)
 		kthread_stop(l2_thread);
 
-	cpu_notifier_register_begin();
-
-	/* Destroy per cpu threads */
-	for_each_online_cpu(cpu) {
-		bnx2fc_percpu_thread_destroy(cpu);
-	}
-
-	__unregister_hotcpu_notifier(&bnx2fc_cpu_notifier);
-
-	cpu_notifier_register_done();
+	smpboot_unregister_percpu_thread(&bnx2fc_threads);
 
 	destroy_workqueue(bnx2fc_wq);
 	/*
diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
index 1427062e86f0..33aad48effaf 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
@@ -1049,14 +1049,14 @@ int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt)
 
 			fps = &per_cpu(bnx2fc_percpu, cpu);
 			spin_lock_bh(&fps->fp_work_lock);
-			if (unlikely(!fps->iothread))
+			if (unlikely(!fps->active))
 				goto unlock;
 
 			work = bnx2fc_alloc_work(tgt, wqe);
 			if (work) {
 				list_add_tail(&work->list,
 					      &fps->work_list);
-				wake_up_process(fps->iothread);
+				wake_up_process(fps->kthread);
 				process = false;
 			}
 unlock:
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-03-11 15:28 ` [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean() Sebastian Andrzej Siewior
@ 2016-03-11 16:17   ` Christoph Hellwig
  2016-03-11 16:32     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2016-03-11 16:17 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, Christoph Hellwig, fcoe-devel

On Fri, Mar 11, 2016 at 04:28:53PM +0100, Sebastian Andrzej Siewior wrote:
> for_each_possible_cpu() with a cpu_online() + `thread' check possibly does
> the job. But there is a tiny race: Say CPU5 is reported online but is
> going down. And after fcoe_percpu_clean() saw that CPU5 is online it
> decided to enqueue a packet. After dev_alloc_skb() returned a skb
> that CPU is offline (or say the notifier destroyed the kthread). So we
> would OOps because `thread' is NULL.
> An alternative would be to lock the CPUs during our loop (so no CPU is
> going away) and then we iterate over the online mask.

I've looked over this and the following patches, and I suspect
the right thing to do for fcoe and bnx2 is to convert them to use the
generic workqueue code instead of reinventing it poorly.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-03-11 16:17   ` Christoph Hellwig
@ 2016-03-11 16:32     ` Sebastian Andrzej Siewior
  2016-03-15  8:19       ` Christoph Hellwig
  0 siblings, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-03-11 16:32 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, Christoph Hellwig, fcoe-devel

On 03/11/2016 05:17 PM, Christoph Hellwig wrote:
> On Fri, Mar 11, 2016 at 04:28:53PM +0100, Sebastian Andrzej Siewior wrote:
>> for_each_possible_cpu() with a cpu_online() + `thread' check possibly does
>> the job. But there is a tiny race: Say CPU5 is reported online but is
>> going down. And after fcoe_percpu_clean() saw that CPU5 is online it
>> decided to enqueue a packet. After dev_alloc_skb() returned a skb
>> that CPU is offline (or say the notifier destroyed the kthread). So we
>> would OOps because `thread' is NULL.
>> An alternative would be to lock the CPUs during our loop (so no CPU is
>> going away) and then we iterate over the online mask.
> 
> I've looked over this and the following patches, and I suspect
> the right thing to do for fcoe and bnx2 is to convert them to use the
> generic workqueue code instead of reinventing it poorly.

alloc_workqueue() in setup and then queue_work_on(cpu, , item)? item
should be struct work_struct but all I have is a skb. Is there an easy
way to get this attached?

Sebastian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-03-11 16:32     ` Sebastian Andrzej Siewior
@ 2016-03-15  8:19       ` Christoph Hellwig
  2016-04-08 13:30         ` Sebastian Andrzej Siewior
  2016-04-12 15:16         ` [PATCH v2] scsi/fcoe: convert to kworker Sebastian Andrzej Siewior
  0 siblings, 2 replies; 27+ messages in thread
From: Christoph Hellwig @ 2016-03-15  8:19 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, fcoe-devel

On Fri, Mar 11, 2016 at 05:32:15PM +0100, Sebastian Andrzej Siewior wrote:
> alloc_workqueue() in setup and then queue_work_on(cpu, , item)? item
> should be struct work_struct but all I have is a skb. Is there an easy
> way to get this attached?

Good question.  There is skb->cb, but it looks like it doesn't have
space for an additional work_item in the fcoe case.  Maybe have
a per-cpu work_struct and keep all the list handling as-is for now?
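
For the archives, a pseudocode-level sketch of what this suggestion
could look like for fcoe: keep the per-CPU skb list as-is, add one
work_struct next to it, and replace the private kthread wakeup with
queue_work_on(). The struct layout and the helper names here are
illustrative assumptions, not the final code; only fcoe_recv_frame(),
fcoe_wq and queue_work_on() exist as such.

```c
struct fcoe_percpu_s {
	struct work_struct work;	/* replaces the per-CPU kthread */
	struct sk_buff_head fcoe_rx_list;
	/* crc_eof_page, crc_eof_offset unchanged */
};

static void fcoe_receive_work(struct work_struct *work)
{
	struct fcoe_percpu_s *p =
		container_of(work, struct fcoe_percpu_s, work);
	struct sk_buff *skb;

	/* drain the existing per-CPU list, one skb at a time */
	while ((skb = skb_dequeue(&p->fcoe_rx_list)))
		fcoe_recv_frame(skb);
}

/* enqueue path: list handling stays, wake_up_process() becomes
 * queue_work_on() on the bound workqueue (hypothetical helper) */
static void fcoe_queue_skb(struct fcoe_percpu_s *p, unsigned int cpu,
			   struct sk_buff *skb)
{
	skb_queue_tail(&p->fcoe_rx_list, skb);
	queue_work_on(cpu, fcoe_wq, &p->work);
}
```

The workqueue core already handles CPU hotplug for bound work items,
which is what the smpboot park/unpark callbacks emulate by hand above.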

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-03-15  8:19       ` Christoph Hellwig
@ 2016-04-08 13:30         ` Sebastian Andrzej Siewior
  2016-04-08 18:14           ` Sebastian Andrzej Siewior
  2016-04-12 15:16         ` [PATCH v2] scsi/fcoe: convert to kworker Sebastian Andrzej Siewior
  1 sibling, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-04-08 13:30 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, fcoe-devel

On 03/15/2016 09:19 AM, Christoph Hellwig wrote:
> On Fri, Mar 11, 2016 at 05:32:15PM +0100, Sebastian Andrzej Siewior wrote:
>> alloc_workqueue() in setup and then queue_work_on(cpu, , item)? item
>> should be struct work_struct but all I have is a skb. Is there an easy
>> way to get this attached?
> 
> Good question.  There is skb->cb, but it looks like it doesn't have
> space for an additional work_item in the fcoe case.  Maybe have
> a per-cpu work_struct and keep all the list handling as-is for now?

Okay. Let me try this. What about the few fixes from the series (which
apply before the rework to smpboot threads)?

Sebastian

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean()
  2016-04-08 13:30         ` Sebastian Andrzej Siewior
@ 2016-04-08 18:14           ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-04-08 18:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, fcoe-devel

On 04/08/2016 03:30 PM, Sebastian Andrzej Siewior wrote:
> On 03/15/2016 09:19 AM, Christoph Hellwig wrote:
>> On Fri, Mar 11, 2016 at 05:32:15PM +0100, Sebastian Andrzej Siewior wrote:
>>> alloc_workqueue() in setup and then queue_work_on(cpu, , item)? item
>>> should be struct work_struct but all I have is a skb. Is there an easy
>>> way to get this attached?
>>
>> Good question.  There is skb->cb, but it looks like it doesn't have
>> space for an additional work_item in the fcoe case.  Maybe have
>> a per-cpu work_struct and keep all the list handling as-is for now?
> 
> Okay. Let me try this. What about the few fixes from the series (which
> apply before the rework to smbboot theads)?

Okay, kworker. This does not look good. I have it converted; what I am
missing is flushing the work when a CPU goes down and ensuring that no
work is queued while the CPU is down.

- cpu_online(x) is racy. In DOWN_PREPARE the worker is deactivated /
  stopped. However, the CPU's bit is only removed from the online mask
  slightly later.

- Whatever is queued and did not make it before the CPU went down seems
  to be delayed until the CPU comes back online.

- if the worker keeps running while the CPU is going down the worker
  continues running on a different CPU.

So I don't see how the first two points can be solved without keeping
track of CPUs in a CPU notifier. Getting pushed to a different CPU would
probably be less of an issue if we had a work item and did not need to
rely on the per-CPU list.

Sebastian

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v2] scsi/fcoe: convert to kworker
  2016-03-15  8:19       ` Christoph Hellwig
  2016-04-08 13:30         ` Sebastian Andrzej Siewior
@ 2016-04-12 15:16         ` Sebastian Andrzej Siewior
  2016-04-22 15:27           ` [PREEMPT-RT] " Sebastian Andrzej Siewior
  2016-06-10 10:38           ` Johannes Thumshirn
  1 sibling, 2 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-04-12 15:16 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-scsi, James E.J. Bottomley, Martin K. Petersen, rt,
	Vasu Dev, fcoe-devel

The driver creates its own per-CPU threads which are updated based on
CPU hotplug events. It is also possible to use kworkers and remove some
of the kthread infrastructure.

The code checked ->thread to decide if there is an active per-CPU
thread. By using the kworker infrastructure this is no longer possible (or
required). The thread pointer is saved in `kthread' instead of `thread' so
anything trying to use thread is caught by the compiler. Currently only the
bnx2fc driver is using struct fcoe_percpu_s and the kthread member.

After a CPU went offline, we may still enqueue items on the "offline"
CPU. This isn't much of a problem. The work will be done on a random
CPU. The allocated crc_eof_page page won't be cleaned up. It is probably
expected that the CPU comes up at some point so it should not be a
problem. The crc_eof_page memory is released of course once the module is
removed.

This patch was only compile-tested due to -ENODEV.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v1…v2: use kworker instead of smpboot threads as per hch

If you want this I would do the same for the two bnx drivers.

 drivers/scsi/bnx2fc/bnx2fc_fcoe.c |   8 +-
 drivers/scsi/fcoe/fcoe.c          | 276 ++++----------------------------------
 include/scsi/libfcoe.h            |   6 +-
 3 files changed, 34 insertions(+), 256 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index d7029ea5d319..cfb1b5b40d6c 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -466,7 +466,7 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,
 
 	__skb_queue_tail(&bg->fcoe_rx_list, skb);
 	if (bg->fcoe_rx_list.qlen == 1)
-		wake_up_process(bg->thread);
+		wake_up_process(bg->kthread);
 
 	spin_unlock(&bg->fcoe_rx_list.lock);
 
@@ -2663,7 +2663,7 @@ static int __init bnx2fc_mod_init(void)
 	}
 	wake_up_process(l2_thread);
 	spin_lock_bh(&bg->fcoe_rx_list.lock);
-	bg->thread = l2_thread;
+	bg->kthread = l2_thread;
 	spin_unlock_bh(&bg->fcoe_rx_list.lock);
 
 	for_each_possible_cpu(cpu) {
@@ -2736,8 +2736,8 @@ static void __exit bnx2fc_mod_exit(void)
 	/* Destroy global thread */
 	bg = &bnx2fc_global;
 	spin_lock_bh(&bg->fcoe_rx_list.lock);
-	l2_thread = bg->thread;
-	bg->thread = NULL;
+	l2_thread = bg->kthread;
+	bg->kthread = NULL;
 	while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL)
 		kfree_skb(skb);
 
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 0efe7112fc1f..f7c7ccc156da 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -67,9 +67,6 @@ static DEFINE_MUTEX(fcoe_config_mutex);
 
 static struct workqueue_struct *fcoe_wq;
 
-/* fcoe_percpu_clean completion.  Waiter protected by fcoe_create_mutex */
-static DECLARE_COMPLETION(fcoe_flush_completion);
-
 /* fcoe host list */
 /* must only by accessed under the RTNL mutex */
 static LIST_HEAD(fcoe_hostlist);
@@ -80,7 +77,6 @@ static int fcoe_reset(struct Scsi_Host *);
 static int fcoe_xmit(struct fc_lport *, struct fc_frame *);
 static int fcoe_rcv(struct sk_buff *, struct net_device *,
 		    struct packet_type *, struct net_device *);
-static int fcoe_percpu_receive_thread(void *);
 static void fcoe_percpu_clean(struct fc_lport *);
 static int fcoe_link_ok(struct fc_lport *);
 
@@ -107,7 +103,6 @@ static int fcoe_ddp_setup(struct fc_lport *, u16, struct scatterlist *,
 static int fcoe_ddp_done(struct fc_lport *, u16);
 static int fcoe_ddp_target(struct fc_lport *, u16, struct scatterlist *,
 			   unsigned int);
-static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *);
 static int fcoe_dcb_app_notification(struct notifier_block *notifier,
 				     ulong event, void *ptr);
 
@@ -136,11 +131,6 @@ static struct notifier_block fcoe_notifier = {
 	.notifier_call = fcoe_device_notification,
 };
 
-/* notification function for CPU hotplug events */
-static struct notifier_block fcoe_cpu_notifier = {
-	.notifier_call = fcoe_cpu_callback,
-};
-
 /* notification function for DCB events */
 static struct notifier_block dcb_notifier = {
 	.notifier_call = fcoe_dcb_app_notification,
@@ -1245,152 +1235,21 @@ static int __exit fcoe_if_exit(void)
 	return 0;
 }
 
-/**
- * fcoe_percpu_thread_create() - Create a receive thread for an online CPU
- * @cpu: The CPU index of the CPU to create a receive thread for
- */
-static void fcoe_percpu_thread_create(unsigned int cpu)
+static void fcoe_thread_cleanup_local(unsigned int cpu)
 {
-	struct fcoe_percpu_s *p;
-	struct task_struct *thread;
-
-	p = &per_cpu(fcoe_percpu, cpu);
-
-	thread = kthread_create_on_node(fcoe_percpu_receive_thread,
-					(void *)p, cpu_to_node(cpu),
-					"fcoethread/%d", cpu);
-
-	if (likely(!IS_ERR(thread))) {
-		kthread_bind(thread, cpu);
-		wake_up_process(thread);
-
-		spin_lock_bh(&p->fcoe_rx_list.lock);
-		p->thread = thread;
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
-	}
-}
-
-/**
- * fcoe_percpu_thread_destroy() - Remove the receive thread of a CPU
- * @cpu: The CPU index of the CPU whose receive thread is to be destroyed
- *
- * Destroys a per-CPU Rx thread. Any pending skbs are moved to the
- * current CPU's Rx thread. If the thread being destroyed is bound to
- * the CPU processing this context the skbs will be freed.
- */
-static void fcoe_percpu_thread_destroy(unsigned int cpu)
-{
-	struct fcoe_percpu_s *p;
-	struct task_struct *thread;
 	struct page *crc_eof;
-	struct sk_buff *skb;
-#ifdef CONFIG_SMP
-	struct fcoe_percpu_s *p0;
-	unsigned targ_cpu = get_cpu();
-#endif /* CONFIG_SMP */
+	struct fcoe_percpu_s *p;
 
-	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
-
-	/* Prevent any new skbs from being queued for this CPU. */
-	p = &per_cpu(fcoe_percpu, cpu);
+	p = per_cpu_ptr(&fcoe_percpu, cpu);
 	spin_lock_bh(&p->fcoe_rx_list.lock);
-	thread = p->thread;
-	p->thread = NULL;
 	crc_eof = p->crc_eof_page;
 	p->crc_eof_page = NULL;
 	p->crc_eof_offset = 0;
 	spin_unlock_bh(&p->fcoe_rx_list.lock);
 
-#ifdef CONFIG_SMP
-	/*
-	 * Don't bother moving the skb's if this context is running
-	 * on the same CPU that is having its thread destroyed. This
-	 * can easily happen when the module is removed.
-	 */
-	if (cpu != targ_cpu) {
-		p0 = &per_cpu(fcoe_percpu, targ_cpu);
-		spin_lock_bh(&p0->fcoe_rx_list.lock);
-		if (p0->thread) {
-			FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
-				 cpu, targ_cpu);
-
-			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-				__skb_queue_tail(&p0->fcoe_rx_list, skb);
-			spin_unlock_bh(&p0->fcoe_rx_list.lock);
-		} else {
-			/*
-			 * The targeted CPU is not initialized and cannot accept
-			 * new	skbs. Unlock the targeted CPU and drop the skbs
-			 * on the CPU that is going offline.
-			 */
-			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-				kfree_skb(skb);
-			spin_unlock_bh(&p0->fcoe_rx_list.lock);
-		}
-	} else {
-		/*
-		 * This scenario occurs when the module is being removed
-		 * and all threads are being destroyed. skbs will continue
-		 * to be shifted from the CPU thread that is being removed
-		 * to the CPU thread associated with the CPU that is processing
-		 * the module removal. Once there is only one CPU Rx thread it
-		 * will reach this case and we will drop all skbs and later
-		 * stop the thread.
-		 */
-		spin_lock_bh(&p->fcoe_rx_list.lock);
-		while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-			kfree_skb(skb);
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
-	}
-	put_cpu();
-#else
-	/*
-	 * This a non-SMP scenario where the singular Rx thread is
-	 * being removed. Free all skbs and stop the thread.
-	 */
-	spin_lock_bh(&p->fcoe_rx_list.lock);
-	while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-		kfree_skb(skb);
-	spin_unlock_bh(&p->fcoe_rx_list.lock);
-#endif
-
-	if (thread)
-		kthread_stop(thread);
-
 	if (crc_eof)
 		put_page(crc_eof);
-}
-
-/**
- * fcoe_cpu_callback() - Handler for CPU hotplug events
- * @nfb:    The callback data block
- * @action: The event triggering the callback
- * @hcpu:   The index of the CPU that the event is for
- *
- * This creates or destroys per-CPU data for fcoe
- *
- * Returns NOTIFY_OK always.
- */
-static int fcoe_cpu_callback(struct notifier_block *nfb,
-			     unsigned long action, void *hcpu)
-{
-	unsigned cpu = (unsigned long)hcpu;
-
-	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-		FCOE_DBG("CPU %x online: Create Rx thread\n", cpu);
-		fcoe_percpu_thread_create(cpu);
-		break;
-	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
-		FCOE_DBG("CPU %x offline: Remove Rx thread\n", cpu);
-		fcoe_percpu_thread_destroy(cpu);
-		break;
-	default:
-		break;
-	}
-	return NOTIFY_OK;
+	flush_work(&p->work);
 }
 
 /**
@@ -1509,26 +1368,6 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 
 	fps = &per_cpu(fcoe_percpu, cpu);
 	spin_lock(&fps->fcoe_rx_list.lock);
-	if (unlikely(!fps->thread)) {
-		/*
-		 * The targeted CPU is not ready, let's target
-		 * the first CPU now. For non-SMP systems this
-		 * will check the same CPU twice.
-		 */
-		FCOE_NETDEV_DBG(netdev, "CPU is online, but no receive thread "
-				"ready for incoming skb- using first online "
-				"CPU.\n");
-
-		spin_unlock(&fps->fcoe_rx_list.lock);
-		cpu = cpumask_first(cpu_online_mask);
-		fps = &per_cpu(fcoe_percpu, cpu);
-		spin_lock(&fps->fcoe_rx_list.lock);
-		if (!fps->thread) {
-			spin_unlock(&fps->fcoe_rx_list.lock);
-			goto err;
-		}
-	}
-
 	/*
 	 * We now have a valid CPU that we're targeting for
 	 * this skb. We also have this receive thread locked,
@@ -1543,8 +1382,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	 * in softirq context.
 	 */
 	__skb_queue_tail(&fps->fcoe_rx_list, skb);
-	if (fps->thread->state == TASK_INTERRUPTIBLE)
-		wake_up_process(fps->thread);
+	schedule_work_on(cpu, &fps->work);
 	spin_unlock(&fps->fcoe_rx_list.lock);
 
 	return NET_RX_SUCCESS;
@@ -1713,15 +1551,6 @@ static int fcoe_xmit(struct fc_lport *lport, struct fc_frame *fp)
 }
 
 /**
- * fcoe_percpu_flush_done() - Indicate per-CPU queue flush completion
- * @skb: The completed skb (argument required by destructor)
- */
-static void fcoe_percpu_flush_done(struct sk_buff *skb)
-{
-	complete(&fcoe_flush_completion);
-}
-
-/**
  * fcoe_filter_frames() - filter out bad fcoe frames, i.e. bad CRC
  * @lport: The local port the frame was received on
  * @fp:	   The received frame
@@ -1792,8 +1621,7 @@ static void fcoe_recv_frame(struct sk_buff *skb)
 	fr = fcoe_dev_from_skb(skb);
 	lport = fr->fr_dev;
 	if (unlikely(!lport)) {
-		if (skb->destructor != fcoe_percpu_flush_done)
-			FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
+		FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
 		kfree_skb(skb);
 		return;
 	}
@@ -1857,40 +1685,28 @@ static void fcoe_recv_frame(struct sk_buff *skb)
 }
 
 /**
- * fcoe_percpu_receive_thread() - The per-CPU packet receive thread
- * @arg: The per-CPU context
+ * fcoe_receive_work() - The per-CPU worker
+ * @work: The work struct
  *
- * Return: 0 for success
  */
-static int fcoe_percpu_receive_thread(void *arg)
+static void fcoe_receive_work(struct work_struct *work)
 {
-	struct fcoe_percpu_s *p = arg;
+	struct fcoe_percpu_s *p;
 	struct sk_buff *skb;
 	struct sk_buff_head tmp;
 
+	p = container_of(work, struct fcoe_percpu_s, work);
 	skb_queue_head_init(&tmp);
 
-	set_user_nice(current, MIN_NICE);
+	spin_lock_bh(&p->fcoe_rx_list.lock);
+	skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
+	spin_unlock_bh(&p->fcoe_rx_list.lock);
 
-	while (!kthread_should_stop()) {
+	if (!skb_queue_len(&tmp))
+		return;
 
-		spin_lock_bh(&p->fcoe_rx_list.lock);
-		skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
-
-		if (!skb_queue_len(&tmp)) {
-			set_current_state(TASK_INTERRUPTIBLE);
-			spin_unlock_bh(&p->fcoe_rx_list.lock);
-			schedule();
-			continue;
-		}
-
-		spin_unlock_bh(&p->fcoe_rx_list.lock);
-
-		while ((skb = __skb_dequeue(&tmp)) != NULL)
-			fcoe_recv_frame(skb);
-
-	}
-	return 0;
+	while ((skb = __skb_dequeue(&tmp)))
+		fcoe_recv_frame(skb);
 }
 
 /**
@@ -2450,36 +2266,19 @@ static int fcoe_link_ok(struct fc_lport *lport)
  *
  * Must be called with fcoe_create_mutex held to single-thread completion.
  *
- * This flushes the pending skbs by adding a new skb to each queue and
- * waiting until they are all freed.  This assures us that not only are
- * there no packets that will be handled by the lport, but also that any
- * threads already handling packet have returned.
+ * This flushes the pending skbs by flushing the work item for each CPU. The work
+ * item on each possible CPU is flushed because we may have used the per-CPU
+ * struct of an offline CPU.
  */
 static void fcoe_percpu_clean(struct fc_lport *lport)
 {
 	struct fcoe_percpu_s *pp;
-	struct sk_buff *skb;
 	unsigned int cpu;
 
 	for_each_possible_cpu(cpu) {
 		pp = &per_cpu(fcoe_percpu, cpu);
 
-		if (!pp->thread || !cpu_online(cpu))
-			continue;
-
-		skb = dev_alloc_skb(0);
-		if (!skb)
-			continue;
-
-		skb->destructor = fcoe_percpu_flush_done;
-
-		spin_lock_bh(&pp->fcoe_rx_list.lock);
-		__skb_queue_tail(&pp->fcoe_rx_list, skb);
-		if (pp->fcoe_rx_list.qlen == 1)
-			wake_up_process(pp->thread);
-		spin_unlock_bh(&pp->fcoe_rx_list.lock);
-
-		wait_for_completion(&fcoe_flush_completion);
+		flush_work(&pp->work);
 	}
 }
 
@@ -2625,22 +2424,11 @@ static int __init fcoe_init(void)
 	mutex_lock(&fcoe_config_mutex);
 
 	for_each_possible_cpu(cpu) {
-		p = &per_cpu(fcoe_percpu, cpu);
+		p = per_cpu_ptr(&fcoe_percpu, cpu);
+		INIT_WORK(&p->work, fcoe_receive_work);
 		skb_queue_head_init(&p->fcoe_rx_list);
 	}
 
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		fcoe_percpu_thread_create(cpu);
-
-	/* Initialize per CPU interrupt thread */
-	rc = __register_hotcpu_notifier(&fcoe_cpu_notifier);
-	if (rc)
-		goto out_free;
-
-	cpu_notifier_register_done();
-
 	/* Setup link change notification */
 	fcoe_dev_setup();
 
@@ -2652,12 +2440,6 @@ static int __init fcoe_init(void)
 	return 0;
 
 out_free:
-	for_each_online_cpu(cpu) {
-		fcoe_percpu_thread_destroy(cpu);
-	}
-
-	cpu_notifier_register_done();
-
 	mutex_unlock(&fcoe_config_mutex);
 	destroy_workqueue(fcoe_wq);
 	return rc;
@@ -2690,14 +2472,8 @@ static void __exit fcoe_exit(void)
 	}
 	rtnl_unlock();
 
-	cpu_notifier_register_begin();
-
-	for_each_online_cpu(cpu)
-		fcoe_percpu_thread_destroy(cpu);
-
-	__unregister_hotcpu_notifier(&fcoe_cpu_notifier);
-
-	cpu_notifier_register_done();
+	for_each_possible_cpu(cpu)
+		fcoe_thread_cleanup_local(cpu);
 
 	mutex_unlock(&fcoe_config_mutex);
 
diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
index de7e3ee60f0c..c6fbbb6581d3 100644
--- a/include/scsi/libfcoe.h
+++ b/include/scsi/libfcoe.h
@@ -319,14 +319,16 @@ struct fcoe_transport {
 
 /**
  * struct fcoe_percpu_s - The context for FCoE receive thread(s)
- * @thread:	    The thread context
+ * @kthread:	    The thread context (used by bnx2fc)
+ * @work:	    The work item (used by fcoe)
  * @fcoe_rx_list:   The queue of pending packets to process
  * @page:	    The memory page for calculating frame trailer CRCs
  * @crc_eof_offset: The offset into the CRC page pointing to available
  *		    memory for a new trailer
  */
 struct fcoe_percpu_s {
-	struct task_struct *thread;
+	struct task_struct *kthread;
+	struct work_struct work;
 	struct sk_buff_head fcoe_rx_list;
 	struct page *crc_eof_page;
 	int crc_eof_offset;
-- 
2.8.0.rc3

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
  2016-04-12 15:16         ` [PATCH v2] scsi/fcoe: convert to kworker Sebastian Andrzej Siewior
@ 2016-04-22 15:27           ` Sebastian Andrzej Siewior
  2016-04-22 15:49             ` James Bottomley
  2016-06-10 10:38           ` Johannes Thumshirn
  1 sibling, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-04-22 15:27 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: James E.J. Bottomley, linux-scsi, Martin K. Petersen, Vasu Dev,
	rt, fcoe-devel

On 04/12/2016 05:16 PM, Sebastian Andrzej Siewior wrote:
> The driver creates its own per-CPU threads which are updated based on
> CPU hotplug events. It is also possible to use kworkers and remove some
> of the kthread infrastrucure.
> 
> The code checked ->thread to decide if there is an active per-CPU
> thread. By using the kworker infrastructure this is no longer possible (or
> required). The thread pointer is saved in `kthread' instead of `thread' so
> anything trying to use thread is caught by the compiler. Currently only the
> bnx2fc driver is using struct fcoe_percpu_s and the kthread member.
> 
> After a CPU went offline, we may still enqueue items on the "offline"
> CPU. This isn't much of a problem. The work will be done on a random
> CPU. The allocated crc_eof_page page won't be cleaned up. It is probably
> expected that the CPU comes up at some point so it should not be a
> problem. The crc_eof_page memory is released of course once the module is
> removed.
> 
> This patch was only compile-tested due to -ENODEV.
> 
> Cc: Vasu Dev <vasu.dev@intel.com>
> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: fcoe-devel@open-fcoe.org
> Cc: linux-scsi@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> v1…v2: use kworker instead of smbthread as per hch
> 
> If you want this I would the same for the two bnx drivers.

*ping*

Sebastian

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
  2016-04-22 15:27           ` [PREEMPT-RT] " Sebastian Andrzej Siewior
@ 2016-04-22 15:49             ` James Bottomley
  2016-04-22 16:39               ` Laurence Oberman
  0 siblings, 1 reply; 27+ messages in thread
From: James Bottomley @ 2016-04-22 15:49 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, Christoph Hellwig
  Cc: linux-scsi, Martin K. Petersen, Vasu Dev, rt, fcoe-devel, Chad Dupuis

On Fri, 2016-04-22 at 17:27 +0200, Sebastian Andrzej Siewior wrote:
> On 04/12/2016 05:16 PM, Sebastian Andrzej Siewior wrote:
> > The driver creates its own per-CPU threads which are updated based
> > on
> > CPU hotplug events. It is also possible to use kworkers and remove
> > some
> > of the kthread infrastrucure.
> > 
> > The code checked ->thread to decide if there is an active per-CPU
> > thread. By using the kworker infrastructure this is no longer
> > possible (or
> > required). The thread pointer is saved in `kthread' instead of
> > `thread' so
> > anything trying to use thread is caught by the compiler. Currently
> > only the
> > bnx2fc driver is using struct fcoe_percpu_s and the kthread member.
> > 
> > After a CPU went offline, we may still enqueue items on the
> > "offline"
> > CPU. This isn't much of a problem. The work will be done on a
> > random
> > CPU. The allocated crc_eof_page page won't be cleaned up. It is
> > probably
> > expected that the CPU comes up at some point so it should not be a
> > problem. The crc_eof_page memory is released of course once the
> > module is
> > removed.
> > 
> > This patch was only compile-tested due to -ENODEV.
> > 
> > Cc: Vasu Dev <vasu.dev@intel.com>
> > Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
> > Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> > Cc: Christoph Hellwig <hch@lst.de>
> > Cc: fcoe-devel@open-fcoe.org
> > Cc: linux-scsi@vger.kernel.org
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> > v1…v2: use kworker instead of smbthread as per hch
> > 
> > If you want this I would the same for the two bnx drivers.
> 
> *ping*

Ping what?  You've sent in an untested patch that looks to be a big
change.  It's definitely not going in until it's tested.  Why don't you
see if you can recruit an FCoE person to your cause and get them to
test it.

It looks like you're looking for testing on bnx2fc, correct?  In which
case cc'ing a bnx2fc person might have been helpful (cc added).

James



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
  2016-04-22 15:49             ` James Bottomley
@ 2016-04-22 16:39               ` Laurence Oberman
       [not found]                 ` <186981952.31194082.1461343179889.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 27+ messages in thread
From: Laurence Oberman @ 2016-04-22 16:39 UTC (permalink / raw)
  To: James Bottomley
  Cc: Sebastian Andrzej Siewior, Christoph Hellwig, linux-scsi,
	Martin K. Petersen, Vasu Dev, rt, fcoe-devel, Chad Dupuis

I have fcoe for testing.
I will pull this in next week and test it.

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
       [not found]                 ` <186981952.31194082.1461343179889.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-06-09 13:09                   ` Sebastian Andrzej Siewior
  2016-06-09 13:15                     ` Laurence Oberman
  0 siblings, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-06-09 13:09 UTC (permalink / raw)
  To: Laurence Oberman, James Bottomley
  Cc: linux-scsi-u79uwXL29TY76Z2rM5mHXA, Martin K. Petersen,
	Christoph Hellwig, rt-hfZtesqFncYOwBW4kG4KsQ,
	fcoe-devel-s9riP+hp16TNLxjTenLetw

On 04/22/2016 06:39 PM, Laurence Oberman wrote:
> I have fcoe for testing.
> I will pull this in next week and test it.

any update?

> 
> Laurence Oberman
> Principal Software Maintenance Engineer
> Red Hat Global Support Services

Sebastian

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
  2016-06-09 13:09                   ` Sebastian Andrzej Siewior
@ 2016-06-09 13:15                     ` Laurence Oberman
  2016-06-09 13:22                       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 27+ messages in thread
From: Laurence Oberman @ 2016-06-09 13:15 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: James Bottomley, Christoph Hellwig, linux-scsi,
	Martin K. Petersen, Vasu Dev, rt, fcoe-devel, Chad Dupuis



Hello
Apologies, somehow this fell off my radar.
I will get the FCOE test bed up and get it done ASAP.

Regards
Laurence

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PREEMPT-RT] [PATCH v2] scsi/fcoe: convert to kworker
  2016-06-09 13:15                     ` Laurence Oberman
@ 2016-06-09 13:22                       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-06-09 13:22 UTC (permalink / raw)
  To: Laurence Oberman
  Cc: James Bottomley, Christoph Hellwig, linux-scsi,
	Martin K. Petersen, Vasu Dev, rt, fcoe-devel, Chad Dupuis

On 06/09/2016 03:15 PM, Laurence Oberman wrote:
> Hello

Hi,

> Apologies, somehow this fell off my radar.
> I will get the FCOE test bed up and get it done ASAP.

Thanks

> 
> Regards
> Laurence

Sebastian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2] scsi/fcoe: convert to kworker
  2016-04-12 15:16         ` [PATCH v2] scsi/fcoe: convert to kworker Sebastian Andrzej Siewior
  2016-04-22 15:27           ` [PREEMPT-RT] " Sebastian Andrzej Siewior
@ 2016-06-10 10:38           ` Johannes Thumshirn
       [not found]             ` <20160610103812.ojgzop6qdv3mos5d-3LAbnSA0sDC4fIQPS+WK3rNAH6kLmebB@public.gmane.org>
  2016-07-04  8:23             ` Sebastian Andrzej Siewior
  1 sibling, 2 replies; 27+ messages in thread
From: Johannes Thumshirn @ 2016-06-10 10:38 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Christoph Hellwig, linux-scsi, James E.J. Bottomley,
	Martin K. Petersen, rt, Vasu Dev, fcoe-devel

On Tue, Apr 12, 2016 at 05:16:54PM +0200, Sebastian Andrzej Siewior wrote:
> The driver creates its own per-CPU threads which are updated based on
> CPU hotplug events. It is also possible to use kworkers and remove some
> of the kthread infrastrucure.
> 
> The code checked ->thread to decide if there is an active per-CPU
> thread. By using the kworker infrastructure this is no longer possible (or
> required). The thread pointer is saved in `kthread' instead of `thread' so
> anything trying to use thread is caught by the compiler. Currently only the
> bnx2fc driver is using struct fcoe_percpu_s and the kthread member.
> 
> After a CPU went offline, we may still enqueue items on the "offline"
> CPU. This isn't much of a problem. The work will be done on a random
> CPU. The allocated crc_eof_page page won't be cleaned up. It is probably
> expected that the CPU comes up at some point so it should not be a
> problem. The crc_eof_page memory is released of course once the module is
> removed.
> 
> This patch was only compile-tested due to -ENODEV.
> 
> Cc: Vasu Dev <vasu.dev@intel.com>
> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: fcoe-devel@open-fcoe.org
> Cc: linux-scsi@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Tested in a Boot from FCoE scenario using a BCM57840.

Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

> ---
> v1…v2: use kworker instead of smpboot thread as per hch
> 
> If you want this I would do the same for the two bnx drivers.
> 
>  drivers/scsi/bnx2fc/bnx2fc_fcoe.c |   8 +-
>  drivers/scsi/fcoe/fcoe.c          | 276 ++++----------------------------------
>  include/scsi/libfcoe.h            |   6 +-
>  3 files changed, 34 insertions(+), 256 deletions(-)
> 
> diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
> index d7029ea5d319..cfb1b5b40d6c 100644
> --- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
> +++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
> @@ -466,7 +466,7 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,
>  
>  	__skb_queue_tail(&bg->fcoe_rx_list, skb);
>  	if (bg->fcoe_rx_list.qlen == 1)
> -		wake_up_process(bg->thread);
> +		wake_up_process(bg->kthread);
>  
>  	spin_unlock(&bg->fcoe_rx_list.lock);
>  
> @@ -2663,7 +2663,7 @@ static int __init bnx2fc_mod_init(void)
>  	}
>  	wake_up_process(l2_thread);
>  	spin_lock_bh(&bg->fcoe_rx_list.lock);
> -	bg->thread = l2_thread;
> +	bg->kthread = l2_thread;
>  	spin_unlock_bh(&bg->fcoe_rx_list.lock);
>  
>  	for_each_possible_cpu(cpu) {
> @@ -2736,8 +2736,8 @@ static void __exit bnx2fc_mod_exit(void)
>  	/* Destroy global thread */
>  	bg = &bnx2fc_global;
>  	spin_lock_bh(&bg->fcoe_rx_list.lock);
> -	l2_thread = bg->thread;
> -	bg->thread = NULL;
> +	l2_thread = bg->kthread;
> +	bg->kthread = NULL;
>  	while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL)
>  		kfree_skb(skb);
>  
> diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
> index 0efe7112fc1f..f7c7ccc156da 100644
> --- a/drivers/scsi/fcoe/fcoe.c
> +++ b/drivers/scsi/fcoe/fcoe.c
> @@ -67,9 +67,6 @@ static DEFINE_MUTEX(fcoe_config_mutex);
>  
>  static struct workqueue_struct *fcoe_wq;
>  
> -/* fcoe_percpu_clean completion.  Waiter protected by fcoe_create_mutex */
> -static DECLARE_COMPLETION(fcoe_flush_completion);
> -
>  /* fcoe host list */
>  /* must only by accessed under the RTNL mutex */
>  static LIST_HEAD(fcoe_hostlist);
> @@ -80,7 +77,6 @@ static int fcoe_reset(struct Scsi_Host *);
>  static int fcoe_xmit(struct fc_lport *, struct fc_frame *);
>  static int fcoe_rcv(struct sk_buff *, struct net_device *,
>  		    struct packet_type *, struct net_device *);
> -static int fcoe_percpu_receive_thread(void *);
>  static void fcoe_percpu_clean(struct fc_lport *);
>  static int fcoe_link_ok(struct fc_lport *);
>  
> @@ -107,7 +103,6 @@ static int fcoe_ddp_setup(struct fc_lport *, u16, struct scatterlist *,
>  static int fcoe_ddp_done(struct fc_lport *, u16);
>  static int fcoe_ddp_target(struct fc_lport *, u16, struct scatterlist *,
>  			   unsigned int);
> -static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *);
>  static int fcoe_dcb_app_notification(struct notifier_block *notifier,
>  				     ulong event, void *ptr);
>  
> @@ -136,11 +131,6 @@ static struct notifier_block fcoe_notifier = {
>  	.notifier_call = fcoe_device_notification,
>  };
>  
> -/* notification function for CPU hotplug events */
> -static struct notifier_block fcoe_cpu_notifier = {
> -	.notifier_call = fcoe_cpu_callback,
> -};
> -
>  /* notification function for DCB events */
>  static struct notifier_block dcb_notifier = {
>  	.notifier_call = fcoe_dcb_app_notification,
> @@ -1245,152 +1235,21 @@ static int __exit fcoe_if_exit(void)
>  	return 0;
>  }
>  
> -/**
> - * fcoe_percpu_thread_create() - Create a receive thread for an online CPU
> - * @cpu: The CPU index of the CPU to create a receive thread for
> - */
> -static void fcoe_percpu_thread_create(unsigned int cpu)
> +static void fcoe_thread_cleanup_local(unsigned int cpu)
>  {
> -	struct fcoe_percpu_s *p;
> -	struct task_struct *thread;
> -
> -	p = &per_cpu(fcoe_percpu, cpu);
> -
> -	thread = kthread_create_on_node(fcoe_percpu_receive_thread,
> -					(void *)p, cpu_to_node(cpu),
> -					"fcoethread/%d", cpu);
> -
> -	if (likely(!IS_ERR(thread))) {
> -		kthread_bind(thread, cpu);
> -		wake_up_process(thread);
> -
> -		spin_lock_bh(&p->fcoe_rx_list.lock);
> -		p->thread = thread;
> -		spin_unlock_bh(&p->fcoe_rx_list.lock);
> -	}
> -}
> -
> -/**
> - * fcoe_percpu_thread_destroy() - Remove the receive thread of a CPU
> - * @cpu: The CPU index of the CPU whose receive thread is to be destroyed
> - *
> - * Destroys a per-CPU Rx thread. Any pending skbs are moved to the
> - * current CPU's Rx thread. If the thread being destroyed is bound to
> - * the CPU processing this context the skbs will be freed.
> - */
> -static void fcoe_percpu_thread_destroy(unsigned int cpu)
> -{
> -	struct fcoe_percpu_s *p;
> -	struct task_struct *thread;
>  	struct page *crc_eof;
> -	struct sk_buff *skb;
> -#ifdef CONFIG_SMP
> -	struct fcoe_percpu_s *p0;
> -	unsigned targ_cpu = get_cpu();
> -#endif /* CONFIG_SMP */
> +	struct fcoe_percpu_s *p;
>  
> -	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
> -
> -	/* Prevent any new skbs from being queued for this CPU. */
> -	p = &per_cpu(fcoe_percpu, cpu);
> +	p = per_cpu_ptr(&fcoe_percpu, cpu);
>  	spin_lock_bh(&p->fcoe_rx_list.lock);
> -	thread = p->thread;
> -	p->thread = NULL;
>  	crc_eof = p->crc_eof_page;
>  	p->crc_eof_page = NULL;
>  	p->crc_eof_offset = 0;
>  	spin_unlock_bh(&p->fcoe_rx_list.lock);
>  
> -#ifdef CONFIG_SMP
> -	/*
> -	 * Don't bother moving the skb's if this context is running
> -	 * on the same CPU that is having its thread destroyed. This
> -	 * can easily happen when the module is removed.
> -	 */
> -	if (cpu != targ_cpu) {
> -		p0 = &per_cpu(fcoe_percpu, targ_cpu);
> -		spin_lock_bh(&p0->fcoe_rx_list.lock);
> -		if (p0->thread) {
> -			FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
> -				 cpu, targ_cpu);
> -
> -			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
> -				__skb_queue_tail(&p0->fcoe_rx_list, skb);
> -			spin_unlock_bh(&p0->fcoe_rx_list.lock);
> -		} else {
> -			/*
> -			 * The targeted CPU is not initialized and cannot accept
> -			 * new	skbs. Unlock the targeted CPU and drop the skbs
> -			 * on the CPU that is going offline.
> -			 */
> -			while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
> -				kfree_skb(skb);
> -			spin_unlock_bh(&p0->fcoe_rx_list.lock);
> -		}
> -	} else {
> -		/*
> -		 * This scenario occurs when the module is being removed
> -		 * and all threads are being destroyed. skbs will continue
> -		 * to be shifted from the CPU thread that is being removed
> -		 * to the CPU thread associated with the CPU that is processing
> -		 * the module removal. Once there is only one CPU Rx thread it
> -		 * will reach this case and we will drop all skbs and later
> -		 * stop the thread.
> -		 */
> -		spin_lock_bh(&p->fcoe_rx_list.lock);
> -		while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
> -			kfree_skb(skb);
> -		spin_unlock_bh(&p->fcoe_rx_list.lock);
> -	}
> -	put_cpu();
> -#else
> -	/*
> -	 * This a non-SMP scenario where the singular Rx thread is
> -	 * being removed. Free all skbs and stop the thread.
> -	 */
> -	spin_lock_bh(&p->fcoe_rx_list.lock);
> -	while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
> -		kfree_skb(skb);
> -	spin_unlock_bh(&p->fcoe_rx_list.lock);
> -#endif
> -
> -	if (thread)
> -		kthread_stop(thread);
> -
>  	if (crc_eof)
>  		put_page(crc_eof);
> -}
> -
> -/**
> - * fcoe_cpu_callback() - Handler for CPU hotplug events
> - * @nfb:    The callback data block
> - * @action: The event triggering the callback
> - * @hcpu:   The index of the CPU that the event is for
> - *
> - * This creates or destroys per-CPU data for fcoe
> - *
> - * Returns NOTIFY_OK always.
> - */
> -static int fcoe_cpu_callback(struct notifier_block *nfb,
> -			     unsigned long action, void *hcpu)
> -{
> -	unsigned cpu = (unsigned long)hcpu;
> -
> -	switch (action) {
> -	case CPU_ONLINE:
> -	case CPU_ONLINE_FROZEN:
> -		FCOE_DBG("CPU %x online: Create Rx thread\n", cpu);
> -		fcoe_percpu_thread_create(cpu);
> -		break;
> -	case CPU_DEAD:
> -	case CPU_DEAD_FROZEN:
> -		FCOE_DBG("CPU %x offline: Remove Rx thread\n", cpu);
> -		fcoe_percpu_thread_destroy(cpu);
> -		break;
> -	default:
> -		break;
> -	}
> -	return NOTIFY_OK;
> +	flush_work(&p->work);
>  }
>  
>  /**
> @@ -1509,26 +1368,6 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
>  
>  	fps = &per_cpu(fcoe_percpu, cpu);
>  	spin_lock(&fps->fcoe_rx_list.lock);
> -	if (unlikely(!fps->thread)) {
> -		/*
> -		 * The targeted CPU is not ready, let's target
> -		 * the first CPU now. For non-SMP systems this
> -		 * will check the same CPU twice.
> -		 */
> -		FCOE_NETDEV_DBG(netdev, "CPU is online, but no receive thread "
> -				"ready for incoming skb- using first online "
> -				"CPU.\n");
> -
> -		spin_unlock(&fps->fcoe_rx_list.lock);
> -		cpu = cpumask_first(cpu_online_mask);
> -		fps = &per_cpu(fcoe_percpu, cpu);
> -		spin_lock(&fps->fcoe_rx_list.lock);
> -		if (!fps->thread) {
> -			spin_unlock(&fps->fcoe_rx_list.lock);
> -			goto err;
> -		}
> -	}
> -
>  	/*
>  	 * We now have a valid CPU that we're targeting for
>  	 * this skb. We also have this receive thread locked,
> @@ -1543,8 +1382,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
>  	 * in softirq context.
>  	 */
>  	__skb_queue_tail(&fps->fcoe_rx_list, skb);
> -	if (fps->thread->state == TASK_INTERRUPTIBLE)
> -		wake_up_process(fps->thread);
> +	schedule_work_on(cpu, &fps->work);
>  	spin_unlock(&fps->fcoe_rx_list.lock);
>  
>  	return NET_RX_SUCCESS;
> @@ -1713,15 +1551,6 @@ static int fcoe_xmit(struct fc_lport *lport, struct fc_frame *fp)
>  }
>  
>  /**
> - * fcoe_percpu_flush_done() - Indicate per-CPU queue flush completion
> - * @skb: The completed skb (argument required by destructor)
> - */
> -static void fcoe_percpu_flush_done(struct sk_buff *skb)
> -{
> -	complete(&fcoe_flush_completion);
> -}
> -
> -/**
>   * fcoe_filter_frames() - filter out bad fcoe frames, i.e. bad CRC
>   * @lport: The local port the frame was received on
>   * @fp:	   The received frame
> @@ -1792,8 +1621,7 @@ static void fcoe_recv_frame(struct sk_buff *skb)
>  	fr = fcoe_dev_from_skb(skb);
>  	lport = fr->fr_dev;
>  	if (unlikely(!lport)) {
> -		if (skb->destructor != fcoe_percpu_flush_done)
> -			FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
> +		FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
>  		kfree_skb(skb);
>  		return;
>  	}
> @@ -1857,40 +1685,28 @@ static void fcoe_recv_frame(struct sk_buff *skb)
>  }
>  
>  /**
> - * fcoe_percpu_receive_thread() - The per-CPU packet receive thread
> - * @arg: The per-CPU context
> + * fcoe_receive_work() - The per-CPU worker
> + * @work: The work struct
>   *
> - * Return: 0 for success
>   */
> -static int fcoe_percpu_receive_thread(void *arg)
> +static void fcoe_receive_work(struct work_struct *work)
>  {
> -	struct fcoe_percpu_s *p = arg;
> +	struct fcoe_percpu_s *p;
>  	struct sk_buff *skb;
>  	struct sk_buff_head tmp;
>  
> +	p = container_of(work, struct fcoe_percpu_s, work);
>  	skb_queue_head_init(&tmp);
>  
> -	set_user_nice(current, MIN_NICE);
> +	spin_lock_bh(&p->fcoe_rx_list.lock);
> +	skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
> +	spin_unlock_bh(&p->fcoe_rx_list.lock);
>  
> -	while (!kthread_should_stop()) {
> +	if (!skb_queue_len(&tmp))
> +		return;
>  
> -		spin_lock_bh(&p->fcoe_rx_list.lock);
> -		skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
> -
> -		if (!skb_queue_len(&tmp)) {
> -			set_current_state(TASK_INTERRUPTIBLE);
> -			spin_unlock_bh(&p->fcoe_rx_list.lock);
> -			schedule();
> -			continue;
> -		}
> -
> -		spin_unlock_bh(&p->fcoe_rx_list.lock);
> -
> -		while ((skb = __skb_dequeue(&tmp)) != NULL)
> -			fcoe_recv_frame(skb);
> -
> -	}
> -	return 0;
> +	while ((skb = __skb_dequeue(&tmp)))
> +		fcoe_recv_frame(skb);
>  }
>  
>  /**
> @@ -2450,36 +2266,19 @@ static int fcoe_link_ok(struct fc_lport *lport)
>   *
>   * Must be called with fcoe_create_mutex held to single-thread completion.
>   *
> - * This flushes the pending skbs by adding a new skb to each queue and
> - * waiting until they are all freed.  This assures us that not only are
> - * there no packets that will be handled by the lport, but also that any
> - * threads already handling packet have returned.
> + * This flushes the pending skbs by flushing the work item for each CPU. The work
> + * item on each possible CPU is flushed because we may have used the per-CPU
> + * struct of an offline CPU.
>   */
>  static void fcoe_percpu_clean(struct fc_lport *lport)
>  {
>  	struct fcoe_percpu_s *pp;
> -	struct sk_buff *skb;
>  	unsigned int cpu;
>  
>  	for_each_possible_cpu(cpu) {
>  		pp = &per_cpu(fcoe_percpu, cpu);
>  
> -		if (!pp->thread || !cpu_online(cpu))
> -			continue;
> -
> -		skb = dev_alloc_skb(0);
> -		if (!skb)
> -			continue;
> -
> -		skb->destructor = fcoe_percpu_flush_done;
> -
> -		spin_lock_bh(&pp->fcoe_rx_list.lock);
> -		__skb_queue_tail(&pp->fcoe_rx_list, skb);
> -		if (pp->fcoe_rx_list.qlen == 1)
> -			wake_up_process(pp->thread);
> -		spin_unlock_bh(&pp->fcoe_rx_list.lock);
> -
> -		wait_for_completion(&fcoe_flush_completion);
> +		flush_work(&pp->work);
>  	}
>  }
>  
> @@ -2625,22 +2424,11 @@ static int __init fcoe_init(void)
>  	mutex_lock(&fcoe_config_mutex);
>  
>  	for_each_possible_cpu(cpu) {
> -		p = &per_cpu(fcoe_percpu, cpu);
> +		p = per_cpu_ptr(&fcoe_percpu, cpu);
> +		INIT_WORK(&p->work, fcoe_receive_work);
>  		skb_queue_head_init(&p->fcoe_rx_list);
>  	}
>  
> -	cpu_notifier_register_begin();
> -
> -	for_each_online_cpu(cpu)
> -		fcoe_percpu_thread_create(cpu);
> -
> -	/* Initialize per CPU interrupt thread */
> -	rc = __register_hotcpu_notifier(&fcoe_cpu_notifier);
> -	if (rc)
> -		goto out_free;
> -
> -	cpu_notifier_register_done();
> -
>  	/* Setup link change notification */
>  	fcoe_dev_setup();
>  
> @@ -2652,12 +2440,6 @@ static int __init fcoe_init(void)
>  	return 0;
>  
>  out_free:
> -	for_each_online_cpu(cpu) {
> -		fcoe_percpu_thread_destroy(cpu);
> -	}
> -
> -	cpu_notifier_register_done();
> -
>  	mutex_unlock(&fcoe_config_mutex);
>  	destroy_workqueue(fcoe_wq);
>  	return rc;
> @@ -2690,14 +2472,8 @@ static void __exit fcoe_exit(void)
>  	}
>  	rtnl_unlock();
>  
> -	cpu_notifier_register_begin();
> -
> -	for_each_online_cpu(cpu)
> -		fcoe_percpu_thread_destroy(cpu);
> -
> -	__unregister_hotcpu_notifier(&fcoe_cpu_notifier);
> -
> -	cpu_notifier_register_done();
> +	for_each_possible_cpu(cpu)
> +		fcoe_thread_cleanup_local(cpu);
>  
>  	mutex_unlock(&fcoe_config_mutex);
>  
> diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
> index de7e3ee60f0c..c6fbbb6581d3 100644
> --- a/include/scsi/libfcoe.h
> +++ b/include/scsi/libfcoe.h
> @@ -319,14 +319,16 @@ struct fcoe_transport {
>  
>  /**
>   * struct fcoe_percpu_s - The context for FCoE receive thread(s)
> - * @thread:	    The thread context
> + * @kthread:	    The thread context (used by bnx2fc)
> + * @work:	    The work item (used by fcoe)
>   * @fcoe_rx_list:   The queue of pending packets to process
>   * @page:	    The memory page for calculating frame trailer CRCs
>   * @crc_eof_offset: The offset into the CRC page pointing to available
>   *		    memory for a new trailer
>   */
>  struct fcoe_percpu_s {
> -	struct task_struct *thread;
> +	struct task_struct *kthread;
> +	struct work_struct work;
>  	struct sk_buff_head fcoe_rx_list;
>  	struct page *crc_eof_page;
>  	int crc_eof_offset;
> -- 
> 2.8.0.rc3
> 

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2] scsi/fcoe: convert to kworker
       [not found]             ` <20160610103812.ojgzop6qdv3mos5d-3LAbnSA0sDC4fIQPS+WK3rNAH6kLmebB@public.gmane.org>
@ 2016-07-01 19:09               ` Bynoe, Ronald J
  0 siblings, 0 replies; 27+ messages in thread
From: Bynoe, Ronald J @ 2016-07-01 19:09 UTC (permalink / raw)
  To: jthumshirn-l3A5Bk7waGM, bigeasy-hfZtesqFncYOwBW4kG4KsQ
  Cc: jejb-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA,
	martin.petersen-QHcLZuEGTsvQT0dZR+AlfA,
	hch-wEGCiKHe2LqWVfeAwA7xHQ, rt-hfZtesqFncYOwBW4kG4KsQ,
	fcoe-devel-s9riP+hp16TNLxjTenLetw



On Fri, 2016-06-10 at 12:38 +0200, Johannes Thumshirn wrote:

On Tue, Apr 12, 2016 at 05:16:54PM +0200, Sebastian Andrzej Siewior wrote:


The driver creates its own per-CPU threads which are updated based on
CPU hotplug events. It is also possible to use kworkers and remove some
of the kthread infrastructure.

The code checked ->thread to decide if there is an active per-CPU
thread. By using the kworker infrastructure this is no longer possible (or
required). The thread pointer is saved in `kthread' instead of `thread' so
anything trying to use thread is caught by the compiler. Currently only the
bnx2fc driver is using struct fcoe_percpu_s and the kthread member.

After a CPU went offline, we may still enqueue items on the "offline"
CPU. This isn't much of a problem. The work will be done on a random
CPU. The allocated crc_eof_page page won't be cleaned up. It is probably
expected that the CPU comes up at some point so it should not be a
problem. The crc_eof_page memory is released of course once the module is
removed.

This patch was only compile-tested due to -ENODEV.

Cc: Vasu Dev <vasu.dev@intel.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: fcoe-devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>



Tested in a Boot from FCoE scenario using a BCM57840.

Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>


Tested-by: Ronald Bynoe <ronald.j.bynoe@intel.com>




---
v1…v2: use kworker instead of smbthread as per hch

If you want this I would the same for the two bnx drivers.

 drivers/scsi/bnx2fc/bnx2fc_fcoe.c |   8 +-
 drivers/scsi/fcoe/fcoe.c          | 276 ++++----------------------------------
 include/scsi/libfcoe.h            |   6 +-
 3 files changed, 34 insertions(+), 256 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index d7029ea5d319..cfb1b5b40d6c 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -466,7 +466,7 @@ static int bnx2fc_rcv(struct sk_buff *skb, struct net_device *dev,

        __skb_queue_tail(&bg->fcoe_rx_list, skb);
        if (bg->fcoe_rx_list.qlen == 1)
-               wake_up_process(bg->thread);
+               wake_up_process(bg->kthread);

        spin_unlock(&bg->fcoe_rx_list.lock);

@@ -2663,7 +2663,7 @@ static int __init bnx2fc_mod_init(void)
        }
        wake_up_process(l2_thread);
        spin_lock_bh(&bg->fcoe_rx_list.lock);
-       bg->thread = l2_thread;
+       bg->kthread = l2_thread;
        spin_unlock_bh(&bg->fcoe_rx_list.lock);

        for_each_possible_cpu(cpu) {
@@ -2736,8 +2736,8 @@ static void __exit bnx2fc_mod_exit(void)
        /* Destroy global thread */
        bg = &bnx2fc_global;
        spin_lock_bh(&bg->fcoe_rx_list.lock);
-       l2_thread = bg->thread;
-       bg->thread = NULL;
+       l2_thread = bg->kthread;
+       bg->kthread = NULL;
        while ((skb = __skb_dequeue(&bg->fcoe_rx_list)) != NULL)
                kfree_skb(skb);

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 0efe7112fc1f..f7c7ccc156da 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -67,9 +67,6 @@ static DEFINE_MUTEX(fcoe_config_mutex);

 static struct workqueue_struct *fcoe_wq;

-/* fcoe_percpu_clean completion.  Waiter protected by fcoe_create_mutex */
-static DECLARE_COMPLETION(fcoe_flush_completion);
-
 /* fcoe host list */
 /* must only by accessed under the RTNL mutex */
 static LIST_HEAD(fcoe_hostlist);
@@ -80,7 +77,6 @@ static int fcoe_reset(struct Scsi_Host *);
 static int fcoe_xmit(struct fc_lport *, struct fc_frame *);
 static int fcoe_rcv(struct sk_buff *, struct net_device *,
                    struct packet_type *, struct net_device *);
-static int fcoe_percpu_receive_thread(void *);
 static void fcoe_percpu_clean(struct fc_lport *);
 static int fcoe_link_ok(struct fc_lport *);

@@ -107,7 +103,6 @@ static int fcoe_ddp_setup(struct fc_lport *, u16, struct scatterlist *,
 static int fcoe_ddp_done(struct fc_lport *, u16);
 static int fcoe_ddp_target(struct fc_lport *, u16, struct scatterlist *,
                           unsigned int);
-static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *);
 static int fcoe_dcb_app_notification(struct notifier_block *notifier,
                                     ulong event, void *ptr);

@@ -136,11 +131,6 @@ static struct notifier_block fcoe_notifier = {
        .notifier_call = fcoe_device_notification,
 };

-/* notification function for CPU hotplug events */
-static struct notifier_block fcoe_cpu_notifier = {
-       .notifier_call = fcoe_cpu_callback,
-};
-
 /* notification function for DCB events */
 static struct notifier_block dcb_notifier = {
        .notifier_call = fcoe_dcb_app_notification,
@@ -1245,152 +1235,21 @@ static int __exit fcoe_if_exit(void)
        return 0;
 }

-/**
- * fcoe_percpu_thread_create() - Create a receive thread for an online CPU
- * @cpu: The CPU index of the CPU to create a receive thread for
- */
-static void fcoe_percpu_thread_create(unsigned int cpu)
+static void fcoe_thread_cleanup_local(unsigned int cpu)
 {
-       struct fcoe_percpu_s *p;
-       struct task_struct *thread;
-
-       p = &per_cpu(fcoe_percpu, cpu);
-
-       thread = kthread_create_on_node(fcoe_percpu_receive_thread,
-                                       (void *)p, cpu_to_node(cpu),
-                                       "fcoethread/%d", cpu);
-
-       if (likely(!IS_ERR(thread))) {
-               kthread_bind(thread, cpu);
-               wake_up_process(thread);
-
-               spin_lock_bh(&p->fcoe_rx_list.lock);
-               p->thread = thread;
-               spin_unlock_bh(&p->fcoe_rx_list.lock);
-       }
-}
-
-/**
- * fcoe_percpu_thread_destroy() - Remove the receive thread of a CPU
- * @cpu: The CPU index of the CPU whose receive thread is to be destroyed
- *
- * Destroys a per-CPU Rx thread. Any pending skbs are moved to the
- * current CPU's Rx thread. If the thread being destroyed is bound to
- * the CPU processing this context the skbs will be freed.
- */
-static void fcoe_percpu_thread_destroy(unsigned int cpu)
-{
-       struct fcoe_percpu_s *p;
-       struct task_struct *thread;
        struct page *crc_eof;
-       struct sk_buff *skb;
-#ifdef CONFIG_SMP
-       struct fcoe_percpu_s *p0;
-       unsigned targ_cpu = get_cpu();
-#endif /* CONFIG_SMP */
+       struct fcoe_percpu_s *p;

-       FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
-
-       /* Prevent any new skbs from being queued for this CPU. */
-       p = &per_cpu(fcoe_percpu, cpu);
+       p = per_cpu_ptr(&fcoe_percpu, cpu);
        spin_lock_bh(&p->fcoe_rx_list.lock);
-       thread = p->thread;
-       p->thread = NULL;
        crc_eof = p->crc_eof_page;
        p->crc_eof_page = NULL;
        p->crc_eof_offset = 0;
        spin_unlock_bh(&p->fcoe_rx_list.lock);

-#ifdef CONFIG_SMP
-       /*
-        * Don't bother moving the skb's if this context is running
-        * on the same CPU that is having its thread destroyed. This
-        * can easily happen when the module is removed.
-        */
-       if (cpu != targ_cpu) {
-               p0 = &per_cpu(fcoe_percpu, targ_cpu);
-               spin_lock_bh(&p0->fcoe_rx_list.lock);
-               if (p0->thread) {
-                       FCOE_DBG("Moving frames from CPU %d to CPU %d\n",
-                                cpu, targ_cpu);
-
-                       while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-                               __skb_queue_tail(&p0->fcoe_rx_list, skb);
-                       spin_unlock_bh(&p0->fcoe_rx_list.lock);
-               } else {
-                       /*
-                        * The targeted CPU is not initialized and cannot accept
-                        * new  skbs. Unlock the targeted CPU and drop the skbs
-                        * on the CPU that is going offline.
-                        */
-                       while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-                               kfree_skb(skb);
-                       spin_unlock_bh(&p0->fcoe_rx_list.lock);
-               }
-       } else {
-               /*
-                * This scenario occurs when the module is being removed
-                * and all threads are being destroyed. skbs will continue
-                * to be shifted from the CPU thread that is being removed
-                * to the CPU thread associated with the CPU that is processing
-                * the module removal. Once there is only one CPU Rx thread it
-                * will reach this case and we will drop all skbs and later
-                * stop the thread.
-                */
-               spin_lock_bh(&p->fcoe_rx_list.lock);
-               while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-                       kfree_skb(skb);
-               spin_unlock_bh(&p->fcoe_rx_list.lock);
-       }
-       put_cpu();
-#else
-       /*
-        * This a non-SMP scenario where the singular Rx thread is
-        * being removed. Free all skbs and stop the thread.
-        */
-       spin_lock_bh(&p->fcoe_rx_list.lock);
-       while ((skb = __skb_dequeue(&p->fcoe_rx_list)) != NULL)
-               kfree_skb(skb);
-       spin_unlock_bh(&p->fcoe_rx_list.lock);
-#endif
-
-       if (thread)
-               kthread_stop(thread);
-
        if (crc_eof)
                put_page(crc_eof);
-}
-
-/**
- * fcoe_cpu_callback() - Handler for CPU hotplug events
- * @nfb:    The callback data block
- * @action: The event triggering the callback
- * @hcpu:   The index of the CPU that the event is for
- *
- * This creates or destroys per-CPU data for fcoe
- *
- * Returns NOTIFY_OK always.
- */
-static int fcoe_cpu_callback(struct notifier_block *nfb,
-                            unsigned long action, void *hcpu)
-{
-       unsigned cpu = (unsigned long)hcpu;
-
-       switch (action) {
-       case CPU_ONLINE:
-       case CPU_ONLINE_FROZEN:
-               FCOE_DBG("CPU %x online: Create Rx thread\n", cpu);
-               fcoe_percpu_thread_create(cpu);
-               break;
-       case CPU_DEAD:
-       case CPU_DEAD_FROZEN:
-               FCOE_DBG("CPU %x offline: Remove Rx thread\n", cpu);
-               fcoe_percpu_thread_destroy(cpu);
-               break;
-       default:
-               break;
-       }
-       return NOTIFY_OK;
+       flush_work(&p->work);
 }

 /**
@@ -1509,26 +1368,6 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,

        fps = &per_cpu(fcoe_percpu, cpu);
        spin_lock(&fps->fcoe_rx_list.lock);
-       if (unlikely(!fps->thread)) {
-               /*
-                * The targeted CPU is not ready, let's target
-                * the first CPU now. For non-SMP systems this
-                * will check the same CPU twice.
-                */
-               FCOE_NETDEV_DBG(netdev, "CPU is online, but no receive thread "
-                               "ready for incoming skb- using first online "
-                               "CPU.\n");
-
-               spin_unlock(&fps->fcoe_rx_list.lock);
-               cpu = cpumask_first(cpu_online_mask);
-               fps = &per_cpu(fcoe_percpu, cpu);
-               spin_lock(&fps->fcoe_rx_list.lock);
-               if (!fps->thread) {
-                       spin_unlock(&fps->fcoe_rx_list.lock);
-                       goto err;
-               }
-       }
-
        /*
         * We now have a valid CPU that we're targeting for
         * this skb. We also have this receive thread locked,
@@ -1543,8 +1382,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
         * in softirq context.
         */
        __skb_queue_tail(&fps->fcoe_rx_list, skb);
-       if (fps->thread->state == TASK_INTERRUPTIBLE)
-               wake_up_process(fps->thread);
+       schedule_work_on(cpu, &fps->work);
        spin_unlock(&fps->fcoe_rx_list.lock);

        return NET_RX_SUCCESS;
@@ -1713,15 +1551,6 @@ static int fcoe_xmit(struct fc_lport *lport, struct fc_frame *fp)
 }

 /**
- * fcoe_percpu_flush_done() - Indicate per-CPU queue flush completion
- * @skb: The completed skb (argument required by destructor)
- */
-static void fcoe_percpu_flush_done(struct sk_buff *skb)
-{
-       complete(&fcoe_flush_completion);
-}
-
-/**
  * fcoe_filter_frames() - filter out bad fcoe frames, i.e. bad CRC
  * @lport: The local port the frame was received on
  * @fp:           The received frame
@@ -1792,8 +1621,7 @@ static void fcoe_recv_frame(struct sk_buff *skb)
        fr = fcoe_dev_from_skb(skb);
        lport = fr->fr_dev;
        if (unlikely(!lport)) {
-               if (skb->destructor != fcoe_percpu_flush_done)
-                       FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
+               FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb\n");
                kfree_skb(skb);
                return;
        }
@@ -1857,40 +1685,28 @@ static void fcoe_recv_frame(struct sk_buff *skb)
 }

 /**
- * fcoe_percpu_receive_thread() - The per-CPU packet receive thread
- * @arg: The per-CPU context
+ * fcoe_receive_work() - The per-CPU worker
+ * @work: The work struct
  *
- * Return: 0 for success
  */
-static int fcoe_percpu_receive_thread(void *arg)
+static void fcoe_receive_work(struct work_struct *work)
 {
-       struct fcoe_percpu_s *p = arg;
+       struct fcoe_percpu_s *p;
        struct sk_buff *skb;
        struct sk_buff_head tmp;

+       p = container_of(work, struct fcoe_percpu_s, work);
        skb_queue_head_init(&tmp);

-       set_user_nice(current, MIN_NICE);
+       spin_lock_bh(&p->fcoe_rx_list.lock);
+       skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
+       spin_unlock_bh(&p->fcoe_rx_list.lock);

-       while (!kthread_should_stop()) {
+       if (!skb_queue_len(&tmp))
+               return;

-               spin_lock_bh(&p->fcoe_rx_list.lock);
-               skb_queue_splice_init(&p->fcoe_rx_list, &tmp);
-
-               if (!skb_queue_len(&tmp)) {
-                       set_current_state(TASK_INTERRUPTIBLE);
-                       spin_unlock_bh(&p->fcoe_rx_list.lock);
-                       schedule();
-                       continue;
-               }
-
-               spin_unlock_bh(&p->fcoe_rx_list.lock);
-
-               while ((skb = __skb_dequeue(&tmp)) != NULL)
-                       fcoe_recv_frame(skb);
-
-       }
-       return 0;
+       while ((skb = __skb_dequeue(&tmp)))
+               fcoe_recv_frame(skb);
 }

 /**
@@ -2450,36 +2266,19 @@ static int fcoe_link_ok(struct fc_lport *lport)
  *
  * Must be called with fcoe_create_mutex held to single-thread completion.
  *
- * This flushes the pending skbs by adding a new skb to each queue and
- * waiting until they are all freed.  This assures us that not only are
- * there no packets that will be handled by the lport, but also that any
- * threads already handling packet have returned.
+ * This flushes the pending skbs by flushing the work item for each CPU. The
+ * work item on each possible CPU is flushed because we may have used the
+ * per-CPU struct of an offline CPU.
  */
 static void fcoe_percpu_clean(struct fc_lport *lport)
 {
        struct fcoe_percpu_s *pp;
-       struct sk_buff *skb;
        unsigned int cpu;

        for_each_possible_cpu(cpu) {
                pp = &per_cpu(fcoe_percpu, cpu);

-               if (!pp->thread || !cpu_online(cpu))
-                       continue;
-
-               skb = dev_alloc_skb(0);
-               if (!skb)
-                       continue;
-
-               skb->destructor = fcoe_percpu_flush_done;
-
-               spin_lock_bh(&pp->fcoe_rx_list.lock);
-               __skb_queue_tail(&pp->fcoe_rx_list, skb);
-               if (pp->fcoe_rx_list.qlen == 1)
-                       wake_up_process(pp->thread);
-               spin_unlock_bh(&pp->fcoe_rx_list.lock);
-
-               wait_for_completion(&fcoe_flush_completion);
+               flush_work(&pp->work);
        }
 }

@@ -2625,22 +2424,11 @@ static int __init fcoe_init(void)
        mutex_lock(&fcoe_config_mutex);

        for_each_possible_cpu(cpu) {
-               p = &per_cpu(fcoe_percpu, cpu);
+               p = per_cpu_ptr(&fcoe_percpu, cpu);
+               INIT_WORK(&p->work, fcoe_receive_work);
                skb_queue_head_init(&p->fcoe_rx_list);
        }

-       cpu_notifier_register_begin();
-
-       for_each_online_cpu(cpu)
-               fcoe_percpu_thread_create(cpu);
-
-       /* Initialize per CPU interrupt thread */
-       rc = __register_hotcpu_notifier(&fcoe_cpu_notifier);
-       if (rc)
-               goto out_free;
-
-       cpu_notifier_register_done();
-
        /* Setup link change notification */
        fcoe_dev_setup();

@@ -2652,12 +2440,6 @@ static int __init fcoe_init(void)
        return 0;

 out_free:
-       for_each_online_cpu(cpu) {
-               fcoe_percpu_thread_destroy(cpu);
-       }
-
-       cpu_notifier_register_done();
-
        mutex_unlock(&fcoe_config_mutex);
        destroy_workqueue(fcoe_wq);
        return rc;
@@ -2690,14 +2472,8 @@ static void __exit fcoe_exit(void)
        }
        rtnl_unlock();

-       cpu_notifier_register_begin();
-
-       for_each_online_cpu(cpu)
-               fcoe_percpu_thread_destroy(cpu);
-
-       __unregister_hotcpu_notifier(&fcoe_cpu_notifier);
-
-       cpu_notifier_register_done();
+       for_each_possible_cpu(cpu)
+               fcoe_thread_cleanup_local(cpu);

        mutex_unlock(&fcoe_config_mutex);

diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
index de7e3ee60f0c..c6fbbb6581d3 100644
--- a/include/scsi/libfcoe.h
+++ b/include/scsi/libfcoe.h
@@ -319,14 +319,16 @@ struct fcoe_transport {

 /**
  * struct fcoe_percpu_s - The context for FCoE receive thread(s)
- * @thread:        The thread context
+ * @kthread:       The thread context (used by bnx2fc)
+ * @work:          The work item (used by fcoe)
  * @fcoe_rx_list:   The queue of pending packets to process
  * @page:          The memory page for calculating frame trailer CRCs
  * @crc_eof_offset: The offset into the CRC page pointing to available
  *                 memory for a new trailer
  */
 struct fcoe_percpu_s {
-       struct task_struct *thread;
+       struct task_struct *kthread;
+       struct work_struct work;
        struct sk_buff_head fcoe_rx_list;
        struct page *crc_eof_page;
        int crc_eof_offset;
--
2.8.0.rc3

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
fcoe-devel mailing list
fcoe-devel@open-fcoe.org
http://lists.open-fcoe.org/mailman/listinfo/fcoe-devel


* Re: [PATCH v2] scsi/fcoe: convert to kworker
  2016-06-10 10:38           ` Johannes Thumshirn
       [not found]             ` <20160610103812.ojgzop6qdv3mos5d-3LAbnSA0sDC4fIQPS+WK3rNAH6kLmebB@public.gmane.org>
@ 2016-07-04  8:23             ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-07-04  8:23 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Christoph Hellwig, linux-scsi, James E.J. Bottomley,
	Martin K. Petersen, rt, Vasu Dev, fcoe-devel

On 06/10/2016 12:38 PM, Johannes Thumshirn wrote:
…

> 
> Tested in a Boot from FCoE scenario using a BCM57840.

This got merged over the weekend. Thanks for that. I will try to look
into the other two (bnx2i, bnx2fc) and convert them as well this
week.

Sebastian


end of thread, other threads:[~2016-07-04  8:24 UTC | newest]

Thread overview: 27+ messages
2016-03-11 15:28 [PATCH 00/11] SCSI smpboot thread conversion Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 01/11] scsi/fcoe: lock online CPUs in fcoe_percpu_clean() Sebastian Andrzej Siewior
2016-03-11 16:17   ` Christoph Hellwig
2016-03-11 16:32     ` Sebastian Andrzej Siewior
2016-03-15  8:19       ` Christoph Hellwig
2016-04-08 13:30         ` Sebastian Andrzej Siewior
2016-04-08 18:14           ` Sebastian Andrzej Siewior
2016-04-12 15:16         ` [PATCH v2] scsi/fcoe: convert to kworker Sebastian Andrzej Siewior
2016-04-22 15:27           ` [PREEMPT-RT] " Sebastian Andrzej Siewior
2016-04-22 15:49             ` James Bottomley
2016-04-22 16:39               ` Laurence Oberman
     [not found]                 ` <186981952.31194082.1461343179889.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-06-09 13:09                   ` Sebastian Andrzej Siewior
2016-06-09 13:15                     ` Laurence Oberman
2016-06-09 13:22                       ` Sebastian Andrzej Siewior
2016-06-10 10:38           ` Johannes Thumshirn
     [not found]             ` <20160610103812.ojgzop6qdv3mos5d-3LAbnSA0sDC4fIQPS+WK3rNAH6kLmebB@public.gmane.org>
2016-07-01 19:09               ` Bynoe, Ronald J
2016-07-04  8:23             ` Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 02/11] scsi/fcoe: remove CONFIG_SMP in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 03/11] scsi/fcoe: drop locking in fcoe_percpu_thread_destroy() if cpu == targ_cpu Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 04/11] scsi/fcoe: rename p0 to p_target in fcoe_percpu_thread_destroy() Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 05/11] scsi/fcoe: drop the p_target lock earlier if there is no thread online Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 06/11] scsi/fcoe: use skb_queue_splice_tail() intead of manual job Sebastian Andrzej Siewior
2016-03-11 15:28 ` [PATCH 07/11] scsi/fcoe: drop the crc_eof page early Sebastian Andrzej Siewior
2016-03-11 15:29 ` [PATCH 08/11] scsi/fcoe: convert to smpboot thread Sebastian Andrzej Siewior
2016-03-11 15:29 ` [PATCH 09/11] scsi: bnx2i: " Sebastian Andrzej Siewior
2016-03-11 15:29 ` [PATCH 10/11] scsi: bnx2fc: fix hotplug race in bnx2fc_process_new_cqes() Sebastian Andrzej Siewior
2016-03-11 15:29 ` [PATCH 11/11] scsi: bnx2fc: convert to smpboot thread Sebastian Andrzej Siewior
