* [PATCH 00/55] Convert skd driver to blk-mq
@ 2017-08-17 20:12 Bart Van Assche
  2017-08-17 20:12 ` [PATCH 01/55] block: Relax a check in blk_start_queue() Bart Van Assche
                   ` (56 more replies)
  0 siblings, 57 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche

Hello Jens,

As you know, all existing single-queue block drivers have to be converted
to blk-mq before the single-queue block layer can be removed. Hence this
patch series, which converts the skd (sTec s1120) driver to blk-mq. As the
performance numbers below show, this patch series does not significantly
affect the performance of the skd driver:

======================================================================
sTec Measurements
===================
Kernel module configuration
...........................
$ cat /etc/modprobe.d/skd.conf
options skd skd_max_queue_depth=200 skd_isr_type=1

blk-sq driver
.............
Kernel: 4.11.10-300.fc26.x86_64
$ (cd /sys/block/skd*/queue && grep -aH '' add_random hw_sector_size max_segments nr_requests rotational rq_affinity scheduler write_cache)
add_random:0
hw_sector_size:512
max_segments:256
nr_requests:128
rotational:0
rq_affinity:2
scheduler:[noop] deadline cfq
write_cache:write back

$ ~bart/software/tools/measure-latency /dev/skd* 512 |&
  tee measurements.txt
I/O pattern: randread
     lat (usec): min=16, max=550, avg=88.33, stdev=14.85
I/O pattern: randwrite
     lat (usec): min=20, max=5096, avg=26.35, stdev=56.03
$ for opt in "" "-w"; do for s in 512 4096 65536; do \
  ~bart/software/tools/max-iops $opt -b$s -j1 /dev/skd*; done; done |&
  tee measurements.txt
   read: IOPS=103k, BW=50.1MiB/s (52.6MB/s)(3006MiB/60002msec)
   read: IOPS=81.4k, BW=318MiB/s (333MB/s)(18.7GiB/60003msec)
   read: IOPS=15.7k, BW=978MiB/s (1026MB/s)(57.4GiB/60015msec)
  write: IOPS=62.4k, BW=30.5MiB/s (31.1MB/s)(1826MiB/60004msec)
  write: IOPS=68.8k, BW=266MiB/s (279MB/s)(15.6GiB/60004msec)
  write: IOPS=13.9k, BW=818MiB/s (858MB/s)(47.1GiB/60012msec)

blk-mq driver
.............
Kernel: 4.13.0-rc2+
$ uname -r
4.13.0-rc2+
$ (cd /sys/block/skd*/queue && grep -aH '' add_random hw_sector_size max_segments nr_requests rotational rq_affinity scheduler write_cache)
add_random:0
hw_sector_size:512
max_segments:256
nr_requests:100
rotational:0
rq_affinity:2
scheduler:[none]
write_cache:write back

$ ~bart/software/tools/measure-latency /dev/skd* 512 |&
  tee measurements.txt
I/O pattern: randread
     lat (usec): min=18, max=297, avg=91.02, stdev=13.16
I/O pattern: randwrite
     lat (usec): min=20, max=4680, avg=26.96, stdev=54.80
$ for opt in "" "-w"; do for s in 512 4096 65536; do \
  ~bart/software/tools/max-iops $opt -b$s -j1 /dev/skd*; done; done |&
  tee measurements.txt
   read: IOPS=101k, BW=49.4MiB/s (51.8MB/s)(2959MiB/60002msec)
   read: IOPS=83.3k, BW=325MiB/s (341MB/s)(19.6GiB/60003msec)
   read: IOPS=15.7k, BW=977MiB/s (1024MB/s)(57.3GiB/60019msec)
  write: IOPS=63.2k, BW=30.8MiB/s (32.3MB/s)(1846MiB/60003msec)
  write: IOPS=70.3k, BW=274MiB/s (288MB/s)(16.9GiB/60003msec)
  write: IOPS=13.2k, BW=823MiB/s (863MB/s)(48.3GiB/60012msec)
======================================================================

Please consider this patch series for kernel v4.14.

Thanks,

Bart.

Bart Van Assche (55):
  block: Relax a check in blk_start_queue()
  skd: Avoid that module unloading triggers a use-after-free
  skd: Submit requests to firmware before triggering the doorbell
  skd: Switch to GPLv2
  skd: Update maintainer information
  skd: Remove unneeded #include directives
  skd: Remove ESXi code
  skd: Remove unnecessary blank lines
  skd: Avoid that gcc 7 warns about fall-through when building with W=1
  skd: Fix spelling in a source code comment
  skd: Fix a function name in a comment
  skd: Remove set-but-not-used local variables
  skd: Remove a set-but-not-used variable from struct skd_device
  skd: Remove useless barrier() calls
  skd: Switch from the pr_*() to the dev_*() logging functions
  skd: Fix endianness annotations
  skd: Document locking assumptions
  skd: Introduce the symbolic constant SKD_MAX_REQ_PER_MSG
  skd: Introduce SKD_SKCOMP_SIZE
  skd: Fix size argument in skd_free_skcomp()
  skd: Reorder the code in skd_process_request()
  skd: Simplify the code for deciding whether or not to send a FIT msg
  skd: Simplify the code for allocating DMA message buffers
  skd: Use a structure instead of hardcoding structure offsets
  skd: Check structure sizes at build time
  skd: Use __packed only when needed
  skd: Make the skd_isr() code more brief
  skd: Use ARRAY_SIZE() where appropriate
  skd: Simplify the code for handling data direction
  skd: Remove superfluous initializations from
    skd_isr_completion_posted()
  skd: Drop second argument of skd_recover_requests()
  skd: Use for_each_sg()
  skd: Remove a redundant init_timer() call
  skd: Remove superfluous occurrences of the 'volatile' keyword
  skd: Use kcalloc() instead of kzalloc() with multiply
  skd: Use symbolic names for SCSI opcodes
  skd: Move a function definition
  skd: Rework request failing code path
  skd: Convert explicit skd_request_fn() calls
  skd: Remove SG IO support
  skd: Remove dead code
  skd: Initialize skd_special_context.req.n_sg to one
  skd: Enable request tags for the block layer queue
  skd: Convert several per-device scalar variables into atomics
  skd: Introduce skd_process_request()
  skd: Split skd_recover_requests()
  skd: Move skd_free_sg_list() up
  skd: Coalesce struct request and struct skd_request_context
  skd: Convert to blk-mq
  skd: Switch to block layer timeout mechanism
  skd: Remove skd_device.in_flight
  skd: Reduce memory usage
  skd: Remove several local variables
  skd: Optimize locking
  skd: Bump driver version

 MAINTAINERS               |    6 +
 block/blk-core.c          |    2 +-
 drivers/block/skd_main.c  | 3196 ++++++++++++---------------------------------
 drivers/block/skd_s1120.h |   38 +-
 4 files changed, 846 insertions(+), 2396 deletions(-)

-- 
2.14.0

* [PATCH 01/55] block: Relax a check in blk_start_queue()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 02/55] skd: Avoid that module unloading triggers a use-after-free Bart Van Assche
                   ` (55 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Paolo 'Blaisorblade' Giarrusso,
	Andrew Morton, Hannes Reinecke, Johannes Thumshirn, stable

Calling blk_start_queue() from interrupt context with the queue
lock held and without disabling IRQs, as the skd driver does, is
safe. This patch prevents the following warning from being
triggered when the skd driver is loaded:

WARNING: CPU: 11 PID: 1348 at block/blk-core.c:283 blk_start_queue+0x84/0xa0
RIP: 0010:blk_start_queue+0x84/0xa0
Call Trace:
 skd_unquiesce_dev+0x12a/0x1d0 [skd]
 skd_complete_internal+0x1e7/0x5a0 [skd]
 skd_complete_other+0xc2/0xd0 [skd]
 skd_isr_completion_posted.isra.30+0x2a5/0x470 [skd]
 skd_isr+0x14f/0x180 [skd]
 irq_forced_thread_fn+0x2a/0x70
 irq_thread+0x144/0x1a0
 kthread+0x125/0x140
 ret_from_fork+0x2a/0x40
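
For reference, the calling pattern that the relaxed check has to accept
looks roughly like this (a minimal sketch with made-up names, not the
actual skd code):

  /*
   * Sketch: an interrupt handler restarts the queue while holding the
   * queue lock with plain spin_lock(), i.e. without disabling IRQs
   * itself; sketch_dev and sketch_device_ready() are illustrative.
   */
  static irqreturn_t sketch_isr(int irq, void *arg)
  {
          struct sketch_dev *dev = arg;

          spin_lock(&dev->lock);          /* dev->lock is q->queue_lock */
          if (sketch_device_ready(dev))
                  blk_start_queue(dev->queue);
          spin_unlock(&dev->lock);

          return IRQ_HANDLED;
  }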

Fixes: commit a038e2536472 ("[PATCH] blk_start_queue() must be called with irq disabled - add warning")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Andrew Morton <akpm@osdl.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: <stable@vger.kernel.org>
---
 block/blk-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d836c84ad3da..d579501f24ba 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -280,7 +280,7 @@ EXPORT_SYMBOL(blk_start_queue_async);
 void blk_start_queue(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
-	WARN_ON(!irqs_disabled());
+	WARN_ON(!in_interrupt() && !irqs_disabled());
 	WARN_ON_ONCE(q->mq_ops);
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
-- 
2.14.0

* [PATCH 02/55] skd: Avoid that module unloading triggers a use-after-free
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
  2017-08-17 20:12 ` [PATCH 01/55] block: Relax a check in blk_start_queue() Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 03/55] skd: Submit requests to firmware before triggering the doorbell Bart Van Assche
                   ` (54 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn, stable

Since put_disk() triggers a disk_release() call, and since
disk_release() calls blk_put_queue() if disk->queue != NULL, clear
the disk->queue pointer before calling put_disk(). This prevents
the following use-after-free from being triggered when the skd
kernel module is unloaded:

WARNING: CPU: 8 PID: 297 at lib/refcount.c:128 refcount_sub_and_test+0x70/0x80
refcount_t: underflow; use-after-free.
CPU: 8 PID: 297 Comm: kworker/8:1 Not tainted 4.11.10-300.fc26.x86_64 #1
Workqueue: events work_for_cpu_fn
Call Trace:
 dump_stack+0x63/0x84
 __warn+0xcb/0xf0
 warn_slowpath_fmt+0x5a/0x80
 refcount_sub_and_test+0x70/0x80
 refcount_dec_and_test+0x11/0x20
 kobject_put+0x1f/0x50
 blk_put_queue+0x15/0x20
 disk_release+0xae/0xf0
 device_release+0x32/0x90
 kobject_release+0x67/0x170
 kobject_put+0x2b/0x50
 put_disk+0x17/0x20
 skd_destruct+0x5c/0x890 [skd]
 skd_pci_probe+0x124d/0x13a0 [skd]
 local_pci_probe+0x42/0xa0
 work_for_cpu_fn+0x14/0x20
 process_one_work+0x19e/0x470
 worker_thread+0x1dc/0x4a0
 kthread+0x125/0x140
 ret_from_fork+0x25/0x30
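
In short, the teardown ordering that avoids dropping the queue
reference twice looks as follows (a simplified sketch of the change
below):

  if (disk && (disk->flags & GENHD_FL_UP))
          del_gendisk(disk);
  if (skdev->queue) {
          /* Drops the queue reference taken at allocation time. */
          blk_cleanup_queue(skdev->queue);
          skdev->queue = NULL;
          /* Keep disk_release() from calling blk_put_queue() again. */
          disk->queue = NULL;
  }
  put_disk(disk);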

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: <stable@vger.kernel.org>
---
 drivers/block/skd_main.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index d0368682bd43..edab9c04e8ad 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -4539,15 +4539,16 @@ static void skd_free_disk(struct skd_device *skdev)
 {
 	struct gendisk *disk = skdev->disk;
 
-	if (disk != NULL) {
-		struct request_queue *q = disk->queue;
-
-		if (disk->flags & GENHD_FL_UP)
-			del_gendisk(disk);
-		if (q)
-			blk_cleanup_queue(q);
-		put_disk(disk);
+	if (disk && (disk->flags & GENHD_FL_UP))
+		del_gendisk(disk);
+
+	if (skdev->queue) {
+		blk_cleanup_queue(skdev->queue);
+		skdev->queue = NULL;
+		disk->queue = NULL;
 	}
+
+	put_disk(disk);
 	skdev->disk = NULL;
 }
 
-- 
2.14.0

* [PATCH 03/55] skd: Submit requests to firmware before triggering the doorbell
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
  2017-08-17 20:12 ` [PATCH 01/55] block: Relax a check in blk_start_queue() Bart Van Assche
  2017-08-17 20:12 ` [PATCH 02/55] skd: Avoid that module unloading triggers a use-after-free Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 04/55] skd: Switch to GPLv2 Bart Van Assche
                   ` (53 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn, stable

Ensure that the members of struct skd_msg_buf have been transferred
to the PCIe adapter before the doorbell is triggered. This patch
prevents sporadic I/O failures that are accompanied by the following
error message:

(skd0:STM000196603:[0000:00:09.0]): Completion mismatch comp_id=0x0000 skreq=0x0400 new=0x0000
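
The underlying pattern is the usual "fill the message buffer, order the
stores, then ring the doorbell" sequence (a sketch; sketch_fill_fit_msg()
is a made-up placeholder for however the FIT message gets populated):

  /* Populate the DMA-visible message buffer first ... */
  sketch_fill_fit_msg(skmsg->msg_buf, skreq);
  /* ... order those CPU stores before the doorbell write ... */
  smp_wmb();
  /* ... and only then ring the doorbell. */
  SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND);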

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: <stable@vger.kernel.org>
---
 drivers/block/skd_main.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index edab9c04e8ad..153f20ce318b 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2160,6 +2160,9 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 		 */
 		qcmd |= FIT_QCMD_MSGSIZE_64;
 
+	/* Make sure skd_msg_buf is written before the doorbell is triggered. */
+	smp_wmb();
+
 	SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND);
 }
 
@@ -2202,6 +2205,9 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
 	qcmd = skspcl->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL + FIT_QCMD_MSGSIZE_128;
 
+	/* Make sure skd_msg_buf is written before the doorbell is triggered. */
+	smp_wmb();
+
 	SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND);
 }
 
-- 
2.14.0

* [PATCH 04/55] skd: Switch to GPLv2
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (2 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 03/55] skd: Submit requests to firmware before triggering the doorbell Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 05/55] skd: Update maintainer information Bart Van Assche
                   ` (52 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This change does not affect any skd driver version derived from a
dual-licensed code base, but it makes all code derived from future
upstream skd driver versions GPLv2-only.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c  | 25 +++++++++----------------
 drivers/block/skd_s1120.h | 12 +++++-------
 2 files changed, 14 insertions(+), 23 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 153f20ce318b..95a528f1fb9c 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -1,19 +1,12 @@
-/* Copyright 2012 STEC, Inc.
+/*
+ * Driver for sTec s1120 PCIe SSDs. sTec was acquired in 2013 by HGST and HGST
+ * was acquired by Western Digital in 2012.
+ *
+ * Copyright 2012 sTec, Inc.
+ * Copyright (c) 2017 Western Digital Corporation or its affiliates.
  *
- * This file is licensed under the terms of the 3-clause
- * BSD License (http://opensource.org/licenses/BSD-3-Clause)
- * or the GNU GPL-2.0 (http://www.gnu.org/licenses/gpl-2.0.html),
- * at your option. Both licenses are also available in the LICENSE file
- * distributed with this project. This file may not be copied, modified,
- * or distributed except in accordance with those terms.
- * Gordoni Waidhofer <gwaidhofer@stec-inc.com>
- * Initial Driver Design!
- * Thomas Swann <tswann@stec-inc.com>
- * Interrupt handling.
- * Ramprasad Chinthekindi <rchinthekindi@stec-inc.com>
- * biomode implementation.
- * Akhil Bhansali <abhansali@stec-inc.com>
- * Added support for DISCARD / FLUSH and FUA.
+ * This file is part of the Linux kernel, and is made available under
+ * the terms of the GNU General Public License version 2.
  */
 
 #include <linux/kernel.h>
@@ -80,7 +73,7 @@ enum {
 #define DRV_VER_COMPL   "2.2.1." DRV_BUILD_ID
 
 MODULE_AUTHOR("bug-reports: support@stec-inc.com");
-MODULE_LICENSE("Dual BSD/GPL");
+MODULE_LICENSE("GPL");
 
 MODULE_DESCRIPTION("STEC s1120 PCIe SSD block driver (b" DRV_BUILD_ID ")");
 MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
diff --git a/drivers/block/skd_s1120.h b/drivers/block/skd_s1120.h
index 61c757ff0161..82ce34454dbf 100644
--- a/drivers/block/skd_s1120.h
+++ b/drivers/block/skd_s1120.h
@@ -1,11 +1,9 @@
-/* Copyright 2012 STEC, Inc.
+/*
+ * Copyright 2012 STEC, Inc.
+ * Copyright (c) 2017 Western Digital Corporation or its affiliates.
  *
- * This file is licensed under the terms of the 3-clause
- * BSD License (http://opensource.org/licenses/BSD-3-Clause)
- * or the GNU GPL-2.0 (http://www.gnu.org/licenses/gpl-2.0.html),
- * at your option. Both licenses are also available in the LICENSE file
- * distributed with this project. This file may not be copied, modified,
- * or distributed except in accordance with those terms.
+ * This file is part of the Linux kernel, and is made available under
+ * the terms of the GNU General Public License version 2.
  */
 
 
-- 
2.14.0

* [PATCH 05/55] skd: Update maintainer information
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (3 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 04/55] skd: Switch to GPLv2 Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 06/55] skd: Remove unneeded #include directives Bart Van Assche
                   ` (51 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

E-mails sent to support@stec-inc.com bounce. Hence remove that
e-mail address from the driver. Add an entry to the MAINTAINERS
file instead.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 MAINTAINERS              | 6 ++++++
 drivers/block/skd_main.c | 1 -
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index f66488dfdbc9..1164f93a19f2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12482,6 +12482,12 @@ M:	Ion Badulescu <ionut@badula.org>
 S:	Odd Fixes
 F:	drivers/net/ethernet/adaptec/starfire*
 
+STEC S1220 SKD DRIVER
+M:	Bart Van Assche <bart.vanassche@wdc.com>
+L:	linux-block@vger.kernel.org
+S:	Maintained
+F:	drivers/block/skd*[ch]
+
 STI CEC DRIVER
 M:	Benjamin Gaignard <benjamin.gaignard@linaro.org>
 S:	Maintained
diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 95a528f1fb9c..a77a6550d6ea 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -72,7 +72,6 @@ enum {
 #define DRV_BIN_VERSION 0x100
 #define DRV_VER_COMPL   "2.2.1." DRV_BUILD_ID
 
-MODULE_AUTHOR("bug-reports: support@stec-inc.com");
 MODULE_LICENSE("GPL");
 
 MODULE_DESCRIPTION("STEC s1120 PCIe SSD block driver (b" DRV_BUILD_ID ")");
-- 
2.14.0

* [PATCH 06/55] skd: Remove unneeded #include directives
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (4 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 05/55] skd: Update maintainer information Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 07/55] skd: Remove ESXi code Bart Van Assche
                   ` (50 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a77a6550d6ea..06544f58dc73 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -20,7 +20,6 @@
 #include <linux/interrupt.h>
 #include <linux/compiler.h>
 #include <linux/workqueue.h>
-#include <linux/bitops.h>
 #include <linux/delay.h>
 #include <linux/time.h>
 #include <linux/hdreg.h>
@@ -30,7 +29,6 @@
 #include <linux/version.h>
 #include <linux/err.h>
 #include <linux/aer.h>
-#include <linux/ctype.h>
 #include <linux/wait.h>
 #include <linux/uio.h>
 #include <scsi/scsi.h>
-- 
2.14.0

* [PATCH 07/55] skd: Remove ESXi code
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (5 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 06/55] skd: Remove unneeded #include directives Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 08/55] skd: Remove unnecessary blank lines Bart Van Assche
                   ` (49 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since the code guarded by #ifdef SKD_VMK_POLL_HANDLER / #endif
is never built on Linux systems, remove it.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 06544f58dc73..74489da762a1 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -4777,20 +4777,6 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out_timer;
 	}
 
-
-#ifdef SKD_VMK_POLL_HANDLER
-	if (skdev->irq_type == SKD_IRQ_MSIX) {
-		/* MSIX completion handler is being used for coredump */
-		vmklnx_scsi_register_poll_handler(skdev->scsi_host,
-						  skdev->msix_entries[5].vector,
-						  skd_comp_q, skdev);
-	} else {
-		vmklnx_scsi_register_poll_handler(skdev->scsi_host,
-						  skdev->pdev->irq, skd_isr,
-						  skdev);
-	}
-#endif  /* SKD_VMK_POLL_HANDLER */
-
 	return rc;
 
 err_out_timer:
-- 
2.14.0

* [PATCH 08/55] skd: Remove unnecessary blank lines
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (6 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 07/55] skd: Remove ESXi code Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 09/55] skd: Avoid that gcc 7 warns about fall-through when building with W=1 Bart Van Assche
                   ` (48 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality but makes the skd
driver source code more uniform with that of other kernel drivers.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 74489da762a1..aa6bfd1391da 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -333,7 +333,6 @@ struct skd_device {
 
 	u32 timo_slot;
 
-
 	struct work_struct completion_worker;
 };
 
@@ -694,7 +693,6 @@ static void skd_request_fn(struct request_queue *q)
 		if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
 			skd_prep_zerosize_flush_cdb(scsi_req, skreq);
 			SKD_ASSERT(skreq->flush_cmd == 1);
-
 		} else {
 			skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
 		}
@@ -2004,16 +2002,14 @@ static void skd_complete_internal(struct skd_device *skdev,
 				skd_send_internal_skspcl(skdev, skspcl,
 							 READ_CAPACITY);
 			else {
-				pr_err(
-				       "(%s):*** W/R Buffer mismatch %d ***\n",
+				pr_err("(%s):*** W/R Buffer mismatch %d ***\n",
 				       skd_name(skdev), skdev->connect_retries);
 				if (skdev->connect_retries <
 				    SKD_MAX_CONNECT_RETRIES) {
 					skdev->connect_retries++;
 					skd_soft_reset(skdev);
 				} else {
-					pr_err(
-					       "(%s): W/R Buffer Connect Error\n",
+					pr_err("(%s): W/R Buffer Connect Error\n",
 					       skd_name(skdev));
 					return;
 				}
@@ -2621,7 +2617,6 @@ static void skd_process_scsi_inq(struct skd_device *skdev,
 		skd_do_driver_inq(skdev, skcomp, skerr, scsi_req->cdb, buf);
 }
 
-
 static int skd_isr_completion_posted(struct skd_device *skdev,
 					int limit, int *enqueued)
 {
@@ -3083,8 +3078,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 			skdev->cur_max_queue_depth * 2 / 3 + 1;
 		if (skdev->queue_low_water_mark < 1)
 			skdev->queue_low_water_mark = 1;
-		pr_info(
-		       "(%s): Queue depth limit=%d dev=%d lowat=%d\n",
+		pr_info("(%s): Queue depth limit=%d dev=%d lowat=%d\n",
 		       skd_name(skdev),
 		       skdev->cur_max_queue_depth,
 		       skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
@@ -4553,7 +4547,6 @@ static void skd_destruct(struct skd_device *skdev)
 	if (skdev == NULL)
 		return;
 
-
 	pr_debug("%s:%s:%d disk\n", skdev->name, __func__, __LINE__);
 	skd_free_disk(skdev);
 
@@ -4617,7 +4610,6 @@ static const struct block_device_operations skd_blockdev_ops = {
 	.getgeo		= skd_bdev_getgeo,
 };
 
-
 /*
  *****************************************************************************
  * PCIe DRIVER GLUE
@@ -4716,14 +4708,12 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_set_master(pdev);
 	rc = pci_enable_pcie_error_reporting(pdev);
 	if (rc) {
-		pr_err(
-		       "(%s): bad enable of PCIe error reporting rc=%d\n",
+		pr_err("(%s): bad enable of PCIe error reporting rc=%d\n",
 		       skd_name(skdev), rc);
 		skdev->pcie_error_reporting_is_enabled = 0;
 	} else
 		skdev->pcie_error_reporting_is_enabled = 1;
 
-
 	pci_set_drvdata(pdev, skdev);
 
 	for (i = 0; i < SKD_MAX_BARS; i++) {
@@ -4768,8 +4758,7 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	} else {
 		/* we timed out, something is wrong with the device,
 		   don't add the disk structure */
-		pr_err(
-		       "(%s): error: waiting for s1120 timed out %d!\n",
+		pr_err("(%s): error: waiting for s1120 timed out %d!\n",
 		       skd_name(skdev), rc);
 		/* in case of no error; we timeout with ENXIO */
 		if (!rc)
-- 
2.14.0

* [PATCH 09/55] skd: Avoid that gcc 7 warns about fall-through when building with W=1
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (7 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 08/55] skd: Remove unnecessary blank lines Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 10/55] skd: Fix spelling in a source code comment Bart Van Assche
                   ` (47 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index aa6bfd1391da..1d0ad31d2256 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2340,7 +2340,7 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 			blk_requeue_request(skdev->queue, skreq->req);
 			break;
 		}
-	/* fall through to report error */
+		/* fall through */
 
 	case SKD_CHECK_STATUS_REPORT_ERROR:
 	default:
-- 
2.14.0

* [PATCH 10/55] skd: Fix spelling in a source code comment
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (8 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 09/55] skd: Avoid that gcc 7 warns about fall-through when building with W=1 Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 11/55] skd: Fix a function name in a comment Bart Van Assche
                   ` (46 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Change "ptimal" into "optimal" and remove the misleading reference
to sysfs.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 1d0ad31d2256..6c7cf5327d22 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -4273,7 +4273,7 @@ static int skd_cons_disk(struct skd_device *skdev)
 	blk_queue_max_segments(q, skdev->sgs_per_request);
 	blk_queue_max_hw_sectors(q, SKD_N_MAX_SECTORS);
 
-	/* set sysfs ptimal_io_size to 8K */
+	/* set optimal I/O size to 8KB */
 	blk_queue_io_opt(q, 8192);
 
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
-- 
2.14.0

* [PATCH 11/55] skd: Fix a function name in a comment
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (9 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 10/55] skd: Fix spelling in a source code comment Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 12/55] skd: Remove set-but-not-used local variables Bart Van Assche
                   ` (45 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

There is no function skd_completion_posted_isr() in the skd driver
but there is a function called skd_isr_completion_posted(). Fix
the function name in the comment.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 6c7cf5327d22..5a88116efc97 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2790,7 +2790,7 @@ static void skd_complete_other(struct skd_device *skdev,
 	switch (req_table) {
 	case SKD_ID_RW_REQUEST:
 		/*
-		 * The caller, skd_completion_posted_isr() above,
+		 * The caller, skd_isr_completion_posted() above,
 		 * handles r/w requests. The only way we get here
 		 * is if the req_slot is out of bounds.
 		 */
-- 
2.14.0

* [PATCH 12/55] skd: Remove set-but-not-used local variables
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (10 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 11/55] skd: Fix a function name in a comment Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 13/55] skd: Remove a set-but-not-used variable from struct skd_device Bart Van Assche
                   ` (44 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The set-but-not-used variables removed by this patch were detected by
building with W=1. Declare 'acc' as __maybe_unused because most
access_ok() implementations ignore their first argument. This patch
does not change any functionality.
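
The annotation boils down to the following pattern (a sketch based on
the existing access_ok() call in skd_sg_io_get_and_check_args(); the
exact assignment of 'acc' differs slightly in the driver):

  /*
   * 'acc' is only consumed by access_ok(), and many access_ok()
   * implementations ignore their first argument, so W=1 would flag it
   * as set-but-not-used without the __maybe_unused annotation.
   */
  int __maybe_unused acc;

  acc = (sgp->dxfer_direction == SG_DXFER_TO_DEV) ?
          VERIFY_READ : VERIFY_WRITE;
  if (!access_ok(acc, iov->iov_base, iov->iov_len))
          return -EFAULT;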

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 5a88116efc97..ef7c0384e9a8 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -537,8 +537,6 @@ static void skd_request_fn(struct request_queue *q)
 	u32 lba;
 	u32 count;
 	int data_dir;
-	u32 be_lba;
-	u32 be_count;
 	u64 be_dmaa;
 	u64 cmdctxt;
 	u32 timo_slot;
@@ -676,8 +674,6 @@ static void skd_request_fn(struct request_queue *q)
 		cmd_ptr = &skmsg->msg_buf[skmsg->length];
 		memset(cmd_ptr, 0, 32);
 
-		be_lba = cpu_to_be32(lba);
-		be_count = cpu_to_be32(count);
 		be_dmaa = cpu_to_be64((u64)skreq->sksg_dma_address);
 		cmdctxt = skreq->id + SKD_ID_INCR;
 
@@ -889,7 +885,6 @@ static void skd_postop_sg_list(struct skd_device *skdev,
 static void skd_request_fn_not_online(struct request_queue *q)
 {
 	struct skd_device *skdev = q->queuedata;
-	int error;
 
 	SKD_ASSERT(skdev->state != SKD_DRVR_STATE_ONLINE);
 
@@ -919,7 +914,6 @@ static void skd_request_fn_not_online(struct request_queue *q)
 	case SKD_DRVR_STATE_FAULT:
 	case SKD_DRVR_STATE_DISAPPEARED:
 	default:
-		error = -EIO;
 		break;
 	}
 
@@ -943,7 +937,6 @@ static void skd_timer_tick(ulong arg)
 	struct skd_device *skdev = (struct skd_device *)arg;
 
 	u32 timo_slot;
-	u32 overdue_timestamp;
 	unsigned long reqflags;
 	u32 state;
 
@@ -976,8 +969,6 @@ static void skd_timer_tick(ulong arg)
 		goto timer_func_out;
 
 	/* Something is overdue */
-	overdue_timestamp = skdev->timeout_stamp - SKD_N_TIMEOUT_SLOT;
-
 	pr_debug("%s:%s:%d found %d timeouts, draining busy=%d\n",
 		 skdev->name, __func__, __LINE__,
 		 skdev->timeout_slot[timo_slot], skdev->in_flight);
@@ -1297,7 +1288,7 @@ static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
 					struct skd_sg_io *sksgio)
 {
 	struct sg_io_hdr *sgp = &sksgio->sg;
-	int i, acc;
+	int i, __maybe_unused acc;
 
 	if (!access_ok(VERIFY_WRITE, sksgio->argp, sizeof(sg_io_hdr_t))) {
 		pr_debug("%s:%s:%d access sg failed %p\n",
-- 
2.14.0

* [PATCH 13/55] skd: Remove a set-but-not-used variable from struct skd_device
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (11 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 12/55] skd: Remove set-but-not-used local variables Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 14/55] skd: Remove useless barrier() calls Bart Van Assche
                   ` (43 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index ef7c0384e9a8..53c84c846a5e 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -271,7 +271,6 @@ struct skd_device {
 	int gendisk_on;
 	int sync_done;
 
-	atomic_t device_count;
 	u32 devno;
 	u32 major;
 	char name[32];
@@ -4313,8 +4312,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	skdev->sgs_per_request = skd_sgs_per_request;
 	skdev->dbg_level = skd_dbg_level;
 
-	atomic_set(&skdev->device_count, 0);
-
 	spin_lock_init(&skdev->lock);
 
 	INIT_WORK(&skdev->completion_worker, skd_completion_worker);
-- 
2.14.0

* [PATCH 14/55] skd: Remove useless barrier() calls
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (12 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 13/55] skd: Remove a set-but-not-used variable from struct skd_device Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 15/55] skd: Switch from the pr_*() to the dev_*() logging functions Bart Van Assche
                   ` (42 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The purpose of barrier() is to prevent reordering by the compiler.
Since the compiler does not reorder calls to non-pure functions,
remove the barrier() calls from skd_reg_{read,write}{32,64}().

Since pr_debug() is able to report function name and line number
information, remove the __func__ and __LINE__ arguments from the
pr_debug() calls.
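
The simplified read path then looks like this (restating the first hunk
below, with the reasoning spelled out as comments):

  static inline u32 skd_reg_read32(struct skd_device *skdev, u32 offset)
  {
          /*
           * readl() is a volatile access, and the compiler does not
           * reorder it across calls to non-pure functions, so no
           * barrier() is needed purely for compiler ordering.
           */
          u32 val = readl(skdev->mem_map[1] + offset);

          if (unlikely(skdev->dbg_level >= 2))
                  pr_debug("%s offset %x = %x\n", skdev->name, offset, val);
          return val;
  }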

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 42 ++++++++++--------------------------------
 1 file changed, 10 insertions(+), 32 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 53c84c846a5e..54c6711a42d1 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -341,49 +341,27 @@ struct skd_device {
 
 static inline u32 skd_reg_read32(struct skd_device *skdev, u32 offset)
 {
-	u32 val;
-
-	if (likely(skdev->dbg_level < 2))
-		return readl(skdev->mem_map[1] + offset);
-	else {
-		barrier();
-		val = readl(skdev->mem_map[1] + offset);
-		barrier();
-		pr_debug("%s:%s:%d offset %x = %x\n",
-			 skdev->name, __func__, __LINE__, offset, val);
-		return val;
-	}
+	u32 val = readl(skdev->mem_map[1] + offset);
 
+	if (unlikely(skdev->dbg_level >= 2))
+		pr_debug("%s offset %x = %x\n", skdev->name, offset, val);
+	return val;
 }
 
 static inline void skd_reg_write32(struct skd_device *skdev, u32 val,
 				   u32 offset)
 {
-	if (likely(skdev->dbg_level < 2)) {
-		writel(val, skdev->mem_map[1] + offset);
-		barrier();
-	} else {
-		barrier();
-		writel(val, skdev->mem_map[1] + offset);
-		barrier();
-		pr_debug("%s:%s:%d offset %x = %x\n",
-			 skdev->name, __func__, __LINE__, offset, val);
-	}
+	writel(val, skdev->mem_map[1] + offset);
+	if (unlikely(skdev->dbg_level >= 2))
+		pr_debug("%s offset %x = %x\n", skdev->name, offset, val);
 }
 
 static inline void skd_reg_write64(struct skd_device *skdev, u64 val,
 				   u32 offset)
 {
-	if (likely(skdev->dbg_level < 2)) {
-		writeq(val, skdev->mem_map[1] + offset);
-		barrier();
-	} else {
-		barrier();
-		writeq(val, skdev->mem_map[1] + offset);
-		barrier();
-		pr_debug("%s:%s:%d offset %x = %016llx\n",
-			 skdev->name, __func__, __LINE__, offset, val);
-	}
+	writeq(val, skdev->mem_map[1] + offset);
+	if (unlikely(skdev->dbg_level >= 2))
+		pr_debug("%s offset %x = %016llx\n", skdev->name, offset, val);
 }
 
 
-- 
2.14.0

* [PATCH 15/55] skd: Switch from the pr_*() to the dev_*() logging functions
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (13 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 14/55] skd: Remove useless barrier() calls Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:12 ` [PATCH 16/55] skd: Fix endianness annotations Bart Van Assche
                   ` (41 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Use dev_err() and dev_info() instead of pr_err() and pr_info().
Since dev_dbg() is able to report function name and line number
information, drop the __func__ and __LINE__ arguments when converting
the pr_debug() calls to dev_dbg().
Remove the struct skd_device members and the function (skd_name())
that became superfluous due to these changes.

This patch removes the device name and serial number from log
statements. An example of the old log line format:

(skd0:STM000196603:[0000:00:09.0]): Driver state STARTING(3)=>ONLINE(4)

An example of the new log line format:

skd:0000:00:09.0: Driver state STARTING(3)=>ONLINE(4)
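
Schematically, the conversion applied throughout the driver is (sketch;
'n_timeouts' stands in for whatever value is being logged):

  /* before: the device name is built by hand via skd_name() */
  pr_err("(%s): Overdue IOs (%d), busy %d\n",
         skd_name(skdev), n_timeouts, skdev->in_flight);

  /* after: dev_err() derives the driver/PCI device prefix itself */
  dev_err(&skdev->pdev->dev, "Overdue IOs (%d), busy %d\n",
          n_timeouts, skdev->in_flight);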

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 912 ++++++++++++++++++++---------------------------
 1 file changed, 391 insertions(+), 521 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 54c6711a42d1..5174303d7db7 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -273,7 +273,6 @@ struct skd_device {
 
 	u32 devno;
 	u32 major;
-	char name[32];
 	char isr_name[30];
 
 	enum skd_drvr_state state;
@@ -304,7 +303,6 @@ struct skd_device {
 	int read_cap_is_valid;
 	int inquiry_is_valid;
 	u8 inq_serial_num[13];  /*12 chars plus null term */
-	u8 id_str[80];          /* holds a composite name (pci + sernum) */
 
 	u8 skcomp_cycle;
 	u32 skcomp_ix;
@@ -344,7 +342,7 @@ static inline u32 skd_reg_read32(struct skd_device *skdev, u32 offset)
 	u32 val = readl(skdev->mem_map[1] + offset);
 
 	if (unlikely(skdev->dbg_level >= 2))
-		pr_debug("%s offset %x = %x\n", skdev->name, offset, val);
+		dev_dbg(&skdev->pdev->dev, "offset %x = %x\n", offset, val);
 	return val;
 }
 
@@ -353,7 +351,7 @@ static inline void skd_reg_write32(struct skd_device *skdev, u32 val,
 {
 	writel(val, skdev->mem_map[1] + offset);
 	if (unlikely(skdev->dbg_level >= 2))
-		pr_debug("%s offset %x = %x\n", skdev->name, offset, val);
+		dev_dbg(&skdev->pdev->dev, "offset %x = %x\n", offset, val);
 }
 
 static inline void skd_reg_write64(struct skd_device *skdev, u64 val,
@@ -361,7 +359,8 @@ static inline void skd_reg_write64(struct skd_device *skdev, u64 val,
 {
 	writeq(val, skdev->mem_map[1] + offset);
 	if (unlikely(skdev->dbg_level >= 2))
-		pr_debug("%s offset %x = %016llx\n", skdev->name, offset, val);
+		dev_dbg(&skdev->pdev->dev, "offset %x = %016llx\n", offset,
+			val);
 }
 
 
@@ -433,7 +432,6 @@ static void skd_isr_fwstate(struct skd_device *skdev);
 static void skd_recover_requests(struct skd_device *skdev, int requeue);
 static void skd_soft_reset(struct skd_device *skdev);
 
-static const char *skd_name(struct skd_device *skdev);
 const char *skd_drive_state_to_str(int state);
 const char *skd_skdev_state_to_str(enum skd_drvr_state state);
 static void skd_log_skdev(struct skd_device *skdev, const char *event);
@@ -563,26 +561,23 @@ static void skd_request_fn(struct request_queue *q)
 		if (io_flags & REQ_FUA)
 			fua++;
 
-		pr_debug("%s:%s:%d new req=%p lba=%u(0x%x) "
-			 "count=%u(0x%x) dir=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 req, lba, lba, count, count, data_dir);
+		dev_dbg(&skdev->pdev->dev,
+			"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n",
+			req, lba, lba, count, count, data_dir);
 
 		/* At this point we know there is a request */
 
 		/* Are too many requets already in progress? */
 		if (skdev->in_flight >= skdev->cur_max_queue_depth) {
-			pr_debug("%s:%s:%d qdepth %d, limit %d\n",
-				 skdev->name, __func__, __LINE__,
-				 skdev->in_flight, skdev->cur_max_queue_depth);
+			dev_dbg(&skdev->pdev->dev, "qdepth %d, limit %d\n",
+				skdev->in_flight, skdev->cur_max_queue_depth);
 			break;
 		}
 
 		/* Is a skd_request_context available? */
 		skreq = skdev->skreq_free_list;
 		if (skreq == NULL) {
-			pr_debug("%s:%s:%d Out of req=%p\n",
-				 skdev->name, __func__, __LINE__, q);
+			dev_dbg(&skdev->pdev->dev, "Out of req=%p\n", q);
 			break;
 		}
 		SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
@@ -591,8 +586,7 @@ static void skd_request_fn(struct request_queue *q)
 		/* Now we check to see if we can get a fit msg */
 		if (skmsg == NULL) {
 			if (skdev->skmsg_free_list == NULL) {
-				pr_debug("%s:%s:%d Out of msg\n",
-					 skdev->name, __func__, __LINE__);
+				dev_dbg(&skdev->pdev->dev, "Out of msg\n");
 				break;
 			}
 		}
@@ -617,9 +611,9 @@ static void skd_request_fn(struct request_queue *q)
 			/* Are there any FIT msg buffers available? */
 			skmsg = skdev->skmsg_free_list;
 			if (skmsg == NULL) {
-				pr_debug("%s:%s:%d Out of msg skdev=%p\n",
-					 skdev->name, __func__, __LINE__,
-					 skdev);
+				dev_dbg(&skdev->pdev->dev,
+					"Out of msg skdev=%p\n",
+					skdev);
 				break;
 			}
 			SKD_ASSERT(skmsg->state == SKD_MSG_STATE_IDLE);
@@ -686,8 +680,7 @@ static void skd_request_fn(struct request_queue *q)
 			 * only resource that has been allocated but might
 			 * not be used is that the FIT msg could be empty.
 			 */
-			pr_debug("%s:%s:%d error Out\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev, "error Out\n");
 			skd_end_request(skdev, skreq, BLK_STS_RESOURCE);
 			continue;
 		}
@@ -712,9 +705,8 @@ static void skd_request_fn(struct request_queue *q)
 		timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
 		skdev->timeout_slot[timo_slot]++;
 		skdev->in_flight++;
-		pr_debug("%s:%s:%d req=0x%x busy=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 skreq->id, skdev->in_flight);
+		dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
+			skdev->in_flight);
 
 		/*
 		 * If the FIT msg buffer is full send it.
@@ -736,9 +728,8 @@ static void skd_request_fn(struct request_queue *q)
 	if (skmsg != NULL) {
 		/* Bigger than just a FIT msg header? */
 		if (skmsg->length > sizeof(struct fit_msg_hdr)) {
-			pr_debug("%s:%s:%d sending msg=%p, len %d\n",
-				 skdev->name, __func__, __LINE__,
-				 skmsg, skmsg->length);
+			dev_dbg(&skdev->pdev->dev, "sending msg=%p, len %d\n",
+				skmsg, skmsg->length);
 			skd_send_fitmsg(skdev, skmsg);
 		} else {
 			/*
@@ -771,11 +762,12 @@ static void skd_end_request(struct skd_device *skdev,
 		u32 lba = (u32)blk_rq_pos(req);
 		u32 count = blk_rq_sectors(req);
 
-		pr_err("(%s): Error cmd=%s sect=%u count=%u id=0x%x\n",
-		       skd_name(skdev), cmd, lba, count, skreq->id);
+		dev_err(&skdev->pdev->dev,
+			"Error cmd=%s sect=%u count=%u id=0x%x\n", cmd, lba,
+			count, skreq->id);
 	} else
-		pr_debug("%s:%s:%d id=0x%x error=%d\n",
-			 skdev->name, __func__, __LINE__, skreq->id, error);
+		dev_dbg(&skdev->pdev->dev, "id=0x%x error=%d\n", skreq->id,
+			error);
 
 	__blk_end_request_all(skreq->req, error);
 }
@@ -827,16 +819,16 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 	skreq->sksg_list[n_sg - 1].control = FIT_SGD_CONTROL_LAST;
 
 	if (unlikely(skdev->dbg_level > 1)) {
-		pr_debug("%s:%s:%d skreq=%x sksg_list=%p sksg_dma=%llx\n",
-			 skdev->name, __func__, __LINE__,
-			 skreq->id, skreq->sksg_list, skreq->sksg_dma_address);
+		dev_dbg(&skdev->pdev->dev,
+			"skreq=%x sksg_list=%p sksg_dma=%llx\n",
+			skreq->id, skreq->sksg_list, skreq->sksg_dma_address);
 		for (i = 0; i < n_sg; i++) {
 			struct fit_sg_descriptor *sgd = &skreq->sksg_list[i];
-			pr_debug("%s:%s:%d   sg[%d] count=%u ctrl=0x%x "
-				 "addr=0x%llx next=0x%llx\n",
-				 skdev->name, __func__, __LINE__,
-				 i, sgd->byte_count, sgd->control,
-				 sgd->host_side_addr, sgd->next_desc_ptr);
+
+			dev_dbg(&skdev->pdev->dev,
+				"  sg[%d] count=%u ctrl=0x%x addr=0x%llx next=0x%llx\n",
+				i, sgd->byte_count, sgd->control,
+				sgd->host_side_addr, sgd->next_desc_ptr);
 		}
 	}
 
@@ -946,12 +938,10 @@ static void skd_timer_tick(ulong arg)
 		goto timer_func_out;
 
 	/* Something is overdue */
-	pr_debug("%s:%s:%d found %d timeouts, draining busy=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skdev->timeout_slot[timo_slot], skdev->in_flight);
-	pr_err("(%s): Overdue IOs (%d), busy %d\n",
-	       skd_name(skdev), skdev->timeout_slot[timo_slot],
-	       skdev->in_flight);
+	dev_dbg(&skdev->pdev->dev, "found %d timeouts, draining busy=%d\n",
+		skdev->timeout_slot[timo_slot], skdev->in_flight);
+	dev_err(&skdev->pdev->dev, "Overdue IOs (%d), busy %d\n",
+		skdev->timeout_slot[timo_slot], skdev->in_flight);
 
 	skdev->timer_countdown = SKD_DRAINING_TIMO;
 	skdev->state = SKD_DRVR_STATE_DRAINING_TIMEOUT;
@@ -971,9 +961,9 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 	case SKD_DRVR_STATE_LOAD:
 		break;
 	case SKD_DRVR_STATE_BUSY_SANITIZE:
-		pr_debug("%s:%s:%d drive busy sanitize[%x], driver[%x]\n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->drive_state, skdev->state);
+		dev_dbg(&skdev->pdev->dev,
+			"drive busy sanitize[%x], driver[%x]\n",
+			skdev->drive_state, skdev->state);
 		/* If we've been in sanitize for 3 seconds, we figure we're not
 		 * going to get anymore completions, so recover requests now
 		 */
@@ -987,16 +977,15 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 	case SKD_DRVR_STATE_BUSY:
 	case SKD_DRVR_STATE_BUSY_IMMINENT:
 	case SKD_DRVR_STATE_BUSY_ERASE:
-		pr_debug("%s:%s:%d busy[%x], countdown=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->state, skdev->timer_countdown);
+		dev_dbg(&skdev->pdev->dev, "busy[%x], countdown=%d\n",
+			skdev->state, skdev->timer_countdown);
 		if (skdev->timer_countdown > 0) {
 			skdev->timer_countdown--;
 			return;
 		}
-		pr_debug("%s:%s:%d busy[%x], timedout=%d, restarting device.",
-			 skdev->name, __func__, __LINE__,
-			 skdev->state, skdev->timer_countdown);
+		dev_dbg(&skdev->pdev->dev,
+			"busy[%x], timedout=%d, restarting device.",
+			skdev->state, skdev->timer_countdown);
 		skd_restart_device(skdev);
 		break;
 
@@ -1010,8 +999,8 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 		 * revcover at some point. */
 		skdev->state = SKD_DRVR_STATE_FAULT;
 
-		pr_err("(%s): DriveFault Connect Timeout (%x)\n",
-		       skd_name(skdev), skdev->drive_state);
+		dev_err(&skdev->pdev->dev, "DriveFault Connect Timeout (%x)\n",
+			skdev->drive_state);
 
 		/*start the queue so we can respond with error to requests */
 		/* wakeup anyone waiting for startup complete */
@@ -1029,17 +1018,15 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 		break;
 
 	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
-		pr_debug("%s:%s:%d "
-			 "draining busy [%d] tick[%d] qdb[%d] tmls[%d]\n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->timo_slot,
-			 skdev->timer_countdown,
-			 skdev->in_flight,
-			 skdev->timeout_slot[skdev->timo_slot]);
+		dev_dbg(&skdev->pdev->dev,
+			"draining busy [%d] tick[%d] qdb[%d] tmls[%d]\n",
+			skdev->timo_slot, skdev->timer_countdown,
+			skdev->in_flight,
+			skdev->timeout_slot[skdev->timo_slot]);
 		/* if the slot has cleared we can let the I/O continue */
 		if (skdev->timeout_slot[skdev->timo_slot] == 0) {
-			pr_debug("%s:%s:%d Slot drained, starting queue.\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev,
+				"Slot drained, starting queue.\n");
 			skdev->state = SKD_DRVR_STATE_ONLINE;
 			blk_start_queue(skdev->queue);
 			return;
@@ -1059,8 +1046,9 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 		/* For now, we fault the drive. Could attempt resets to
 		 * revcover at some point. */
 		skdev->state = SKD_DRVR_STATE_FAULT;
-		pr_err("(%s): DriveFault Reconnect Timeout (%x)\n",
-		       skd_name(skdev), skdev->drive_state);
+		dev_err(&skdev->pdev->dev,
+			"DriveFault Reconnect Timeout (%x)\n",
+			skdev->drive_state);
 
 		/*
 		 * Recovering does two things:
@@ -1082,8 +1070,8 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 			 * fail. This is to mitigate hung processes. */
 			skd_recover_requests(skdev, 0);
 		else {
-			pr_err("(%s): Disable BusMaster (%x)\n",
-			       skd_name(skdev), skdev->drive_state);
+			dev_err(&skdev->pdev->dev, "Disable BusMaster (%x)\n",
+				skdev->drive_state);
 			pci_disable_device(skdev->pdev);
 			skd_disable_interrupts(skdev);
 			skd_recover_requests(skdev, 0);
@@ -1115,8 +1103,7 @@ static int skd_start_timer(struct skd_device *skdev)
 
 	rc = mod_timer(&skdev->timer, (jiffies + HZ));
 	if (rc)
-		pr_err("%s: failed to start timer %d\n",
-		       __func__, rc);
+		dev_err(&skdev->pdev->dev, "failed to start timer %d\n", rc);
 	return rc;
 }
 
@@ -1163,9 +1150,9 @@ static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
 	struct skd_device *skdev = disk->private_data;
 	int __user *p = (int __user *)arg;
 
-	pr_debug("%s:%s:%d %s: CMD[%s] ioctl  mode 0x%x, cmd 0x%x arg %0lx\n",
-		 skdev->name, __func__, __LINE__,
-		 disk->disk_name, current->comm, mode, cmd_in, arg);
+	dev_dbg(&skdev->pdev->dev,
+		"%s: CMD[%s] ioctl  mode 0x%x, cmd 0x%x arg %0lx\n",
+		disk->disk_name, current->comm, mode, cmd_in, arg);
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -1191,8 +1178,8 @@ static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
 		break;
 	}
 
-	pr_debug("%s:%s:%d %s:  completion rc %d\n",
-		 skdev->name, __func__, __LINE__, disk->disk_name, rc);
+	dev_dbg(&skdev->pdev->dev, "%s:  completion rc %d\n", disk->disk_name,
+		rc);
 	return rc;
 }
 
@@ -1213,8 +1200,7 @@ static int skd_ioctl_sg_io(struct skd_device *skdev, fmode_t mode,
 		break;
 
 	default:
-		pr_debug("%s:%s:%d drive not online\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "drive not online\n");
 		rc = -ENXIO;
 		goto out;
 	}
@@ -1268,38 +1254,38 @@ static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
 	int i, __maybe_unused acc;
 
 	if (!access_ok(VERIFY_WRITE, sksgio->argp, sizeof(sg_io_hdr_t))) {
-		pr_debug("%s:%s:%d access sg failed %p\n",
-			 skdev->name, __func__, __LINE__, sksgio->argp);
+		dev_dbg(&skdev->pdev->dev, "access sg failed %p\n",
+			sksgio->argp);
 		return -EFAULT;
 	}
 
 	if (__copy_from_user(sgp, sksgio->argp, sizeof(sg_io_hdr_t))) {
-		pr_debug("%s:%s:%d copy_from_user sg failed %p\n",
-			 skdev->name, __func__, __LINE__, sksgio->argp);
+		dev_dbg(&skdev->pdev->dev, "copy_from_user sg failed %p\n",
+			sksgio->argp);
 		return -EFAULT;
 	}
 
 	if (sgp->interface_id != SG_INTERFACE_ID_ORIG) {
-		pr_debug("%s:%s:%d interface_id invalid 0x%x\n",
-			 skdev->name, __func__, __LINE__, sgp->interface_id);
+		dev_dbg(&skdev->pdev->dev, "interface_id invalid 0x%x\n",
+			sgp->interface_id);
 		return -EINVAL;
 	}
 
 	if (sgp->cmd_len > sizeof(sksgio->cdb)) {
-		pr_debug("%s:%s:%d cmd_len invalid %d\n",
-			 skdev->name, __func__, __LINE__, sgp->cmd_len);
+		dev_dbg(&skdev->pdev->dev, "cmd_len invalid %d\n",
+			sgp->cmd_len);
 		return -EINVAL;
 	}
 
 	if (sgp->iovec_count > 256) {
-		pr_debug("%s:%s:%d iovec_count invalid %d\n",
-			 skdev->name, __func__, __LINE__, sgp->iovec_count);
+		dev_dbg(&skdev->pdev->dev, "iovec_count invalid %d\n",
+			sgp->iovec_count);
 		return -EINVAL;
 	}
 
 	if (sgp->dxfer_len > (PAGE_SIZE * SKD_N_SG_PER_SPECIAL)) {
-		pr_debug("%s:%s:%d dxfer_len invalid %d\n",
-			 skdev->name, __func__, __LINE__, sgp->dxfer_len);
+		dev_dbg(&skdev->pdev->dev, "dxfer_len invalid %d\n",
+			sgp->dxfer_len);
 		return -EINVAL;
 	}
 
@@ -1318,21 +1304,21 @@ static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
 		break;
 
 	default:
-		pr_debug("%s:%s:%d dxfer_dir invalid %d\n",
-			 skdev->name, __func__, __LINE__, sgp->dxfer_direction);
+		dev_dbg(&skdev->pdev->dev, "dxfer_dir invalid %d\n",
+			sgp->dxfer_direction);
 		return -EINVAL;
 	}
 
 	if (copy_from_user(sksgio->cdb, sgp->cmdp, sgp->cmd_len)) {
-		pr_debug("%s:%s:%d copy_from_user cmdp failed %p\n",
-			 skdev->name, __func__, __LINE__, sgp->cmdp);
+		dev_dbg(&skdev->pdev->dev, "copy_from_user cmdp failed %p\n",
+			sgp->cmdp);
 		return -EFAULT;
 	}
 
 	if (sgp->mx_sb_len != 0) {
 		if (!access_ok(VERIFY_WRITE, sgp->sbp, sgp->mx_sb_len)) {
-			pr_debug("%s:%s:%d access sbp failed %p\n",
-				 skdev->name, __func__, __LINE__, sgp->sbp);
+			dev_dbg(&skdev->pdev->dev, "access sbp failed %p\n",
+				sgp->sbp);
 			return -EFAULT;
 		}
 	}
@@ -1349,17 +1335,17 @@ static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
 
 		iov = kmalloc(nbytes, GFP_KERNEL);
 		if (iov == NULL) {
-			pr_debug("%s:%s:%d alloc iovec failed %d\n",
-				 skdev->name, __func__, __LINE__,
-				 sgp->iovec_count);
+			dev_dbg(&skdev->pdev->dev, "alloc iovec failed %d\n",
+				sgp->iovec_count);
 			return -ENOMEM;
 		}
 		sksgio->iov = iov;
 		sksgio->iovcnt = sgp->iovec_count;
 
 		if (copy_from_user(iov, sgp->dxferp, nbytes)) {
-			pr_debug("%s:%s:%d copy_from_user iovec failed %p\n",
-				 skdev->name, __func__, __LINE__, sgp->dxferp);
+			dev_dbg(&skdev->pdev->dev,
+				"copy_from_user iovec failed %p\n",
+				sgp->dxferp);
 			return -EFAULT;
 		}
 
@@ -1387,9 +1373,9 @@ static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
 		struct sg_iovec *iov = sksgio->iov;
 		for (i = 0; i < sksgio->iovcnt; i++, iov++) {
 			if (!access_ok(acc, iov->iov_base, iov->iov_len)) {
-				pr_debug("%s:%s:%d access data failed %p/%d\n",
-					 skdev->name, __func__, __LINE__,
-					 iov->iov_base, (int)iov->iov_len);
+				dev_dbg(&skdev->pdev->dev,
+					"access data failed %p/%zd\n",
+					iov->iov_base, iov->iov_len);
 				return -EFAULT;
 			}
 		}
@@ -1424,16 +1410,14 @@ static int skd_sg_io_obtain_skspcl(struct skd_device *skdev,
 			break;
 		}
 
-		pr_debug("%s:%s:%d blocking\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "blocking\n");
 
 		rc = wait_event_interruptible_timeout(
 				skdev->waitq,
 				(skdev->skspcl_free_list != NULL),
 				msecs_to_jiffies(sksgio->sg.timeout));
 
-		pr_debug("%s:%s:%d unblocking, rc=%d\n",
-			 skdev->name, __func__, __LINE__, rc);
+		dev_dbg(&skdev->pdev->dev, "unblocking, rc=%d\n", rc);
 
 		if (rc <= 0) {
 			if (rc == 0)
@@ -1510,17 +1494,16 @@ static int skd_skreq_prep_buffering(struct skd_device *skdev,
 	if (unlikely(skdev->dbg_level > 1)) {
 		u32 i;
 
-		pr_debug("%s:%s:%d skreq=%x sksg_list=%p sksg_dma=%llx\n",
-			 skdev->name, __func__, __LINE__,
-			 skreq->id, skreq->sksg_list, skreq->sksg_dma_address);
+		dev_dbg(&skdev->pdev->dev,
+			"skreq=%x sksg_list=%p sksg_dma=%llx\n",
+			skreq->id, skreq->sksg_list, skreq->sksg_dma_address);
 		for (i = 0; i < skreq->n_sg; i++) {
 			struct fit_sg_descriptor *sgd = &skreq->sksg_list[i];
 
-			pr_debug("%s:%s:%d   sg[%d] count=%u ctrl=0x%x "
-				 "addr=0x%llx next=0x%llx\n",
-				 skdev->name, __func__, __LINE__,
-				 i, sgd->byte_count, sgd->control,
-				 sgd->host_side_addr, sgd->next_desc_ptr);
+			dev_dbg(&skdev->pdev->dev,
+				"  sg[%d] count=%u ctrl=0x%x addr=0x%llx next=0x%llx\n",
+				i, sgd->byte_count, sgd->control,
+				sgd->host_side_addr, sgd->next_desc_ptr);
 		}
 	}
 
@@ -1642,8 +1625,8 @@ static int skd_sg_io_await(struct skd_device *skdev, struct skd_sg_io *sksgio)
 	spin_lock_irqsave(&skdev->lock, flags);
 
 	if (sksgio->skspcl->req.state == SKD_REQ_STATE_ABORTED) {
-		pr_debug("%s:%s:%d skspcl %p aborted\n",
-			 skdev->name, __func__, __LINE__, sksgio->skspcl);
+		dev_dbg(&skdev->pdev->dev, "skspcl %p aborted\n",
+			sksgio->skspcl);
 
 		/* Build check cond, sense and let command finish. */
 		/* For a timeout, we must fabricate completion and sense
@@ -1668,13 +1651,11 @@ static int skd_sg_io_await(struct skd_device *skdev, struct skd_sg_io *sksgio)
 		sksgio->skspcl->orphaned = 1;
 		sksgio->skspcl = NULL;
 		if (rc == 0) {
-			pr_debug("%s:%s:%d timed out %p (%u ms)\n",
-				 skdev->name, __func__, __LINE__,
-				 sksgio, sksgio->sg.timeout);
+			dev_dbg(&skdev->pdev->dev, "timed out %p (%u ms)\n",
+				sksgio, sksgio->sg.timeout);
 			rc = -ETIMEDOUT;
 		} else {
-			pr_debug("%s:%s:%d cntlc %p\n",
-				 skdev->name, __func__, __LINE__, sksgio);
+			dev_dbg(&skdev->pdev->dev, "cntlc %p\n", sksgio);
 			rc = -EINTR;
 		}
 	}
@@ -1704,9 +1685,8 @@ static int skd_sg_io_put_status(struct skd_device *skdev,
 	if (sgp->masked_status || sgp->host_status || sgp->driver_status)
 		sgp->info |= SG_INFO_CHECK;
 
-	pr_debug("%s:%s:%d status %x masked %x resid 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 sgp->status, sgp->masked_status, sgp->resid);
+	dev_dbg(&skdev->pdev->dev, "status %x masked %x resid 0x%x\n",
+		sgp->status, sgp->masked_status, sgp->resid);
 
 	if (sgp->masked_status == SAM_STAT_CHECK_CONDITION) {
 		if (sgp->mx_sb_len > 0) {
@@ -1718,17 +1698,17 @@ static int skd_sg_io_put_status(struct skd_device *skdev,
 			sgp->sb_len_wr = nbytes;
 
 			if (__copy_to_user(sgp->sbp, ei, nbytes)) {
-				pr_debug("%s:%s:%d copy_to_user sense failed %p\n",
-					 skdev->name, __func__, __LINE__,
-					 sgp->sbp);
+				dev_dbg(&skdev->pdev->dev,
+					"copy_to_user sense failed %p\n",
+					sgp->sbp);
 				return -EFAULT;
 			}
 		}
 	}
 
 	if (__copy_to_user(sksgio->argp, sgp, sizeof(sg_io_hdr_t))) {
-		pr_debug("%s:%s:%d copy_to_user sg failed %p\n",
-			 skdev->name, __func__, __LINE__, sksgio->argp);
+		dev_dbg(&skdev->pdev->dev, "copy_to_user sg failed %p\n",
+			sksgio->argp);
 		return -EFAULT;
 	}
 
@@ -1896,9 +1876,9 @@ static void skd_log_check_status(struct skd_device *skdev, u8 status, u8 key,
 	/* If the check condition is of special interest, log a message */
 	if ((status == SAM_STAT_CHECK_CONDITION) && (key == 0x02)
 	    && (code == 0x04) && (qual == 0x06)) {
-		pr_err("(%s): *** LOST_WRITE_DATA ERROR *** key/asc/"
-		       "ascq/fruc %02x/%02x/%02x/%02x\n",
-		       skd_name(skdev), key, code, qual, fruc);
+		dev_err(&skdev->pdev->dev,
+			"*** LOST_WRITE_DATA ERROR *** key/asc/ascq/fruc %02x/%02x/%02x/%02x\n",
+			key, code, qual, fruc);
 	}
 }
 
@@ -1916,8 +1896,7 @@ static void skd_complete_internal(struct skd_device *skdev,
 
 	SKD_ASSERT(skspcl == &skdev->internal_skspcl);
 
-	pr_debug("%s:%s:%d complete internal %x\n",
-		 skdev->name, __func__, __LINE__, scsi->cdb[0]);
+	dev_dbg(&skdev->pdev->dev, "complete internal %x\n", scsi->cdb[0]);
 
 	skspcl->req.completion = *skcomp;
 	skspcl->req.state = SKD_REQ_STATE_IDLE;
@@ -1937,13 +1916,13 @@ static void skd_complete_internal(struct skd_device *skdev,
 			skd_send_internal_skspcl(skdev, skspcl, WRITE_BUFFER);
 		else {
 			if (skdev->state == SKD_DRVR_STATE_STOPPING) {
-				pr_debug("%s:%s:%d TUR failed, don't send anymore state 0x%x\n",
-					 skdev->name, __func__, __LINE__,
-					 skdev->state);
+				dev_dbg(&skdev->pdev->dev,
+					"TUR failed, don't send anymore state 0x%x\n",
+					skdev->state);
 				return;
 			}
-			pr_debug("%s:%s:%d **** TUR failed, retry skerr\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev,
+				"**** TUR failed, retry skerr\n");
 			skd_send_internal_skspcl(skdev, skspcl, 0x00);
 		}
 		break;
@@ -1953,13 +1932,13 @@ static void skd_complete_internal(struct skd_device *skdev,
 			skd_send_internal_skspcl(skdev, skspcl, READ_BUFFER);
 		else {
 			if (skdev->state == SKD_DRVR_STATE_STOPPING) {
-				pr_debug("%s:%s:%d write buffer failed, don't send anymore state 0x%x\n",
-					 skdev->name, __func__, __LINE__,
-					 skdev->state);
+				dev_dbg(&skdev->pdev->dev,
+					"write buffer failed, don't send anymore state 0x%x\n",
+					skdev->state);
 				return;
 			}
-			pr_debug("%s:%s:%d **** write buffer failed, retry skerr\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev,
+				"**** write buffer failed, retry skerr\n");
 			skd_send_internal_skspcl(skdev, skspcl, 0x00);
 		}
 		break;
@@ -1970,30 +1949,29 @@ static void skd_complete_internal(struct skd_device *skdev,
 				skd_send_internal_skspcl(skdev, skspcl,
 							 READ_CAPACITY);
 			else {
-				pr_err("(%s):*** W/R Buffer mismatch %d ***\n",
-				       skd_name(skdev), skdev->connect_retries);
+				dev_err(&skdev->pdev->dev,
+					"*** W/R Buffer mismatch %d ***\n",
+					skdev->connect_retries);
 				if (skdev->connect_retries <
 				    SKD_MAX_CONNECT_RETRIES) {
 					skdev->connect_retries++;
 					skd_soft_reset(skdev);
 				} else {
-					pr_err("(%s): W/R Buffer Connect Error\n",
-					       skd_name(skdev));
+					dev_err(&skdev->pdev->dev,
+						"W/R Buffer Connect Error\n");
 					return;
 				}
 			}
 
 		} else {
 			if (skdev->state == SKD_DRVR_STATE_STOPPING) {
-				pr_debug("%s:%s:%d "
-					 "read buffer failed, don't send anymore state 0x%x\n",
-					 skdev->name, __func__, __LINE__,
-					 skdev->state);
+				dev_dbg(&skdev->pdev->dev,
+					"read buffer failed, don't send anymore state 0x%x\n",
+					skdev->state);
 				return;
 			}
-			pr_debug("%s:%s:%d "
-				 "**** read buffer failed, retry skerr\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev,
+				"**** read buffer failed, retry skerr\n");
 			skd_send_internal_skspcl(skdev, skspcl, 0x00);
 		}
 		break;
@@ -2008,10 +1986,9 @@ static void skd_complete_internal(struct skd_device *skdev,
 				(buf[4] << 24) | (buf[5] << 16) |
 				(buf[6] << 8) | buf[7];
 
-			pr_debug("%s:%s:%d last lba %d, bs %d\n",
-				 skdev->name, __func__, __LINE__,
-				 skdev->read_cap_last_lba,
-				 skdev->read_cap_blocksize);
+			dev_dbg(&skdev->pdev->dev, "last lba %d, bs %d\n",
+				skdev->read_cap_last_lba,
+				skdev->read_cap_blocksize);
 
 			set_capacity(skdev->disk, skdev->read_cap_last_lba + 1);
 
@@ -2022,13 +1999,10 @@ static void skd_complete_internal(struct skd_device *skdev,
 			   (skerr->key == MEDIUM_ERROR)) {
 			skdev->read_cap_last_lba = ~0;
 			set_capacity(skdev->disk, skdev->read_cap_last_lba + 1);
-			pr_debug("%s:%s:%d "
-				 "**** MEDIUM ERROR caused READCAP to fail, ignore failure and continue to inquiry\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev, "**** MEDIUM ERROR caused READCAP to fail, ignore failure and continue to inquiry\n");
 			skd_send_internal_skspcl(skdev, skspcl, INQUIRY);
 		} else {
-			pr_debug("%s:%s:%d **** READCAP failed, retry TUR\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev, "**** READCAP failed, retry TUR\n");
 			skd_send_internal_skspcl(skdev, skspcl,
 						 TEST_UNIT_READY);
 		}
@@ -2045,8 +2019,7 @@ static void skd_complete_internal(struct skd_device *skdev,
 		}
 
 		if (skd_unquiesce_dev(skdev) < 0)
-			pr_debug("%s:%s:%d **** failed, to ONLINE device\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev, "**** failed, to ONLINE device\n");
 		 /* connection is complete */
 		skdev->connect_retries = 0;
 		break;
@@ -2076,12 +2049,10 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 	u64 qcmd;
 	struct fit_msg_hdr *fmh;
 
-	pr_debug("%s:%s:%d dma address 0x%llx, busy=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skmsg->mb_dma_address, skdev->in_flight);
-	pr_debug("%s:%s:%d msg_buf 0x%p, offset %x\n",
-		 skdev->name, __func__, __LINE__,
-		 skmsg->msg_buf, skmsg->offset);
+	dev_dbg(&skdev->pdev->dev, "dma address 0x%llx, busy=%d\n",
+		skmsg->mb_dma_address, skdev->in_flight);
+	dev_dbg(&skdev->pdev->dev, "msg_buf 0x%p, offset %x\n", skmsg->msg_buf,
+		skmsg->offset);
 
 	qcmd = skmsg->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL;
@@ -2093,8 +2064,8 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 		u8 *bp = (u8 *)skmsg->msg_buf;
 		int i;
 		for (i = 0; i < skmsg->length; i += 8) {
-			pr_debug("%s:%s:%d msg[%2d] %8ph\n",
-				 skdev->name, __func__, __LINE__, i, &bp[i]);
+			dev_dbg(&skdev->pdev->dev, "msg[%2d] %8ph\n", i,
+				&bp[i]);
 			if (i == 0)
 				i = 64 - 8;
 		}
@@ -2130,25 +2101,24 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
 		int i;
 
 		for (i = 0; i < SKD_N_SPECIAL_FITMSG_BYTES; i += 8) {
-			pr_debug("%s:%s:%d  spcl[%2d] %8ph\n",
-				 skdev->name, __func__, __LINE__, i, &bp[i]);
+			dev_dbg(&skdev->pdev->dev, " spcl[%2d] %8ph\n", i,
+				&bp[i]);
 			if (i == 0)
 				i = 64 - 8;
 		}
 
-		pr_debug("%s:%s:%d skspcl=%p id=%04x sksg_list=%p sksg_dma=%llx\n",
-			 skdev->name, __func__, __LINE__,
-			 skspcl, skspcl->req.id, skspcl->req.sksg_list,
-			 skspcl->req.sksg_dma_address);
+		dev_dbg(&skdev->pdev->dev,
+			"skspcl=%p id=%04x sksg_list=%p sksg_dma=%llx\n",
+			skspcl, skspcl->req.id, skspcl->req.sksg_list,
+			skspcl->req.sksg_dma_address);
 		for (i = 0; i < skspcl->req.n_sg; i++) {
 			struct fit_sg_descriptor *sgd =
 				&skspcl->req.sksg_list[i];
 
-			pr_debug("%s:%s:%d   sg[%d] count=%u ctrl=0x%x "
-				 "addr=0x%llx next=0x%llx\n",
-				 skdev->name, __func__, __LINE__,
-				 i, sgd->byte_count, sgd->control,
-				 sgd->host_side_addr, sgd->next_desc_ptr);
+			dev_dbg(&skdev->pdev->dev,
+				"  sg[%d] count=%u ctrl=0x%x addr=0x%llx next=0x%llx\n",
+				i, sgd->byte_count, sgd->control,
+				sgd->host_side_addr, sgd->next_desc_ptr);
 		}
 	}
 
@@ -2226,13 +2196,13 @@ skd_check_status(struct skd_device *skdev,
 {
 	int i, n;
 
-	pr_err("(%s): key/asc/ascq/fruc %02x/%02x/%02x/%02x\n",
-	       skd_name(skdev), skerr->key, skerr->code, skerr->qual,
-	       skerr->fruc);
+	dev_err(&skdev->pdev->dev, "key/asc/ascq/fruc %02x/%02x/%02x/%02x\n",
+		skerr->key, skerr->code, skerr->qual, skerr->fruc);
 
-	pr_debug("%s:%s:%d stat: t=%02x stat=%02x k=%02x c=%02x q=%02x fruc=%02x\n",
-		 skdev->name, __func__, __LINE__, skerr->type, cmp_status,
-		 skerr->key, skerr->code, skerr->qual, skerr->fruc);
+	dev_dbg(&skdev->pdev->dev,
+		"stat: t=%02x stat=%02x k=%02x c=%02x q=%02x fruc=%02x\n",
+		skerr->type, cmp_status, skerr->key, skerr->code, skerr->qual,
+		skerr->fruc);
 
 	/* Does the info match an entry in the good category? */
 	n = sizeof(skd_chkstat_table) / sizeof(skd_chkstat_table[0]);
@@ -2260,10 +2230,9 @@ skd_check_status(struct skd_device *skdev,
 				continue;
 
 		if (sns->action == SKD_CHECK_STATUS_REPORT_SMART_ALERT) {
-			pr_err("(%s): SMART Alert: sense key/asc/ascq "
-			       "%02x/%02x/%02x\n",
-			       skd_name(skdev), skerr->key,
-			       skerr->code, skerr->qual);
+			dev_err(&skdev->pdev->dev,
+				"SMART Alert: sense key/asc/ascq %02x/%02x/%02x\n",
+				skerr->key, skerr->code, skerr->qual);
 		}
 		return sns->action;
 	}
@@ -2272,13 +2241,11 @@ skd_check_status(struct skd_device *skdev,
 	 * zero status means good
 	 */
 	if (cmp_status) {
-		pr_debug("%s:%s:%d status check: error\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "status check: error\n");
 		return SKD_CHECK_STATUS_REPORT_ERROR;
 	}
 
-	pr_debug("%s:%s:%d status check good default\n",
-		 skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "status check good default\n");
 	return SKD_CHECK_STATUS_REPORT_GOOD;
 }
 
@@ -2296,7 +2263,7 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 	case SKD_CHECK_STATUS_BUSY_IMMINENT:
 		skd_log_skreq(skdev, skreq, "retry(busy)");
 		blk_requeue_request(skdev->queue, skreq->req);
-		pr_info("(%s) drive BUSY imminent\n", skd_name(skdev));
+		dev_info(&skdev->pdev->dev, "drive BUSY imminent\n");
 		skdev->state = SKD_DRVR_STATE_BUSY_IMMINENT;
 		skdev->timer_countdown = SKD_TIMER_MINUTES(20);
 		skd_quiesce_dev(skdev);
@@ -2396,8 +2363,8 @@ static void skd_do_inq_page_00(struct skd_device *skdev,
 	/* Caller requested "supported pages".  The driver needs to insert
 	 * its page.
 	 */
-	pr_debug("%s:%s:%d skd_do_driver_inquiry: modify supported pages.\n",
-		 skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev,
+		"skd_do_driver_inquiry: modify supported pages.\n");
 
 	/* If the device rejected the request because the CDB was
 	 * improperly formed, then just leave.
@@ -2495,8 +2462,7 @@ static void skd_do_inq_page_da(struct skd_device *skdev,
 	struct driver_inquiry_data inq;
 	u16 val;
 
-	pr_debug("%s:%s:%d skd_do_driver_inquiry: return driver page\n",
-		 skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skd_do_driver_inquiry: return driver page\n");
 
 	memset(&inq, 0, sizeof(inq));
 
@@ -2611,16 +2577,14 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 
 		skerr = &skdev->skerr_table[skdev->skcomp_ix];
 
-		pr_debug("%s:%s:%d "
-			 "cycle=%d ix=%d got cycle=%d cmdctxt=0x%x stat=%d "
-			 "busy=%d rbytes=0x%x proto=%d\n",
-			 skdev->name, __func__, __LINE__, skdev->skcomp_cycle,
-			 skdev->skcomp_ix, cmp_cycle, cmp_cntxt, cmp_status,
-			 skdev->in_flight, cmp_bytes, skdev->proto_ver);
+		dev_dbg(&skdev->pdev->dev,
+			"cycle=%d ix=%d got cycle=%d cmdctxt=0x%x stat=%d busy=%d rbytes=0x%x proto=%d\n",
+			skdev->skcomp_cycle, skdev->skcomp_ix, cmp_cycle,
+			cmp_cntxt, cmp_status, skdev->in_flight, cmp_bytes,
+			skdev->proto_ver);
 
 		if (cmp_cycle != skdev->skcomp_cycle) {
-			pr_debug("%s:%s:%d end of completions\n",
-				 skdev->name, __func__, __LINE__);
+			dev_dbg(&skdev->pdev->dev, "end of completions\n");
 			break;
 		}
 		/*
@@ -2656,15 +2620,14 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		 * Make sure the request ID for the slot matches.
 		 */
 		if (skreq->id != req_id) {
-			pr_debug("%s:%s:%d mismatch comp_id=0x%x req_id=0x%x\n",
-				 skdev->name, __func__, __LINE__,
-				 req_id, skreq->id);
+			dev_dbg(&skdev->pdev->dev,
+				"mismatch comp_id=0x%x req_id=0x%x\n", req_id,
+				skreq->id);
 			{
 				u16 new_id = cmp_cntxt;
-				pr_err("(%s): Completion mismatch "
-				       "comp_id=0x%04x skreq=0x%04x new=0x%04x\n",
-				       skd_name(skdev), req_id,
-				       skreq->id, new_id);
+				dev_err(&skdev->pdev->dev,
+					"Completion mismatch comp_id=0x%04x skreq=0x%04x new=0x%04x\n",
+					req_id, skreq->id, new_id);
 
 				continue;
 			}
@@ -2673,9 +2636,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		SKD_ASSERT(skreq->state == SKD_REQ_STATE_BUSY);
 
 		if (skreq->state == SKD_REQ_STATE_ABORTED) {
-			pr_debug("%s:%s:%d reclaim req %p id=%04x\n",
-				 skdev->name, __func__, __LINE__,
-				 skreq, skreq->id);
+			dev_dbg(&skdev->pdev->dev, "reclaim req %p id=%04x\n",
+				skreq, skreq->id);
 			/* a previously timed out command can
 			 * now be cleaned up */
 			skd_release_skreq(skdev, skreq);
@@ -2694,10 +2656,9 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 			skd_postop_sg_list(skdev, skreq);
 
 		if (!skreq->req) {
-			pr_debug("%s:%s:%d NULL backptr skdreq %p, "
-				 "req=0x%x req_id=0x%x\n",
-				 skdev->name, __func__, __LINE__,
-				 skreq, skreq->id, req_id);
+			dev_dbg(&skdev->pdev->dev,
+				"NULL backptr skdreq %p, req=0x%x req_id=0x%x\n",
+				skreq, skreq->id, req_id);
 		} else {
 			/*
 			 * Capture the outcome and post it back to the
@@ -2746,9 +2707,8 @@ static void skd_complete_other(struct skd_device *skdev,
 	req_table = req_id & SKD_ID_TABLE_MASK;
 	req_slot = req_id & SKD_ID_SLOT_MASK;
 
-	pr_debug("%s:%s:%d table=0x%x id=0x%x slot=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 req_table, req_id, req_slot);
+	dev_dbg(&skdev->pdev->dev, "table=0x%x id=0x%x slot=%d\n", req_table,
+		req_id, req_slot);
 
 	/*
 	 * Based on the request id, determine how to dispatch this completion.
@@ -2816,14 +2776,12 @@ static void skd_complete_special(struct skd_device *skdev,
 				 volatile struct fit_comp_error_info *skerr,
 				 struct skd_special_context *skspcl)
 {
-	pr_debug("%s:%s:%d  completing special request %p\n",
-		 skdev->name, __func__, __LINE__, skspcl);
+	dev_dbg(&skdev->pdev->dev, " completing special request %p\n", skspcl);
 	if (skspcl->orphaned) {
 		/* Discard orphaned request */
 		/* ?: Can this release directly or does it need
 		 * to use a worker? */
-		pr_debug("%s:%s:%d release orphaned %p\n",
-			 skdev->name, __func__, __LINE__, skspcl);
+		dev_dbg(&skdev->pdev->dev, "release orphaned %p\n", skspcl);
 		skd_release_special(skdev, skspcl);
 		return;
 	}
@@ -2860,8 +2818,7 @@ static void skd_release_special(struct skd_device *skdev,
 	skdev->skspcl_free_list = (struct skd_special_context *)skspcl;
 
 	if (was_depleted) {
-		pr_debug("%s:%s:%d skspcl was depleted\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "skspcl was depleted\n");
 		/* Free list was depleted. Their might be waiters. */
 		wake_up_interruptible(&skdev->waitq);
 	}
@@ -2926,8 +2883,8 @@ skd_isr(int irq, void *ptr)
 		ack = FIT_INT_DEF_MASK;
 		ack &= intstat;
 
-		pr_debug("%s:%s:%d intstat=0x%x ack=0x%x\n",
-			 skdev->name, __func__, __LINE__, intstat, ack);
+		dev_dbg(&skdev->pdev->dev, "intstat=0x%x ack=0x%x\n", intstat,
+			ack);
 
 		/* As long as there is an int pending on device, keep
 		 * running loop.  When none, get out, but if we've never
@@ -2992,13 +2949,13 @@ skd_isr(int irq, void *ptr)
 static void skd_drive_fault(struct skd_device *skdev)
 {
 	skdev->state = SKD_DRVR_STATE_FAULT;
-	pr_err("(%s): Drive FAULT\n", skd_name(skdev));
+	dev_err(&skdev->pdev->dev, "Drive FAULT\n");
 }
 
 static void skd_drive_disappeared(struct skd_device *skdev)
 {
 	skdev->state = SKD_DRVR_STATE_DISAPPEARED;
-	pr_err("(%s): Drive DISAPPEARED\n", skd_name(skdev));
+	dev_err(&skdev->pdev->dev, "Drive DISAPPEARED\n");
 }
 
 static void skd_isr_fwstate(struct skd_device *skdev)
@@ -3011,10 +2968,9 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 	sense = SKD_READL(skdev, FIT_STATUS);
 	state = sense & FIT_SR_DRIVE_STATE_MASK;
 
-	pr_err("(%s): s1120 state %s(%d)=>%s(%d)\n",
-	       skd_name(skdev),
-	       skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
-	       skd_drive_state_to_str(state), state);
+	dev_err(&skdev->pdev->dev, "s1120 state %s(%d)=>%s(%d)\n",
+		skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
+		skd_drive_state_to_str(state), state);
 
 	skdev->drive_state = state;
 
@@ -3046,10 +3002,11 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 			skdev->cur_max_queue_depth * 2 / 3 + 1;
 		if (skdev->queue_low_water_mark < 1)
 			skdev->queue_low_water_mark = 1;
-		pr_info("(%s): Queue depth limit=%d dev=%d lowat=%d\n",
-		       skd_name(skdev),
-		       skdev->cur_max_queue_depth,
-		       skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
+		dev_info(&skdev->pdev->dev,
+			 "Queue depth limit=%d dev=%d lowat=%d\n",
+			 skdev->cur_max_queue_depth,
+			 skdev->dev_max_queue_depth,
+			 skdev->queue_low_water_mark);
 
 		skd_refresh_device_data(skdev);
 		break;
@@ -3086,8 +3043,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		}
 		break;
 	case FIT_SR_DRIVE_FW_BOOTING:
-		pr_debug("%s:%s:%d ISR FIT_SR_DRIVE_FW_BOOTING %s\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
+		dev_dbg(&skdev->pdev->dev, "ISR FIT_SR_DRIVE_FW_BOOTING\n");
 		skdev->state = SKD_DRVR_STATE_WAIT_BOOT;
 		skdev->timer_countdown = SKD_WAIT_BOOT_TIMO;
 		break;
@@ -3105,8 +3061,8 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 
 	/* PCIe bus returned all Fs? */
 	case 0xFF:
-		pr_info("(%s): state=0x%x sense=0x%x\n",
-		       skd_name(skdev), state, sense);
+		dev_info(&skdev->pdev->dev, "state=0x%x sense=0x%x\n", state,
+			 sense);
 		skd_drive_disappeared(skdev);
 		skd_recover_requests(skdev, 0);
 		blk_start_queue(skdev->queue);
@@ -3117,10 +3073,9 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		 */
 		break;
 	}
-	pr_err("(%s): Driver state %s(%d)=>%s(%d)\n",
-	       skd_name(skdev),
-	       skd_skdev_state_to_str(prev_driver_state), prev_driver_state,
-	       skd_skdev_state_to_str(skdev->state), skdev->state);
+	dev_err(&skdev->pdev->dev, "Driver state %s(%d)=>%s(%d)\n",
+		skd_skdev_state_to_str(prev_driver_state), prev_driver_state,
+		skd_skdev_state_to_str(skdev->state), skdev->state);
 }
 
 static void skd_recover_requests(struct skd_device *skdev, int requeue)
@@ -3185,14 +3140,12 @@ static void skd_recover_requests(struct skd_device *skdev, int requeue)
 		 */
 		if (skspcl->req.state == SKD_REQ_STATE_BUSY) {
 			if (skspcl->orphaned) {
-				pr_debug("%s:%s:%d orphaned %p\n",
-					 skdev->name, __func__, __LINE__,
-					 skspcl);
+				dev_dbg(&skdev->pdev->dev, "orphaned %p\n",
+					skspcl);
 				skd_release_special(skdev, skspcl);
 			} else {
-				pr_debug("%s:%s:%d not orphaned %p\n",
-					 skdev->name, __func__, __LINE__,
-					 skspcl);
+				dev_dbg(&skdev->pdev->dev, "not orphaned %p\n",
+					skspcl);
 				skspcl->req.state = SKD_REQ_STATE_ABORTED;
 			}
 		}
@@ -3213,8 +3166,8 @@ static void skd_isr_msg_from_dev(struct skd_device *skdev)
 
 	mfd = SKD_READL(skdev, FIT_MSG_FROM_DEVICE);
 
-	pr_debug("%s:%s:%d mfd=0x%x last_mtd=0x%x\n",
-		 skdev->name, __func__, __LINE__, mfd, skdev->last_mtd);
+	dev_dbg(&skdev->pdev->dev, "mfd=0x%x last_mtd=0x%x\n", mfd,
+		skdev->last_mtd);
 
 	/* ignore any mtd that is an ack for something we didn't send */
 	if (FIT_MXD_TYPE(mfd) != FIT_MXD_TYPE(skdev->last_mtd))
@@ -3225,13 +3178,10 @@ static void skd_isr_msg_from_dev(struct skd_device *skdev)
 		skdev->proto_ver = FIT_PROTOCOL_MAJOR_VER(mfd);
 
 		if (skdev->proto_ver != FIT_PROTOCOL_VERSION_1) {
-			pr_err("(%s): protocol mismatch\n",
-			       skdev->name);
-			pr_err("(%s):   got=%d support=%d\n",
-			       skdev->name, skdev->proto_ver,
-			       FIT_PROTOCOL_VERSION_1);
-			pr_err("(%s):   please upgrade driver\n",
-			       skdev->name);
+			dev_err(&skdev->pdev->dev, "protocol mismatch\n");
+			dev_err(&skdev->pdev->dev, "  got=%d support=%d\n",
+				skdev->proto_ver, FIT_PROTOCOL_VERSION_1);
+			dev_err(&skdev->pdev->dev, "  please upgrade driver\n");
 			skdev->state = SKD_DRVR_STATE_PROTOCOL_MISMATCH;
 			skd_soft_reset(skdev);
 			break;
@@ -3285,9 +3235,8 @@ static void skd_isr_msg_from_dev(struct skd_device *skdev)
 		SKD_WRITEL(skdev, mtd, FIT_MSG_TO_DEVICE);
 		skdev->last_mtd = mtd;
 
-		pr_err("(%s): Time sync driver=0x%x device=0x%x\n",
-		       skd_name(skdev),
-		       skdev->connect_time_stamp, skdev->drive_jiffies);
+		dev_err(&skdev->pdev->dev, "Time sync driver=0x%x device=0x%x\n",
+			skdev->connect_time_stamp, skdev->drive_jiffies);
 		break;
 
 	case FIT_MTD_ARM_QUEUE:
@@ -3309,8 +3258,7 @@ static void skd_disable_interrupts(struct skd_device *skdev)
 	sense = SKD_READL(skdev, FIT_CONTROL);
 	sense &= ~FIT_CR_ENABLE_INTERRUPTS;
 	SKD_WRITEL(skdev, sense, FIT_CONTROL);
-	pr_debug("%s:%s:%d sense 0x%x\n",
-		 skdev->name, __func__, __LINE__, sense);
+	dev_dbg(&skdev->pdev->dev, "sense 0x%x\n", sense);
 
 	/* Note that the 1s is written. A 1-bit means
 	 * disable, a 0 means enable.
@@ -3329,13 +3277,11 @@ static void skd_enable_interrupts(struct skd_device *skdev)
 	/* Note that the compliment of mask is written. A 1-bit means
 	 * disable, a 0 means enable. */
 	SKD_WRITEL(skdev, ~val, FIT_INT_MASK_HOST);
-	pr_debug("%s:%s:%d interrupt mask=0x%x\n",
-		 skdev->name, __func__, __LINE__, ~val);
+	dev_dbg(&skdev->pdev->dev, "interrupt mask=0x%x\n", ~val);
 
 	val = SKD_READL(skdev, FIT_CONTROL);
 	val |= FIT_CR_ENABLE_INTERRUPTS;
-	pr_debug("%s:%s:%d control=0x%x\n",
-		 skdev->name, __func__, __LINE__, val);
+	dev_dbg(&skdev->pdev->dev, "control=0x%x\n", val);
 	SKD_WRITEL(skdev, val, FIT_CONTROL);
 }
 
@@ -3351,8 +3297,7 @@ static void skd_soft_reset(struct skd_device *skdev)
 
 	val = SKD_READL(skdev, FIT_CONTROL);
 	val |= (FIT_CR_SOFT_RESET);
-	pr_debug("%s:%s:%d control=0x%x\n",
-		 skdev->name, __func__, __LINE__, val);
+	dev_dbg(&skdev->pdev->dev, "control=0x%x\n", val);
 	SKD_WRITEL(skdev, val, FIT_CONTROL);
 }
 
@@ -3369,8 +3314,7 @@ static void skd_start_device(struct skd_device *skdev)
 
 	sense = SKD_READL(skdev, FIT_STATUS);
 
-	pr_debug("%s:%s:%d initial status=0x%x\n",
-		 skdev->name, __func__, __LINE__, sense);
+	dev_dbg(&skdev->pdev->dev, "initial status=0x%x\n", sense);
 
 	state = sense & FIT_SR_DRIVE_STATE_MASK;
 	skdev->drive_state = state;
@@ -3383,25 +3327,23 @@ static void skd_start_device(struct skd_device *skdev)
 
 	switch (skdev->drive_state) {
 	case FIT_SR_DRIVE_OFFLINE:
-		pr_err("(%s): Drive offline...\n", skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "Drive offline...\n");
 		break;
 
 	case FIT_SR_DRIVE_FW_BOOTING:
-		pr_debug("%s:%s:%d FIT_SR_DRIVE_FW_BOOTING %s\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
+		dev_dbg(&skdev->pdev->dev, "FIT_SR_DRIVE_FW_BOOTING\n");
 		skdev->state = SKD_DRVR_STATE_WAIT_BOOT;
 		skdev->timer_countdown = SKD_WAIT_BOOT_TIMO;
 		break;
 
 	case FIT_SR_DRIVE_BUSY_SANITIZE:
-		pr_info("(%s): Start: BUSY_SANITIZE\n",
-		       skd_name(skdev));
+		dev_info(&skdev->pdev->dev, "Start: BUSY_SANITIZE\n");
 		skdev->state = SKD_DRVR_STATE_BUSY_SANITIZE;
 		skdev->timer_countdown = SKD_STARTED_BUSY_TIMO;
 		break;
 
 	case FIT_SR_DRIVE_BUSY_ERASE:
-		pr_info("(%s): Start: BUSY_ERASE\n", skd_name(skdev));
+		dev_info(&skdev->pdev->dev, "Start: BUSY_ERASE\n");
 		skdev->state = SKD_DRVR_STATE_BUSY_ERASE;
 		skdev->timer_countdown = SKD_STARTED_BUSY_TIMO;
 		break;
@@ -3412,14 +3354,13 @@ static void skd_start_device(struct skd_device *skdev)
 		break;
 
 	case FIT_SR_DRIVE_BUSY:
-		pr_err("(%s): Drive Busy...\n", skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "Drive Busy...\n");
 		skdev->state = SKD_DRVR_STATE_BUSY;
 		skdev->timer_countdown = SKD_STARTED_BUSY_TIMO;
 		break;
 
 	case FIT_SR_DRIVE_SOFT_RESET:
-		pr_err("(%s) drive soft reset in prog\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "drive soft reset in prog\n");
 		break;
 
 	case FIT_SR_DRIVE_FAULT:
@@ -3429,8 +3370,7 @@ static void skd_start_device(struct skd_device *skdev)
 		 */
 		skd_drive_fault(skdev);
 		/*start the queue so we can respond with error to requests */
-		pr_debug("%s:%s:%d starting %s queue\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
+		dev_dbg(&skdev->pdev->dev, "starting queue\n");
 		blk_start_queue(skdev->queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
@@ -3441,38 +3381,33 @@ static void skd_start_device(struct skd_device *skdev)
 		 * to the BAR1 addresses. */
 		skd_drive_disappeared(skdev);
 		/*start the queue so we can respond with error to requests */
-		pr_debug("%s:%s:%d starting %s queue to error-out reqs\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
+		dev_dbg(&skdev->pdev->dev,
+			"starting queue to error-out reqs\n");
 		blk_start_queue(skdev->queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
 
 	default:
-		pr_err("(%s) Start: unknown state %x\n",
-		       skd_name(skdev), skdev->drive_state);
+		dev_err(&skdev->pdev->dev, "Start: unknown state %x\n",
+			skdev->drive_state);
 		break;
 	}
 
 	state = SKD_READL(skdev, FIT_CONTROL);
-	pr_debug("%s:%s:%d FIT Control Status=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "FIT Control Status=0x%x\n", state);
 
 	state = SKD_READL(skdev, FIT_INT_STATUS_HOST);
-	pr_debug("%s:%s:%d Intr Status=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "Intr Status=0x%x\n", state);
 
 	state = SKD_READL(skdev, FIT_INT_MASK_HOST);
-	pr_debug("%s:%s:%d Intr Mask=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "Intr Mask=0x%x\n", state);
 
 	state = SKD_READL(skdev, FIT_MSG_FROM_DEVICE);
-	pr_debug("%s:%s:%d Msg from Dev=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "Msg from Dev=0x%x\n", state);
 
 	state = SKD_READL(skdev, FIT_HW_VERSION);
-	pr_debug("%s:%s:%d HW version=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "HW version=0x%x\n", state);
 
 	spin_unlock_irqrestore(&skdev->lock, flags);
 }
@@ -3487,14 +3422,12 @@ static void skd_stop_device(struct skd_device *skdev)
 	spin_lock_irqsave(&skdev->lock, flags);
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
-		pr_err("(%s): skd_stop_device not online no sync\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "%s not online no sync\n", __func__);
 		goto stop_out;
 	}
 
 	if (skspcl->req.state != SKD_REQ_STATE_IDLE) {
-		pr_err("(%s): skd_stop_device no special\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "%s no special\n", __func__);
 		goto stop_out;
 	}
 
@@ -3512,16 +3445,13 @@ static void skd_stop_device(struct skd_device *skdev)
 
 	switch (skdev->sync_done) {
 	case 0:
-		pr_err("(%s): skd_stop_device no sync\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "%s no sync\n", __func__);
 		break;
 	case 1:
-		pr_err("(%s): skd_stop_device sync done\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "%s sync done\n", __func__);
 		break;
 	default:
-		pr_err("(%s): skd_stop_device sync error\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "%s sync error\n", __func__);
 	}
 
 stop_out:
@@ -3551,8 +3481,8 @@ static void skd_stop_device(struct skd_device *skdev)
 	}
 
 	if (dev_state != FIT_SR_DRIVE_INIT)
-		pr_err("(%s): skd_stop_device state error 0x%02x\n",
-		       skd_name(skdev), dev_state);
+		dev_err(&skdev->pdev->dev, "%s state error 0x%02x\n", __func__,
+			dev_state);
 }
 
 /* assume spinlock is held */
@@ -3565,8 +3495,7 @@ static void skd_restart_device(struct skd_device *skdev)
 
 	state = SKD_READL(skdev, FIT_STATUS);
 
-	pr_debug("%s:%s:%d drive status=0x%x\n",
-		 skdev->name, __func__, __LINE__, state);
+	dev_dbg(&skdev->pdev->dev, "drive status=0x%x\n", state);
 
 	state &= FIT_SR_DRIVE_STATE_MASK;
 	skdev->drive_state = state;
@@ -3586,8 +3515,7 @@ static int skd_quiesce_dev(struct skd_device *skdev)
 	switch (skdev->state) {
 	case SKD_DRVR_STATE_BUSY:
 	case SKD_DRVR_STATE_BUSY_IMMINENT:
-		pr_debug("%s:%s:%d stopping %s queue\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
+		dev_dbg(&skdev->pdev->dev, "stopping queue\n");
 		blk_stop_queue(skdev->queue);
 		break;
 	case SKD_DRVR_STATE_ONLINE:
@@ -3600,8 +3528,8 @@ static int skd_quiesce_dev(struct skd_device *skdev)
 	case SKD_DRVR_STATE_RESUMING:
 	default:
 		rc = -EINVAL;
-		pr_debug("%s:%s:%d state [%d] not implemented\n",
-			 skdev->name, __func__, __LINE__, skdev->state);
+		dev_dbg(&skdev->pdev->dev, "state [%d] not implemented\n",
+			skdev->state);
 	}
 	return rc;
 }
@@ -3613,8 +3541,7 @@ static int skd_unquiesce_dev(struct skd_device *skdev)
 
 	skd_log_skdev(skdev, "unquiesce");
 	if (skdev->state == SKD_DRVR_STATE_ONLINE) {
-		pr_debug("%s:%s:%d **** device already ONLINE\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "**** device already ONLINE\n");
 		return 0;
 	}
 	if (skdev->drive_state != FIT_SR_DRIVE_ONLINE) {
@@ -3627,8 +3554,7 @@ static int skd_unquiesce_dev(struct skd_device *skdev)
 		 * to become available.
 		 */
 		skdev->state = SKD_DRVR_STATE_BUSY;
-		pr_debug("%s:%s:%d drive BUSY state\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "drive BUSY state\n");
 		return 0;
 	}
 
@@ -3647,16 +3573,14 @@ static int skd_unquiesce_dev(struct skd_device *skdev)
 	case SKD_DRVR_STATE_IDLE:
 	case SKD_DRVR_STATE_LOAD:
 		skdev->state = SKD_DRVR_STATE_ONLINE;
-		pr_err("(%s): Driver state %s(%d)=>%s(%d)\n",
-		       skd_name(skdev),
-		       skd_skdev_state_to_str(prev_driver_state),
-		       prev_driver_state, skd_skdev_state_to_str(skdev->state),
-		       skdev->state);
-		pr_debug("%s:%s:%d **** device ONLINE...starting block queue\n",
-			 skdev->name, __func__, __LINE__);
-		pr_debug("%s:%s:%d starting %s queue\n",
-			 skdev->name, __func__, __LINE__, skdev->name);
-		pr_info("(%s): STEC s1120 ONLINE\n", skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "Driver state %s(%d)=>%s(%d)\n",
+			skd_skdev_state_to_str(prev_driver_state),
+			prev_driver_state, skd_skdev_state_to_str(skdev->state),
+			skdev->state);
+		dev_dbg(&skdev->pdev->dev,
+			"**** device ONLINE...starting block queue\n");
+		dev_dbg(&skdev->pdev->dev, "starting queue\n");
+		dev_info(&skdev->pdev->dev, "STEC s1120 ONLINE\n");
 		blk_start_queue(skdev->queue);
 		skdev->gendisk_on = 1;
 		wake_up_interruptible(&skdev->waitq);
@@ -3664,9 +3588,9 @@ static int skd_unquiesce_dev(struct skd_device *skdev)
 
 	case SKD_DRVR_STATE_DISAPPEARED:
 	default:
-		pr_debug("%s:%s:%d **** driver state %d, not implemented \n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->state);
+		dev_dbg(&skdev->pdev->dev,
+			"**** driver state %d, not implemented\n",
+			skdev->state);
 		return -EBUSY;
 	}
 	return 0;
@@ -3684,11 +3608,10 @@ static irqreturn_t skd_reserved_isr(int irq, void *skd_host_data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d MSIX = 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 SKD_READL(skdev, FIT_INT_STATUS_HOST));
-	pr_err("(%s): MSIX reserved irq %d = 0x%x\n", skd_name(skdev),
-	       irq, SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_dbg(&skdev->pdev->dev, "MSIX = 0x%x\n",
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_err(&skdev->pdev->dev, "MSIX reserved irq %d = 0x%x\n", irq,
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
 	SKD_WRITEL(skdev, FIT_INT_RESERVED_MASK, FIT_INT_STATUS_HOST);
 	spin_unlock_irqrestore(&skdev->lock, flags);
 	return IRQ_HANDLED;
@@ -3700,9 +3623,8 @@ static irqreturn_t skd_statec_isr(int irq, void *skd_host_data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d MSIX = 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_dbg(&skdev->pdev->dev, "MSIX = 0x%x\n",
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
 	SKD_WRITEL(skdev, FIT_ISH_FW_STATE_CHANGE, FIT_INT_STATUS_HOST);
 	skd_isr_fwstate(skdev);
 	spin_unlock_irqrestore(&skdev->lock, flags);
@@ -3717,9 +3639,8 @@ static irqreturn_t skd_comp_q(int irq, void *skd_host_data)
 	int deferred;
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d MSIX = 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_dbg(&skdev->pdev->dev, "MSIX = 0x%x\n",
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
 	SKD_WRITEL(skdev, FIT_ISH_COMPLETION_POSTED, FIT_INT_STATUS_HOST);
 	deferred = skd_isr_completion_posted(skdev, skd_isr_comp_limit,
 						&flush_enqueued);
@@ -3742,9 +3663,8 @@ static irqreturn_t skd_msg_isr(int irq, void *skd_host_data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d MSIX = 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_dbg(&skdev->pdev->dev, "MSIX = 0x%x\n",
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
 	SKD_WRITEL(skdev, FIT_ISH_MSG_FROM_DEV, FIT_INT_STATUS_HOST);
 	skd_isr_msg_from_dev(skdev);
 	spin_unlock_irqrestore(&skdev->lock, flags);
@@ -3757,9 +3677,8 @@ static irqreturn_t skd_qfull_isr(int irq, void *skd_host_data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d MSIX = 0x%x\n",
-		 skdev->name, __func__, __LINE__,
-		 SKD_READL(skdev, FIT_INT_STATUS_HOST));
+	dev_dbg(&skdev->pdev->dev, "MSIX = 0x%x\n",
+		SKD_READL(skdev, FIT_INT_STATUS_HOST));
 	SKD_WRITEL(skdev, FIT_INT_QUEUE_FULL, FIT_INT_STATUS_HOST);
 	spin_unlock_irqrestore(&skdev->lock, flags);
 	return IRQ_HANDLED;
@@ -3808,8 +3727,7 @@ static int skd_acquire_msix(struct skd_device *skdev)
 	rc = pci_alloc_irq_vectors(pdev, SKD_MAX_MSIX_COUNT, SKD_MAX_MSIX_COUNT,
 			PCI_IRQ_MSIX);
 	if (rc < 0) {
-		pr_err("(%s): failed to enable MSI-X %d\n",
-		       skd_name(skdev), rc);
+		dev_err(&skdev->pdev->dev, "failed to enable MSI-X %d\n", rc);
 		goto out;
 	}
 
@@ -3817,8 +3735,7 @@ static int skd_acquire_msix(struct skd_device *skdev)
 			sizeof(struct skd_msix_entry), GFP_KERNEL);
 	if (!skdev->msix_entries) {
 		rc = -ENOMEM;
-		pr_err("(%s): msix table allocation error\n",
-		       skd_name(skdev));
+		dev_err(&skdev->pdev->dev, "msix table allocation error\n");
 		goto out;
 	}
 
@@ -3835,16 +3752,15 @@ static int skd_acquire_msix(struct skd_device *skdev)
 				msix_entries[i].handler, 0,
 				qentry->isr_name, skdev);
 		if (rc) {
-			pr_err("(%s): Unable to register(%d) MSI-X "
-			       "handler %d: %s\n",
-			       skd_name(skdev), rc, i, qentry->isr_name);
+			dev_err(&skdev->pdev->dev,
+				"Unable to register(%d) MSI-X handler %d: %s\n",
+				rc, i, qentry->isr_name);
 			goto msix_out;
 		}
 	}
 
-	pr_debug("%s:%s:%d %s: <%s> msix %d irq(s) enabled\n",
-		 skdev->name, __func__, __LINE__,
-		 pci_name(pdev), skdev->name, SKD_MAX_MSIX_COUNT);
+	dev_dbg(&skdev->pdev->dev, "%d msix irq(s) enabled\n",
+		SKD_MAX_MSIX_COUNT);
 	return 0;
 
 msix_out:
@@ -3867,8 +3783,8 @@ static int skd_acquire_irq(struct skd_device *skdev)
 		if (!rc)
 			return 0;
 
-		pr_err("(%s): failed to enable MSI-X, re-trying with MSI %d\n",
-		       skd_name(skdev), rc);
+		dev_err(&skdev->pdev->dev,
+			"failed to enable MSI-X, re-trying with MSI %d\n", rc);
 	}
 
 	snprintf(skdev->isr_name, sizeof(skdev->isr_name), "%s%d", DRV_NAME,
@@ -3878,8 +3794,8 @@ static int skd_acquire_irq(struct skd_device *skdev)
 		irq_flag |= PCI_IRQ_MSI;
 	rc = pci_alloc_irq_vectors(pdev, 1, 1, irq_flag);
 	if (rc < 0) {
-		pr_err("(%s): failed to allocate the MSI interrupt %d\n",
-			skd_name(skdev), rc);
+		dev_err(&skdev->pdev->dev,
+			"failed to allocate the MSI interrupt %d\n", rc);
 		return rc;
 	}
 
@@ -3888,8 +3804,8 @@ static int skd_acquire_irq(struct skd_device *skdev)
 			skdev->isr_name, skdev);
 	if (rc) {
 		pci_free_irq_vectors(pdev);
-		pr_err("(%s): failed to allocate interrupt %d\n",
-			skd_name(skdev), rc);
+		dev_err(&skdev->pdev->dev, "failed to allocate interrupt %d\n",
+			rc);
 		return rc;
 	}
 
@@ -3932,9 +3848,9 @@ static int skd_cons_skcomp(struct skd_device *skdev)
 	nbytes = sizeof(*skcomp) * SKD_N_COMPLETION_ENTRY;
 	nbytes += sizeof(struct fit_comp_error_info) * SKD_N_COMPLETION_ENTRY;
 
-	pr_debug("%s:%s:%d comp pci_alloc, total bytes %d entries %d\n",
-		 skdev->name, __func__, __LINE__,
-		 nbytes, SKD_N_COMPLETION_ENTRY);
+	dev_dbg(&skdev->pdev->dev,
+		"comp pci_alloc, total bytes %d entries %d\n",
+		nbytes, SKD_N_COMPLETION_ENTRY);
 
 	skcomp = pci_zalloc_consistent(skdev->pdev, nbytes,
 				       &skdev->cq_dma_address);
@@ -3958,11 +3874,10 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 	int rc = 0;
 	u32 i;
 
-	pr_debug("%s:%s:%d skmsg_table kzalloc, struct %lu, count %u total %lu\n",
-		 skdev->name, __func__, __LINE__,
-		 sizeof(struct skd_fitmsg_context),
-		 skdev->num_fitmsg_context,
-		 sizeof(struct skd_fitmsg_context) * skdev->num_fitmsg_context);
+	dev_dbg(&skdev->pdev->dev,
+		"skmsg_table kzalloc, struct %lu, count %u total %lu\n",
+		sizeof(struct skd_fitmsg_context), skdev->num_fitmsg_context,
+		sizeof(struct skd_fitmsg_context) * skdev->num_fitmsg_context);
 
 	skdev->skmsg_table = kzalloc(sizeof(struct skd_fitmsg_context)
 				     *skdev->num_fitmsg_context, GFP_KERNEL);
@@ -4042,11 +3957,10 @@ static int skd_cons_skreq(struct skd_device *skdev)
 	int rc = 0;
 	u32 i;
 
-	pr_debug("%s:%s:%d skreq_table kzalloc, struct %lu, count %u total %lu\n",
-		 skdev->name, __func__, __LINE__,
-		 sizeof(struct skd_request_context),
-		 skdev->num_req_context,
-		 sizeof(struct skd_request_context) * skdev->num_req_context);
+	dev_dbg(&skdev->pdev->dev,
+		"skreq_table kzalloc, struct %lu, count %u total %lu\n",
+		sizeof(struct skd_request_context), skdev->num_req_context,
+		sizeof(struct skd_request_context) * skdev->num_req_context);
 
 	skdev->skreq_table = kzalloc(sizeof(struct skd_request_context)
 				     * skdev->num_req_context, GFP_KERNEL);
@@ -4055,10 +3969,9 @@ static int skd_cons_skreq(struct skd_device *skdev)
 		goto err_out;
 	}
 
-	pr_debug("%s:%s:%d alloc sg_table sg_per_req %u scatlist %lu total %lu\n",
-		 skdev->name, __func__, __LINE__,
-		 skdev->sgs_per_request, sizeof(struct scatterlist),
-		 skdev->sgs_per_request * sizeof(struct scatterlist));
+	dev_dbg(&skdev->pdev->dev, "alloc sg_table sg_per_req %u scatlist %lu total %lu\n",
+		skdev->sgs_per_request, sizeof(struct scatterlist),
+		skdev->sgs_per_request * sizeof(struct scatterlist));
 
 	for (i = 0; i < skdev->num_req_context; i++) {
 		struct skd_request_context *skreq;
@@ -4101,11 +4014,10 @@ static int skd_cons_skspcl(struct skd_device *skdev)
 	int rc = 0;
 	u32 i, nbytes;
 
-	pr_debug("%s:%s:%d skspcl_table kzalloc, struct %lu, count %u total %lu\n",
-		 skdev->name, __func__, __LINE__,
-		 sizeof(struct skd_special_context),
-		 skdev->n_special,
-		 sizeof(struct skd_special_context) * skdev->n_special);
+	dev_dbg(&skdev->pdev->dev,
+		"skspcl_table kzalloc, struct %lu, count %u total %lu\n",
+		sizeof(struct skd_special_context), skdev->n_special,
+		sizeof(struct skd_special_context) * skdev->n_special);
 
 	skdev->skspcl_table = kzalloc(sizeof(struct skd_special_context)
 				      * skdev->n_special, GFP_KERNEL);
@@ -4248,8 +4160,7 @@ static int skd_cons_disk(struct skd_device *skdev)
 	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 
 	spin_lock_irqsave(&skdev->lock, flags);
-	pr_debug("%s:%s:%d stopping %s queue\n",
-		 skdev->name, __func__, __LINE__, skdev->name);
+	dev_dbg(&skdev->pdev->dev, "stopping queue\n");
 	blk_stop_queue(skdev->queue);
 	spin_unlock_irqrestore(&skdev->lock, flags);
 
@@ -4269,8 +4180,7 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	skdev = kzalloc(sizeof(*skdev), GFP_KERNEL);
 
 	if (!skdev) {
-		pr_err(PFX "(%s): memory alloc failure\n",
-		       pci_name(pdev));
+		dev_err(&pdev->dev, "memory alloc failure\n");
 		return NULL;
 	}
 
@@ -4278,7 +4188,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	skdev->pdev = pdev;
 	skdev->devno = skd_next_devno++;
 	skdev->major = blk_major;
-	sprintf(skdev->name, DRV_NAME "%d", skdev->devno);
 	skdev->dev_max_queue_depth = 0;
 
 	skdev->num_req_context = skd_max_queue_depth;
@@ -4294,42 +4203,41 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 
 	INIT_WORK(&skdev->completion_worker, skd_completion_worker);
 
-	pr_debug("%s:%s:%d skcomp\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skcomp\n");
 	rc = skd_cons_skcomp(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d skmsg\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skmsg\n");
 	rc = skd_cons_skmsg(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d skreq\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skreq\n");
 	rc = skd_cons_skreq(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d skspcl\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skspcl\n");
 	rc = skd_cons_skspcl(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d sksb\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	rc = skd_cons_sksb(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d disk\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "disk\n");
 	rc = skd_cons_disk(skdev);
 	if (rc < 0)
 		goto err_out;
 
-	pr_debug("%s:%s:%d VICTORY\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "VICTORY\n");
 	return skdev;
 
 err_out:
-	pr_debug("%s:%s:%d construct failed\n",
-		 skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "construct failed\n");
 	skd_destruct(skdev);
 	return NULL;
 }
@@ -4513,25 +4421,25 @@ static void skd_destruct(struct skd_device *skdev)
 	if (skdev == NULL)
 		return;
 
-	pr_debug("%s:%s:%d disk\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "disk\n");
 	skd_free_disk(skdev);
 
-	pr_debug("%s:%s:%d sksb\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	skd_free_sksb(skdev);
 
-	pr_debug("%s:%s:%d skspcl\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skspcl\n");
 	skd_free_skspcl(skdev);
 
-	pr_debug("%s:%s:%d skreq\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skreq\n");
 	skd_free_skreq(skdev);
 
-	pr_debug("%s:%s:%d skmsg\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skmsg\n");
 	skd_free_skmsg(skdev);
 
-	pr_debug("%s:%s:%d skcomp\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skcomp\n");
 	skd_free_skcomp(skdev);
 
-	pr_debug("%s:%s:%d skdev\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "skdev\n");
 	kfree(skdev);
 }
 
@@ -4548,9 +4456,8 @@ static int skd_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 
 	skdev = bdev->bd_disk->private_data;
 
-	pr_debug("%s:%s:%d %s: CMD[%s] getgeo device\n",
-		 skdev->name, __func__, __LINE__,
-		 bdev->bd_disk->disk_name, current->comm);
+	dev_dbg(&skdev->pdev->dev, "%s: CMD[%s] getgeo device\n",
+		bdev->bd_disk->disk_name, current->comm);
 
 	if (skdev->read_cap_is_valid) {
 		capacity = get_capacity(skdev->disk);
@@ -4565,7 +4472,7 @@ static int skd_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 
 static int skd_bdev_attach(struct device *parent, struct skd_device *skdev)
 {
-	pr_debug("%s:%s:%d add_disk\n", skdev->name, __func__, __LINE__);
+	dev_dbg(&skdev->pdev->dev, "add_disk\n");
 	device_add_disk(parent, skdev->disk);
 	return 0;
 }
@@ -4626,10 +4533,10 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	char pci_str[32];
 	struct skd_device *skdev;
 
-	pr_info("STEC s1120 Driver(%s) version %s-b%s\n",
-	       DRV_NAME, DRV_VERSION, DRV_BUILD_ID);
-	pr_info("(skd?:??:[%s]): vendor=%04X device=%04x\n",
-	       pci_name(pdev), pdev->vendor, pdev->device);
+	dev_info(&pdev->dev, "STEC s1120 Driver(%s) version %s-b%s\n",
+		 DRV_NAME, DRV_VERSION, DRV_BUILD_ID);
+	dev_info(&pdev->dev, "vendor=%04X device=%04x\n", pdev->vendor,
+		 pdev->device);
 
 	rc = pci_enable_device(pdev);
 	if (rc)
@@ -4640,16 +4547,13 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
 	if (!rc) {
 		if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
-
-			pr_err("(%s): consistent DMA mask error %d\n",
-			       pci_name(pdev), rc);
+			dev_err(&pdev->dev, "consistent DMA mask error %d\n",
+				rc);
 		}
 	} else {
-		(rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)));
+		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
 		if (rc) {
-
-			pr_err("(%s): DMA mask error %d\n",
-			       pci_name(pdev), rc);
+			dev_err(&pdev->dev, "DMA mask error %d\n", rc);
 			goto err_out_regions;
 		}
 	}
@@ -4669,13 +4573,13 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	skd_pci_info(skdev, pci_str);
-	pr_info("(%s): %s 64bit\n", skd_name(skdev), pci_str);
+	dev_info(&pdev->dev, "%s 64bit\n", pci_str);
 
 	pci_set_master(pdev);
 	rc = pci_enable_pcie_error_reporting(pdev);
 	if (rc) {
-		pr_err("(%s): bad enable of PCIe error reporting rc=%d\n",
-		       skd_name(skdev), rc);
+		dev_err(&pdev->dev,
+			"bad enable of PCIe error reporting rc=%d\n", rc);
 		skdev->pcie_error_reporting_is_enabled = 0;
 	} else
 		skdev->pcie_error_reporting_is_enabled = 1;
@@ -4688,21 +4592,19 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		skdev->mem_map[i] = ioremap(skdev->mem_phys[i],
 					    skdev->mem_size[i]);
 		if (!skdev->mem_map[i]) {
-			pr_err("(%s): Unable to map adapter memory!\n",
-			       skd_name(skdev));
+			dev_err(&pdev->dev,
+				"Unable to map adapter memory!\n");
 			rc = -ENODEV;
 			goto err_out_iounmap;
 		}
-		pr_debug("%s:%s:%d mem_map=%p, phyd=%016llx, size=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->mem_map[i],
-			 (uint64_t)skdev->mem_phys[i], skdev->mem_size[i]);
+		dev_dbg(&pdev->dev, "mem_map=%p, phyd=%016llx, size=%d\n",
+			skdev->mem_map[i], (uint64_t)skdev->mem_phys[i],
+			skdev->mem_size[i]);
 	}
 
 	rc = skd_acquire_irq(skdev);
 	if (rc) {
-		pr_err("(%s): interrupt resource error %d\n",
-		       skd_name(skdev), rc);
+		dev_err(&pdev->dev, "interrupt resource error %d\n", rc);
 		goto err_out_iounmap;
 	}
 
@@ -4724,8 +4626,8 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	} else {
 		/* we timed out, something is wrong with the device,
 		   don't add the disk structure */
-		pr_err("(%s): error: waiting for s1120 timed out %d!\n",
-		       skd_name(skdev), rc);
+		dev_err(&pdev->dev, "error: waiting for s1120 timed out %d!\n",
+			rc);
 		/* in case of no error; we timeout with ENXIO */
 		if (!rc)
 			rc = -ENXIO;
@@ -4764,7 +4666,7 @@ static void skd_pci_remove(struct pci_dev *pdev)
 
 	skdev = pci_get_drvdata(pdev);
 	if (!skdev) {
-		pr_err("%s: no device data for PCI\n", pci_name(pdev));
+		dev_err(&pdev->dev, "no device data for PCI\n");
 		return;
 	}
 	skd_stop_device(skdev);
@@ -4793,7 +4695,7 @@ static int skd_pci_suspend(struct pci_dev *pdev, pm_message_t state)
 
 	skdev = pci_get_drvdata(pdev);
 	if (!skdev) {
-		pr_err("%s: no device data for PCI\n", pci_name(pdev));
+		dev_err(&pdev->dev, "no device data for PCI\n");
 		return -EIO;
 	}
 
@@ -4823,7 +4725,7 @@ static int skd_pci_resume(struct pci_dev *pdev)
 
 	skdev = pci_get_drvdata(pdev);
 	if (!skdev) {
-		pr_err("%s: no device data for PCI\n", pci_name(pdev));
+		dev_err(&pdev->dev, "no device data for PCI\n");
 		return -1;
 	}
 
@@ -4841,15 +4743,14 @@ static int skd_pci_resume(struct pci_dev *pdev)
 	if (!rc) {
 		if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
 
-			pr_err("(%s): consistent DMA mask error %d\n",
-			       pci_name(pdev), rc);
+			dev_err(&pdev->dev, "consistent DMA mask error %d\n",
+				rc);
 		}
 	} else {
 		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
 		if (rc) {
 
-			pr_err("(%s): DMA mask error %d\n",
-			       pci_name(pdev), rc);
+			dev_err(&pdev->dev, "DMA mask error %d\n", rc);
 			goto err_out_regions;
 		}
 	}
@@ -4857,8 +4758,8 @@ static int skd_pci_resume(struct pci_dev *pdev)
 	pci_set_master(pdev);
 	rc = pci_enable_pcie_error_reporting(pdev);
 	if (rc) {
-		pr_err("(%s): bad enable of PCIe error reporting rc=%d\n",
-		       skdev->name, rc);
+		dev_err(&pdev->dev,
+			"bad enable of PCIe error reporting rc=%d\n", rc);
 		skdev->pcie_error_reporting_is_enabled = 0;
 	} else
 		skdev->pcie_error_reporting_is_enabled = 1;
@@ -4870,21 +4771,17 @@ static int skd_pci_resume(struct pci_dev *pdev)
 		skdev->mem_map[i] = ioremap(skdev->mem_phys[i],
 					    skdev->mem_size[i]);
 		if (!skdev->mem_map[i]) {
-			pr_err("(%s): Unable to map adapter memory!\n",
-			       skd_name(skdev));
+			dev_err(&pdev->dev, "Unable to map adapter memory!\n");
 			rc = -ENODEV;
 			goto err_out_iounmap;
 		}
-		pr_debug("%s:%s:%d mem_map=%p, phyd=%016llx, size=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 skdev->mem_map[i],
-			 (uint64_t)skdev->mem_phys[i], skdev->mem_size[i]);
+		dev_dbg(&pdev->dev, "mem_map=%p, phyd=%016llx, size=%d\n",
+			skdev->mem_map[i], (uint64_t)skdev->mem_phys[i],
+			skdev->mem_size[i]);
 	}
 	rc = skd_acquire_irq(skdev);
 	if (rc) {
-
-		pr_err("(%s): interrupt resource error %d\n",
-		       pci_name(pdev), rc);
+		dev_err(&pdev->dev, "interrupt resource error %d\n", rc);
 		goto err_out_iounmap;
 	}
 
@@ -4922,15 +4819,15 @@ static void skd_pci_shutdown(struct pci_dev *pdev)
 {
 	struct skd_device *skdev;
 
-	pr_err("skd_pci_shutdown called\n");
+	dev_err(&pdev->dev, "%s called\n", __func__);
 
 	skdev = pci_get_drvdata(pdev);
 	if (!skdev) {
-		pr_err("%s: no device data for PCI\n", pci_name(pdev));
+		dev_err(&pdev->dev, "no device data for PCI\n");
 		return;
 	}
 
-	pr_err("%s: calling stop\n", skd_name(skdev));
+	dev_err(&pdev->dev, "calling stop\n");
 	skd_stop_device(skdev);
 }
 
@@ -4950,21 +4847,6 @@ static struct pci_driver skd_driver = {
  *****************************************************************************
  */
 
-static const char *skd_name(struct skd_device *skdev)
-{
-	memset(skdev->id_str, 0, sizeof(skdev->id_str));
-
-	if (skdev->inquiry_is_valid)
-		snprintf(skdev->id_str, sizeof(skdev->id_str), "%s:%s:[%s]",
-			 skdev->name, skdev->inq_serial_num,
-			 pci_name(skdev->pdev));
-	else
-		snprintf(skdev->id_str, sizeof(skdev->id_str), "%s:??:[%s]",
-			 skdev->name, pci_name(skdev->pdev));
-
-	return skdev->id_str;
-}
-
 const char *skd_drive_state_to_str(int state)
 {
 	switch (state) {
@@ -5078,58 +4960,46 @@ static const char *skd_skreq_state_to_str(enum skd_req_state state)
 
 static void skd_log_skdev(struct skd_device *skdev, const char *event)
 {
-	pr_debug("%s:%s:%d (%s) skdev=%p event='%s'\n",
-		 skdev->name, __func__, __LINE__, skdev->name, skdev, event);
-	pr_debug("%s:%s:%d   drive_state=%s(%d) driver_state=%s(%d)\n",
-		 skdev->name, __func__, __LINE__,
-		 skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
-		 skd_skdev_state_to_str(skdev->state), skdev->state);
-	pr_debug("%s:%s:%d   busy=%d limit=%d dev=%d lowat=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skdev->in_flight, skdev->cur_max_queue_depth,
-		 skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
-	pr_debug("%s:%s:%d   timestamp=0x%x cycle=%d cycle_ix=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skdev->timeout_stamp, skdev->skcomp_cycle, skdev->skcomp_ix);
+	dev_dbg(&skdev->pdev->dev, "skdev=%p event='%s'\n", skdev, event);
+	dev_dbg(&skdev->pdev->dev, "  drive_state=%s(%d) driver_state=%s(%d)\n",
+		skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
+		skd_skdev_state_to_str(skdev->state), skdev->state);
+	dev_dbg(&skdev->pdev->dev, "  busy=%d limit=%d dev=%d lowat=%d\n",
+		skdev->in_flight, skdev->cur_max_queue_depth,
+		skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
+	dev_dbg(&skdev->pdev->dev, "  timestamp=0x%x cycle=%d cycle_ix=%d\n",
+		skdev->timeout_stamp, skdev->skcomp_cycle, skdev->skcomp_ix);
 }
 
 static void skd_log_skmsg(struct skd_device *skdev,
 			  struct skd_fitmsg_context *skmsg, const char *event)
 {
-	pr_debug("%s:%s:%d (%s) skmsg=%p event='%s'\n",
-		 skdev->name, __func__, __LINE__, skdev->name, skmsg, event);
-	pr_debug("%s:%s:%d   state=%s(%d) id=0x%04x length=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skd_skmsg_state_to_str(skmsg->state), skmsg->state,
-		 skmsg->id, skmsg->length);
+	dev_dbg(&skdev->pdev->dev, "skmsg=%p event='%s'\n", skmsg, event);
+	dev_dbg(&skdev->pdev->dev, "  state=%s(%d) id=0x%04x length=%d\n",
+		skd_skmsg_state_to_str(skmsg->state), skmsg->state, skmsg->id,
+		skmsg->length);
 }
 
 static void skd_log_skreq(struct skd_device *skdev,
 			  struct skd_request_context *skreq, const char *event)
 {
-	pr_debug("%s:%s:%d (%s) skreq=%p event='%s'\n",
-		 skdev->name, __func__, __LINE__, skdev->name, skreq, event);
-	pr_debug("%s:%s:%d   state=%s(%d) id=0x%04x fitmsg=0x%04x\n",
-		 skdev->name, __func__, __LINE__,
-		 skd_skreq_state_to_str(skreq->state), skreq->state,
-		 skreq->id, skreq->fitmsg_id);
-	pr_debug("%s:%s:%d   timo=0x%x sg_dir=%d n_sg=%d\n",
-		 skdev->name, __func__, __LINE__,
-		 skreq->timeout_stamp, skreq->sg_data_dir, skreq->n_sg);
+	dev_dbg(&skdev->pdev->dev, "skreq=%p event='%s'\n", skreq, event);
+	dev_dbg(&skdev->pdev->dev, "  state=%s(%d) id=0x%04x fitmsg=0x%04x\n",
+		skd_skreq_state_to_str(skreq->state), skreq->state, skreq->id,
+		skreq->fitmsg_id);
+	dev_dbg(&skdev->pdev->dev, "  timo=0x%x sg_dir=%d n_sg=%d\n",
+		skreq->timeout_stamp, skreq->sg_data_dir, skreq->n_sg);
 
 	if (skreq->req != NULL) {
 		struct request *req = skreq->req;
 		u32 lba = (u32)blk_rq_pos(req);
 		u32 count = blk_rq_sectors(req);
 
-		pr_debug("%s:%s:%d "
-			 "req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n",
-			 skdev->name, __func__, __LINE__,
-			 req, lba, lba, count, count,
-			 (int)rq_data_dir(req));
+		dev_dbg(&skdev->pdev->dev,
+			"req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req,
+			lba, lba, count, count, (int)rq_data_dir(req));
 	} else
-		pr_debug("%s:%s:%d req=NULL\n",
-			 skdev->name, __func__, __LINE__);
+		dev_dbg(&skdev->pdev->dev, "req=NULL\n");
 }
 
 /*
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 16/55] skd: Fix endianness annotations
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (14 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 15/55] skd: Switch from the pr_*() to the dev_*() logging functions Bart Van Assche
@ 2017-08-17 20:12 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 17/55] skd: Document locking assumptions Bart Van Assche
                   ` (40 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Ensure that sparse does not report any warnings when building the
skd driver with sparse verification enabled (C=1 or C=2).
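
As an illustration only (not part of this patch; the struct and helper
below are made up), the annotation pattern that sparse checks looks
roughly like this:

	struct example_hdr {
		__be32	len;	/* stored big-endian on the wire */
	};

	static void example_set_len(struct example_hdr *hdr, u32 len)
	{
		hdr->len = cpu_to_be32(len);	/* annotated conversion */
		/* "hdr->len = len;" would make sparse warn about assigning
		 * a CPU-endian value to a __be32 field.
		 */
	}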

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c  | 14 ++++++--------
 drivers/block/skd_s1120.h | 18 +++++++++---------
 2 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 5174303d7db7..5a69e3288ab7 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -512,7 +512,7 @@ static void skd_request_fn(struct request_queue *q)
 	u32 lba;
 	u32 count;
 	int data_dir;
-	u64 be_dmaa;
+	__be64 be_dmaa;
 	u64 cmdctxt;
 	u32 timo_slot;
 	void *cmd_ptr;
@@ -645,7 +645,7 @@ static void skd_request_fn(struct request_queue *q)
 		cmd_ptr = &skmsg->msg_buf[skmsg->length];
 		memset(cmd_ptr, 0, 32);
 
-		be_dmaa = cpu_to_be64((u64)skreq->sksg_dma_address);
+		be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
 		cmdctxt = skreq->id + SKD_ID_INCR;
 
 		scsi_req = cmd_ptr;
@@ -2402,9 +2402,7 @@ static void skd_do_inq_page_00(struct skd_device *skdev,
 
 		/* SCSI byte order increment of num_returned_bytes by 1 */
 		skcomp->num_returned_bytes =
-			be32_to_cpu(skcomp->num_returned_bytes) + 1;
-		skcomp->num_returned_bytes =
-			be32_to_cpu(skcomp->num_returned_bytes);
+			cpu_to_be32(be32_to_cpu(skcomp->num_returned_bytes) + 1);
 	}
 
 	/* update page length field to reflect the driver's page too */
@@ -2502,7 +2500,7 @@ static void skd_do_inq_page_da(struct skd_device *skdev,
 	memcpy(buf, &inq, min_t(unsigned, max_bytes, sizeof(inq)));
 
 	skcomp->num_returned_bytes =
-		be32_to_cpu(min_t(uint16_t, max_bytes, sizeof(inq)));
+		cpu_to_be32(min_t(uint16_t, max_bytes, sizeof(inq)));
 }
 
 static void skd_do_driver_inq(struct skd_device *skdev,
@@ -4674,7 +4672,7 @@ static void skd_pci_remove(struct pci_dev *pdev)
 
 	for (i = 0; i < SKD_MAX_BARS; i++)
 		if (skdev->mem_map[i])
-			iounmap((u32 *)skdev->mem_map[i]);
+			iounmap(skdev->mem_map[i]);
 
 	if (skdev->pcie_error_reporting_is_enabled)
 		pci_disable_pcie_error_reporting(pdev);
@@ -4705,7 +4703,7 @@ static int skd_pci_suspend(struct pci_dev *pdev, pm_message_t state)
 
 	for (i = 0; i < SKD_MAX_BARS; i++)
 		if (skdev->mem_map[i])
-			iounmap((u32 *)skdev->mem_map[i]);
+			iounmap(skdev->mem_map[i]);
 
 	if (skdev->pcie_error_reporting_is_enabled)
 		pci_disable_pcie_error_reporting(pdev);
diff --git a/drivers/block/skd_s1120.h b/drivers/block/skd_s1120.h
index 82ce34454dbf..f69d3d97744d 100644
--- a/drivers/block/skd_s1120.h
+++ b/drivers/block/skd_s1120.h
@@ -248,7 +248,7 @@ struct fit_msg_hdr {
  *  20-23 of the FIT_MTD_FITFW_INIT response.
  */
 struct fit_completion_entry_v1 {
-	uint32_t	num_returned_bytes;
+	__be32		num_returned_bytes;
 	uint16_t	tag;
 	uint8_t		status;  /* SCSI status */
 	uint8_t		cycle;
@@ -290,11 +290,11 @@ struct fit_comp_error_info {
  * Version one has the last 32 bits sg_list_len_bytes;
  */
 struct skd_command_header {
-	uint64_t	sg_list_dma_address;
+	__be64		sg_list_dma_address;
 	uint16_t	tag;
 	uint8_t		attribute;
 	uint8_t		add_cdb_len;     /* In 32 bit words */
-	uint32_t	sg_list_len_bytes;
+	__be32		sg_list_len_bytes;
 };
 
 struct skd_scsi_request {
@@ -307,16 +307,16 @@ struct driver_inquiry_data {
 	uint8_t		peripheral_device_type:5;
 	uint8_t		qualifier:3;
 	uint8_t		page_code;
-	uint16_t	page_length;
-	uint16_t	pcie_bus_number;
+	__be16		page_length;
+	__be16		pcie_bus_number;
 	uint8_t		pcie_device_number;
 	uint8_t		pcie_function_number;
 	uint8_t		pcie_link_speed;
 	uint8_t		pcie_link_lanes;
-	uint16_t	pcie_vendor_id;
-	uint16_t	pcie_device_id;
-	uint16_t	pcie_subsystem_vendor_id;
-	uint16_t	pcie_subsystem_device_id;
+	__be16		pcie_vendor_id;
+	__be16		pcie_device_id;
+	__be16		pcie_subsystem_vendor_id;
+	__be16		pcie_subsystem_device_id;
 	uint8_t		reserved1[2];
 	uint8_t		reserved2[3];
 	uint8_t		driver_version_length;
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 17/55] skd: Document locking assumptions
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (15 preceding siblings ...)
  2017-08-17 20:12 ` [PATCH 16/55] skd: Fix endianness annotations Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 18/55] skd: Introduce the symbolic constant SKD_MAX_REQ_PER_MSG Bart Van Assche
                   ` (39 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 5a69e3288ab7..5c69e9210a62 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -1894,6 +1894,8 @@ static void skd_complete_internal(struct skd_device *skdev,
 	struct skd_scsi_request *scsi =
 		(struct skd_scsi_request *)&skspcl->msg_buf[64];
 
+	lockdep_assert_held(&skdev->lock);
+
 	SKD_ASSERT(skspcl == &skdev->internal_skspcl);
 
 	dev_dbg(&skdev->pdev->dev, "complete internal %x\n", scsi->cdb[0]);
@@ -2564,6 +2566,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 	int rc = 0;
 	int processed = 0;
 
+	lockdep_assert_held(&skdev->lock);
+
 	for (;; ) {
 		SKD_ASSERT(skdev->skcomp_ix < SKD_N_COMPLETION_ENTRY);
 
@@ -2701,6 +2705,8 @@ static void skd_complete_other(struct skd_device *skdev,
 	u32 req_slot;
 	struct skd_special_context *skspcl;
 
+	lockdep_assert_held(&skdev->lock);
+
 	req_id = skcomp->tag;
 	req_table = req_id & SKD_ID_TABLE_MASK;
 	req_slot = req_id & SKD_ID_SLOT_MASK;
@@ -2774,6 +2780,8 @@ static void skd_complete_special(struct skd_device *skdev,
 				 volatile struct fit_comp_error_info *skerr,
 				 struct skd_special_context *skspcl)
 {
+	lockdep_assert_held(&skdev->lock);
+
 	dev_dbg(&skdev->pdev->dev, " completing special request %p\n", skspcl);
 	if (skspcl->orphaned) {
 		/* Discard orphaned request */
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 18/55] skd: Introduce the symbolic constant SKD_MAX_REQ_PER_MSG
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (16 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 17/55] skd: Document locking assumptions Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 19/55] skd: Introduce SKD_SKCOMP_SIZE Bart Van Assche
                   ` (38 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 5c69e9210a62..98dc16073072 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -31,6 +31,7 @@
 #include <linux/aer.h>
 #include <linux/wait.h>
 #include <linux/uio.h>
+#include <linux/stringify.h>
 #include <scsi/scsi.h>
 #include <scsi/sg.h>
 #include <linux/io.h>
@@ -86,6 +87,7 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
 #define SKD_PAUSE_TIMEOUT       (5 * 1000)
 
 #define SKD_N_FITMSG_BYTES      (512u)
+#define SKD_MAX_REQ_PER_MSG	14
 
 #define SKD_N_SPECIAL_CONTEXT   32u
 #define SKD_N_SPECIAL_FITMSG_BYTES      (128u)
@@ -377,7 +379,7 @@ static int skd_max_req_per_msg = SKD_MAX_REQ_PER_MSG_DEFAULT;
 module_param(skd_max_req_per_msg, int, 0444);
 MODULE_PARM_DESC(skd_max_req_per_msg,
 		 "Maximum SCSI requests packed in a single message."
-		 " (1-14, default==1)");
+		 " (1-" __stringify(SKD_MAX_REQ_PER_MSG) ", default==1)");
 
 #define SKD_MAX_QUEUE_DEPTH_DEFAULT 64
 #define SKD_MAX_QUEUE_DEPTH_DEFAULT_STR "64"
@@ -5016,6 +5018,9 @@ static void skd_log_skreq(struct skd_device *skdev,
 
 static int __init skd_init(void)
 {
+	BUILD_BUG_ON(sizeof(struct fit_msg_hdr) + SKD_MAX_REQ_PER_MSG *
+		     sizeof(struct skd_scsi_request) != SKD_N_FITMSG_BYTES);
+
 	pr_info(PFX " v%s-b%s loaded\n", DRV_VERSION, DRV_BUILD_ID);
 
 	switch (skd_isr_type) {
@@ -5036,7 +5041,8 @@ static int __init skd_init(void)
 		skd_max_queue_depth = SKD_MAX_QUEUE_DEPTH_DEFAULT;
 	}
 
-	if (skd_max_req_per_msg < 1 || skd_max_req_per_msg > 14) {
+	if (skd_max_req_per_msg < 1 ||
+	    skd_max_req_per_msg > SKD_MAX_REQ_PER_MSG) {
 		pr_err(PFX "skd_max_req_per_msg %d invalid, re-set to %d\n",
 		       skd_max_req_per_msg, SKD_MAX_REQ_PER_MSG_DEFAULT);
 		skd_max_req_per_msg = SKD_MAX_REQ_PER_MSG_DEFAULT;
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 19/55] skd: Introduce SKD_SKCOMP_SIZE
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (17 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 18/55] skd: Introduce the symbolic constant SKD_MAX_REQ_PER_MSG Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 20/55] skd: Fix size argument in skd_free_skcomp() Bart Van Assche
                   ` (37 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 98dc16073072..53090a10150f 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -103,6 +103,10 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
 
 #define SKD_N_INTERNAL_BYTES    (512u)
 
+#define SKD_SKCOMP_SIZE							\
+	((sizeof(struct fit_completion_entry_v1) +			\
+	  sizeof(struct fit_comp_error_info)) * SKD_N_COMPLETION_ENTRY)
+
 /* 5 bits of uniqifier, 0xF800 */
 #define SKD_ID_INCR             (0x400)
 #define SKD_ID_TABLE_MASK       (3u << 8u)
@@ -2834,13 +2838,7 @@ static void skd_release_special(struct skd_device *skdev,
 
 static void skd_reset_skcomp(struct skd_device *skdev)
 {
-	u32 nbytes;
-	struct fit_completion_entry_v1 *skcomp;
-
-	nbytes = sizeof(*skcomp) * SKD_N_COMPLETION_ENTRY;
-	nbytes += sizeof(struct fit_comp_error_info) * SKD_N_COMPLETION_ENTRY;
-
-	memset(skdev->skcomp_table, 0, nbytes);
+	memset(skdev->skcomp_table, 0, SKD_SKCOMP_SIZE);
 
 	skdev->skcomp_ix = 0;
 	skdev->skcomp_cycle = 1;
@@ -3851,16 +3849,12 @@ static int skd_cons_skcomp(struct skd_device *skdev)
 {
 	int rc = 0;
 	struct fit_completion_entry_v1 *skcomp;
-	u32 nbytes;
-
-	nbytes = sizeof(*skcomp) * SKD_N_COMPLETION_ENTRY;
-	nbytes += sizeof(struct fit_comp_error_info) * SKD_N_COMPLETION_ENTRY;
 
 	dev_dbg(&skdev->pdev->dev,
-		"comp pci_alloc, total bytes %d entries %d\n",
-		nbytes, SKD_N_COMPLETION_ENTRY);
+		"comp pci_alloc, total bytes %zd entries %d\n",
+		SKD_SKCOMP_SIZE, SKD_N_COMPLETION_ENTRY);
 
-	skcomp = pci_zalloc_consistent(skdev->pdev, nbytes,
+	skcomp = pci_zalloc_consistent(skdev->pdev, SKD_SKCOMP_SIZE,
 				       &skdev->cq_dma_address);
 
 	if (skcomp == NULL) {
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 20/55] skd: Fix size argument in skd_free_skcomp()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (18 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 19/55] skd: Introduce SKD_SKCOMP_SIZE Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 21/55] skd: Reorder the code in skd_process_request() Bart Van Assche
                   ` (36 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Pass the correct size to pci_free_consistent() in skd_free_skcomp().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 53090a10150f..ab344bfa91c9 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -4252,14 +4252,9 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 
 static void skd_free_skcomp(struct skd_device *skdev)
 {
-	if (skdev->skcomp_table != NULL) {
-		u32 nbytes;
-
-		nbytes = sizeof(skdev->skcomp_table[0]) *
-			 SKD_N_COMPLETION_ENTRY;
-		pci_free_consistent(skdev->pdev, nbytes,
+	if (skdev->skcomp_table)
+		pci_free_consistent(skdev->pdev, SKD_SKCOMP_SIZE,
 				    skdev->skcomp_table, skdev->cq_dma_address);
-	}
 
 	skdev->skcomp_table = NULL;
 	skdev->cq_dma_address = 0;
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 21/55] skd: Reorder the code in skd_process_request()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (19 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 20/55] skd: Fix size argument in skd_free_skcomp() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 22/55] skd: Simplify the code for deciding whether or not to send a FIT msg Bart Van Assche
                   ` (35 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Prepare the S/G-list before allocating a FIT msg such that the FIT
msg always contains at least one request after the for-loop is
finished.
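
In outline, the loop body now handles the S/G list first (simplified
from the diff below):

	if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
		/* fails before any FIT msg slot has been consumed */
		skd_end_request(skdev, skreq, BLK_STS_RESOURCE);
		continue;
	}
	/* only then start or continue a FIT msg and encode the request */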

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 42 +++++++++---------------------------------
 1 file changed, 9 insertions(+), 33 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index ab344bfa91c9..cbebaf4b0878 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -612,6 +612,15 @@ static void skd_request_fn(struct request_queue *q)
 		skreq->req = req;
 		skreq->fitmsg_id = 0;
 
+		skreq->sg_data_dir = data_dir == READ ?
+			SKD_DATA_DIR_CARD_TO_HOST : SKD_DATA_DIR_HOST_TO_CARD;
+
+		if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
+			dev_dbg(&skdev->pdev->dev, "error Out\n");
+			skd_end_request(skdev, skreq, BLK_STS_RESOURCE);
+			continue;
+		}
+
 		/* Either a FIT msg is in progress or we have to start one. */
 		if (skmsg == NULL) {
 			/* Are there any FIT msg buffers available? */
@@ -639,15 +648,6 @@ static void skd_request_fn(struct request_queue *q)
 
 		skreq->fitmsg_id = skmsg->id;
 
-		/*
-		 * Note that a FIT msg may have just been started
-		 * but contains no SoFIT requests yet.
-		 */
-
-		/*
-		 * Transcode the request, checking as we go. The outcome of
-		 * the transcoding is represented by the error variable.
-		 */
 		cmd_ptr = &skmsg->msg_buf[skmsg->length];
 		memset(cmd_ptr, 0, 32);
 
@@ -658,11 +658,6 @@ static void skd_request_fn(struct request_queue *q)
 		scsi_req->hdr.tag = cmdctxt;
 		scsi_req->hdr.sg_list_dma_address = be_dmaa;
 
-		if (data_dir == READ)
-			skreq->sg_data_dir = SKD_DATA_DIR_CARD_TO_HOST;
-		else
-			skreq->sg_data_dir = SKD_DATA_DIR_HOST_TO_CARD;
-
 		if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
 			skd_prep_zerosize_flush_cdb(scsi_req, skreq);
 			SKD_ASSERT(skreq->flush_cmd == 1);
@@ -673,25 +668,6 @@ static void skd_request_fn(struct request_queue *q)
 		if (fua)
 			scsi_req->cdb[1] |= SKD_FUA_NV;
 
-		if (!req->bio)
-			goto skip_sg;
-
-		if (!skd_preop_sg_list(skdev, skreq)) {
-			/*
-			 * Complete the native request with error.
-			 * Note that the request context is still at the
-			 * head of the free list, and that the SoFIT request
-			 * was encoded into the FIT msg buffer but the FIT
-			 * msg length has not been updated. In short, the
-			 * only resource that has been allocated but might
-			 * not be used is that the FIT msg could be empty.
-			 */
-			dev_dbg(&skdev->pdev->dev, "error Out\n");
-			skd_end_request(skdev, skreq, BLK_STS_RESOURCE);
-			continue;
-		}
-
-skip_sg:
 		scsi_req->hdr.sg_list_len_bytes =
 			cpu_to_be32(skreq->sg_byte_count);
 
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 22/55] skd: Simplify the code for deciding whether or not to send a FIT msg
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (20 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 21/55] skd: Reorder the code in skd_process_request() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 23/55] skd: Simplify the code for allocating DMA message buffers Bart Van Assche
                   ` (34 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The previous patch guarantees that the FIT msg contains at least one
request once the for-loop has finished. Use this guarantee to simplify
the code.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 29 +++++------------------------
 1 file changed, 5 insertions(+), 24 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index cbebaf4b0878..3fc6ec9477c7 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -693,36 +693,17 @@ static void skd_request_fn(struct request_queue *q)
 		/*
 		 * If the FIT msg buffer is full send it.
 		 */
-		if (skmsg->length >= SKD_N_FITMSG_BYTES ||
-		    fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+		if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
 			skd_send_fitmsg(skdev, skmsg);
 			skmsg = NULL;
 			fmh = NULL;
 		}
 	}
 
-	/*
-	 * Is a FIT msg in progress? If it is empty put the buffer back
-	 * on the free list. If it is non-empty send what we got.
-	 * This minimizes latency when there are fewer requests than
-	 * what fits in a FIT msg.
-	 */
-	if (skmsg != NULL) {
-		/* Bigger than just a FIT msg header? */
-		if (skmsg->length > sizeof(struct fit_msg_hdr)) {
-			dev_dbg(&skdev->pdev->dev, "sending msg=%p, len %d\n",
-				skmsg, skmsg->length);
-			skd_send_fitmsg(skdev, skmsg);
-		} else {
-			/*
-			 * The FIT msg is empty. It means we got started
-			 * on the msg, but the requests were rejected.
-			 */
-			skmsg->state = SKD_MSG_STATE_IDLE;
-			skmsg->id += SKD_ID_INCR;
-			skmsg->next = skdev->skmsg_free_list;
-			skdev->skmsg_free_list = skmsg;
-		}
+	/* If the FIT msg buffer is not empty send what we got. */
+	if (skmsg) {
+		WARN_ON_ONCE(!fmh->num_protocol_cmds_coalesced);
+		skd_send_fitmsg(skdev, skmsg);
 		skmsg = NULL;
 		fmh = NULL;
 	}
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 23/55] skd: Simplify the code for allocating DMA message buffers
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (21 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 22/55] skd: Simplify the code for deciding whether or not to send a FIT msg Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 24/55] skd: Use a structure instead of hardcoding structure offsets Bart Van Assche
                   ` (33 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

dma_alloc_coherent() guarantees alignment on a page boundary, so no
explicit code is needed to align the message buffers on a 64-byte
boundary.
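
A minimal sketch of that guarantee (illustration only, not part of the
patch; pdev stands for the driver's PCI device pointer):

	void *buf;
	dma_addr_t dma_handle;

	buf = dma_alloc_coherent(&pdev->dev, SKD_N_FITMSG_BYTES,
				 &dma_handle, GFP_KERNEL);
	/* Page-aligned, hence also 64-byte aligned. */
	if (buf)
		WARN_ON_ONCE(((uintptr_t)buf | dma_handle) & 63);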

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c  | 19 ++++++-------------
 drivers/block/skd_s1120.h |  2 +-
 2 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 3fc6ec9477c7..37b900c97b87 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -190,7 +190,6 @@ struct skd_fitmsg_context {
 	u16 outstanding;
 
 	u32 length;
-	u32 offset;
 
 	u8 *msg_buf;
 	dma_addr_t mb_dma_address;
@@ -2016,8 +2015,7 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 
 	dev_dbg(&skdev->pdev->dev, "dma address 0x%llx, busy=%d\n",
 		skmsg->mb_dma_address, skdev->in_flight);
-	dev_dbg(&skdev->pdev->dev, "msg_buf 0x%p, offset %x\n", skmsg->msg_buf,
-		skmsg->offset);
+	dev_dbg(&skdev->pdev->dev, "msg_buf %p\n", skmsg->msg_buf);
 
 	qcmd = skmsg->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL;
@@ -3854,7 +3852,7 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 
 		skmsg->state = SKD_MSG_STATE_IDLE;
 		skmsg->msg_buf = pci_alloc_consistent(skdev->pdev,
-						      SKD_N_FITMSG_BYTES + 64,
+						      SKD_N_FITMSG_BYTES,
 						      &skmsg->mb_dma_address);
 
 		if (skmsg->msg_buf == NULL) {
@@ -3862,13 +3860,10 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 			goto err_out;
 		}
 
-		skmsg->offset = (u32)((u64)skmsg->msg_buf &
-				      (~FIT_QCMD_BASE_ADDRESS_MASK));
-		skmsg->msg_buf += ~FIT_QCMD_BASE_ADDRESS_MASK;
-		skmsg->msg_buf = (u8 *)((u64)skmsg->msg_buf &
-				       FIT_QCMD_BASE_ADDRESS_MASK);
-		skmsg->mb_dma_address += ~FIT_QCMD_BASE_ADDRESS_MASK;
-		skmsg->mb_dma_address &= FIT_QCMD_BASE_ADDRESS_MASK;
+		WARN(((uintptr_t)skmsg->msg_buf | skmsg->mb_dma_address) &
+		     (FIT_QCMD_ALIGN - 1),
+		     "not aligned: msg_buf %p mb_dma_address %#llx\n",
+		     skmsg->msg_buf, skmsg->mb_dma_address);
 		memset(skmsg->msg_buf, 0, SKD_N_FITMSG_BYTES);
 
 		skmsg->next = &skmsg[1];
@@ -4230,8 +4225,6 @@ static void skd_free_skmsg(struct skd_device *skdev)
 		skmsg = &skdev->skmsg_table[i];
 
 		if (skmsg->msg_buf != NULL) {
-			skmsg->msg_buf += skmsg->offset;
-			skmsg->mb_dma_address += skmsg->offset;
 			pci_free_consistent(skdev->pdev, SKD_N_FITMSG_BYTES,
 					    skmsg->msg_buf,
 					    skmsg->mb_dma_address);
diff --git a/drivers/block/skd_s1120.h b/drivers/block/skd_s1120.h
index f69d3d97744d..8044705cbbf9 100644
--- a/drivers/block/skd_s1120.h
+++ b/drivers/block/skd_s1120.h
@@ -28,7 +28,7 @@
 #define  FIT_QCMD_MSGSIZE_128		(0x1 << 4)
 #define  FIT_QCMD_MSGSIZE_256		(0x2 << 4)
 #define  FIT_QCMD_MSGSIZE_512		(0x3 << 4)
-#define  FIT_QCMD_BASE_ADDRESS_MASK	(0xFFFFFFFFFFFFFFC0ull)
+#define  FIT_QCMD_ALIGN			L1_CACHE_BYTES
 
 /*
  * Control, 32-bit r/w
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 24/55] skd: Use a structure instead of hardcoding structure offsets
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (22 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 23/55] skd: Simplify the code for allocating DMA message buffers Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 25/55] skd: Check structure sizes at build time Bart Van Assche
                   ` (32 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This change makes the source code easier to read.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 37b900c97b87..6ba6103f53dd 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -181,6 +181,11 @@ enum skd_check_status_action {
 	SKD_CHECK_STATUS_BUSY_IMMINENT,
 };
 
+struct skd_msg_buf {
+	struct fit_msg_hdr	fmh;
+	struct skd_scsi_request	scsi[SKD_MAX_REQ_PER_MSG];
+};
+
 struct skd_fitmsg_context {
 	enum skd_fit_msg_state state;
 
@@ -191,7 +196,7 @@ struct skd_fitmsg_context {
 
 	u32 length;
 
-	u8 *msg_buf;
+	struct skd_msg_buf *msg_buf;
 	dma_addr_t mb_dma_address;
 };
 
@@ -231,7 +236,7 @@ struct skd_special_context {
 	void *data_buf;
 	dma_addr_t db_dma_address;
 
-	u8 *msg_buf;
+	struct skd_msg_buf *msg_buf;
 	dma_addr_t mb_dma_address;
 };
 
@@ -520,7 +525,6 @@ static void skd_request_fn(struct request_queue *q)
 	__be64 be_dmaa;
 	u64 cmdctxt;
 	u32 timo_slot;
-	void *cmd_ptr;
 	int flush, fua;
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
@@ -639,7 +643,7 @@ static void skd_request_fn(struct request_queue *q)
 			skmsg->id += SKD_ID_INCR;
 
 			/* Initialize the FIT msg header */
-			fmh = (struct fit_msg_hdr *)skmsg->msg_buf;
+			fmh = &skmsg->msg_buf->fmh;
 			memset(fmh, 0, sizeof(*fmh));
 			fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
 			skmsg->length = sizeof(*fmh);
@@ -647,13 +651,13 @@ static void skd_request_fn(struct request_queue *q)
 
 		skreq->fitmsg_id = skmsg->id;
 
-		cmd_ptr = &skmsg->msg_buf[skmsg->length];
-		memset(cmd_ptr, 0, 32);
+		scsi_req =
+			&skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
+		memset(scsi_req, 0, sizeof(*scsi_req));
 
 		be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
 		cmdctxt = skreq->id + SKD_ID_INCR;
 
-		scsi_req = cmd_ptr;
 		scsi_req->hdr.tag = cmdctxt;
 		scsi_req->hdr.sg_list_dma_address = be_dmaa;
 
@@ -1549,8 +1553,8 @@ static int skd_sg_io_send_fitmsg(struct skd_device *skdev,
 				 struct skd_sg_io *sksgio)
 {
 	struct skd_special_context *skspcl = sksgio->skspcl;
-	struct fit_msg_hdr *fmh = (struct fit_msg_hdr *)skspcl->msg_buf;
-	struct skd_scsi_request *scsi_req = (struct skd_scsi_request *)&fmh[1];
+	struct fit_msg_hdr *fmh = &skspcl->msg_buf->fmh;
+	struct skd_scsi_request *scsi_req = &skspcl->msg_buf->scsi[0];
 
 	memset(skspcl->msg_buf, 0, SKD_N_SPECIAL_FITMSG_BYTES);
 
@@ -1709,11 +1713,11 @@ static int skd_format_internal_skspcl(struct skd_device *skdev)
 	uint64_t dma_address;
 	struct skd_scsi_request *scsi;
 
-	fmh = (struct fit_msg_hdr *)&skspcl->msg_buf[0];
+	fmh = &skspcl->msg_buf->fmh;
 	fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
 	fmh->num_protocol_cmds_coalesced = 1;
 
-	scsi = (struct skd_scsi_request *)&skspcl->msg_buf[64];
+	scsi = &skspcl->msg_buf->scsi[0];
 	memset(scsi, 0, sizeof(*scsi));
 	dma_address = skspcl->req.sksg_dma_address;
 	scsi->hdr.sg_list_dma_address = cpu_to_be64(dma_address);
@@ -1748,7 +1752,7 @@ static void skd_send_internal_skspcl(struct skd_device *skdev,
 	skspcl->req.state = SKD_REQ_STATE_BUSY;
 	skspcl->req.id += SKD_ID_INCR;
 
-	scsi = (struct skd_scsi_request *)&skspcl->msg_buf[64];
+	scsi = &skspcl->msg_buf->scsi[0];
 	scsi->hdr.tag = skspcl->req.id;
 
 	memset(scsi->cdb, 0, sizeof(scsi->cdb));
@@ -1853,8 +1857,7 @@ static void skd_complete_internal(struct skd_device *skdev,
 	u8 *buf = skspcl->data_buf;
 	u8 status;
 	int i;
-	struct skd_scsi_request *scsi =
-		(struct skd_scsi_request *)&skspcl->msg_buf[64];
+	struct skd_scsi_request *scsi = &skspcl->msg_buf->scsi[0];
 
 	lockdep_assert_held(&skdev->lock);
 
@@ -2020,7 +2023,7 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 	qcmd = skmsg->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL;
 
-	fmh = (struct fit_msg_hdr *)skmsg->msg_buf;
+	fmh = &skmsg->msg_buf->fmh;
 	skmsg->outstanding = fmh->num_protocol_cmds_coalesced;
 
 	if (unlikely(skdev->dbg_level > 1)) {
@@ -2501,8 +2504,7 @@ static void skd_process_scsi_inq(struct skd_device *skdev,
 				 struct skd_special_context *skspcl)
 {
 	uint8_t *buf;
-	struct fit_msg_hdr *fmh = (struct fit_msg_hdr *)skspcl->msg_buf;
-	struct skd_scsi_request *scsi_req = (struct skd_scsi_request *)&fmh[1];
+	struct skd_scsi_request *scsi_req = &skspcl->msg_buf->scsi[0];
 
 	dma_sync_sg_for_cpu(skdev->class_dev, skspcl->req.sg, skspcl->req.n_sg,
 			    skspcl->req.sg_data_dir);
@@ -4957,8 +4959,9 @@ static void skd_log_skreq(struct skd_device *skdev,
 
 static int __init skd_init(void)
 {
-	BUILD_BUG_ON(sizeof(struct fit_msg_hdr) + SKD_MAX_REQ_PER_MSG *
-		     sizeof(struct skd_scsi_request) != SKD_N_FITMSG_BYTES);
+	BUILD_BUG_ON(offsetof(struct skd_msg_buf, fmh) != 0);
+	BUILD_BUG_ON(offsetof(struct skd_msg_buf, scsi) != 64);
+	BUILD_BUG_ON(sizeof(struct skd_msg_buf) != SKD_N_FITMSG_BYTES);
 
 	pr_info(PFX " v%s-b%s loaded\n", DRV_VERSION, DRV_BUILD_ID);
 
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 25/55] skd: Check structure sizes at build time
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (23 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 24/55] skd: Use a structure instead of hardcoding structure offsets Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 26/55] skd: Use __packed only when needed Bart Van Assche
                   ` (31 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch will help to verify the changes made by the next patch.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 6ba6103f53dd..e2d205b58fe2 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -4959,6 +4959,11 @@ static void skd_log_skreq(struct skd_device *skdev,
 
 static int __init skd_init(void)
 {
+	BUILD_BUG_ON(sizeof(struct fit_completion_entry_v1) != 8);
+	BUILD_BUG_ON(sizeof(struct fit_comp_error_info) != 32);
+	BUILD_BUG_ON(sizeof(struct skd_command_header) != 16);
+	BUILD_BUG_ON(sizeof(struct skd_scsi_request) != 32);
+	BUILD_BUG_ON(sizeof(struct driver_inquiry_data) != 44);
 	BUILD_BUG_ON(offsetof(struct skd_msg_buf, fmh) != 0);
 	BUILD_BUG_ON(offsetof(struct skd_msg_buf, scsi) != 64);
 	BUILD_BUG_ON(sizeof(struct skd_msg_buf) != SKD_N_FITMSG_BYTES);
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 26/55] skd: Use __packed only when needed
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (24 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 25/55] skd: Check structure sizes at build time Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 27/55] skd: Make the skd_isr() code more brief Bart Van Assche
                   ` (30 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since unnecessary use of __packed slows down access to data structures,
apply __packed only to the members that actually need it.
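
A small sketch of the difference (illustrative only; the field layout
is made up):

	/* Whole-struct packing: every member may be treated as
	 * potentially unaligned.
	 */
	struct hw_desc_a {
		u16	tag;
		u64	addr;		/* at offset 2, as the hardware wants */
	} __packed;

	/* Per-member packing: only 'addr' loses its natural alignment;
	 * 'tag' keeps normal, aligned access.
	 */
	struct hw_desc_b {
		u16	tag;
		u64	addr __packed;	/* still at offset 2 */
	};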

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_s1120.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/block/skd_s1120.h b/drivers/block/skd_s1120.h
index 8044705cbbf9..de35f47e953c 100644
--- a/drivers/block/skd_s1120.h
+++ b/drivers/block/skd_s1120.h
@@ -10,8 +10,6 @@
 #ifndef SKD_S1120_H
 #define SKD_S1120_H
 
-#pragma pack(push, s1120_h, 1)
-
 /*
  * Q-channel, 64-bit r/w
  */
@@ -276,7 +274,7 @@ struct fit_comp_error_info {
 	uint16_t	sks_low; /* 10: Sense Key Specific (LSW) */
 	uint16_t	reserved3; /* 12: Part of additional sense bytes (unused) */
 	uint16_t	uec; /* 14: Additional Sense Bytes */
-	uint64_t	per; /* 16: Additional Sense Bytes */
+	uint64_t	per __packed; /* 16: Additional Sense Bytes */
 	uint8_t		reserved4[2]; /* 1E: Additional Sense Bytes (unused) */
 };
 
@@ -323,6 +321,4 @@ struct driver_inquiry_data {
 	uint8_t		driver_version[0x14];
 };
 
-#pragma pack(pop, s1120_h)
-
 #endif /* SKD_S1120_H */
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 27/55] skd: Make the skd_isr() code more brief
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (25 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 26/55] skd: Use __packed only when needed Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 28/55] skd: Use ARRAY_SIZE() where appropriate Bart Van Assche
                   ` (29 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index e2d205b58fe2..7d5048d95037 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2830,14 +2830,13 @@ static void skd_isr_msg_from_dev(struct skd_device *skdev);
 static irqreturn_t
 skd_isr(int irq, void *ptr)
 {
-	struct skd_device *skdev;
+	struct skd_device *skdev = ptr;
 	u32 intstat;
 	u32 ack;
 	int rc = 0;
 	int deferred = 0;
 	int flush_enqueued = 0;
 
-	skdev = (struct skd_device *)ptr;
 	spin_lock(&skdev->lock);
 
 	for (;; ) {
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 28/55] skd: Use ARRAY_SIZE() where appropriate
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (26 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 27/55] skd: Make the skd_isr() code more brief Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 29/55] skd: Simplify the code for handling data direction Bart Van Assche
                   ` (28 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Use ARRAY_SIZE() instead of open-coding it. This patch does not
change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 7d5048d95037..96d7b43cfcf2 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2160,7 +2160,7 @@ static enum skd_check_status_action
 skd_check_status(struct skd_device *skdev,
 		 u8 cmp_status, volatile struct fit_comp_error_info *skerr)
 {
-	int i, n;
+	int i;
 
 	dev_err(&skdev->pdev->dev, "key/asc/ascq/fruc %02x/%02x/%02x/%02x\n",
 		skerr->key, skerr->code, skerr->qual, skerr->fruc);
@@ -2171,8 +2171,7 @@ skd_check_status(struct skd_device *skdev,
 		skerr->fruc);
 
 	/* Does the info match an entry in the good category? */
-	n = sizeof(skd_chkstat_table) / sizeof(skd_chkstat_table[0]);
-	for (i = 0; i < n; i++) {
+	for (i = 0; i < ARRAY_SIZE(skd_chkstat_table); i++) {
 		struct sns_info *sns = &skd_chkstat_table[i];
 
 		if (sns->mask & 0x10)
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 29/55] skd: Simplify the code for handling data direction
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (27 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 28/55] skd: Use ARRAY_SIZE() where appropriate Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 30/55] skd: Remove superfluous initializations from skd_isr_completion_posted() Bart Van Assche
                   ` (27 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Use DMA_FROM_DEVICE and DMA_TO_DEVICE directly instead of
introducing driver-private constants with the same numerical
value.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 96d7b43cfcf2..e54089315a7a 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -212,7 +212,7 @@ struct skd_request_context {
 	u8 flush_cmd;
 
 	u32 timeout_stamp;
-	u8 sg_data_dir;
+	enum dma_data_direction data_dir;
 	struct scatterlist *sg;
 	u32 n_sg;
 	u32 sg_byte_count;
@@ -225,8 +225,6 @@ struct skd_request_context {
 	struct fit_comp_error_info err_info;
 
 };
-#define SKD_DATA_DIR_HOST_TO_CARD       1
-#define SKD_DATA_DIR_CARD_TO_HOST       2
 
 struct skd_special_context {
 	struct skd_request_context req;
@@ -615,8 +613,8 @@ static void skd_request_fn(struct request_queue *q)
 		skreq->req = req;
 		skreq->fitmsg_id = 0;
 
-		skreq->sg_data_dir = data_dir == READ ?
-			SKD_DATA_DIR_CARD_TO_HOST : SKD_DATA_DIR_HOST_TO_CARD;
+		skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE :
+			DMA_TO_DEVICE;
 
 		if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
 			dev_dbg(&skdev->pdev->dev, "error Out\n");
@@ -742,16 +740,14 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 			     struct skd_request_context *skreq)
 {
 	struct request *req = skreq->req;
-	int writing = skreq->sg_data_dir == SKD_DATA_DIR_HOST_TO_CARD;
-	int pci_dir = writing ? PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE;
 	struct scatterlist *sg = &skreq->sg[0];
 	int n_sg;
 	int i;
 
 	skreq->sg_byte_count = 0;
 
-	/* SKD_ASSERT(skreq->sg_data_dir == SKD_DATA_DIR_HOST_TO_CARD ||
-		   skreq->sg_data_dir == SKD_DATA_DIR_CARD_TO_HOST); */
+	WARN_ON_ONCE(skreq->data_dir != DMA_TO_DEVICE &&
+		     skreq->data_dir != DMA_FROM_DEVICE);
 
 	n_sg = blk_rq_map_sg(skdev->queue, req, sg);
 	if (n_sg <= 0)
@@ -761,7 +757,7 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 	 * Map scatterlist to PCI bus addresses.
 	 * Note PCI might change the number of entries.
 	 */
-	n_sg = pci_map_sg(skdev->pdev, sg, n_sg, pci_dir);
+	n_sg = pci_map_sg(skdev->pdev, sg, n_sg, skreq->data_dir);
 	if (n_sg <= 0)
 		return false;
 
@@ -804,9 +800,6 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 static void skd_postop_sg_list(struct skd_device *skdev,
 			       struct skd_request_context *skreq)
 {
-	int writing = skreq->sg_data_dir == SKD_DATA_DIR_HOST_TO_CARD;
-	int pci_dir = writing ? PCI_DMA_TODEVICE : PCI_DMA_FROMDEVICE;
-
 	/*
 	 * restore the next ptr for next IO request so we
 	 * don't have to set it every time.
@@ -814,7 +807,7 @@ static void skd_postop_sg_list(struct skd_device *skdev,
 	skreq->sksg_list[skreq->n_sg - 1].next_desc_ptr =
 		skreq->sksg_dma_address +
 		((skreq->n_sg) * sizeof(struct fit_sg_descriptor));
-	pci_unmap_sg(skdev->pdev, &skreq->sg[0], skreq->n_sg, pci_dir);
+	pci_unmap_sg(skdev->pdev, &skreq->sg[0], skreq->n_sg, skreq->data_dir);
 }
 
 static void skd_request_fn_not_online(struct request_queue *q)
@@ -2506,7 +2499,7 @@ static void skd_process_scsi_inq(struct skd_device *skdev,
 	struct skd_scsi_request *scsi_req = &skspcl->msg_buf->scsi[0];
 
 	dma_sync_sg_for_cpu(skdev->class_dev, skspcl->req.sg, skspcl->req.n_sg,
-			    skspcl->req.sg_data_dir);
+			    skspcl->req.data_dir);
 	buf = skd_sg_1st_page_ptr(skspcl->req.sg);
 
 	if (buf)
@@ -4935,7 +4928,7 @@ static void skd_log_skreq(struct skd_device *skdev,
 		skd_skreq_state_to_str(skreq->state), skreq->state, skreq->id,
 		skreq->fitmsg_id);
 	dev_dbg(&skdev->pdev->dev, "  timo=0x%x sg_dir=%d n_sg=%d\n",
-		skreq->timeout_stamp, skreq->sg_data_dir, skreq->n_sg);
+		skreq->timeout_stamp, skreq->data_dir, skreq->n_sg);
 
 	if (skreq->req != NULL) {
 		struct request *req = skreq->req;
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 30/55] skd: Remove superfluous initializations from skd_isr_completion_posted()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (28 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 29/55] skd: Simplify the code for handling data direction Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 31/55] skd: Drop second argument of skd_recover_requests() Bart Van Assche
                   ` (26 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The values of skcmp, cmp_cntxt, etc. are overwritten during every loop
iteration and are not used after the loop has finished. Hence
initializing these variables outside the loop is not necessary.
This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index e54089315a7a..008fa7231159 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2509,16 +2509,16 @@ static void skd_process_scsi_inq(struct skd_device *skdev,
 static int skd_isr_completion_posted(struct skd_device *skdev,
 					int limit, int *enqueued)
 {
-	volatile struct fit_completion_entry_v1 *skcmp = NULL;
+	volatile struct fit_completion_entry_v1 *skcmp;
 	volatile struct fit_comp_error_info *skerr;
 	u16 req_id;
 	u32 req_slot;
 	struct skd_request_context *skreq;
-	u16 cmp_cntxt = 0;
-	u8 cmp_status = 0;
-	u8 cmp_cycle = 0;
-	u32 cmp_bytes = 0;
-	int rc = 0;
+	u16 cmp_cntxt;
+	u8 cmp_status;
+	u8 cmp_cycle;
+	u32 cmp_bytes;
+	int rc;
 	int processed = 0;
 
 	lockdep_assert_held(&skdev->lock);
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 31/55] skd: Drop second argument of skd_recover_requests()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (29 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 30/55] skd: Remove superfluous initializations from skd_isr_completion_posted() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 32/55] skd: Use for_each_sg() Bart Van Assche
                   ` (25 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since all callers pass zero as the second argument to
skd_recover_requests(), drop that argument.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 008fa7231159..a363d5f6bcb5 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -437,7 +437,7 @@ static void skd_release_special(struct skd_device *skdev,
 				struct skd_special_context *skspcl);
 static void skd_disable_interrupts(struct skd_device *skdev);
 static void skd_isr_fwstate(struct skd_device *skdev);
-static void skd_recover_requests(struct skd_device *skdev, int requeue);
+static void skd_recover_requests(struct skd_device *skdev);
 static void skd_soft_reset(struct skd_device *skdev);
 
 const char *skd_drive_state_to_str(int state);
@@ -930,7 +930,7 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 			skdev->timer_countdown--;
 			return;
 		}
-		skd_recover_requests(skdev, 0);
+		skd_recover_requests(skdev);
 		break;
 
 	case SKD_DRVR_STATE_BUSY:
@@ -1027,13 +1027,13 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 			/* It never came out of soft reset. Try to
 			 * recover the requests and then let them
 			 * fail. This is to mitigate hung processes. */
-			skd_recover_requests(skdev, 0);
+			skd_recover_requests(skdev);
 		else {
 			dev_err(&skdev->pdev->dev, "Disable BusMaster (%x)\n",
 				skdev->drive_state);
 			pci_disable_device(skdev->pdev);
 			skd_disable_interrupts(skdev);
-			skd_recover_requests(skdev, 0);
+			skd_recover_requests(skdev);
 		}
 
 		/*start the queue so we can respond with error to requests */
@@ -2935,7 +2935,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 			break;
 		}
 		if (skdev->state == SKD_DRVR_STATE_RESTARTING)
-			skd_recover_requests(skdev, 0);
+			skd_recover_requests(skdev);
 		if (skdev->state == SKD_DRVR_STATE_WAIT_BOOT) {
 			skdev->timer_countdown = SKD_STARTING_TIMO;
 			skdev->state = SKD_DRVR_STATE_STARTING;
@@ -3009,7 +3009,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 
 	case FIT_SR_DRIVE_FAULT:
 		skd_drive_fault(skdev);
-		skd_recover_requests(skdev, 0);
+		skd_recover_requests(skdev);
 		blk_start_queue(skdev->queue);
 		break;
 
@@ -3018,7 +3018,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		dev_info(&skdev->pdev->dev, "state=0x%x sense=0x%x\n", state,
 			 sense);
 		skd_drive_disappeared(skdev);
-		skd_recover_requests(skdev, 0);
+		skd_recover_requests(skdev);
 		blk_start_queue(skdev->queue);
 		break;
 	default:
@@ -3032,7 +3032,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		skd_skdev_state_to_str(skdev->state), skdev->state);
 }
 
-static void skd_recover_requests(struct skd_device *skdev, int requeue)
+static void skd_recover_requests(struct skd_device *skdev)
 {
 	int i;
 
@@ -3049,12 +3049,7 @@ static void skd_recover_requests(struct skd_device *skdev, int requeue)
 			if (skreq->n_sg > 0)
 				skd_postop_sg_list(skdev, skreq);
 
-			if (requeue &&
-			    (unsigned long) ++skreq->req->special <
-			    SKD_MAX_RETRIES)
-				blk_requeue_request(skdev->queue, skreq->req);
-			else
-				skd_end_request(skdev, skreq, BLK_STS_IOERR);
+			skd_end_request(skdev, skreq, BLK_STS_IOERR);
 
 			skreq->req = NULL;
 
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 32/55] skd: Use for_each_sg()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (30 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 31/55] skd: Drop second argument of skd_recover_requests() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 33/55] skd: Remove a redundant init_timer() call Bart Van Assche
                   ` (24 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This change makes skd_preop_sg_list() support chained sg-lists.
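
For illustration (a sketch with a made-up helper name): indexed access
such as &sgl[i] only works while all entries live in a single array,
whereas for_each_sg() follows the sg_next() chain:

	static u32 example_sum_dma_len(struct scatterlist *sgl, int n_sg)
	{
		struct scatterlist *sg;
		u32 total = 0;
		int i;

		for_each_sg(sgl, sg, n_sg, i)
			total += sg_dma_len(sg);  /* not sg_dma_len(&sgl[i]) */

		return total;
	}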

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a363d5f6bcb5..62e06e35ddf5 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -740,7 +740,7 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 			     struct skd_request_context *skreq)
 {
 	struct request *req = skreq->req;
-	struct scatterlist *sg = &skreq->sg[0];
+	struct scatterlist *sgl = &skreq->sg[0], *sg;
 	int n_sg;
 	int i;
 
@@ -749,7 +749,7 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 	WARN_ON_ONCE(skreq->data_dir != DMA_TO_DEVICE &&
 		     skreq->data_dir != DMA_FROM_DEVICE);
 
-	n_sg = blk_rq_map_sg(skdev->queue, req, sg);
+	n_sg = blk_rq_map_sg(skdev->queue, req, sgl);
 	if (n_sg <= 0)
 		return false;
 
@@ -757,7 +757,7 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 	 * Map scatterlist to PCI bus addresses.
 	 * Note PCI might change the number of entries.
 	 */
-	n_sg = pci_map_sg(skdev->pdev, sg, n_sg, skreq->data_dir);
+	n_sg = pci_map_sg(skdev->pdev, sgl, n_sg, skreq->data_dir);
 	if (n_sg <= 0)
 		return false;
 
@@ -765,10 +765,10 @@ static bool skd_preop_sg_list(struct skd_device *skdev,
 
 	skreq->n_sg = n_sg;
 
-	for (i = 0; i < n_sg; i++) {
+	for_each_sg(sgl, sg, n_sg, i) {
 		struct fit_sg_descriptor *sgd = &skreq->sksg_list[i];
-		u32 cnt = sg_dma_len(&sg[i]);
-		uint64_t dma_addr = sg_dma_address(&sg[i]);
+		u32 cnt = sg_dma_len(sg);
+		uint64_t dma_addr = sg_dma_address(sg);
 
 		sgd->control = FIT_SGD_CONTROL_NOT_LAST;
 		sgd->byte_count = cnt;
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 33/55] skd: Remove a redundant init_timer() call
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (31 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 32/55] skd: Use for_each_sg() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 34/55] skd: Remove superfluous occurrences of the 'volatile' keyword Bart Van Assche
                   ` (23 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since setup_timer() already invokes init_timer(), calling init_timer()
just before setup_timer() is redundant. Hence remove the init_timer()
call.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 62e06e35ddf5..71158e8b8a2b 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -1057,7 +1057,6 @@ static int skd_start_timer(struct skd_device *skdev)
 {
 	int rc;
 
-	init_timer(&skdev->timer);
 	setup_timer(&skdev->timer, skd_timer_tick, (ulong)skdev);
 
 	rc = mod_timer(&skdev->timer, (jiffies + HZ));
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 34/55] skd: Remove superfluous occurrences of the 'volatile' keyword
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (32 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 33/55] skd: Remove a redundant init_timer() call Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 35/55] skd: Use kcalloc() instead of kzalloc() with multiply Bart Van Assche
                   ` (22 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since mem_map[i] is accessed through readl() / writel(), declaring
mem_map as volatile is not necessary.

Also remove the volatile qualifier from the struct fit_completion_entry_v1
and struct fit_comp_error_info pointers, since reading these structures
multiple times is safe.
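
As an illustrative fragment (not taken from the patch; REG_OFFSET is a
placeholder, not a register offset of this driver): readl() and writel()
take a (const) volatile void __iomem * argument themselves, so the MMIO
access is already treated as volatile without qualifying the stored
pointer.

    /* The accessors perform the volatile access internally. */
    u32 val = readl(skdev->mem_map[1] + REG_OFFSET);

    writel(val, skdev->mem_map[1] + REG_OFFSET);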

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 48 ++++++++++++++++++++++--------------------------
 1 file changed, 22 insertions(+), 26 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 71158e8b8a2b..0639c9f89984 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -263,7 +263,7 @@ typedef enum skd_irq_type {
 #define SKD_MAX_BARS                    2
 
 struct skd_device {
-	volatile void __iomem *mem_map[SKD_MAX_BARS];
+	void __iomem *mem_map[SKD_MAX_BARS];
 	resource_size_t mem_phys[SKD_MAX_BARS];
 	u32 mem_size[SKD_MAX_BARS];
 
@@ -1094,9 +1094,8 @@ static int skd_sg_io_put_status(struct skd_device *skdev,
 				struct skd_sg_io *sksgio);
 
 static void skd_complete_special(struct skd_device *skdev,
-				 volatile struct fit_completion_entry_v1
-				 *skcomp,
-				 volatile struct fit_comp_error_info *skerr,
+				 struct fit_completion_entry_v1 *skcomp,
+				 struct fit_comp_error_info *skerr,
 				 struct skd_special_context *skspcl);
 
 static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
@@ -1841,9 +1840,8 @@ static void skd_log_check_status(struct skd_device *skdev, u8 status, u8 key,
 }
 
 static void skd_complete_internal(struct skd_device *skdev,
-				  volatile struct fit_completion_entry_v1
-				  *skcomp,
-				  volatile struct fit_comp_error_info *skerr,
+				  struct fit_completion_entry_v1 *skcomp,
+				  struct fit_comp_error_info *skerr,
 				  struct skd_special_context *skspcl)
 {
 	u8 *buf = skspcl->data_buf;
@@ -2100,8 +2098,8 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
  */
 
 static void skd_complete_other(struct skd_device *skdev,
-			       volatile struct fit_completion_entry_v1 *skcomp,
-			       volatile struct fit_comp_error_info *skerr);
+			       struct fit_completion_entry_v1 *skcomp,
+			       struct fit_comp_error_info *skerr);
 
 struct sns_info {
 	u8 type;
@@ -2150,7 +2148,7 @@ static struct sns_info skd_chkstat_table[] = {
 
 static enum skd_check_status_action
 skd_check_status(struct skd_device *skdev,
-		 u8 cmp_status, volatile struct fit_comp_error_info *skerr)
+		 u8 cmp_status, struct fit_comp_error_info *skerr)
 {
 	int i;
 
@@ -2311,8 +2309,8 @@ static void skd_release_skreq(struct skd_device *skdev,
 #define DRIVER_INQ_EVPD_PAGE_CODE   0xDA
 
 static void skd_do_inq_page_00(struct skd_device *skdev,
-			       volatile struct fit_completion_entry_v1 *skcomp,
-			       volatile struct fit_comp_error_info *skerr,
+			       struct fit_completion_entry_v1 *skcomp,
+			       struct fit_comp_error_info *skerr,
 			       uint8_t *cdb, uint8_t *buf)
 {
 	uint16_t insert_pt, max_bytes, drive_pages, drive_bytes, new_size;
@@ -2408,8 +2406,8 @@ static void skd_get_link_info(struct pci_dev *pdev, u8 *speed, u8 *width)
 }
 
 static void skd_do_inq_page_da(struct skd_device *skdev,
-			       volatile struct fit_completion_entry_v1 *skcomp,
-			       volatile struct fit_comp_error_info *skerr,
+			       struct fit_completion_entry_v1 *skcomp,
+			       struct fit_comp_error_info *skerr,
 			       uint8_t *cdb, uint8_t *buf)
 {
 	struct pci_dev *pdev = skdev->pdev;
@@ -2461,8 +2459,8 @@ static void skd_do_inq_page_da(struct skd_device *skdev,
 }
 
 static void skd_do_driver_inq(struct skd_device *skdev,
-			      volatile struct fit_completion_entry_v1 *skcomp,
-			      volatile struct fit_comp_error_info *skerr,
+			      struct fit_completion_entry_v1 *skcomp,
+			      struct fit_comp_error_info *skerr,
 			      uint8_t *cdb, uint8_t *buf)
 {
 	if (!buf)
@@ -2489,9 +2487,8 @@ static unsigned char *skd_sg_1st_page_ptr(struct scatterlist *sg)
 }
 
 static void skd_process_scsi_inq(struct skd_device *skdev,
-				 volatile struct fit_completion_entry_v1
-				 *skcomp,
-				 volatile struct fit_comp_error_info *skerr,
+				 struct fit_completion_entry_v1 *skcomp,
+				 struct fit_comp_error_info *skerr,
 				 struct skd_special_context *skspcl)
 {
 	uint8_t *buf;
@@ -2508,8 +2505,8 @@ static void skd_process_scsi_inq(struct skd_device *skdev,
 static int skd_isr_completion_posted(struct skd_device *skdev,
 					int limit, int *enqueued)
 {
-	volatile struct fit_completion_entry_v1 *skcmp;
-	volatile struct fit_comp_error_info *skerr;
+	struct fit_completion_entry_v1 *skcmp;
+	struct fit_comp_error_info *skerr;
 	u16 req_id;
 	u32 req_slot;
 	struct skd_request_context *skreq;
@@ -2651,8 +2648,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 }
 
 static void skd_complete_other(struct skd_device *skdev,
-			       volatile struct fit_completion_entry_v1 *skcomp,
-			       volatile struct fit_comp_error_info *skerr)
+			       struct fit_completion_entry_v1 *skcomp,
+			       struct fit_comp_error_info *skerr)
 {
 	u32 req_id = 0;
 	u32 req_table;
@@ -2729,9 +2726,8 @@ static void skd_complete_other(struct skd_device *skdev,
 }
 
 static void skd_complete_special(struct skd_device *skdev,
-				 volatile struct fit_completion_entry_v1
-				 *skcomp,
-				 volatile struct fit_comp_error_info *skerr,
+				 struct fit_completion_entry_v1 *skcomp,
+				 struct fit_comp_error_info *skerr,
 				 struct skd_special_context *skspcl)
 {
 	lockdep_assert_held(&skdev->lock);
-- 
2.14.0

* [PATCH 35/55] skd: Use kcalloc() instead of kzalloc() with multiply
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (33 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 34/55] skd: Remove superfluous occurrences of the 'volatile' keyword Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 36/55] skd: Use symbolic names for SCSI opcodes Bart Van Assche
                   ` (21 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.
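
For reference, kcalloc() returns zeroed memory just like kzalloc(), but
it also checks the count * size multiplication for overflow and returns
NULL instead of a too-small buffer. A minimal sketch of the
transformation (mirroring the hunks below; table and count are generic
placeholders):

    /* Open-coded multiply, no overflow check: */
    table = kzalloc(sizeof(*table) * count, GFP_KERNEL);

    /* Equivalent zeroed allocation with overflow checking: */
    table = kcalloc(count, sizeof(*table), GFP_KERNEL);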

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 0639c9f89984..ae66171ef10a 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -3815,12 +3815,13 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 	u32 i;
 
 	dev_dbg(&skdev->pdev->dev,
-		"skmsg_table kzalloc, struct %lu, count %u total %lu\n",
+		"skmsg_table kcalloc, struct %lu, count %u total %lu\n",
 		sizeof(struct skd_fitmsg_context), skdev->num_fitmsg_context,
 		sizeof(struct skd_fitmsg_context) * skdev->num_fitmsg_context);
 
-	skdev->skmsg_table = kzalloc(sizeof(struct skd_fitmsg_context)
-				     *skdev->num_fitmsg_context, GFP_KERNEL);
+	skdev->skmsg_table = kcalloc(skdev->num_fitmsg_context,
+				     sizeof(struct skd_fitmsg_context),
+				     GFP_KERNEL);
 	if (skdev->skmsg_table == NULL) {
 		rc = -ENOMEM;
 		goto err_out;
@@ -3895,12 +3896,13 @@ static int skd_cons_skreq(struct skd_device *skdev)
 	u32 i;
 
 	dev_dbg(&skdev->pdev->dev,
-		"skreq_table kzalloc, struct %lu, count %u total %lu\n",
+		"skreq_table kcalloc, struct %lu, count %u total %lu\n",
 		sizeof(struct skd_request_context), skdev->num_req_context,
 		sizeof(struct skd_request_context) * skdev->num_req_context);
 
-	skdev->skreq_table = kzalloc(sizeof(struct skd_request_context)
-				     * skdev->num_req_context, GFP_KERNEL);
+	skdev->skreq_table = kcalloc(skdev->num_req_context,
+				     sizeof(struct skd_request_context),
+				     GFP_KERNEL);
 	if (skdev->skreq_table == NULL) {
 		rc = -ENOMEM;
 		goto err_out;
@@ -3918,8 +3920,8 @@ static int skd_cons_skreq(struct skd_device *skdev)
 		skreq->id = i + SKD_ID_RW_REQUEST;
 		skreq->state = SKD_REQ_STATE_IDLE;
 
-		skreq->sg = kzalloc(sizeof(struct scatterlist) *
-				    skdev->sgs_per_request, GFP_KERNEL);
+		skreq->sg = kcalloc(skdev->sgs_per_request,
+				    sizeof(struct scatterlist), GFP_KERNEL);
 		if (skreq->sg == NULL) {
 			rc = -ENOMEM;
 			goto err_out;
@@ -3952,12 +3954,13 @@ static int skd_cons_skspcl(struct skd_device *skdev)
 	u32 i, nbytes;
 
 	dev_dbg(&skdev->pdev->dev,
-		"skspcl_table kzalloc, struct %lu, count %u total %lu\n",
+		"skspcl_table kcalloc, struct %lu, count %u total %lu\n",
 		sizeof(struct skd_special_context), skdev->n_special,
 		sizeof(struct skd_special_context) * skdev->n_special);
 
-	skdev->skspcl_table = kzalloc(sizeof(struct skd_special_context)
-				      * skdev->n_special, GFP_KERNEL);
+	skdev->skspcl_table = kcalloc(skdev->n_special,
+				      sizeof(struct skd_special_context),
+				      GFP_KERNEL);
 	if (skdev->skspcl_table == NULL) {
 		rc = -ENOMEM;
 		goto err_out;
@@ -3983,8 +3986,9 @@ static int skd_cons_skspcl(struct skd_device *skdev)
 			goto err_out;
 		}
 
-		skspcl->req.sg = kzalloc(sizeof(struct scatterlist) *
-					 SKD_N_SG_PER_SPECIAL, GFP_KERNEL);
+		skspcl->req.sg = kcalloc(SKD_N_SG_PER_SPECIAL,
+					 sizeof(struct scatterlist),
+					 GFP_KERNEL);
 		if (skspcl->req.sg == NULL) {
 			rc = -ENOMEM;
 			goto err_out;
-- 
2.14.0

* [PATCH 36/55] skd: Use symbolic names for SCSI opcodes
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (34 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 35/55] skd: Use kcalloc() instead of kzalloc() with multiply Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 37/55] skd: Move a function definition Bart Van Assche
                   ` (20 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality.
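
For reference, the symbolic names used below come from
<scsi/scsi_proto.h>, which is pulled in through the <scsi/scsi.h>
include the driver already has; the values match the literals replaced
in the hunks (spacing approximate):

    #define TEST_UNIT_READY       0x00
    #define READ_10               0x28
    #define WRITE_10              0x2a
    #define SYNCHRONIZE_CACHE     0x35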

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index ae66171ef10a..49e7097dd409 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -473,9 +473,9 @@ skd_prep_rw_cdb(struct skd_scsi_request *scsi_req,
 		unsigned count)
 {
 	if (data_dir == READ)
-		scsi_req->cdb[0] = 0x28;
+		scsi_req->cdb[0] = READ_10;
 	else
-		scsi_req->cdb[0] = 0x2a;
+		scsi_req->cdb[0] = WRITE_10;
 
 	scsi_req->cdb[1] = 0;
 	scsi_req->cdb[2] = (lba & 0xff000000) >> 24;
@@ -494,7 +494,7 @@ skd_prep_zerosize_flush_cdb(struct skd_scsi_request *scsi_req,
 {
 	skreq->flush_cmd = 1;
 
-	scsi_req->cdb[0] = 0x35;
+	scsi_req->cdb[0] = SYNCHRONIZE_CACHE;
 	scsi_req->cdb[1] = 0;
 	scsi_req->cdb[2] = 0;
 	scsi_req->cdb[3] = 0;
@@ -1880,7 +1880,8 @@ static void skd_complete_internal(struct skd_device *skdev,
 			}
 			dev_dbg(&skdev->pdev->dev,
 				"**** TUR failed, retry skerr\n");
-			skd_send_internal_skspcl(skdev, skspcl, 0x00);
+			skd_send_internal_skspcl(skdev, skspcl,
+						 TEST_UNIT_READY);
 		}
 		break;
 
@@ -1896,7 +1897,8 @@ static void skd_complete_internal(struct skd_device *skdev,
 			}
 			dev_dbg(&skdev->pdev->dev,
 				"**** write buffer failed, retry skerr\n");
-			skd_send_internal_skspcl(skdev, skspcl, 0x00);
+			skd_send_internal_skspcl(skdev, skspcl,
+						 TEST_UNIT_READY);
 		}
 		break;
 
@@ -1929,7 +1931,8 @@ static void skd_complete_internal(struct skd_device *skdev,
 			}
 			dev_dbg(&skdev->pdev->dev,
 				"**** read buffer failed, retry skerr\n");
-			skd_send_internal_skspcl(skdev, skspcl, 0x00);
+			skd_send_internal_skspcl(skdev, skspcl,
+						 TEST_UNIT_READY);
 		}
 		break;
 
-- 
2.14.0

* [PATCH 37/55] skd: Move a function definition
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (35 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 36/55] skd: Use symbolic names for SCSI opcodes Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 38/55] skd: Rework request failing code path Bart Van Assche
                   ` (19 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality but makes the next
patch in this series easier to read.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 84 +++++++++++++++++++++++-------------------------
 1 file changed, 41 insertions(+), 43 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 49e7097dd409..ff2ea37b8fd3 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -506,7 +506,47 @@ skd_prep_zerosize_flush_cdb(struct skd_scsi_request *scsi_req,
 	scsi_req->cdb[9] = 0;
 }
 
-static void skd_request_fn_not_online(struct request_queue *q);
+static void skd_request_fn_not_online(struct request_queue *q)
+{
+	struct skd_device *skdev = q->queuedata;
+
+	SKD_ASSERT(skdev->state != SKD_DRVR_STATE_ONLINE);
+
+	skd_log_skdev(skdev, "req_not_online");
+	switch (skdev->state) {
+	case SKD_DRVR_STATE_PAUSING:
+	case SKD_DRVR_STATE_PAUSED:
+	case SKD_DRVR_STATE_STARTING:
+	case SKD_DRVR_STATE_RESTARTING:
+	case SKD_DRVR_STATE_WAIT_BOOT:
+	/* In case of starting, we haven't started the queue,
+	 * so we can't get here... but requests are
+	 * possibly hanging out waiting for us because we
+	 * reported the dev/skd0 already.  They'll wait
+	 * forever if connect doesn't complete.
+	 * What to do??? delay dev/skd0 ??
+	 */
+	case SKD_DRVR_STATE_BUSY:
+	case SKD_DRVR_STATE_BUSY_IMMINENT:
+	case SKD_DRVR_STATE_BUSY_ERASE:
+	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
+		return;
+
+	case SKD_DRVR_STATE_BUSY_SANITIZE:
+	case SKD_DRVR_STATE_STOPPING:
+	case SKD_DRVR_STATE_SYNCING:
+	case SKD_DRVR_STATE_FAULT:
+	case SKD_DRVR_STATE_DISAPPEARED:
+	default:
+		break;
+	}
+
+	/* If we get here, terminate all pending block requeusts
+	 * with EIO and any scsi pass thru with appropriate sense
+	 */
+
+	skd_fail_all_pending(skdev);
+}
 
 static void skd_request_fn(struct request_queue *q)
 {
@@ -810,48 +850,6 @@ static void skd_postop_sg_list(struct skd_device *skdev,
 	pci_unmap_sg(skdev->pdev, &skreq->sg[0], skreq->n_sg, skreq->data_dir);
 }
 
-static void skd_request_fn_not_online(struct request_queue *q)
-{
-	struct skd_device *skdev = q->queuedata;
-
-	SKD_ASSERT(skdev->state != SKD_DRVR_STATE_ONLINE);
-
-	skd_log_skdev(skdev, "req_not_online");
-	switch (skdev->state) {
-	case SKD_DRVR_STATE_PAUSING:
-	case SKD_DRVR_STATE_PAUSED:
-	case SKD_DRVR_STATE_STARTING:
-	case SKD_DRVR_STATE_RESTARTING:
-	case SKD_DRVR_STATE_WAIT_BOOT:
-	/* In case of starting, we haven't started the queue,
-	 * so we can't get here... but requests are
-	 * possibly hanging out waiting for us because we
-	 * reported the dev/skd0 already.  They'll wait
-	 * forever if connect doesn't complete.
-	 * What to do??? delay dev/skd0 ??
-	 */
-	case SKD_DRVR_STATE_BUSY:
-	case SKD_DRVR_STATE_BUSY_IMMINENT:
-	case SKD_DRVR_STATE_BUSY_ERASE:
-	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
-		return;
-
-	case SKD_DRVR_STATE_BUSY_SANITIZE:
-	case SKD_DRVR_STATE_STOPPING:
-	case SKD_DRVR_STATE_SYNCING:
-	case SKD_DRVR_STATE_FAULT:
-	case SKD_DRVR_STATE_DISAPPEARED:
-	default:
-		break;
-	}
-
-	/* If we get here, terminate all pending block requeusts
-	 * with EIO and any scsi pass thru with appropriate sense
-	 */
-
-	skd_fail_all_pending(skdev);
-}
-
 /*
  *****************************************************************************
  * TIMER
-- 
2.14.0

* [PATCH 38/55] skd: Rework request failing code path
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (36 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 37/55] skd: Move a function definition Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 39/55] skd: Convert explicit skd_request_fn() calls Bart Van Assche
                   ` (18 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Move the skd_fail_all_pending() call out of skd_request_fn_not_online()
such that this function can be reused in the blk-mq code path.
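
To illustrate the intent only (a hypothetical sketch, not code from this
series; the actual blk-mq conversion appears in later patches and the
function name and error codes it uses may differ), a .queue_rq
implementation could reuse the predicate like this:

    static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
                                        const struct blk_mq_queue_data *bd)
    {
        struct request_queue *q = hctx->queue;
        struct skd_device *skdev = q->queuedata;

        if (unlikely(skdev->state != SKD_DRVR_STATE_ONLINE))
            return skd_fail_all(q) ? BLK_STS_IOERR : BLK_STS_RESOURCE;

        /* ... normal request submission path ... */
        return BLK_STS_OK;
    }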

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index ff2ea37b8fd3..8040500ba09c 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -506,7 +506,10 @@ skd_prep_zerosize_flush_cdb(struct skd_scsi_request *scsi_req,
 	scsi_req->cdb[9] = 0;
 }
 
-static void skd_request_fn_not_online(struct request_queue *q)
+/*
+ * Return true if and only if all pending requests should be failed.
+ */
+static bool skd_fail_all(struct request_queue *q)
 {
 	struct skd_device *skdev = q->queuedata;
 
@@ -530,7 +533,7 @@ static void skd_request_fn_not_online(struct request_queue *q)
 	case SKD_DRVR_STATE_BUSY_IMMINENT:
 	case SKD_DRVR_STATE_BUSY_ERASE:
 	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
-		return;
+		return false;
 
 	case SKD_DRVR_STATE_BUSY_SANITIZE:
 	case SKD_DRVR_STATE_STOPPING:
@@ -538,14 +541,8 @@ static void skd_request_fn_not_online(struct request_queue *q)
 	case SKD_DRVR_STATE_FAULT:
 	case SKD_DRVR_STATE_DISAPPEARED:
 	default:
-		break;
+		return true;
 	}
-
-	/* If we get here, terminate all pending block requeusts
-	 * with EIO and any scsi pass thru with appropriate sense
-	 */
-
-	skd_fail_all_pending(skdev);
 }
 
 static void skd_request_fn(struct request_queue *q)
@@ -566,7 +563,8 @@ static void skd_request_fn(struct request_queue *q)
 	int flush, fua;
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
-		skd_request_fn_not_online(q);
+		if (skd_fail_all(q))
+			skd_fail_all_pending(skdev);
 		return;
 	}
 
-- 
2.14.0

* [PATCH 39/55] skd: Convert explicit skd_request_fn() calls
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (37 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 38/55] skd: Rework request failing code path Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 40/55] skd: Remove SG IO support Bart Van Assche
                   ` (17 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This will make it easier to convert this driver to blk-mq. This patch
also reduces interrupt latency by moving the skd_request_fn() calls out
of the skd_isr() interrupt handler.
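
In other words (an illustrative comparison only, mirroring the hunks
below): the direct call runs the legacy ->request_fn synchronously in
whatever context the caller happens to be in, including hard-IRQ
context, while blk_run_queue_async() defers the queue run to the
kblockd workqueue.

    /* Direct call: runs the request_fn inline, possibly in IRQ context. */
    skd_request_fn(skdev->queue);

    /* Deferred: hands the queue run to the kblockd workqueue. */
    blk_run_queue_async(skdev->queue);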

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 8040500ba09c..3db89707b227 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2806,7 +2806,7 @@ static void skd_completion_worker(struct work_struct *work)
 	 * process everything in compq
 	 */
 	skd_isr_completion_posted(skdev, 0, &flush_enqueued);
-	skd_request_fn(skdev->queue);
+	blk_run_queue_async(skdev->queue);
 
 	spin_unlock_irqrestore(&skdev->lock, flags);
 }
@@ -2882,12 +2882,12 @@ skd_isr(int irq, void *ptr)
 	}
 
 	if (unlikely(flush_enqueued))
-		skd_request_fn(skdev->queue);
+		blk_run_queue_async(skdev->queue);
 
 	if (deferred)
 		schedule_work(&skdev->completion_worker);
 	else if (!flush_enqueued)
-		skd_request_fn(skdev->queue);
+		blk_run_queue_async(skdev->queue);
 
 	spin_unlock(&skdev->lock);
 
@@ -3588,12 +3588,12 @@ static irqreturn_t skd_comp_q(int irq, void *skd_host_data)
 	deferred = skd_isr_completion_posted(skdev, skd_isr_comp_limit,
 						&flush_enqueued);
 	if (flush_enqueued)
-		skd_request_fn(skdev->queue);
+		blk_run_queue_async(skdev->queue);
 
 	if (deferred)
 		schedule_work(&skdev->completion_worker);
 	else if (!flush_enqueued)
-		skd_request_fn(skdev->queue);
+		blk_run_queue_async(skdev->queue);
 
 	spin_unlock_irqrestore(&skdev->lock, flags);
 
-- 
2.14.0

* [PATCH 40/55] skd: Remove SG IO support
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (38 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 39/55] skd: Convert explicit skd_request_fn() calls Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 41/55] skd: Remove dead code Bart Van Assche
                   ` (16 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The skd SG IO support duplicates the functionality of the bsg driver.
Hence remove it.
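
A hypothetical userspace check of the visible effect (illustrative only;
/dev/skd0 is an assumed device node name and the exact errno is not
guaranteed): with the driver-private handler gone, an SG_IO ioctl on the
skd block device is no longer serviced by this driver and is expected to
fail, e.g. with ENOTTY.

    #include <fcntl.h>
    #include <scsi/sg.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        int fd = open("/dev/skd0", O_RDWR);
        struct sg_io_hdr hdr = { .interface_id = 'S' };

        if (fd < 0 || ioctl(fd, SG_IO, &hdr) < 0)
            perror("SG_IO"); /* expected to fail after this patch */
        return 0;
    }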

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 1071 +---------------------------------------------
 1 file changed, 2 insertions(+), 1069 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 3db89707b227..13d06598c1b7 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -30,7 +30,6 @@
 #include <linux/err.h>
 #include <linux/aer.h>
 #include <linux/wait.h>
-#include <linux/uio.h>
 #include <linux/stringify.h>
 #include <scsi/scsi.h>
 #include <scsi/sg.h>
@@ -43,13 +42,6 @@
 static int skd_dbg_level;
 static int skd_isr_comp_limit = 4;
 
-enum {
-	STEC_LINK_2_5GTS = 0,
-	STEC_LINK_5GTS = 1,
-	STEC_LINK_8GTS = 2,
-	STEC_LINK_UNKNOWN = 0xFF
-};
-
 enum {
 	SKD_FLUSH_INITIALIZER,
 	SKD_FLUSH_ZERO_SIZE_FIRST,
@@ -68,8 +60,6 @@ enum {
 #define DRV_VERSION "2.2.1"
 #define DRV_BUILD_ID "0260"
 #define PFX DRV_NAME ": "
-#define DRV_BIN_VERSION 0x100
-#define DRV_VER_COMPL   "2.2.1." DRV_BUILD_ID
 
 MODULE_LICENSE("GPL");
 
@@ -89,14 +79,12 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
 #define SKD_N_FITMSG_BYTES      (512u)
 #define SKD_MAX_REQ_PER_MSG	14
 
-#define SKD_N_SPECIAL_CONTEXT   32u
 #define SKD_N_SPECIAL_FITMSG_BYTES      (128u)
 
 /* SG elements are 32 bytes, so we can make this 4096 and still be under the
  * 128KB limit.  That allows 4096*4K = 16M xfer size
  */
 #define SKD_N_SG_PER_REQ_DEFAULT 256u
-#define SKD_N_SG_PER_SPECIAL    256u
 
 #define SKD_N_COMPLETION_ENTRY  256u
 #define SKD_N_READ_CAP_BYTES    (8u)
@@ -112,7 +100,6 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
 #define SKD_ID_TABLE_MASK       (3u << 8u)
 #define  SKD_ID_RW_REQUEST      (0u << 8u)
 #define  SKD_ID_INTERNAL        (1u << 8u)
-#define  SKD_ID_SPECIAL_REQUEST (2u << 8u)
 #define  SKD_ID_FIT_MSG         (3u << 8u)
 #define SKD_ID_SLOT_MASK        0x00FFu
 #define SKD_ID_SLOT_AND_TABLE_MASK 0x03FFu
@@ -229,8 +216,6 @@ struct skd_request_context {
 struct skd_special_context {
 	struct skd_request_context req;
 
-	u8 orphaned;
-
 	void *data_buf;
 	dma_addr_t db_dma_address;
 
@@ -238,22 +223,6 @@ struct skd_special_context {
 	dma_addr_t mb_dma_address;
 };
 
-struct skd_sg_io {
-	fmode_t mode;
-	void __user *argp;
-
-	struct sg_io_hdr sg;
-
-	u8 cdb[16];
-
-	u32 dxfer_len;
-	u32 iovcnt;
-	struct sg_iovec *iov;
-	struct sg_iovec no_iov_iov;
-
-	struct skd_special_context *skspcl;
-};
-
 typedef enum skd_irq_type {
 	SKD_IRQ_LEGACY,
 	SKD_IRQ_MSI,
@@ -302,9 +271,6 @@ struct skd_device {
 	struct skd_request_context *skreq_free_list;
 	struct skd_request_context *skreq_table;
 
-	struct skd_special_context *skspcl_free_list;
-	struct skd_special_context *skspcl_table;
-
 	struct skd_special_context internal_skspcl;
 	u32 read_cap_blocksize;
 	u32 read_cap_last_lba;
@@ -324,7 +290,6 @@ struct skd_device {
 	u32 timer_countdown;
 	u32 timer_substate;
 
-	int n_special;
 	int sgs_per_request;
 	u32 last_mtd;
 
@@ -402,10 +367,10 @@ MODULE_PARM_DESC(skd_sgs_per_request,
 		 "Maximum SG elements per block request."
 		 " (1-4096, default==256)");
 
-static int skd_max_pass_thru = SKD_N_SPECIAL_CONTEXT;
+static int skd_max_pass_thru = 1;
 module_param(skd_max_pass_thru, int, 0444);
 MODULE_PARM_DESC(skd_max_pass_thru,
-		 "Maximum SCSI pass-thru at a time." " (1-50, default==32)");
+		 "Maximum SCSI pass-thru at a time. IGNORED");
 
 module_param(skd_dbg_level, int, 0444);
 MODULE_PARM_DESC(skd_dbg_level, "s1120 debug level (0,1,2)");
@@ -433,8 +398,6 @@ static void skd_postop_sg_list(struct skd_device *skdev,
 static void skd_restart_device(struct skd_device *skdev);
 static int skd_quiesce_dev(struct skd_device *skdev);
 static int skd_unquiesce_dev(struct skd_device *skdev);
-static void skd_release_special(struct skd_device *skdev,
-				struct skd_special_context *skspcl);
 static void skd_disable_interrupts(struct skd_device *skdev);
 static void skd_isr_fwstate(struct skd_device *skdev);
 static void skd_recover_requests(struct skd_device *skdev);
@@ -1066,626 +1029,6 @@ static void skd_kill_timer(struct skd_device *skdev)
 	del_timer_sync(&skdev->timer);
 }
 
-/*
- *****************************************************************************
- * IOCTL
- *****************************************************************************
- */
-static int skd_ioctl_sg_io(struct skd_device *skdev,
-			   fmode_t mode, void __user *argp);
-static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
-					struct skd_sg_io *sksgio);
-static int skd_sg_io_obtain_skspcl(struct skd_device *skdev,
-				   struct skd_sg_io *sksgio);
-static int skd_sg_io_prep_buffering(struct skd_device *skdev,
-				    struct skd_sg_io *sksgio);
-static int skd_sg_io_copy_buffer(struct skd_device *skdev,
-				 struct skd_sg_io *sksgio, int dxfer_dir);
-static int skd_sg_io_send_fitmsg(struct skd_device *skdev,
-				 struct skd_sg_io *sksgio);
-static int skd_sg_io_await(struct skd_device *skdev, struct skd_sg_io *sksgio);
-static int skd_sg_io_release_skspcl(struct skd_device *skdev,
-				    struct skd_sg_io *sksgio);
-static int skd_sg_io_put_status(struct skd_device *skdev,
-				struct skd_sg_io *sksgio);
-
-static void skd_complete_special(struct skd_device *skdev,
-				 struct fit_completion_entry_v1 *skcomp,
-				 struct fit_comp_error_info *skerr,
-				 struct skd_special_context *skspcl);
-
-static int skd_bdev_ioctl(struct block_device *bdev, fmode_t mode,
-			  uint cmd_in, ulong arg)
-{
-	static const int sg_version_num = 30527;
-	int rc = 0, timeout;
-	struct gendisk *disk = bdev->bd_disk;
-	struct skd_device *skdev = disk->private_data;
-	int __user *p = (int __user *)arg;
-
-	dev_dbg(&skdev->pdev->dev,
-		"%s: CMD[%s] ioctl  mode 0x%x, cmd 0x%x arg %0lx\n",
-		disk->disk_name, current->comm, mode, cmd_in, arg);
-
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
-
-	switch (cmd_in) {
-	case SG_SET_TIMEOUT:
-		rc = get_user(timeout, p);
-		if (!rc)
-			disk->queue->sg_timeout = clock_t_to_jiffies(timeout);
-		break;
-	case SG_GET_TIMEOUT:
-		rc = jiffies_to_clock_t(disk->queue->sg_timeout);
-		break;
-	case SG_GET_VERSION_NUM:
-		rc = put_user(sg_version_num, p);
-		break;
-	case SG_IO:
-		rc = skd_ioctl_sg_io(skdev, mode, (void __user *)arg);
-		break;
-
-	default:
-		rc = -ENOTTY;
-		break;
-	}
-
-	dev_dbg(&skdev->pdev->dev, "%s:  completion rc %d\n", disk->disk_name,
-		rc);
-	return rc;
-}
-
-static int skd_ioctl_sg_io(struct skd_device *skdev, fmode_t mode,
-			   void __user *argp)
-{
-	int rc;
-	struct skd_sg_io sksgio;
-
-	memset(&sksgio, 0, sizeof(sksgio));
-	sksgio.mode = mode;
-	sksgio.argp = argp;
-	sksgio.iov = &sksgio.no_iov_iov;
-
-	switch (skdev->state) {
-	case SKD_DRVR_STATE_ONLINE:
-	case SKD_DRVR_STATE_BUSY_IMMINENT:
-		break;
-
-	default:
-		dev_dbg(&skdev->pdev->dev, "drive not online\n");
-		rc = -ENXIO;
-		goto out;
-	}
-
-	rc = skd_sg_io_get_and_check_args(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_obtain_skspcl(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_prep_buffering(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_copy_buffer(skdev, &sksgio, SG_DXFER_TO_DEV);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_send_fitmsg(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_await(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_copy_buffer(skdev, &sksgio, SG_DXFER_FROM_DEV);
-	if (rc)
-		goto out;
-
-	rc = skd_sg_io_put_status(skdev, &sksgio);
-	if (rc)
-		goto out;
-
-	rc = 0;
-
-out:
-	skd_sg_io_release_skspcl(skdev, &sksgio);
-
-	if (sksgio.iov != NULL && sksgio.iov != &sksgio.no_iov_iov)
-		kfree(sksgio.iov);
-	return rc;
-}
-
-static int skd_sg_io_get_and_check_args(struct skd_device *skdev,
-					struct skd_sg_io *sksgio)
-{
-	struct sg_io_hdr *sgp = &sksgio->sg;
-	int i, __maybe_unused acc;
-
-	if (!access_ok(VERIFY_WRITE, sksgio->argp, sizeof(sg_io_hdr_t))) {
-		dev_dbg(&skdev->pdev->dev, "access sg failed %p\n",
-			sksgio->argp);
-		return -EFAULT;
-	}
-
-	if (__copy_from_user(sgp, sksgio->argp, sizeof(sg_io_hdr_t))) {
-		dev_dbg(&skdev->pdev->dev, "copy_from_user sg failed %p\n",
-			sksgio->argp);
-		return -EFAULT;
-	}
-
-	if (sgp->interface_id != SG_INTERFACE_ID_ORIG) {
-		dev_dbg(&skdev->pdev->dev, "interface_id invalid 0x%x\n",
-			sgp->interface_id);
-		return -EINVAL;
-	}
-
-	if (sgp->cmd_len > sizeof(sksgio->cdb)) {
-		dev_dbg(&skdev->pdev->dev, "cmd_len invalid %d\n",
-			sgp->cmd_len);
-		return -EINVAL;
-	}
-
-	if (sgp->iovec_count > 256) {
-		dev_dbg(&skdev->pdev->dev, "iovec_count invalid %d\n",
-			sgp->iovec_count);
-		return -EINVAL;
-	}
-
-	if (sgp->dxfer_len > (PAGE_SIZE * SKD_N_SG_PER_SPECIAL)) {
-		dev_dbg(&skdev->pdev->dev, "dxfer_len invalid %d\n",
-			sgp->dxfer_len);
-		return -EINVAL;
-	}
-
-	switch (sgp->dxfer_direction) {
-	case SG_DXFER_NONE:
-		acc = -1;
-		break;
-
-	case SG_DXFER_TO_DEV:
-		acc = VERIFY_READ;
-		break;
-
-	case SG_DXFER_FROM_DEV:
-	case SG_DXFER_TO_FROM_DEV:
-		acc = VERIFY_WRITE;
-		break;
-
-	default:
-		dev_dbg(&skdev->pdev->dev, "dxfer_dir invalid %d\n",
-			sgp->dxfer_direction);
-		return -EINVAL;
-	}
-
-	if (copy_from_user(sksgio->cdb, sgp->cmdp, sgp->cmd_len)) {
-		dev_dbg(&skdev->pdev->dev, "copy_from_user cmdp failed %p\n",
-			sgp->cmdp);
-		return -EFAULT;
-	}
-
-	if (sgp->mx_sb_len != 0) {
-		if (!access_ok(VERIFY_WRITE, sgp->sbp, sgp->mx_sb_len)) {
-			dev_dbg(&skdev->pdev->dev, "access sbp failed %p\n",
-				sgp->sbp);
-			return -EFAULT;
-		}
-	}
-
-	if (sgp->iovec_count == 0) {
-		sksgio->iov[0].iov_base = sgp->dxferp;
-		sksgio->iov[0].iov_len = sgp->dxfer_len;
-		sksgio->iovcnt = 1;
-		sksgio->dxfer_len = sgp->dxfer_len;
-	} else {
-		struct sg_iovec *iov;
-		uint nbytes = sizeof(*iov) * sgp->iovec_count;
-		size_t iov_data_len;
-
-		iov = kmalloc(nbytes, GFP_KERNEL);
-		if (iov == NULL) {
-			dev_dbg(&skdev->pdev->dev, "alloc iovec failed %d\n",
-				sgp->iovec_count);
-			return -ENOMEM;
-		}
-		sksgio->iov = iov;
-		sksgio->iovcnt = sgp->iovec_count;
-
-		if (copy_from_user(iov, sgp->dxferp, nbytes)) {
-			dev_dbg(&skdev->pdev->dev,
-				"copy_from_user iovec failed %p\n",
-				sgp->dxferp);
-			return -EFAULT;
-		}
-
-		/*
-		 * Sum up the vecs, making sure they don't overflow
-		 */
-		iov_data_len = 0;
-		for (i = 0; i < sgp->iovec_count; i++) {
-			if (iov_data_len + iov[i].iov_len < iov_data_len)
-				return -EINVAL;
-			iov_data_len += iov[i].iov_len;
-		}
-
-		/* SG_IO howto says that the shorter of the two wins */
-		if (sgp->dxfer_len < iov_data_len) {
-			sksgio->iovcnt = iov_shorten((struct iovec *)iov,
-						     sgp->iovec_count,
-						     sgp->dxfer_len);
-			sksgio->dxfer_len = sgp->dxfer_len;
-		} else
-			sksgio->dxfer_len = iov_data_len;
-	}
-
-	if (sgp->dxfer_direction != SG_DXFER_NONE) {
-		struct sg_iovec *iov = sksgio->iov;
-		for (i = 0; i < sksgio->iovcnt; i++, iov++) {
-			if (!access_ok(acc, iov->iov_base, iov->iov_len)) {
-				dev_dbg(&skdev->pdev->dev,
-					"access data failed %p/%zd\n",
-					iov->iov_base, iov->iov_len);
-				return -EFAULT;
-			}
-		}
-	}
-
-	return 0;
-}
-
-static int skd_sg_io_obtain_skspcl(struct skd_device *skdev,
-				   struct skd_sg_io *sksgio)
-{
-	struct skd_special_context *skspcl = NULL;
-	int rc;
-
-	for (;;) {
-		ulong flags;
-
-		spin_lock_irqsave(&skdev->lock, flags);
-		skspcl = skdev->skspcl_free_list;
-		if (skspcl != NULL) {
-			skdev->skspcl_free_list =
-				(struct skd_special_context *)skspcl->req.next;
-			skspcl->req.id += SKD_ID_INCR;
-			skspcl->req.state = SKD_REQ_STATE_SETUP;
-			skspcl->orphaned = 0;
-			skspcl->req.n_sg = 0;
-		}
-		spin_unlock_irqrestore(&skdev->lock, flags);
-
-		if (skspcl != NULL) {
-			rc = 0;
-			break;
-		}
-
-		dev_dbg(&skdev->pdev->dev, "blocking\n");
-
-		rc = wait_event_interruptible_timeout(
-				skdev->waitq,
-				(skdev->skspcl_free_list != NULL),
-				msecs_to_jiffies(sksgio->sg.timeout));
-
-		dev_dbg(&skdev->pdev->dev, "unblocking, rc=%d\n", rc);
-
-		if (rc <= 0) {
-			if (rc == 0)
-				rc = -ETIMEDOUT;
-			else
-				rc = -EINTR;
-			break;
-		}
-		/*
-		 * If we get here rc > 0 meaning the timeout to
-		 * wait_event_interruptible_timeout() had time left, hence the
-		 * sought event -- non-empty free list -- happened.
-		 * Retry the allocation.
-		 */
-	}
-	sksgio->skspcl = skspcl;
-
-	return rc;
-}
-
-static int skd_skreq_prep_buffering(struct skd_device *skdev,
-				    struct skd_request_context *skreq,
-				    u32 dxfer_len)
-{
-	u32 resid = dxfer_len;
-
-	/*
-	 * The DMA engine must have aligned addresses and byte counts.
-	 */
-	resid += (-resid) & 3;
-	skreq->sg_byte_count = resid;
-
-	skreq->n_sg = 0;
-
-	while (resid > 0) {
-		u32 nbytes = PAGE_SIZE;
-		u32 ix = skreq->n_sg;
-		struct scatterlist *sg = &skreq->sg[ix];
-		struct fit_sg_descriptor *sksg = &skreq->sksg_list[ix];
-		struct page *page;
-
-		if (nbytes > resid)
-			nbytes = resid;
-
-		page = alloc_page(GFP_KERNEL);
-		if (page == NULL)
-			return -ENOMEM;
-
-		sg_set_page(sg, page, nbytes, 0);
-
-		/* TODO: This should be going through a pci_???()
-		 * routine to do proper mapping. */
-		sksg->control = FIT_SGD_CONTROL_NOT_LAST;
-		sksg->byte_count = nbytes;
-
-		sksg->host_side_addr = sg_phys(sg);
-
-		sksg->dev_side_addr = 0;
-		sksg->next_desc_ptr = skreq->sksg_dma_address +
-				      (ix + 1) * sizeof(*sksg);
-
-		skreq->n_sg++;
-		resid -= nbytes;
-	}
-
-	if (skreq->n_sg > 0) {
-		u32 ix = skreq->n_sg - 1;
-		struct fit_sg_descriptor *sksg = &skreq->sksg_list[ix];
-
-		sksg->control = FIT_SGD_CONTROL_LAST;
-		sksg->next_desc_ptr = 0;
-	}
-
-	if (unlikely(skdev->dbg_level > 1)) {
-		u32 i;
-
-		dev_dbg(&skdev->pdev->dev,
-			"skreq=%x sksg_list=%p sksg_dma=%llx\n",
-			skreq->id, skreq->sksg_list, skreq->sksg_dma_address);
-		for (i = 0; i < skreq->n_sg; i++) {
-			struct fit_sg_descriptor *sgd = &skreq->sksg_list[i];
-
-			dev_dbg(&skdev->pdev->dev,
-				"  sg[%d] count=%u ctrl=0x%x addr=0x%llx next=0x%llx\n",
-				i, sgd->byte_count, sgd->control,
-				sgd->host_side_addr, sgd->next_desc_ptr);
-		}
-	}
-
-	return 0;
-}
-
-static int skd_sg_io_prep_buffering(struct skd_device *skdev,
-				    struct skd_sg_io *sksgio)
-{
-	struct skd_special_context *skspcl = sksgio->skspcl;
-	struct skd_request_context *skreq = &skspcl->req;
-	u32 dxfer_len = sksgio->dxfer_len;
-	int rc;
-
-	rc = skd_skreq_prep_buffering(skdev, skreq, dxfer_len);
-	/*
-	 * Eventually, errors or not, skd_release_special() is called
-	 * to recover allocations including partial allocations.
-	 */
-	return rc;
-}
-
-static int skd_sg_io_copy_buffer(struct skd_device *skdev,
-				 struct skd_sg_io *sksgio, int dxfer_dir)
-{
-	struct skd_special_context *skspcl = sksgio->skspcl;
-	u32 iov_ix = 0;
-	struct sg_iovec curiov;
-	u32 sksg_ix = 0;
-	u8 *bufp = NULL;
-	u32 buf_len = 0;
-	u32 resid = sksgio->dxfer_len;
-	int rc;
-
-	curiov.iov_len = 0;
-	curiov.iov_base = NULL;
-
-	if (dxfer_dir != sksgio->sg.dxfer_direction) {
-		if (dxfer_dir != SG_DXFER_TO_DEV ||
-		    sksgio->sg.dxfer_direction != SG_DXFER_TO_FROM_DEV)
-			return 0;
-	}
-
-	while (resid > 0) {
-		u32 nbytes = PAGE_SIZE;
-
-		if (curiov.iov_len == 0) {
-			curiov = sksgio->iov[iov_ix++];
-			continue;
-		}
-
-		if (buf_len == 0) {
-			struct page *page;
-			page = sg_page(&skspcl->req.sg[sksg_ix++]);
-			bufp = page_address(page);
-			buf_len = PAGE_SIZE;
-		}
-
-		nbytes = min_t(u32, nbytes, resid);
-		nbytes = min_t(u32, nbytes, curiov.iov_len);
-		nbytes = min_t(u32, nbytes, buf_len);
-
-		if (dxfer_dir == SG_DXFER_TO_DEV)
-			rc = __copy_from_user(bufp, curiov.iov_base, nbytes);
-		else
-			rc = __copy_to_user(curiov.iov_base, bufp, nbytes);
-
-		if (rc)
-			return -EFAULT;
-
-		resid -= nbytes;
-		curiov.iov_len -= nbytes;
-		curiov.iov_base += nbytes;
-		buf_len -= nbytes;
-	}
-
-	return 0;
-}
-
-static int skd_sg_io_send_fitmsg(struct skd_device *skdev,
-				 struct skd_sg_io *sksgio)
-{
-	struct skd_special_context *skspcl = sksgio->skspcl;
-	struct fit_msg_hdr *fmh = &skspcl->msg_buf->fmh;
-	struct skd_scsi_request *scsi_req = &skspcl->msg_buf->scsi[0];
-
-	memset(skspcl->msg_buf, 0, SKD_N_SPECIAL_FITMSG_BYTES);
-
-	/* Initialize the FIT msg header */
-	fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
-	fmh->num_protocol_cmds_coalesced = 1;
-
-	/* Initialize the SCSI request */
-	if (sksgio->sg.dxfer_direction != SG_DXFER_NONE)
-		scsi_req->hdr.sg_list_dma_address =
-			cpu_to_be64(skspcl->req.sksg_dma_address);
-	scsi_req->hdr.tag = skspcl->req.id;
-	scsi_req->hdr.sg_list_len_bytes =
-		cpu_to_be32(skspcl->req.sg_byte_count);
-	memcpy(scsi_req->cdb, sksgio->cdb, sizeof(scsi_req->cdb));
-
-	skspcl->req.state = SKD_REQ_STATE_BUSY;
-	skd_send_special_fitmsg(skdev, skspcl);
-
-	return 0;
-}
-
-static int skd_sg_io_await(struct skd_device *skdev, struct skd_sg_io *sksgio)
-{
-	unsigned long flags;
-	int rc;
-
-	rc = wait_event_interruptible_timeout(skdev->waitq,
-					      (sksgio->skspcl->req.state !=
-					       SKD_REQ_STATE_BUSY),
-					      msecs_to_jiffies(sksgio->sg.
-							       timeout));
-
-	spin_lock_irqsave(&skdev->lock, flags);
-
-	if (sksgio->skspcl->req.state == SKD_REQ_STATE_ABORTED) {
-		dev_dbg(&skdev->pdev->dev, "skspcl %p aborted\n",
-			sksgio->skspcl);
-
-		/* Build check cond, sense and let command finish. */
-		/* For a timeout, we must fabricate completion and sense
-		 * data to complete the command */
-		sksgio->skspcl->req.completion.status =
-			SAM_STAT_CHECK_CONDITION;
-
-		memset(&sksgio->skspcl->req.err_info, 0,
-		       sizeof(sksgio->skspcl->req.err_info));
-		sksgio->skspcl->req.err_info.type = 0x70;
-		sksgio->skspcl->req.err_info.key = ABORTED_COMMAND;
-		sksgio->skspcl->req.err_info.code = 0x44;
-		sksgio->skspcl->req.err_info.qual = 0;
-		rc = 0;
-	} else if (sksgio->skspcl->req.state != SKD_REQ_STATE_BUSY)
-		/* No longer on the adapter. We finish. */
-		rc = 0;
-	else {
-		/* Something's gone wrong. Still busy. Timeout or
-		 * user interrupted (control-C). Mark as an orphan
-		 * so it will be disposed when completed. */
-		sksgio->skspcl->orphaned = 1;
-		sksgio->skspcl = NULL;
-		if (rc == 0) {
-			dev_dbg(&skdev->pdev->dev, "timed out %p (%u ms)\n",
-				sksgio, sksgio->sg.timeout);
-			rc = -ETIMEDOUT;
-		} else {
-			dev_dbg(&skdev->pdev->dev, "cntlc %p\n", sksgio);
-			rc = -EINTR;
-		}
-	}
-
-	spin_unlock_irqrestore(&skdev->lock, flags);
-
-	return rc;
-}
-
-static int skd_sg_io_put_status(struct skd_device *skdev,
-				struct skd_sg_io *sksgio)
-{
-	struct sg_io_hdr *sgp = &sksgio->sg;
-	struct skd_special_context *skspcl = sksgio->skspcl;
-	int resid = 0;
-
-	u32 nb = be32_to_cpu(skspcl->req.completion.num_returned_bytes);
-
-	sgp->status = skspcl->req.completion.status;
-	resid = sksgio->dxfer_len - nb;
-
-	sgp->masked_status = sgp->status & STATUS_MASK;
-	sgp->msg_status = 0;
-	sgp->host_status = 0;
-	sgp->driver_status = 0;
-	sgp->resid = resid;
-	if (sgp->masked_status || sgp->host_status || sgp->driver_status)
-		sgp->info |= SG_INFO_CHECK;
-
-	dev_dbg(&skdev->pdev->dev, "status %x masked %x resid 0x%x\n",
-		sgp->status, sgp->masked_status, sgp->resid);
-
-	if (sgp->masked_status == SAM_STAT_CHECK_CONDITION) {
-		if (sgp->mx_sb_len > 0) {
-			struct fit_comp_error_info *ei = &skspcl->req.err_info;
-			u32 nbytes = sizeof(*ei);
-
-			nbytes = min_t(u32, nbytes, sgp->mx_sb_len);
-
-			sgp->sb_len_wr = nbytes;
-
-			if (__copy_to_user(sgp->sbp, ei, nbytes)) {
-				dev_dbg(&skdev->pdev->dev,
-					"copy_to_user sense failed %p\n",
-					sgp->sbp);
-				return -EFAULT;
-			}
-		}
-	}
-
-	if (__copy_to_user(sksgio->argp, sgp, sizeof(sg_io_hdr_t))) {
-		dev_dbg(&skdev->pdev->dev, "copy_to_user sg failed %p\n",
-			sksgio->argp);
-		return -EFAULT;
-	}
-
-	return 0;
-}
-
-static int skd_sg_io_release_skspcl(struct skd_device *skdev,
-				    struct skd_sg_io *sksgio)
-{
-	struct skd_special_context *skspcl = sksgio->skspcl;
-
-	if (skspcl != NULL) {
-		ulong flags;
-
-		sksgio->skspcl = NULL;
-
-		spin_lock_irqsave(&skdev->lock, flags);
-		skd_release_special(skdev, skspcl);
-		spin_unlock_irqrestore(&skdev->lock, flags);
-	}
-
-	return 0;
-}
-
 /*
  *****************************************************************************
  * INTERNAL REQUESTS -- generated by driver itself
@@ -2305,202 +1648,6 @@ static void skd_release_skreq(struct skd_device *skdev,
 	skdev->skreq_free_list = skreq;
 }
 
-#define DRIVER_INQ_EVPD_PAGE_CODE   0xDA
-
-static void skd_do_inq_page_00(struct skd_device *skdev,
-			       struct fit_completion_entry_v1 *skcomp,
-			       struct fit_comp_error_info *skerr,
-			       uint8_t *cdb, uint8_t *buf)
-{
-	uint16_t insert_pt, max_bytes, drive_pages, drive_bytes, new_size;
-
-	/* Caller requested "supported pages".  The driver needs to insert
-	 * its page.
-	 */
-	dev_dbg(&skdev->pdev->dev,
-		"skd_do_driver_inquiry: modify supported pages.\n");
-
-	/* If the device rejected the request because the CDB was
-	 * improperly formed, then just leave.
-	 */
-	if (skcomp->status == SAM_STAT_CHECK_CONDITION &&
-	    skerr->key == ILLEGAL_REQUEST && skerr->code == 0x24)
-		return;
-
-	/* Get the amount of space the caller allocated */
-	max_bytes = (cdb[3] << 8) | cdb[4];
-
-	/* Get the number of pages actually returned by the device */
-	drive_pages = (buf[2] << 8) | buf[3];
-	drive_bytes = drive_pages + 4;
-	new_size = drive_pages + 1;
-
-	/* Supported pages must be in numerical order, so find where
-	 * the driver page needs to be inserted into the list of
-	 * pages returned by the device.
-	 */
-	for (insert_pt = 4; insert_pt < drive_bytes; insert_pt++) {
-		if (buf[insert_pt] == DRIVER_INQ_EVPD_PAGE_CODE)
-			return; /* Device using this page code. abort */
-		else if (buf[insert_pt] > DRIVER_INQ_EVPD_PAGE_CODE)
-			break;
-	}
-
-	if (insert_pt < max_bytes) {
-		uint16_t u;
-
-		/* Shift everything up one byte to make room. */
-		for (u = new_size + 3; u > insert_pt; u--)
-			buf[u] = buf[u - 1];
-		buf[insert_pt] = DRIVER_INQ_EVPD_PAGE_CODE;
-
-		/* SCSI byte order increment of num_returned_bytes by 1 */
-		skcomp->num_returned_bytes =
-			cpu_to_be32(be32_to_cpu(skcomp->num_returned_bytes) + 1);
-	}
-
-	/* update page length field to reflect the driver's page too */
-	buf[2] = (uint8_t)((new_size >> 8) & 0xFF);
-	buf[3] = (uint8_t)((new_size >> 0) & 0xFF);
-}
-
-static void skd_get_link_info(struct pci_dev *pdev, u8 *speed, u8 *width)
-{
-	int pcie_reg;
-	u16 pci_bus_speed;
-	u8 pci_lanes;
-
-	pcie_reg = pci_find_capability(pdev, PCI_CAP_ID_EXP);
-	if (pcie_reg) {
-		u16 linksta;
-		pci_read_config_word(pdev, pcie_reg + PCI_EXP_LNKSTA, &linksta);
-
-		pci_bus_speed = linksta & 0xF;
-		pci_lanes = (linksta & 0x3F0) >> 4;
-	} else {
-		*speed = STEC_LINK_UNKNOWN;
-		*width = 0xFF;
-		return;
-	}
-
-	switch (pci_bus_speed) {
-	case 1:
-		*speed = STEC_LINK_2_5GTS;
-		break;
-	case 2:
-		*speed = STEC_LINK_5GTS;
-		break;
-	case 3:
-		*speed = STEC_LINK_8GTS;
-		break;
-	default:
-		*speed = STEC_LINK_UNKNOWN;
-		break;
-	}
-
-	if (pci_lanes <= 0x20)
-		*width = pci_lanes;
-	else
-		*width = 0xFF;
-}
-
-static void skd_do_inq_page_da(struct skd_device *skdev,
-			       struct fit_completion_entry_v1 *skcomp,
-			       struct fit_comp_error_info *skerr,
-			       uint8_t *cdb, uint8_t *buf)
-{
-	struct pci_dev *pdev = skdev->pdev;
-	unsigned max_bytes;
-	struct driver_inquiry_data inq;
-	u16 val;
-
-	dev_dbg(&skdev->pdev->dev, "skd_do_driver_inquiry: return driver page\n");
-
-	memset(&inq, 0, sizeof(inq));
-
-	inq.page_code = DRIVER_INQ_EVPD_PAGE_CODE;
-
-	skd_get_link_info(pdev, &inq.pcie_link_speed, &inq.pcie_link_lanes);
-	inq.pcie_bus_number = cpu_to_be16(pdev->bus->number);
-	inq.pcie_device_number = PCI_SLOT(pdev->devfn);
-	inq.pcie_function_number = PCI_FUNC(pdev->devfn);
-
-	pci_read_config_word(pdev, PCI_VENDOR_ID, &val);
-	inq.pcie_vendor_id = cpu_to_be16(val);
-
-	pci_read_config_word(pdev, PCI_DEVICE_ID, &val);
-	inq.pcie_device_id = cpu_to_be16(val);
-
-	pci_read_config_word(pdev, PCI_SUBSYSTEM_VENDOR_ID, &val);
-	inq.pcie_subsystem_vendor_id = cpu_to_be16(val);
-
-	pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &val);
-	inq.pcie_subsystem_device_id = cpu_to_be16(val);
-
-	/* Driver version, fixed lenth, padded with spaces on the right */
-	inq.driver_version_length = sizeof(inq.driver_version);
-	memset(&inq.driver_version, ' ', sizeof(inq.driver_version));
-	memcpy(inq.driver_version, DRV_VER_COMPL,
-	       min(sizeof(inq.driver_version), strlen(DRV_VER_COMPL)));
-
-	inq.page_length = cpu_to_be16((sizeof(inq) - 4));
-
-	/* Clear the error set by the device */
-	skcomp->status = SAM_STAT_GOOD;
-	memset((void *)skerr, 0, sizeof(*skerr));
-
-	/* copy response into output buffer */
-	max_bytes = (cdb[3] << 8) | cdb[4];
-	memcpy(buf, &inq, min_t(unsigned, max_bytes, sizeof(inq)));
-
-	skcomp->num_returned_bytes =
-		cpu_to_be32(min_t(uint16_t, max_bytes, sizeof(inq)));
-}
-
-static void skd_do_driver_inq(struct skd_device *skdev,
-			      struct fit_completion_entry_v1 *skcomp,
-			      struct fit_comp_error_info *skerr,
-			      uint8_t *cdb, uint8_t *buf)
-{
-	if (!buf)
-		return;
-	else if (cdb[0] != INQUIRY)
-		return;         /* Not an INQUIRY */
-	else if ((cdb[1] & 1) == 0)
-		return;         /* EVPD not set */
-	else if (cdb[2] == 0)
-		/* Need to add driver's page to supported pages list */
-		skd_do_inq_page_00(skdev, skcomp, skerr, cdb, buf);
-	else if (cdb[2] == DRIVER_INQ_EVPD_PAGE_CODE)
-		/* Caller requested driver's page */
-		skd_do_inq_page_da(skdev, skcomp, skerr, cdb, buf);
-}
-
-static unsigned char *skd_sg_1st_page_ptr(struct scatterlist *sg)
-{
-	if (!sg)
-		return NULL;
-	if (!sg_page(sg))
-		return NULL;
-	return sg_virt(sg);
-}
-
-static void skd_process_scsi_inq(struct skd_device *skdev,
-				 struct fit_completion_entry_v1 *skcomp,
-				 struct fit_comp_error_info *skerr,
-				 struct skd_special_context *skspcl)
-{
-	uint8_t *buf;
-	struct skd_scsi_request *scsi_req = &skspcl->msg_buf->scsi[0];
-
-	dma_sync_sg_for_cpu(skdev->class_dev, skspcl->req.sg, skspcl->req.n_sg,
-			    skspcl->req.data_dir);
-	buf = skd_sg_1st_page_ptr(skspcl->req.sg);
-
-	if (buf)
-		skd_do_driver_inq(skdev, skcomp, skerr, scsi_req->cdb, buf);
-}
-
 static int skd_isr_completion_posted(struct skd_device *skdev,
 					int limit, int *enqueued)
 {
@@ -2678,22 +1825,6 @@ static void skd_complete_other(struct skd_device *skdev,
 		 */
 		break;
 
-	case SKD_ID_SPECIAL_REQUEST:
-		/*
-		 * Make sure the req_slot is in bounds and that the id
-		 * matches.
-		 */
-		if (req_slot < skdev->n_special) {
-			skspcl = &skdev->skspcl_table[req_slot];
-			if (skspcl->req.id == req_id &&
-			    skspcl->req.state == SKD_REQ_STATE_BUSY) {
-				skd_complete_special(skdev,
-						     skcomp, skerr, skspcl);
-				return;
-			}
-		}
-		break;
-
 	case SKD_ID_INTERNAL:
 		if (req_slot == 0) {
 			skspcl = &skdev->internal_skspcl;
@@ -2724,61 +1855,6 @@ static void skd_complete_other(struct skd_device *skdev,
 	 */
 }
 
-static void skd_complete_special(struct skd_device *skdev,
-				 struct fit_completion_entry_v1 *skcomp,
-				 struct fit_comp_error_info *skerr,
-				 struct skd_special_context *skspcl)
-{
-	lockdep_assert_held(&skdev->lock);
-
-	dev_dbg(&skdev->pdev->dev, " completing special request %p\n", skspcl);
-	if (skspcl->orphaned) {
-		/* Discard orphaned request */
-		/* ?: Can this release directly or does it need
-		 * to use a worker? */
-		dev_dbg(&skdev->pdev->dev, "release orphaned %p\n", skspcl);
-		skd_release_special(skdev, skspcl);
-		return;
-	}
-
-	skd_process_scsi_inq(skdev, skcomp, skerr, skspcl);
-
-	skspcl->req.state = SKD_REQ_STATE_COMPLETED;
-	skspcl->req.completion = *skcomp;
-	skspcl->req.err_info = *skerr;
-
-	skd_log_check_status(skdev, skspcl->req.completion.status, skerr->key,
-			     skerr->code, skerr->qual, skerr->fruc);
-
-	wake_up_interruptible(&skdev->waitq);
-}
-
-/* assume spinlock is already held */
-static void skd_release_special(struct skd_device *skdev,
-				struct skd_special_context *skspcl)
-{
-	int i, was_depleted;
-
-	for (i = 0; i < skspcl->req.n_sg; i++) {
-		struct page *page = sg_page(&skspcl->req.sg[i]);
-		__free_page(page);
-	}
-
-	was_depleted = (skdev->skspcl_free_list == NULL);
-
-	skspcl->req.state = SKD_REQ_STATE_IDLE;
-	skspcl->req.id += SKD_ID_INCR;
-	skspcl->req.next =
-		(struct skd_request_context *)skdev->skspcl_free_list;
-	skdev->skspcl_free_list = (struct skd_special_context *)skspcl;
-
-	if (was_depleted) {
-		dev_dbg(&skdev->pdev->dev, "skspcl was depleted\n");
-		/* Free list was depleted. Their might be waiters. */
-		wake_up_interruptible(&skdev->waitq);
-	}
-}
-
 static void skd_reset_skcomp(struct skd_device *skdev)
 {
 	memset(skdev->skcomp_table, 0, SKD_SKCOMP_SIZE);
@@ -3071,30 +2147,6 @@ static void skd_recover_requests(struct skd_device *skdev)
 	}
 	skdev->skmsg_free_list = skdev->skmsg_table;
 
-	for (i = 0; i < skdev->n_special; i++) {
-		struct skd_special_context *skspcl = &skdev->skspcl_table[i];
-
-		/* If orphaned, reclaim it because it has already been reported
-		 * to the process as an error (it was just waiting for
-		 * a completion that didn't come, and now it will never come)
-		 * If busy, change to a state that will cause it to error
-		 * out in the wait routine and let it do the normal
-		 * reporting and reclaiming
-		 */
-		if (skspcl->req.state == SKD_REQ_STATE_BUSY) {
-			if (skspcl->orphaned) {
-				dev_dbg(&skdev->pdev->dev, "orphaned %p\n",
-					skspcl);
-				skd_release_special(skdev, skspcl);
-			} else {
-				dev_dbg(&skdev->pdev->dev, "not orphaned %p\n",
-					skspcl);
-				skspcl->req.state = SKD_REQ_STATE_ABORTED;
-			}
-		}
-	}
-	skdev->skspcl_free_list = skdev->skspcl_table;
-
 	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
 		skdev->timeout_slot[i] = 0;
 
@@ -3947,72 +2999,6 @@ static int skd_cons_skreq(struct skd_device *skdev)
 	return rc;
 }
 
-static int skd_cons_skspcl(struct skd_device *skdev)
-{
-	int rc = 0;
-	u32 i, nbytes;
-
-	dev_dbg(&skdev->pdev->dev,
-		"skspcl_table kcalloc, struct %lu, count %u total %lu\n",
-		sizeof(struct skd_special_context), skdev->n_special,
-		sizeof(struct skd_special_context) * skdev->n_special);
-
-	skdev->skspcl_table = kcalloc(skdev->n_special,
-				      sizeof(struct skd_special_context),
-				      GFP_KERNEL);
-	if (skdev->skspcl_table == NULL) {
-		rc = -ENOMEM;
-		goto err_out;
-	}
-
-	for (i = 0; i < skdev->n_special; i++) {
-		struct skd_special_context *skspcl;
-
-		skspcl = &skdev->skspcl_table[i];
-
-		skspcl->req.id = i + SKD_ID_SPECIAL_REQUEST;
-		skspcl->req.state = SKD_REQ_STATE_IDLE;
-
-		skspcl->req.next = &skspcl[1].req;
-
-		nbytes = SKD_N_SPECIAL_FITMSG_BYTES;
-
-		skspcl->msg_buf =
-			pci_zalloc_consistent(skdev->pdev, nbytes,
-					      &skspcl->mb_dma_address);
-		if (skspcl->msg_buf == NULL) {
-			rc = -ENOMEM;
-			goto err_out;
-		}
-
-		skspcl->req.sg = kcalloc(SKD_N_SG_PER_SPECIAL,
-					 sizeof(struct scatterlist),
-					 GFP_KERNEL);
-		if (skspcl->req.sg == NULL) {
-			rc = -ENOMEM;
-			goto err_out;
-		}
-
-		skspcl->req.sksg_list = skd_cons_sg_list(skdev,
-							 SKD_N_SG_PER_SPECIAL,
-							 &skspcl->req.
-							 sksg_dma_address);
-		if (skspcl->req.sksg_list == NULL) {
-			rc = -ENOMEM;
-			goto err_out;
-		}
-	}
-
-	/* Free list is in order starting with the 0th entry. */
-	skdev->skspcl_table[i - 1].req.next = NULL;
-	skdev->skspcl_free_list = skdev->skspcl_table;
-
-	return rc;
-
-err_out:
-	return rc;
-}
-
 static int skd_cons_sksb(struct skd_device *skdev)
 {
 	int rc = 0;
@@ -4132,7 +3118,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 
 	skdev->num_req_context = skd_max_queue_depth;
 	skdev->num_fitmsg_context = skd_max_queue_depth;
-	skdev->n_special = skd_max_pass_thru;
 	skdev->cur_max_queue_depth = 1;
 	skdev->queue_low_water_mark = 1;
 	skdev->proto_ver = 99;
@@ -4158,11 +3143,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	if (rc < 0)
 		goto err_out;
 
-	dev_dbg(&skdev->pdev->dev, "skspcl\n");
-	rc = skd_cons_skspcl(skdev);
-	if (rc < 0)
-		goto err_out;
-
 	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	rc = skd_cons_sksb(skdev);
 	if (rc < 0)
@@ -4262,43 +3242,6 @@ static void skd_free_skreq(struct skd_device *skdev)
 	skdev->skreq_table = NULL;
 }
 
-static void skd_free_skspcl(struct skd_device *skdev)
-{
-	u32 i;
-	u32 nbytes;
-
-	if (skdev->skspcl_table == NULL)
-		return;
-
-	for (i = 0; i < skdev->n_special; i++) {
-		struct skd_special_context *skspcl;
-
-		skspcl = &skdev->skspcl_table[i];
-
-		if (skspcl->msg_buf != NULL) {
-			nbytes = SKD_N_SPECIAL_FITMSG_BYTES;
-			pci_free_consistent(skdev->pdev, nbytes,
-					    skspcl->msg_buf,
-					    skspcl->mb_dma_address);
-		}
-
-		skspcl->msg_buf = NULL;
-		skspcl->mb_dma_address = 0;
-
-		skd_free_sg_list(skdev, skspcl->req.sksg_list,
-				 SKD_N_SG_PER_SPECIAL,
-				 skspcl->req.sksg_dma_address);
-
-		skspcl->req.sksg_list = NULL;
-		skspcl->req.sksg_dma_address = 0;
-
-		kfree(skspcl->req.sg);
-	}
-
-	kfree(skdev->skspcl_table);
-	skdev->skspcl_table = NULL;
-}
-
 static void skd_free_sksb(struct skd_device *skdev)
 {
 	struct skd_special_context *skspcl;
@@ -4360,9 +3303,6 @@ static void skd_destruct(struct skd_device *skdev)
 	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	skd_free_sksb(skdev);
 
-	dev_dbg(&skdev->pdev->dev, "skspcl\n");
-	skd_free_skspcl(skdev);
-
 	dev_dbg(&skdev->pdev->dev, "skreq\n");
 	skd_free_skreq(skdev);
 
@@ -4412,7 +3352,6 @@ static int skd_bdev_attach(struct device *parent, struct skd_device *skdev)
 
 static const struct block_device_operations skd_blockdev_ops = {
 	.owner		= THIS_MODULE,
-	.ioctl		= skd_bdev_ioctl,
 	.getgeo		= skd_bdev_getgeo,
 };
 
@@ -4997,12 +3936,6 @@ static int __init skd_init(void)
 		skd_isr_comp_limit = 0;
 	}
 
-	if (skd_max_pass_thru < 1 || skd_max_pass_thru > 50) {
-		pr_err(PFX "skd_max_pass_thru %d invalid, re-set to %d\n",
-		       skd_max_pass_thru, SKD_N_SPECIAL_CONTEXT);
-		skd_max_pass_thru = SKD_N_SPECIAL_CONTEXT;
-	}
-
 	return pci_register_driver(&skd_driver);
 }
 
-- 
2.14.0

* [PATCH 41/55] skd: Remove dead code
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (39 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 40/55] skd: Remove SG IO support Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 42/55] skd: Initialize skd_special_context.req.n_sg to one Bart Van Assche
                   ` (15 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Removing the SG IO code also removed the code that sets
SKD_REQ_STATE_ABORTED. Hence also remove the code that checks for
this state.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 13d06598c1b7..c7f531e99ede 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -152,7 +152,6 @@ enum skd_req_state {
 	SKD_REQ_STATE_BUSY,
 	SKD_REQ_STATE_COMPLETED,
 	SKD_REQ_STATE_TIMEOUT,
-	SKD_REQ_STATE_ABORTED,
 };
 
 enum skd_fit_msg_state {
@@ -1734,15 +1733,6 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 
 		SKD_ASSERT(skreq->state == SKD_REQ_STATE_BUSY);
 
-		if (skreq->state == SKD_REQ_STATE_ABORTED) {
-			dev_dbg(&skdev->pdev->dev, "reclaim req %p id=%04x\n",
-				skreq, skreq->id);
-			/* a previously timed out command can
-			 * now be cleaned up */
-			skd_release_skreq(skdev, skreq);
-			continue;
-		}
-
 		skreq->completion = *skcmp;
 		if (unlikely(cmp_status == SAM_STAT_CHECK_CONDITION)) {
 			skreq->err_info = *skerr;
@@ -3823,8 +3813,6 @@ static const char *skd_skreq_state_to_str(enum skd_req_state state)
 		return "COMPLETED";
 	case SKD_REQ_STATE_TIMEOUT:
 		return "TIMEOUT";
-	case SKD_REQ_STATE_ABORTED:
-		return "ABORTED";
 	default:
 		return "???";
 	}
-- 
2.14.0

* [PATCH 42/55] skd: Initialize skd_special_context.req.n_sg to one
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (40 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 41/55] skd: Remove dead code Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 43/55] skd: Enable request tags for the block layer queue Bart Van Assche
                   ` (14 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The debug code in skd_send_special_fitmsg() assumes that req.n_sg
represents the number of S/G descriptors. However, skd_construct()
initializes that member variable to zero. Set req.n_sg to one such
that the debugging code in skd_send_special_fitmsg() works as
expected.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index c7f531e99ede..392c898d86e2 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -1050,6 +1050,7 @@ static int skd_format_internal_skspcl(struct skd_device *skdev)
 	memset(scsi, 0, sizeof(*scsi));
 	dma_address = skspcl->req.sksg_dma_address;
 	scsi->hdr.sg_list_dma_address = cpu_to_be64(dma_address);
+	skspcl->req.n_sg = 1;
 	sgd->control = FIT_SGD_CONTROL_LAST;
 	sgd->byte_count = 0;
 	sgd->host_side_addr = skspcl->db_dma_address;
-- 
2.14.0

* [PATCH 43/55] skd: Enable request tags for the block layer queue
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (41 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 42/55] skd: Initialize skd_special_context.req.n_sg to one Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 44/55] skd: Convert several per-device scalar variables into atomics Bart Van Assche
                   ` (13 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Use the request tag when allocating a skd_fitmsg_context or
skd_request_context such that the lists used to track free elements
can be eliminated. Swap the skd_end_request() and skd_release_skreq()
calls to avoid triggering a use-after-free. Remove
skd_fitmsg_context.state and .outstanding because FIT messages are
shared among requests and because updating a FIT message after a
request has finished would trigger a use-after-free.
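
Not part of the patch, but as a rough illustration of the idea: once
every in-flight request carries a block layer tag, the per-request
driver context can be found by indexing a table with that tag instead
of popping it off a free list. A minimal sketch with invented names
("sketch_ctx" and "sketch_dev" stand in for skd_request_context and
skd_device); only blk_mq_unique_tag() and WARN_ON_ONCE() are real
kernel APIs:

#include <linux/blk-mq.h>

struct sketch_ctx {
	int state;
};

struct sketch_dev {
	u32 depth;			/* queue depth used for the tag map */
	struct sketch_ctx *ctx_table;	/* one entry per possible tag */
};

static struct sketch_ctx *sketch_ctx_for_req(struct sketch_dev *dev,
					     struct request *req)
{
	/* for a legacy queue with tagging enabled this is simply rq->tag */
	u32 tag = blk_mq_unique_tag(req);

	if (WARN_ON_ONCE(tag >= dev->depth))
		return NULL;

	return &dev->ctx_table[tag];	/* O(1) lookup, no free list */
}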

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 267 +++++++++++++----------------------------------
 1 file changed, 73 insertions(+), 194 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 392c898d86e2..35343fbf4144 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -16,6 +16,7 @@
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/blkdev.h>
+#include <linux/blk-mq.h>
 #include <linux/sched.h>
 #include <linux/interrupt.h>
 #include <linux/compiler.h>
@@ -154,11 +155,6 @@ enum skd_req_state {
 	SKD_REQ_STATE_TIMEOUT,
 };
 
-enum skd_fit_msg_state {
-	SKD_MSG_STATE_IDLE,
-	SKD_MSG_STATE_BUSY,
-};
-
 enum skd_check_status_action {
 	SKD_CHECK_STATUS_REPORT_GOOD,
 	SKD_CHECK_STATUS_REPORT_SMART_ALERT,
@@ -173,12 +169,7 @@ struct skd_msg_buf {
 };
 
 struct skd_fitmsg_context {
-	enum skd_fit_msg_state state;
-
-	struct skd_fitmsg_context *next;
-
 	u32 id;
-	u16 outstanding;
 
 	u32 length;
 
@@ -189,8 +180,6 @@ struct skd_fitmsg_context {
 struct skd_request_context {
 	enum skd_req_state state;
 
-	struct skd_request_context *next;
-
 	u16 id;
 	u32 fitmsg_id;
 
@@ -264,10 +253,8 @@ struct skd_device {
 
 	u32 timeout_slot[SKD_N_TIMEOUT_SLOT];
 	u32 timeout_stamp;
-	struct skd_fitmsg_context *skmsg_free_list;
 	struct skd_fitmsg_context *skmsg_table;
 
-	struct skd_request_context *skreq_free_list;
 	struct skd_request_context *skreq_table;
 
 	struct skd_special_context internal_skspcl;
@@ -387,8 +374,8 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 static void skd_send_special_fitmsg(struct skd_device *skdev,
 				    struct skd_special_context *skspcl);
 static void skd_request_fn(struct request_queue *rq);
-static void skd_end_request(struct skd_device *skdev,
-		struct skd_request_context *skreq, blk_status_t status);
+static void skd_end_request(struct skd_device *skdev, struct request *req,
+			    blk_status_t status);
 static bool skd_preop_sg_list(struct skd_device *skdev,
 			     struct skd_request_context *skreq);
 static void skd_postop_sg_list(struct skd_device *skdev,
@@ -405,8 +392,6 @@ static void skd_soft_reset(struct skd_device *skdev);
 const char *skd_drive_state_to_str(int state);
 const char *skd_skdev_state_to_str(enum skd_drvr_state state);
 static void skd_log_skdev(struct skd_device *skdev, const char *event);
-static void skd_log_skmsg(struct skd_device *skdev,
-			  struct skd_fitmsg_context *skmsg, const char *event);
 static void skd_log_skreq(struct skd_device *skdev,
 			  struct skd_request_context *skreq, const char *event);
 
@@ -424,7 +409,7 @@ static void skd_fail_all_pending(struct skd_device *skdev)
 		req = blk_peek_request(q);
 		if (req == NULL)
 			break;
-		blk_start_request(req);
+		WARN_ON_ONCE(blk_queue_start_tag(q, req));
 		__blk_end_request_all(req, BLK_STS_IOERR);
 	}
 }
@@ -523,6 +508,7 @@ static void skd_request_fn(struct request_queue *q)
 	u64 cmdctxt;
 	u32 timo_slot;
 	int flush, fua;
+	u32 tag;
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
 		if (skd_fail_all(q))
@@ -531,9 +517,7 @@ static void skd_request_fn(struct request_queue *q)
 	}
 
 	if (blk_queue_stopped(skdev->queue)) {
-		if (skdev->skmsg_free_list == NULL ||
-		    skdev->skreq_free_list == NULL ||
-		    skdev->in_flight >= skdev->queue_low_water_mark)
+		if (skdev->in_flight >= skdev->queue_low_water_mark)
 			/* There is still some kind of shortage */
 			return;
 
@@ -581,27 +565,6 @@ static void skd_request_fn(struct request_queue *q)
 			break;
 		}
 
-		/* Is a skd_request_context available? */
-		skreq = skdev->skreq_free_list;
-		if (skreq == NULL) {
-			dev_dbg(&skdev->pdev->dev, "Out of req=%p\n", q);
-			break;
-		}
-		SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
-		SKD_ASSERT((skreq->id & SKD_ID_INCR) == 0);
-
-		/* Now we check to see if we can get a fit msg */
-		if (skmsg == NULL) {
-			if (skdev->skmsg_free_list == NULL) {
-				dev_dbg(&skdev->pdev->dev, "Out of msg\n");
-				break;
-			}
-		}
-
-		skreq->flush_cmd = 0;
-		skreq->n_sg = 0;
-		skreq->sg_byte_count = 0;
-
 		/*
 		 * OK to now dequeue request from q.
 		 *
@@ -609,7 +572,22 @@ static void skd_request_fn(struct request_queue *q)
 		 * the native request. Note that skd_request_context is
 		 * available but is still at the head of the free list.
 		 */
-		blk_start_request(req);
+		WARN_ON_ONCE(blk_queue_start_tag(q, req));
+
+		tag = blk_mq_unique_tag(req);
+		WARN_ONCE(tag >= skd_max_queue_depth,
+			  "%#x > %#x (nr_requests = %lu)\n", tag,
+			  skd_max_queue_depth, q->nr_requests);
+
+		skreq = &skdev->skreq_table[tag];
+		SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
+		SKD_ASSERT((skreq->id & SKD_ID_INCR) == 0);
+
+		skreq->id = tag + SKD_ID_RW_REQUEST;
+		skreq->flush_cmd = 0;
+		skreq->n_sg = 0;
+		skreq->sg_byte_count = 0;
+
 		skreq->req = req;
 		skreq->fitmsg_id = 0;
 
@@ -618,27 +596,13 @@ static void skd_request_fn(struct request_queue *q)
 
 		if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
 			dev_dbg(&skdev->pdev->dev, "error Out\n");
-			skd_end_request(skdev, skreq, BLK_STS_RESOURCE);
+			skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
 			continue;
 		}
 
 		/* Either a FIT msg is in progress or we have to start one. */
 		if (skmsg == NULL) {
-			/* Are there any FIT msg buffers available? */
-			skmsg = skdev->skmsg_free_list;
-			if (skmsg == NULL) {
-				dev_dbg(&skdev->pdev->dev,
-					"Out of msg skdev=%p\n",
-					skdev);
-				break;
-			}
-			SKD_ASSERT(skmsg->state == SKD_MSG_STATE_IDLE);
-			SKD_ASSERT((skmsg->id & SKD_ID_INCR) == 0);
-
-			skdev->skmsg_free_list = skmsg->next;
-
-			skmsg->state = SKD_MSG_STATE_BUSY;
-			skmsg->id += SKD_ID_INCR;
+			skmsg = &skdev->skmsg_table[tag];
 
 			/* Initialize the FIT msg header */
 			fmh = &skmsg->msg_buf->fmh;
@@ -673,7 +637,6 @@ static void skd_request_fn(struct request_queue *q)
 			cpu_to_be32(skreq->sg_byte_count);
 
 		/* Complete resource allocations. */
-		skdev->skreq_free_list = skreq->next;
 		skreq->state = SKD_REQ_STATE_BUSY;
 		skreq->id += SKD_ID_INCR;
 
@@ -717,23 +680,22 @@ static void skd_request_fn(struct request_queue *q)
 		blk_stop_queue(skdev->queue);
 }
 
-static void skd_end_request(struct skd_device *skdev,
-		struct skd_request_context *skreq, blk_status_t error)
+static void skd_end_request(struct skd_device *skdev, struct request *req,
+			    blk_status_t error)
 {
 	if (unlikely(error)) {
-		struct request *req = skreq->req;
 		char *cmd = (rq_data_dir(req) == READ) ? "read" : "write";
 		u32 lba = (u32)blk_rq_pos(req);
 		u32 count = blk_rq_sectors(req);
 
 		dev_err(&skdev->pdev->dev,
 			"Error cmd=%s sect=%u count=%u id=0x%x\n", cmd, lba,
-			count, skreq->id);
+			count, req->tag);
 	} else
-		dev_dbg(&skdev->pdev->dev, "id=0x%x error=%d\n", skreq->id,
+		dev_dbg(&skdev->pdev->dev, "id=0x%x error=%d\n", req->tag,
 			error);
 
-	__blk_end_request_all(skreq->req, error);
+	__blk_end_request_all(req, error);
 }
 
 static bool skd_preop_sg_list(struct skd_device *skdev,
@@ -1346,7 +1308,6 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 			    struct skd_fitmsg_context *skmsg)
 {
 	u64 qcmd;
-	struct fit_msg_hdr *fmh;
 
 	dev_dbg(&skdev->pdev->dev, "dma address 0x%llx, busy=%d\n",
 		skmsg->mb_dma_address, skdev->in_flight);
@@ -1355,9 +1316,6 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 	qcmd = skmsg->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL;
 
-	fmh = &skmsg->msg_buf->fmh;
-	skmsg->outstanding = fmh->num_protocol_cmds_coalesced;
-
 	if (unlikely(skdev->dbg_level > 1)) {
 		u8 *bp = (u8 *)skmsg->msg_buf;
 		int i;
@@ -1547,19 +1505,20 @@ skd_check_status(struct skd_device *skdev,
 }
 
 static void skd_resolve_req_exception(struct skd_device *skdev,
-				      struct skd_request_context *skreq)
+				      struct skd_request_context *skreq,
+				      struct request *req)
 {
 	u8 cmp_status = skreq->completion.status;
 
 	switch (skd_check_status(skdev, cmp_status, &skreq->err_info)) {
 	case SKD_CHECK_STATUS_REPORT_GOOD:
 	case SKD_CHECK_STATUS_REPORT_SMART_ALERT:
-		skd_end_request(skdev, skreq, BLK_STS_OK);
+		skd_end_request(skdev, req, BLK_STS_OK);
 		break;
 
 	case SKD_CHECK_STATUS_BUSY_IMMINENT:
 		skd_log_skreq(skdev, skreq, "retry(busy)");
-		blk_requeue_request(skdev->queue, skreq->req);
+		blk_requeue_request(skdev->queue, req);
 		dev_info(&skdev->pdev->dev, "drive BUSY imminent\n");
 		skdev->state = SKD_DRVR_STATE_BUSY_IMMINENT;
 		skdev->timer_countdown = SKD_TIMER_MINUTES(20);
@@ -1567,16 +1526,16 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 		break;
 
 	case SKD_CHECK_STATUS_REQUEUE_REQUEST:
-		if ((unsigned long) ++skreq->req->special < SKD_MAX_RETRIES) {
+		if ((unsigned long) ++req->special < SKD_MAX_RETRIES) {
 			skd_log_skreq(skdev, skreq, "retry");
-			blk_requeue_request(skdev->queue, skreq->req);
+			blk_requeue_request(skdev->queue, req);
 			break;
 		}
 		/* fall through */
 
 	case SKD_CHECK_STATUS_REPORT_ERROR:
 	default:
-		skd_end_request(skdev, skreq, BLK_STS_IOERR);
+		skd_end_request(skdev, req, BLK_STS_IOERR);
 		break;
 	}
 }
@@ -1585,44 +1544,8 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 static void skd_release_skreq(struct skd_device *skdev,
 			      struct skd_request_context *skreq)
 {
-	u32 msg_slot;
-	struct skd_fitmsg_context *skmsg;
-
 	u32 timo_slot;
 
-	/*
-	 * Reclaim the FIT msg buffer if this is
-	 * the first of the requests it carried to
-	 * be completed. The FIT msg buffer used to
-	 * send this request cannot be reused until
-	 * we are sure the s1120 card has copied
-	 * it to its memory. The FIT msg might have
-	 * contained several requests. As soon as
-	 * any of them are completed we know that
-	 * the entire FIT msg was transferred.
-	 * Only the first completed request will
-	 * match the FIT msg buffer id. The FIT
-	 * msg buffer id is immediately updated.
-	 * When subsequent requests complete the FIT
-	 * msg buffer id won't match, so we know
-	 * quite cheaply that it is already done.
-	 */
-	msg_slot = skreq->fitmsg_id & SKD_ID_SLOT_MASK;
-	SKD_ASSERT(msg_slot < skdev->num_fitmsg_context);
-
-	skmsg = &skdev->skmsg_table[msg_slot];
-	if (skmsg->id == skreq->fitmsg_id) {
-		SKD_ASSERT(skmsg->state == SKD_MSG_STATE_BUSY);
-		SKD_ASSERT(skmsg->outstanding > 0);
-		skmsg->outstanding--;
-		if (skmsg->outstanding == 0) {
-			skmsg->state = SKD_MSG_STATE_IDLE;
-			skmsg->id += SKD_ID_INCR;
-			skmsg->next = skdev->skmsg_free_list;
-			skdev->skmsg_free_list = skmsg;
-		}
-	}
-
 	/*
 	 * Decrease the number of active requests.
 	 * Also decrements the count in the timeout slot.
@@ -1644,8 +1567,20 @@ static void skd_release_skreq(struct skd_device *skdev,
 	 */
 	skreq->state = SKD_REQ_STATE_IDLE;
 	skreq->id += SKD_ID_INCR;
-	skreq->next = skdev->skreq_free_list;
-	skdev->skreq_free_list = skreq;
+}
+
+static struct skd_request_context *skd_skreq_from_rq(struct skd_device *skdev,
+						     struct request *rq)
+{
+	struct skd_request_context *skreq;
+	int i;
+
+	for (i = 0, skreq = skdev->skreq_table; i < skdev->num_fitmsg_context;
+	     i++, skreq++)
+		if (skreq->req == rq)
+			return skreq;
+
+	return NULL;
 }
 
 static int skd_isr_completion_posted(struct skd_device *skdev,
@@ -1654,7 +1589,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 	struct fit_completion_entry_v1 *skcmp;
 	struct fit_comp_error_info *skerr;
 	u16 req_id;
-	u32 req_slot;
+	u32 tag;
+	struct request *rq;
 	struct skd_request_context *skreq;
 	u16 cmp_cntxt;
 	u8 cmp_status;
@@ -1702,18 +1638,24 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		 * r/w request (see skd_start() above) or a special request.
 		 */
 		req_id = cmp_cntxt;
-		req_slot = req_id & SKD_ID_SLOT_AND_TABLE_MASK;
+		tag = req_id & SKD_ID_SLOT_AND_TABLE_MASK;
 
 		/* Is this other than a r/w request? */
-		if (req_slot >= skdev->num_req_context) {
+		if (tag >= skdev->num_req_context) {
 			/*
 			 * This is not a completion for a r/w request.
 			 */
+			WARN_ON_ONCE(blk_map_queue_find_tag(skdev->queue->
+							    queue_tags, tag));
 			skd_complete_other(skdev, skcmp, skerr);
 			continue;
 		}
 
-		skreq = &skdev->skreq_table[req_slot];
+		rq = blk_map_queue_find_tag(skdev->queue->queue_tags, tag);
+		if (WARN(!rq, "No request for tag %#x -> %#x\n", cmp_cntxt,
+			 tag))
+			continue;
+		skreq = skd_skreq_from_rq(skdev, rq);
 
 		/*
 		 * Make sure the request ID for the slot matches.
@@ -1745,26 +1687,16 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		if (skreq->n_sg > 0)
 			skd_postop_sg_list(skdev, skreq);
 
-		if (!skreq->req) {
-			dev_dbg(&skdev->pdev->dev,
-				"NULL backptr skdreq %p, req=0x%x req_id=0x%x\n",
-				skreq, skreq->id, req_id);
-		} else {
-			/*
-			 * Capture the outcome and post it back to the
-			 * native request.
-			 */
-			if (likely(cmp_status == SAM_STAT_GOOD))
-				skd_end_request(skdev, skreq, BLK_STS_OK);
-			else
-				skd_resolve_req_exception(skdev, skreq);
-		}
+		/* Mark the FIT msg and timeout slot as free. */
+		skd_release_skreq(skdev, skreq);
 
 		/*
-		 * Release the skreq, its FIT msg (if one), timeout slot,
-		 * and queue depth.
+		 * Capture the outcome and post it back to the native request.
 		 */
-		skd_release_skreq(skdev, skreq);
+		if (likely(cmp_status == SAM_STAT_GOOD))
+			skd_end_request(skdev, rq, BLK_STS_OK);
+		else
+			skd_resolve_req_exception(skdev, skreq, rq);
 
 		/* skd_isr_comp_limit equal zero means no limit */
 		if (limit) {
@@ -2099,44 +2031,26 @@ static void skd_recover_requests(struct skd_device *skdev)
 
 	for (i = 0; i < skdev->num_req_context; i++) {
 		struct skd_request_context *skreq = &skdev->skreq_table[i];
+		struct request *req = skreq->req;
 
 		if (skreq->state == SKD_REQ_STATE_BUSY) {
 			skd_log_skreq(skdev, skreq, "recover");
 
 			SKD_ASSERT((skreq->id & SKD_ID_INCR) != 0);
-			SKD_ASSERT(skreq->req != NULL);
+			SKD_ASSERT(req != NULL);
 
 			/* Release DMA resources for the request. */
 			if (skreq->n_sg > 0)
 				skd_postop_sg_list(skdev, skreq);
 
-			skd_end_request(skdev, skreq, BLK_STS_IOERR);
-
 			skreq->req = NULL;
 
 			skreq->state = SKD_REQ_STATE_IDLE;
 			skreq->id += SKD_ID_INCR;
-		}
-		if (i > 0)
-			skreq[-1].next = skreq;
-		skreq->next = NULL;
-	}
-	skdev->skreq_free_list = skdev->skreq_table;
-
-	for (i = 0; i < skdev->num_fitmsg_context; i++) {
-		struct skd_fitmsg_context *skmsg = &skdev->skmsg_table[i];
 
-		if (skmsg->state == SKD_MSG_STATE_BUSY) {
-			skd_log_skmsg(skdev, skmsg, "salvaged");
-			SKD_ASSERT((skmsg->id & SKD_ID_INCR) != 0);
-			skmsg->state = SKD_MSG_STATE_IDLE;
-			skmsg->id += SKD_ID_INCR;
+			skd_end_request(skdev, req, BLK_STS_IOERR);
 		}
-		if (i > 0)
-			skmsg[-1].next = skmsg;
-		skmsg->next = NULL;
 	}
-	skdev->skmsg_free_list = skdev->skmsg_table;
 
 	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
 		skdev->timeout_slot[i] = 0;
@@ -2876,7 +2790,6 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 
 		skmsg->id = i + SKD_ID_FIT_MSG;
 
-		skmsg->state = SKD_MSG_STATE_IDLE;
 		skmsg->msg_buf = pci_alloc_consistent(skdev->pdev,
 						      SKD_N_FITMSG_BYTES,
 						      &skmsg->mb_dma_address);
@@ -2891,14 +2804,8 @@ static int skd_cons_skmsg(struct skd_device *skdev)
 		     "not aligned: msg_buf %p mb_dma_address %#llx\n",
 		     skmsg->msg_buf, skmsg->mb_dma_address);
 		memset(skmsg->msg_buf, 0, SKD_N_FITMSG_BYTES);
-
-		skmsg->next = &skmsg[1];
 	}
 
-	/* Free list is in order starting with the 0th entry. */
-	skdev->skmsg_table[i - 1].next = NULL;
-	skdev->skmsg_free_list = skdev->skmsg_table;
-
 err_out:
 	return rc;
 }
@@ -2958,10 +2865,7 @@ static int skd_cons_skreq(struct skd_device *skdev)
 		struct skd_request_context *skreq;
 
 		skreq = &skdev->skreq_table[i];
-
-		skreq->id = i + SKD_ID_RW_REQUEST;
 		skreq->state = SKD_REQ_STATE_IDLE;
-
 		skreq->sg = kcalloc(skdev->sgs_per_request,
 				    sizeof(struct scatterlist), GFP_KERNEL);
 		if (skreq->sg == NULL) {
@@ -2978,14 +2882,8 @@ static int skd_cons_skreq(struct skd_device *skdev)
 			rc = -ENOMEM;
 			goto err_out;
 		}
-
-		skreq->next = &skreq[1];
 	}
 
-	/* Free list is in order starting with the 0th entry. */
-	skdev->skreq_table[i - 1].next = NULL;
-	skdev->skreq_free_list = skdev->skreq_table;
-
 err_out:
 	return rc;
 }
@@ -3061,6 +2959,8 @@ static int skd_cons_disk(struct skd_device *skdev)
 		goto err_out;
 	}
 	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
+	q->nr_requests = skd_max_queue_depth / 2;
+	blk_queue_init_tags(q, skd_max_queue_depth, NULL, BLK_TAG_ALLOC_FIFO);
 
 	skdev->queue = q;
 	disk->queue = q;
@@ -3789,18 +3689,6 @@ const char *skd_skdev_state_to_str(enum skd_drvr_state state)
 	}
 }
 
-static const char *skd_skmsg_state_to_str(enum skd_fit_msg_state state)
-{
-	switch (state) {
-	case SKD_MSG_STATE_IDLE:
-		return "IDLE";
-	case SKD_MSG_STATE_BUSY:
-		return "BUSY";
-	default:
-		return "???";
-	}
-}
-
 static const char *skd_skreq_state_to_str(enum skd_req_state state)
 {
 	switch (state) {
@@ -3832,15 +3720,6 @@ static void skd_log_skdev(struct skd_device *skdev, const char *event)
 		skdev->timeout_stamp, skdev->skcomp_cycle, skdev->skcomp_ix);
 }
 
-static void skd_log_skmsg(struct skd_device *skdev,
-			  struct skd_fitmsg_context *skmsg, const char *event)
-{
-	dev_dbg(&skdev->pdev->dev, "skmsg=%p event='%s'\n", skmsg, event);
-	dev_dbg(&skdev->pdev->dev, "  state=%s(%d) id=0x%04x length=%d\n",
-		skd_skmsg_state_to_str(skmsg->state), skmsg->state, skmsg->id,
-		skmsg->length);
-}
-
 static void skd_log_skreq(struct skd_device *skdev,
 			  struct skd_request_context *skreq, const char *event)
 {
-- 
2.14.0

* [PATCH 44/55] skd: Convert several per-device scalar variables into atomics
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (42 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 43/55] skd: Enable request tags for the block layer queue Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 45/55] skd: Introduce skd_process_request() Bart Van Assche
                   ` (12 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Convert the per-device scalar variables that are protected by the
queue lock into atomics such that it becomes safe to access these
variables without holding the queue lock.
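
Purely as an illustration of the pattern (not taken from the patch):
a counter that used to be guarded by the queue lock becomes an
atomic_t, so it can be read and updated without the lock. The helper
names below are invented; in_flight mirrors the per-device counter
converted by this patch:

#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/types.h>

static void sketch_start_io(atomic_t *in_flight)
{
	atomic_inc(in_flight);			/* no queue lock needed */
}

static void sketch_end_io(atomic_t *in_flight)
{
	WARN_ON_ONCE(atomic_dec_return(in_flight) < 0);
}

static bool sketch_too_busy(const atomic_t *in_flight, int limit)
{
	return atomic_read(in_flight) >= limit;	/* safe without the lock */
}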

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 68 ++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 31 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 35343fbf4144..4b92d711d2d3 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -243,7 +243,7 @@ struct skd_device {
 	enum skd_drvr_state state;
 	u32 drive_state;
 
-	u32 in_flight;
+	atomic_t in_flight;
 	u32 cur_max_queue_depth;
 	u32 queue_low_water_mark;
 	u32 dev_max_queue_depth;
@@ -251,8 +251,8 @@ struct skd_device {
 	u32 num_fitmsg_context;
 	u32 num_req_context;
 
-	u32 timeout_slot[SKD_N_TIMEOUT_SLOT];
-	u32 timeout_stamp;
+	atomic_t timeout_slot[SKD_N_TIMEOUT_SLOT];
+	atomic_t timeout_stamp;
 	struct skd_fitmsg_context *skmsg_table;
 
 	struct skd_request_context *skreq_table;
@@ -517,7 +517,8 @@ static void skd_request_fn(struct request_queue *q)
 	}
 
 	if (blk_queue_stopped(skdev->queue)) {
-		if (skdev->in_flight >= skdev->queue_low_water_mark)
+		if (atomic_read(&skdev->in_flight) >=
+		    skdev->queue_low_water_mark)
 			/* There is still some kind of shortage */
 			return;
 
@@ -559,9 +560,11 @@ static void skd_request_fn(struct request_queue *q)
 		/* At this point we know there is a request */
 
 		/* Are too many requets already in progress? */
-		if (skdev->in_flight >= skdev->cur_max_queue_depth) {
+		if (atomic_read(&skdev->in_flight) >=
+		    skdev->cur_max_queue_depth) {
 			dev_dbg(&skdev->pdev->dev, "qdepth %d, limit %d\n",
-				skdev->in_flight, skdev->cur_max_queue_depth);
+				atomic_read(&skdev->in_flight),
+				skdev->cur_max_queue_depth);
 			break;
 		}
 
@@ -647,12 +650,12 @@ static void skd_request_fn(struct request_queue *q)
 		 * Update the active request counts.
 		 * Capture the timeout timestamp.
 		 */
-		skreq->timeout_stamp = skdev->timeout_stamp;
+		skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
 		timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-		skdev->timeout_slot[timo_slot]++;
-		skdev->in_flight++;
+		atomic_inc(&skdev->timeout_slot[timo_slot]);
+		atomic_inc(&skdev->in_flight);
 		dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
-			skdev->in_flight);
+			atomic_read(&skdev->in_flight));
 
 		/*
 		 * If the FIT msg buffer is full send it.
@@ -805,22 +808,24 @@ static void skd_timer_tick(ulong arg)
 		skd_timer_tick_not_online(skdev);
 		goto timer_func_out;
 	}
-	skdev->timeout_stamp++;
-	timo_slot = skdev->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
+	timo_slot = atomic_inc_return(&skdev->timeout_stamp) &
+		SKD_TIMEOUT_SLOT_MASK;
 
 	/*
 	 * All requests that happened during the previous use of
 	 * this slot should be done by now. The previous use was
 	 * over 7 seconds ago.
 	 */
-	if (skdev->timeout_slot[timo_slot] == 0)
+	if (atomic_read(&skdev->timeout_slot[timo_slot]) == 0)
 		goto timer_func_out;
 
 	/* Something is overdue */
 	dev_dbg(&skdev->pdev->dev, "found %d timeouts, draining busy=%d\n",
-		skdev->timeout_slot[timo_slot], skdev->in_flight);
+		atomic_read(&skdev->timeout_slot[timo_slot]),
+		atomic_read(&skdev->in_flight));
 	dev_err(&skdev->pdev->dev, "Overdue IOs (%d), busy %d\n",
-		skdev->timeout_slot[timo_slot], skdev->in_flight);
+		atomic_read(&skdev->timeout_slot[timo_slot]),
+		atomic_read(&skdev->in_flight));
 
 	skdev->timer_countdown = SKD_DRAINING_TIMO;
 	skdev->state = SKD_DRVR_STATE_DRAINING_TIMEOUT;
@@ -900,10 +905,10 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 		dev_dbg(&skdev->pdev->dev,
 			"draining busy [%d] tick[%d] qdb[%d] tmls[%d]\n",
 			skdev->timo_slot, skdev->timer_countdown,
-			skdev->in_flight,
-			skdev->timeout_slot[skdev->timo_slot]);
+			atomic_read(&skdev->in_flight),
+			atomic_read(&skdev->timeout_slot[skdev->timo_slot]));
 		/* if the slot has cleared we can let the I/O continue */
-		if (skdev->timeout_slot[skdev->timo_slot] == 0) {
+		if (atomic_read(&skdev->timeout_slot[skdev->timo_slot]) == 0) {
 			dev_dbg(&skdev->pdev->dev,
 				"Slot drained, starting queue.\n");
 			skdev->state = SKD_DRVR_STATE_ONLINE;
@@ -1310,7 +1315,7 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 	u64 qcmd;
 
 	dev_dbg(&skdev->pdev->dev, "dma address 0x%llx, busy=%d\n",
-		skmsg->mb_dma_address, skdev->in_flight);
+		skmsg->mb_dma_address, atomic_read(&skdev->in_flight));
 	dev_dbg(&skdev->pdev->dev, "msg_buf %p\n", skmsg->msg_buf);
 
 	qcmd = skmsg->mb_dma_address;
@@ -1550,12 +1555,12 @@ static void skd_release_skreq(struct skd_device *skdev,
 	 * Decrease the number of active requests.
 	 * Also decrements the count in the timeout slot.
 	 */
-	SKD_ASSERT(skdev->in_flight > 0);
-	skdev->in_flight -= 1;
+	SKD_ASSERT(atomic_read(&skdev->in_flight) > 0);
+	atomic_dec(&skdev->in_flight);
 
 	timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-	SKD_ASSERT(skdev->timeout_slot[timo_slot] > 0);
-	skdev->timeout_slot[timo_slot] -= 1;
+	SKD_ASSERT(atomic_read(&skdev->timeout_slot[timo_slot]) > 0);
+	atomic_dec(&skdev->timeout_slot[timo_slot]);
 
 	/*
 	 * Reset backpointer
@@ -1615,8 +1620,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		dev_dbg(&skdev->pdev->dev,
 			"cycle=%d ix=%d got cycle=%d cmdctxt=0x%x stat=%d busy=%d rbytes=0x%x proto=%d\n",
 			skdev->skcomp_cycle, skdev->skcomp_ix, cmp_cycle,
-			cmp_cntxt, cmp_status, skdev->in_flight, cmp_bytes,
-			skdev->proto_ver);
+			cmp_cntxt, cmp_status, atomic_read(&skdev->in_flight),
+			cmp_bytes, skdev->proto_ver);
 
 		if (cmp_cycle != skdev->skcomp_cycle) {
 			dev_dbg(&skdev->pdev->dev, "end of completions\n");
@@ -1707,8 +1712,8 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		}
 	}
 
-	if ((skdev->state == SKD_DRVR_STATE_PAUSING)
-		&& (skdev->in_flight) == 0) {
+	if (skdev->state == SKD_DRVR_STATE_PAUSING &&
+	    atomic_read(&skdev->in_flight) == 0) {
 		skdev->state = SKD_DRVR_STATE_PAUSED;
 		wake_up_interruptible(&skdev->waitq);
 	}
@@ -2053,9 +2058,9 @@ static void skd_recover_requests(struct skd_device *skdev)
 	}
 
 	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
-		skdev->timeout_slot[i] = 0;
+		atomic_set(&skdev->timeout_slot[i], 0);
 
-	skdev->in_flight = 0;
+	atomic_set(&skdev->in_flight, 0);
 }
 
 static void skd_isr_msg_from_dev(struct skd_device *skdev)
@@ -3714,10 +3719,11 @@ static void skd_log_skdev(struct skd_device *skdev, const char *event)
 		skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
 		skd_skdev_state_to_str(skdev->state), skdev->state);
 	dev_dbg(&skdev->pdev->dev, "  busy=%d limit=%d dev=%d lowat=%d\n",
-		skdev->in_flight, skdev->cur_max_queue_depth,
+		atomic_read(&skdev->in_flight), skdev->cur_max_queue_depth,
 		skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
 	dev_dbg(&skdev->pdev->dev, "  timestamp=0x%x cycle=%d cycle_ix=%d\n",
-		skdev->timeout_stamp, skdev->skcomp_cycle, skdev->skcomp_ix);
+		atomic_read(&skdev->timeout_stamp), skdev->skcomp_cycle,
+		skdev->skcomp_ix);
 }
 
 static void skd_log_skreq(struct skd_device *skdev,
-- 
2.14.0

* [PATCH 45/55] skd: Introduce skd_process_request()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (43 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 44/55] skd: Convert several per-device scalar variables into atomics Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 46/55] skd: Split skd_recover_requests() Bart Van Assche
                   ` (11 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

The only functional change in this patch is that the skd_fitmsg_context
in which requests are accumulated is changed from a local variable into
a member of struct skd_device. This patch will make the blk-mq conversion
easier.
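
A hedged sketch of the accumulate-then-flush shape this enables (all
names below are invented; none of this code is in the driver): because
requests are now processed one at a time, the message being filled has
to live in the device structure so that it survives between calls, and
a partially filled message is flushed by the caller once the queue
runs dry.

#include <linux/types.h>

struct sketch_msg {
	unsigned int n_cmds;
};

struct sketch_dev {
	struct sketch_msg msg_pool[64];
	struct sketch_msg *cur;		/* plays the role of skd_device.skmsg */
};

static void sketch_flush(struct sketch_dev *dev)
{
	/* hand dev->cur to the hardware here */
	dev->cur = NULL;
}

static void sketch_process_one(struct sketch_dev *dev, unsigned int tag)
{
	if (!dev->cur) {		/* start a new message */
		dev->cur = &dev->msg_pool[tag];
		dev->cur->n_cmds = 0;
	}

	dev->cur->n_cmds++;		/* append this request to the message */

	if (dev->cur->n_cmds >= 4)	/* message full: send it immediately */
		sketch_flush(dev);
	/* otherwise the caller calls sketch_flush() when the queue drains */
}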

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 237 ++++++++++++++++++++++++-----------------------
 1 file changed, 119 insertions(+), 118 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 4b92d711d2d3..1d10373b0da3 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -232,6 +232,7 @@ struct skd_device {
 	spinlock_t lock;
 	struct gendisk *disk;
 	struct request_queue *queue;
+	struct skd_fitmsg_context *skmsg;
 	struct device *class_dev;
 	int gendisk_on;
 	int sync_done;
@@ -492,23 +493,128 @@ static bool skd_fail_all(struct request_queue *q)
 	}
 }
 
-static void skd_request_fn(struct request_queue *q)
+static void skd_process_request(struct request *req)
 {
+	struct request_queue *const q = req->q;
 	struct skd_device *skdev = q->queuedata;
-	struct skd_fitmsg_context *skmsg = NULL;
-	struct fit_msg_hdr *fmh = NULL;
-	struct skd_request_context *skreq;
-	struct request *req = NULL;
+	struct skd_fitmsg_context *skmsg;
+	struct fit_msg_hdr *fmh;
+	const u32 tag = blk_mq_unique_tag(req);
+	struct skd_request_context *const skreq = &skdev->skreq_table[tag];
 	struct skd_scsi_request *scsi_req;
 	unsigned long io_flags;
 	u32 lba;
 	u32 count;
 	int data_dir;
 	__be64 be_dmaa;
-	u64 cmdctxt;
 	u32 timo_slot;
 	int flush, fua;
-	u32 tag;
+
+	WARN_ONCE(tag >= skd_max_queue_depth, "%#x > %#x (nr_requests = %lu)\n",
+		  tag, skd_max_queue_depth, q->nr_requests);
+
+	SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
+
+	flush = fua = 0;
+
+	lba = (u32)blk_rq_pos(req);
+	count = blk_rq_sectors(req);
+	data_dir = rq_data_dir(req);
+	io_flags = req->cmd_flags;
+
+	if (req_op(req) == REQ_OP_FLUSH)
+		flush++;
+
+	if (io_flags & REQ_FUA)
+		fua++;
+
+	dev_dbg(&skdev->pdev->dev,
+		"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba,
+		lba, count, count, data_dir);
+
+	skreq->id = tag + SKD_ID_RW_REQUEST;
+	skreq->flush_cmd = 0;
+	skreq->n_sg = 0;
+	skreq->sg_byte_count = 0;
+
+	skreq->req = req;
+	skreq->fitmsg_id = 0;
+
+	skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+	if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
+		dev_dbg(&skdev->pdev->dev, "error Out\n");
+		skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
+		return;
+	}
+
+	/* Either a FIT msg is in progress or we have to start one. */
+	skmsg = skdev->skmsg;
+	if (!skmsg) {
+		skmsg = &skdev->skmsg_table[tag];
+		skdev->skmsg = skmsg;
+
+		/* Initialize the FIT msg header */
+		fmh = &skmsg->msg_buf->fmh;
+		memset(fmh, 0, sizeof(*fmh));
+		fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
+		skmsg->length = sizeof(*fmh);
+	} else {
+		fmh = &skmsg->msg_buf->fmh;
+	}
+
+	skreq->fitmsg_id = skmsg->id;
+
+	scsi_req = &skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
+	memset(scsi_req, 0, sizeof(*scsi_req));
+
+	be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
+
+	scsi_req->hdr.tag = skreq->id;
+	scsi_req->hdr.sg_list_dma_address = be_dmaa;
+
+	if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
+		skd_prep_zerosize_flush_cdb(scsi_req, skreq);
+		SKD_ASSERT(skreq->flush_cmd == 1);
+	} else {
+		skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
+	}
+
+	if (fua)
+		scsi_req->cdb[1] |= SKD_FUA_NV;
+
+	scsi_req->hdr.sg_list_len_bytes = cpu_to_be32(skreq->sg_byte_count);
+
+	/* Complete resource allocations. */
+	skreq->state = SKD_REQ_STATE_BUSY;
+
+	skmsg->length += sizeof(struct skd_scsi_request);
+	fmh->num_protocol_cmds_coalesced++;
+
+	/*
+	 * Update the active request counts.
+	 * Capture the timeout timestamp.
+	 */
+	skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
+	timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
+	atomic_inc(&skdev->timeout_slot[timo_slot]);
+	atomic_inc(&skdev->in_flight);
+	dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
+		atomic_read(&skdev->in_flight));
+
+	/*
+	 * If the FIT msg buffer is full send it.
+	 */
+	if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+		skd_send_fitmsg(skdev, skmsg);
+		skdev->skmsg = NULL;
+	}
+}
+
+static void skd_request_fn(struct request_queue *q)
+{
+	struct skd_device *skdev = q->queuedata;
+	struct request *req;
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
 		if (skd_fail_all(q))
@@ -533,30 +639,12 @@ static void skd_request_fn(struct request_queue *q)
 	 *  - There are no more FIT msg buffers
 	 */
 	for (;; ) {
-
-		flush = fua = 0;
-
 		req = blk_peek_request(q);
 
 		/* Are there any native requests to start? */
 		if (req == NULL)
 			break;
 
-		lba = (u32)blk_rq_pos(req);
-		count = blk_rq_sectors(req);
-		data_dir = rq_data_dir(req);
-		io_flags = req->cmd_flags;
-
-		if (req_op(req) == REQ_OP_FLUSH)
-			flush++;
-
-		if (io_flags & REQ_FUA)
-			fua++;
-
-		dev_dbg(&skdev->pdev->dev,
-			"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n",
-			req, lba, lba, count, count, data_dir);
-
 		/* At this point we know there is a request */
 
 		/* Are too many requets already in progress? */
@@ -576,103 +664,16 @@ static void skd_request_fn(struct request_queue *q)
 		 * available but is still at the head of the free list.
 		 */
 		WARN_ON_ONCE(blk_queue_start_tag(q, req));
-
-		tag = blk_mq_unique_tag(req);
-		WARN_ONCE(tag >= skd_max_queue_depth,
-			  "%#x > %#x (nr_requests = %lu)\n", tag,
-			  skd_max_queue_depth, q->nr_requests);
-
-		skreq = &skdev->skreq_table[tag];
-		SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
-		SKD_ASSERT((skreq->id & SKD_ID_INCR) == 0);
-
-		skreq->id = tag + SKD_ID_RW_REQUEST;
-		skreq->flush_cmd = 0;
-		skreq->n_sg = 0;
-		skreq->sg_byte_count = 0;
-
-		skreq->req = req;
-		skreq->fitmsg_id = 0;
-
-		skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE :
-			DMA_TO_DEVICE;
-
-		if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
-			dev_dbg(&skdev->pdev->dev, "error Out\n");
-			skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
-			continue;
-		}
-
-		/* Either a FIT msg is in progress or we have to start one. */
-		if (skmsg == NULL) {
-			skmsg = &skdev->skmsg_table[tag];
-
-			/* Initialize the FIT msg header */
-			fmh = &skmsg->msg_buf->fmh;
-			memset(fmh, 0, sizeof(*fmh));
-			fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
-			skmsg->length = sizeof(*fmh);
-		}
-
-		skreq->fitmsg_id = skmsg->id;
-
-		scsi_req =
-			&skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
-		memset(scsi_req, 0, sizeof(*scsi_req));
-
-		be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
-		cmdctxt = skreq->id + SKD_ID_INCR;
-
-		scsi_req->hdr.tag = cmdctxt;
-		scsi_req->hdr.sg_list_dma_address = be_dmaa;
-
-		if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
-			skd_prep_zerosize_flush_cdb(scsi_req, skreq);
-			SKD_ASSERT(skreq->flush_cmd == 1);
-		} else {
-			skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
-		}
-
-		if (fua)
-			scsi_req->cdb[1] |= SKD_FUA_NV;
-
-		scsi_req->hdr.sg_list_len_bytes =
-			cpu_to_be32(skreq->sg_byte_count);
-
-		/* Complete resource allocations. */
-		skreq->state = SKD_REQ_STATE_BUSY;
-		skreq->id += SKD_ID_INCR;
-
-		skmsg->length += sizeof(struct skd_scsi_request);
-		fmh->num_protocol_cmds_coalesced++;
-
-		/*
-		 * Update the active request counts.
-		 * Capture the timeout timestamp.
-		 */
-		skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
-		timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-		atomic_inc(&skdev->timeout_slot[timo_slot]);
-		atomic_inc(&skdev->in_flight);
-		dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
-			atomic_read(&skdev->in_flight));
-
-		/*
-		 * If the FIT msg buffer is full send it.
-		 */
-		if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
-			skd_send_fitmsg(skdev, skmsg);
-			skmsg = NULL;
-			fmh = NULL;
-		}
+		skd_process_request(req);
 	}
 
 	/* If the FIT msg buffer is not empty send what we got. */
-	if (skmsg) {
+	if (skdev->skmsg) {
+		struct fit_msg_hdr *fmh = &skdev->skmsg->msg_buf->fmh;
+
 		WARN_ON_ONCE(!fmh->num_protocol_cmds_coalesced);
-		skd_send_fitmsg(skdev, skmsg);
-		skmsg = NULL;
-		fmh = NULL;
+		skd_send_fitmsg(skdev, skdev->skmsg);
+		skdev->skmsg = NULL;
 	}
 
 	/*
-- 
2.14.0

* [PATCH 46/55] skd: Split skd_recover_requests()
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (44 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 45/55] skd: Introduce skd_process_request() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 47/55] skd: Move skd_free_sg_list() up Bart Van Assche
                   ` (10 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality but makes the blk-mq
conversion patch easier to read.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 1d10373b0da3..451974138b32 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2031,31 +2031,36 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		skd_skdev_state_to_str(skdev->state), skdev->state);
 }
 
-static void skd_recover_requests(struct skd_device *skdev)
+static void skd_recover_request(struct skd_device *skdev,
+				struct skd_request_context *skreq)
 {
-	int i;
+	struct request *req = skreq->req;
 
-	for (i = 0; i < skdev->num_req_context; i++) {
-		struct skd_request_context *skreq = &skdev->skreq_table[i];
-		struct request *req = skreq->req;
+	if (skreq->state != SKD_REQ_STATE_BUSY)
+		return;
+
+	skd_log_skreq(skdev, skreq, "recover");
+
+	SKD_ASSERT(req != NULL);
 
-		if (skreq->state == SKD_REQ_STATE_BUSY) {
-			skd_log_skreq(skdev, skreq, "recover");
+	/* Release DMA resources for the request. */
+	if (skreq->n_sg > 0)
+		skd_postop_sg_list(skdev, skreq);
 
-			SKD_ASSERT((skreq->id & SKD_ID_INCR) != 0);
-			SKD_ASSERT(req != NULL);
+	skreq->req = NULL;
+	skreq->state = SKD_REQ_STATE_IDLE;
 
-			/* Release DMA resources for the request. */
-			if (skreq->n_sg > 0)
-				skd_postop_sg_list(skdev, skreq);
+	skd_end_request(skdev, req, BLK_STS_IOERR);
+}
 
-			skreq->req = NULL;
+static void skd_recover_requests(struct skd_device *skdev)
+{
+	int i;
 
-			skreq->state = SKD_REQ_STATE_IDLE;
-			skreq->id += SKD_ID_INCR;
+	for (i = 0; i < skdev->num_req_context; i++) {
+		struct skd_request_context *skreq = &skdev->skreq_table[i];
 
-			skd_end_request(skdev, req, BLK_STS_IOERR);
-		}
+		skd_recover_request(skdev, skreq);
 	}
 
 	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
-- 
2.14.0

* [PATCH 47/55] skd: Move skd_free_sg_list() up
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (45 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 46/55] skd: Split skd_recover_requests() Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 48/55] skd: Coalesce struct request and struct skd_request_context Bart Van Assche
                   ` (9 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Issue a warning if a NULL argument is passed to skd_free_sg_list().
Move this function up to make the blk-mq conversion patch easier
to read.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 451974138b32..b69b1a041c8f 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2850,6 +2850,18 @@ static struct fit_sg_descriptor *skd_cons_sg_list(struct skd_device *skdev,
 	return sg_list;
 }
 
+static void skd_free_sg_list(struct skd_device *skdev,
+			     struct fit_sg_descriptor *sg_list, u32 n_sg,
+			     dma_addr_t dma_addr)
+{
+	u32 nbytes = sizeof(*sg_list) * n_sg;
+
+	if (WARN_ON_ONCE(!sg_list))
+		return;
+
+	pci_free_consistent(skdev->pdev, nbytes, sg_list, dma_addr);
+}
+
 static int skd_cons_skreq(struct skd_device *skdev)
 {
 	int rc = 0;
@@ -3105,19 +3117,6 @@ static void skd_free_skmsg(struct skd_device *skdev)
 	skdev->skmsg_table = NULL;
 }
 
-static void skd_free_sg_list(struct skd_device *skdev,
-			     struct fit_sg_descriptor *sg_list,
-			     u32 n_sg, dma_addr_t dma_addr)
-{
-	if (sg_list != NULL) {
-		u32 nbytes;
-
-		nbytes = sizeof(*sg_list) * n_sg;
-
-		pci_free_consistent(skdev->pdev, nbytes, sg_list, dma_addr);
-	}
-}
-
 static void skd_free_skreq(struct skd_device *skdev)
 {
 	u32 i;
-- 
2.14.0

* [PATCH 48/55] skd: Coalesce struct request and struct skd_request_context
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (46 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 47/55] skd: Move skd_free_sg_list() up Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 49/55] skd: Convert to blk-mq Bart Van Assche
                   ` (8 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Set request_queue.cmd_size, introduce skd_init_rq() and skd_exit_rq()
and remove skd_device.skreq_table.
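
For background (not part of the patch): with request_queue.cmd_size
set, the block layer allocates the driver's per-request data directly
behind struct request, and blk_mq_rq_to_pdu()/blk_mq_rq_from_pdu()
convert between the two. A minimal sketch, with "sketch_cmd" standing
in for skd_request_context:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

struct sketch_cmd {
	int n_sg;
};

static void sketch_use_pdu(struct request *rq)
{
	struct sketch_cmd *cmd = blk_mq_rq_to_pdu(rq);	/* data behind rq */

	cmd->n_sg = 0;
	/* ...and back again, e.g. from a completion path: */
	WARN_ON_ONCE(blk_mq_rq_from_pdu(cmd) != rq);
}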

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 174 +++++++++++++++--------------------------------
 1 file changed, 54 insertions(+), 120 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index b69b1a041c8f..dad623659fae 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -183,7 +183,6 @@ struct skd_request_context {
 	u16 id;
 	u32 fitmsg_id;
 
-	struct request *req;
 	u8 flush_cmd;
 
 	u32 timeout_stamp;
@@ -256,8 +255,6 @@ struct skd_device {
 	atomic_t timeout_stamp;
 	struct skd_fitmsg_context *skmsg_table;
 
-	struct skd_request_context *skreq_table;
-
 	struct skd_special_context internal_skspcl;
 	u32 read_cap_blocksize;
 	u32 read_cap_last_lba;
@@ -500,7 +497,7 @@ static void skd_process_request(struct request *req)
 	struct skd_fitmsg_context *skmsg;
 	struct fit_msg_hdr *fmh;
 	const u32 tag = blk_mq_unique_tag(req);
-	struct skd_request_context *const skreq = &skdev->skreq_table[tag];
+	struct skd_request_context *const skreq = blk_mq_rq_to_pdu(req);
 	struct skd_scsi_request *scsi_req;
 	unsigned long io_flags;
 	u32 lba;
@@ -537,14 +534,14 @@ static void skd_process_request(struct request *req)
 	skreq->n_sg = 0;
 	skreq->sg_byte_count = 0;
 
-	skreq->req = req;
 	skreq->fitmsg_id = 0;
 
 	skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
 
 	if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
 		dev_dbg(&skdev->pdev->dev, "error Out\n");
-		skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
+		skd_end_request(skdev, blk_mq_rq_from_pdu(skreq),
+				BLK_STS_RESOURCE);
 		return;
 	}
 
@@ -705,7 +702,7 @@ static void skd_end_request(struct skd_device *skdev, struct request *req,
 static bool skd_preop_sg_list(struct skd_device *skdev,
 			     struct skd_request_context *skreq)
 {
-	struct request *req = skreq->req;
+	struct request *req = blk_mq_rq_from_pdu(skreq);
 	struct scatterlist *sgl = &skreq->sg[0], *sg;
 	int n_sg;
 	int i;
@@ -1563,11 +1560,6 @@ static void skd_release_skreq(struct skd_device *skdev,
 	SKD_ASSERT(atomic_read(&skdev->timeout_slot[timo_slot]) > 0);
 	atomic_dec(&skdev->timeout_slot[timo_slot]);
 
-	/*
-	 * Reset backpointer
-	 */
-	skreq->req = NULL;
-
 	/*
 	 * Reclaim the skd_request_context
 	 */
@@ -1575,20 +1567,6 @@ static void skd_release_skreq(struct skd_device *skdev,
 	skreq->id += SKD_ID_INCR;
 }
 
-static struct skd_request_context *skd_skreq_from_rq(struct skd_device *skdev,
-						     struct request *rq)
-{
-	struct skd_request_context *skreq;
-	int i;
-
-	for (i = 0, skreq = skdev->skreq_table; i < skdev->num_fitmsg_context;
-	     i++, skreq++)
-		if (skreq->req == rq)
-			return skreq;
-
-	return NULL;
-}
-
 static int skd_isr_completion_posted(struct skd_device *skdev,
 					int limit, int *enqueued)
 {
@@ -1661,7 +1639,7 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		if (WARN(!rq, "No request for tag %#x -> %#x\n", cmp_cntxt,
 			 tag))
 			continue;
-		skreq = skd_skreq_from_rq(skdev, rq);
+		skreq = blk_mq_rq_to_pdu(rq);
 
 		/*
 		 * Make sure the request ID for the slot matches.
@@ -2034,7 +2012,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 static void skd_recover_request(struct skd_device *skdev,
 				struct skd_request_context *skreq)
 {
-	struct request *req = skreq->req;
+	struct request *req = blk_mq_rq_from_pdu(skreq);
 
 	if (skreq->state != SKD_REQ_STATE_BUSY)
 		return;
@@ -2047,7 +2025,6 @@ static void skd_recover_request(struct skd_device *skdev,
 	if (skreq->n_sg > 0)
 		skd_postop_sg_list(skdev, skreq);
 
-	skreq->req = NULL;
 	skreq->state = SKD_REQ_STATE_IDLE;
 
 	skd_end_request(skdev, req, BLK_STS_IOERR);
@@ -2058,8 +2035,12 @@ static void skd_recover_requests(struct skd_device *skdev)
 	int i;
 
 	for (i = 0; i < skdev->num_req_context; i++) {
-		struct skd_request_context *skreq = &skdev->skreq_table[i];
+		struct request *rq = blk_map_queue_find_tag(skdev->queue->
+							    queue_tags, i);
+		struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
+		if (!rq)
+			continue;
 		skd_recover_request(skdev, skreq);
 	}
 
@@ -2862,53 +2843,28 @@ static void skd_free_sg_list(struct skd_device *skdev,
 	pci_free_consistent(skdev->pdev, nbytes, sg_list, dma_addr);
 }
 
-static int skd_cons_skreq(struct skd_device *skdev)
+static int skd_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
 {
-	int rc = 0;
-	u32 i;
-
-	dev_dbg(&skdev->pdev->dev,
-		"skreq_table kcalloc, struct %lu, count %u total %lu\n",
-		sizeof(struct skd_request_context), skdev->num_req_context,
-		sizeof(struct skd_request_context) * skdev->num_req_context);
-
-	skdev->skreq_table = kcalloc(skdev->num_req_context,
-				     sizeof(struct skd_request_context),
-				     GFP_KERNEL);
-	if (skdev->skreq_table == NULL) {
-		rc = -ENOMEM;
-		goto err_out;
-	}
-
-	dev_dbg(&skdev->pdev->dev, "alloc sg_table sg_per_req %u scatlist %lu total %lu\n",
-		skdev->sgs_per_request, sizeof(struct scatterlist),
-		skdev->sgs_per_request * sizeof(struct scatterlist));
-
-	for (i = 0; i < skdev->num_req_context; i++) {
-		struct skd_request_context *skreq;
+	struct skd_device *skdev = q->queuedata;
+	struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
-		skreq = &skdev->skreq_table[i];
-		skreq->state = SKD_REQ_STATE_IDLE;
-		skreq->sg = kcalloc(skdev->sgs_per_request,
-				    sizeof(struct scatterlist), GFP_KERNEL);
-		if (skreq->sg == NULL) {
-			rc = -ENOMEM;
-			goto err_out;
-		}
-		sg_init_table(skreq->sg, skdev->sgs_per_request);
+	skreq->state = SKD_REQ_STATE_IDLE;
+	skreq->sg = (void *)(skreq + 1);
+	sg_init_table(skreq->sg, skd_sgs_per_request);
+	skreq->sksg_list = skd_cons_sg_list(skdev, skd_sgs_per_request,
+					    &skreq->sksg_dma_address);
 
-		skreq->sksg_list = skd_cons_sg_list(skdev,
-						    skdev->sgs_per_request,
-						    &skreq->sksg_dma_address);
+	return skreq->sksg_list ? 0 : -ENOMEM;
+}
 
-		if (skreq->sksg_list == NULL) {
-			rc = -ENOMEM;
-			goto err_out;
-		}
-	}
+static void skd_exit_rq(struct request_queue *q, struct request *rq)
+{
+	struct skd_device *skdev = q->queuedata;
+	struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
-err_out:
-	return rc;
+	skd_free_sg_list(skdev, skreq->sksg_list,
+			 skdev->sgs_per_request,
+			 skreq->sksg_dma_address);
 }
 
 static int skd_cons_sksb(struct skd_device *skdev)
@@ -2976,18 +2932,30 @@ static int skd_cons_disk(struct skd_device *skdev)
 	disk->fops = &skd_blockdev_ops;
 	disk->private_data = skdev;
 
-	q = blk_init_queue(skd_request_fn, &skdev->lock);
+	q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE);
 	if (!q) {
 		rc = -ENOMEM;
 		goto err_out;
 	}
 	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
+	q->queuedata = skdev;
+	q->request_fn = skd_request_fn;
+	q->queue_lock = &skdev->lock;
 	q->nr_requests = skd_max_queue_depth / 2;
-	blk_queue_init_tags(q, skd_max_queue_depth, NULL, BLK_TAG_ALLOC_FIFO);
+	q->cmd_size = sizeof(struct skd_request_context) +
+		skdev->sgs_per_request * sizeof(struct scatterlist);
+	q->init_rq_fn = skd_init_rq;
+	q->exit_rq_fn = skd_exit_rq;
+	rc = blk_init_allocated_queue(q);
+	if (rc < 0)
+		goto cleanup_q;
+	rc = blk_queue_init_tags(q, skd_max_queue_depth, NULL,
+				 BLK_TAG_ALLOC_FIFO);
+	if (rc < 0)
+		goto cleanup_q;
 
 	skdev->queue = q;
 	disk->queue = q;
-	q->queuedata = skdev;
 
 	blk_queue_write_cache(q, true, true);
 	blk_queue_max_segments(q, skdev->sgs_per_request);
@@ -3006,6 +2974,10 @@ static int skd_cons_disk(struct skd_device *skdev)
 
 err_out:
 	return rc;
+
+cleanup_q:
+	blk_cleanup_queue(q);
+	goto err_out;
 }
 
 #define SKD_N_DEV_TABLE         16u
@@ -3052,11 +3024,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	if (rc < 0)
 		goto err_out;
 
-	dev_dbg(&skdev->pdev->dev, "skreq\n");
-	rc = skd_cons_skreq(skdev);
-	if (rc < 0)
-		goto err_out;
-
 	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	rc = skd_cons_sksb(skdev);
 	if (rc < 0)
@@ -3117,32 +3084,6 @@ static void skd_free_skmsg(struct skd_device *skdev)
 	skdev->skmsg_table = NULL;
 }
 
-static void skd_free_skreq(struct skd_device *skdev)
-{
-	u32 i;
-
-	if (skdev->skreq_table == NULL)
-		return;
-
-	for (i = 0; i < skdev->num_req_context; i++) {
-		struct skd_request_context *skreq;
-
-		skreq = &skdev->skreq_table[i];
-
-		skd_free_sg_list(skdev, skreq->sksg_list,
-				 skdev->sgs_per_request,
-				 skreq->sksg_dma_address);
-
-		skreq->sksg_list = NULL;
-		skreq->sksg_dma_address = 0;
-
-		kfree(skreq->sg);
-	}
-
-	kfree(skdev->skreq_table);
-	skdev->skreq_table = NULL;
-}
-
 static void skd_free_sksb(struct skd_device *skdev)
 {
 	struct skd_special_context *skspcl;
@@ -3204,9 +3145,6 @@ static void skd_destruct(struct skd_device *skdev)
 	dev_dbg(&skdev->pdev->dev, "sksb\n");
 	skd_free_sksb(skdev);
 
-	dev_dbg(&skdev->pdev->dev, "skreq\n");
-	skd_free_skreq(skdev);
-
 	dev_dbg(&skdev->pdev->dev, "skmsg\n");
 	skd_free_skmsg(skdev);
 
@@ -3734,23 +3672,19 @@ static void skd_log_skdev(struct skd_device *skdev, const char *event)
 static void skd_log_skreq(struct skd_device *skdev,
 			  struct skd_request_context *skreq, const char *event)
 {
+	struct request *req = blk_mq_rq_from_pdu(skreq);
+	u32 lba = blk_rq_pos(req);
+	u32 count = blk_rq_sectors(req);
+
 	dev_dbg(&skdev->pdev->dev, "skreq=%p event='%s'\n", skreq, event);
 	dev_dbg(&skdev->pdev->dev, "  state=%s(%d) id=0x%04x fitmsg=0x%04x\n",
 		skd_skreq_state_to_str(skreq->state), skreq->state, skreq->id,
 		skreq->fitmsg_id);
 	dev_dbg(&skdev->pdev->dev, "  timo=0x%x sg_dir=%d n_sg=%d\n",
 		skreq->timeout_stamp, skreq->data_dir, skreq->n_sg);
-
-	if (skreq->req != NULL) {
-		struct request *req = skreq->req;
-		u32 lba = (u32)blk_rq_pos(req);
-		u32 count = blk_rq_sectors(req);
-
-		dev_dbg(&skdev->pdev->dev,
-			"req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req,
-			lba, lba, count, count, (int)rq_data_dir(req));
-	} else
-		dev_dbg(&skdev->pdev->dev, "req=NULL\n");
+	dev_dbg(&skdev->pdev->dev,
+		"req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba, lba,
+		count, count, (int)rq_data_dir(req));
 }
 
 /*
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 49/55] skd: Convert to blk-mq
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (47 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 48/55] skd: Coalesce struct request and struct skd_request_context Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 50/55] skd: Switch to block layer timeout mechanism Bart Van Assche
                   ` (7 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Introduce a tag set and a blk_mq_ops structure. Set .cmd_size such
that struct request and struct skd_request_context are obtained
through a single allocation. Remove the skd_request_context.req
pointer. Make queue starting asynchronous so that it can be triggered
safely from interrupt context. Use locking to protect skdev->skmsg
and *skdev->skmsg against concurrent .queue_rq() calls. Introduce
the functions skd_init_request() and skd_exit_request() to set up
and clean up the per-request S/G-list.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 229 +++++++++++++++++++----------------------------
 1 file changed, 91 insertions(+), 138 deletions(-)
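
Note for reviewers who have not walked through a blk-mq conversion before:
the shape of the change boils down to the minimal, driver-agnostic sketch
below. It is illustrative only -- the foo_* names are invented, error
handling is abbreviated and the v4.13-era blk-mq API is assumed. The key
point is that .cmd_size makes the block layer allocate the driver's
per-request context directly behind struct request, which is what lets
blk_mq_rq_to_pdu() / blk_mq_rq_from_pdu() replace the old
skd_request_context.req pointer. Note also that blk_mq_init_queue()
reports failure through ERR_PTR().

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/err.h>
#include <linux/string.h>

struct foo_cmd {			/* per-request driver data ("PDU") */
	int result;
};

static blk_status_t foo_queue_rq(struct blk_mq_hw_ctx *hctx,
				 const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;
	struct foo_cmd *cmd = blk_mq_rq_to_pdu(rq);	/* request -> PDU */

	blk_mq_start_request(rq);
	cmd->result = 0;
	/* ... hand the request to the hardware here ... */
	blk_mq_end_request(rq, BLK_STS_OK);	/* normally done on completion */
	return BLK_STS_OK;
}

static const struct blk_mq_ops foo_mq_ops = {
	.queue_rq	= foo_queue_rq,
};

static struct request_queue *foo_create_queue(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	memset(set, 0, sizeof(*set));
	set->ops = &foo_mq_ops;
	set->nr_hw_queues = 1;
	set->queue_depth = 64;
	set->numa_node = NUMA_NO_NODE;
	set->flags = BLK_MQ_F_SHOULD_MERGE;
	/* one allocation per request: struct request followed by struct foo_cmd */
	set->cmd_size = sizeof(struct foo_cmd);
	if (blk_mq_alloc_tag_set(set) < 0)
		return NULL;
	q = blk_mq_init_queue(set);
	if (IS_ERR(q)) {
		blk_mq_free_tag_set(set);
		return NULL;
	}
	return q;
}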

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index dad623659fae..3590f9a775ae 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -230,6 +230,7 @@ struct skd_device {
 
 	spinlock_t lock;
 	struct gendisk *disk;
+	struct blk_mq_tag_set tag_set;
 	struct request_queue *queue;
 	struct skd_fitmsg_context *skmsg;
 	struct device *class_dev;
@@ -287,6 +288,7 @@ struct skd_device {
 
 	u32 timo_slot;
 
+	struct work_struct start_queue;
 	struct work_struct completion_worker;
 };
 
@@ -371,7 +373,6 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 			    struct skd_fitmsg_context *skmsg);
 static void skd_send_special_fitmsg(struct skd_device *skdev,
 				    struct skd_special_context *skspcl);
-static void skd_request_fn(struct request_queue *rq);
 static void skd_end_request(struct skd_device *skdev, struct request *req,
 			    blk_status_t status);
 static bool skd_preop_sg_list(struct skd_device *skdev,
@@ -398,20 +399,6 @@ static void skd_log_skreq(struct skd_device *skdev,
  * READ/WRITE REQUESTS
  *****************************************************************************
  */
-static void skd_fail_all_pending(struct skd_device *skdev)
-{
-	struct request_queue *q = skdev->queue;
-	struct request *req;
-
-	for (;; ) {
-		req = blk_peek_request(q);
-		if (req == NULL)
-			break;
-		WARN_ON_ONCE(blk_queue_start_tag(q, req));
-		__blk_end_request_all(req, BLK_STS_IOERR);
-	}
-}
-
 static void
 skd_prep_rw_cdb(struct skd_scsi_request *scsi_req,
 		int data_dir, unsigned lba,
@@ -490,7 +477,7 @@ static bool skd_fail_all(struct request_queue *q)
 	}
 }
 
-static void skd_process_request(struct request *req)
+static void skd_process_request(struct request *req, bool last)
 {
 	struct request_queue *const q = req->q;
 	struct skd_device *skdev = q->queuedata;
@@ -499,6 +486,7 @@ static void skd_process_request(struct request *req)
 	const u32 tag = blk_mq_unique_tag(req);
 	struct skd_request_context *const skreq = blk_mq_rq_to_pdu(req);
 	struct skd_scsi_request *scsi_req;
+	unsigned long flags;
 	unsigned long io_flags;
 	u32 lba;
 	u32 count;
@@ -545,6 +533,7 @@ static void skd_process_request(struct request *req)
 		return;
 	}
 
+	spin_lock_irqsave(&skdev->lock, flags);
 	/* Either a FIT msg is in progress or we have to start one. */
 	skmsg = skdev->skmsg;
 	if (!skmsg) {
@@ -602,83 +591,30 @@ static void skd_process_request(struct request *req)
 	/*
 	 * If the FIT msg buffer is full send it.
 	 */
-	if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+	if (last || fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
 		skd_send_fitmsg(skdev, skmsg);
 		skdev->skmsg = NULL;
 	}
+	spin_unlock_irqrestore(&skdev->lock, flags);
 }
 
-static void skd_request_fn(struct request_queue *q)
+static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+				    const struct blk_mq_queue_data *mqd)
 {
+	struct request *req = mqd->rq;
+	struct request_queue *q = req->q;
 	struct skd_device *skdev = q->queuedata;
-	struct request *req;
-
-	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
-		if (skd_fail_all(q))
-			skd_fail_all_pending(skdev);
-		return;
-	}
-
-	if (blk_queue_stopped(skdev->queue)) {
-		if (atomic_read(&skdev->in_flight) >=
-		    skdev->queue_low_water_mark)
-			/* There is still some kind of shortage */
-			return;
-
-		queue_flag_clear(QUEUE_FLAG_STOPPED, skdev->queue);
-	}
-
-	/*
-	 * Stop conditions:
-	 *  - There are no more native requests
-	 *  - There are already the maximum number of requests in progress
-	 *  - There are no more skd_request_context entries
-	 *  - There are no more FIT msg buffers
-	 */
-	for (;; ) {
-		req = blk_peek_request(q);
-
-		/* Are there any native requests to start? */
-		if (req == NULL)
-			break;
-
-		/* At this point we know there is a request */
-
-		/* Are too many requets already in progress? */
-		if (atomic_read(&skdev->in_flight) >=
-		    skdev->cur_max_queue_depth) {
-			dev_dbg(&skdev->pdev->dev, "qdepth %d, limit %d\n",
-				atomic_read(&skdev->in_flight),
-				skdev->cur_max_queue_depth);
-			break;
-		}
-
-		/*
-		 * OK to now dequeue request from q.
-		 *
-		 * At this point we are comitted to either start or reject
-		 * the native request. Note that skd_request_context is
-		 * available but is still at the head of the free list.
-		 */
-		WARN_ON_ONCE(blk_queue_start_tag(q, req));
-		skd_process_request(req);
-	}
 
-	/* If the FIT msg buffer is not empty send what we got. */
-	if (skdev->skmsg) {
-		struct fit_msg_hdr *fmh = &skdev->skmsg->msg_buf->fmh;
+	if (skdev->state == SKD_DRVR_STATE_ONLINE) {
+		blk_mq_start_request(req);
+		skd_process_request(req, mqd->last);
 
-		WARN_ON_ONCE(!fmh->num_protocol_cmds_coalesced);
-		skd_send_fitmsg(skdev, skdev->skmsg);
-		skdev->skmsg = NULL;
+		return BLK_STS_OK;
+	} else {
+		return skd_fail_all(q) ? BLK_STS_IOERR : BLK_STS_RESOURCE;
 	}
 
-	/*
-	 * If req is non-NULL it means there is something to do but
-	 * we are out of a resource.
-	 */
-	if (req)
-		blk_stop_queue(skdev->queue);
+	return BLK_STS_OK;
 }
 
 static void skd_end_request(struct skd_device *skdev, struct request *req,
@@ -696,7 +632,7 @@ static void skd_end_request(struct skd_device *skdev, struct request *req,
 		dev_dbg(&skdev->pdev->dev, "id=0x%x error=%d\n", req->tag,
 			error);
 
-	__blk_end_request_all(req, error);
+	blk_mq_end_request(req, error);
 }
 
 static bool skd_preop_sg_list(struct skd_device *skdev,
@@ -781,6 +717,19 @@ static void skd_postop_sg_list(struct skd_device *skdev,
 
 static void skd_timer_tick_not_online(struct skd_device *skdev);
 
+static void skd_start_queue(struct work_struct *work)
+{
+	struct skd_device *skdev = container_of(work, typeof(*skdev),
+						start_queue);
+
+	/*
+	 * Although it is safe to call blk_start_queue() from interrupt
+	 * context, blk_mq_start_hw_queues() must not be called from
+	 * interrupt context.
+	 */
+	blk_mq_start_hw_queues(skdev->queue);
+}
+
 static void skd_timer_tick(ulong arg)
 {
 	struct skd_device *skdev = (struct skd_device *)arg;
@@ -886,7 +835,7 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 
 		/*start the queue so we can respond with error to requests */
 		/* wakeup anyone waiting for startup complete */
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
@@ -961,7 +910,7 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 
 		/*start the queue so we can respond with error to requests */
 		/* wakeup anyone waiting for startup complete */
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
@@ -1543,7 +1492,6 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 	}
 }
 
-/* assume spinlock is already held */
 static void skd_release_skreq(struct skd_device *skdev,
 			      struct skd_request_context *skreq)
 {
@@ -1574,6 +1522,7 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 	struct fit_comp_error_info *skerr;
 	u16 req_id;
 	u32 tag;
+	u16 hwq = 0;
 	struct request *rq;
 	struct skd_request_context *skreq;
 	u16 cmp_cntxt;
@@ -1629,13 +1578,13 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 			/*
 			 * This is not a completion for a r/w request.
 			 */
-			WARN_ON_ONCE(blk_map_queue_find_tag(skdev->queue->
-							    queue_tags, tag));
+			WARN_ON_ONCE(blk_mq_tag_to_rq(skdev->tag_set.tags[hwq],
+						      tag));
 			skd_complete_other(skdev, skcmp, skerr);
 			continue;
 		}
 
-		rq = blk_map_queue_find_tag(skdev->queue->queue_tags, tag);
+		rq = blk_mq_tag_to_rq(skdev->tag_set.tags[hwq], tag);
 		if (WARN(!rq, "No request for tag %#x -> %#x\n", cmp_cntxt,
 			 tag))
 			continue;
@@ -1789,7 +1738,7 @@ static void skd_completion_worker(struct work_struct *work)
 	 * process everything in compq
 	 */
 	skd_isr_completion_posted(skdev, 0, &flush_enqueued);
-	blk_run_queue_async(skdev->queue);
+	schedule_work(&skdev->start_queue);
 
 	spin_unlock_irqrestore(&skdev->lock, flags);
 }
@@ -1865,12 +1814,12 @@ skd_isr(int irq, void *ptr)
 	}
 
 	if (unlikely(flush_enqueued))
-		blk_run_queue_async(skdev->queue);
+		schedule_work(&skdev->start_queue);
 
 	if (deferred)
 		schedule_work(&skdev->completion_worker);
 	else if (!flush_enqueued)
-		blk_run_queue_async(skdev->queue);
+		schedule_work(&skdev->start_queue);
 
 	spin_unlock(&skdev->lock);
 
@@ -1953,7 +1902,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		 */
 		skdev->state = SKD_DRVR_STATE_BUSY_SANITIZE;
 		skdev->timer_countdown = SKD_TIMER_SECONDS(3);
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		break;
 	case FIT_SR_DRIVE_BUSY_ERASE:
 		skdev->state = SKD_DRVR_STATE_BUSY_ERASE;
@@ -1987,7 +1936,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 	case FIT_SR_DRIVE_FAULT:
 		skd_drive_fault(skdev);
 		skd_recover_requests(skdev);
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		break;
 
 	/* PCIe bus returned all Fs? */
@@ -1996,7 +1945,7 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 			 sense);
 		skd_drive_disappeared(skdev);
 		skd_recover_requests(skdev);
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		break;
 	default:
 		/*
@@ -2009,18 +1958,16 @@ static void skd_isr_fwstate(struct skd_device *skdev)
 		skd_skdev_state_to_str(skdev->state), skdev->state);
 }
 
-static void skd_recover_request(struct skd_device *skdev,
-				struct skd_request_context *skreq)
+static void skd_recover_request(struct request *req, void *data, bool reserved)
 {
-	struct request *req = blk_mq_rq_from_pdu(skreq);
+	struct skd_device *const skdev = data;
+	struct skd_request_context *skreq = blk_mq_rq_to_pdu(req);
 
 	if (skreq->state != SKD_REQ_STATE_BUSY)
 		return;
 
 	skd_log_skreq(skdev, skreq, "recover");
 
-	SKD_ASSERT(req != NULL);
-
 	/* Release DMA resources for the request. */
 	if (skreq->n_sg > 0)
 		skd_postop_sg_list(skdev, skreq);
@@ -2034,15 +1981,7 @@ static void skd_recover_requests(struct skd_device *skdev)
 {
 	int i;
 
-	for (i = 0; i < skdev->num_req_context; i++) {
-		struct request *rq = blk_map_queue_find_tag(skdev->queue->
-							    queue_tags, i);
-		struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
-
-		if (!rq)
-			continue;
-		skd_recover_request(skdev, skreq);
-	}
+	blk_mq_tagset_busy_iter(&skdev->tag_set, skd_recover_request, skdev);
 
 	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
 		atomic_set(&skdev->timeout_slot[i], 0);
@@ -2263,7 +2202,7 @@ static void skd_start_device(struct skd_device *skdev)
 		skd_drive_fault(skdev);
 		/*start the queue so we can respond with error to requests */
 		dev_dbg(&skdev->pdev->dev, "starting queue\n");
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
@@ -2275,7 +2214,7 @@ static void skd_start_device(struct skd_device *skdev)
 		/*start the queue so we can respond with error to requests */
 		dev_dbg(&skdev->pdev->dev,
 			"starting queue to error-out reqs\n");
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		skdev->gendisk_on = -1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
@@ -2408,7 +2347,7 @@ static int skd_quiesce_dev(struct skd_device *skdev)
 	case SKD_DRVR_STATE_BUSY:
 	case SKD_DRVR_STATE_BUSY_IMMINENT:
 		dev_dbg(&skdev->pdev->dev, "stopping queue\n");
-		blk_stop_queue(skdev->queue);
+		blk_mq_stop_hw_queues(skdev->queue);
 		break;
 	case SKD_DRVR_STATE_ONLINE:
 	case SKD_DRVR_STATE_STOPPING:
@@ -2473,7 +2412,7 @@ static int skd_unquiesce_dev(struct skd_device *skdev)
 			"**** device ONLINE...starting block queue\n");
 		dev_dbg(&skdev->pdev->dev, "starting queue\n");
 		dev_info(&skdev->pdev->dev, "STEC s1120 ONLINE\n");
-		blk_start_queue(skdev->queue);
+		schedule_work(&skdev->start_queue);
 		skdev->gendisk_on = 1;
 		wake_up_interruptible(&skdev->waitq);
 		break;
@@ -2537,12 +2476,12 @@ static irqreturn_t skd_comp_q(int irq, void *skd_host_data)
 	deferred = skd_isr_completion_posted(skdev, skd_isr_comp_limit,
 						&flush_enqueued);
 	if (flush_enqueued)
-		blk_run_queue_async(skdev->queue);
+		schedule_work(&skdev->start_queue);
 
 	if (deferred)
 		schedule_work(&skdev->completion_worker);
 	else if (!flush_enqueued)
-		blk_run_queue_async(skdev->queue);
+		schedule_work(&skdev->start_queue);
 
 	spin_unlock_irqrestore(&skdev->lock, flags);
 
@@ -2843,9 +2782,10 @@ static void skd_free_sg_list(struct skd_device *skdev,
 	pci_free_consistent(skdev->pdev, nbytes, sg_list, dma_addr);
 }
 
-static int skd_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
+static int skd_init_request(struct blk_mq_tag_set *set, struct request *rq,
+			    unsigned int hctx_idx, unsigned int numa_node)
 {
-	struct skd_device *skdev = q->queuedata;
+	struct skd_device *skdev = set->driver_data;
 	struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
 	skreq->state = SKD_REQ_STATE_IDLE;
@@ -2857,9 +2797,10 @@ static int skd_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
 	return skreq->sksg_list ? 0 : -ENOMEM;
 }
 
-static void skd_exit_rq(struct request_queue *q, struct request *rq)
+static void skd_exit_request(struct blk_mq_tag_set *set, struct request *rq,
+			     unsigned int hctx_idx)
 {
-	struct skd_device *skdev = q->queuedata;
+	struct skd_device *skdev = set->driver_data;
 	struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
 	skd_free_sg_list(skdev, skreq->sksg_list,
@@ -2911,6 +2852,12 @@ static int skd_cons_sksb(struct skd_device *skdev)
 	return rc;
 }
 
+static const struct blk_mq_ops skd_mq_ops = {
+	.queue_rq	= skd_mq_queue_rq,
+	.init_request	= skd_init_request,
+	.exit_request	= skd_exit_request,
+};
+
 static int skd_cons_disk(struct skd_device *skdev)
 {
 	int rc = 0;
@@ -2932,27 +2879,30 @@ static int skd_cons_disk(struct skd_device *skdev)
 	disk->fops = &skd_blockdev_ops;
 	disk->private_data = skdev;
 
-	q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE);
+	q = NULL;
+	memset(&skdev->tag_set, 0, sizeof(skdev->tag_set));
+	skdev->tag_set.ops = &skd_mq_ops;
+	skdev->tag_set.nr_hw_queues = 1;
+	skdev->tag_set.queue_depth = skd_max_queue_depth;
+	skdev->tag_set.cmd_size = sizeof(struct skd_request_context) +
+		skdev->sgs_per_request * sizeof(struct scatterlist);
+	skdev->tag_set.numa_node = NUMA_NO_NODE;
+	skdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
+		BLK_MQ_F_SG_MERGE |
+		BLK_ALLOC_POLICY_TO_MQ_FLAG(BLK_TAG_ALLOC_FIFO);
+	skdev->tag_set.driver_data = skdev;
+	if (blk_mq_alloc_tag_set(&skdev->tag_set) >= 0) {
+		q = blk_mq_init_queue(&skdev->tag_set);
+		if (!q)
+			blk_mq_free_tag_set(&skdev->tag_set);
+	}
 	if (!q) {
 		rc = -ENOMEM;
 		goto err_out;
 	}
 	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
 	q->queuedata = skdev;
-	q->request_fn = skd_request_fn;
-	q->queue_lock = &skdev->lock;
 	q->nr_requests = skd_max_queue_depth / 2;
-	q->cmd_size = sizeof(struct skd_request_context) +
-		skdev->sgs_per_request * sizeof(struct scatterlist);
-	q->init_rq_fn = skd_init_rq;
-	q->exit_rq_fn = skd_exit_rq;
-	rc = blk_init_allocated_queue(q);
-	if (rc < 0)
-		goto cleanup_q;
-	rc = blk_queue_init_tags(q, skd_max_queue_depth, NULL,
-				 BLK_TAG_ALLOC_FIFO);
-	if (rc < 0)
-		goto cleanup_q;
 
 	skdev->queue = q;
 	disk->queue = q;
@@ -2969,15 +2919,11 @@ static int skd_cons_disk(struct skd_device *skdev)
 
 	spin_lock_irqsave(&skdev->lock, flags);
 	dev_dbg(&skdev->pdev->dev, "stopping queue\n");
-	blk_stop_queue(skdev->queue);
+	blk_mq_stop_hw_queues(skdev->queue);
 	spin_unlock_irqrestore(&skdev->lock, flags);
 
 err_out:
 	return rc;
-
-cleanup_q:
-	blk_cleanup_queue(q);
-	goto err_out;
 }
 
 #define SKD_N_DEV_TABLE         16u
@@ -3012,6 +2958,7 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 
 	spin_lock_init(&skdev->lock);
 
+	INIT_WORK(&skdev->start_queue, skd_start_queue);
 	INIT_WORK(&skdev->completion_worker, skd_completion_worker);
 
 	dev_dbg(&skdev->pdev->dev, "skcomp\n");
@@ -3130,6 +3077,9 @@ static void skd_free_disk(struct skd_device *skdev)
 		disk->queue = NULL;
 	}
 
+	if (skdev->tag_set.tags)
+		blk_mq_free_tag_set(&skdev->tag_set);
+
 	put_disk(disk);
 	skdev->disk = NULL;
 }
@@ -3139,6 +3089,8 @@ static void skd_destruct(struct skd_device *skdev)
 	if (skdev == NULL)
 		return;
 
+	cancel_work_sync(&skdev->start_queue);
+
 	dev_dbg(&skdev->pdev->dev, "disk\n");
 	skd_free_disk(skdev);
 
@@ -3682,6 +3634,7 @@ static void skd_log_skreq(struct skd_device *skdev,
 		skreq->fitmsg_id);
 	dev_dbg(&skdev->pdev->dev, "  timo=0x%x sg_dir=%d n_sg=%d\n",
 		skreq->timeout_stamp, skreq->data_dir, skreq->n_sg);
+
 	dev_dbg(&skdev->pdev->dev,
 		"req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba, lba,
 		count, count, (int)rq_data_dir(req));
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 50/55] skd: Switch to block layer timeout mechanism
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (48 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 49/55] skd: Convert to blk-mq Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 51/55] skd: Remove skd_device.in_flight Bart Van Assche
                   ` (6 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Remove the timeout slot variables and rely on the block layer to
detect request timeouts.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 117 +++++++++++++----------------------------------
 1 file changed, 31 insertions(+), 86 deletions(-)
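
The hook-up itself is small. A stripped-down sketch of the mechanism used
here (foo_* names are invented; the v4.13-era API is assumed):
blk_queue_rq_timeout() sets the default per-request timeout that the block
layer arms when a request is started, the timed-out handler decides what to
do when that timer fires, and returning BLK_EH_HANDLED makes the block layer
complete the request, which routes it into the softirq-done callback where
the driver finishes it with BLK_STS_TIMEOUT. That is why skd_softirq_done()
below is only reached for timed-out requests.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

static enum blk_eh_timer_return foo_timed_out(struct request *req)
{
	pr_err("request tag %d timed out\n", req->tag);

	/* Tell the block layer to complete the request now. */
	return BLK_EH_HANDLED;
}

static void foo_softirq_done(struct request *req)
{
	/* Finish the timed-out request. */
	blk_mq_end_request(req, BLK_STS_TIMEOUT);
}

static void foo_setup_timeout_handling(struct request_queue *q)
{
	blk_queue_rq_timeout(q, 8 * HZ);	/* default per-request timeout */
	blk_queue_rq_timed_out(q, foo_timed_out);
	blk_queue_softirq_done(q, foo_softirq_done);
}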

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 3590f9a775ae..a982de2014cc 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -105,9 +105,6 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
 #define SKD_ID_SLOT_MASK        0x00FFu
 #define SKD_ID_SLOT_AND_TABLE_MASK 0x03FFu
 
-#define SKD_N_TIMEOUT_SLOT      4u
-#define SKD_TIMEOUT_SLOT_MASK   3u
-
 #define SKD_N_MAX_SECTORS 2048u
 
 #define SKD_MAX_RETRIES 2u
@@ -125,7 +122,6 @@ enum skd_drvr_state {
 	SKD_DRVR_STATE_ONLINE,
 	SKD_DRVR_STATE_PAUSING,
 	SKD_DRVR_STATE_PAUSED,
-	SKD_DRVR_STATE_DRAINING_TIMEOUT,
 	SKD_DRVR_STATE_RESTARTING,
 	SKD_DRVR_STATE_RESUMING,
 	SKD_DRVR_STATE_STOPPING,
@@ -142,7 +138,6 @@ enum skd_drvr_state {
 #define SKD_WAIT_BOOT_TIMO      SKD_TIMER_SECONDS(90u)
 #define SKD_STARTING_TIMO       SKD_TIMER_SECONDS(8u)
 #define SKD_RESTARTING_TIMO     SKD_TIMER_MINUTES(4u)
-#define SKD_DRAINING_TIMO       SKD_TIMER_SECONDS(6u)
 #define SKD_BUSY_TIMO           SKD_TIMER_MINUTES(20u)
 #define SKD_STARTED_BUSY_TIMO   SKD_TIMER_SECONDS(60u)
 #define SKD_START_WAIT_SECONDS  90u
@@ -185,7 +180,6 @@ struct skd_request_context {
 
 	u8 flush_cmd;
 
-	u32 timeout_stamp;
 	enum dma_data_direction data_dir;
 	struct scatterlist *sg;
 	u32 n_sg;
@@ -252,8 +246,6 @@ struct skd_device {
 	u32 num_fitmsg_context;
 	u32 num_req_context;
 
-	atomic_t timeout_slot[SKD_N_TIMEOUT_SLOT];
-	atomic_t timeout_stamp;
 	struct skd_fitmsg_context *skmsg_table;
 
 	struct skd_special_context internal_skspcl;
@@ -464,7 +456,6 @@ static bool skd_fail_all(struct request_queue *q)
 	case SKD_DRVR_STATE_BUSY:
 	case SKD_DRVR_STATE_BUSY_IMMINENT:
 	case SKD_DRVR_STATE_BUSY_ERASE:
-	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
 		return false;
 
 	case SKD_DRVR_STATE_BUSY_SANITIZE:
@@ -492,7 +483,6 @@ static void skd_process_request(struct request *req, bool last)
 	u32 count;
 	int data_dir;
 	__be64 be_dmaa;
-	u32 timo_slot;
 	int flush, fua;
 
 	WARN_ONCE(tag >= skd_max_queue_depth, "%#x > %#x (nr_requests = %lu)\n",
@@ -577,13 +567,6 @@ static void skd_process_request(struct request *req, bool last)
 	skmsg->length += sizeof(struct skd_scsi_request);
 	fmh->num_protocol_cmds_coalesced++;
 
-	/*
-	 * Update the active request counts.
-	 * Capture the timeout timestamp.
-	 */
-	skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
-	timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-	atomic_inc(&skdev->timeout_slot[timo_slot]);
 	atomic_inc(&skdev->in_flight);
 	dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
 		atomic_read(&skdev->in_flight));
@@ -617,6 +600,16 @@ static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static enum blk_eh_timer_return skd_timed_out(struct request *req)
+{
+	struct skd_device *skdev = req->q->queuedata;
+
+	dev_err(&skdev->pdev->dev, "request with tag %#x timed out\n",
+		blk_mq_unique_tag(req));
+
+	return BLK_EH_HANDLED;
+}
+
 static void skd_end_request(struct skd_device *skdev, struct request *req,
 			    blk_status_t error)
 {
@@ -635,6 +628,18 @@ static void skd_end_request(struct skd_device *skdev, struct request *req,
 	blk_mq_end_request(req, error);
 }
 
+/* Only called in case of a request timeout */
+static void skd_softirq_done(struct request *req)
+{
+	struct skd_device *skdev = req->q->queuedata;
+	struct skd_request_context *skreq = blk_mq_rq_to_pdu(req);
+	unsigned long flags;
+
+	spin_lock_irqsave(&skdev->lock, flags);
+	skd_end_request(skdev, blk_mq_rq_from_pdu(skreq), BLK_STS_TIMEOUT);
+	spin_unlock_irqrestore(&skdev->lock, flags);
+}
+
 static bool skd_preop_sg_list(struct skd_device *skdev,
 			     struct skd_request_context *skreq)
 {
@@ -733,8 +738,6 @@ static void skd_start_queue(struct work_struct *work)
 static void skd_timer_tick(ulong arg)
 {
 	struct skd_device *skdev = (struct skd_device *)arg;
-
-	u32 timo_slot;
 	unsigned long reqflags;
 	u32 state;
 
@@ -751,35 +754,9 @@ static void skd_timer_tick(ulong arg)
 	if (state != skdev->drive_state)
 		skd_isr_fwstate(skdev);
 
-	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
+	if (skdev->state != SKD_DRVR_STATE_ONLINE)
 		skd_timer_tick_not_online(skdev);
-		goto timer_func_out;
-	}
-	timo_slot = atomic_inc_return(&skdev->timeout_stamp) &
-		SKD_TIMEOUT_SLOT_MASK;
-
-	/*
-	 * All requests that happened during the previous use of
-	 * this slot should be done by now. The previous use was
-	 * over 7 seconds ago.
-	 */
-	if (atomic_read(&skdev->timeout_slot[timo_slot]) == 0)
-		goto timer_func_out;
-
-	/* Something is overdue */
-	dev_dbg(&skdev->pdev->dev, "found %d timeouts, draining busy=%d\n",
-		atomic_read(&skdev->timeout_slot[timo_slot]),
-		atomic_read(&skdev->in_flight));
-	dev_err(&skdev->pdev->dev, "Overdue IOs (%d), busy %d\n",
-		atomic_read(&skdev->timeout_slot[timo_slot]),
-		atomic_read(&skdev->in_flight));
-
-	skdev->timer_countdown = SKD_DRAINING_TIMO;
-	skdev->state = SKD_DRVR_STATE_DRAINING_TIMEOUT;
-	skdev->timo_slot = timo_slot;
-	blk_stop_queue(skdev->queue);
 
-timer_func_out:
 	mod_timer(&skdev->timer, (jiffies + HZ));
 
 	spin_unlock_irqrestore(&skdev->lock, reqflags);
@@ -848,27 +825,6 @@ static void skd_timer_tick_not_online(struct skd_device *skdev)
 	case SKD_DRVR_STATE_PAUSED:
 		break;
 
-	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
-		dev_dbg(&skdev->pdev->dev,
-			"draining busy [%d] tick[%d] qdb[%d] tmls[%d]\n",
-			skdev->timo_slot, skdev->timer_countdown,
-			atomic_read(&skdev->in_flight),
-			atomic_read(&skdev->timeout_slot[skdev->timo_slot]));
-		/* if the slot has cleared we can let the I/O continue */
-		if (atomic_read(&skdev->timeout_slot[skdev->timo_slot]) == 0) {
-			dev_dbg(&skdev->pdev->dev,
-				"Slot drained, starting queue.\n");
-			skdev->state = SKD_DRVR_STATE_ONLINE;
-			blk_start_queue(skdev->queue);
-			return;
-		}
-		if (skdev->timer_countdown > 0) {
-			skdev->timer_countdown--;
-			return;
-		}
-		skd_restart_device(skdev);
-		break;
-
 	case SKD_DRVR_STATE_RESTARTING:
 		if (skdev->timer_countdown > 0) {
 			skdev->timer_countdown--;
@@ -1495,8 +1451,6 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 static void skd_release_skreq(struct skd_device *skdev,
 			      struct skd_request_context *skreq)
 {
-	u32 timo_slot;
-
 	/*
 	 * Decrease the number of active requests.
 	 * Also decrements the count in the timeout slot.
@@ -1504,10 +1458,6 @@ static void skd_release_skreq(struct skd_device *skdev,
 	SKD_ASSERT(atomic_read(&skdev->in_flight) > 0);
 	atomic_dec(&skdev->in_flight);
 
-	timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-	SKD_ASSERT(atomic_read(&skdev->timeout_slot[timo_slot]) > 0);
-	atomic_dec(&skdev->timeout_slot[timo_slot]);
-
 	/*
 	 * Reclaim the skd_request_context
 	 */
@@ -1620,7 +1570,6 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		if (skreq->n_sg > 0)
 			skd_postop_sg_list(skdev, skreq);
 
-		/* Mark the FIT msg and timeout slot as free. */
 		skd_release_skreq(skdev, skreq);
 
 		/*
@@ -1979,13 +1928,8 @@ static void skd_recover_request(struct request *req, void *data, bool reserved)
 
 static void skd_recover_requests(struct skd_device *skdev)
 {
-	int i;
-
 	blk_mq_tagset_busy_iter(&skdev->tag_set, skd_recover_request, skdev);
 
-	for (i = 0; i < SKD_N_TIMEOUT_SLOT; i++)
-		atomic_set(&skdev->timeout_slot[i], 0);
-
 	atomic_set(&skdev->in_flight, 0);
 }
 
@@ -2917,6 +2861,10 @@ static int skd_cons_disk(struct skd_device *skdev)
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
 	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 
+	blk_queue_rq_timeout(q, 8 * HZ);
+	blk_queue_rq_timed_out(q, skd_timed_out);
+	blk_queue_softirq_done(q, skd_softirq_done);
+
 	spin_lock_irqsave(&skdev->lock, flags);
 	dev_dbg(&skdev->pdev->dev, "stopping queue\n");
 	blk_mq_stop_hw_queues(skdev->queue);
@@ -3561,8 +3509,6 @@ const char *skd_skdev_state_to_str(enum skd_drvr_state state)
 		return "PAUSING";
 	case SKD_DRVR_STATE_PAUSED:
 		return "PAUSED";
-	case SKD_DRVR_STATE_DRAINING_TIMEOUT:
-		return "DRAINING_TIMEOUT";
 	case SKD_DRVR_STATE_RESTARTING:
 		return "RESTARTING";
 	case SKD_DRVR_STATE_RESUMING:
@@ -3616,9 +3562,8 @@ static void skd_log_skdev(struct skd_device *skdev, const char *event)
 	dev_dbg(&skdev->pdev->dev, "  busy=%d limit=%d dev=%d lowat=%d\n",
 		atomic_read(&skdev->in_flight), skdev->cur_max_queue_depth,
 		skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
-	dev_dbg(&skdev->pdev->dev, "  timestamp=0x%x cycle=%d cycle_ix=%d\n",
-		atomic_read(&skdev->timeout_stamp), skdev->skcomp_cycle,
-		skdev->skcomp_ix);
+	dev_dbg(&skdev->pdev->dev, "  cycle=%d cycle_ix=%d\n",
+		skdev->skcomp_cycle, skdev->skcomp_ix);
 }
 
 static void skd_log_skreq(struct skd_device *skdev,
@@ -3632,8 +3577,8 @@ static void skd_log_skreq(struct skd_device *skdev,
 	dev_dbg(&skdev->pdev->dev, "  state=%s(%d) id=0x%04x fitmsg=0x%04x\n",
 		skd_skreq_state_to_str(skreq->state), skreq->state, skreq->id,
 		skreq->fitmsg_id);
-	dev_dbg(&skdev->pdev->dev, "  timo=0x%x sg_dir=%d n_sg=%d\n",
-		skreq->timeout_stamp, skreq->data_dir, skreq->n_sg);
+	dev_dbg(&skdev->pdev->dev, "  sg_dir=%d n_sg=%d\n",
+		skreq->data_dir, skreq->n_sg);
 
 	dev_dbg(&skdev->pdev->dev,
 		"req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba, lba,
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 51/55] skd: Remove skd_device.in_flight
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (49 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 50/55] skd: Switch to block layer timeout mechanism Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 52/55] skd: Reduce memory usage Bart Van Assche
                   ` (5 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Since skd_device.in_flight is only used to display the number of
in-flight requests in debug messages, remove that member and
introduce skd_in_flight(). The latter function relies on the block
layer to determine the number of in-flight requests.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)
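
The pattern is simply to let the tag iterator visit every request that the
block layer considers in flight and count them. A self-contained sketch
(foo_* names invented; note that the callback has to increment the counter
through the pointer it is handed):

#include <linux/blk-mq.h>

static void foo_count_in_flight(struct request *rq, void *data, bool reserved)
{
	int *count = data;

	(*count)++;
}

static int foo_in_flight(struct blk_mq_tag_set *set)
{
	int count = 0;

	blk_mq_tagset_busy_iter(set, foo_count_in_flight, &count);

	return count;
}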

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a982de2014cc..a20434ca3e18 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -238,7 +238,6 @@ struct skd_device {
 	enum skd_drvr_state state;
 	u32 drive_state;
 
-	atomic_t in_flight;
 	u32 cur_max_queue_depth;
 	u32 queue_low_water_mark;
 	u32 dev_max_queue_depth;
@@ -391,6 +390,22 @@ static void skd_log_skreq(struct skd_device *skdev,
  * READ/WRITE REQUESTS
  *****************************************************************************
  */
+static void skd_inc_in_flight(struct request *rq, void *data, bool reserved)
+{
+	int *count = data;
+
+	(*count)++;
+}
+
+static int skd_in_flight(struct skd_device *skdev)
+{
+	int count = 0;
+
+	blk_mq_tagset_busy_iter(&skdev->tag_set, skd_inc_in_flight, &count);
+
+	return count;
+}
+
 static void
 skd_prep_rw_cdb(struct skd_scsi_request *scsi_req,
 		int data_dir, unsigned lba,
@@ -567,9 +582,8 @@ static void skd_process_request(struct request *req, bool last)
 	skmsg->length += sizeof(struct skd_scsi_request);
 	fmh->num_protocol_cmds_coalesced++;
 
-	atomic_inc(&skdev->in_flight);
 	dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
-		atomic_read(&skdev->in_flight));
+		skd_in_flight(skdev));
 
 	/*
 	 * If the FIT msg buffer is full send it.
@@ -1218,7 +1232,7 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 	u64 qcmd;
 
 	dev_dbg(&skdev->pdev->dev, "dma address 0x%llx, busy=%d\n",
-		skmsg->mb_dma_address, atomic_read(&skdev->in_flight));
+		skmsg->mb_dma_address, skd_in_flight(skdev));
 	dev_dbg(&skdev->pdev->dev, "msg_buf %p\n", skmsg->msg_buf);
 
 	qcmd = skmsg->mb_dma_address;
@@ -1451,13 +1465,6 @@ static void skd_resolve_req_exception(struct skd_device *skdev,
 static void skd_release_skreq(struct skd_device *skdev,
 			      struct skd_request_context *skreq)
 {
-	/*
-	 * Decrease the number of active requests.
-	 * Also decrements the count in the timeout slot.
-	 */
-	SKD_ASSERT(atomic_read(&skdev->in_flight) > 0);
-	atomic_dec(&skdev->in_flight);
-
 	/*
 	 * Reclaim the skd_request_context
 	 */
@@ -1498,7 +1505,7 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 		dev_dbg(&skdev->pdev->dev,
 			"cycle=%d ix=%d got cycle=%d cmdctxt=0x%x stat=%d busy=%d rbytes=0x%x proto=%d\n",
 			skdev->skcomp_cycle, skdev->skcomp_ix, cmp_cycle,
-			cmp_cntxt, cmp_status, atomic_read(&skdev->in_flight),
+			cmp_cntxt, cmp_status, skd_in_flight(skdev),
 			cmp_bytes, skdev->proto_ver);
 
 		if (cmp_cycle != skdev->skcomp_cycle) {
@@ -1590,7 +1597,7 @@ static int skd_isr_completion_posted(struct skd_device *skdev,
 	}
 
 	if (skdev->state == SKD_DRVR_STATE_PAUSING &&
-	    atomic_read(&skdev->in_flight) == 0) {
+	    skd_in_flight(skdev) == 0) {
 		skdev->state = SKD_DRVR_STATE_PAUSED;
 		wake_up_interruptible(&skdev->waitq);
 	}
@@ -1929,8 +1936,6 @@ static void skd_recover_request(struct request *req, void *data, bool reserved)
 static void skd_recover_requests(struct skd_device *skdev)
 {
 	blk_mq_tagset_busy_iter(&skdev->tag_set, skd_recover_request, skdev);
-
-	atomic_set(&skdev->in_flight, 0);
 }
 
 static void skd_isr_msg_from_dev(struct skd_device *skdev)
@@ -3560,7 +3565,7 @@ static void skd_log_skdev(struct skd_device *skdev, const char *event)
 		skd_drive_state_to_str(skdev->drive_state), skdev->drive_state,
 		skd_skdev_state_to_str(skdev->state), skdev->state);
 	dev_dbg(&skdev->pdev->dev, "  busy=%d limit=%d dev=%d lowat=%d\n",
-		atomic_read(&skdev->in_flight), skdev->cur_max_queue_depth,
+		skd_in_flight(skdev), skdev->cur_max_queue_depth,
 		skdev->dev_max_queue_depth, skdev->queue_low_water_mark);
 	dev_dbg(&skdev->pdev->dev, "  cycle=%d cycle_ix=%d\n",
 		skdev->skcomp_cycle, skdev->skcomp_ix);
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 52/55] skd: Reduce memory usage
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (50 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 51/55] skd: Remove skd_device.in_flight Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 53/55] skd: Remove several local variables Bart Van Assche
                   ` (4 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Every single coherent DMA memory buffer occupies at least one page.
Reduce memory usage by switching from coherent buffers to streaming
DMA for I/O requests (struct skd_fitmsg_context) and S/G-lists
(struct fit_sg_descriptor[]).

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 145 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 108 insertions(+), 37 deletions(-)
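
The streaming-DMA pattern used below is the standard one: allocate the buffer
from a kmem_cache so that several buffers can share a page, map it once with
dma_map_single(), and bracket hardware accesses with
dma_sync_single_for_device() / dma_sync_single_for_cpu() so that ownership of
the buffer is handed back and forth explicitly. A minimal sketch with invented
foo_* names and abbreviated error handling:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct foo_buf {
	void		*vaddr;
	dma_addr_t	dma;
	size_t		size;
};

static int foo_buf_alloc(struct device *dev, struct kmem_cache *cache,
			 size_t size, struct foo_buf *buf)
{
	buf->vaddr = kmem_cache_zalloc(cache, GFP_KERNEL);
	if (!buf->vaddr)
		return -ENOMEM;
	buf->size = size;
	buf->dma = dma_map_single(dev, buf->vaddr, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, buf->dma)) {
		kmem_cache_free(cache, buf->vaddr);
		return -ENOMEM;
	}
	return 0;
}

static void foo_buf_send(struct device *dev, struct foo_buf *buf)
{
	/* The CPU has finished writing; pass ownership to the device. */
	dma_sync_single_for_device(dev, buf->dma, buf->size, DMA_TO_DEVICE);
	/* ... ring the doorbell here ... */
}

static void foo_buf_free(struct device *dev, struct kmem_cache *cache,
			 struct foo_buf *buf)
{
	dma_unmap_single(dev, buf->dma, buf->size, DMA_TO_DEVICE);
	kmem_cache_free(cache, buf->vaddr);
}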

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a20434ca3e18..610c8979dc7e 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -32,6 +32,7 @@
 #include <linux/aer.h>
 #include <linux/wait.h>
 #include <linux/stringify.h>
+#include <linux/slab_def.h>
 #include <scsi/scsi.h>
 #include <scsi/sg.h>
 #include <linux/io.h>
@@ -256,6 +257,9 @@ struct skd_device {
 
 	u8 skcomp_cycle;
 	u32 skcomp_ix;
+	struct kmem_cache *msgbuf_cache;
+	struct kmem_cache *sglist_cache;
+	struct kmem_cache *databuf_cache;
 	struct fit_completion_entry_v1 *skcomp_table;
 	struct fit_comp_error_info *skerr_table;
 	dma_addr_t cq_dma_address;
@@ -538,6 +542,11 @@ static void skd_process_request(struct request *req, bool last)
 		return;
 	}
 
+	dma_sync_single_for_device(&skdev->pdev->dev, skreq->sksg_dma_address,
+				   skreq->n_sg *
+				   sizeof(struct fit_sg_descriptor),
+				   DMA_TO_DEVICE);
+
 	spin_lock_irqsave(&skdev->lock, flags);
 	/* Either a FIT msg is in progress or we have to start one. */
 	skmsg = skdev->skmsg;
@@ -1078,6 +1087,11 @@ static void skd_complete_internal(struct skd_device *skdev,
 
 	dev_dbg(&skdev->pdev->dev, "complete internal %x\n", scsi->cdb[0]);
 
+	dma_sync_single_for_cpu(&skdev->pdev->dev,
+				skspcl->db_dma_address,
+				skspcl->req.sksg_list[0].byte_count,
+				DMA_BIDIRECTIONAL);
+
 	skspcl->req.completion = *skcomp;
 	skspcl->req.state = SKD_REQ_STATE_IDLE;
 	skspcl->req.id += SKD_ID_INCR;
@@ -1263,6 +1277,9 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 		 */
 		qcmd |= FIT_QCMD_MSGSIZE_64;
 
+	dma_sync_single_for_device(&skdev->pdev->dev, skmsg->mb_dma_address,
+				   skmsg->length, DMA_TO_DEVICE);
+
 	/* Make sure skd_msg_buf is written before the doorbell is triggered. */
 	smp_wmb();
 
@@ -1274,6 +1291,8 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
 {
 	u64 qcmd;
 
+	WARN_ON_ONCE(skspcl->req.n_sg != 1);
+
 	if (unlikely(skdev->dbg_level > 1)) {
 		u8 *bp = (u8 *)skspcl->msg_buf;
 		int i;
@@ -1307,6 +1326,17 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
 	qcmd = skspcl->mb_dma_address;
 	qcmd |= FIT_QCMD_QID_NORMAL + FIT_QCMD_MSGSIZE_128;
 
+	dma_sync_single_for_device(&skdev->pdev->dev, skspcl->mb_dma_address,
+				   SKD_N_SPECIAL_FITMSG_BYTES, DMA_TO_DEVICE);
+	dma_sync_single_for_device(&skdev->pdev->dev,
+				   skspcl->req.sksg_dma_address,
+				   1 * sizeof(struct fit_sg_descriptor),
+				   DMA_TO_DEVICE);
+	dma_sync_single_for_device(&skdev->pdev->dev,
+				   skspcl->db_dma_address,
+				   skspcl->req.sksg_list[0].byte_count,
+				   DMA_BIDIRECTIONAL);
+
 	/* Make sure skd_msg_buf is written before the doorbell is triggered. */
 	smp_wmb();
 
@@ -2619,6 +2649,35 @@ static void skd_release_irq(struct skd_device *skdev)
  *****************************************************************************
  */
 
+static void *skd_alloc_dma(struct skd_device *skdev, struct kmem_cache *s,
+			   dma_addr_t *dma_handle, gfp_t gfp,
+			   enum dma_data_direction dir)
+{
+	struct device *dev = &skdev->pdev->dev;
+	void *buf;
+
+	buf = kmem_cache_alloc(s, gfp);
+	if (!buf)
+		return NULL;
+	*dma_handle = dma_map_single(dev, buf, s->size, dir);
+	if (dma_mapping_error(dev, *dma_handle)) {
+		kfree(buf);
+		buf = NULL;
+	}
+	return buf;
+}
+
+static void skd_free_dma(struct skd_device *skdev, struct kmem_cache *s,
+			 void *vaddr, dma_addr_t dma_handle,
+			 enum dma_data_direction dir)
+{
+	if (!vaddr)
+		return;
+
+	dma_unmap_single(&skdev->pdev->dev, dma_handle, s->size, dir);
+	kmem_cache_free(s, vaddr);
+}
+
 static int skd_cons_skcomp(struct skd_device *skdev)
 {
 	int rc = 0;
@@ -2695,18 +2754,14 @@ static struct fit_sg_descriptor *skd_cons_sg_list(struct skd_device *skdev,
 						  dma_addr_t *ret_dma_addr)
 {
 	struct fit_sg_descriptor *sg_list;
-	u32 nbytes;
 
-	nbytes = sizeof(*sg_list) * n_sg;
-
-	sg_list = pci_alloc_consistent(skdev->pdev, nbytes, ret_dma_addr);
+	sg_list = skd_alloc_dma(skdev, skdev->sglist_cache, ret_dma_addr,
+				GFP_DMA | __GFP_ZERO, DMA_TO_DEVICE);
 
 	if (sg_list != NULL) {
 		uint64_t dma_address = *ret_dma_addr;
 		u32 i;
 
-		memset(sg_list, 0, nbytes);
-
 		for (i = 0; i < n_sg - 1; i++) {
 			uint64_t ndp_off;
 			ndp_off = (i + 1) * sizeof(struct fit_sg_descriptor);
@@ -2720,15 +2775,14 @@ static struct fit_sg_descriptor *skd_cons_sg_list(struct skd_device *skdev,
 }
 
 static void skd_free_sg_list(struct skd_device *skdev,
-			     struct fit_sg_descriptor *sg_list, u32 n_sg,
+			     struct fit_sg_descriptor *sg_list,
 			     dma_addr_t dma_addr)
 {
-	u32 nbytes = sizeof(*sg_list) * n_sg;
-
 	if (WARN_ON_ONCE(!sg_list))
 		return;
 
-	pci_free_consistent(skdev->pdev, nbytes, sg_list, dma_addr);
+	skd_free_dma(skdev, skdev->sglist_cache, sg_list, dma_addr,
+		     DMA_TO_DEVICE);
 }
 
 static int skd_init_request(struct blk_mq_tag_set *set, struct request *rq,
@@ -2752,34 +2806,31 @@ static void skd_exit_request(struct blk_mq_tag_set *set, struct request *rq,
 	struct skd_device *skdev = set->driver_data;
 	struct skd_request_context *skreq = blk_mq_rq_to_pdu(rq);
 
-	skd_free_sg_list(skdev, skreq->sksg_list,
-			 skdev->sgs_per_request,
-			 skreq->sksg_dma_address);
+	skd_free_sg_list(skdev, skreq->sksg_list, skreq->sksg_dma_address);
 }
 
 static int skd_cons_sksb(struct skd_device *skdev)
 {
 	int rc = 0;
 	struct skd_special_context *skspcl;
-	u32 nbytes;
 
 	skspcl = &skdev->internal_skspcl;
 
 	skspcl->req.id = 0 + SKD_ID_INTERNAL;
 	skspcl->req.state = SKD_REQ_STATE_IDLE;
 
-	nbytes = SKD_N_INTERNAL_BYTES;
-
-	skspcl->data_buf = pci_zalloc_consistent(skdev->pdev, nbytes,
-						 &skspcl->db_dma_address);
+	skspcl->data_buf = skd_alloc_dma(skdev, skdev->databuf_cache,
+					 &skspcl->db_dma_address,
+					 GFP_DMA | __GFP_ZERO,
+					 DMA_BIDIRECTIONAL);
 	if (skspcl->data_buf == NULL) {
 		rc = -ENOMEM;
 		goto err_out;
 	}
 
-	nbytes = SKD_N_SPECIAL_FITMSG_BYTES;
-	skspcl->msg_buf = pci_zalloc_consistent(skdev->pdev, nbytes,
-						&skspcl->mb_dma_address);
+	skspcl->msg_buf = skd_alloc_dma(skdev, skdev->msgbuf_cache,
+					&skspcl->mb_dma_address,
+					GFP_DMA | __GFP_ZERO, DMA_TO_DEVICE);
 	if (skspcl->msg_buf == NULL) {
 		rc = -ENOMEM;
 		goto err_out;
@@ -2886,6 +2937,7 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 {
 	struct skd_device *skdev;
 	int blk_major = skd_major;
+	size_t size;
 	int rc;
 
 	skdev = kzalloc(sizeof(*skdev), GFP_KERNEL);
@@ -2914,6 +2966,31 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	INIT_WORK(&skdev->start_queue, skd_start_queue);
 	INIT_WORK(&skdev->completion_worker, skd_completion_worker);
 
+	size = max(SKD_N_FITMSG_BYTES, SKD_N_SPECIAL_FITMSG_BYTES);
+	skdev->msgbuf_cache = kmem_cache_create("skd-msgbuf", size, 0,
+						SLAB_HWCACHE_ALIGN, NULL);
+	if (!skdev->msgbuf_cache)
+		goto err_out;
+	WARN_ONCE(kmem_cache_size(skdev->msgbuf_cache) < size,
+		  "skd-msgbuf: %d < %zd\n",
+		  kmem_cache_size(skdev->msgbuf_cache), size);
+	size = skd_sgs_per_request * sizeof(struct fit_sg_descriptor);
+	skdev->sglist_cache = kmem_cache_create("skd-sglist", size, 0,
+						SLAB_HWCACHE_ALIGN, NULL);
+	if (!skdev->sglist_cache)
+		goto err_out;
+	WARN_ONCE(kmem_cache_size(skdev->sglist_cache) < size,
+		  "skd-sglist: %d < %zd\n",
+		  kmem_cache_size(skdev->sglist_cache), size);
+	size = SKD_N_INTERNAL_BYTES;
+	skdev->databuf_cache = kmem_cache_create("skd-databuf", size, 0,
+						 SLAB_HWCACHE_ALIGN, NULL);
+	if (!skdev->databuf_cache)
+		goto err_out;
+	WARN_ONCE(kmem_cache_size(skdev->databuf_cache) < size,
+		  "skd-databuf: %d < %zd\n",
+		  kmem_cache_size(skdev->databuf_cache), size);
+
 	dev_dbg(&skdev->pdev->dev, "skcomp\n");
 	rc = skd_cons_skcomp(skdev);
 	if (rc < 0)
@@ -2986,31 +3063,21 @@ static void skd_free_skmsg(struct skd_device *skdev)
 
 static void skd_free_sksb(struct skd_device *skdev)
 {
-	struct skd_special_context *skspcl;
-	u32 nbytes;
-
-	skspcl = &skdev->internal_skspcl;
-
-	if (skspcl->data_buf != NULL) {
-		nbytes = SKD_N_INTERNAL_BYTES;
+	struct skd_special_context *skspcl = &skdev->internal_skspcl;
 
-		pci_free_consistent(skdev->pdev, nbytes,
-				    skspcl->data_buf, skspcl->db_dma_address);
-	}
+	skd_free_dma(skdev, skdev->databuf_cache, skspcl->data_buf,
+		     skspcl->db_dma_address, DMA_BIDIRECTIONAL);
 
 	skspcl->data_buf = NULL;
 	skspcl->db_dma_address = 0;
 
-	if (skspcl->msg_buf != NULL) {
-		nbytes = SKD_N_SPECIAL_FITMSG_BYTES;
-		pci_free_consistent(skdev->pdev, nbytes,
-				    skspcl->msg_buf, skspcl->mb_dma_address);
-	}
+	skd_free_dma(skdev, skdev->msgbuf_cache, skspcl->msg_buf,
+		     skspcl->mb_dma_address, DMA_TO_DEVICE);
 
 	skspcl->msg_buf = NULL;
 	skspcl->mb_dma_address = 0;
 
-	skd_free_sg_list(skdev, skspcl->req.sksg_list, 1,
+	skd_free_sg_list(skdev, skspcl->req.sksg_list,
 			 skspcl->req.sksg_dma_address);
 
 	skspcl->req.sksg_list = NULL;
@@ -3056,6 +3123,10 @@ static void skd_destruct(struct skd_device *skdev)
 	dev_dbg(&skdev->pdev->dev, "skcomp\n");
 	skd_free_skcomp(skdev);
 
+	kmem_cache_destroy(skdev->databuf_cache);
+	kmem_cache_destroy(skdev->sglist_cache);
+	kmem_cache_destroy(skdev->msgbuf_cache);
+
 	dev_dbg(&skdev->pdev->dev, "skdev\n");
 	kfree(skdev);
 }
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 53/55] skd: Remove several local variables
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (51 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 52/55] skd: Reduce memory usage Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 54/55] skd: Optimize locking Bart Van Assche
                   ` (3 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

This patch does not change any functionality but makes the code
more concise.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 37 +++++++------------------------------
 1 file changed, 7 insertions(+), 30 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 610c8979dc7e..a732bb8040f4 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -44,12 +44,6 @@
 static int skd_dbg_level;
 static int skd_isr_comp_limit = 4;
 
-enum {
-	SKD_FLUSH_INITIALIZER,
-	SKD_FLUSH_ZERO_SIZE_FIRST,
-	SKD_FLUSH_DATA_SECOND,
-};
-
 #define SKD_ASSERT(expr) \
 	do { \
 		if (unlikely(!(expr))) { \
@@ -497,31 +491,15 @@ static void skd_process_request(struct request *req, bool last)
 	struct skd_request_context *const skreq = blk_mq_rq_to_pdu(req);
 	struct skd_scsi_request *scsi_req;
 	unsigned long flags;
-	unsigned long io_flags;
-	u32 lba;
-	u32 count;
-	int data_dir;
-	__be64 be_dmaa;
-	int flush, fua;
+	const u32 lba = blk_rq_pos(req);
+	const u32 count = blk_rq_sectors(req);
+	const int data_dir = rq_data_dir(req);
 
 	WARN_ONCE(tag >= skd_max_queue_depth, "%#x > %#x (nr_requests = %lu)\n",
 		  tag, skd_max_queue_depth, q->nr_requests);
 
 	SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
 
-	flush = fua = 0;
-
-	lba = (u32)blk_rq_pos(req);
-	count = blk_rq_sectors(req);
-	data_dir = rq_data_dir(req);
-	io_flags = req->cmd_flags;
-
-	if (req_op(req) == REQ_OP_FLUSH)
-		flush++;
-
-	if (io_flags & REQ_FUA)
-		fua++;
-
 	dev_dbg(&skdev->pdev->dev,
 		"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba,
 		lba, count, count, data_dir);
@@ -568,19 +546,18 @@ static void skd_process_request(struct request *req, bool last)
 	scsi_req = &skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
 	memset(scsi_req, 0, sizeof(*scsi_req));
 
-	be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
-
 	scsi_req->hdr.tag = skreq->id;
-	scsi_req->hdr.sg_list_dma_address = be_dmaa;
+	scsi_req->hdr.sg_list_dma_address =
+		cpu_to_be64(skreq->sksg_dma_address);
 
-	if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
+	if (req_op(req) == REQ_OP_FLUSH) {
 		skd_prep_zerosize_flush_cdb(scsi_req, skreq);
 		SKD_ASSERT(skreq->flush_cmd == 1);
 	} else {
 		skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
 	}
 
-	if (fua)
+	if (req->cmd_flags & REQ_FUA)
 		scsi_req->cdb[1] |= SKD_FUA_NV;
 
 	scsi_req->hdr.sg_list_len_bytes = cpu_to_be32(skreq->sg_byte_count);
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 54/55] skd: Optimize locking
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (52 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 53/55] skd: Remove several local variables Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:13 ` [PATCH 55/55] skd: Bump driver version Bart Van Assche
                   ` (2 subsequent siblings)
  56 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche, Hannes Reinecke, Johannes Thumshirn

Only take skdev->lock when it is necessary, namely when a FIT message
can be shared between requests. With skd_max_req_per_msg == 1 every
request gets its own FIT message, so skdev->skmsg is never shared and
the lock can be skipped in the hot path.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/block/skd_main.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a732bb8040f4..bcd8df0bf203 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -490,7 +490,7 @@ static void skd_process_request(struct request *req, bool last)
 	const u32 tag = blk_mq_unique_tag(req);
 	struct skd_request_context *const skreq = blk_mq_rq_to_pdu(req);
 	struct skd_scsi_request *scsi_req;
-	unsigned long flags;
+	unsigned long flags = 0;
 	const u32 lba = blk_rq_pos(req);
 	const u32 count = blk_rq_sectors(req);
 	const int data_dir = rq_data_dir(req);
@@ -525,9 +525,13 @@ static void skd_process_request(struct request *req, bool last)
 				   sizeof(struct fit_sg_descriptor),
 				   DMA_TO_DEVICE);
 
-	spin_lock_irqsave(&skdev->lock, flags);
 	/* Either a FIT msg is in progress or we have to start one. */
-	skmsg = skdev->skmsg;
+	if (skd_max_req_per_msg == 1) {
+		skmsg = NULL;
+	} else {
+		spin_lock_irqsave(&skdev->lock, flags);
+		skmsg = skdev->skmsg;
+	}
 	if (!skmsg) {
 		skmsg = &skdev->skmsg_table[tag];
 		skdev->skmsg = skmsg;
@@ -574,11 +578,16 @@ static void skd_process_request(struct request *req, bool last)
 	/*
 	 * If the FIT msg buffer is full send it.
 	 */
-	if (last || fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+	if (skd_max_req_per_msg == 1) {
 		skd_send_fitmsg(skdev, skmsg);
-		skdev->skmsg = NULL;
+	} else {
+		if (last ||
+		    fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+			skd_send_fitmsg(skdev, skmsg);
+			skdev->skmsg = NULL;
+		}
+		spin_unlock_irqrestore(&skdev->lock, flags);
 	}
-	spin_unlock_irqrestore(&skdev->lock, flags);
 }
 
 static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 55/55] skd: Bump driver version
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (53 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 54/55] skd: Optimize locking Bart Van Assche
@ 2017-08-17 20:13 ` Bart Van Assche
  2017-08-17 20:45   ` Jens Axboe
  2017-08-17 20:44 ` [PATCH 00/55] Convert skd driver to blk-mq Jens Axboe
  2017-08-18 14:46 ` Jens Axboe
  56 siblings, 1 reply; 60+ messages in thread
From: Bart Van Assche @ 2017-08-17 20:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali,
	Bart Van Assche

Bump the driver version. Remove the build ID because build IDs do
not make sense for an upstream kernel driver. Keep the driver
version in the module information but do not report it during every
load, unload or probe.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
---
 drivers/block/skd_main.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index bcd8df0bf203..a61c7a3a5557 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -53,14 +53,13 @@ static int skd_isr_comp_limit = 4;
 	} while (0)
 
 #define DRV_NAME "skd"
-#define DRV_VERSION "2.2.1"
-#define DRV_BUILD_ID "0260"
+#define DRV_VERSION "3.0.0"
 #define PFX DRV_NAME ": "
 
 MODULE_LICENSE("GPL");
 
-MODULE_DESCRIPTION("STEC s1120 PCIe SSD block driver (b" DRV_BUILD_ID ")");
-MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
+MODULE_DESCRIPTION("STEC s1120 PCIe SSD block driver");
+MODULE_VERSION(DRV_VERSION);
 
 #define PCI_VENDOR_ID_STEC      0x1B39
 #define PCI_DEVICE_ID_S1120     0x0001
@@ -3206,10 +3205,8 @@ static int skd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	char pci_str[32];
 	struct skd_device *skdev;
 
-	dev_info(&pdev->dev, "STEC s1120 Driver(%s) version %s-b%s\n",
-		 DRV_NAME, DRV_VERSION, DRV_BUILD_ID);
-	dev_info(&pdev->dev, "vendor=%04X device=%04x\n", pdev->vendor,
-		 pdev->device);
+	dev_dbg(&pdev->dev, "vendor=%04X device=%04x\n", pdev->vendor,
+		pdev->device);
 
 	rc = pci_enable_device(pdev);
 	if (rc)
@@ -3664,8 +3661,6 @@ static int __init skd_init(void)
 	BUILD_BUG_ON(offsetof(struct skd_msg_buf, scsi) != 64);
 	BUILD_BUG_ON(sizeof(struct skd_msg_buf) != SKD_N_FITMSG_BYTES);
 
-	pr_info(PFX " v%s-b%s loaded\n", DRV_VERSION, DRV_BUILD_ID);
-
 	switch (skd_isr_type) {
 	case SKD_IRQ_LEGACY:
 	case SKD_IRQ_MSI:
@@ -3714,8 +3709,6 @@ static int __init skd_init(void)
 
 static void __exit skd_exit(void)
 {
-	pr_info(PFX " v%s-b%s unloading\n", DRV_VERSION, DRV_BUILD_ID);
-
 	pci_unregister_driver(&skd_driver);
 
 	if (skd_major)
-- 
2.14.0

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/55] Convert skd driver to blk-mq
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (54 preceding siblings ...)
  2017-08-17 20:13 ` [PATCH 55/55] skd: Bump driver version Bart Van Assche
@ 2017-08-17 20:44 ` Jens Axboe
  2017-08-18 14:46 ` Jens Axboe
  56 siblings, 0 replies; 60+ messages in thread
From: Jens Axboe @ 2017-08-17 20:44 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali

On 08/17/2017 02:12 PM, Bart Van Assche wrote:
> Hello Jens,
> 
> As you know all existing single queue block drivers have to be converted
> to blk-mq before the single queue block layer can be removed. Hence this
> patch series that converts the skd (sTec s1120) driver to blk-mq. As the
> following performance numbers show, this patch series does not affect
> performance of the skd driver significantly:

Awesome, thanks Bart!
 
>  MAINTAINERS               |    6 +
>  block/blk-core.c          |    2 +-
>  drivers/block/skd_main.c  | 3196 ++++++++++++---------------------------------
>  drivers/block/skd_s1120.h |   38 +-
>  4 files changed, 846 insertions(+), 2396 deletions(-)

Outside of killing yet another user of the legacy stuff, this right there
is the real win.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 55/55] skd: Bump driver version
  2017-08-17 20:13 ` [PATCH 55/55] skd: Bump driver version Bart Van Assche
@ 2017-08-17 20:45   ` Jens Axboe
  0 siblings, 0 replies; 60+ messages in thread
From: Jens Axboe @ 2017-08-17 20:45 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali

On 08/17/2017 02:13 PM, Bart Van Assche wrote:
> Bump the driver version. Remove the build ID because build IDs do
> not make sense for an upstream kernel driver. Keep the driver
> version in the module information but do not report it during every
> load, unload or probe.

Just kill the version completely.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/55] Convert skd driver to blk-mq
  2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
                   ` (55 preceding siblings ...)
  2017-08-17 20:44 ` [PATCH 00/55] Convert skd driver to blk-mq Jens Axboe
@ 2017-08-18 14:46 ` Jens Axboe
  2017-08-18 15:05   ` Bart Van Assche
  56 siblings, 1 reply; 60+ messages in thread
From: Jens Axboe @ 2017-08-18 14:46 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Akhil Bhansali

On Thu, Aug 17 2017, Bart Van Assche wrote:
> Hello Jens,
> 
> As you know all existing single queue block drivers have to be converted
> to blk-mq before the single queue block layer can be removed. Hence this
> patch series that converts the skd (sTec s1120) driver to blk-mq. As the
> following performance numbers show, this patch series does not affect
> performance of the skd driver significantly:

Applied for 4.14. Would still appreciate a followup patch to kill the
versioning nonsense.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/55] Convert skd driver to blk-mq
  2017-08-18 14:46 ` Jens Axboe
@ 2017-08-18 15:05   ` Bart Van Assche
  0 siblings, 0 replies; 60+ messages in thread
From: Bart Van Assche @ 2017-08-18 15:05 UTC (permalink / raw)
  To: axboe; +Cc: hch, linux-block, Damien Le Moal, Akhil Bhansali

On Fri, 2017-08-18 at 08:46 -0600, Jens Axboe wrote:
> On Thu, Aug 17 2017, Bart Van Assche wrote:
> > As you know all existing single queue block drivers have to be converted
> > to blk-mq before the single queue block layer can be removed. Hence this
> > patch series that converts the skd (sTec s1120) driver to blk-mq. As the
> > following performance numbers show, this patch series does not affect
> > performance of the skd driver significantly:
> 
> Applied for 4.14. Would still appreciate a followup patch to kill the
> versioning nonsense.

Thanks Jens! I will post such a follow-up later today.

Best regards,

Bart.

^ permalink raw reply	[flat|nested] 60+ messages in thread

end of thread, other threads:[~2017-08-18 15:05 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-17 20:12 [PATCH 00/55] Convert skd driver to blk-mq Bart Van Assche
2017-08-17 20:12 ` [PATCH 01/55] block: Relax a check in blk_start_queue() Bart Van Assche
2017-08-17 20:12 ` [PATCH 02/55] skd: Avoid that module unloading triggers a use-after-free Bart Van Assche
2017-08-17 20:12 ` [PATCH 03/55] skd: Submit requests to firmware before triggering the doorbell Bart Van Assche
2017-08-17 20:12 ` [PATCH 04/55] skd: Switch to GPLv2 Bart Van Assche
2017-08-17 20:12 ` [PATCH 05/55] skd: Update maintainer information Bart Van Assche
2017-08-17 20:12 ` [PATCH 06/55] skd: Remove unneeded #include directives Bart Van Assche
2017-08-17 20:12 ` [PATCH 07/55] skd: Remove ESXi code Bart Van Assche
2017-08-17 20:12 ` [PATCH 08/55] skd: Remove unnecessary blank lines Bart Van Assche
2017-08-17 20:12 ` [PATCH 09/55] skd: Avoid that gcc 7 warns about fall-through when building with W=1 Bart Van Assche
2017-08-17 20:12 ` [PATCH 10/55] skd: Fix spelling in a source code comment Bart Van Assche
2017-08-17 20:12 ` [PATCH 11/55] skd: Fix a function name in a comment Bart Van Assche
2017-08-17 20:12 ` [PATCH 12/55] skd: Remove set-but-not-used local variables Bart Van Assche
2017-08-17 20:12 ` [PATCH 13/55] skd: Remove a set-but-not-used variable from struct skd_device Bart Van Assche
2017-08-17 20:12 ` [PATCH 14/55] skd: Remove useless barrier() calls Bart Van Assche
2017-08-17 20:12 ` [PATCH 15/55] skd: Switch from the pr_*() to the dev_*() logging functions Bart Van Assche
2017-08-17 20:12 ` [PATCH 16/55] skd: Fix endianness annotations Bart Van Assche
2017-08-17 20:13 ` [PATCH 17/55] skd: Document locking assumptions Bart Van Assche
2017-08-17 20:13 ` [PATCH 18/55] skd: Introduce the symbolic constant SKD_MAX_REQ_PER_MSG Bart Van Assche
2017-08-17 20:13 ` [PATCH 19/55] skd: Introduce SKD_SKCOMP_SIZE Bart Van Assche
2017-08-17 20:13 ` [PATCH 20/55] skd: Fix size argument in skd_free_skcomp() Bart Van Assche
2017-08-17 20:13 ` [PATCH 21/55] skd: Reorder the code in skd_process_request() Bart Van Assche
2017-08-17 20:13 ` [PATCH 22/55] skd: Simplify the code for deciding whether or not to send a FIT msg Bart Van Assche
2017-08-17 20:13 ` [PATCH 23/55] skd: Simplify the code for allocating DMA message buffers Bart Van Assche
2017-08-17 20:13 ` [PATCH 24/55] skd: Use a structure instead of hardcoding structure offsets Bart Van Assche
2017-08-17 20:13 ` [PATCH 25/55] skd: Check structure sizes at build time Bart Van Assche
2017-08-17 20:13 ` [PATCH 26/55] skd: Use __packed only when needed Bart Van Assche
2017-08-17 20:13 ` [PATCH 27/55] skd: Make the skd_isr() code more brief Bart Van Assche
2017-08-17 20:13 ` [PATCH 28/55] skd: Use ARRAY_SIZE() where appropriate Bart Van Assche
2017-08-17 20:13 ` [PATCH 29/55] skd: Simplify the code for handling data direction Bart Van Assche
2017-08-17 20:13 ` [PATCH 30/55] skd: Remove superfluous initializations from skd_isr_completion_posted() Bart Van Assche
2017-08-17 20:13 ` [PATCH 31/55] skd: Drop second argument of skd_recover_requests() Bart Van Assche
2017-08-17 20:13 ` [PATCH 32/55] skd: Use for_each_sg() Bart Van Assche
2017-08-17 20:13 ` [PATCH 33/55] skd: Remove a redundant init_timer() call Bart Van Assche
2017-08-17 20:13 ` [PATCH 34/55] skd: Remove superfluous occurrences of the 'volatile' keyword Bart Van Assche
2017-08-17 20:13 ` [PATCH 35/55] skd: Use kcalloc() instead of kzalloc() with multiply Bart Van Assche
2017-08-17 20:13 ` [PATCH 36/55] skd: Use symbolic names for SCSI opcodes Bart Van Assche
2017-08-17 20:13 ` [PATCH 37/55] skd: Move a function definition Bart Van Assche
2017-08-17 20:13 ` [PATCH 38/55] skd: Rework request failing code path Bart Van Assche
2017-08-17 20:13 ` [PATCH 39/55] skd: Convert explicit skd_request_fn() calls Bart Van Assche
2017-08-17 20:13 ` [PATCH 40/55] skd: Remove SG IO support Bart Van Assche
2017-08-17 20:13 ` [PATCH 41/55] skd: Remove dead code Bart Van Assche
2017-08-17 20:13 ` [PATCH 42/55] skd: Initialize skd_special_context.req.n_sg to one Bart Van Assche
2017-08-17 20:13 ` [PATCH 43/55] skd: Enable request tags for the block layer queue Bart Van Assche
2017-08-17 20:13 ` [PATCH 44/55] skd: Convert several per-device scalar variables into atomics Bart Van Assche
2017-08-17 20:13 ` [PATCH 45/55] skd: Introduce skd_process_request() Bart Van Assche
2017-08-17 20:13 ` [PATCH 46/55] skd: Split skd_recover_requests() Bart Van Assche
2017-08-17 20:13 ` [PATCH 47/55] skd: Move skd_free_sg_list() up Bart Van Assche
2017-08-17 20:13 ` [PATCH 48/55] skd: Coalesce struct request and struct skd_request_context Bart Van Assche
2017-08-17 20:13 ` [PATCH 49/55] skd: Convert to blk-mq Bart Van Assche
2017-08-17 20:13 ` [PATCH 50/55] skd: Switch to block layer timeout mechanism Bart Van Assche
2017-08-17 20:13 ` [PATCH 51/55] skd: Remove skd_device.in_flight Bart Van Assche
2017-08-17 20:13 ` [PATCH 52/55] skd: Reduce memory usage Bart Van Assche
2017-08-17 20:13 ` [PATCH 53/55] skd: Remove several local variables Bart Van Assche
2017-08-17 20:13 ` [PATCH 54/55] skd: Optimize locking Bart Van Assche
2017-08-17 20:13 ` [PATCH 55/55] skd: Bump driver version Bart Van Assche
2017-08-17 20:45   ` Jens Axboe
2017-08-17 20:44 ` [PATCH 00/55] Convert skd driver to blk-mq Jens Axboe
2017-08-18 14:46 ` Jens Axboe
2017-08-18 15:05   ` Bart Van Assche
