* [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
@ 2017-07-13 21:12 Mike Snitzer
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s) Mike Snitzer
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Mike Snitzer @ 2017-07-13 21:12 UTC (permalink / raw)
  To: hch; +Cc: dm-devel, linux-block, linux-scsi

Hi,

Please review the 2 patches in this series.  I'm on the fence about
whether there is any point re-enabling "dm-mq stacked on old
.request_fn device(s)" -- especially given the awkward details
documented in the 1/2 patch header.

I welcome any feedback on this, thanks!

BTW, I tested these changes using mptest:
 git://github.com/snitm/mptest.git

with the following permutations ('runtest' script tweaked before each
permutation):

echo N > /sys/module/scsi_mod/parameters/use_blk_mq
echo N > /sys/module/dm_mod/parameters/use_blk_mq

echo Y > /sys/module/scsi_mod/parameters/use_blk_mq
echo Y > /sys/module/dm_mod/parameters/use_blk_mq

echo Y > /sys/module/scsi_mod/parameters/use_blk_mq
echo N > /sys/module/dm_mod/parameters/use_blk_mq

echo N > /sys/module/scsi_mod/parameters/use_blk_mq
echo Y > /sys/module/dm_mod/parameters/use_blk_mq
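The four permutations above could also be scripted; here is a rough dry-run
sketch (it only prints what would be done -- the sysfs writes and the
'runtest' invocation from mptest are assumptions about the real test box):

```shell
# Dry-run sketch of the four use_blk_mq permutations tested above.
# On real hardware the printed echo lines would write to sysfs and
# mptest's 'runtest' script would be invoked between them.
for scsi_mq in N Y; do
    for dm_mq in N Y; do
        echo "== scsi_mod.use_blk_mq=$scsi_mq dm_mod.use_blk_mq=$dm_mq =="
        echo "would run: echo $scsi_mq > /sys/module/scsi_mod/parameters/use_blk_mq"
        echo "would run: echo $dm_mq > /sys/module/dm_mod/parameters/use_blk_mq"
        # ./runtest   # hypothetical invocation on a real test box
    done
done
```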

Mike Snitzer (2):
  dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions

 drivers/md/dm-mpath.c         | 16 ++++++++++++++--
 drivers/md/dm-rq.c            | 13 +++++--------
 drivers/md/dm-table.c         | 31 +++----------------------------
 drivers/md/dm-target.c        |  4 ++--
 drivers/md/dm.h               |  1 -
 include/linux/device-mapper.h |  3 ++-
 6 files changed, 26 insertions(+), 42 deletions(-)

-- 
2.10.1

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  2017-07-13 21:12 [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions Mike Snitzer
@ 2017-07-13 21:12 ` Mike Snitzer
  2017-07-14  7:22     ` Christoph Hellwig
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 2/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions Mike Snitzer
  2017-07-14  7:12 ` [for-4.14 RFC PATCH 0/2] " Christoph Hellwig
  2 siblings, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2017-07-13 21:12 UTC (permalink / raw)
  To: hch; +Cc: dm-devel, linux-block, linux-scsi

Conditionally use __blk_put_request() or blk_put_request() instead of
just blk_put_request() in multipath_release_clone().

Otherwise a deadlock will occur because scsi_end_request() will take the
clone request's queue_lock, around its call to blk_finish_request(), and
then the later blk_put_request() also tries to take it:

[12749.916332]  queued_spin_lock_slowpath+0xb/0xf
[12749.916335]  _raw_spin_lock_irqsave+0x37/0x40
[12749.916339]  blk_put_request+0x39/0x60
[12749.916342]  multipath_release_clone+0xe/0x10 [dm_multipath]
[12749.916350]  dm_softirq_done+0x156/0x240 [dm_mod]
[12749.916353]  __blk_mq_complete_request+0x90/0x140
[12749.916355]  blk_mq_complete_request+0x16/0x20
[12749.916360]  dm_complete_request+0x23/0x40 [dm_mod]
[12749.916365]  end_clone_request+0x1d/0x20 [dm_mod]
[12749.916367]  blk_finish_request+0x83/0x120
[12749.916370]  scsi_end_request+0x12d/0x1d0
[12749.916371]  scsi_io_completion+0x13c/0x630
[12749.916374]  ? set_next_entity+0x7c/0x780
[12749.916376]  scsi_finish_command+0xd9/0x120
[12749.916378]  scsi_softirq_done+0x12a/0x150
[12749.916380]  blk_done_softirq+0x9e/0xd0
[12749.916382]  __do_softirq+0xc9/0x269
[12749.916384]  run_ksoftirqd+0x29/0x50
[12749.916385]  smpboot_thread_fn+0x110/0x160
[12749.916387]  kthread+0x109/0x140
[12749.916389]  ? sort_range+0x30/0x30
[12749.916390]  ? kthread_park+0x60/0x60
[12749.916391]  ret_from_fork+0x25/0x30

This "fix" is gross in that the long-term fitness of stacking blk-mq DM
multipath (dm-mq) on top of old .request_fn devices is questionable.  The
above stack trace shows just how ugly it is to have old .request_fn SCSI
code cascade into blk-mq code during DM multipath request completion.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-mpath.c         | 16 ++++++++++++++--
 drivers/md/dm-rq.c            |  8 +++++---
 drivers/md/dm-target.c        |  4 ++--
 include/linux/device-mapper.h |  3 ++-
 4 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 0e8ab5b..34cf7b6 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -520,9 +520,21 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
 	return DM_MAPIO_REMAPPED;
 }
 
-static void multipath_release_clone(struct request *clone)
+static void multipath_release_clone(struct dm_target *ti, struct request *clone)
 {
-	blk_put_request(clone);
+	struct multipath *m = ti->private;
+	struct request_queue *q = clone->q;
+
+	if (!q->mq_ops && m->queue_mode == DM_TYPE_MQ_REQUEST_BASED) {
+		/*
+		 * dm-mq on .request_fn already holds clone->q->queue_lock
+		 * via blk_finish_request()...
+		 * - true for .request_fn SCSI, but is it _always_ true?
+		 */
+		lockdep_assert_held(q->queue_lock);
+		__blk_put_request(q, clone);
+	} else
+		blk_put_request(clone);
 }
 
 /*
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index c6ebc5b..95bb44c 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -220,11 +220,12 @@ static void dm_end_request(struct request *clone, blk_status_t error)
 {
 	int rw = rq_data_dir(clone);
 	struct dm_rq_target_io *tio = clone->end_io_data;
+	struct dm_target *ti = tio->ti;
 	struct mapped_device *md = tio->md;
 	struct request *rq = tio->orig;
 
 	blk_rq_unprep_clone(clone);
-	tio->ti->type->release_clone_rq(clone);
+	ti->type->release_clone_rq(ti, clone);
 
 	rq_end_stats(md, rq);
 	if (!rq->q->mq_ops)
@@ -267,6 +268,7 @@ static void dm_mq_delay_requeue_request(struct request *rq, unsigned long msecs)
 
 static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_requeue)
 {
+	struct dm_target *ti = tio->ti;
 	struct mapped_device *md = tio->md;
 	struct request *rq = tio->orig;
 	int rw = rq_data_dir(rq);
@@ -274,7 +276,7 @@ static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_
 	rq_end_stats(md, rq);
 	if (tio->clone) {
 		blk_rq_unprep_clone(tio->clone);
-		tio->ti->type->release_clone_rq(tio->clone);
+		ti->type->release_clone_rq(ti, tio->clone);
 	}
 
 	if (!rq->q->mq_ops)
@@ -488,7 +490,7 @@ static int map_request(struct dm_rq_target_io *tio)
 	case DM_MAPIO_REMAPPED:
 		if (setup_clone(clone, rq, tio, GFP_ATOMIC)) {
 			/* -ENOMEM */
-			ti->type->release_clone_rq(clone);
+			ti->type->release_clone_rq(ti, clone);
 			return DM_MAPIO_REQUEUE;
 		}
 
diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
index c0d7e60..adbd17b 100644
--- a/drivers/md/dm-target.c
+++ b/drivers/md/dm-target.c
@@ -138,12 +138,12 @@ static int io_err_clone_and_map_rq(struct dm_target *ti, struct request *rq,
 	return DM_MAPIO_KILL;
 }
 
-static void io_err_release_clone_rq(struct request *clone)
+static void io_err_release_clone_rq(struct dm_target *ti, struct request *clone)
 {
 }
 
 static long io_err_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
-		long nr_pages, void **kaddr, pfn_t *pfn)
+				     long nr_pages, void **kaddr, pfn_t *pfn)
 {
 	return -EIO;
 }
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 0c1b50ad..f2ca0ab 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -61,7 +61,8 @@ typedef int (*dm_clone_and_map_request_fn) (struct dm_target *ti,
 					    struct request *rq,
 					    union map_info *map_context,
 					    struct request **clone);
-typedef void (*dm_release_clone_request_fn) (struct request *clone);
+typedef void (*dm_release_clone_request_fn) (struct dm_target *ti,
+					     struct request *clone);
 
 /*
  * Returns:
-- 
2.10.1


* [for-4.14 RFC PATCH 2/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
  2017-07-13 21:12 [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions Mike Snitzer
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s) Mike Snitzer
@ 2017-07-13 21:12 ` Mike Snitzer
  2017-07-14  7:12 ` [for-4.14 RFC PATCH 0/2] " Christoph Hellwig
  2 siblings, 0 replies; 11+ messages in thread
From: Mike Snitzer @ 2017-07-13 21:12 UTC (permalink / raw)
  To: hch; +Cc: dm-devel, linux-block, linux-scsi

Currently if dm_mod.use_blk_mq=Y (or a DM-multipath table is loaded with
queue_mode=mq) and all underlying devices are not blk-mq, DM core will
fail with the error:
  "table load rejected: all devices are not blk-mq request-stackable"

This all-blk-mq-or-nothing approach is too cut-throat because it
prevents access to data stored on what could have been a previously
working multipath setup (e.g. if a user decides to try dm_mod.use_blk_mq=Y
or queue_mode=mq only to find their underlying devices aren't blk-mq).

This restriction, and others like not being able to stack a top-level
blk-mq request_queue on top of old .request_fn device(s), can be removed
thanks to commit eb8db831be ("dm: always defer request allocation to the
owner of the request_queue").

Now that request-based DM always relies on the target (multipath) to
call blk_get_request() to create a clone request, it is possible to
support all 4 permutations of stacking old .request_fn and blk-mq
request_queues.

Depends-on: eb8db831be ("dm: always defer request allocation to the owner of the request_queue")
Reported-by: Ewan Milne <emilne@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-rq.c    |  5 -----
 drivers/md/dm-table.c | 31 +++----------------------------
 drivers/md/dm.h       |  1 -
 3 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 95bb44c..d64677b 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -782,11 +782,6 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	struct dm_target *immutable_tgt;
 	int err;
 
-	if (!dm_table_all_blk_mq_devices(t)) {
-		DMERR("request-based dm-mq may only be stacked on blk-mq device(s)");
-		return -EINVAL;
-	}
-
 	md->tag_set = kzalloc_node(sizeof(struct blk_mq_tag_set), GFP_KERNEL, md->numa_node_id);
 	if (!md->tag_set)
 		return -ENOMEM;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index a39bcd9..e630768 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -46,7 +46,6 @@ struct dm_table {
 
 	bool integrity_supported:1;
 	bool singleton:1;
-	bool all_blk_mq:1;
 	unsigned integrity_added:1;
 
 	/*
@@ -910,7 +909,6 @@ static int dm_table_determine_type(struct dm_table *t)
 {
 	unsigned i;
 	unsigned bio_based = 0, request_based = 0, hybrid = 0;
-	unsigned sq_count = 0, mq_count = 0;
 	struct dm_target *tgt;
 	struct dm_dev_internal *dd;
 	struct list_head *devices = dm_table_get_devices(t);
@@ -985,11 +983,9 @@ static int dm_table_determine_type(struct dm_table *t)
 		int srcu_idx;
 		struct dm_table *live_table = dm_get_live_table(t->md, &srcu_idx);
 
-		/* inherit live table's type and all_blk_mq */
-		if (live_table) {
+		/* inherit live table's type */
+		if (live_table)
 			t->type = live_table->type;
-			t->all_blk_mq = live_table->all_blk_mq;
-		}
 		dm_put_live_table(t->md, srcu_idx);
 		return 0;
 	}
@@ -999,25 +995,9 @@ static int dm_table_determine_type(struct dm_table *t)
 		struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
 
 		if (!blk_queue_stackable(q)) {
-			DMERR("table load rejected: including"
-			      " non-request-stackable devices");
+			DMERR("table load rejected: includes non-request-stackable devices");
 			return -EINVAL;
 		}
-
-		if (q->mq_ops)
-			mq_count++;
-		else
-			sq_count++;
-	}
-	if (sq_count && mq_count) {
-		DMERR("table load rejected: not all devices are blk-mq request-stackable");
-		return -EINVAL;
-	}
-	t->all_blk_mq = mq_count > 0;
-
-	if (t->type == DM_TYPE_MQ_REQUEST_BASED && !t->all_blk_mq) {
-		DMERR("table load rejected: all devices are not blk-mq request-stackable");
-		return -EINVAL;
 	}
 
 	return 0;
@@ -1067,11 +1047,6 @@ bool dm_table_request_based(struct dm_table *t)
 	return __table_type_request_based(dm_table_get_type(t));
 }
 
-bool dm_table_all_blk_mq_devices(struct dm_table *t)
-{
-	return t->all_blk_mq;
-}
-
 static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 38c84c0..c484c4d 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -70,7 +70,6 @@ struct dm_target *dm_table_get_immutable_target(struct dm_table *t);
 struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
 bool dm_table_bio_based(struct dm_table *t);
 bool dm_table_request_based(struct dm_table *t);
-bool dm_table_all_blk_mq_devices(struct dm_table *t);
 void dm_table_free_md_mempools(struct dm_table *t);
 struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t);
 
-- 
2.10.1


* Re: [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
  2017-07-13 21:12 [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions Mike Snitzer
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s) Mike Snitzer
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 2/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions Mike Snitzer
@ 2017-07-14  7:12 ` Christoph Hellwig
  2017-07-14 14:02   ` Mike Snitzer
  2 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2017-07-14  7:12 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: hch, dm-devel, linux-block, linux-scsi

Btw, we might want to expedite this to 4.13, as 4.13 now defaults
to blk-mq for SCSI, and this patch would make sure that dm keeps
on just working with that switch.


* Re: [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  2017-07-13 21:12 ` [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s) Mike Snitzer
@ 2017-07-14  7:22     ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2017-07-14  7:22 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: hch, dm-devel, linux-block, linux-scsi

The problem here is the following:

blk_finish_request must always be called with the queue lock held,
it even has an assert.

Without blk-mq used by dm-rq, dm uses the block softirq to execute the
completion, which means we always have a different execution context and
can take the queue lock again without issues.

With blk-mq used by dm-rq, the dm .complete handler that is the rough
equivalent of the softirq handler is called either directly if we are
on the same CPU, or using an IPI (hardirq) if not.  If this handler gets
called from a legacy request function it will be called with the
queue_lock held, but if it's called from a blk-mq driver or actually
uses the IPI no lock will be held.

When I did my blk-mq-only WIP patch for dm-mpath, my solution was to
remove the ->complete handler entirely and just run the whole dm
completion from the original hardirq context.  With that change I know
that for blk-mq we'll never hold the queue_lock (and the blk-mq request
free path doesn't care), and for legacy we always hold it, so
__blk_put_request can always be used.



* Re: [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
  2017-07-14  7:12 ` [for-4.14 RFC PATCH 0/2] " Christoph Hellwig
@ 2017-07-14 14:02   ` Mike Snitzer
  2017-07-15  8:44     ` Christoph Hellwig
  0 siblings, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2017-07-14 14:02 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: dm-devel, linux-block, linux-scsi

On Fri, Jul 14 2017 at  3:12am -0400,
Christoph Hellwig <hch@lst.de> wrote:

> Btw, we might want to expedite this to 4.13, as 4.13 now defaults
> to blk-mq for SCSI, and this patch would make sure that dm keeps
> on just working with that switch.

Don't think we need to rush anything in response to that change in
SCSI's default.  Old .request_fn DM multipath works happily on top of
blk-mq devices (so long as all paths are blk-mq).

Similarly, blk-mq DM multipath works on top of all blk-mq devices.

Lastly, old .request_fn DM multipath on top of old .request_fn paths
works fine.

It is just blk-mq DM multipath on top of old .request_fn paths that is
disallowed in current upstream code.

But again, I really don't see why we should even want/need to support
that mode... hence my question raised in this RFC.


* Re: [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  2017-07-14  7:22     ` Christoph Hellwig
@ 2017-07-14 14:19     ` Mike Snitzer
  2017-07-14 17:17       ` Ewan D. Milne
  -1 siblings, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2017-07-14 14:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: dm-devel, linux-block, linux-scsi

On Fri, Jul 14 2017 at  3:22am -0400,
Christoph Hellwig <hch@lst.de> wrote:

> The problem here is the following:
> 
> blk_finish_request must always be called with the queue lock held,
> it even has an assert.
> 
> Without blk-mq used by dm-rq, dm uses the block softirq to execute the
> completion, which means we always have a different execution context and
> can take the queue lock again without issues.
> 
> With blk-mq used by dm-rq, the dm .complete handler that is the rough
> equivalent of the softirq handler is called either directly if we are
> on the same CPU, or using an IPI (hardirq) if not.  If this handler gets
> called from a legacy request function it will be called with the
> queue_lock held, but if it's called from a blk-mq driver or actually
> uses the IPI no lock will be held.

Yeap, very well explained!  I found exactly that yesterday when I
developed this patch.  I stopped short of getting into those details in
my header though, but as you know it comes down to dm_complete_request's
blk-mq-vs-not branching (blk_mq_complete_request vs blk_complete_request).

> When I did my blk-mq-only WIP patch for dm-mpath, my solution was to
> remove the ->complete handler entirely and just run the whole dm
> completion from the original hardirq context.  With that change I know
> that for blk-mq we'll never hold the queue_lock (and the blk-mq request
> free path doesn't care), and for legacy we always hold it, so
> __blk_put_request can always be used.

Do you see a benefit to extracting that portion of your WIP patch
(removing the ->complete handler entirely)?

Or leave well enough alone and just continue to disable dm-mq's ability
to stack on .request_fn paths?

Given SCSI's switch to scsi-mq by default I cannot see value in propping
up stacking on the old .request_fn devices.

But interested to get your thoughts, thanks.


* Re: [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  2017-07-14 14:19     ` Mike Snitzer
@ 2017-07-14 17:17       ` Ewan D. Milne
  2017-07-14 21:15         ` Mike Snitzer
  0 siblings, 1 reply; 11+ messages in thread
From: Ewan D. Milne @ 2017-07-14 17:17 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Christoph Hellwig, dm-devel, linux-block, linux-scsi

On Fri, 2017-07-14 at 10:19 -0400, Mike Snitzer wrote:
> 
> Do you see a benefit to extracting that portion of your WIP patch
> (removing the ->complete handler entirely)?
> 
> Or leave well enough alone and just continue to disable dm-mq's ability
> to stack on .request_fn paths?
> 
> Given SCSI's switch to scsi-mq by default I cannot see value in propping
> up stacking on the old .request_fn devices.

So, the dm_mod.use_blk_mq flag is global, right?  I guess the question
is whether all of the block device types used on a system under DM are
supported under MQ.  If that is the case then we would be OK.

The other question is whether there are negative performance
consequences in any (corner?) cases with MQ that would result in it
being preferable to run in non-MQ mode (e.g. tag space with lpfc, did
we ever resolve that?) but the right approach there is to put the effort
into the MQ path going forward, as has been the case.

-Ewan


* Re: [for-4.14 RFC PATCH 1/2] dm rq: avoid deadlock if dm-mq is stacked on old .request_fn device(s)
  2017-07-14 17:17       ` Ewan D. Milne
@ 2017-07-14 21:15         ` Mike Snitzer
  0 siblings, 0 replies; 11+ messages in thread
From: Mike Snitzer @ 2017-07-14 21:15 UTC (permalink / raw)
  To: Ewan D. Milne; +Cc: Christoph Hellwig, dm-devel, linux-block, linux-scsi

On Fri, Jul 14 2017 at  1:17pm -0400,
Ewan D. Milne <emilne@redhat.com> wrote:

> On Fri, 2017-07-14 at 10:19 -0400, Mike Snitzer wrote:
> > 
> > Do you see a benefit to extracting that portion of your WIP patch
> > (removing the ->complete handler entirely)?
> > 
> > Or leave well enough alone and just continue to disable dm-mq's ability
> > to stack on .request_fn paths?
> > 
> > Given SCSI's switch to scsi-mq by default I cannot see value in propping
> > up stacking on the old .request_fn devices.
> 
> So, the dm_mod.use_blk_mq flag is global, right?  I guess the question
> is whether all of the block device types used on a system under DM are
> supported under MQ.  If that is the case then we would be OK.

I didn't quite understand Ewan's question so we talked in person.  His
concern was whether other DM targets needed to be worried about blk-mq
vs not.  I clarified that DM multipath is the only target that is
request-based and that it is fine with stacking on scsi-mq.  And all the
bio-based targets obviously stack just fine on scsi-mq devices.

> The other question is whether there are negative performance
> consequences in any (corner?) cases with MQ that would result in it
> being preferable to run in non-MQ mode (e.g. tag space with lpfc, did
> we ever resolve that?) but the right approach there is to put the effort
> into the MQ path going forward, as has been the case.

Yeap.


* Re: [for-4.14 RFC PATCH 0/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
  2017-07-14 14:02   ` Mike Snitzer
@ 2017-07-15  8:44     ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2017-07-15  8:44 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Christoph Hellwig, dm-devel, linux-block, linux-scsi

On Fri, Jul 14, 2017 at 10:02:06AM -0400, Mike Snitzer wrote:
> On Fri, Jul 14 2017 at  3:12am -0400,
> Christoph Hellwig <hch@lst.de> wrote:
> 
> Btw, we might want to expedite this to 4.13, as 4.13 now defaults
> to blk-mq for SCSI, and this patch would make sure that dm keeps
> on just working with that switch.
> 
> Don't think we need to rush anything in response to that change in
> SCSI's default.  Old .request_fn DM multipath works happily on top of
> blk-mq devices (so long as all paths are blk-mq).

You're right.  In that case I think we should just skip this series
and I'll dust off the patch to just kill the non-mq support for 4.14
if the switch of SCSI to default to mq works out for 4.13.

> It is just blk-mq DM multipath on top of old .request_fn paths that is
> disallowed in current upstream code.
> 
> But again, I really don't see why we should even want/need to support
> that mode... hence my question raised in this RFC.

I think this mode makes sense in the long run - to get rid of the
legacy request code in dm.  But as long as we keep both modes around
the use for it seems a bit questionable indeed.

