* [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
@ 2015-10-30 17:22 Jeff Moyer
  2015-11-01  0:32 ` Ming Lei
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff Moyer @ 2015-10-30 17:22 UTC (permalink / raw)
  To: axboe, Jason Luo; +Cc: linux-kernel, Guru Anbalagane, Feng Jin, tj, Ming Lei

Hi,

Zhangqing Luo reported long boot times on a system with thousands of
LUNs when scsi-mq was enabled.  He narrowed the problem down to
blk_mq_add_queue_tag_set, where every queue is frozen in order to set
the BLK_MQ_F_TAG_SHARED flag.  Each added device will freeze all queues
added before it in sequence, which involves waiting for an RCU grace
period for each one.  We don't need to do this.  After the second queue
is added, only new queues need to be initialized with the shared tag.
We can do that by percolating the flag up to the blk_mq_tag_set, and
updating the newly added queue's hctxs if the flag is set.

This problem was introduced by commit 0d2602ca30e41 (blk-mq: improve
support for shared tags maps).

Reported-and-tested-by: Jason Luo <zhangqing.luo@oracle.com>
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>

---
Changes from v1:
- addressed review comments from Ming, which simplified the patch

Jason, if you could sanity test this patch to make sure it still solves
your problem, that would be greatly appreciated.

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 85f0143..12f79af 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1673,7 +1673,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 	INIT_LIST_HEAD(&hctx->dispatch);
 	hctx->queue = q;
 	hctx->queue_num = hctx_idx;
-	hctx->flags = set->flags;
+	hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
 
 	blk_mq_init_cpu_notifier(&hctx->cpu_notifier,
 					blk_mq_hctx_notify, hctx);
@@ -1860,27 +1860,26 @@ static void blk_mq_map_swqueue(struct request_queue *q,
 	}
 }
 
-static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
+static void queue_set_hctx_shared(struct request_queue *q, bool shared)
 {
 	struct blk_mq_hw_ctx *hctx;
-	struct request_queue *q;
-	bool shared;
 	int i;
 
-	if (set->tag_list.next == set->tag_list.prev)
-		shared = false;
-	else
-		shared = true;
+	queue_for_each_hw_ctx(q, hctx, i) {
+		if (shared)
+			hctx->flags |= BLK_MQ_F_TAG_SHARED;
+		else
+			hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
+	}
+}
+
+static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set, bool shared)
+{
+	struct request_queue *q;
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		blk_mq_freeze_queue(q);
-
-		queue_for_each_hw_ctx(q, hctx, i) {
-			if (shared)
-				hctx->flags |= BLK_MQ_F_TAG_SHARED;
-			else
-				hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
-		}
+		queue_set_hctx_shared(q, shared);
 		blk_mq_unfreeze_queue(q);
 	}
 }
@@ -1891,7 +1890,12 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
 
 	mutex_lock(&set->tag_list_lock);
 	list_del_init(&q->tag_set_list);
-	blk_mq_update_tag_set_depth(set);
+	if (set->tag_list.next == set->tag_list.prev) {
+		/* just transitioned to unshared */
+		set->flags &= ~BLK_MQ_F_TAG_SHARED;
+		/* update existing queue */
+		blk_mq_update_tag_set_depth(set, false);
+	}
 	mutex_unlock(&set->tag_list_lock);
 }
 
@@ -1901,8 +1905,17 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
 	q->tag_set = set;
 
 	mutex_lock(&set->tag_list_lock);
+
+	/* Check to see if we're transitioning to shared (from 1 to 2 queues). */
+	if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
+		set->flags |= BLK_MQ_F_TAG_SHARED;
+		/* update existing queue */
+		blk_mq_update_tag_set_depth(set, true);
+	}
+	if (set->flags & BLK_MQ_F_TAG_SHARED)
+		queue_set_hctx_shared(q, true);
 	list_add_tail(&q->tag_set_list, &set->tag_list);
-	blk_mq_update_tag_set_depth(set);
+
 	mutex_unlock(&set->tag_list_lock);
 }
 

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
  2015-10-30 17:22 [patch, v2] blk-mq: avoid excessive boot delays with large lun counts Jeff Moyer
@ 2015-11-01  0:32 ` Ming Lei
  2015-11-02 14:04   ` Jeff Moyer
  0 siblings, 1 reply; 6+ messages in thread
From: Ming Lei @ 2015-11-01  0:32 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Jens Axboe, Jason Luo, Linux Kernel Mailing List,
	Guru Anbalagane, Feng Jin, Tejun Heo

On Sat, Oct 31, 2015 at 1:22 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Hi,
>
> Zhangqing Luo reported long boot times on a system with thousands of
> LUNs when scsi-mq was enabled.  He narrowed the problem down to
> blk_mq_add_queue_tag_set, where every queue is frozen in order to set
> the BLK_MQ_F_TAG_SHARED flag.  Each added device will freeze all queues
> added before it in sequence, which involves waiting for an RCU grace
> period for each one.  We don't need to do this.  After the second queue
> is added, only new queues need to be initialized with the shared tag.
> We can do that by percolating the flag up to the blk_mq_tag_set, and
> updating the newly added queue's hctxs if the flag is set.
>
> This problem was introduced by commit 0d2602ca30e41 (blk-mq: improve
> support for shared tags maps).
>
> Reported-and-tested-by: Jason Luo <zhangqing.luo@oracle.com>
> Signed-off-by: Jeff Moyer <jmoyer@redhat.com>

You can add
         Reviewed-by: Ming Lei <ming.lei@canonical.com>
if the following trivial issues (especially the 2nd one) are addressed.

>
> ---
> Changes from v1:
> - addressed review comments from Ming, which simplified the patch
>
> Jason, if you could sanity test this patch to make sure it still solves
> your problem, that would be greatly appreciated.
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 85f0143..12f79af 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1673,7 +1673,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
>         INIT_LIST_HEAD(&hctx->dispatch);
>         hctx->queue = q;
>         hctx->queue_num = hctx_idx;
> -       hctx->flags = set->flags;
> +       hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
>
>         blk_mq_init_cpu_notifier(&hctx->cpu_notifier,
>                                         blk_mq_hctx_notify, hctx);
> @@ -1860,27 +1860,26 @@ static void blk_mq_map_swqueue(struct request_queue *q,
>         }
>  }
>
> -static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
> +static void queue_set_hctx_shared(struct request_queue *q, bool shared)
>  {
>         struct blk_mq_hw_ctx *hctx;
> -       struct request_queue *q;
> -       bool shared;
>         int i;
>
> -       if (set->tag_list.next == set->tag_list.prev)
> -               shared = false;
> -       else
> -               shared = true;
> +       queue_for_each_hw_ctx(q, hctx, i) {
> +               if (shared)
> +                       hctx->flags |= BLK_MQ_F_TAG_SHARED;
> +               else
> +                       hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
> +       }
> +}
> +
> +static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set, bool shared)
> +{
> +       struct request_queue *q;
>
>         list_for_each_entry(q, &set->tag_list, tag_set_list) {
>                 blk_mq_freeze_queue(q);
> -
> -               queue_for_each_hw_ctx(q, hctx, i) {
> -                       if (shared)
> -                               hctx->flags |= BLK_MQ_F_TAG_SHARED;
> -                       else
> -                               hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
> -               }
> +               queue_set_hctx_shared(q, shared);
>                 blk_mq_unfreeze_queue(q);
>         }
>  }
> @@ -1891,7 +1890,12 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
>
>         mutex_lock(&set->tag_list_lock);
>         list_del_init(&q->tag_set_list);
> -       blk_mq_update_tag_set_depth(set);
> +       if (set->tag_list.next == set->tag_list.prev) {

list_is_singular() should be better.

> +               /* just transitioned to unshared */
> +               set->flags &= ~BLK_MQ_F_TAG_SHARED;
> +               /* update existing queue */
> +               blk_mq_update_tag_set_depth(set, false);
> +       }
>         mutex_unlock(&set->tag_list_lock);
>  }
>
> @@ -1901,8 +1905,17 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
>         q->tag_set = set;
>
>         mutex_lock(&set->tag_list_lock);
> +
> +       /* Check to see if we're transitioning to shared (from 1 to 2 queues). */
> +       if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
> +               set->flags |= BLK_MQ_F_TAG_SHARED;
> +               /* update existing queue */
> +               blk_mq_update_tag_set_depth(set, true);
> +       }
> +       if (set->flags & BLK_MQ_F_TAG_SHARED)

The above should be 'else if', otherwise the current queue will be set
twice.

> +               queue_set_hctx_shared(q, true);
>         list_add_tail(&q->tag_set_list, &set->tag_list);
> -       blk_mq_update_tag_set_depth(set);
> +
>         mutex_unlock(&set->tag_list_lock);
>  }
>



-- 
Ming Lei


* Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
  2015-11-01  0:32 ` Ming Lei
@ 2015-11-02 14:04   ` Jeff Moyer
  2015-11-03  1:12     ` Ming Lei
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff Moyer @ 2015-11-02 14:04 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Jason Luo, Linux Kernel Mailing List,
	Guru Anbalagane, Feng Jin, Tejun Heo

Ming Lei <tom.leiming@gmail.com> writes:

> You can add
>          Reviewed-by: Ming Lei <ming.lei@canonical.com>
> if the following trivial issues (especially the 2nd one) are addressed.

[snip]

>> @@ -1891,7 +1890,12 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
>>
>>         mutex_lock(&set->tag_list_lock);
>>         list_del_init(&q->tag_set_list);
>> -       blk_mq_update_tag_set_depth(set);
>> +       if (set->tag_list.next == set->tag_list.prev) {
>
> list_is_singular() should be better.

Didn't even know that existed.  Thanks.

>> +               /* just transitioned to unshared */
>> +               set->flags &= ~BLK_MQ_F_TAG_SHARED;
>> +               /* update existing queue */
>> +               blk_mq_update_tag_set_depth(set, false);
>> +       }
>>         mutex_unlock(&set->tag_list_lock);
>>  }
>>
>> @@ -1901,8 +1905,17 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
>>         q->tag_set = set;
>>
>>         mutex_lock(&set->tag_list_lock);
>> +
>> +       /* Check to see if we're transitioning to shared (from 1 to 2 queues). */
>> +       if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
>> +               set->flags |= BLK_MQ_F_TAG_SHARED;
>> +               /* update existing queue */
>> +               blk_mq_update_tag_set_depth(set, true);
>> +       }
>> +       if (set->flags & BLK_MQ_F_TAG_SHARED)
>
> The above should be 'else if', otherwise the current queue will be set
> twice.

I moved the list add below this to avoid that very issue.  See:

>> +               queue_set_hctx_shared(q, true);
>>         list_add_tail(&q->tag_set_list, &set->tag_list);
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This seemed the cleanest way to structure the code to avoid the double
walking of the hctx list for the current q.

-Jeff

>> -       blk_mq_update_tag_set_depth(set);
>> +
>>         mutex_unlock(&set->tag_list_lock);
>>  }
>>


* Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
  2015-11-02 14:04   ` Jeff Moyer
@ 2015-11-03  1:12     ` Ming Lei
  2015-11-03 13:27       ` Jeff Moyer
  0 siblings, 1 reply; 6+ messages in thread
From: Ming Lei @ 2015-11-03  1:12 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Jens Axboe, Jason Luo, Linux Kernel Mailing List,
	Guru Anbalagane, Feng Jin, Tejun Heo

On Mon, Nov 2, 2015 at 10:04 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Ming Lei <tom.leiming@gmail.com> writes:
>
>> You can add
>>          Reviewed-by: Ming Lei <ming.lei@canonical.com>
>> if the following trivial issues (especially the 2nd one) are addressed.
>
> [snip]
>
>>> @@ -1891,7 +1890,12 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
>>>
>>>         mutex_lock(&set->tag_list_lock);
>>>         list_del_init(&q->tag_set_list);
>>> -       blk_mq_update_tag_set_depth(set);
>>> +       if (set->tag_list.next == set->tag_list.prev) {
>>
>> list_is_singular() should be better.
>
> Didn't even know that existed.  Thanks.
>
>>> +               /* just transitioned to unshared */
>>> +               set->flags &= ~BLK_MQ_F_TAG_SHARED;
>>> +               /* update existing queue */
>>> +               blk_mq_update_tag_set_depth(set, false);
>>> +       }
>>>         mutex_unlock(&set->tag_list_lock);
>>>  }
>>>
>>> @@ -1901,8 +1905,17 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
>>>         q->tag_set = set;
>>>
>>>         mutex_lock(&set->tag_list_lock);
>>> +
>>> +       /* Check to see if we're transitioning to shared (from 1 to 2 queues). */
>>> +       if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
>>> +               set->flags |= BLK_MQ_F_TAG_SHARED;
>>> +               /* update existing queue */
>>> +               blk_mq_update_tag_set_depth(set, true);
>>> +       }
>>> +       if (set->flags & BLK_MQ_F_TAG_SHARED)
>>
>> The above should be 'else if', otherwise the current queue will be set
>> twice.
>
> I moved the list add below this to avoid that very issue.  See:
>
>>> +               queue_set_hctx_shared(q, true);
>>>         list_add_tail(&q->tag_set_list, &set->tag_list);
>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> This seemed the cleanest way to structure the code to avoid the double
> walking of the hctx list for the current q.

OK, it is correct, then v1 is fine.

Reviewed-by: Ming Lei <ming.lei@canonical.com>

>
> -Jeff
>
>>> -       blk_mq_update_tag_set_depth(set);
>>> +
>>>         mutex_unlock(&set->tag_list_lock);
>>>  }
>>>


* Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
  2015-11-03  1:12     ` Ming Lei
@ 2015-11-03 13:27       ` Jeff Moyer
  2015-11-03 15:23         ` Jens Axboe
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff Moyer @ 2015-11-03 13:27 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Jason Luo, Linux Kernel Mailing List,
	Guru Anbalagane, Feng Jin, Tejun Heo

Ming Lei <tom.leiming@gmail.com> writes:

>>> The above should be 'else if', otherwise the current queue will be set
>>> twice.
>>
>> I moved the list add below this to avoid that very issue.  See:
>>
>>>> +               queue_set_hctx_shared(q, true);
>>>>         list_add_tail(&q->tag_set_list, &set->tag_list);
>>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> This seemed the cleanest way to structure the code to avoid the double
>> walking of the hctx list for the current q.
>
> OK, it is correct, then v1 is fine.
>
> Reviewed-by: Ming Lei <ming.lei@canonical.com>

Thanks, Ming.  Jens, I'll re-send with the list_is_singular change and
this one should be ready for merging.

Cheers,
Jeff


* Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
  2015-11-03 13:27       ` Jeff Moyer
@ 2015-11-03 15:23         ` Jens Axboe
  0 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2015-11-03 15:23 UTC (permalink / raw)
  To: Jeff Moyer, Ming Lei
  Cc: Jason Luo, Linux Kernel Mailing List, Guru Anbalagane, Feng Jin,
	Tejun Heo

On 11/03/2015 06:27 AM, Jeff Moyer wrote:
> Ming Lei <tom.leiming@gmail.com> writes:
>
>>>> The above should be 'else if', otherwise the current queue will be set
>>>> twice.
>>>
>>> I moved the list add below this to avoid that very issue.  See:
>>>
>>>>> +               queue_set_hctx_shared(q, true);
>>>>>          list_add_tail(&q->tag_set_list, &set->tag_list);
>>>             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>
>>> This seemed the cleanest way to structure the code to avoid the double
>>> walking of the hctx list for the current q.
>>
>> OK, it is correct, then v1 is fine.
>>
>> Reviewed-by: Ming Lei <ming.lei@canonical.com>
>
> Thanks, Ming.  Jens, I'll re-send with the list_is_singular change and
> this one should be ready for merging.

Great, thanks Jeff!

-- 
Jens Axboe

