From: Sagi Grimberg <sagi@grimberg.me>
To: paulmck@kernel.org
Cc: Ming Lei <ming.lei@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Jens Axboe <axboe@kernel.dk>,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	Chao Leng <lengchao@huawei.com>, Keith Busch <kbusch@kernel.org>,
	Ming Lin <mlin@kernel.org>
Subject: Re: [PATCH v5 1/2] blk-mq: add tagset quiesce interface
Date: Tue, 28 Jul 2020 21:37:23 -0700	[thread overview]
Message-ID: <57f76f9c-6fb9-b6f1-ba85-1594755e60f3@grimberg.me> (raw)
In-Reply-To: <20200729041004.GV9247@paulmck-ThinkPad-P72>


>>>> Dynamically allocating each one is possible but not very scalable.
>>>>
>>>> The question is whether there is some way we can do this with an
>>>> on-stack or a single on-heap rcu_head, or an equivalent that can
>>>> achieve the same effect.
>>>
>>> If the hctx structures are guaranteed to stay put, you could count
>>> them and then do a single allocation of an array of rcu_head structures
>>> (or some larger structure containing an rcu_head structure, if needed).
>>> You could then sequence through this array, consuming one rcu_head per
>>> hctx as you processed it.  Once all the callbacks had been invoked,
>>> it would be safe to free the array.
>>>
>>> Sounds too simple, though.  So what am I missing?
>>
>> We don't want higher-order allocations...
> 
> OK, I will bite...  Do multiple lower-order allocations (page size is
> still lower-order, correct?) and link them together.
> 
> Sorry, couldn't resist...

Possible, but I didn't want us to resort to all this complexity and
thought we could find a better, simpler solution.
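
For concreteness, here is a rough sketch of the single-array idea suggested
above. This is illustrative only, not the posted patch: the quiesce_head/
quiesce_data structures and blk_mq_quiesce_tagset_sketch() are made-up names,
set->tag_list locking is omitted, the BLK_MQ_F_BLOCKING (SRCU) case is not
handled, and kvzalloc() stands in so the per-hctx heads never force a
physically contiguous higher-order allocation:

#include <linux/blk-mq.h>
#include <linux/completion.h>
#include <linux/mm.h>
#include <linux/overflow.h>
#include <linux/rcupdate.h>

struct quiesce_data;

struct quiesce_head {
        struct rcu_head         rcu;
        struct quiesce_data     *data;          /* back-pointer for the callback */
};

struct quiesce_data {
        atomic_t                pending;        /* callbacks still outstanding */
        struct completion       done;           /* completed when pending hits 0 */
        struct quiesce_head     heads[];        /* one entry per hctx */
};

static void quiesce_one_done(struct rcu_head *rcu)
{
        struct quiesce_head *qh = container_of(rcu, struct quiesce_head, rcu);

        if (atomic_dec_and_test(&qh->data->pending))
                complete(&qh->data->done);
}

static int blk_mq_quiesce_tagset_sketch(struct blk_mq_tag_set *set)
{
        struct request_queue *q;
        struct blk_mq_hw_ctx *hctx;
        struct quiesce_data *data;
        unsigned int i, nr = 0;

        /* First pass: count the hctxs so a single allocation suffices. */
        list_for_each_entry(q, &set->tag_list, tag_set_list)
                nr += q->nr_hw_queues;

        data = kvzalloc(struct_size(data, heads, nr), GFP_KERNEL);
        if (!data)
                return -ENOMEM;
        atomic_set(&data->pending, nr);
        init_completion(&data->done);

        /* Second pass: stop dispatch and queue one grace-period callback per hctx. */
        nr = 0;
        list_for_each_entry(q, &set->tag_list, tag_set_list) {
                blk_mq_quiesce_queue_nowait(q);
                queue_for_each_hw_ctx(q, hctx, i) {
                        data->heads[nr].data = data;
                        call_rcu(&data->heads[nr].rcu, quiesce_one_done);
                        nr++;
                }
        }

        /* Wait for every callback, then the whole array can go away. */
        wait_for_completion(&data->done);
        kvfree(data);
        return 0;
}

Since kvmalloc() falls back to vmalloc() for large sizes, the array never
needs a higher-order page allocation, which is roughly the "multiple
lower-order allocations linked together" idea without doing the linking by
hand.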

Thread overview: 80+ messages
2020-07-27 23:10 [PATCH v5 0/2] improve nvme quiesce time for large amount of namespaces Sagi Grimberg
2020-07-27 23:10 ` [PATCH v5 1/2] blk-mq: add tagset quiesce interface Sagi Grimberg
2020-07-27 23:32   ` Keith Busch
2020-07-28  0:12     ` Sagi Grimberg
2020-07-28  1:40   ` Ming Lei
2020-07-28  1:51     ` Jens Axboe
2020-07-28  2:17       ` Ming Lei
2020-07-28  2:23         ` Jens Axboe
2020-07-28  2:28           ` Ming Lei
2020-07-28  2:32             ` Jens Axboe
2020-07-28  3:29               ` Sagi Grimberg
2020-07-28  3:25     ` Sagi Grimberg
2020-07-28  7:18   ` Christoph Hellwig
2020-07-28  7:48     ` Sagi Grimberg
2020-07-28  9:16     ` Ming Lei
2020-07-28  9:24       ` Sagi Grimberg
2020-07-28  9:33         ` Ming Lei
2020-07-28  9:37           ` Sagi Grimberg
2020-07-28  9:43             ` Sagi Grimberg
2020-07-28 10:10               ` Ming Lei
2020-07-28 10:57                 ` Christoph Hellwig
2020-07-28 14:13                 ` Paul E. McKenney
2020-07-28 10:58             ` Christoph Hellwig
2020-07-28 16:25               ` Sagi Grimberg
2020-07-28 13:54         ` Paul E. McKenney
2020-07-28 23:46           ` Sagi Grimberg
2020-07-29  0:31             ` Paul E. McKenney
2020-07-29  0:43               ` Sagi Grimberg
2020-07-29  0:59                 ` Keith Busch
2020-07-29  4:39                   ` Sagi Grimberg
2020-08-07  9:04                     ` Chao Leng
2020-08-07  9:24                       ` Ming Lei
2020-08-07  9:35                         ` Chao Leng
2020-07-29  4:10                 ` Paul E. McKenney
2020-07-29  4:37                   ` Sagi Grimberg [this message]
2020-07-27 23:10 ` [PATCH v5 2/2] nvme: use blk_mq_[un]quiesce_tagset Sagi Grimberg
2020-07-28  0:54   ` Sagi Grimberg
2020-07-28  3:21     ` Chao Leng
2020-07-28  3:34       ` Sagi Grimberg
2020-07-28  3:51         ` Chao Leng
