From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>, Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
Christoph Hellwig <hch@lst.de>, Daniel Wagner <dwagner@suse.de>,
Wen Xiong <wenxiong@us.ibm.com>,
John Garry <john.garry@huawei.com>
Subject: Re: [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx
Date: Wed, 30 Jun 2021 21:46:35 +0200 [thread overview]
Message-ID: <89081624-fedd-aa94-1ba2-9a137708a1f1@suse.de> (raw)
In-Reply-To: <e106f9c4-35c3-b2da-cdd8-3c4dff8234d6@grimberg.me>
On 6/30/21 8:59 PM, Sagi Grimberg wrote:
>
>>>>> Shouldn't we rather modify the tagset to refer to the currently
>>>>> online CPUs _only_, thereby never submitting a connect request for
>>>>> an hctx with only offline CPUs?
>>>>
>>>> Then you may set up very few I/O queues, and performance may suffer
>>>> even though lots of CPUs come online later.
>>> Only if we stay with the reduced number of I/O queues. Which is not
>>> what I'm
>>> proposing; I'd rather prefer to connect and disconnect queues from
>>> the cpu
>>> hotplug handler. For starters we could even trigger a reset once the
>>> first
>>> cpu within a hctx is onlined.
>>
>> Yeah, that needs one big/complicated patchset, and I don't see any
>> advantages over this simple approach.
>
> I tend to agree with Ming here.
Actually, Daniel and I came up with a slightly different idea: use a CPU
hotplug notifier.
Thing is, blk-mq already has a CPU hotplug notifier, which should ensure
that no I/O is pending during CPU hotplug.
If we now add an NVMe CPU hotplug notifier which essentially kicks off a
reset once all CPUs in an hctx are offline, the reset logic will
rearrange the queues to match the current CPU layout.
And when the CPUs come back online we'll do another reset.
Daniel is currently preparing a patch; let's see how it goes.
Cheers,
Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                       +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Thread overview: 50+ messages
2021-06-29 7:49 [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-06-29 7:49 ` [PATCH 1/2] blk-mq: not deactivate hctx if the device doesn't use managed irq Ming Lei
2021-06-29 12:39 ` Hannes Reinecke
2021-06-29 14:17 ` Ming Lei
2021-06-29 15:49 ` John Garry
2021-06-30 0:32 ` Ming Lei
2021-06-30 9:25 ` John Garry
2021-07-01 9:52 ` Christoph Hellwig
2021-06-29 23:30 ` Damien Le Moal
2021-06-30 18:58 ` Sagi Grimberg
2021-06-30 21:57 ` Damien Le Moal
2021-07-01 14:20 ` Keith Busch
2021-06-29 7:49 ` [PATCH 2/2] nvme: pass BLK_MQ_F_NOT_USE_MANAGED_IRQ for fc/rdma/tcp/loop Ming Lei
2021-06-30 8:15 ` Hannes Reinecke
2021-06-30 8:47 ` Ming Lei
2021-06-30 8:18 ` [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Hannes Reinecke
2021-06-30 8:42 ` Ming Lei
2021-06-30 9:43 ` Hannes Reinecke
2021-06-30 9:53 ` Ming Lei
2021-06-30 18:59 ` Sagi Grimberg
2021-06-30 19:46 ` Hannes Reinecke [this message]
2021-06-30 23:59 ` Ming Lei
2021-07-01 8:00 ` Hannes Reinecke
2021-07-01 9:13 ` Ming Lei
2021-07-02 9:47 ` Daniel Wagner