From: Christoph Hellwig <hch@lst.de>
To: Omar Sandoval <osandov@osandov.com>
Cc: Christoph Hellwig <hch@lst.de>,
Thomas Gleixner <tglx@linutronix.de>,
Jens Axboe <axboe@kernel.dk>, Keith Busch <keith.busch@intel.com>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 7/8] blk-mq: create hctx for each present CPU
Date: Thu, 8 Jun 2017 08:58:50 +0200 [thread overview]
Message-ID: <20170608065850.GA12803@lst.de> (raw)
In-Reply-To: <20170607220411.GF7481@vader.DHCP.thefacebook.com>
On Wed, Jun 07, 2017 at 03:04:11PM -0700, Omar Sandoval wrote:
> On Sat, Jun 03, 2017 at 04:04:02PM +0200, Christoph Hellwig wrote:
> > Currently we only create hctx for online CPUs, which can lead to a lot
> > of churn due to frequent soft offline / online operations. Instead
> > allocate one for each present CPU to avoid this and dramatically simplify
> > the code.
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
>
> Oh man, this cleanup is great. Did you run blktests on this? block/008
> does a bunch of hotplugging while I/O is running.
I haven't run blktests yet; in fact, blktests didn't exist when I did
this work. But thanks for the reminder, I'll run it now.
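As a toy model (not kernel code), the benefit of mapping to present rather
than online CPUs can be sketched as follows. The `map_queues` helper below is
a hypothetical stand-in for a round-robin CPU-to-hctx mapping; the point is
that a present-CPU-based map is invariant under soft offline/online, so no
hctx teardown/rebuild churn occurs on hotplug events:

```python
def map_queues(cpus, nr_queues):
    """Round-robin CPUs onto hardware queues (toy stand-in for a
    blk-mq-style mapping; the real kernel logic is more involved)."""
    return {cpu: cpu % nr_queues for cpu in cpus}

present = [0, 1, 2, 3]   # CPUs that physically exist
online = [0, 1, 2, 3]    # CPUs currently online

before = map_queues(present, nr_queues=2)

# Soft-offline CPU 3: the online set shrinks ...
online.remove(3)

# ... but a map keyed on present CPUs does not change, so nothing
# needs to be reallocated when the CPU comes and goes:
after = map_queues(present, nr_queues=2)
assert before == after

# A map keyed on online CPUs would have to be rebuilt on every event:
assert map_queues(online, nr_queues=2) != before
```

This is only an illustration of why allocating per present CPU removes the
hotplug churn; block/008 exercises the real code path by toggling CPUs via
sysfs while I/O is in flight.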
Thread overview: 31+ messages
2017-06-03 14:03 spread MSI(-X) vectors to all possible CPUs V2 Christoph Hellwig
2017-06-03 14:03 ` [PATCH 1/8] genirq: allow assigning affinity to present but not online CPUs Christoph Hellwig
2017-06-04 15:14 ` Sagi Grimberg
2017-06-17 23:21 ` Thomas Gleixner
2017-06-03 14:03 ` [PATCH 2/8] genirq: move pending helpers to internal.h Christoph Hellwig
2017-06-04 15:15 ` Sagi Grimberg
2017-06-03 14:03 ` [PATCH 3/8] genirq/affinity: factor out a irq_affinity_set helper Christoph Hellwig
2017-06-04 15:15 ` Sagi Grimberg
2017-06-16 10:23 ` Thomas Gleixner
2017-06-16 11:08 ` Thomas Gleixner
2017-06-16 12:00 ` Thomas Gleixner
2017-06-17 23:14 ` Thomas Gleixner
2017-06-03 14:03 ` [PATCH 4/8] genirq/affinity: assign vectors to all present CPUs Christoph Hellwig
2017-06-04 15:17 ` Sagi Grimberg
2017-06-22 17:10 ` [tip:irq/core] genirq/affinity: Assign " tip-bot for Christoph Hellwig
2017-06-03 14:04 ` [PATCH 5/8] genirq/affinity: update CPU affinity for CPU hotplug events Christoph Hellwig
2017-06-16 10:26 ` Thomas Gleixner
2017-06-16 10:29 ` Thomas Gleixner
2017-06-03 14:04 ` [PATCH 6/8] blk-mq: include all present CPUs in the default queue mapping Christoph Hellwig
2017-06-04 15:11 ` Sagi Grimberg
2017-06-03 14:04 ` [PATCH 7/8] blk-mq: create hctx for each present CPU Christoph Hellwig
2017-06-04 15:11 ` Sagi Grimberg
2017-06-07 9:10 ` Ming Lei
2017-06-07 19:06 ` Christoph Hellwig
2017-06-08 2:28 ` Ming Lei
2017-06-07 22:04 ` Omar Sandoval
2017-06-08 6:58 ` Christoph Hellwig [this message]
2017-06-03 14:04 ` [PATCH 8/8] nvme: allocate queues for all possible CPUs Christoph Hellwig
2017-06-04 15:13 ` Sagi Grimberg
2017-06-16 6:48 ` spread MSI(-X) vectors to all possible CPUs V2 Christoph Hellwig
2017-06-16 7:28 ` Thomas Gleixner