From: Dongli Zhang <dongli.zhang@oracle.com>
To: "wenbinzeng(曾文斌)" <wenbinzeng@tencent.com>
Cc: Ming Lei <ming.lei@redhat.com>,
	Wenbin Zeng <wenbin.zeng@gmail.com>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"keith.busch@intel.com" <keith.busch@intel.com>,
	"hare@suse.com" <hare@suse.com>,
	"osandov@fb.com" <osandov@fb.com>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"bvanassche@acm.org" <bvanassche@acm.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
Date: Tue, 25 Jun 2019 10:51:31 +0800	[thread overview]
Message-ID: <9bae2938-dcb6-de91-b16f-36ce8af8b7fb@oracle.com> (raw)
In-Reply-To: <20190625022706.GE23777@ming.t460p>



On 6/25/19 10:27 AM, Ming Lei wrote:
> On Tue, Jun 25, 2019 at 02:14:46AM +0000, wenbinzeng(曾文斌) wrote:
>> Hi Ming,
>>
>>> -----Original Message-----
>>> From: Ming Lei <ming.lei@redhat.com>
>>> Sent: Tuesday, June 25, 2019 9:55 AM
>>> To: Wenbin Zeng <wenbin.zeng@gmail.com>
>>> Cc: axboe@kernel.dk; keith.busch@intel.com; hare@suse.com; osandov@fb.com;
>>> sagi@grimberg.me; bvanassche@acm.org; linux-block@vger.kernel.org;
>>> linux-kernel@vger.kernel.org; wenbinzeng(曾文斌) <wenbinzeng@tencent.com>
>>> Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
>>>
>>> On Mon, Jun 24, 2019 at 11:24:07PM +0800, Wenbin Zeng wrote:
>>>> Currently hctx->cpumask is not updated when new cpus are hot-plugged.
>>>> Since there are many chances of kblockd_mod_delayed_work_on() getting
>>>> called with WORK_CPU_UNBOUND, the workqueue work blk_mq_run_work_fn may run
>>>
>>> There are only two cases in which WORK_CPU_UNBOUND is applied:
>>>
>>> 1) single hw queue
>>>
>>> 2) multiple hw queues, and all CPUs in this hctx have become offline
>>>
>>> For 1), all CPUs can be found in hctx->cpumask.
>>>
>>>> on the newly-plugged cpus; consequently, __blk_mq_run_hw_queue()
>>>> reports excessive "run queue from wrong CPU" messages, because
>>>> cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
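
As a rough userspace illustration (a hypothetical sketch, not the kernel
code itself), the failing test is equivalent to checking the CPU the thread
currently runs on against a mask that still only contains the boot CPU:

/*
 * Userspace analogue of cpumask_test_cpu(raw_smp_processor_id(),
 * hctx->cpumask) against a stale mask that only contains the boot CPU.
 * Hypothetical sketch; build with: gcc -o stale_mask stale_mask.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t stale_mask;		/* stands in for a stale hctx->cpumask */
	int cur;

	CPU_ZERO(&stale_mask);
	CPU_SET(0, &stale_mask);	/* only CPU 0 was online at probe time */

	cur = sched_getcpu();		/* analogue of raw_smp_processor_id() */
	if (!CPU_ISSET(cur, &stale_mask))
		printf("run queue from wrong CPU %d (not in the stale mask)\n", cur);
	else
		printf("CPU %d is in the mask, no warning\n", cur);

	return 0;
}

Pinning the program to a cpu other than 0, e.g. with "taskset -c 2
./stale_mask", shows the mismatch.
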
>>>
>>> The message means that a CPU hotplug race has been triggered.
>>>
>>> Yeah, there is a big problem in blk_mq_hctx_notify_dead(), which is called
>>> after one CPU is dead but still runs this hw queue to dispatch requests,
>>> even though all CPUs in this hctx might have become offline.
>>>
>>> We had some discussion on this issue before:
>>>
>>> https://lore.kernel.org/linux-block/CACVXFVN729SgFQGUgmu1iN7P6Mv5+puE78STz8hj9J5bS828Ng@mail.gmail.com/
>>>
>>
>> There is another scenario: you can reproduce it by hot-plugging cpus into kvm guests via the qemu monitor (I believe virsh setvcpus --live can do the same thing), for example:
>> (qemu) cpu-add 1
>> (qemu) cpu-add 2
>> (qemu) cpu-add 3
>>
>> In such a scenario, cpus 1, 2 and 3 are not visible at boot, and hctx->cpumask doesn't get synced when these cpus are added.

Here is how I played with it using the most recent qemu and linux.

Boot the VM with 1 out of 4 vcpus online.

# qemu-system-x86_64 -hda disk.img \
-smp 1,maxcpus=4 \
-m 4096M -enable-kvm \
-device nvme,drive=lightnvme,serial=deadbeaf1 \
-drive file=nvme.img,if=none,id=lightnvme \
-vnc :0 \
-kernel /.../mainline-linux/arch/x86_64/boot/bzImage \
-append "root=/dev/sda1 init=/sbin/init text" \
-monitor stdio -net nic -net user,hostfwd=tcp::5022-:22


As Ming mentioned, after boot:

# cat /proc/cpuinfo  | grep processor
processor	: 0

# cat /sys/block/nvme0n1/mq/0/cpu_list
0
# cat /sys/block/nvme0n1/mq/1/cpu_list
1
# cat /sys/block/nvme0n1/mq/2/cpu_list
2
# cat /sys/block/nvme0n1/mq/3/cpu_list
3

# cat /proc/interrupts | grep nvme
 24:         11   PCI-MSI 65536-edge      nvme0q0
 25:         78   PCI-MSI 65537-edge      nvme0q1
 26:          0   PCI-MSI 65538-edge      nvme0q2
 27:          0   PCI-MSI 65539-edge      nvme0q3
 28:          0   PCI-MSI 65540-edge      nvme0q4

I hotplug a vcpu in the qemu monitor with:
(qemu) device_add qemu64-x86_64-cpu,id=core1,socket-id=1,core-id=0,thread-id=0
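
To compare the per-queue masks with the online CPUs before and after the
hotplug, a small helper can be used; this is only a hypothetical sketch, with
the device name and sysfs paths assumed from the output above:

/*
 * Hypothetical helper: dump each hw queue's cpu_list next to the online CPUs.
 * Device name and sysfs paths are assumed from the output above.
 * Build with: gcc -o mq_cpus mq_cpus.c
 */
#include <dirent.h>
#include <stdio.h>

static void print_file(const char *label, const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", label, buf);	/* sysfs lines already end in '\n' */
	fclose(f);
}

int main(void)
{
	const char *mq = "/sys/block/nvme0n1/mq";
	char path[512];
	struct dirent *de;
	DIR *d;

	print_file("online cpus", "/sys/devices/system/cpu/online");

	d = opendir(mq);
	if (!d) {
		perror(mq);
		return 1;
	}
	while ((de = readdir(d)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path), "%s/%s/cpu_list", mq, de->d_name);
		print_file(de->d_name, path);	/* label each queue by its dir name */
	}
	closedir(d);
	return 0;
}
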

Dongli Zhang

> 
> It is CPU cold-plug; we are supposed to support it.
> 
> The newly added CPUs should be visible to the hctx, since we spread
> queues among all possible CPUs; please see blk_mq_map_queues() and
> irq_build_affinity_masks(), which work like a static allocation of CPU
> resources.
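
To picture that spreading, here is a simplified toy sketch in the spirit of
blk_mq_map_queues(); the real kernel mapping also takes CPU siblings and NUMA
nodes into account, and the counts below are just the ones from the 4-vcpu
example above:

/*
 * Simplified sketch of spreading all *possible* CPUs across hw queues.
 * Only illustrative; build with: gcc -o spread spread.c
 */
#include <stdio.h>

#define NR_POSSIBLE_CPUS 4	/* maxcpus=4, even though only CPU 0 is online */
#define NR_HW_QUEUES     4

int main(void)
{
	int mq_map[NR_POSSIBLE_CPUS];
	int cpu, q;

	/* naive round robin: every possible CPU gets a queue up front */
	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		mq_map[cpu] = cpu % NR_HW_QUEUES;

	/* print the per-queue CPU list, i.e. what mq/<q>/cpu_list shows */
	for (q = 0; q < NR_HW_QUEUES; q++) {
		printf("queue %d:", q);
		for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
			if (mq_map[cpu] == q)
				printf(" %d", cpu);
		printf("\n");
	}
	return 0;
}

With 4 possible cpus and 4 hw queues this prints the same mapping as the
cpu_list files shown above.
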
> 
> Otherwise, you might be using an old kernel, or there is a bug somewhere.
> 
>>
>>>>
>>>> This patch added a cpu-hotplug handler into blk-mq, updating
>>>> hctx->cpumask at cpu-hotplug.
>>>
>>> This approach isn't correct; hctx->cpumask should be kept in sync with
>>> the queue mapping.
>>
>> Please advise on what I should do to deal with the above situation. Thanks a lot.
> 
> As I shared in the last email, there is one approach that was discussed,
> which seems doable.
> 
> Thanks,
> Ming
> 

