From: Dongli Zhang <dongli.zhang@oracle.com>
To: linux-scsi@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-block@vger.kernel.org
Cc: mst@redhat.com, jasowang@redhat.com, axboe@kernel.dk,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	cohuck@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi
Date: Wed, 27 Mar 2019 18:36:33 +0800	[thread overview]
Message-ID: <1553682995-5682-1-git-send-email-dongli.zhang@oracle.com> (raw)

When tag_set->nr_maps is 1, the block layer limits the number of hw queues
to nr_cpu_ids. No matter how many hw queues are requested by
virtio-blk/virtio-scsi, since they both have (tag_set->nr_maps == 1), they
can use at most nr_cpu_ids hw queues.
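
For reference, a minimal sketch of the clamp in blk_mq_alloc_tag_set()
(simplified, not the verbatim kernel source; the exact condition depends
on the kernel version):

	/*
	 * block/blk-mq.c, blk_mq_alloc_tag_set(): with a single queue map
	 * there is at most one hw queue per possible CPU, so anything
	 * beyond nr_cpu_ids is clamped.
	 */
	if (set->nr_hw_queues > nr_cpu_ids)
		set->nr_hw_queues = nr_cpu_ids;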

In addition, specifically in the PCI case, when the 'num-queues' specified
by qemu is larger than maxcpus, virtio-blk/virtio-scsi cannot allocate
more than maxcpus vectors, and thus cannot have a dedicated vector for each
queue. As a result, they fall back to MSI-X with one vector for config
and one vector shared by all queues.
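
The fallback order in the virtio-pci transport is roughly the following
(outline of vp_find_vqs() in drivers/virtio/virtio_pci_common.c,
arguments elided):

	/* 1. try one MSI-X vector per virtqueue, plus one for config */
	err = vp_find_vqs_msix(vdev, ..., true /* per_vq_vectors */, ...);
	if (!err)
		return 0;
	/* 2. fall back to config vector + one vector shared by all vqs */
	err = vp_find_vqs_msix(vdev, ..., false /* per_vq_vectors */, ...);
	if (!err)
		return 0;
	/* 3. last resort: legacy INTx */
	return vp_find_vqs_intx(vdev, ...);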

Considering the above, this patch set limits the number of hw queues used
by virtio-blk and virtio-scsi to nr_cpu_ids.
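
In essence (see the individual patches for the exact diffs), each driver
clamps the number of virtqueues it requests from the device, along the
lines of:

	/* cap the requested queue count at the number of possible CPUs */
	num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);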

-------------------------------------------------------------

Here is the test result for virtio-scsi:

qemu cmdline:

-smp 2,maxcpus=4, \
-device virtio-scsi-pci,id=scsi0,num_queues=8, \
-device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0, \
-drive file=test.img,if=none,id=drive0

Although maxcpus=4 and num_queues=8, only 4 queues are used, and only 2
interrupts are allocated (one for config, one shared by all virtqueues).

# cat /proc/interrupts
... ...
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0        369   PCI-MSI 65537-edge      virtio0-virtqueues
... ...

# ls /sys/block/sda/mq/
0  1  2  3   ------> 4 queues although qemu sets num_queues=8


With the patch set, there is a per-queue interrupt.

# cat /proc/interrupts
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0          0   PCI-MSI 65537-edge      virtio0-control
 26:          0          0   PCI-MSI 65538-edge      virtio0-event
 27:        296          0   PCI-MSI 65539-edge      virtio0-request
 28:          0        139   PCI-MSI 65540-edge      virtio0-request
 29:          0          0   PCI-MSI 65541-edge      virtio0-request
 30:          0          0   PCI-MSI 65542-edge      virtio0-request

# ls /sys/block/sda/mq
0  1  2  3

-------------------------------------------------------------

Here is the test result for virtio-blk:

qemu cmdline:

-smp 2,maxcpus=4, \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,num-queues=8 \
-drive file=test.img,format=raw,if=none,id=drive-virtio-disk0

Although maxcpus=4 and num-queues=8, only 4 queues are used, and only 2
interrupts are allocated (one for config, one shared by all virtqueues).

# cat /proc/interrupts
... ...
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0         65   PCI-MSI 65537-edge      virtio0-virtqueues
... ...

# ls /sys/block/vda/mq
0  1  2  3    -------> 4 queues although qemu sets num-queues=8


With the patch set, there is a per-queue interrupt.

# cat /proc/interrupts
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:         64          0   PCI-MSI 65537-edge      virtio0-req.0
 26:          0      10290   PCI-MSI 65538-edge      virtio0-req.1
 27:          0          0   PCI-MSI 65539-edge      virtio0-req.2
 28:          0          0   PCI-MSI 65540-edge      virtio0-req.3

# ls /sys/block/vda/mq/
0  1  2  3


Reference: https://lore.kernel.org/lkml/e4afe4c5-0262-4500-aeec-60f30734b4fc@default/

Thank you very much!

Dongli Zhang


Thread overview:
2019-03-27 10:36 [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi Dongli Zhang [this message]
2019-03-27 10:36 ` [PATCH 1/2] virtio-blk: limit number of hw queues by nr_cpu_ids Dongli Zhang
2019-04-10 13:12   ` Stefan Hajnoczi
2019-03-27 10:36 ` [PATCH 2/2] scsi: virtio_scsi: " Dongli Zhang
2019-04-10 13:12   ` Stefan Hajnoczi
2019-04-08 13:57 ` [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi Dongli Zhang
2019-04-10 14:18 ` Jens Axboe
