On Thu, Dec 05, 2019 at 01:30:09AM +0000, Wangyong wrote:
> >
> > On Thu, Nov 28, 2019 at 08:44:43AM +0000, Wangyong wrote:
> > > Hi all,
> >
> > This looks interesting, please continue this discussion on the QEMU
> > mailing list so that others can participate.
> >
> > >
> > > This patch makes the virtio_blk queue size configurable:
> > >
> > > commit 6040aedddb5f474a9c2304b6a432a652d82b3d3c
> > > Author: Mark Kanda
> > > Date:   Mon Dec 11 09:16:24 2017 -0600
> > >
> > >     virtio-blk: make queue size configurable
> > >
> > > But when we set the queue size to more than 128, it does not take effect.
> > >
> > > That's because Linux AIO's maximum number of outstanding requests at a
> > > time is always less than or equal to 128.
> > >
> > > The following code limits the number of outstanding requests at a time:
> > >
> > > #define MAX_EVENTS 128
> > >
> > > laio_do_submit()
> > > {
> > >
> > >     if (!s->io_q.blocked &&
> > >         (!s->io_q.plugged ||
> > >          s->io_q.in_flight + s->io_q.in_queue >= MAX_EVENTS)) {
> > >         ioq_submit(s);
> > >     }
> > > }
> > >
> > > Should we make the value of MAX_EVENTS configurable?
> >
> > Increasing MAX_EVENTS to a larger hardcoded value seems reasonable as a
> > short-term fix. Please first check how /proc/sys/fs/aio-max-nr and
> > io_setup(2) handle this resource limit. The patch must not break existing
> > systems where 128 works today.
>
> [root@node2 ~]# cat /etc/centos-release
> CentOS Linux release 7.5.1804 (Core)
>
> [root@node2 ~]# cat /proc/sys/fs/aio-max-nr
> 4294967296
>
> > > MAX_EVENTS should have the same value as the queue size?
> >
> > Multiple virtio-blk devices can share a single AioContext,
> Are multiple virtio-blk devices configured with one IOThread?
> Performance with multiple virtio-blk devices will then be worse.

Yes. By default IOThreads are not used and all virtio-blk devices share the
main loop's AioContext.

When IOThreads are configured it's up to the user how to assign devices to
IOThreads. Assigning multiple devices to one IOThread is realistic because
it's common to create only num_vcpus IOThreads.

A good starting point would be a patch that raises the limit to a higher
hardcoded number. Then you can investigate how to size the AioContext
appropriately (maybe dynamically?) for a full fix.

Stefan
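
A side note on checking how io_setup(2) interacts with /proc/sys/fs/aio-max-nr:
below is a minimal standalone sketch using libaio directly, outside QEMU. The
helper name try_setup and the probe sizes are only illustrative. io_setup()
fails with EAGAIN when the requested nr_events would push the system over the
/proc/sys/fs/aio-max-nr limit, which is exactly the case a larger hardcoded
MAX_EVENTS would have to handle.

/* Minimal sketch: probe how large an io_setup() completion queue the kernel
 * will grant. Build with: gcc -o aio-probe aio-probe.c -laio
 * (libaio assumed; try_setup and the probe sizes are illustrative only) */
#include <libaio.h>
#include <stdio.h>
#include <string.h>

static int try_setup(int nr_events)
{
    io_context_t ctx = 0;                   /* must be zero-initialized */
    int ret = io_setup(nr_events, &ctx);    /* 0 on success, -errno on failure */

    if (ret == 0) {
        io_destroy(ctx);                    /* release the context again */
    }
    return ret;
}

int main(void)
{
    /* 128 mirrors today's MAX_EVENTS; 1024 stands in for a raised limit */
    int sizes[] = { 128, 1024 };

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        int ret = try_setup(sizes[i]);
        printf("io_setup(%d): %s\n", sizes[i],
               ret == 0 ? "ok" : strerror(-ret));
    }
    return 0;
}

If io_setup() reports EAGAIN for the larger value, a patch that raises
MAX_EVENTS would need to fail cleanly or fall back, so that systems where 128
works today keep working.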