From: "Denis V. Lunev" <den@virtuozzo.com>
To: Stefan Hajnoczi <stefanha@gmail.com>, Ming Lei <tom.leiming@gmail.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
	io-uring@vger.kernel.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Kirill Tkhai <kirill.tkhai@openvz.org>,
	Manuel Bentele <development@manuel-bentele.de>,
	qemu-devel@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
	rjones@redhat.com, Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Andrey Zhadchenko <andrey.zhadchenko@virtuozzo.com>
Subject: Re: ublk-qcow2: ublk-qcow2 is available
Date: Wed, 5 Oct 2022 14:38:02 +0200	[thread overview]
Message-ID: <4011cfc9-1f7d-2dbe-fc5b-2b43b960a79f@virtuozzo.com> (raw)
In-Reply-To: <CAJSP0QUQgA8Az3Kx8-6ynbWxDxaSVW3xWOPj4VBPhhUhsRYT9g@mail.gmail.com>

On 10/5/22 14:21, Stefan Hajnoczi wrote:
> On Wed, 5 Oct 2022 at 00:19, Ming Lei <tom.leiming@gmail.com> wrote:
>> On Tue, Oct 04, 2022 at 09:53:32AM -0400, Stefan Hajnoczi wrote:
>>> On Tue, 4 Oct 2022 at 05:44, Ming Lei <tom.leiming@gmail.com> wrote:
>>>> On Mon, Oct 03, 2022 at 03:53:41PM -0400, Stefan Hajnoczi wrote:
>>>>> On Fri, Sep 30, 2022 at 05:24:11PM +0800, Ming Lei wrote:
>>>>>> ublk-qcow2 is available now.
>>>>> Cool, thanks for sharing!
>>>>>
>>>>>> So far it provides basic read/write functionality; compression and snapshots
>>>>>> aren't supported yet. The target/backend implementation is completely
>>>>>> based on io_uring, and shares the same io_uring with the ublk IO command
>>>>>> handler, just like ublk-loop does.
>>>>>>
>>>>>> The main motivations for ublk-qcow2 are as follows:
>>>>>>
>>>>>> - building one complicated target from scratch helps the libublksrv APIs/functions
>>>>>>    become mature/stable more quickly, since qcow2 is complicated and places more
>>>>>>    requirements on libublksrv than the simpler targets (loop, null) do
>>>>>>
>>>>>> - there have been several attempts at implementing a qcow2 driver in the kernel,
>>>>>>    such as ``qloop`` [2], ``dm-qcow2`` [3] and ``in kernel qcow2(ro)`` [4], so
>>>>>>    ublk-qcow2 might be useful for covering the requirements in this field
>>>>>>
>>>>>> - performance comparison with qemu-nbd: writing a ublk-qcow2 target was my first
>>>>>>    idea for evaluating the performance of the ublk/io_uring backend ever since
>>>>>>    ublksrv was started
>>>>>>
>>>>>> - help to abstract common building blocks or design patterns for writing new ublk
>>>>>>    targets/backends
>>>>>>
>>>>>> So far it basically passes the xfstests (XFS) suite using the ublk-qcow2 block
>>>>>> device as TEST_DEV, and a kernel-build workload has been verified too. A
>>>>>> soft-update approach is applied to metadata flushing, so metadata
>>>>>> integrity is guaranteed; 'make test T=qcow2/040' covers this kind of
>>>>>> test, and only cluster leaks are reported during it.
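>>>>>>
>>>>>> As a rough illustration only (the device paths and mount points below are
>>>>>> assumptions, not taken from the test scripts), an xfstests run against a
>>>>>> ublk block device can be configured like this:
>>>>>>
>>>>>>      # from the xfstests source tree
>>>>>>      export TEST_DEV=/dev/ublkb0 TEST_DIR=/mnt/test
>>>>>>      export SCRATCH_DEV=/dev/ublkb1 SCRATCH_MNT=/mnt/scratch
>>>>>>      ./check -g auto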
>>>>>>
>>>>>> The performance data looks much better compared with qemu-nbd; see the
>>>>>> details in the commit log [1], README [5] and STATUS [6]. The tests cover both
>>>>>> an empty image and a pre-allocated image; for example, with a pre-allocated
>>>>>> qcow2 image (8GB):
>>>>>>
>>>>>> - qemu-nbd (make test T=qcow2/002)
>>>>> Single queue?
>>>> Yeah.
>>>>
>>>>>>      randwrite(4k): jobs 1, iops 24605
>>>>>>      randread(4k): jobs 1, iops 30938
>>>>>>      randrw(4k): jobs 1, iops read 13981 write 14001
>>>>>>      rw(512k): jobs 1, iops read 724 write 728
>>>>> Please try qemu-storage-daemon's VDUSE export type as well. The
>>>>> command-line should be similar to this:
>>>>>
>>>>>    # modprobe virtio_vdpa # attaches vDPA devices to host kernel
>>>> The virtio_vdpa module isn't found, even though I enabled all of the following
>>>> options:
>>>>
>>>>          --- vDPA drivers
>>>>            <M>   vDPA device simulator core
>>>>            <M>     vDPA simulator for networking device
>>>>            <M>     vDPA simulator for block device
>>>>            <M>   VDUSE (vDPA Device in Userspace) support
>>>>            <M>   Intel IFC VF vDPA driver
>>>>            <M>   Virtio PCI bridge vDPA driver
>>>>            <M>   vDPA driver for Alibaba ENI
>>>>
>>>> BTW, my test environment is a VM and the shared data was collected inside the
>>>> VM too; can virtio_vdpa be used inside a VM?
>>> I hope Xie Yongji can help explain how to benchmark VDUSE.
>>>
>>> virtio_vdpa is available inside guests too. Please check that
>>> VIRTIO_VDPA ("vDPA driver for virtio devices") is enabled in the "Virtio
>>> drivers" menu.
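>>>
>>> As a quick check (a sketch; the config file path is an assumption based on a
>>> typical distro layout), the option and module can be verified with:
>>>
>>>     grep VIRTIO_VDPA /boot/config-$(uname -r)   # expect CONFIG_VIRTIO_VDPA=m or =y
>>>     modprobe virtio_vdpa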
>>>
>>>>>    # modprobe vduse
>>>>>    # qemu-storage-daemon \
>>>>>        --blockdev file,filename=test.qcow2,cache.direct=on|off,aio=native,node-name=file \
>>>>>        --blockdev qcow2,file=file,node-name=qcow2 \
>>>>>        --object iothread,id=iothread0 \
>>>>>        --export vduse-blk,id=vduse0,name=vduse0,num-queues=$(nproc),node-name=qcow2,writable=on,iothread=iothread0
>>>>>    # vdpa dev add name vduse0 mgmtdev vduse
>>>>>
>>>>> A virtio-blk device should appear and xfstests can be run on it
>>>>> (typically /dev/vda unless you already have other virtio-blk devices).
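>>>>>
>>>>> For instance (a sketch only; the device name and job parameters are
>>>>> assumptions, chosen to mirror the 4k random-write case above), a comparable
>>>>> fio run on the VDUSE-backed device could be:
>>>>>
>>>>>    # fio --name=randwrite --filename=/dev/vda --rw=randwrite --bs=4k \
>>>>>          --direct=1 --ioengine=libaio --iodepth=64 --numjobs=1 \
>>>>>          --time_based --runtime=30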
>>>>>
>>>>> Afterwards you can destroy the device using:
>>>>>
>>>>>    # vdpa dev del vduse0
>>>>>
>>>>>> - ublk-qcow2 (make test T=qcow2/022)
>>>>> There are a lot of other factors not directly related to NBD vs ublk. In
>>>>> order to get an apples-to-apples comparison with qemu-*, a ublk export
>>>>> type is needed in qemu-storage-daemon. That way the only difference is
>>>>> the ublk interface and the rest of the code path is identical, making it
>>>>> possible to compare NBD, VDUSE, ublk, etc. more precisely.
>>>> Maybe not true.
>>>>
>>>> ublk-qcow2 uses io_uring to handle all backend IO (including meta IO) completely,
>>>> and so far a single io_uring/pthread handles all qcow2 IO and IO
>>>> commands.
>>> qemu-nbd doesn't use io_uring to handle the backend IO, so we don't
>> I tried to use it via --aio=io_uring when setting up qemu-nbd, but it didn't succeed.
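>>
>> A sketch of such an invocation (the image path and nbd device below are
>> placeholders, not the exact command used) would look roughly like:
>>
>>     qemu-nbd --connect=/dev/nbd0 --format=qcow2 --cache=none \
>>              --aio=io_uring /mnt/data/test.qcow2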
>>
>>> know whether the benchmark demonstrates that ublk is faster than NBD,
>>> that the ublk-qcow2 implementation is faster than qemu-nbd's qcow2,
>>> whether there are miscellaneous implementation differences between
>>> ublk-qcow2 and qemu-nbd (like using the same io_uring context for both
>>> ublk and backend IO), or something else.
>> The theory shouldn't be too complicated:
>>
>> 1) io_uring passthrough (pt) communication is faster than a socket; the io command
>> is carried over io_uring pt commands, and should be faster than virtio
>> communication too.
>>
>> 2) io_uring IO handling is faster than libaio, which is what the qemu-nbd
>> test uses, and all qcow2 backend IO (including meta IO) is handled
>> by io_uring.
>>
>> https://github.com/ming1/ubdsrv/blob/master/tests/common/qcow2_common
>>
>> 3) ublk uses one single io_uring to handle all io commands and qcow2
>> backend IOs, so batched handling is common, and it is easy to see
>> dozens of IOs/io commands handled in a single syscall, or even more (see
>> the sketch below).
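>>
>> One way to observe that batching (a sketch; the ublk server process name is an
>> assumption and depends on how the daemon was started) is to trace its
>> io_uring_enter() calls under load; the second argument shows how many SQEs are
>> submitted per syscall:
>>
>>     strace -f -e trace=io_uring_enter -p $(pidof ublk)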
> I agree with the theory but theory has to be tested through
> experiments in order to validate it. We can all learn from systematic
> performance analysis - there might even be bottlenecks in ublk that
> can be solved to improve performance further.
>
>>> I'm suggesting measuring changes to just 1 variable at a time.
>>> Otherwise it's hard to reach a conclusion about the root cause of the
>>> performance difference. Let's learn why ublk-qcow2 performs well.
>> It turns out the latest Fedora 37 beta doesn't support vDPA yet, so I built
>> qemu from the latest GitHub tree, and it finally started to work. The test kernel
>> is the v6.0 release.
>>
>> The test results follow. All three devices are set up with a single
>> queue, all tests are run with a single job, still inside one VM, and
>> the test images are stored on an XFS/virtio-scsi backed SSD.
>>
>> The 1st group tests all three block devices backed by an empty
>> qcow2 image.
>>
>> The 2nd group tests all three block devices backed by a pre-allocated
>> qcow2 image.
>>
>> Except for big sequential IO (512K), there is still a sizable gap between
>> vdpa-virtio-blk and ublk.
>>
>> 1. run fio on block device over empty qcow2 image
>> 1) qemu-nbd
>> running qcow2/001
>> run perf test on empty qcow2 image via nbd
>>          fio (nbd(/mnt/data/ublk_null_8G_nYbgF.qcow2), libaio, bs 4k, dio, hw queues:1)...
>>          randwrite: jobs 1, iops 8549
>>          randread: jobs 1, iops 34829
>>          randrw: jobs 1, iops read 11363 write 11333
>>          rw(512k): jobs 1, iops read 590 write 597
>>
>>
>> 2) ublk-qcow2
>> running qcow2/021
>> run perf test on empty qcow2 image via ublk
>>          fio (ublk/qcow2( -f /mnt/data/ublk_null_8G_s761j.qcow2), libaio, bs 4k, dio, hw queues:1, uring_comp: 0, get_data: 0).
>>          randwrite: jobs 1, iops 16086
>>          randread: jobs 1, iops 172720
>>          randrw: jobs 1, iops read 35760 write 35702
>>          rw(512k): jobs 1, iops read 1140 write 1149
>>
>> 3) vdpa-virtio-blk
>> running debug/test_dev
>> run io test on specified device
>>          fio (vdpa(/dev/vdc), libaio, bs 4k, dio, hw queues:1)...
>>          randwrite: jobs 1, iops 8626
>>          randread: jobs 1, iops 126118
>>          randrw: jobs 1, iops read 17698 write 17665
>>          rw(512k): jobs 1, iops read 1023 write 1031
>>
>>
>> 2. run fio on block device over pre-allocated qcow2 image
>> 1) qemu-nbd
>> running qcow2/002
>> run perf test on pre-allocated qcow2 image via nbd
>>          fio (nbd(/mnt/data/ublk_data_8G_sc0SB.qcow2), libaio, bs 4k, dio, hw queues:1)...
>>          randwrite: jobs 1, iops 21439
>>          randread: jobs 1, iops 30336
>>          randrw: jobs 1, iops read 11476 write 11449
>>          rw(512k): jobs 1, iops read 718 write 722
>>
>> 2) ublk-qcow2
>> running qcow2/022
>> run perf test on pre-allocated qcow2 image via ublk
>>          fio (ublk/qcow2( -f /mnt/data/ublk_data_8G_yZiaJ.qcow2), libaio, bs 4k, dio, hw queues:1, uring_comp: 0, get_data: 0).
>>          randwrite: jobs 1, iops 98757
>>          randread: jobs 1, iops 110246
>>          randrw: jobs 1, iops read 47229 write 47161
>>          rw(512k): jobs 1, iops read 1416 write 1427
>>
>> 3) vdpa-virtio-blk
>> running debug/test_dev
>> run io test on specified device
>>          fio (vdpa(/dev/vdc), libaio, bs 4k, dio, hw queues:1)...
>>          randwrite: jobs 1, iops 47317
>>          randread: jobs 1, iops 74092
>>          randrw: jobs 1, iops read 27196 write 27234
>>          rw(512k): jobs 1, iops read 1447 write 1458
> Thanks for including VDUSE results! ublk looks great here and worth
> considering even in cases where NBD or VDUSE is already being used.
>
> Stefan
+ Andrey Zhadchenko


Thread overview: 44+ messages
2022-09-30  9:24 ublk-qcow2: ublk-qcow2 is available Ming Lei
2022-10-03 19:53 ` Stefan Hajnoczi
2022-10-03 23:57   ` Denis V. Lunev
2022-10-05 15:11     ` Stefan Hajnoczi
2022-10-06 10:26       ` Ming Lei
2022-10-06 13:59         ` Stefan Hajnoczi
2022-10-06 15:09           ` Ming Lei
2022-10-06 18:29             ` Stefan Hajnoczi
2022-10-07 11:21               ` Ming Lei
2022-10-04  9:43   ` Ming Lei
2022-10-04 13:53     ` Stefan Hajnoczi
2022-10-05  4:18       ` Ming Lei
2022-10-05 12:21         ` Stefan Hajnoczi
2022-10-05 12:38           ` Denis V. Lunev [this message]
2022-10-06 11:24           ` Ming Lei
2022-10-07 10:04             ` Yongji Xie
2022-10-07 10:51               ` Ming Lei
2022-10-07 11:21                 ` Yongji Xie
2022-10-07 11:23                   ` Ming Lei
2022-10-08  8:43         ` Ziyang Zhang
2022-10-12 14:22           ` Stefan Hajnoczi
2022-10-13  6:48             ` Yongji Xie
2022-10-13 16:02               ` Stefan Hajnoczi
2022-10-14 12:56               ` Ming Lei
2022-10-17 11:11                 ` Yongji Xie
2022-10-18  6:59                   ` Ming Lei
2022-10-18 13:17                     ` Yongji Xie
2022-10-18 14:54                       ` Stefan Hajnoczi
2022-10-19  9:09                         ` Ming Lei
2022-10-24 16:11                           ` Stefan Hajnoczi
2022-10-21  5:33                         ` Yongji Xie
2022-10-21  6:30                           ` Jason Wang
2022-10-25  8:17                             ` Yongji Xie
2022-10-25 12:02                               ` Stefan Hajnoczi
2022-10-28 13:33                                 ` Yongji Xie
2022-11-01  2:36                                 ` Jason Wang
2022-11-02 19:13                                   ` Stefan Hajnoczi
2022-11-04  6:55                                     ` Jason Wang
2022-10-21  6:28                     ` Jason Wang
2022-10-06 10:14       ` Richard W.M. Jones
2022-10-12 14:15         ` Stefan Hajnoczi
2022-10-13  1:50           ` Ming Lei
2022-10-13 16:01             ` Stefan Hajnoczi
2022-10-04  5:43 ` Manuel Bentele
