From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: robh@kernel.org, srikar@linux.vnet.ibm.com, aik@ozlabs.ru,
jasowang@redhat.com, linuxram@us.ibm.com,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org, hch@infradead.org,
paulus@samba.org, joe@perches.com, linuxppc-dev@lists.ozlabs.org,
elfring@users.sourceforge.net, haren@linux.vnet.ibm.com,
david@gibson.dropbear.id.au
Subject: Re: [RFC 0/4] Virtio uses DMA API for all devices
Date: Wed, 25 Jul 2018 08:56:23 +0530
Message-ID: <4062dd48-2b5b-e454-e860-c6bfe321ebdc@linux.vnet.ibm.com>
In-Reply-To: <20180723120511-mutt-send-email-mst@kernel.org>
On 07/23/2018 02:38 PM, Michael S. Tsirkin wrote:
> On Mon, Jul 23, 2018 at 11:58:23AM +0530, Anshuman Khandual wrote:
>> On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote:
>>> On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote:
>>>> This patch series is the follow-up to the earlier discussions about the
>>>> RFC titled [RFC,V2] virtio: Add platform specific DMA API translation
>>>> for virito devices (https://patchwork.kernel.org/patch/10417371/). There
>>>> were suggestions about doing away with the two different transaction
>>>> paths with the host/QEMU, one being direct GPA and the other being the
>>>> DMA API based translations.
>>>>
>>>> The first patch creates a direct GPA mapping based DMA operations
>>>> structure called 'virtio_direct_dma_ops', with the exact same
>>>> implementation as the direct GPA path which virtio core currently
>>>> has, just wrapped in a DMA API format. Virtio core must use
>>>> 'virtio_direct_dma_ops' instead of the arch default in the absence
>>>> of the VIRTIO_F_IOMMU_PLATFORM flag to preserve the existing
>>>> semantics. The second patch does exactly that inside the function
>>>> virtio_finalize_features(). The third patch removes the default
>>>> direct GPA path from virtio core, forcing it to use DMA API
>>>> callbacks for all devices. With that change, every device must have
>>>> a DMA operations structure associated with it. The fourth patch adds
>>>> an additional hook which gives the platform an opportunity to do yet
>>>> another override if required. This platform hook can be used on
>>>> POWER Ultravisor based protected guests to load up SWIOTLB DMA
>>>> callbacks, which bounce buffer all I/O scatter-gather buffers into
>>>> shared memory for consumption on the host side (as discussed in the
>>>> above mentioned thread, the host is allowed to access only parts of
>>>> the guest GPA range).
>>>>
>>>> Please go through these patches and review whether this approach broadly
>>>> makes sense. I would appreciate suggestions, inputs and comments
>>>> regarding the patches or the approach in general. Thank you.
>>> I like how patches 1-3 look. Could you test performance
>>> with/without to see whether the extra indirection through
>>> use of DMA ops causes a measurable slow-down?
>>
>> I ran the following simple dd command 10 times, where /dev/vda is a
>> 10 GB virtio block device.
>>
>> dd if=/dev/zero of=/dev/vda bs=8M count=1024 oflag=direct
>>
>> With and without the patches the bandwidth, which spans a fairly wide
>> range, does not look much different.
>>
>> Without patches
>> ===============
>>
>> ---------- 1 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.95557 s, 4.4 GB/s
>> ---------- 2 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.05176 s, 4.2 GB/s
>> ---------- 3 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.88314 s, 4.6 GB/s
>> ---------- 4 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.84899 s, 4.6 GB/s
>> ---------- 5 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 5.37184 s, 1.6 GB/s
>> ---------- 6 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.9205 s, 4.5 GB/s
>> ---------- 7 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 6.85166 s, 1.3 GB/s
>> ---------- 8 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.74049 s, 4.9 GB/s
>> ---------- 9 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 6.31699 s, 1.4 GB/s
>> ---------- 10 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.47057 s, 3.5 GB/s
>>
>>
>> With patches
>> ============
>>
>> ---------- 1 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.25993 s, 3.8 GB/s
>> ---------- 2 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.82438 s, 4.7 GB/s
>> ---------- 3 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.93856 s, 4.4 GB/s
>> ---------- 4 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.83405 s, 4.7 GB/s
>> ---------- 5 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 7.50199 s, 1.1 GB/s
>> ---------- 6 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.28742 s, 3.8 GB/s
>> ---------- 7 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 5.74958 s, 1.5 GB/s
>> ---------- 8 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.99149 s, 4.3 GB/s
>> ---------- 9 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 5.67647 s, 1.5 GB/s
>> ---------- 10 ---------
>> 1024+0 records in
>> 1024+0 records out
>> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.93957 s, 2.9 GB/s
>>
>> Does this look okay?
>
> You want to test IOPS with lots of small writes, using a
> raw ramdisk on the host.
Hello Michael,

I have conducted the following experiments and here are the results.
TEST SETUP
==========
A virtio block disk is attached to the guest as follows.
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' ioeventfd='off'/>
  <source file='/mnt/disk2.img'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
On the host, the back end is a QEMU raw image on a tmpfs file system.
disk:
  -rw-r--r-- 1 libvirt-qemu kvm 5.0G Jul 24 06:26 disk2.img

mount:
  size=21G on /mnt type tmpfs (rw,relatime,size=22020096k)
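For reference, the backing store can be recreated roughly as follows.
This is a sketch based on the listings above; the guest name and the
XML file name are placeholders, and the ownership step is assumed:

  # Back the image with tmpfs so it lives entirely in host RAM,
  # then create the 5 GB raw image QEMU serves to the guest.
  mount -t tmpfs -o size=21g tmpfs /mnt
  qemu-img create -f raw /mnt/disk2.img 5G
  chown libvirt-qemu:kvm /mnt/disk2.img

  # Attach the <disk> definition shown above to the guest.
  virsh attach-device <guest> disk2.xml --config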
TEST CONFIG
===========
fio (https://linux.die.net/man/1/fio) was run with and without
the patches.
Read test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=read
unlink=1
iodepth=256
Write test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=write
unlink=1
iodepth=256
The virtio block device comes up as /dev/vda in the guest, with
/sys/block/vda/queue/nr_requests set to 128.
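Each of the two configs above was run as a standalone fio job file
against the device; an invocation along the following lines (the job
file names are placeholders) produces the run status summaries quoted
below:

  # Sequential read and write runs; fio prints the
  # 'Run status group' and disk stats lines shown under TEST RESULTS.
  fio seq-read.fio
  fio seq-write.fio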
TEST RESULTS
============
Without the patches
-------------------
Read test:
Run status group 0 (all jobs):
READ: bw=550MiB/s (577MB/s), 33.2MiB/s-35.6MiB/s (34.9MB/s-37.4MB/s), io=161GiB (173GB), run=300001-300009msec
Disk stats (read/write):
vda: ios=42249926/0, merge=0/0, ticks=1499920/0, in_queue=35672384, util=100.00%
Write test:
Run status group 0 (all jobs):
WRITE: bw=514MiB/s (539MB/s), 31.5MiB/s-34.6MiB/s (33.0MB/s-36.2MB/s), io=151GiB (162GB), run=300001-300009msec
Disk stats (read/write):
vda: ios=29/39459261, merge=0/0, ticks=0/1570580, in_queue=35745992, util=100.00%
With the patches
----------------
Read test:
Run status group 0 (all jobs):
READ: bw=572MiB/s (600MB/s), 35.0MiB/s-37.2MiB/s (36.7MB/s-38.0MB/s), io=168GiB (180GB), run=300001-300006msec
Disk stats (read/write):
vda: ios=43917611/0, merge=0/0, ticks=1934268/0, in_queue=35531688, util=100.00%
Write test:
Run status group 0 (all jobs):
WRITE: bw=546MiB/s (572MB/s), 33.7MiB/s-35.0MiB/s (35.3MB/s-36.7MB/s), io=160GiB (172GB), run=300001-300007msec
Disk stats (read/write):
vda: ios=14/41893878, merge=0/0, ticks=8/2107816, in_queue=35535716, util=100.00%
Results with and without the patches are similar.