* Possible io_uring regression with QEMU on Ubuntu's kernel
@ 2021-06-30  8:47 Juhyung Park
  2021-07-01 17:50 ` Kamal Mostafa
  0 siblings, 1 reply; 3+ messages in thread
From: Juhyung Park @ 2021-06-30  8:47 UTC (permalink / raw)
  To: Kamal Mostafa, Stefan Bader, io-uring
  Cc: Jens Axboe, qemu-devel, Stefano Garzarella

Hi everyone.

With the latest Ubuntu 20.04 HWE kernel, 5.8.0-59, I'm noticing some
weirdness when using QEMU/libvirt with the following storage
configuration:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="io_uring" discard="unmap" detect_zeroes="unmap"/>
  <source dev="/dev/disk/by-id/md-uuid-df271a1e:9dfb7edb:8dc4fbb8:c43e652f-part1" index="1"/>
  <backingStore/>
  <target dev="vda" bus="virtio"/>
  <alias name="virtio-disk0"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</disk>

QEMU version is 5.2+dfsg-9ubuntu3 and libvirt version is 7.0.0-2ubuntu2.
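
For reference, libvirt's io="io_uring" maps to QEMU's aio=io_uring
option; a rough -drive equivalent of the config above (a sketch, not
the exact command line libvirt generates) would be:

qemu-system-x86_64 ... \
  -drive file=/dev/disk/by-id/md-uuid-df271a1e:9dfb7edb:8dc4fbb8:c43e652f-part1,format=raw,if=virtio,cache=none,aio=io_uring,discard=unmap,detect-zeroes=unmap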

The guest VM is unable to handle I/O properly with io_uring, and
removing io="io_uring" fixes the issue.
On one machine (EPYC 7742), the partition table cannot be read, and on
another (Ryzen 9 3950X), ext4 detects journaling errors and
ultimately remounts the guest disk read-only:

[    2.712321] virtio_blk virtio5: [vda] 3906519775 512-byte logical blocks (2.00 TB/1.82 TiB)
[    2.714054] vda: detected capacity change from 0 to 2000138124800
[    2.963671] blk_update_request: I/O error, dev vda, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.964909] Buffer I/O error on dev vda, logical block 0, async page read
[    2.966021] blk_update_request: I/O error, dev vda, sector 1 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.967177] Buffer I/O error on dev vda, logical block 1, async page read
[    2.968330] blk_update_request: I/O error, dev vda, sector 2 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.969504] Buffer I/O error on dev vda, logical block 2, async page read
[    2.970767] blk_update_request: I/O error, dev vda, sector 3 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.971624] Buffer I/O error on dev vda, logical block 3, async page read
[    2.972170] blk_update_request: I/O error, dev vda, sector 4 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.972728] Buffer I/O error on dev vda, logical block 4, async page read
[    2.973308] blk_update_request: I/O error, dev vda, sector 5 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.973920] Buffer I/O error on dev vda, logical block 5, async page read
[    2.974496] blk_update_request: I/O error, dev vda, sector 6 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.975093] Buffer I/O error on dev vda, logical block 6, async page read
[    2.975685] blk_update_request: I/O error, dev vda, sector 7 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.976295] Buffer I/O error on dev vda, logical block 7, async page read
[    2.980074] blk_update_request: I/O error, dev vda, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.981104] Buffer I/O error on dev vda, logical block 0, async page read
[    2.981786] blk_update_request: I/O error, dev vda, sector 1 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[    2.982083] ixgbe 0000:06:00.0: Multiqueue Enabled: Rx Queue count = 63, Tx Queue count = 63 XDP Queue count = 0
[    2.982442] Buffer I/O error on dev vda, logical block 1, async page read
[    2.983642] ldm_validate_partition_table(): Disk read failed.

Kernel 5.8.0-55 is fine, and the only io_uring-related change between
5.8.0-55 and 5.8.0-59 is commit 4b982bd0f383 ("io_uring: don't
mark S_ISBLK async work as unbounded").

The weird thing is that this commit was first introduced in v5.12,
yet neither mainline v5.12.0 nor v5.13.0 is affected by this issue.
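
A quick way to cross-check this against mainline (commands assume a
mainline Linux checkout):

# First mainline tag that contains the commit:
git describe --contains 4b982bd0f383
# io_uring changes that landed on top of it:
git log --oneline 4b982bd0f383.. -- fs/io_uring.c fs/io-wq.c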

My guess is that one of the following commits, which landed after the
backported commit in v5.12, fixes the issue there, but that's just a
guess; it might also be an earlier commit (a bisect sketch follows the
list):
c7d95613c7d6 io_uring: fix early sqd_list removal sqpoll hangs
9728463737db io_uring: fix rw req completion
6ad7f2332e84 io_uring: clear F_REISSUE right after getting it
e82ad4853948 io_uring: fix !CONFIG_BLOCK compilation failure
230d50d448ac io_uring: move reissue into regular IO path
07204f21577a io_uring: fix EIOCBQUEUED iter revert
696ee88a7c50 io_uring/io-wq: protect against sprintf overflow
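
Alternatively, the regression could be bisected directly in the Ubuntu
kernel tree between the two releases. A sketch, with tag names that are
assumptions based on Ubuntu's usual naming and not verified:

# bad = 5.8.0-59, good = 5.8.0-55; adjust tags to the real ones.
git bisect start Ubuntu-hwe-5.8-5.8.0-59.66_20.04.1 Ubuntu-hwe-5.8-5.8.0-55.62_20.04.1
# At each step: build and boot the kernel, start the guest, check for
# the READ errors above, then mark the result:
git bisect good    # or: git bisect bad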

It would be much appreciated if Jens could give the Canonical
developers some pointers on how to fix the issue, and hopefully a
suggestion on how to prevent this from happening again.

Thanks and regards,



* Re: Possible io_uring regression with QEMU on Ubuntu's kernel
  2021-06-30  8:47 Possible io_uring regression with QEMU on Ubuntu's kernel Juhyung Park
@ 2021-07-01 17:50 ` Kamal Mostafa
  2021-07-01 18:16   ` Juhyung Park
  0 siblings, 1 reply; 3+ messages in thread
From: Kamal Mostafa @ 2021-07-01 17:50 UTC (permalink / raw)
  To: Juhyung Park
  Cc: Jens Axboe, Kamal Mostafa, qemu-devel, Stefan Bader,
	Ubuntu Kernel Team, io-uring, Stefano Garzarella


Hi-

Thanks very much for reporting this.  We picked up that patch ("io_uring:
don't mark S_ISBLK async work as unbounded") for our Ubuntu v5.8 kernel
from linux-stable/v5.10.31.  Since it's not clear that it's appropriate for
v5.8 (or even for v5.10-stable), we'll revert it from Ubuntu v5.8 if you can
confirm that the revert actually fixes the problem.

Here's a test build of that (5.8.0-59 with that commit reverted).  The full
set of packages is provided, but you probably only need to install the
linux-image and linux-modules[-extra] debs.  We'll stand by for your
results:
https://kernel.ubuntu.com/~kamal/uringrevert0/
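
Roughly (exact filenames depend on the build; the ones below are
illustrative):

# Fetch the debs from the URL above, then:
sudo dpkg -i linux-image-5.8.0-59-generic_*.deb \
            linux-modules-5.8.0-59-generic_*.deb \
            linux-modules-extra-5.8.0-59-generic_*.deb
sudo reboot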

Thanks again,

 -Kamal Mostafa (Canonical Kernel Team)

On Wed, Jun 30, 2021 at 1:47 AM Juhyung Park <qkrwngud825@gmail.com> wrote:

> [original report quoted in full; snipped]



* Re: Possible io_uring regression with QEMU on Ubuntu's kernel
  2021-07-01 17:50 ` Kamal Mostafa
@ 2021-07-01 18:16   ` Juhyung Park
  0 siblings, 0 replies; 3+ messages in thread
From: Juhyung Park @ 2021-07-01 18:16 UTC (permalink / raw)
  To: Kamal Mostafa
  Cc: Jens Axboe, qemu-devel, Stefan Bader, Ubuntu Kernel Team,
	io-uring, Stefano Garzarella

Hi Kamal.

Thanks for the timely response.
For now, we've worked around the issue by moving the affected hosts to
linux-generic-hwe-20.04-edge:
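# Workaround (moves the host to the 20.04 edge HWE kernel):
sudo apt install linux-generic-hwe-20.04-edge
sudo reboot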

I've just installed the new build that you provided, but I'm afraid the
same issue persists.

I've double-checked that the kernel is installed properly:
root@datai-ampere:~# uname -a
Linux datai-ampere 5.8.0-59-generic #66~20.04.1+uringrevert0 SMP Thu Jul 1 16:50:12 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
root@datai-ampere:~# cat /proc/version
Linux version 5.8.0-59-generic (ubuntu@ip-10-0-33-11) (gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #66~20.04.1+uringrevert0 SMP Thu Jul 1 16:50:12 UTC 2021

The guest VM is still unable to read /dev/vda's partition table with
READ errors.

Was the commit reverted properly in this build?
If it was, I'm afraid it might be something else.

I'm still certain that it's a regression from 5.8.0-55 to 5.8.0-59.
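
One way to take QEMU out of the picture is to exercise io_uring reads
against the backing device directly on the host, e.g. with fio (a
sketch; this simple pattern may not hit the same code path QEMU does):

fio --name=uring-read --ioengine=io_uring --direct=1 --rw=read \
    --bs=4k --size=1M --readonly \
    --filename=/dev/disk/by-id/md-uuid-df271a1e:9dfb7edb:8dc4fbb8:c43e652f-part1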

Thanks.

On Fri, Jul 2, 2021 at 2:50 AM Kamal Mostafa <kamal@canonical.com> wrote:
> [previous reply and original report quoted in full; snipped]

