From: "Thomas Deutschmann" <whissi@whissi.de>
To: <vverma@digitalocean.com>, <song@kernel.org>
Cc: <stable@vger.kernel.org>, <regressions@lists.linux.dev>
Subject: RE: [REGRESSION] v5.17-rc1+: FIFREEZE ioctl system call hangs
Date: Thu, 11 Aug 2022 14:34:55 +0200
Message-ID: <000001d8ad7e$c340ad70$49c20850$@whissi.de>
In-Reply-To: <000401d8a746$3eaca200$bc05e600$@whissi.de>
Hi,
any news on this? Is there anything else you need from me, or anything I can
help with?
Thanks.
--
Regards,
Thomas
-----Original Message-----
From: Thomas Deutschmann <whissi@whissi.de>
Sent: Wednesday, August 3, 2022 4:35 PM
To: vverma@digitalocean.com; song@kernel.org
Cc: stable@vger.kernel.org; regressions@lists.linux.dev
Subject: [REGRESSION] v5.17-rc1+: FIFREEZE ioctl system call hangs
Hi,
while trying to back up a Dell R7525 system running Debian bookworm/testing
using LVM snapshots, I noticed that the system sometimes 'freezes' (not every
time) when creating the snapshot.
At first I thought this was related to LVM, so I started a thread at
https://listman.redhat.com/archives/linux-lvm/2022-July/026228.html
(continued at
https://listman.redhat.com/archives/linux-lvm/2022-August/thread.html#26229).
Long story short:
I was even able to reproduce this with plain fsfreeze; see the last strace lines:
> [...]
> 14471 1659449870.984635 openat(AT_FDCWD, "/var/lib/machines", O_RDONLY) = 3
> 14471 1659449870.984658 newfstatat(3, "", {st_mode=S_IFDIR|0700, st_size=4096, ...}, AT_EMPTY_PATH) = 0
> 14471 1659449870.984678 ioctl(3, FIFREEZE
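In case it helps with reproduction: what fsfreeze(8) does here boils down to
the sequence below. This is a minimal sketch of my own (not taken from the
fsfreeze sources); the mount point is just the one from the strace above, and
it needs to be run as root.

/* Minimal FIFREEZE/FITHAW reproducer sketch. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

int main(void)
{
	int fd = open("/var/lib/machines", O_RDONLY);  /* mount point from the strace */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (ioctl(fd, FIFREEZE, 0) < 0)        /* this is the call that hangs for me */
		perror("ioctl(FIFREEZE)");
	else if (ioctl(fd, FITHAW, 0) < 0)     /* thaw again so the fs doesn't stay frozen */
		perror("ioctl(FITHAW)");

	close(fd);
	return 0;
}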
So I started bisecting the kernel and found the following bad commit:
> md: add support for REQ_NOWAIT
>
> commit 021a24460dc2 ("block: add QUEUE_FLAG_NOWAIT") added support
> for checking whether a given bdev supports handling of REQ_NOWAIT or not.
> Since then commit 6abc49468eea ("dm: add support for REQ_NOWAIT and enable
> it for linear target") added support for REQ_NOWAIT for dm. This uses
> a similar approach to incorporate REQ_NOWAIT for md based bios.
>
> This patch was tested using t/io_uring tool within FIO. A nvme drive
> was partitioned into 2 partitions and a simple raid 0 configuration
> /dev/md0 was created.
>
> md0 : active raid0 nvme4n1p1[1] nvme4n1p2[0]
> 937423872 blocks super 1.2 512k chunks
>
> Before patch:
>
> $ ./t/io_uring /dev/md0 -p 0 -a 0 -d 1 -r 100
>
> Running top while the above runs:
>
> $ ps -eL | grep $(pidof io_uring)
>
> 38396 38396 pts/2 00:00:00 io_uring
> 38396 38397 pts/2 00:00:15 io_uring
> 38396 38398 pts/2 00:00:13 iou-wrk-38397
>
> We can see iou-wrk-38397 io worker thread created which gets created
> when io_uring sees that the underlying device (/dev/md0 in this case)
> doesn't support nowait.
>
> After patch:
>
> $ ./t/io_uring /dev/md0 -p 0 -a 0 -d 1 -r 100
>
> Running top while the above runs:
>
> $ ps -eL | grep $(pidof io_uring)
>
> 38341 38341 pts/2 00:10:22 io_uring
> 38341 38342 pts/2 00:10:37 io_uring
>
> After running this patch, we don't see any io worker thread
> being created which indicated that io_uring saw that the
> underlying device does support nowait. This is the exact behaviour
> noticed on a dm device which also supports nowait.
>
> For all the other raid personalities except raid0, we would need
> to train pieces which involves make_request fn in order for them
> to correctly handle REQ_NOWAIT.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f51d46d0e7cb5b8494aa534d276a9d8915a2443d
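As a side note, nowait support can also be probed from userspace without
io_uring. The following is a sketch of my own (not from the commit message);
the device path is just an example, and the exact errno for a queue without
nowait support may be EOPNOTSUPP or EAGAIN depending on the kernel version.

/* Probe whether a block device accepts non-blocking direct reads. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>    /* preadv2, RWF_NOWAIT */

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/md0";  /* example device */
	int fd = open(dev, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *buf;
	if (posix_memalign(&buf, 4096, 4096))               /* O_DIRECT needs alignment */
		return 1;
	struct iovec iov = { .iov_base = buf, .iov_len = 4096 };

	ssize_t n = preadv2(fd, &iov, 1, 0, RWF_NOWAIT);     /* non-blocking direct read */
	if (n < 0)
		perror("preadv2(RWF_NOWAIT)");
	else
		printf("nowait read ok: %zd bytes\n", n);

	free(buf);
	close(fd);
	return 0;
}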
After reverting this commit (and the follow-up commit
0f9650bd838efe5c52f7e5f40c3204ad59f1964d),
v5.18.15 and v5.19 worked for me again.
At this point I still wonder why I saw the same problem even after I removed
one NVMe device from the mdraid array and tested it separately, so maybe there
is another nowait/REQ_NOWAIT problem somewhere. During the bisect I only
tested against the mdraid array.
#regzbot introduced: f51d46d0e7cb5b8494aa534d276a9d8915a2443d
#regzbot link: https://listman.redhat.com/archives/linux-lvm/2022-July/026228.html
#regzbot link: https://listman.redhat.com/archives/linux-lvm/2022-August/thread.html#26229
--
Regards,
Thomas