linux-kernel.vger.kernel.org archive mirror
* [REGRESSION] lxc-stop hang on 5.17.x kernels
@ 2022-05-02 13:17 Daniel Harding
  2022-05-02 13:26 ` Jens Axboe
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-02 13:17 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: regressions, io-uring, linux-kernel

I use lxc-4.0.12 on Gentoo, built with io-uring support 
(--enable-liburing), targeting liburing-2.1.  My kernel config is a very 
lightly modified version of Fedora's generic kernel config. After moving 
from the 5.16.x series to the 5.17.x kernel series, I started noticing 
frequent hangs in lxc-stop.  It doesn't happen 100% of the time, but 
definitely more than 50% of the time.  Bisecting narrowed down the issue 
to commit aa43477b040251f451db0d844073ac00a8ab66ee: io_uring: poll 
rework. Testing indicates the problem is still present in 5.18-rc5. 
Unfortunately I do not have the expertise with the codebases of either 
lxc or io-uring to try to debug the problem further on my own, but I can 
easily apply patches to any of the involved components (lxc, liburing, 
kernel) and rebuild for testing or validation.  I am also happy to 
provide any further information that would be helpful with reproducing 
or debugging the problem.

Regards,

Daniel Harding

#regzbot introduced: aa43477b040251f451db0d844073ac00a8ab66ee

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 13:17 [REGRESSION] lxc-stop hang on 5.17.x kernels Daniel Harding
@ 2022-05-02 13:26 ` Jens Axboe
  2022-05-02 13:36   ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2022-05-02 13:26 UTC (permalink / raw)
  To: Daniel Harding, Pavel Begunkov; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 7:17 AM, Daniel Harding wrote:
> I use lxc-4.0.12 on Gentoo, built with io-uring support
> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
> very lightly modified version of Fedora's generic kernel config. After
> moving from the 5.16.x series to the 5.17.x kernel series, I started
> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
> time, but definitely more than 50% of the time.  Bisecting narrowed
> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
> io_uring: poll rework. Testing indicates the problem is still present
> in 5.18-rc5. Unfortunately I do not have the expertise with the
> codebases of either lxc or io-uring to try to debug the problem
> further on my own, but I can easily apply patches to any of the
> involved components (lxc, liburing, kernel) and rebuild for testing or
> validation.  I am also happy to provide any further information that
> would be helpful with reproducing or debugging the problem.

Do you have a recipe to reproduce the hang? That would make it
significantly easier to figure out.

-- 
Jens Axboe



* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 13:26 ` Jens Axboe
@ 2022-05-02 13:36   ` Daniel Harding
  2022-05-02 13:59     ` Jens Axboe
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-02 13:36 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 16:26, Jens Axboe wrote:
> On 5/2/22 7:17 AM, Daniel Harding wrote:
>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>> very lightly modified version of Fedora's generic kernel config. After
>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>> time, but definitely more than 50% of the time.  Bisecting narrowed
>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>> io_uring: poll rework. Testing indicates the problem is still present
>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>> codebases of either lxc or io-uring to try to debug the problem
>> further on my own, but I can easily apply patches to any of the
>> involved components (lxc, liburing, kernel) and rebuild for testing or
>> validation.  I am also happy to provide any further information that
>> would be helpful with reproducing or debugging the problem.
> Do you have a recipe to reproduce the hang? That would make it
> significantly easier to figure out.

I can reproduce it with just the following:

     sudo lxc-create -n lxc-test --template download --bdev dir --dir 
/var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
     sudo lxc-start -n lxc-test
     sudo lxc-stop -n lxc-test

The lxc-stop command never exits and the container continues running.  
If that isn't sufficient to reproduce, please let me know.

-- 
Regards,

Daniel Harding


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 13:36   ` Daniel Harding
@ 2022-05-02 13:59     ` Jens Axboe
  2022-05-02 17:00       ` Jens Axboe
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2022-05-02 13:59 UTC (permalink / raw)
  To: Daniel Harding, Pavel Begunkov; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 7:36 AM, Daniel Harding wrote:
> On 5/2/22 16:26, Jens Axboe wrote:
>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>> very lightly modified version of Fedora's generic kernel config. After
>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>> time, but definitely more than 50% of the time.  Bisecting narrowed
>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>> io_uring: poll rework. Testing indicates the problem is still present
>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>> codebases of either lxc or io-uring to try to debug the problem
>>> further on my own, but I can easily apply patches to any of the
>>> involved components (lxc, liburing, kernel) and rebuild for testing or
>>> validation.  I am also happy to provide any further information that
>>> would be helpful with reproducing or debugging the problem.
>> Do you have a recipe to reproduce the hang? That would make it
>> significantly easier to figure out.
> 
> I can reproduce it with just the following:
> 
>     sudo lxc-create -n lxc-test --template download --bdev dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>     sudo lxc-start -n lxc-test
>     sudo lxc-stop -n lxc-test
> 
> The lxc-stop command never exits and the container continues running.
> If that isn't sufficient to reproduce, please let me know.

Thanks, that's useful! I'm at a conference this week and hence have a
limited amount of time to debug; hopefully Pavel has time to take a look
at this.

-- 
Jens Axboe



* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 13:59     ` Jens Axboe
@ 2022-05-02 17:00       ` Jens Axboe
  2022-05-02 17:40         ` Pavel Begunkov
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2022-05-02 17:00 UTC (permalink / raw)
  To: Daniel Harding, Pavel Begunkov; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 7:59 AM, Jens Axboe wrote:
> On 5/2/22 7:36 AM, Daniel Harding wrote:
>> On 5/2/22 16:26, Jens Axboe wrote:
>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>> very lightly modified version of Fedora's generic kernel config. After
>>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>> time, but definitely more than 50% of the time.  Bisecting narrowed
>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>> io_uring: poll rework. Testing indicates the problem is still present
>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>> codebases of either lxc or io-uring to try to debug the problem
>>>> further on my own, but I can easily apply patches to any of the
>>>> involved components (lxc, liburing, kernel) and rebuild for testing or
>>>> validation.  I am also happy to provide any further information that
>>>> would be helpful with reproducing or debugging the problem.
>>> Do you have a recipe to reproduce the hang? That would make it
>>> significantly easier to figure out.
>>
>> I can reproduce it with just the following:
>>
>>     sudo lxc-create -n lxc-test --template download --bdev dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>     sudo lxc-start -n lxc-test
>>     sudo lxc-stop -n lxc-test
>>
>> The lxc-stop command never exits and the container continues running.
>> If that isn't sufficient to reproduce, please let me know.
> 
> Thanks, that's useful! I'm at a conference this week and hence have a
> limited amount of time to debug; hopefully Pavel has time to take a look
> at this.

Didn't manage to reproduce. Can you try, on both the good and bad
kernel, to do:

# echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable

run lxc-stop

# cp /sys/kernel/debug/tracing/trace ~/iou-trace

so we can see what's going on? Looking at the source, lxc is just using
plain POLL_ADD, so I'm guessing it's not getting a notification when it
expects to, or it's POLL_REMOVE not doing its job. If we have a trace
from both a working and broken kernel, that might shed some light on it.

-- 
Jens Axboe



* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 17:00       ` Jens Axboe
@ 2022-05-02 17:40         ` Pavel Begunkov
  2022-05-02 18:49           ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-02 17:40 UTC (permalink / raw)
  To: Jens Axboe, Daniel Harding; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 18:00, Jens Axboe wrote:
> On 5/2/22 7:59 AM, Jens Axboe wrote:
>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>>> very lightly modified version of Fedora's generic kernel config. After
>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>>> time, but definitely more than 50% of the time.  Bisecting narrowed
>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>> io_uring: poll rework. Testing indicates the problem is still present
>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>> further on my own, but I can easily apply patches to any of the
>>>>> involved components (lxc, liburing, kernel) and rebuild for testing or
>>>>> validation.  I am also happy to provide any further information that
>>>>> would be helpful with reproducing or debugging the problem.
>>>> Do you have a recipe to reproduce the hang? That would make it
>>>> significantly easier to figure out.
>>>
>>> I can reproduce it with just the following:
>>>
>>>      sudo lxc-create -n lxc-test --template download --bdev dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>      sudo lxc-start -n lxc-test
>>>      sudo lxc-stop -n lxc-test
>>>
>>> The lxc-stop command never exits and the container continues running.
>>> If that isn't sufficient to reproduce, please let me know.
>>
>> Thanks, that's useful! I'm at a conference this week and hence have a
>> limited amount of time to debug; hopefully Pavel has time to take a look
>> at this.
> 
> Didn't manage to reproduce. Can you try, on both the good and bad
> kernel, to do:

Same here; it doesn't reproduce for me.


> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
> 
> run lxc-stop
> 
> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
> 
> so we can see what's going on? Looking at the source, lxc is just using
> plain POLL_ADD, so I'm guessing it's not getting a notification when it
> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
> from both a working and broken kernel, that might shed some light on it.

-- 
Pavel Begunkov


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 17:40         ` Pavel Begunkov
@ 2022-05-02 18:49           ` Daniel Harding
  2022-05-02 23:14             ` Pavel Begunkov
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-02 18:49 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

On 5/2/22 20:40, Pavel Begunkov wrote:
> On 5/2/22 18:00, Jens Axboe wrote:
>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>>>> very lightly modified version of Fedora's generic kernel config. 
>>>>>> After
>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>>>> time, but definitely more than 50% of the time. Bisecting narrowed
>>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>> io_uring: poll rework. Testing indicates the problem is still 
>>>>>> present
>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>> involved components (lxc, liburing, kernel) and rebuild for 
>>>>>> testing or
>>>>>> validation.  I am also happy to provide any further information that
>>>>>> would be helpful with reproducing or debugging the problem.
>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>> significantly easier to figure out.
>>>>
>>>> I can reproduce it with just the following:
>>>>
>>>>      sudo lxc-create -n lxc-test --template download --bdev dir 
>>>> --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>>      sudo lxc-start -n lxc-test
>>>>      sudo lxc-stop -n lxc-test
>>>>
>>>> The lxc-stop command never exits and the container continues running.
>>>> If that isn't sufficient to reproduce, please let me know.
>>>
>>> Thanks, that's useful! I'm at a conference this week and hence have a
>>> limited amount of time to debug; hopefully Pavel has time to take a
>>> look at this.
>>
>> Didn't manage to reproduce. Can you try, on both the good and bad
>> kernel, to do:
>
> Same here; it doesn't reproduce for me.
OK, sorry it wasn't something simple.
> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>
>> run lxc-stop
>>
>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>
>> so we can see what's going on? Looking at the source, lxc is just using
>> plain POLL_ADD, so I'm guessing it's not getting a notification when it
>> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
>> from both a working and broken kernel, that might shed some light on it.
It's late in my timezone, but I'll try to work on getting those traces 
tomorrow.

-- 
Regards,

Daniel Harding


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 18:49           ` Daniel Harding
@ 2022-05-02 23:14             ` Pavel Begunkov
  2022-05-03  7:13               ` Daniel Harding
  2022-05-03  7:37               ` Daniel Harding
  0 siblings, 2 replies; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-02 23:14 UTC (permalink / raw)
  To: Daniel Harding, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3029 bytes --]

On 5/2/22 19:49, Daniel Harding wrote:
> On 5/2/22 20:40, Pavel Begunkov wrote:
>> On 5/2/22 18:00, Jens Axboe wrote:
>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>>>>> very lightly modified version of Fedora's generic kernel config. After
>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>>>>> time, but definitely more than 50% of the time. Bisecting narrowed
>>>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>> io_uring: poll rework. Testing indicates the problem is still present
>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>> involved components (lxc, liburing, kernel) and rebuild for testing or
>>>>>>> validation.  I am also happy to provide any further information that
>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>> significantly easier to figure out.
>>>>>
>>>>> I can reproduce it with just the following:
>>>>>
>>>>>      sudo lxc-create -n lxc-test --template download --bdev dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>>>      sudo lxc-start -n lxc-test
>>>>>      sudo lxc-stop -n lxc-test
>>>>>
>>>>> The lxc-stop command never exits and the container continues running.
>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>
>>>> Thanks, that's useful! I'm at a conference this week and hence have a
>>>> limited amount of time to debug; hopefully Pavel has time to take a
>>>> look at this.
>>>
>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>> kernel, to do:
>>
>> Same here; it doesn't reproduce for me.
> OK, sorry it wasn't something simple.
>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>
>>> run lxc-stop
>>>
>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>
>>> so we can see what's going on? Looking at the source, lxc is just using
>>> plain POLL_ADD, so I'm guessing it's not getting a notification when it
>>> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
>>> from both a working and broken kernel, that might shed some light on it.
> It's late in my timezone, but I'll try to work on getting those traces tomorrow.

I think I got it; I've attached a trace.

What's interesting is that it issues a multishot poll, but I don't
see any kind of cancellation: neither cancel requests nor task/ring
exit. Perhaps I have to go look at lxc to see how it's supposed
to work.

-- 
Pavel Begunkov

[-- Attachment #2: uring_trace --]
[-- Type: text/plain, Size: 30396 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 207/207   #P:16
#
#                                _-----=> irqs-off/BH-disabled
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| / _-=> migrate-disable
#                              |||| /     delay
#           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
#              | |         |   |||||     |         |
       lxc-start-3026    [010] .....    58.519361: io_uring_create: ring 00000000de7fa538, fd 3 sq size 512, cq size 1024, flags 0x0
       lxc-start-3026    [010] .....    58.519418: io_uring_create: ring 0000000001d3ba30, fd 4 sq size 512, cq size 1024, flags 0x0
       lxc-start-3026    [010] .....    58.519433: io_uring_submit_sqe: ring 00000000de7fa538, req 0000000061bba231, user_data 0x5631784e5920, opcode 6, flags 0x80000, non block 1, sq_thread 0
       lxc-start-3026    [010] .....    58.519434: io_uring_file_get: ring 00000000de7fa538, req 0000000061bba231, user_data 0x5631784e5920, fd 7
       lxc-start-3026    [010] .....    58.519438: io_uring_submit_sqe: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, flags 0x80000, non block 1, sq_thread 0
       lxc-start-3026    [010] .....    58.519438: io_uring_file_get: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, fd 31
       lxc-start-3026    [010] .....    58.519442: io_uring_submit_sqe: ring 00000000de7fa538, req 000000004b8dead2, user_data 0x5631784e59c0, opcode 6, flags 0x80000, non block 1, sq_thread 0
       lxc-start-3026    [010] .....    58.519442: io_uring_file_get: ring 00000000de7fa538, req 000000004b8dead2, user_data 0x5631784e59c0, fd 5
       lxc-start-3026    [010] .....    58.519444: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:4-164     [005] d..1.    58.542112: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.542147: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.542165: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:4-164     [005] d..1.    58.542245: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.542272: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.542278: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:4-164     [005] d..1.    58.542419: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [002] ...1.    58.542433: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [002] .....    58.542449: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.705795: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [004] ...1.    58.705838: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [004] .....    58.705859: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.705947: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [004] ...1.    58.705981: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [004] .....    58.705990: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.706420: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.706460: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.706472: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.706516: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.706552: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.706561: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.706878: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.706910: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.706920: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.706932: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.706940: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.706942: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.707011: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.707044: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.707054: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.707527: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.707560: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.707569: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.708337: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.708360: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.708366: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.708443: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.708466: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.708472: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.708515: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.708538: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.708544: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.708597: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.708620: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.708626: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.708637: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [006] ...1.    58.708643: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [006] .....    58.708645: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.709330: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.709371: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.709387: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:4-164     [005] d..1.    58.711588: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.711605: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.711618: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.713658: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.713677: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.713697: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.716828: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.716849: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.716872: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [014] d..1.    58.717675: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.717693: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.717703: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.726493: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.726510: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.726526: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.727418: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.727436: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.727453: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [014] d..1.    58.730047: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.730076: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.730094: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [003] d..1.    58.730287: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.730309: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.730313: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [003] d..1.    58.730948: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [010] ...1.    58.730964: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [010] .....    58.730978: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [007] d..1.    58.734533: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.734563: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.734578: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [007] d..1.    58.737818: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.737838: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.737850: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [014] d..1.    58.739063: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.739102: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.739116: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.739723: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.739750: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.739758: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.750710: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [008] ...1.    58.750744: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [008] .....    58.750756: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.751685: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.751705: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.751717: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.751740: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.751749: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.751752: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.757615: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.757631: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.757646: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.757727: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.757735: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.757742: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.757998: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758009: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758015: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.758181: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758206: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758212: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.758275: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758299: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758305: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.758336: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758343: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758345: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [014] d..1.    58.758348: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758352: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758353: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.758365: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758372: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758373: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [012] d..1.    58.758937: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.758955: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.758961: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.769910: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.769952: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.769962: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.770059: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.770094: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.770103: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.770736: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.770757: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.770767: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.770876: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.770892: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.770896: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [002] d..1.    58.771562: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.771579: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.771586: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [004] d..1.    58.772074: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.772092: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.772098: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.772902: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.772924: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.772943: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.774121: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.774142: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.774157: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.774821: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.774835: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.774844: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.775258: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.775270: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.775276: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.775613: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.775622: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.775625: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.775706: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.775716: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.775717: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.776105: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.776114: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.776116: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:2-120     [005] d..1.    58.776501: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.776515: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.776518: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [002] d..1.    58.778234: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.778256: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.778278: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.779513: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.779540: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.779557: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.851129: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.851159: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.851182: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.851306: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.851318: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.851323: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.851347: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.851357: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.851361: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.852176: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.852212: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.852221: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [004] d..1.    58.859578: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    58.859605: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    58.859630: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [014] d..1.    59.861798: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    59.861827: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
       lxc-start-3026    [015] .....    59.861855: io_uring_cqring_wait: ring 00000000de7fa538, min_events 1
   kworker/u32:3-139     [014] d..1.    59.863176: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    59.863191: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
   kworker/u32:2-120     [005] d..1.    59.863195: io_uring_task_add: ring 00000000de7fa538, req 000000000dd2a118, user_data 0x5631784e5970, opcode 6, mask 41
       lxc-start-3026    [015] ...1.    59.863206: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e5970, result 1, cflags 0x2
        lxc-stop-3305    [005] d..1.    76.843006: io_uring_task_add: ring 00000000de7fa538, req 000000004b8dead2, user_data 0x5631784e59c0, opcode 6, mask c3
       lxc-start-3026    [015] ...1.    76.843057: io_uring_complete: ring 00000000de7fa538, req 0000000000000000, user_data 0x5631784e59c0, result 1, cflags 0x2

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 23:14             ` Pavel Begunkov
@ 2022-05-03  7:13               ` Daniel Harding
  2022-05-03  7:37               ` Daniel Harding
  1 sibling, 0 replies; 27+ messages in thread
From: Daniel Harding @ 2022-05-03  7:13 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 4269 bytes --]

On 5/3/22 02:14, Pavel Begunkov wrote:
> On 5/2/22 19:49, Daniel Harding wrote:
>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config 
>>>>>>>> is a
>>>>>>>> very lightly modified version of Fedora's generic kernel 
>>>>>>>> config. After
>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I 
>>>>>>>> started
>>>>>>>> noticing frequent hangs.  It doesn't happen 100% of the
>>>>>>>> time, but definitely more than 50% of the time. Bisecting narrowed
>>>>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>> io_uring: poll rework. Testing indicates the problem is still 
>>>>>>>> present
>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for 
>>>>>>>> testing or
>>>>>>>> validation.  I am also happy to provide any further information 
>>>>>>>> that
>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>> significantly easier to figure out.
>>>>>>
>>>>>> I can reproduce it with just the following:
>>>>>>
>>>>>>      sudo lxc-create -n lxc-test --template download --bdev dir 
>>>>>> --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>>>>      sudo lxc-start -n lxc-test
>>>>>>      sudo lxc-stop -n lxc-test
>>>>>>
>>>>>> The lxc-stop command never exits and the container continues 
>>>>>> running.
>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>
>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>> limited amount of time to debug, hopefully Pavel has time to take 
>>>>> a look
>>>>> at this.
>>>>
>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>> kernel, to do:
>>>
>>> Same here, it doesn't reproduce for me
>> OK, sorry it wasn't something simple.
>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>
>>>> run lxc-stop
>>>>
>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>
>>>> so we can see what's going on? Looking at the source, lxc is just 
>>>> using
>>>> plain POLL_ADD, so I'm guessing it's not getting a notification 
>>>> when it
>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
>>>> from both a working and broken kernel, that might shed some light 
>>>> on it.
>> It's late in my timezone, but I'll try to work on getting those 
>> traces tomorrow.
>
> I think I got it, I've attached a trace.
>
> What's interesting is that it issues a multi shot poll but I don't
> see any kind of cancellation, neither cancel requests nor task/ring
> exit. Perhaps have to go look at lxc to see how it's supposed
> to work

Yes, that looks exactly like my bad trace.  I've attached a good trace 
(captured with linux-5.16.19) and a bad trace (captured with 
linux-5.17.5).  These are the differences I noticed with just a visual scan:

* Both traces have three io_uring_submit_sqe calls at the very 
beginning, but in the good trace, there are further io_uring_submit_sqe 
calls throughout the trace, while in the bad trace, there are none.
* The good trace uses a mask of c3 for io_uring_task_add much more often 
than the bad trace:  the bad trace uses a mask of c3 only for the very 
last call to io_uring_task_add, but a mask of 41 for the other calls.
* In the good trace, many of the io_uring_complete calls have a result 
of 195, while in the bad trace, they all have a result of 1.

I don't know whether any of those things are significant or not, but 
that's what jumped out at me.
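Assuming the mask and result fields in these tracepoints are the standard 
<poll.h> event bitmasks (an assumption about the trace encoding, not 
something stated in the thread), the two mask values decode as follows:

```shell
# Decode the two mask values seen in the traces, assuming they are the
# standard <poll.h> event bits:
#   POLLIN=0x001  POLLPRI=0x002  POLLRDNORM=0x040  POLLRDBAND=0x080
good_mask=$(( 0x01 | 0x02 | 0x40 | 0x80 ))  # c3: POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND
bad_mask=$(( 0x01 | 0x40 ))                 # 41: POLLIN|POLLRDNORM

printf 'mask c3 decodes to %#x (%d)\n' "$good_mask" "$good_mask"
printf 'mask 41 decodes to %#x (%d)\n' "$bad_mask" "$bad_mask"
```

Under that reading, 0xc3 is 195 and 0x41 is 65, which would line up with 
the result values of 195 and 65 seen in the good trace, whereas the bad 
trace only ever reports result 1.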

I have also attached a copy of the script I used to generate the 
traces.  If there is anything further I can do to help debug, please let 
me know.

-- 
Regards,

Daniel Harding

[-- Attachment #2: lxc-trace-good --]
[-- Type: text/plain, Size: 151978 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 1145/1145   #P:16
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| / _-=> migrate-disable
#                              |||| /     delay
#           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
#              | |         |   |||||     |         |
       lxc-start-11222   [002] .....  3524.100871: io_uring_create: ring 000000004faf180d, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-11222   [002] .....  3524.100928: io_uring_create: ring 000000001361d50f, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-11222   [002] .....  3524.100949: io_uring_file_get: ring 000000004faf180d, fd 7
       lxc-start-11222   [002] .....  3524.100950: io_uring_submit_sqe: ring 000000004faf180d, req 00000000b6edaa7a, op 6, data 0x55dd0ffc9b40, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.100954: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.100954: io_uring_submit_sqe: ring 000000004faf180d, req 000000003f077f78, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.100959: io_uring_file_get: ring 000000004faf180d, fd 5
       lxc-start-11222   [002] .....  3524.100959: io_uring_submit_sqe: ring 000000004faf180d, req 0000000096324fd9, op 6, data 0x55dd0ffc9be0, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.100962: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [015] d..1.  3524.108902: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.108917: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.108940: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.108940: io_uring_submit_sqe: ring 000000004faf180d, req 000000006cc3c1d8, op 6, data 0x55dd0ffc9420, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.108941: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9420, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.108958: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [004] d..1.  3524.108985: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.108995: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.108997: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.108997: io_uring_submit_sqe: ring 000000004faf180d, req 000000004f5d8238, op 6, data 0x55dd0ffc9470, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.108997: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9470, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.109002: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [004] d..1.  3524.109011: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.109017: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.109018: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.109019: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d7f354a6, op 6, data 0x55dd0ffc94c0, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.109019: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc94c0, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.109021: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [004] d..1.  3524.109028: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.109035: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.109039: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.109039: io_uring_submit_sqe: ring 000000004faf180d, req 000000007cd8fe38, op 6, data 0x55dd0ffc6380, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.109039: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc6380, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.109045: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [004] d..1.  3524.109049: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.109055: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.109057: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.109057: io_uring_submit_sqe: ring 000000004faf180d, req 00000000efe6c834, op 6, data 0x55dd0ffc63d0, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.109057: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc63d0, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.109060: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
        lxc-stop-11254   [004] d..1.  3524.109097: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9be0, mask c3
       lxc-start-11222   [002] ...1.  3524.109106: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9be0, result 195, cflags 2
       lxc-start-11222   [002] .....  3524.109109: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.109109: io_uring_submit_sqe: ring 000000004faf180d, req 00000000ed469fc0, op 6, data 0x55dd0ffc6420, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] ...1.  3524.109109: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc6420, result 1, cflags 0
       lxc-start-11222   [002] .....  3524.109112: io_uring_file_get: ring 000000004faf180d, fd 24
       lxc-start-11222   [002] .....  3524.109113: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d8d68867, op 6, data 0x55dd0ffc6420, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.109113: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.127868: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.127894: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.127904: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.127905: io_uring_submit_sqe: ring 000000004faf180d, req 00000000eb0edc76, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.127908: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.128680: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.128706: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.128714: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.128714: io_uring_submit_sqe: ring 000000004faf180d, req 00000000714847fc, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.128717: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.128817: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.128843: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.128850: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.128850: io_uring_submit_sqe: ring 000000004faf180d, req 0000000048b90ba5, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.128852: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [010] d..1.  3524.247880: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.247916: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.247930: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.247931: io_uring_submit_sqe: ring 000000004faf180d, req 0000000084aef9e8, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.247935: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [010] d..1.  3524.247985: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.248009: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.248015: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.248016: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d2fc7b04, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.248018: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [010] d..1.  3524.248036: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.248048: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.248050: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.248050: io_uring_submit_sqe: ring 000000004faf180d, req 00000000dea1d21b, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.248052: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [010] d..1.  3524.248077: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.248104: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.248117: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.248117: io_uring_submit_sqe: ring 000000004faf180d, req 000000004387fc1c, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] ...1.  3524.248119: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 1, cflags 0
       lxc-start-11222   [006] .....  3524.248120: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.248121: io_uring_submit_sqe: ring 000000004faf180d, req 000000008974a2d5, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.248122: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [010] d..1.  3524.248164: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [006] ...1.  3524.248191: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [006] .....  3524.248198: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [006] .....  3524.248198: io_uring_submit_sqe: ring 000000004faf180d, req 000000002589be61, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [006] .....  3524.248200: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.248573: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.248618: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.248626: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.248627: io_uring_submit_sqe: ring 000000004faf180d, req 000000001216deb4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.248629: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.249368: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.249392: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.249400: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.249401: io_uring_submit_sqe: ring 000000004faf180d, req 0000000030e87858, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.249403: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.249793: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.249819: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.249827: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.249827: io_uring_submit_sqe: ring 000000004faf180d, req 00000000f7156669, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.249830: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250177: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250208: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250216: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250216: io_uring_submit_sqe: ring 000000004faf180d, req 0000000063927e59, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250218: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250243: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250268: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250274: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250275: io_uring_submit_sqe: ring 000000004faf180d, req 00000000426146d4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250277: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250778: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250789: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250797: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250797: io_uring_submit_sqe: ring 000000004faf180d, req 00000000ec29de44, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250799: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250882: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250896: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250897: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250897: io_uring_submit_sqe: ring 000000004faf180d, req 00000000885c597e, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250898: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250903: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250908: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250909: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250909: io_uring_submit_sqe: ring 000000004faf180d, req 000000001095359f, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250909: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.250956: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.250981: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.250982: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.250982: io_uring_submit_sqe: ring 000000004faf180d, req 00000000c38bea87, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.250983: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.251021: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.251029: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.251031: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.251031: io_uring_submit_sqe: ring 000000004faf180d, req 00000000e6553c8a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.251031: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.251056: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.251083: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.251084: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.251085: io_uring_submit_sqe: ring 000000004faf180d, req 0000000002deba43, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.251085: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.251467: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.251476: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.251477: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.251477: io_uring_submit_sqe: ring 000000004faf180d, req 0000000085abde6b, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.251477: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [006] d..1.  3524.253792: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.253812: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.253819: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.253819: io_uring_submit_sqe: ring 000000004faf180d, req 0000000082aa85a4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.253821: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [008] d..1.  3524.254328: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [002] ...1.  3524.254351: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [002] .....  3524.254361: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [002] .....  3524.254361: io_uring_submit_sqe: ring 000000004faf180d, req 00000000db0032c3, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [002] .....  3524.254363: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [008] d..1.  3524.256987: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.257003: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.257020: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.257021: io_uring_submit_sqe: ring 000000004faf180d, req 00000000cc3c4e38, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.257027: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [004] d..1.  3524.257626: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.257642: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.257644: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.257644: io_uring_submit_sqe: ring 000000004faf180d, req 0000000009e25313, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.257645: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [004] d..1.  3524.263730: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.263742: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.263753: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.263754: io_uring_submit_sqe: ring 000000004faf180d, req 00000000b0e7e58a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.263757: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [004] d..1.  3524.263891: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.263899: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.263900: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.263900: io_uring_submit_sqe: ring 000000004faf180d, req 00000000eb638006, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.263900: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [004] d..1.  3524.263918: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.263926: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.263927: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.263927: io_uring_submit_sqe: ring 000000004faf180d, req 00000000a65c2dea, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.263927: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [004] d..1.  3524.264369: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.264375: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.264376: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.264377: io_uring_submit_sqe: ring 000000004faf180d, req 00000000f4b699a0, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.264377: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [003] d..1.  3524.266553: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.266567: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.266577: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.266578: io_uring_submit_sqe: ring 000000004faf180d, req 00000000201a004a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.266580: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [003] d..1.  3524.266923: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [012] ...1.  3524.266931: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [012] .....  3524.266933: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [012] .....  3524.266933: io_uring_submit_sqe: ring 000000004faf180d, req 00000000201a004a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [012] .....  3524.266933: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [002] d..1.  3524.267418: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.267430: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.267438: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.267439: io_uring_submit_sqe: ring 000000004faf180d, req 00000000f4b699a0, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.267441: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [014] d..1.  3524.273098: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.273111: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.273115: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.273115: io_uring_submit_sqe: ring 000000004faf180d, req 00000000a65c2dea, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.273116: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [014] d..1.  3524.273241: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.273251: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.273253: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.273253: io_uring_submit_sqe: ring 000000004faf180d, req 00000000eb638006, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.273254: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [014] d..1.  3524.273294: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.273304: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.273306: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.273306: io_uring_submit_sqe: ring 000000004faf180d, req 00000000b0e7e58a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.273307: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [010] d..1.  3524.286330: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.286361: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.286375: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.286376: io_uring_submit_sqe: ring 000000004faf180d, req 0000000009e25313, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.286380: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [002] d..1.  3524.286966: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.286996: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.287005: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.287006: io_uring_submit_sqe: ring 000000004faf180d, req 00000000cc3c4e38, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.287008: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [008] d..1.  3524.295795: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.295833: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.295845: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.295846: io_uring_submit_sqe: ring 000000004faf180d, req 00000000db0032c3, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.295849: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [008] d..1.  3524.296009: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.296046: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.296056: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.296057: io_uring_submit_sqe: ring 000000004faf180d, req 0000000082aa85a4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.296060: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.296961: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [004] ...1.  3524.297002: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [004] .....  3524.297013: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [004] .....  3524.297013: io_uring_submit_sqe: ring 000000004faf180d, req 0000000085abde6b, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [004] .....  3524.297017: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.301130: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301153: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301165: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301165: io_uring_submit_sqe: ring 000000004faf180d, req 0000000002deba43, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301169: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.301237: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301252: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301259: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301259: io_uring_submit_sqe: ring 000000004faf180d, req 00000000e6553c8a, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:6-10147   [004] d..1.  3524.301264: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301274: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301276: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301276: io_uring_submit_sqe: ring 000000004faf180d, req 00000000c38bea87, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301278: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.301489: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301504: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301511: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301511: io_uring_submit_sqe: ring 000000004faf180d, req 000000001095359f, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301513: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.301714: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301746: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301753: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301754: io_uring_submit_sqe: ring 000000004faf180d, req 00000000885c597e, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301756: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.301787: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301800: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301803: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301803: io_uring_submit_sqe: ring 000000004faf180d, req 00000000ec29de44, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301804: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.301826: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301838: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301840: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301841: io_uring_submit_sqe: ring 000000004faf180d, req 00000000426146d4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301841: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.301851: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301861: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301863: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301863: io_uring_submit_sqe: ring 000000004faf180d, req 0000000063927e59, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301864: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [002] d..1.  3524.301940: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301959: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301963: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301963: io_uring_submit_sqe: ring 000000004faf180d, req 00000000f7156669, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301965: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [002] d..1.  3524.301985: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.301993: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.301995: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.301996: io_uring_submit_sqe: ring 000000004faf180d, req 0000000030e87858, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.301997: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3524.302458: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.302465: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.302467: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.302467: io_uring_submit_sqe: ring 000000004faf180d, req 000000001216deb4, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.302468: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.302970: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.302980: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.302981: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.302981: io_uring_submit_sqe: ring 000000004faf180d, req 000000002589be61, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.302982: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [002] d..1.  3524.303378: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.303389: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.303396: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.303396: io_uring_submit_sqe: ring 000000004faf180d, req 000000008974a2d5, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.303398: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3524.312785: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.312802: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.312814: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.312815: io_uring_submit_sqe: ring 000000004faf180d, req 000000004387fc1c, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.312819: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3524.313354: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.313360: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.313362: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.313362: io_uring_submit_sqe: ring 000000004faf180d, req 00000000dea1d21b, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.313363: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3524.313966: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.313972: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.313978: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.313978: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d2fc7b04, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.313979: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3524.314197: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.314202: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.314203: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.314203: io_uring_submit_sqe: ring 000000004faf180d, req 0000000084aef9e8, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.314204: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:6-10147   [004] d..1.  3524.314482: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.314489: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.314490: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.314491: io_uring_submit_sqe: ring 000000004faf180d, req 0000000048b90ba5, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.314491: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:2-9972    [014] d..1.  3524.326177: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.326224: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.326244: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.326245: io_uring_submit_sqe: ring 000000004faf180d, req 00000000714847fc, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.326251: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:2-9972    [010] d..1.  3524.326650: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.326661: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.326666: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.326666: io_uring_submit_sqe: ring 000000004faf180d, req 00000000eb0edc76, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.326669: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:2-9972    [010] d..1.  3524.326835: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.326857: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.326860: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.326861: io_uring_submit_sqe: ring 000000004faf180d, req 000000003f077f78, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.326862: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:2-9972    [010] d..1.  3524.327295: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.327315: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.327318: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.327319: io_uring_submit_sqe: ring 000000004faf180d, req 00000000ed469fc0, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.327320: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [000] d..1.  3524.390092: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.390127: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.390142: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.390143: io_uring_submit_sqe: ring 000000004faf180d, req 00000000efe6c834, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.390148: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:5-10021   [007] d..1.  3524.390293: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.390321: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.390328: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.390328: io_uring_submit_sqe: ring 000000004faf180d, req 000000007cd8fe38, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:5-10021   [007] d..1.  3524.390346: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.390361: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.390365: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.390365: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d7f354a6, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.390367: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:5-10021   [007] d..1.  3524.391096: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.391117: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.391120: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.391120: io_uring_submit_sqe: ring 000000004faf180d, req 000000004f5d8238, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.391121: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:1-105     [002] d..1.  3524.394382: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.394411: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.394418: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.394419: io_uring_submit_sqe: ring 000000004faf180d, req 000000006cc3c1d8, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.394421: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:0-9962    [000] d..1.  3524.684009: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3524.684032: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3524.684044: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3524.684045: io_uring_submit_sqe: ring 000000004faf180d, req 000000006cc3c1d8, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3524.684048: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3525.686314: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3525.686378: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3525.686410: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3525.686412: io_uring_submit_sqe: ring 000000004faf180d, req 000000004f5d8238, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3525.686421: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
   kworker/u32:3-9973    [003] d..1.  3525.693146: io_uring_task_add: ring 000000004faf180d, op 6, data 0x55dd0ffc9b90, mask 41
       lxc-start-11222   [005] ...1.  3525.693282: io_uring_complete: ring 000000004faf180d, user_data 0x55dd0ffc9b90, result 65, cflags 0
       lxc-start-11222   [005] .....  3525.693292: io_uring_file_get: ring 000000004faf180d, fd 53
       lxc-start-11222   [005] .....  3525.693293: io_uring_submit_sqe: ring 000000004faf180d, req 00000000d7f354a6, op 6, data 0x55dd0ffc9b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11222   [005] .....  3525.693296: io_uring_cqring_wait: ring 000000004faf180d, min_events 1
       lxc-start-11852   [012] .....  3632.581537: io_uring_create: ring 00000000cbfee049, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-11852   [012] .....  3632.581566: io_uring_create: ring 00000000eba63feb, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-11852   [012] .....  3632.581575: io_uring_file_get: ring 00000000cbfee049, fd 7
       lxc-start-11852   [012] .....  3632.581576: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000317df242, op 6, data 0x55b72af57b40, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.581578: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.581578: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000094f2006, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.581580: io_uring_file_get: ring 00000000cbfee049, fd 5
       lxc-start-11852   [012] .....  3632.581580: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000fe390591, op 6, data 0x55b72af57be0, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.581581: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:1-105     [000] d..1.  3632.583290: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.583317: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.583328: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.583329: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000baa91618, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.583332: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:1-105     [000] d..1.  3632.583502: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.583527: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.583534: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.583535: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000e51398a5, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.583537: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589763: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589778: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589797: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589798: io_uring_submit_sqe: ring 00000000cbfee049, req 000000003093f3c0, op 6, data 0x55b72af57420, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589799: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57420, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589812: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589821: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589828: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589830: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589830: io_uring_submit_sqe: ring 00000000cbfee049, req 0000000023e1924e, op 6, data 0x55b72af57470, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589830: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57470, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589834: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589842: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589848: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589850: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589850: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000c207c8b2, op 6, data 0x55b72af574c0, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589851: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af574c0, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589854: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589861: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589867: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589872: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589872: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000a95bcfcd, op 6, data 0x55b72af54380, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589873: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af54380, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589879: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589884: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589892: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589893: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589894: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000c0cb7ee9, op 6, data 0x55b72af543d0, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589894: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af543d0, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589897: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
        lxc-stop-11902   [013] d..1.  3632.589934: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57be0, mask c3
       lxc-start-11852   [012] ...1.  3632.589942: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result 195, cflags 2
       lxc-start-11852   [012] .....  3632.589945: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589945: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000d647b16c, op 6, data 0x55b72af54420, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.589946: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af54420, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.589950: io_uring_file_get: ring 00000000cbfee049, fd 24
       lxc-start-11852   [012] .....  3632.589951: io_uring_submit_sqe: ring 00000000cbfee049, req 0000000047139aab, op 6, data 0x55b72af54420, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.589951: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:1-105     [003] d..1.  3632.672710: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.672747: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.672762: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.672763: io_uring_submit_sqe: ring 00000000cbfee049, req 000000000792c8ff, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.672767: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:1-105     [003] d..1.  3632.681413: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.681442: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.681453: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681453: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000f9982d6a, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.681456: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:1-105     [003] d..1.  3632.681501: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.681527: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.681534: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681535: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000589f1cb6, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.681537: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.681626: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.681654: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.681661: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681661: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000b95e38f4, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:0-9962    [007] d..1.  3632.681682: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.681713: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.681723: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681723: io_uring_submit_sqe: ring 00000000cbfee049, req 000000005df43d91, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] ...1.  3632.681724: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 1, cflags 0
       lxc-start-11852   [012] .....  3632.681726: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681726: io_uring_submit_sqe: ring 00000000cbfee049, req 0000000026c4fbf5, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.681727: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.681803: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.681843: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.681853: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.681854: io_uring_submit_sqe: ring 00000000cbfee049, req 000000003640b6fd, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.681857: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.682743: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.682785: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.682795: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.682795: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000efcbd90a, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.682797: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.683151: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.683173: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.683176: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.683177: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000d51d520f, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.683178: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.683912: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.683926: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.683935: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.683936: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000811155a2, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.683938: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.684286: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.684293: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.684295: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.684296: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000cf57da32, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.684297: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.684337: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.684346: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.684349: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.684349: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000bc408527, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.684350: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.684758: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.684777: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.684783: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.684783: io_uring_submit_sqe: ring 00000000cbfee049, req 0000000078b6f936, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.684784: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.686260: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.686273: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.686283: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.686284: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000e16b9cd6, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.686286: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.686388: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [012] ...1.  3632.686394: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [012] .....  3632.686395: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [012] .....  3632.686395: io_uring_submit_sqe: ring 00000000cbfee049, req 000000006927e8fb, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [012] .....  3632.686396: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
   kworker/u32:0-9962    [007] d..1.  3632.686553: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b90, mask 41
       lxc-start-11852   [002] ...1.  3632.686562: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result 65, cflags 0
       lxc-start-11852   [002] .....  3632.686570: io_uring_file_get: ring 00000000cbfee049, fd 53
       lxc-start-11852   [002] .....  3632.686570: io_uring_submit_sqe: ring 00000000cbfee049, req 00000000256abef6, op 6, data 0x55b72af57b90, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [002] .....  3632.686572: io_uring_cqring_wait: ring 00000000cbfee049, min_events 1
 systemd-shutdow-11853   [010] dN.3.  3632.725323: io_uring_task_add: ring 00000000cbfee049, op 6, data 0x55b72af57b40, mask 0
       lxc-start-11852   [002] ...1.  3632.725336: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b40, result 1, cflags 2
       lxc-start-11852   [002] .....  3632.725355: io_uring_file_get: ring 00000000eba63feb, fd 53
       lxc-start-11852   [002] .....  3632.725356: io_uring_submit_sqe: ring 00000000eba63feb, req 00000000db0032c3, op 6, data 0x55b72af54470, flags 524288, non block 1, sq_thread 0
       lxc-start-11852   [002] .....  3632.725365: io_uring_cqring_wait: ring 00000000eba63feb, min_events 1
       lxc-start-11852   [002] ...1.  3632.725394: io_uring_complete: ring 00000000eba63feb, user_data 0x55b72af54470, result -125, cflags 0
       lxc-start-11852   [002] ...1.  3632.725408: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b40, result -125, cflags 0
       lxc-start-11852   [002] ...1.  3632.725409: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57be0, result -125, cflags 0
       lxc-start-11852   [002] ...1.  3632.725409: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af57b90, result -125, cflags 0
       lxc-start-11852   [002] ...1.  3632.725409: io_uring_complete: ring 00000000cbfee049, user_data 0x55b72af54420, result -125, cflags 0
       lxc-start-12282   [012] .....  3686.295587: io_uring_create: ring 00000000dde5f318, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-12282   [012] .....  3686.295629: io_uring_create: ring 000000005adcd687, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-12282   [012] .....  3686.295643: io_uring_file_get: ring 00000000dde5f318, fd 7
       lxc-start-12282   [012] .....  3686.295644: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000023e1924e, op 6, data 0x5649dc653b40, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.295646: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.295646: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003093f3c0, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.295650: io_uring_file_get: ring 00000000dde5f318, fd 5
       lxc-start-12282   [012] .....  3686.295650: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000baa91618, op 6, data 0x5649dc653be0, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.295652: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.297292: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.297324: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.297337: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.297338: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000094f2006, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.297341: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.297499: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.297524: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.297531: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.297531: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000040fc5778, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.297533: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.397128: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397174: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397188: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397189: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000396c7307, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.397193: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.397234: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397273: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397280: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397280: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003de8bff9, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [008] d..1.  3686.397321: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397356: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397363: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397363: io_uring_submit_sqe: ring 00000000dde5f318, req 000000007bf2d030, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [008] d..1.  3686.397403: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397440: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397450: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397450: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000589f1cb6, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [008] d..1.  3686.397475: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397499: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397505: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397505: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000f9982d6a, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [008] d..1.  3686.397514: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397528: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397530: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397530: io_uring_submit_sqe: ring 00000000dde5f318, req 000000000792c8ff, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.397531: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.397539: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397548: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397550: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397550: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000e51398a5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.397551: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.397584: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.397587: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.397589: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.397589: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d647b16c, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.397590: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.398321: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.398326: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.398328: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.398328: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c0cb7ee9, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.398328: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.398717: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.398724: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.398725: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.398725: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a95bcfcd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.398726: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.399249: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.399261: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.399270: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.399271: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c207c8b2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.399273: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [014] d..1.  3686.399752: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.399762: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.399777: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.399777: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d7704e12, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.399779: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [014] d..1.  3686.399852: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.399856: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.399858: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.399858: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000053af97e, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.399858: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.400743: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.400751: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.400758: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.400759: io_uring_submit_sqe: ring 00000000dde5f318, req 000000001c8655bc, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.400761: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.402236: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.402248: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.402255: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.402255: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005cf39616, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.402257: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.402773: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.402785: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.402793: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.402793: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000409fe667, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.402794: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.404155: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.404166: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.404177: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.404178: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000537f12ba, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.404180: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.404870: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.404881: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.404894: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.404895: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006e38a1a6, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.404898: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.407079: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.407099: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.407110: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.407110: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c17842af, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.407112: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.407232: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.407249: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.407256: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.407256: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000cf57da32, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [010] d..1.  3686.407259: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [012] ...1.  3686.407268: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [012] .....  3686.407269: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [012] .....  3686.407269: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000811155a2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [012] .....  3686.407270: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.407692: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.407712: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.407718: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.407718: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d51d520f, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.407719: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [010] d..1.  3686.407874: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.407890: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.407893: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.407893: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000efcbd90a, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.407895: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3686.408500: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.408523: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.408530: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.408530: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003640b6fd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.408532: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3686.409923: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.409938: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.409949: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.409950: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000026c4fbf5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.409953: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [006] d..1.  3686.413155: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.413185: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.413193: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.413193: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005df43d91, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.413196: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [006] d..1.  3686.413241: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.413268: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.413275: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.413275: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000b95e38f4, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:0-9962    [006] d..1.  3686.413276: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.413280: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.413286: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.413286: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000eda217e8, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.413288: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.443455: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.443479: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.443490: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.443491: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000074f1609, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.443493: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.444126: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.444145: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.444151: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.444151: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006444ebae, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.444152: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.448052: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.448091: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.448102: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.448103: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000edd0bca1, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.448106: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.449460: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.449502: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.449515: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.449516: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004bd66771, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.449520: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.449591: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.449631: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.449641: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.449641: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004a8d7413, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [015] .....  3686.449645: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.452161: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [015] ...1.  3686.452174: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [015] .....  3686.452186: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [015] .....  3686.452186: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a3daf66d, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [015] d..1.  3686.452201: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452232: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452240: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452240: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006478182b, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [015] d..1.  3686.452244: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452253: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452255: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452256: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006478182b, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.452257: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.452340: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452367: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452374: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452375: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a3daf66d, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.452377: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.452447: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452472: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452480: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452480: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004a8d7413, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.452483: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.452536: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452563: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452570: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452570: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004bd66771, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [015] d..1.  3686.452572: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.452584: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.452586: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.452586: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000edd0bca1, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.452587: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.453058: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.453083: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.453091: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.453091: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006444ebae, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.453093: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.453854: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.453863: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.453871: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.453871: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000074f1609, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.453874: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.454322: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.454344: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.454351: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.454352: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000eda217e8, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.454353: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [015] d..1.  3686.454930: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3686.454949: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3686.454955: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3686.454955: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000b95e38f4, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3686.454957: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.455457: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.455467: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.455469: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.455469: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005df43d91, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.455470: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.458543: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.458553: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.458565: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.458566: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000026c4fbf5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.458569: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.458623: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.458630: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.458632: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.458632: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003640b6fd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.458632: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.458646: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.458652: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.458653: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.458653: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000efcbd90a, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.458654: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.459203: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.459212: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.459217: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.459217: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d51d520f, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.459219: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.459403: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.459409: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.459410: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.459410: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000811155a2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.459411: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.459675: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.459683: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.459684: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.459684: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000cf57da32, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.459684: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.459993: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.459999: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.460000: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.460000: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c17842af, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.460000: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.460497: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.460504: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.460506: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.460506: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006e38a1a6, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.460507: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [008] d..1.  3686.460670: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.460677: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.460678: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.460678: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000537f12ba, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.460678: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [012] d..1.  3686.460740: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.460749: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.460750: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.460751: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000409fe667, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.460751: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [012] d..1.  3686.499721: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.499746: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.499759: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.499759: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005cf39616, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.499762: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [012] d..1.  3686.499881: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.499893: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.499897: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.499897: io_uring_submit_sqe: ring 00000000dde5f318, req 000000001c8655bc, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [013] d..1.  3686.499913: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.499923: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.499925: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.499925: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000053af97e, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.499926: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [013] d..1.  3686.500446: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.500465: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.500470: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.500471: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d7704e12, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.500472: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3686.504137: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3686.504287: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3686.504297: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3686.504298: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c207c8b2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3686.504300: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [014] d..1.  3687.506228: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3687.506263: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3687.506296: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3687.506298: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a95bcfcd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3687.506301: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 1, cflags 0
       lxc-start-12282   [008] .....  3687.506308: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3687.506308: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c0cb7ee9, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-9973    [013] d..1.  3687.506311: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3687.506316: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3687.506318: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3687.506319: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d647b16c, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3687.506321: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:3-9973    [013] d..1.  3687.506350: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3687.506374: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3687.506378: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3687.506379: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000e51398a5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3687.506380: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.305593: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.305648: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.305685: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.305686: io_uring_submit_sqe: ring 00000000dde5f318, req 000000000792c8ff, op 6, data 0x5649dc653420, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.305688: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653420, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.305719: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.305776: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.305814: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.305833: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.305834: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000f9982d6a, op 6, data 0x5649dc653470, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.305835: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653470, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.305859: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.305911: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.305951: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.305970: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.305970: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000589f1cb6, op 6, data 0x5649dc6534c0, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.305971: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc6534c0, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.305995: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.306047: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.306086: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.306110: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.306111: io_uring_submit_sqe: ring 00000000dde5f318, req 000000007bf2d030, op 6, data 0x5649dc650380, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.306112: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc650380, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.306139: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.306187: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.306217: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.306228: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.306228: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003de8bff9, op 6, data 0x5649dc6503d0, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.306229: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc6503d0, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.306245: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
        lxc-stop-12569   [010] d..1.  3691.306294: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653be0, mask c3
       lxc-start-12282   [008] ...1.  3691.306320: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result 195, cflags 2
       lxc-start-12282   [008] .....  3691.306331: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.306332: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000396c7307, op 6, data 0x5649dc650420, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] ...1.  3691.306332: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc650420, result 1, cflags 0
       lxc-start-12282   [008] .....  3691.306349: io_uring_file_get: ring 00000000dde5f318, fd 24
       lxc-start-12282   [008] .....  3691.306349: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000040fc5778, op 6, data 0x5649dc650420, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3691.306350: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [010] d..1.  3691.308239: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [008] ...1.  3691.308253: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [008] .....  3691.308263: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [008] .....  3691.308263: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000094f2006, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [008] .....  3691.308266: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308299: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3691.308311: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3691.308315: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3691.308315: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003093f3c0, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3691.308316: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308335: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.308350: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.308355: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.308355: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003093f3c0, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.308357: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308372: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.308382: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.308384: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.308384: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000094f2006, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.308385: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308687: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.308696: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.308700: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.308700: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000e51398a5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.308701: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308732: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.308741: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.308743: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.308743: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000396c7307, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.308744: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [008] d..1.  3691.308769: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.308778: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.308780: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.308780: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003de8bff9, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.308781: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.308815: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3691.308826: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3691.308831: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3691.308832: io_uring_submit_sqe: ring 00000000dde5f318, req 000000007bf2d030, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3691.308834: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309382: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3691.309393: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3691.309399: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3691.309400: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000589f1cb6, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3691.309402: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309468: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3691.309480: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3691.309485: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3691.309486: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000f9982d6a, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3691.309488: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309510: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309535: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309548: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309548: io_uring_submit_sqe: ring 00000000dde5f318, req 000000000792c8ff, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309552: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309576: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309588: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309592: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309593: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d647b16c, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309594: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309610: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309625: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309629: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309629: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c0cb7ee9, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309630: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309672: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309685: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309688: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309689: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a95bcfcd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309690: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309715: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309729: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309732: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309732: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c207c8b2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309734: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309766: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [013] ...1.  3691.309777: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [013] .....  3691.309781: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [013] .....  3691.309781: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d7704e12, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [013] .....  3691.309782: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [012] d..1.  3691.309834: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [009] ...1.  3691.309850: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [009] .....  3691.309857: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [009] .....  3691.309857: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000053af97e, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [009] .....  3691.309860: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.309953: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.309968: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.309979: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.309980: io_uring_submit_sqe: ring 00000000dde5f318, req 000000001c8655bc, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.309984: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.309997: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.310007: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.310011: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.310011: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005cf39616, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.310013: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.311711: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.311729: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.311740: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.311740: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000409fe667, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.311743: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.312191: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.312202: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.312206: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.312207: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000537f12ba, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.312208: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.313199: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.313208: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.313213: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.313213: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006e38a1a6, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.313214: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.313710: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.313716: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.313718: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.313719: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000c17842af, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.313720: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.318109: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.318134: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.318147: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.318148: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000cf57da32, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.318152: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.326519: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.326547: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.326573: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.326575: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000811155a2, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.326581: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.327615: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.327624: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.327629: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.327629: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000d51d520f, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.327631: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.329831: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.329875: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.329888: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.329888: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000efcbd90a, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.329892: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.330025: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.330063: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.330075: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.330075: io_uring_submit_sqe: ring 00000000dde5f318, req 000000003640b6fd, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.330079: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.330289: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.330329: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.330341: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.330342: io_uring_submit_sqe: ring 00000000dde5f318, req 0000000026c4fbf5, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.330345: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [000] d..1.  3691.330400: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.330444: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.330455: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.330456: io_uring_submit_sqe: ring 00000000dde5f318, req 000000005df43d91, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
  kworker/u32:11-8049    [000] d..1.  3691.330460: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.330470: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.330473: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.330474: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000b95e38f4, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.330477: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [002] d..1.  3691.342249: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.342285: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.342306: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.342307: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000eda217e8, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.342313: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [002] d..1.  3691.342349: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.342363: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.342366: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.342366: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000074f1609, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.342367: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [004] d..1.  3691.343250: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.343274: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.343279: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.343279: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006444ebae, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.343280: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [004] d..1.  3691.343418: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.343439: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.343443: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.343443: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000edd0bca1, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.343444: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [004] d..1.  3691.355842: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.355900: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.355936: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.355938: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004bd66771, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.355947: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
  kworker/u32:11-8049    [004] d..1.  3691.356078: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.356123: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.356135: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.356135: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004a8d7413, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.356138: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.358632: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.358648: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.358651: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.358652: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a3daf66d, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.358653: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.358794: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.358814: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.358818: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.358818: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006478182b, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.358819: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:0-9962    [009] d..1.  3691.358839: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.358855: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.358858: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.358859: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006478182b, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.358860: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.369632: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.369672: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.369697: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.369699: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000a3daf66d, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.369709: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.370381: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.370410: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.370416: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.370416: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004a8d7413, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.370418: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.371061: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.371070: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.371073: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.371073: io_uring_submit_sqe: ring 00000000dde5f318, req 000000004bd66771, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.371074: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.371524: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.371530: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.371532: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.371532: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000edd0bca1, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.371532: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.371556: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.371561: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.371562: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.371562: io_uring_submit_sqe: ring 00000000dde5f318, req 000000006444ebae, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.371563: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.371576: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [011] ...1.  3691.371580: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [011] .....  3691.371581: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [011] .....  3691.371581: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000074f1609, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [011] .....  3691.371582: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
   kworker/u32:1-105     [007] d..1.  3691.371987: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b90, mask 41
       lxc-start-12282   [000] ...1.  3691.372002: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result 65, cflags 0
       lxc-start-12282   [000] .....  3691.372014: io_uring_file_get: ring 00000000dde5f318, fd 53
       lxc-start-12282   [000] .....  3691.372014: io_uring_submit_sqe: ring 00000000dde5f318, req 00000000eda217e8, op 6, data 0x5649dc653b90, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [000] .....  3691.372018: io_uring_cqring_wait: ring 00000000dde5f318, min_events 1
 systemd-shutdow-12283   [004] dN.3.  3691.420670: io_uring_task_add: ring 00000000dde5f318, op 6, data 0x5649dc653b40, mask 0
       lxc-start-12282   [000] ...1.  3691.420681: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b40, result 1, cflags 2
       lxc-start-12282   [000] .....  3691.420707: io_uring_file_get: ring 000000005adcd687, fd 53
       lxc-start-12282   [000] .....  3691.420708: io_uring_submit_sqe: ring 000000005adcd687, req 00000000c15d2432, op 6, data 0x5649dc650470, flags 524288, non block 1, sq_thread 0
       lxc-start-12282   [000] .....  3691.420719: io_uring_cqring_wait: ring 000000005adcd687, min_events 1
       lxc-start-12282   [000] ...1.  3691.420761: io_uring_complete: ring 000000005adcd687, user_data 0x5649dc650470, result -125, cflags 0
       lxc-start-12282   [000] ...1.  3691.420785: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653be0, result -125, cflags 0
       lxc-start-12282   [000] ...1.  3691.420786: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b90, result -125, cflags 0
       lxc-start-12282   [000] ...1.  3691.420786: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc650420, result -125, cflags 0
       lxc-start-12282   [000] ...1.  3691.420786: io_uring_complete: ring 00000000dde5f318, user_data 0x5649dc653b40, result -125, cflags 0

[-- Attachment #3: lxc-trace-bad --]
[-- Type: text/plain, Size: 23533 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 183/183   #P:16
#
#                                _-----=> irqs-off/BH-disabled
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| / _-=> migrate-disable
#                              |||| /     delay
#           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
#              | |         |   |||||     |         |
       lxc-start-2249    [007] .....    47.766086: io_uring_create: ring 00000000ad366d59, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-2249    [007] .....    47.766128: io_uring_create: ring 00000000a3f46d45, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-2249    [007] .....    47.766143: io_uring_submit_sqe: ring 00000000ad366d59, req 00000000c89e5524, op 6, data 0x56049a674b80, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766144: io_uring_file_get: ring 00000000ad366d59, fd 7
       lxc-start-2249    [007] .....    47.766146: io_uring_submit_sqe: ring 00000000ad366d59, req 000000004114f7f5, op 6, data 0x56049a674bd0, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766147: io_uring_file_get: ring 00000000ad366d59, fd 53
       lxc-start-2249    [007] .....    47.766152: io_uring_submit_sqe: ring 00000000ad366d59, req 00000000476a2670, op 6, data 0x56049a671380, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766152: io_uring_file_get: ring 00000000ad366d59, fd 5
       lxc-start-2249    [007] .....    47.766155: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [011] d..1.    47.792070: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [007] ...1.    47.792101: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [007] .....    47.792114: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [007] d..1.    47.792800: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.792834: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.792840: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [007] d..1.    47.792933: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.792959: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.792965: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.902763: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903010: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903023: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.903040: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903049: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903051: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.903718: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903742: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903749: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904085: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904107: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904113: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904175: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904198: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904204: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904525: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904548: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904554: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904586: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904609: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904614: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904700: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904725: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904731: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.905406: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.905414: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.905421: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.906170: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.906202: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.906212: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.908746: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.908768: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.908778: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.909269: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.909291: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.909299: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.911856: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.911893: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.911909: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.912580: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.912592: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.912601: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.918821: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.918835: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.918844: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919048: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919055: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919056: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919090: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919097: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919098: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919752: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919760: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919761: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    47.921724: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.921733: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.921734: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.922163: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.922171: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.922172: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.922643: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.922654: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.922662: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928228: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928235: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928240: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928339: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928344: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928345: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928370: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928375: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928376: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.930346: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.930358: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.930363: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.930820: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.930831: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.930832: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.937915: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.937946: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.937958: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.938370: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.938399: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.938406: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.942927: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.942955: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.942966: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943029: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943057: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943063: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943247: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943310: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943317: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943323: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943335: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943337: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943389: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943418: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943424: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943452: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943479: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943485: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943656: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943684: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943691: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943709: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943724: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943727: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.943742: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943768: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943774: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.944355: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.944382: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.944389: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.944882: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.944908: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.944914: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.945357: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.945382: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.945388: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.954651: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.954665: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.954676: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.955367: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.955381: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.955392: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [000] d..1.    47.955791: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.955798: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.955800: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.956563: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.956574: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.956585: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.956808: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.956816: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.956819: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.957256: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.957265: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.957275: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.957353: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.957361: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.957362: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [004] d..1.    47.960507: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    47.960518: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    47.960529: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [000] d..1.    47.961340: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [001] ...1.    47.961355: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [001] .....    47.961360: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [014] d..1.    48.029152: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.029182: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.029196: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    48.029815: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.029828: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.029832: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    48.033023: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.033040: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.033048: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [006] d..1.    49.035170: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.035232: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    49.035261: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [006] d..1.    49.042316: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.042361: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
   kworker/u32:7-151     [004] d..1.    49.042367: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.042368: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
        lxc-stop-2534    [003] d..1.    52.782291: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a671380, mask c3
       lxc-start-2249    [005] ...1.    52.782314: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a671380, result 1, cflags 2

[-- Attachment #4: lxc-record-trace --]
[-- Type: text/plain, Size: 316 bytes --]

#!/bin/bash

sudo -v

echo 1 | sudo dd status=none of=/sys/kernel/debug/tracing/events/io_uring/enable

sudo lxc-start -n lxc-test

sleep 5

sudo lxc-stop -n lxc-test &

sleep 5

sudo cat /sys/kernel/debug/tracing/trace > ~/lxc-trace

echo 0 | sudo dd status=none of=/sys/kernel/debug/tracing/events/io_uring/enable


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-02 23:14             ` Pavel Begunkov
  2022-05-03  7:13               ` Daniel Harding
@ 2022-05-03  7:37               ` Daniel Harding
  2022-05-03 14:14                 ` Pavel Begunkov
  1 sibling, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-03  7:37 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 4271 bytes --]

[Resend with a smaller trace]

On 5/3/22 02:14, Pavel Begunkov wrote:
> On 5/2/22 19:49, Daniel Harding wrote:
>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>>>>>> very lightly modified version of Fedora's generic kernel 
>>>>>>>> config. After
>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I 
>>>>>>>> started
>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>>>>>> time, but definitely more than 50% of the time. Bisecting narrowed
>>>>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>> io_uring: poll rework. Testing indicates the problem is still 
>>>>>>>> present
>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for 
>>>>>>>> testing or
>>>>>>>> validation.  I am also happy to provide any further information 
>>>>>>>> that
>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>> significantly easier to figure out.
>>>>>>
>>>>>> I can reproduce it with just the following:
>>>>>>
>>>>>>      sudo lxc-create -n lxc-test --template download --bdev dir 
>>>>>> --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>>>>      sudo lxc-start -n lxc-test
>>>>>>      sudo lxc-stop -n lxc-test
>>>>>>
>>>>>> The lxc-stop command never exits and the container continues running.
>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>
>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>> limited amount of time to debug, hopefully Pavel has time to take 
>>>>> a look
>>>>> at this.
>>>>
>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>> kernel, to do:
>>>
>>> Same here, it doesn't reproduce for me
>> OK, sorry it wasn't something simple.
>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>
>>>> run lxc-stop
>>>>
>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>
>>>> so we can see what's going on? Looking at the source, lxc is just using
>>>> plain POLL_ADD, so I'm guessing it's not getting a notification when it
>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
>>>> from both a working and broken kernel, that might shed some light 
>>>> on it.
>> It's late in my timezone, but I'll try to work on getting those 
>> traces tomorrow.
>
> I think I got it, I've attached a trace.
>
> What's interesting is that it issues a multi shot poll but I don't
> see any kind of cancellation, neither cancel requests nor task/ring
> exit. Perhaps have to go look at lxc to see how it's supposed
> to work.

Yes, that looks exactly like my bad trace.  I've attached a good trace 
(captured with linux-5.16.19) and a bad trace (captured with 
linux-5.17.5).  These are the differences I noticed with just a visual scan:

* Both traces have three io_uring_submit_sqe calls at the very 
beginning, but in the good trace, there are further io_uring_submit_sqe 
calls throughout the trace, while in the bad trace, there are none.
* The good trace uses a mask of c3 for io_uring_task_add much more often 
than the bad trace:  the bad trace uses a mask of c3 only for the very 
last call to io_uring_task_add, but a mask of 41 for the other calls.
* In the good trace, many of the io_uring_complete calls have a result 
of 195, while in the bad trace, they all have a result of 1.

I don't know whether any of those things are significant or not, but 
that's what jumped out at me.
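[Editorial note: the connection between the masks and results is my own 
inference, not something confirmed in the thread.  The tracepoints appear 
to print mask in hex and result in decimal, so mask 41/result 65 and 
mask c3/result 195 would be the same poll bitmasks, 0x41 and 0xc3.  A 
small sketch decoding them with the standard poll(2) event bits:]

```python
# Decode the poll event bitmasks seen in the io_uring traces.
# Bit values are the standard Linux poll(2)/epoll event flags.
POLL_BITS = {
    0x001: "POLLIN",
    0x002: "POLLPRI",
    0x004: "POLLOUT",
    0x008: "POLLERR",
    0x010: "POLLHUP",
    0x020: "POLLNVAL",
    0x040: "POLLRDNORM",
    0x080: "POLLRDBAND",
}

def decode(mask: int) -> str:
    """Return a |-joined list of poll flag names set in mask."""
    names = [name for bit, name in sorted(POLL_BITS.items()) if mask & bit]
    return "|".join(names) or "0"

# "mask 41" (hex) and "result 65" (decimal) are both 0x41:
print(decode(0x41))  # POLLIN|POLLRDNORM
# "mask c3" and "result 195" are both 0xc3:
print(decode(0xC3))  # POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND
# The bad trace's "result 1":
print(decode(0x01))  # POLLIN
```

[If that reading is right, the bad trace is completing with only POLLIN 
where the good trace reported the full readiness mask.]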

I have also attached a copy of the script I used to generate the 
traces.  If there is anything further I can do to help debug, please let 
me know.
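[Editorial note: the visual scan described above can also be automated.  
A rough sketch (the function name and field parsing are illustrative, 
not from the thread) that tallies event types, io_uring_task_add masks, 
and io_uring_complete results in a captured trace, making the 
good-vs-bad differences easy to diff:]

```python
import re
from collections import Counter

def summarize(trace_text: str) -> dict:
    """Count io_uring events and collect task_add masks / complete results."""
    events, masks, results = Counter(), Counter(), Counter()
    for line in trace_text.splitlines():
        m = re.search(r'(io_uring_\w+):', line)
        if not m:
            continue
        events[m.group(1)] += 1
        if m.group(1) == "io_uring_task_add":
            mm = re.search(r'mask (\w+)', line)
            if mm:
                masks[mm.group(1)] += 1
        elif m.group(1) == "io_uring_complete":
            mr = re.search(r'result (-?\d+)', line)
            if mr:
                results[mr.group(1)] += 1
    return {"events": dict(events), "masks": dict(masks),
            "results": dict(results)}

# Usage: summarize(open("lxc-trace-good").read()) vs. the bad trace;
# comparing the two dicts surfaces the submit_sqe count, the c3-vs-41
# mask split, and the 195-vs-1 result split described above.
```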

-- 
Regards,

Daniel Harding

[-- Attachment #2: lxc-trace-good --]
[-- Type: text/plain, Size: 19774 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 145/145   #P:16
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| / _-=> migrate-disable
#                              |||| /     delay
#           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
#              | |         |   |||||     |         |
       lxc-start-5701    [008] .....   537.440889: io_uring_create: ring 00000000b31688a5, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-5701    [008] .....   537.440918: io_uring_create: ring 0000000076cc4f29, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-5701    [008] .....   537.440928: io_uring_file_get: ring 00000000b31688a5, fd 7
       lxc-start-5701    [008] .....   537.440928: io_uring_submit_sqe: ring 00000000b31688a5, req 000000002a7a691b, op 6, data 0x55dbcf28cd80, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [008] .....   537.440931: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [008] .....   537.440931: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000105632b5, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [008] .....   537.440933: io_uring_file_get: ring 00000000b31688a5, fd 5
       lxc-start-5701    [008] .....   537.440933: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000dcef163d, op 6, data 0x55dbcf28d470, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [008] .....   537.440935: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [008] d..1.   537.442660: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.442693: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.442708: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.442709: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000435a6b59, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.442713: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [008] d..1.   537.442882: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.442907: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.442914: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.442915: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000df1dac20, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.442917: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [006] d..1.   537.448788: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.448814: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.448836: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.448837: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000c3421fbe, op 6, data 0x55dbcf28d4c0, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.448838: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d4c0, result 1, cflags 0
       lxc-start-5701    [010] .....   537.448857: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [012] d..1.   537.448888: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.448908: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.448916: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.448916: io_uring_submit_sqe: ring 00000000b31688a5, req 0000000091e8a675, op 6, data 0x55dbcf28a380, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.448916: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a380, result 1, cflags 0
       lxc-start-5701    [010] .....   537.448924: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [012] d..1.   537.448965: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.448982: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.448991: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.448991: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000404191e3, op 6, data 0x55dbcf28a3d0, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.448992: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a3d0, result 1, cflags 0
       lxc-start-5701    [010] .....   537.449002: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [012] d..1.   537.449018: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.449038: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.449046: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.449046: io_uring_submit_sqe: ring 00000000b31688a5, req 000000004c87bdb8, op 6, data 0x55dbcf28a420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.449047: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a420, result 1, cflags 0
       lxc-start-5701    [010] .....   537.449057: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [012] d..1.   537.449095: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.449113: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.449120: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.449120: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000b8df304d, op 6, data 0x55dbcf28a470, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.449121: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a470, result 1, cflags 0
       lxc-start-5701    [010] .....   537.449131: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
        lxc-stop-5751    [012] d..1.   537.449207: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d470, mask c3
       lxc-start-5701    [010] ...1.   537.449223: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result 195, cflags 2
       lxc-start-5701    [010] .....   537.449231: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.449231: io_uring_submit_sqe: ring 00000000b31688a5, req 0000000028c2db2d, op 6, data 0x55dbcf28a4c0, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] ...1.   537.449232: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a4c0, result 1, cflags 0
       lxc-start-5701    [010] .....   537.449241: io_uring_file_get: ring 00000000b31688a5, fd 24
       lxc-start-5701    [010] .....   537.449241: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000a0ddd401, op 6, data 0x55dbcf28a4c0, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.449242: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.535100: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.535155: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.535170: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.535171: io_uring_submit_sqe: ring 00000000b31688a5, req 000000004b218709, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.535175: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.535942: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.535969: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.535976: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.535976: io_uring_submit_sqe: ring 00000000b31688a5, req 0000000006afc786, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-138     [012] d..1.   537.536000: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536030: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536036: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536037: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000deda76cd, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.536038: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.536129: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536155: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536161: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536162: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000d59ff773, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.536164: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.536199: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536225: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536234: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536235: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000ecd22ed2, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
   kworker/u32:3-138     [012] d..1.   537.536246: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536262: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536264: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536264: io_uring_submit_sqe: ring 00000000b31688a5, req 000000007e28f103, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.536265: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.536273: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536286: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536288: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536288: io_uring_submit_sqe: ring 00000000b31688a5, req 000000002a8f1fc3, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.536289: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.536666: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.536694: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.536701: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.536701: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000228168af, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.536703: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.537049: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.537077: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.537084: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.537084: io_uring_submit_sqe: ring 00000000b31688a5, req 000000003f0080b8, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.537087: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.537813: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [010] ...1.   537.537842: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [010] .....   537.537851: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [010] .....   537.537851: io_uring_submit_sqe: ring 00000000b31688a5, req 000000001baf187d, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [010] .....   537.537854: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.538189: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.538203: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.538208: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.538208: io_uring_submit_sqe: ring 00000000b31688a5, req 000000000fb0d9a9, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.538210: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.538614: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.538625: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.538634: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.538635: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000c9d4b035, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.538637: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.538718: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.538729: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.538736: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.538736: io_uring_submit_sqe: ring 00000000b31688a5, req 000000009f6c3d45, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.538737: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [010] d..1.   537.539270: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.539272: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.539278: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.539278: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000658d5a67, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.539279: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [012] d..1.   537.540199: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.540213: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.540221: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.540222: io_uring_submit_sqe: ring 00000000b31688a5, req 000000003ad51d84, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.540223: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
   kworker/u32:3-138     [010] d..1.   537.540415: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28d420, mask 41
       lxc-start-5701    [013] ...1.   537.540424: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result 65, cflags 0
       lxc-start-5701    [013] .....   537.540428: io_uring_file_get: ring 00000000b31688a5, fd 53
       lxc-start-5701    [013] .....   537.540428: io_uring_submit_sqe: ring 00000000b31688a5, req 00000000297b5537, op 6, data 0x55dbcf28d420, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.540430: io_uring_cqring_wait: ring 00000000b31688a5, min_events 1
 systemd-shutdow-5702    [008] dN.3.   537.572892: io_uring_task_add: ring 00000000b31688a5, op 6, data 0x55dbcf28cd80, mask 0
       lxc-start-5701    [013] ...1.   537.572909: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28cd80, result 1, cflags 2
       lxc-start-5701    [013] .....   537.572929: io_uring_file_get: ring 0000000076cc4f29, fd 53
       lxc-start-5701    [013] .....   537.572930: io_uring_submit_sqe: ring 0000000076cc4f29, req 00000000e482f889, op 6, data 0x55dbcf28a510, flags 524288, non block 1, sq_thread 0
       lxc-start-5701    [013] .....   537.572938: io_uring_cqring_wait: ring 0000000076cc4f29, min_events 1
       lxc-start-5701    [013] ...1.   537.572967: io_uring_complete: ring 0000000076cc4f29, user_data 0x55dbcf28a510, result -125, cflags 0
       lxc-start-5701    [013] ...1.   537.572984: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28a4c0, result -125, cflags 0
       lxc-start-5701    [013] ...1.   537.572984: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d470, result -125, cflags 0
       lxc-start-5701    [013] ...1.   537.572984: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28d420, result -125, cflags 0
       lxc-start-5701    [013] ...1.   537.572984: io_uring_complete: ring 00000000b31688a5, user_data 0x55dbcf28cd80, result -125, cflags 0

[-- Attachment #3: lxc-trace-bad --]
[-- Type: text/plain, Size: 23533 bytes --]

# tracer: nop
#
# entries-in-buffer/entries-written: 183/183   #P:16
#
#                                _-----=> irqs-off/BH-disabled
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| / _-=> migrate-disable
#                              |||| /     delay
#           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
#              | |         |   |||||     |         |
       lxc-start-2249    [007] .....    47.766086: io_uring_create: ring 00000000ad366d59, fd 3 sq size 512, cq size 1024, flags 0
       lxc-start-2249    [007] .....    47.766128: io_uring_create: ring 00000000a3f46d45, fd 4 sq size 512, cq size 1024, flags 0
       lxc-start-2249    [007] .....    47.766143: io_uring_submit_sqe: ring 00000000ad366d59, req 00000000c89e5524, op 6, data 0x56049a674b80, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766144: io_uring_file_get: ring 00000000ad366d59, fd 7
       lxc-start-2249    [007] .....    47.766146: io_uring_submit_sqe: ring 00000000ad366d59, req 000000004114f7f5, op 6, data 0x56049a674bd0, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766147: io_uring_file_get: ring 00000000ad366d59, fd 53
       lxc-start-2249    [007] .....    47.766152: io_uring_submit_sqe: ring 00000000ad366d59, req 00000000476a2670, op 6, data 0x56049a671380, flags 524288, non block 1, sq_thread 0
       lxc-start-2249    [007] .....    47.766152: io_uring_file_get: ring 00000000ad366d59, fd 5
       lxc-start-2249    [007] .....    47.766155: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [011] d..1.    47.792070: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [007] ...1.    47.792101: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [007] .....    47.792114: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [007] d..1.    47.792800: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.792834: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.792840: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [007] d..1.    47.792933: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.792959: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.792965: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.902763: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903010: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903023: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.903040: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903049: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903051: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.903718: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.903742: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.903749: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904085: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904107: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904113: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904175: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904198: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904204: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904525: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904548: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904554: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904586: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904609: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904614: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.904700: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.904725: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.904731: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.905406: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.905414: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.905421: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.906170: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.906202: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.906212: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.908746: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.908768: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.908778: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.909269: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.909291: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.909299: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.911856: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.911893: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.911909: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [005] d..1.    47.912580: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.912592: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.912601: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.918821: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.918835: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.918844: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919048: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919055: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919056: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919090: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919097: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919098: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.919752: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.919760: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.919761: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    47.921724: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.921733: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.921734: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.922163: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.922171: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.922172: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.922643: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.922654: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.922662: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928228: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928235: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928240: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928339: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928344: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928345: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:2-107     [012] d..1.    47.928370: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.928375: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.928376: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.930346: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.930358: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.930363: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.930820: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [014] ...1.    47.930831: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [014] .....    47.930832: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.937915: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.937946: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.937958: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.938370: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.938399: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.938406: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.942927: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.942955: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.942966: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943029: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943057: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943063: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943247: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943310: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943317: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943323: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943335: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943337: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [010] d..1.    47.943389: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943418: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943424: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943452: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943479: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943485: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943656: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943684: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943691: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [012] d..1.    47.943709: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943724: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943727: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.943742: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [006] ...1.    47.943768: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [006] .....    47.943774: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.944355: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [000] ...1.    47.944382: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [000] .....    47.944389: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.944882: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.944908: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.944914: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:6-148     [005] d..1.    47.945357: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.945382: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.945388: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [003] d..1.    47.954651: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.954665: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.954676: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.955367: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.955381: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.955392: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [000] d..1.    47.955791: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.955798: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.955800: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.956563: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.956574: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.956585: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.956808: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.956816: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.956819: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.957256: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.957265: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.957275: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [006] d..1.    47.957353: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [002] ...1.    47.957361: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [002] .....    47.957362: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:7-151     [004] d..1.    47.960507: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    47.960518: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    47.960529: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:9-1078    [000] d..1.    47.961340: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [001] ...1.    47.961355: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [001] .....    47.961360: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [014] d..1.    48.029152: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.029182: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.029196: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    48.029815: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.029828: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.029832: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
  kworker/u32:11-2181    [000] d..1.    48.033023: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    48.033040: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    48.033048: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [006] d..1.    49.035170: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.035232: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
       lxc-start-2249    [005] .....    49.035261: io_uring_cqring_wait: ring 00000000ad366d59, min_events 1
   kworker/u32:8-156     [006] d..1.    49.042316: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.042361: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
   kworker/u32:7-151     [004] d..1.    49.042367: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a674bd0, mask 41
       lxc-start-2249    [005] ...1.    49.042368: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a674bd0, result 1, cflags 2
        lxc-stop-2534    [003] d..1.    52.782291: io_uring_task_add: ring 00000000ad366d59, op 6, data 0x56049a671380, mask c3
       lxc-start-2249    [005] ...1.    52.782314: io_uring_complete: ring 00000000ad366d59, user_data 0x56049a671380, result 1, cflags 2

[-- Attachment #4: lxc-record-trace --]
[-- Type: text/plain, Size: 372 bytes --]

#!/bin/bash

# Refresh sudo credentials up front so the timed section below
# doesn't stop to prompt for a password.
sudo -v

# Clear the trace buffer, then enable all io_uring trace events.
echo 0 | sudo dd status=none of=/sys/kernel/debug/tracing/trace

echo 1 | sudo dd status=none of=/sys/kernel/debug/tracing/events/io_uring/enable

sudo lxc-start -n lxc-test

# Run lxc-stop in the background; on affected kernels it hangs.
sudo lxc-stop -n lxc-test &

# Give lxc-stop a moment to issue its io_uring requests.
sleep 1

# Save the captured trace, then disable the io_uring events again.
sudo cat /sys/kernel/debug/tracing/trace > ~/lxc-trace

echo 0 | sudo dd status=none of=/sys/kernel/debug/tracing/events/io_uring/enable

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-03  7:37               ` Daniel Harding
@ 2022-05-03 14:14                 ` Pavel Begunkov
  2022-05-04  6:54                   ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-03 14:14 UTC (permalink / raw)
  To: Daniel Harding, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

On 5/3/22 08:37, Daniel Harding wrote:
> [Resend with a smaller trace]
> 
> On 5/3/22 02:14, Pavel Begunkov wrote:
>> On 5/2/22 19:49, Daniel Harding wrote:
>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel config is a
>>>>>>>>> very lightly modified version of Fedora's generic kernel config. After
>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I started
>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% of the
>>>>>>>>> time, but definitely more than 50% of the time. Bisecting narrowed
>>>>>>>>> down the issue to commit aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still present
>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for testing or
>>>>>>>>> validation.  I am also happy to provide any further information that
>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>> significantly easier to figure out.
>>>>>>>
>>>>>>> I can reproduce it with just the following:
>>>>>>>
>>>>>>>      sudo lxc-create -n lxc-test --template download --bdev dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic -a amd64
>>>>>>>      sudo lxc-start -n lxc-test
>>>>>>>      sudo lxc-stop -n lxc-test
>>>>>>>
>>>>>>> The lxc-stop command never exits and the container continues running.
>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>
>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>> limited amount of time to debug, hopefully Pavel has time to take a look
>>>>>> at this.
>>>>>
>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>> kernel, to do:
>>>>
>>>> Same here, it doesn't reproduce for me
>>> OK, sorry it wasn't something simple.
>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>
>>>>> run lxc-stop
>>>>>
>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>
>>>>> so we can see what's going on? Looking at the source, lxc is just using
>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification when it
>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a trace
>>>>> from both a working and broken kernel, that might shed some light on it.
>>> It's late in my timezone, but I'll try to work on getting those traces tomorrow.
>>
>> I think I got it, I've attached a trace.
>>
>> What's interesting is that it issues a multi-shot poll, but I don't
>> see any kind of cancellation: neither cancel requests nor task/ring
>> exit. Perhaps I have to go look at lxc to see how it's supposed
>> to work.
> 
> Yes, that looks exactly like my bad trace.  I've attached good trace (captured with linux-5.16.19) and a bad trace (captured with linux-5.17.5).  These are the differences I noticed with just a visual scan:
> 
> * Both traces have three io_uring_submit_sqe calls at the very beginning, but in the good trace, there are further io_uring_submit_sqe calls throughout the trace, while in the bad trace, there are none.
> * The good trace uses a mask of c3 for io_uring_task_add much more often than the bad trace:  the bad trace uses a mask of c3 only for the very last call to io_uring_task_add, but a mask of 41 for the other calls.
> * In the good trace, many of the io_uring_complete calls have a result of 195, while in the bad trace, they all have a result of 1.
> 
> I don't know whether any of those things are significant or not, but that's what jumped out at me.
> 
> I have also attached a copy of the script I used to generate the traces.  If there is anything further I can to do help debug, please let me know.

Good observations! Thanks for the traces.

It sounds like multi-shot poll requests were getting downgraded
to one-shot, which is a valid behaviour and was so because we
didn't fully support some cases. If that's the reason, then
the userspace/lxc is misusing the ABI. At least, that's the
working hypothesis for now; I need to check lxc.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-03 14:14                 ` Pavel Begunkov
@ 2022-05-04  6:54                   ` Daniel Harding
  2022-05-15  8:20                     ` Thorsten Leemhuis
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-04  6:54 UTC (permalink / raw)
  To: Pavel Begunkov, Jens Axboe; +Cc: regressions, io-uring, linux-kernel

On 5/3/22 17:14, Pavel Begunkov wrote:
> On 5/3/22 08:37, Daniel Harding wrote:
>> [Resend with a smaller trace]
>>
>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel 
>>>>>>>>>> config is a
>>>>>>>>>> very lightly modified version of Fedora's generic kernel 
>>>>>>>>>> config. After
>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I 
>>>>>>>>>> started
>>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100% 
>>>>>>>>>> of the
>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting 
>>>>>>>>>> narrowed
>>>>>>>>>> down the issue to commit 
>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still 
>>>>>>>>>> present
>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for 
>>>>>>>>>> testing or
>>>>>>>>>> validation.  I am also happy to provide any further 
>>>>>>>>>> information that
>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>> significantly easier to figure out.
>>>>>>>>
>>>>>>>> I can reproduce it with just the following:
>>>>>>>>
>>>>>>>>      sudo lxc-create -n lxc-test --template download --bdev 
>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic 
>>>>>>>> -a amd64
>>>>>>>>      sudo lxc-start -n lxc-test
>>>>>>>>      sudo lxc-stop -n lxc-test
>>>>>>>>
>>>>>>>> The lxc-stop command never exits and the container continues 
>>>>>>>> running.
>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>
>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>> limited amount of time to debug, hopefully Pavel has time to 
>>>>>>> take a look
>>>>>>> at this.
>>>>>>
>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>> kernel, to do:
>>>>>
>>>>> Same here, it doesn't reproduce for me
>>>> OK, sorry it wasn't something simple.
>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>
>>>>>> run lxc-stop
>>>>>>
>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>
>>>>>> so we can see what's going on? Looking at the source, lxc is just 
>>>>>> using
>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification 
>>>>>> when it
>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a 
>>>>>> trace
>>>>>> from both a working and broken kernel, that might shed some light 
>>>>>> on it.
>>>> It's late in my timezone, but I'll try to work on getting those 
>>>> traces tomorrow.
>>>
>>> I think I got it, I've attached a trace.
>>>
>>> What's interesting is that it issues a multi-shot poll, but I don't
>>> see any kind of cancellation: neither cancel requests nor task/ring
>>> exit. Perhaps I have to go look at lxc to see how it's supposed
>>> to work.
>>
>> Yes, that looks exactly like my bad trace.  I've attached good trace 
>> (captured with linux-5.16.19) and a bad trace (captured with 
>> linux-5.17.5).  These are the differences I noticed with just a 
>> visual scan:
>>
>> * Both traces have three io_uring_submit_sqe calls at the very 
>> beginning, but in the good trace, there are further 
>> io_uring_submit_sqe calls throughout the trace, while in the bad 
>> trace, there are none.
>> * The good trace uses a mask of c3 for io_uring_task_add much more 
>> often than the bad trace:  the bad trace uses a mask of c3 only for 
>> the very last call to io_uring_task_add, but a mask of 41 for the 
>> other calls.
>> * In the good trace, many of the io_uring_complete calls have a 
>> result of 195, while in the bad trace, they all have a result of 1.
>>
>> I don't know whether any of those things are significant or not, but 
>> that's what jumped out at me.
>>
>> I have also attached a copy of the script I used to generate the 
>> traces.  If there is anything further I can do to help debug, please 
>> let me know.
>
> Good observations! Thanks for the traces.
>
> It sounds like multi-shot poll requests were getting downgraded
> to one-shot, which is a valid behaviour and was so because we
> didn't fully support some cases. If that's the reason, then
> the userspace/lxc is misusing the ABI. At least, that's the
> working hypothesis for now, need to check lxc.

So, I looked at the lxc source code, and it appears to at least try to 
handle the case of multi-shot being downgraded to one-shot.  I don't 
know enough to know if the code is actually correct, however:

https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189

https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254

https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290

-- 
Regards,

Daniel Harding

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-04  6:54                   ` Daniel Harding
@ 2022-05-15  8:20                     ` Thorsten Leemhuis
  2022-05-15 18:34                       ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Thorsten Leemhuis @ 2022-05-15  8:20 UTC (permalink / raw)
  To: Daniel Harding, Pavel Begunkov, Jens Axboe
  Cc: regressions, io-uring, linux-kernel

On 04.05.22 08:54, Daniel Harding wrote:
> On 5/3/22 17:14, Pavel Begunkov wrote:
>> On 5/3/22 08:37, Daniel Harding wrote:
>>> [Resend with a smaller trace]
>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>> config is a
>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>> config. After
>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>> started
>>>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100%
>>>>>>>>>>> of the
>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>> narrowed
>>>>>>>>>>> down the issue to commit
>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>> present
>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>> testing or
>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>> information that
>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>
>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>
>>>>>>>>>      sudo lxc-create -n lxc-test --template download --bdev
>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>> -a amd64
>>>>>>>>>      sudo lxc-start -n lxc-test
>>>>>>>>>      sudo lxc-stop -n lxc-test
>>>>>>>>>
>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>> running.
>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>
>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>> a limited amount of time to debug, hopefully Pavel has time to
>>>>>>>> take a look
>>>>>>>> at this.
>>>>>>>
>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>> kernel, to do:
>>>>>>
>>>>>> Same here, it doesn't reproduce for me
>>>>> OK, sorry it wasn't something simple.
>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>
>>>>>>> run lxc-stop
>>>>>>>
>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>
>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>> using
>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>> when it
>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>> trace
>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>> on it.
>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>> traces tomorrow.
>>>>
>>>> I think I got it, I've attached a trace.
>>>>
>>>> What's interesting is that it issues a multishot poll but I don't
>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>> exit. Perhaps I have to go look at lxc to see how it's supposed
>>>> to work
>>>
>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>> (captured with linux-5.16.19) and a bad trace (captured with
>>> linux-5.17.5).  These are the differences I noticed with just a
>>> visual scan:
>>>
>>> * Both traces have three io_uring_submit_sqe calls at the very
>>> beginning, but in the good trace, there are further
>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>> trace, there are none.
>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>> other calls.
>>> * In the good trace, many of the io_uring_complete calls have a
>>> result of 195, while in the bad trace, they all have a result of 1.
>>>
>>> I don't know whether any of those things are significant or not, but
>>> that's what jumped out at me.
>>>
>>> I have also attached a copy of the script I used to generate the
>>> traces.  If there is anything further I can do to help debug, please
>>> let me know.
>>
>> Good observations! Thanks for the traces.
>>
>> It sounds like multi-shot poll requests were getting downgraded
>> to one-shot, which is a valid behaviour and was so because we
>> didn't fully support some cases. If that's the reason, then
>> the userspace/lxc is misusing the ABI. At least, that's the
>> working hypothesis for now, need to check lxc.
> 
> So, I looked at the lxc source code, and it appears to at least try to
> handle the case of multi-shot being downgraded to one-shot.  I don't
> know enough to know if the code is actually correct, however:
> 
> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290

Hi, this is your Linux kernel regression tracker. Nothing happened here
for about ten days now afaics; or did the discussion continue
somewhere else?

From what I gathered from this discussion, it seems the root cause might
be in LXC, but it was exposed by a kernel change. That makes it still a
kernel regression that should be fixed; or is there a strong reason why
we should let this one slip?

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)

P.S.: As the Linux kernel's regression tracker I deal with a lot of
reports and sometimes miss something important when writing mails like
this. If that's the case here, don't hesitate to tell me in a public
reply, it's in everyone's interest to set the public record straight.

#regzbot poke


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-15  8:20                     ` Thorsten Leemhuis
@ 2022-05-15 18:34                       ` Daniel Harding
  2022-05-16 12:12                         ` Pavel Begunkov
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-15 18:34 UTC (permalink / raw)
  To: Thorsten Leemhuis, Pavel Begunkov, Jens Axboe
  Cc: regressions, io-uring, linux-kernel

On 5/15/22 11:20, Thorsten Leemhuis wrote:
> On 04.05.22 08:54, Daniel Harding wrote:
>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>> [Resend with a smaller trace]
>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>> config is a
>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>> config. After
>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>>> started
>>>>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100%
>>>>>>>>>>>> of the
>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>> narrowed
>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>>> present
>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>> testing or
>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>> information that
>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>
>>>>>>>>>>       sudo lxc-create -n lxc-test --template download --bdev
>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>> -a amd64
>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>
>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>> running.
>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>>> a limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>> take a look
>>>>>>>>> at this.
>>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>>> kernel, to do:
>>>>>>> Same here, it doesn't reproduce for me
>>>>>> OK, sorry it wasn't something simple.
>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>> run lxc-stop
>>>>>>>>
>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>
>>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>>> using
>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>> when it
>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>> trace
>>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>>> on it.
>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>> traces tomorrow.
>>>>> I think I got it, I've attached a trace.
>>>>>
>>>>> What's interesting is that it issues a multishot poll but I don't
>>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>>> exit. Perhaps I have to go look at lxc to see how it's supposed
>>>>> to work
>>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>> visual scan:
>>>>
>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>> beginning, but in the good trace, there are further
>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>> trace, there are none.
>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>> other calls.
>>>> * In the good trace, many of the io_uring_complete calls have a
>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>
>>>> I don't know whether any of those things are significant or not, but
>>>> that's what jumped out at me.
>>>>
>>>> I have also attached a copy of the script I used to generate the
>>>> traces.  If there is anything further I can do to help debug, please
>>>> let me know.
>>> Good observations! Thanks for the traces.
>>>
>>> It sounds like multi-shot poll requests were getting downgraded
>>> to one-shot, which is a valid behaviour and was so because we
>>> didn't fully support some cases. If that's the reason, then
>>> the userspace/lxc is misusing the ABI. At least, that's the
>>> working hypothesis for now, need to check lxc.
>> So, I looked at the lxc source code, and it appears to at least try to
>> handle the case of multi-shot being downgraded to one-shot.  I don't
>> know enough to know if the code is actually correct, however:
>>
>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
> Hi, this is your Linux kernel regression tracker. Nothing happened here
> for about ten days now afaics; or did the discussion continue
> somewhere else?
>
>  From what I gathered from this discussion, it seems the root cause might
> be in LXC, but it was exposed by a kernel change. That makes it still a
> kernel regression that should be fixed; or is there a strong reason why
> we should let this one slip?

No, there hasn't been any discussion since the email you replied to.  
I've done a bit more testing on my end, but without anything 
conclusive.  The one thing I can say is that my testing shows that LXC 
does correctly handle multi-shot poll requests which were being 
downgraded to one-shot in 5.16.x kernels, which I think invalidates 
Pavel's theory.  In 5.17.x kernels, those same poll requests are no 
longer being downgraded to one-shot requests, and thus under 5.17.x LXC 
is no longer re-arming those poll requests (but also shouldn't need to, 
according to what is being returned by the kernel).  I don't know if 
this change in kernel behavior is related to the hang, or if it is just 
a side effect of other io-uring changes that made it into 5.17.  Nothing 
in LXC's usage of io-uring seems obviously incorrect to me, but I am 
far from an expert.  I also did some work toward creating a simpler 
reproducer, without success (I was able to get a simple program using 
io-uring running, but never could get it to hang).  ISTM that this is 
still a kernel regression, unless someone can point out a definite fault 
in the way LXC is using io-uring.

-- 
Regards,

Daniel Harding


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-15 18:34                       ` Daniel Harding
@ 2022-05-16 12:12                         ` Pavel Begunkov
  2022-05-16 13:25                           ` Pavel Begunkov
  0 siblings, 1 reply; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-16 12:12 UTC (permalink / raw)
  To: Daniel Harding, Thorsten Leemhuis, Jens Axboe
  Cc: regressions, io-uring, linux-kernel

On 5/15/22 19:34, Daniel Harding wrote:
> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>> On 04.05.22 08:54, Daniel Harding wrote:
>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>> [Resend with a smaller trace]
>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>> config is a
>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>> config. After
>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>>>> started
>>>>>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100%
>>>>>>>>>>>>> of the
>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>>>> present
>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>>> testing or
>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>> information that
>>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>
>>>>>>>>>>>       sudo lxc-create -n lxc-test --template download --bdev
>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>>> -a amd64
>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>
>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>> running.
>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>>>> a limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>> take a look
>>>>>>>>>> at this.
>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>>>> kernel, to do:
>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>> run lxc-stop
>>>>>>>>>
>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>
>>>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>>>> using
>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>>> when it
>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>>> trace
>>>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>>>> on it.
>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>> traces tomorrow.
>>>>>> I think I got it, I've attached a trace.
>>>>>>
>>>>>> What's interesting is that it issues a multishot poll but I don't
>>>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>>>> exit. Perhaps I have to go look at lxc to see how it's supposed
>>>>>> to work
>>>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>> visual scan:
>>>>>
>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>> beginning, but in the good trace, there are further
>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>> trace, there are none.
>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>> other calls.
>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>>
>>>>> I don't know whether any of those things are significant or not, but
>>>>> that's what jumped out at me.
>>>>>
>>>>> I have also attached a copy of the script I used to generate the
>>>>> traces.  If there is anything further I can do to help debug, please
>>>>> let me know.
>>>> Good observations! Thanks for the traces.
>>>>
>>>> It sounds like multi-shot poll requests were getting downgraded
>>>> to one-shot, which is a valid behaviour and was so because we
>>>> didn't fully support some cases. If that's the reason, then
>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>> working hypothesis for now, need to check lxc.
>>> So, I looked at the lxc source code, and it appears to at least try to
>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>> know enough to know if the code is actually correct however:
>>>
>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
>> Hi, this is your Linux kernel regression tracker. Nothing happened here
>> for about ten days now afaics; or did the discussion continue
>> somewhere else?
>>
>>  From what I gathered from this discussion, it seems the root cause might
>> be in LXC, but it was exposed by a kernel change. That makes it still a
>> kernel regression that should be fixed; or is there a strong reason why
>> we should let this one slip?
> 
> No, there hasn't been any discussion since the email you replied to. I've done a bit more testing on my end, but without anything conclusive.  The one thing I can say is that my testing shows that LXC does correctly handle multi-shot poll requests which were being downgraded to one-shot in 5.16.x kernels, which I think invalidates Pavel's theory.  In 5.17.x kernels, those same poll requests are no longer being downgraded to one-shot requests, and thus under 5.17.x LXC is no longer re-arming those poll requests (but also shouldn't need to, according to what is being returned by the kernel).  I don't know if this change in kernel behavior is related to the hang, or if it is just a side effect of other io-uring changes that made it into 5.17.  Nothing in the LXC's usage of io-uring seems obviously incorrect to me, but I am far from an expert.  I also did some work toward creating a simpler reproducer, without success (I was able to get a simple program using io-uring running, 
> but never could get it to hang).  ISTM that this is still a kernel regression, unless someone can point out a definite fault in the way LXC is using io-uring.

Haven't had time to debug it. Apparently LXC is stuck on a
read(2) of the terminal fd. Not yet clear what the reason is.

-- 
Pavel Begunkov


* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 12:12                         ` Pavel Begunkov
@ 2022-05-16 13:25                           ` Pavel Begunkov
  2022-05-16 13:57                             ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-16 13:25 UTC (permalink / raw)
  To: Daniel Harding, Thorsten Leemhuis, Jens Axboe
  Cc: regressions, io-uring, linux-kernel, Christian Brauner

On 5/16/22 13:12, Pavel Begunkov wrote:
> On 5/15/22 19:34, Daniel Harding wrote:
>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>> [Resend with a smaller trace]
>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>>>>> started
>>>>>>>>>>>>>> noticing frequent hangs in lxc-stop.  It doesn't happen 100%
>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>>>>> present
>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>
>>>>>>>>>>>>       sudo lxc-create -n lxc-test --template download --bdev
>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>>
>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>> running.
>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>>>>> a limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>> take a look
>>>>>>>>>>> at this.
>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>>>>> kernel, to do:
>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>> run lxc-stop
>>>>>>>>>>
>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>
>>>>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>>>>> using
>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>>>> when it
>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>>>> trace
>>>>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>>>>> on it.
>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>> traces tomorrow.
>>>>>>> I think I got it, I've attached a trace.
>>>>>>>
>>>>>>> What's interesting is that it issues a multishot poll but I don't
>>>>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>>>>> exit. Perhaps I have to go look at lxc to see how it's supposed
>>>>>>> to work
>>>>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>> visual scan:
>>>>>>
>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>> beginning, but in the good trace, there are further
>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>> trace, there are none.
>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>> other calls.
>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>>>
>>>>>> I don't know whether any of those things are significant or not, but
>>>>>> that's what jumped out at me.
>>>>>>
>>>>>> I have also attached a copy of the script I used to generate the
>>>>>> traces.  If there is anything further I can do to help debug, please
>>>>>> let me know.
>>>>> Good observations! Thanks for the traces.
>>>>>
>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>> didn't fully support some cases. If that's the reason, then
>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>> working hypothesis for now, need to check lxc.
>>>> So, I looked at the lxc source code, and it appears to at least try to
>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>> know enough to know if the code is actually correct, however:
>>>>
>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
>>> Hi, this is your Linux kernel regression tracker. Nothing happened here
>>> for about ten days now afaics; or did the discussion continue
>>> somewhere else?
>>>
>>>  From what I gathered from this discussion, it seems the root cause might
>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>> kernel regression that should be fixed; or is there a strong reason why
>>> we should let this one slip?
>>
>> No, there hasn't been any discussion since the email you replied to. I've done a bit more testing on my end, but without anything conclusive.  The one thing I can say is that my testing shows that LXC does correctly handle multi-shot poll requests which were being downgraded to one-shot in 5.16.x kernels, which I think invalidates Pavel's theory.  In 5.17.x kernels, those same poll requests are no longer being downgraded to one-shot requests, and thus under 5.17.x LXC is no longer re-arming those poll requests (but also shouldn't need to, according to what is being returned by the kernel).  I don't know if this change in kernel behavior is related to the hang, or if it is just a side effect of other io-uring changes that made it into 5.17.  Nothing in the LXC's usage of io-uring seems obviously incorrect to me, but I am far from an expert.  I also did some work toward creating a simpler reproducer, without success (I was able to get a simple program using io-uring running, 
>> but never could get it to hang).  ISTM that this is still a kernel regression, unless someone can point out a definite fault in the way LXC is using io-uring.
> 
> Haven't had time to debug it. Apparently LXC is stuck on
> read(2) terminal fd. Not yet clear what is the reason.

How it was with oneshots:

1: kernel: poll fires, add a CQE
2: kernel: remove poll
3: userspace: get CQE
4: userspace: read(terminal_fd);
5: userspace: add new poll
6: goto 1)

What might happen and actually happens with multishot:

1: kernel: poll fires, add CQE1
2: kernel: poll fires again, add CQE2
3: userspace: get CQE1
4: userspace: read(terminal_fd); // reads all data, for both CQE1 and CQE2
5: userspace: get CQE2
6: userspace: read(terminal_fd); // nothing to read, hangs here

It should be the read in lxc_terminal_ptx_io().

IMHO, it's not a regression but rather an imperfect feature API and/or
an API misuse.

Cc: Christian Brauner

Christian, in case you may have some input on the LXC side of things.
Daniel reported an LXC problem when it uses io_uring multishot poll requests.
Before aa43477b04025 ("io_uring: poll rework"), multishot poll requests for
tty/pty and some other files were always downgraded to oneshots; that was
fixed by the commit, which exposed the problem. I hope the example above
explains it, but please let me know if it needs more details.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 13:25                           ` Pavel Begunkov
@ 2022-05-16 13:57                             ` Daniel Harding
  2022-05-16 15:13                               ` Daniel Harding
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Harding @ 2022-05-16 13:57 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Thorsten Leemhuis,
	Jens Axboe, Christian Brauner

On 5/16/22 16:25, Pavel Begunkov wrote:
> On 5/16/22 13:12, Pavel Begunkov wrote:
>> On 5/15/22 19:34, Daniel Harding wrote:
>>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>>> [Resend with a smaller trace]
>>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel 
>>>>>>>>>>>>>>> series, I
>>>>>>>>>>>>>>> started
>>>>>>>>>>>>>>> noticed frequent hangs in lxc-stop. It doesn't happen 100%
>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is 
>>>>>>>>>>>>>>> still
>>>>>>>>>>>>>>> present
>>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise 
>>>>>>>>>>>>>>> with the
>>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the 
>>>>>>>>>>>>>>> problem
>>>>>>>>>>>>>>> further on my own, but I can easily apply patches to any 
>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would 
>>>>>>>>>>>>>> make it
>>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>>
>>>>>>>>>>>>>       sudo lxc-create --n lxc-test --template download --bdev
>>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>>>
>>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>>> running.
>>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and 
>>>>>>>>>>>> hence have
>>>>>>>>>>>> limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>>> take a look
>>>>>>>>>>>> at this.
>>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good 
>>>>>>>>>>> and bad
>>>>>>>>>>> kernel, to do:
>>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>>> run lxc-stop
>>>>>>>>>>>
>>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>>
>>>>>>>>>>> so we can see what's going on? Looking at the source, lxc is 
>>>>>>>>>>> just
>>>>>>>>>>> using
>>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>>>>> when it
>>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>>>>> trace
>>>>>>>>>>> from both a working and broken kernel, that might shed some 
>>>>>>>>>>> light
>>>>>>>>>>> on it.
>>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>>> traces tomorrow.
>>>>>>>> I think I got it, I've attached a trace.
>>>>>>>>
>>>>>>>> What's interesting is that it issues a multi shot poll but I don't
>>>>>>>> see any kind of cancellation, neither cancel requests nor 
>>>>>>>> task/ring
>>>>>>>> exit. Perhaps have to go look at lxc to see how it's supposed
>>>>>>>> to work
>>>>>>> Yes, that looks exactly like my bad trace.  I've attached good 
>>>>>>> trace
>>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>>> visual scan:
>>>>>>>
>>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>>> beginning, but in the good trace, there are further
>>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>>> trace, there are none.
>>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>>> other calls.
>>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>>>>
>>>>>>> I don't know whether any of those things are significant or not, 
>>>>>>> but
>>>>>>> that's what jumped out at me.
>>>>>>>
>>>>>>> I have also attached a copy of the script I used to generate the
>>>>>>> traces.  If there is anything further I can to do help debug, 
>>>>>>> please
>>>>>>> let me know.
>>>>>> Good observations! thanks for traces.
>>>>>>
>>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>>> didn't fully support some cases. If that's the reason, then
>>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>>> working hypothesis for now, need to check lxc.
>>>>> So, I looked at the lxc source code, and it appears to at least 
>>>>> try to
>>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>>> know enough to know if the code is actually correct however:
>>>>>
>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189 
>>>>>
>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254 
>>>>>
>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290 
>>>>>
>>>> Hi, this is your Linux kernel regression tracker. Nothing happened 
>>>> here
>>>> for round about ten days now afaics; or did the discussion continue
>>>> somewhere else.
>>>>
>>>>  From what I gathered from this discussion it seems the root cause
>>>> might
>>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>>> kernel regression that should be fixed; or is there a strong reason
>>>> why
>>>> we should let this one slip?
>>>
>>> No, there hasn't been any discussion since the email you replied to. 
>>> I've done a bit more testing on my end, but without anything 
>>> conclusive.  The one thing I can say is that my testing shows that 
>>> LXC does correctly handle multi-shot poll requests which were being 
>>> downgraded to one-shot in 5.16.x kernels, which I think invalidates 
>>> Pavel's theory.  In 5.17.x kernels, those same poll requests are no 
>>> longer being downgraded to one-shot requests, and thus under 5.17.x 
>>> LXC is no longer re-arming those poll requests (but also shouldn't 
>>> need to, according to what is being returned by the kernel). I don't 
>>> know if this change in kernel behavior is related to the hang, or if 
>>> it is just a side effect of other io-uring changes that made it into 
>>> 5.17.  Nothing in the LXC's usage of io-uring seems obviously 
>>> incorrect to me, but I am far from an expert.  I also did some work 
>>> toward creating a simpler reproducer, without success (I was able to 
>>> get a simple program using io-uring running, but never could get it 
>>> to hang).  ISTM that this is still a kernel regression, unless 
>>> someone can point out a definite fault in the way LXC is using 
>>> io-uring.
>>
>> Haven't had time to debug it. Apparently LXC is stuck on
>> read(2) terminal fd. Not yet clear what is the reason.
>
> How it was with oneshots:
>
> 1: kernel: poll fires, add a CQE
> 2: kernel: remove poll
> 3: userspace: get CQE
> 4: userspace: read(terminal_fd);
> 5: userspace: add new poll
> 6: goto 1)
>
> What might happen and actually happens with multishot:
>
> 1: kernel: poll fires, add CQE1
> 2: kernel: poll fires again, add CQE2
> 3: userspace: get CQE1
> 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and 
> CQE2
> 5: userspace: get CQE2
> 6: userspace: read(terminal_fd); // nothing to read, hangs here
>
> It should be the read in lxc_terminal_ptx_io().
>
> IMHO, it's not a regression but rather an imperfect feature API and/or
> an API misuse.
>
> Cc: Christian Brauner
>
> Christian, in case you may have some input on the LXC side of things.
> Daniel reported an LXC problem when it uses io_uring multishot poll 
> requests.
> Before aa43477b04025 ("io_uring: poll rework"), multishot poll
> requests for
> tty/pty and some other files were always downgraded to oneshots; that
> was
> fixed by the commit, which exposed the problem. I hope the example
> above
> explains it, but please let me know if it needs more details.

Pavel, I had actually just started a draft email with the same theory 
(although you stated it much more clearly than I could have).  I'm 
working on debugging the LXC side, but I'm pretty sure the issue is due 
to LXC using blocking reads and getting stuck exactly as you describe.  
If I can confirm this, I'll go ahead and mark this regression as invalid 
and file an issue with LXC.  Thanks for your help and patience.

-- 
Regards,

Daniel Harding


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 13:57                             ` Daniel Harding
@ 2022-05-16 15:13                               ` Daniel Harding
  2022-05-16 18:13                                 ` Pavel Begunkov
  2022-05-16 18:17                                 ` Thorsten Leemhuis
  0 siblings, 2 replies; 27+ messages in thread
From: Daniel Harding @ 2022-05-16 15:13 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Thorsten Leemhuis,
	Jens Axboe, Christian Brauner

[-- Attachment #1: Type: text/plain, Size: 10402 bytes --]

On 5/16/22 16:57, Daniel Harding wrote:
> On 5/16/22 16:25, Pavel Begunkov wrote:
>> On 5/16/22 13:12, Pavel Begunkov wrote:
>>> On 5/15/22 19:34, Daniel Harding wrote:
>>>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>>>> [Resend with a smaller trace]
>>>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel 
>>>>>>>>>>>>>>>> series, I
>>>>>>>>>>>>>>>> started
>>>>>>>>>>>>>>>> noticed frequent hangs in lxc-stop. It doesn't happen 100%
>>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is 
>>>>>>>>>>>>>>>> still
>>>>>>>>>>>>>>>> present
>>>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise 
>>>>>>>>>>>>>>>> with the
>>>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the 
>>>>>>>>>>>>>>>> problem
>>>>>>>>>>>>>>>> further on my own, but I can easily apply patches to 
>>>>>>>>>>>>>>>> any of the
>>>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild 
>>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>>>> would be helpful with reproducing or debugging the 
>>>>>>>>>>>>>>>> problem.
>>>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would 
>>>>>>>>>>>>>>> make it
>>>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>       sudo lxc-create --n lxc-test --template download 
>>>>>>>>>>>>>> --bdev
>>>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r 
>>>>>>>>>>>>>> bionic
>>>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>>>> running.
>>>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and 
>>>>>>>>>>>>> hence have
>>>>>>>>>>>>> limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>>>> take a look
>>>>>>>>>>>>> at this.
>>>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good 
>>>>>>>>>>>> and bad
>>>>>>>>>>>> kernel, to do:
>>>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>>>> run lxc-stop
>>>>>>>>>>>>
>>>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>>>
>>>>>>>>>>>> so we can see what's going on? Looking at the source, lxc 
>>>>>>>>>>>> is just
>>>>>>>>>>>> using
>>>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a 
>>>>>>>>>>>> notification
>>>>>>>>>>>> when it
>>>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we 
>>>>>>>>>>>> have a
>>>>>>>>>>>> trace
>>>>>>>>>>>> from both a working and broken kernel, that might shed some 
>>>>>>>>>>>> light
>>>>>>>>>>>> on it.
>>>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>>>> traces tomorrow.
>>>>>>>>> I think I got it, I've attached a trace.
>>>>>>>>>
>>>>>>>>> What's interesting is that it issues a multi shot poll but I 
>>>>>>>>> don't
>>>>>>>>> see any kind of cancellation, neither cancel requests nor 
>>>>>>>>> task/ring
>>>>>>>>> exit. Perhaps have to go look at lxc to see how it's supposed
>>>>>>>>> to work
>>>>>>>> Yes, that looks exactly like my bad trace.  I've attached good 
>>>>>>>> trace
>>>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>>>> visual scan:
>>>>>>>>
>>>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>>>> beginning, but in the good trace, there are further
>>>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>>>> trace, there are none.
>>>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only 
>>>>>>>> for
>>>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>>>> other calls.
>>>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>>>> result of 195, while in the bad trace, they all have a result 
>>>>>>>> of 1.
>>>>>>>>
>>>>>>>> I don't know whether any of those things are significant or 
>>>>>>>> not, but
>>>>>>>> that's what jumped out at me.
>>>>>>>>
>>>>>>>> I have also attached a copy of the script I used to generate the
>>>>>>>> traces.  If there is anything further I can to do help debug, 
>>>>>>>> please
>>>>>>>> let me know.
>>>>>>> Good observations! thanks for traces.
>>>>>>>
>>>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>>>> didn't fully support some cases. If that's the reason, then
>>>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>>>> working hypothesis for now, need to check lxc.
>>>>>> So, I looked at the lxc source code, and it appears to at least 
>>>>>> try to
>>>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>>>> know enough to know if the code is actually correct however:
>>>>>>
>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189 
>>>>>>
>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254 
>>>>>>
>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290 
>>>>>>
>>>>> Hi, this is your Linux kernel regression tracker. Nothing happened 
>>>>> here
>>>>> for round about ten days now afaics; or did the discussion continue
>>>>> somewhere else.
>>>>>
>>>>>  From what I gathered from this discussion it seems the root cause
>>>>> might
>>>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>>>> kernel regression that should be fixed; or is there a strong
>>>>> reason why
>>>>> we should let this one slip?
>>>>
>>>> No, there hasn't been any discussion since the email you replied 
>>>> to. I've done a bit more testing on my end, but without anything 
>>>> conclusive.  The one thing I can say is that my testing shows that 
>>>> LXC does correctly handle multi-shot poll requests which were being 
>>>> downgraded to one-shot in 5.16.x kernels, which I think invalidates 
>>>> Pavel's theory.  In 5.17.x kernels, those same poll requests are no 
>>>> longer being downgraded to one-shot requests, and thus under 5.17.x 
>>>> LXC is no longer re-arming those poll requests (but also shouldn't 
>>>> need to, according to what is being returned by the kernel). I 
>>>> don't know if this change in kernel behavior is related to the 
>>>> hang, or if it is just a side effect of other io-uring changes that 
>>>> made it into 5.17.  Nothing in the LXC's usage of io-uring seems 
>>>> obviously incorrect to me, but I am far from an expert.  I also did 
>>>> some work toward creating a simpler reproducer, without success (I 
>>>> was able to get a simple program using io-uring running, but never 
>>>> could get it to hang).  ISTM that this is still a kernel 
>>>> regression, unless someone can point out a definite fault in the 
>>>> way LXC is using io-uring.
>>>
>>> Haven't had time to debug it. Apparently LXC is stuck on
>>> read(2) terminal fd. Not yet clear what is the reason.
>>
>> How it was with oneshots:
>>
>> 1: kernel: poll fires, add a CQE
>> 2: kernel: remove poll
>> 3: userspace: get CQE
>> 4: userspace: read(terminal_fd);
>> 5: userspace: add new poll
>> 6: goto 1)
>>
>> What might happen and actually happens with multishot:
>>
>> 1: kernel: poll fires, add CQE1
>> 2: kernel: poll fires again, add CQE2
>> 3: userspace: get CQE1
>> 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and 
>> CQE2
>> 5: userspace: get CQE2
>> 6: userspace: read(terminal_fd); // nothing to read, hangs here
>>
>> It should be the read in lxc_terminal_ptx_io().
>>
>> IMHO, it's not a regression but rather an imperfect feature API and/or
>> an API misuse.
>>
>> Cc: Christian Brauner
>>
>> Christian, in case you may have some input on the LXC side of things.
>> Daniel reported an LXC problem when it uses io_uring multishot poll 
>> requests.
>> Before aa43477b04025 ("io_uring: poll rework"), multishot poll
>> requests for
>> tty/pty and some other files were always downgraded to oneshots; that
>> was
>> fixed by the commit, which exposed the problem. I hope the example
>> above
>> explains it, but please let me know if it needs more details.
>
> Pavel, I had actually just started a draft email with the same theory 
> (although you stated it much more clearly than I could have).  I'm 
> working on debugging the LXC side, but I'm pretty sure the issue is 
> due to LXC using blocking reads and getting stuck exactly as you 
> describe.  If I can confirm this, I'll go ahead and mark this 
> regression as invalid and file an issue with LXC. Thanks for your help 
> and patience.

Yes, it does appear that was the problem.  The attached POC patch against
LXC fixes the hang.  The kernel is working as intended.

#regzbot invalid:  userspace programming error

-- 
Regards,

Daniel Harding

[-- Attachment #2: lxc.patch --]
[-- Type: text/x-patch, Size: 1102 bytes --]

diff --git a/src/lxc/terminal.c b/src/lxc/terminal.c
index c5bf8cdfe..5eee50625 100644
--- a/src/lxc/terminal.c
+++ b/src/lxc/terminal.c
@@ -334,7 +334,10 @@ static int lxc_terminal_ptx_io(struct lxc_terminal *terminal)
 
 	w = r = lxc_read_nointr(terminal->ptx, buf, sizeof(buf));
-	if (r <= 0)
+	if (r <= 0) {
+		if (r < 0 && errno == EWOULDBLOCK)
+			return 0;
 		return -1;
+	}
 
 	w_rbuf = w_log = 0;
 	/* write to peer first */
@@ -444,13 +447,21 @@ static int lxc_terminal_mainloop_add_peer(struct lxc_terminal *terminal)
 int lxc_terminal_mainloop_add(struct lxc_async_descr *descr,
 			      struct lxc_terminal *terminal)
 {
-	int ret;
+	int flags, ret;
 
 	if (terminal->ptx < 0) {
 		INFO("Terminal is not initialized");
 		return 0;
 	}
 
+	flags = fcntl(terminal->ptx, F_GETFL);
+	flags |= O_NONBLOCK;
+	ret = fcntl(terminal->ptx, F_SETFL, flags);
+	if (ret < 0) {
+		ERROR("Failed to set O_NONBLOCK for terminal ptx fd %d", terminal->ptx);
+		return -1;
+	}
+
 	ret = lxc_mainloop_add_handler(descr, terminal->ptx,
 				       lxc_terminal_ptx_io_handler,
 				       default_cleanup_handler,

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 15:13                               ` Daniel Harding
@ 2022-05-16 18:13                                 ` Pavel Begunkov
  2022-05-17  8:19                                   ` Christian Brauner
  2022-05-16 18:17                                 ` Thorsten Leemhuis
  1 sibling, 1 reply; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-16 18:13 UTC (permalink / raw)
  To: Daniel Harding
  Cc: regressions, io-uring, linux-kernel, Thorsten Leemhuis,
	Jens Axboe, Christian Brauner

On 5/16/22 16:13, Daniel Harding wrote:
> On 5/16/22 16:57, Daniel Harding wrote:
>> On 5/16/22 16:25, Pavel Begunkov wrote:
>>> On 5/16/22 13:12, Pavel Begunkov wrote:
>>>> On 5/15/22 19:34, Daniel Harding wrote:
>>>>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>>>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>>>>> [Resend with a smaller trace]
>>>>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>>>>>>>> started
>>>>>>>>>>>>>>>>> noticed frequent hangs in lxc-stop. It doesn't happen 100%
>>>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>>>>>>>> present
>>>>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>       sudo lxc-create --n lxc-test --template download --bdev
>>>>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>>>>> running.
>>>>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>>>>>>>> limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>>>>> take a look
>>>>>>>>>>>>>> at this.
>>>>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>>>>>>>> kernel, to do:
>>>>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>>>>> run lxc-stop
>>>>>>>>>>>>>
>>>>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>>>>
>>>>>>>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>>>>>>>> using
>>>>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>>>>>>> when it
>>>>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>>>>>>> trace
>>>>>>>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>>>>>>>> on it.
>>>>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>>>>> traces tomorrow.
>>>>>>>>>> I think I got it, I've attached a trace.
>>>>>>>>>>
>>>>>>>>>> What's interesting is that it issues a multi shot poll but I don't
>>>>>>>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>>>>>>>> exit. Perhaps have to go look at lxc to see how it's supposed
>>>>>>>>>> to work
>>>>>>>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>>>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>>>>> visual scan:
>>>>>>>>>
>>>>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>>>>> beginning, but in the good trace, there are further
>>>>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>>>>> trace, there are none.
>>>>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>>>>> other calls.
>>>>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>>>>>>
>>>>>>>>> I don't know whether any of those things are significant or not, but
>>>>>>>>> that's what jumped out at me.
>>>>>>>>>
>>>>>>>>> I have also attached a copy of the script I used to generate the
>>>>>>>>> traces.  If there is anything further I can to do help debug, please
>>>>>>>>> let me know.
>>>>>>>> Good observations! thanks for traces.
>>>>>>>>
>>>>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>>>>> didn't fully support some cases. If that's the reason, then
>>>>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>>>>> working hypothesis for now, need to check lxc.
>>>>>>> So, I looked at the lxc source code, and it appears to at least try to
>>>>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>>>>> know enough to know if the code is actually correct however:
>>>>>>>
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
>>>>>> Hi, this is your Linux kernel regression tracker. Nothing happened here
>>>>>> for round about ten days now afaics; or did the discussion continue
>>>>>> somewhere else.
>>>>>>
>>>>>>  From what I gathered from this discussion it seems the root cause might
>>>>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>>>>> kernel regression that should be fixed; or is there a strong reason why
>>>>>> we should let this one slip?
>>>>>
>>>>> No, there hasn't been any discussion since the email you replied to. I've done a bit more testing on my end, but without anything conclusive.  The one thing I can say is that my testing shows that LXC does correctly handle multi-shot poll requests which were being downgraded to one-shot in 5.16.x kernels, which I think invalidates Pavel's theory.  In 5.17.x kernels, those same poll requests are no longer being downgraded to one-shot requests, and thus under 5.17.x LXC is no longer re-arming those poll requests (but also shouldn't need to, according to what is being returned by the kernel). I don't know if this change in kernel behavior is related to the hang, or if it is just a side effect of other io-uring changes that made it into 5.17.  Nothing in the LXC's usage of io-uring seems obviously incorrect to me, but I am far from an expert.  I also did some work toward creating a simpler reproducer, without success (I was able to get a simple program using io-uring 
>>>>> running, but never could get it to hang).  ISTM that this is still a kernel regression, unless someone can point out a definite fault in the way LXC is using io-uring.
>>>>
>>>> Haven't had time to debug it. Apparently LXC is stuck on
>>>> read(2) terminal fd. Not yet clear what is the reason.
>>>
>>> How it was with oneshots:
>>>
>>> 1: kernel: poll fires, add a CQE
>>> 2: kernel: remove poll
>>> 3: userspace: get CQE
>>> 4: userspace: read(terminal_fd);
>>> 5: userspace: add new poll
>>> 6: goto 1)
>>>
>>> What might happen and actually happens with multishot:
>>>
>>> 1: kernel: poll fires, add CQE1
>>> 2: kernel: poll fires again, add CQE2
>>> 3: userspace: get CQE1
>>> 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and CQE2
>>> 5: userspace: get CQE2
>>> 6: userspace: read(terminal_fd); // nothing to read, hangs here
>>>
>>> It should be the read in lxc_terminal_ptx_io().
>>>
>>> IMHO, it's not a regression but an imperfect feature API and/or
>>> an API misuse.
>>>
>>> Cc: Christian Brauner
>>>
>>> Christian, in case you may have some input on the LXC side of things.
>>> Daniel reported an LXC problem when it uses io_uring multishot poll requests.
>>> Before aa43477b04025 ("io_uring: poll rework"), multishot poll requests for
>>> tty/pty and some other files were always downgraded to oneshots, which had
>>> been fixed by the commit and exposed the problem. I hope the example above
>>> explains it, but please let me know if it needs more details
>>
>> Pavel, I had actually just started a draft email with the same theory (although you stated it much more clearly than I could have).  I'm working on debugging the LXC side, but I'm pretty sure the issue is due to LXC using blocking reads and getting stuck exactly as you describe.  If I can confirm this, I'll go ahead and mark this regression as invalid and file an issue with LXC. Thanks for your help and patience.
> 
> Yes, it does appear that was the problem.  The attached POC patch against LXC fixes the hang.  The kernel is working as intended.

Daniel, that's great, thanks for confirming!

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 15:13                               ` Daniel Harding
  2022-05-16 18:13                                 ` Pavel Begunkov
@ 2022-05-16 18:17                                 ` Thorsten Leemhuis
  2022-05-16 18:22                                   ` Jens Axboe
  1 sibling, 1 reply; 27+ messages in thread
From: Thorsten Leemhuis @ 2022-05-16 18:17 UTC (permalink / raw)
  To: Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Jens Axboe, Christian Brauner



On 16.05.22 17:13, Daniel Harding wrote:
> On 5/16/22 16:57, Daniel Harding wrote:
>> On 5/16/22 16:25, Pavel Begunkov wrote:
>>> On 5/16/22 13:12, Pavel Begunkov wrote:
>>>> On 5/15/22 19:34, Daniel Harding wrote:
>>>>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>>>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>>>>> [Resend with a smaller trace]
>>>>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel
>>>>>>>>>>>>>>>>> series, I
>>>>>>>>>>>>>>>>> started
>>>>>>>>>>>>>>>>> noticed frequent hangs in lxc-stop. It doesn't happen 100%
>>>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is
>>>>>>>>>>>>>>>>> still
>>>>>>>>>>>>>>>>> present
>>>>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise
>>>>>>>>>>>>>>>>> with the
>>>>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the
>>>>>>>>>>>>>>>>> problem
>>>>>>>>>>>>>>>>> further on my own, but I can easily apply patches to
>>>>>>>>>>>>>>>>> any of the
>>>>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild
>>>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>>>>> would be helpful with reproducing or debugging the
>>>>>>>>>>>>>>>>> problem.
>>>>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would
>>>>>>>>>>>>>>>> make it
>>>>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>       sudo lxc-create --n lxc-test --template download
>>>>>>>>>>>>>>> --bdev
>>>>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r
>>>>>>>>>>>>>>> bionic
>>>>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>>>>       sudo lxc-start -n lxc-test
>>>>>>>>>>>>>>>       sudo lxc-stop -n lxc-test
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>>>>> running.
>>>>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and
>>>>>>>>>>>>>> hence have
>>>>>>>>>>>>>> limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>>>>> take a look
>>>>>>>>>>>>>> at this.
>>>>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good
>>>>>>>>>>>>> and bad
>>>>>>>>>>>>> kernel, to do:
>>>>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>>>>> run lxc-stop
>>>>>>>>>>>>>
>>>>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>>>>
>>>>>>>>>>>>> so we can see what's going on? Looking at the source, lxc
>>>>>>>>>>>>> is just
>>>>>>>>>>>>> using
>>>>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a
>>>>>>>>>>>>> notification
>>>>>>>>>>>>> when it
>>>>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we
>>>>>>>>>>>>> have a
>>>>>>>>>>>>> trace
>>>>>>>>>>>>> from both a working and broken kernel, that might shed some
>>>>>>>>>>>>> light
>>>>>>>>>>>>> on it.
>>>>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>>>>> traces tomorrow.
>>>>>>>>>> I think I got it, I've attached a trace.
>>>>>>>>>>
>>>>>>>>>> What's interesting is that it issues a multi shot poll but I
>>>>>>>>>> don't
>>>>>>>>>> see any kind of cancellation, neither cancel requests nor
>>>>>>>>>> task/ring
>>>>>>>>>> exit. Perhaps have to go look at lxc to see how it's supposed
>>>>>>>>>> to work
>>>>>>>>> Yes, that looks exactly like my bad trace.  I've attached good
>>>>>>>>> trace
>>>>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>>>>> visual scan:
>>>>>>>>>
>>>>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>>>>> beginning, but in the good trace, there are further
>>>>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>>>>> trace, there are none.
>>>>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only
>>>>>>>>> for
>>>>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>>>>> other calls.
>>>>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>>>>> result of 195, while in the bad trace, they all have a result
>>>>>>>>> of 1.
>>>>>>>>>
>>>>>>>>> I don't know whether any of those things are significant or
>>>>>>>>> not, but
>>>>>>>>> that's what jumped out at me.
>>>>>>>>>
>>>>>>>>> I have also attached a copy of the script I used to generate the
>>>>>>>>> traces.  If there is anything further I can to do help debug,
>>>>>>>>> please
>>>>>>>>> let me know.
>>>>>>>> Good observations! Thanks for the traces.
>>>>>>>>
>>>>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>>>>> didn't fully support some cases. If that's the reason, then
>>>>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>>>>> working hypothesis for now, need to check lxc.
>>>>>>> So, I looked at the lxc source code, and it appears to at least
>>>>>>> try to
>>>>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>>>>> know enough to know if the code is actually correct however:
>>>>>>>
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>>>>>>>
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>>>>>>>
>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
>>>>>>>
>>>>>> Hi, this is your Linux kernel regression tracker. Nothing happened
>>>>>> here
>>>>>> for round about ten days now afaics; or did the discussion continue
>>>>>> somewhere else?
>>>>>>
>>>>>>  From what I gathered from this discussion it seems the root cause
>>>>>> might
>>>>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>>>>> kernel regression that should be fixed; or is there a strong
>>>>>> reason why
>>>>>> we should let this one slip?
>>>>>
>>>>> No, there hasn't been any discussion since the email you replied
>>>>> to. I've done a bit more testing on my end, but without anything
>>>>> conclusive.  The one thing I can say is that my testing shows that
>>>>> LXC does correctly handle multi-shot poll requests which were being
>>>>> downgraded to one-shot in 5.16.x kernels, which I think invalidates
>>>>> Pavel's theory.  In 5.17.x kernels, those same poll requests are no
>>>>> longer being downgraded to one-shot requests, and thus under 5.17.x
>>>>> LXC is no longer re-arming those poll requests (but also shouldn't
>>>>> need to, according to what is being returned by the kernel). I
>>>>> don't know if this change in kernel behavior is related to the
>>>>> hang, or if it is just a side effect of other io-uring changes that
>>>>> made it into 5.17.  Nothing in the LXC's usage of io-uring seems
>>>>> obviously incorrect to me, but I am far from an expert.  I also did
>>>>> some work toward creating a simpler reproducer, without success (I
>>>>> was able to get a simple program using io-uring running, but never
>>>>> could get it to hang).  ISTM that this is still a kernel
>>>>> regression, unless someone can point out a definite fault in the
>>>>> way LXC is using io-uring.
>>>>
>>>> Haven't had time to debug it. Apparently LXC is stuck on
>>>> read(2) terminal fd. Not yet clear what is the reason.
>>>
>>> How it was with oneshots:
>>>
>>> 1: kernel: poll fires, add a CQE
>>> 2: kernel: remove poll
>>> 3: userspace: get CQE
>>> 4: userspace: read(terminal_fd);
>>> 5: userspace: add new poll
>>> 6: goto 1)
>>>
>>> What might happen and actually happens with multishot:
>>>
>>> 1: kernel: poll fires, add CQE1
>>> 2: kernel: poll fires again, add CQE2
>>> 3: userspace: get CQE1
>>> 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and
>>> CQE2
>>> 5: userspace: get CQE2
>>> 6: userspace: read(terminal_fd); // nothing to read, hangs here
>>>
>>> It should be the read in lxc_terminal_ptx_io().
>>>
>>> IMHO, it's not a regression but an imperfect feature API and/or
>>> an API misuse.
>>>
>>> Cc: Christian Brauner
>>>
>>> Christian, in case you may have some input on the LXC side of things.
>>> Daniel reported an LXC problem when it uses io_uring multishot poll
>>> requests.
>>> Before aa43477b04025 ("io_uring: poll rework"), multishot poll
>>> requests for
>>> tty/pty and some other files were always downgraded to oneshots,
>>> which had
>>> been fixed by the commit and exposed the problem. I hope the example
>>> above
>>> explains it, but please let me know if it needs more details
>>
>> Pavel, I had actually just started a draft email with the same theory
>> (although you stated it much more clearly than I could have).  I'm
>> working on debugging the LXC side, but I'm pretty sure the issue is
>> due to LXC using blocking reads and getting stuck exactly as you
>> describe.  If I can confirm this, I'll go ahead and mark this
>> regression as invalid and file an issue with LXC. Thanks for your help
>> and patience.
> 
> Yes, it does appear that was the problem.  The attached POC patch against
> LXC fixes the hang.  The kernel is working as intended.
> 
> #regzbot invalid:  userspace programming error

Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
if I'm wrong!). The problem thus still qualifies as a kernel regression
that normally needs to be fixed, as can be seen by some of the quotes
from Linus in this file:
https://www.kernel.org/doc/html/latest/process/handling-regressions.html

Reg. the "normally": there are situations when we let a regression like
this slip -- for example if this particular use case is really odd, so
that the regression only occurs for very very few users. Is that the
case here? Or will most systems with a current or older version of LXC
show the reported problem if they are updated to 5.17 without updating
to a fixed LXC version as well? Then I'd say we likely should try to
find a workaround, as Linus otherwise won't be happy if he ever stumbles
over this thread.

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)

P.S.: As the Linux kernel's regression tracker I deal with a lot of
reports and sometimes miss something important when writing mails like
this. If that's the case here, don't hesitate to tell me in a public
reply, it's in everyone's interest to set the public record straight.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 18:17                                 ` Thorsten Leemhuis
@ 2022-05-16 18:22                                   ` Jens Axboe
  2022-05-16 18:34                                     ` Thorsten Leemhuis
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2022-05-16 18:22 UTC (permalink / raw)
  To: Thorsten Leemhuis, Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Christian Brauner

On 5/16/22 12:17 PM, Thorsten Leemhuis wrote:
>>> Pavel, I had actually just started a draft email with the same theory
>>> (although you stated it much more clearly than I could have).  I'm
>>> working on debugging the LXC side, but I'm pretty sure the issue is
>>> due to LXC using blocking reads and getting stuck exactly as you
>>> describe.  If I can confirm this, I'll go ahead and mark this
>>> regression as invalid and file an issue with LXC. Thanks for your help
>>> and patience.
>>
>> Yes, it does appear that was the problem.  The attached POC patch against
>> LXC fixes the hang.  The kernel is working as intended.
>>
>> #regzbot invalid:  userspace programming error
> 
> Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
> afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
> if I'm wrong!). The problem thus still qualifies as a kernel regression
> that normally needs to be fixed, as can be seen by some of the quotes
> from Linus in this file:
> https://www.kernel.org/doc/html/latest/process/handling-regressions.html

Sorry, but that's really BS in this particular case. This could always
have triggered, it's the way multishot works. Will we count eg timing
changes as potential regressions, because an application relied on
something there? That does not make it ABI.

In general I agree with Linus on this, a change in behavior breaking
something should be investigated and figured out (and reverted, if need
be). This is not that.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 18:22                                   ` Jens Axboe
@ 2022-05-16 18:34                                     ` Thorsten Leemhuis
  2022-05-16 18:39                                       ` Jens Axboe
  0 siblings, 1 reply; 27+ messages in thread
From: Thorsten Leemhuis @ 2022-05-16 18:34 UTC (permalink / raw)
  To: Jens Axboe, Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Christian Brauner

On 16.05.22 20:22, Jens Axboe wrote:
> On 5/16/22 12:17 PM, Thorsten Leemhuis wrote:
>>>> Pavel, I had actually just started a draft email with the same theory
>>>> (although you stated it much more clearly than I could have).  I'm
>>>> working on debugging the LXC side, but I'm pretty sure the issue is
>>>> due to LXC using blocking reads and getting stuck exactly as you
>>>> describe.  If I can confirm this, I'll go ahead and mark this
>>>> regression as invalid and file an issue with LXC. Thanks for your help
>>>> and patience.
>>>
>>> Yes, it does appear that was the problem.  The attached POC patch against
>>> LXC fixes the hang.  The kernel is working as intended.
>>>
>>> #regzbot invalid:  userspace programming error
>>
>> Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
>> afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
>> if I'm wrong!). The problem thus still qualifies as a kernel regression
>> that normally needs to be fixed, as can be seen by some of the quotes
>> from Linus in this file:
>> https://www.kernel.org/doc/html/latest/process/handling-regressions.html
> 
> Sorry, but that's really BS in this particular case. This could always
> have triggered, it's the way multishot works. Will we count eg timing
> changes as potential regressions, because an application relied on
> something there? That does not make it ABI.
> 
> In general I agree with Linus on this, a change in behavior breaking
> something should be investigated and figured out (and reverted, if need
> be). This is not that.

Sorry, I have to deal with various subsystems and a lot of regression
reports. I can't know the details of each issue and there are
developers around that are not that familiar with all the practical
implications of the "no regressions". That's why I was just trying to
ensure that this is something safe to ignore. If you say it is, then I'm
totally happy and now rest my case. :-D

Ciao, Thorsten


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 18:34                                     ` Thorsten Leemhuis
@ 2022-05-16 18:39                                       ` Jens Axboe
  2022-05-16 19:07                                         ` Thorsten Leemhuis
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2022-05-16 18:39 UTC (permalink / raw)
  To: Thorsten Leemhuis, Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Christian Brauner

On 5/16/22 12:34 PM, Thorsten Leemhuis wrote:
> On 16.05.22 20:22, Jens Axboe wrote:
>> On 5/16/22 12:17 PM, Thorsten Leemhuis wrote:
>>>>> Pavel, I had actually just started a draft email with the same theory
>>>>> (although you stated it much more clearly than I could have).  I'm
>>>>> working on debugging the LXC side, but I'm pretty sure the issue is
>>>>> due to LXC using blocking reads and getting stuck exactly as you
>>>>> describe.  If I can confirm this, I'll go ahead and mark this
>>>>> regression as invalid and file an issue with LXC. Thanks for your help
>>>>> and patience.
>>>>
>>>> Yes, it does appear that was the problem.  The attached POC patch against
>>>> LXC fixes the hang.  The kernel is working as intended.
>>>>
>>>> #regzbot invalid:  userspace programming error
>>>
>>> Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
>>> afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
>>> if I'm wrong!). The problem thus still qualifies as a kernel regression
>>> that normally needs to be fixed, as can be seen by some of the quotes
>>> from Linus in this file:
>>> https://www.kernel.org/doc/html/latest/process/handling-regressions.html
>>
>> Sorry, but that's really BS in this particular case. This could always
>> have triggered, it's the way multishot works. Will we count eg timing
>> changes as potential regressions, because an application relied on
>> something there? That does not make it ABI.
>>
>> In general I agree with Linus on this, a change in behavior breaking
>> something should be investigated and figured out (and reverted, if need
>> be). This is not that.
> 
> Sorry, I have to deal with various subsystems and a lot of regression
> reports. I can't know the details of each issue and there are
> developers around that are not that familiar with all the practical
> implications of the "no regressions". That's why I was just trying to
> ensure that this is something safe to ignore. If you say it is, then I'm
> totally happy and now rest my case. :-D

It's just a slippery slope that quickly leads to the fact that _any_
kernel change is a potential regression, as it may change something
that an app unknowingly depends on. For this case, the multishot ended
up being downgraded to single shot on older kernels, so you'd never see
multiple triggers of it. And multiple triggers is a natural effect of
the level triggered poll that io_uring does. The app didn't handle
multiple events in between reading them, which was an oversight in how
that was done.

Hence I do think this one can be safely closed.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 18:39                                       ` Jens Axboe
@ 2022-05-16 19:07                                         ` Thorsten Leemhuis
  2022-05-16 19:14                                           ` Jens Axboe
  0 siblings, 1 reply; 27+ messages in thread
From: Thorsten Leemhuis @ 2022-05-16 19:07 UTC (permalink / raw)
  To: Jens Axboe, Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Christian Brauner



On 16.05.22 20:39, Jens Axboe wrote:
> On 5/16/22 12:34 PM, Thorsten Leemhuis wrote:
>> On 16.05.22 20:22, Jens Axboe wrote:
>>> On 5/16/22 12:17 PM, Thorsten Leemhuis wrote:
>>>>>> Pavel, I had actually just started a draft email with the same theory
>>>>>> (although you stated it much more clearly than I could have).  I'm
>>>>>> working on debugging the LXC side, but I'm pretty sure the issue is
>>>>>> due to LXC using blocking reads and getting stuck exactly as you
>>>>>> describe.  If I can confirm this, I'll go ahead and mark this
>>>>>> regression as invalid and file an issue with LXC. Thanks for your help
>>>>>> and patience.
>>>>>
>>>>> Yes, it does appear that was the problem.  The attached POC patch against
>>>>> LXC fixes the hang.  The kernel is working as intended.
>>>>>
>>>>> #regzbot invalid:  userspace programming error
>>>>
>>>> Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
>>>> afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
>>>> if I'm wrong!). The problem thus still qualifies as a kernel regression
>>>> that normally needs to be fixed, as can be seen by some of the quotes
>>>> from Linus in this file:
>>>> https://www.kernel.org/doc/html/latest/process/handling-regressions.html
>>>
>>> Sorry, but that's really BS in this particular case. This could always
>>> have triggered, it's the way multishot works. Will we count eg timing
>>> changes as potential regressions, because an application relied on
>>> something there? That does not make it ABI.
>>>
>>> In general I agree with Linus on this, a change in behavior breaking
>>> something should be investigated and figured out (and reverted, if need
>>> be). This is not that.
>>
>> Sorry, I have to deal with various subsystems and a lot of regression
>> reports. I can't know the details of each issue and there are
>> developers around that are not that familiar with all the practical
>> implications of the "no regressions". That's why I was just trying to
>> ensure that this is something safe to ignore. If you say it is, then I'm
>> totally happy and now rest my case. :-D
> 
> It's just a slippery slope that quickly leads to the fact that _any_
> kernel change is a potential regression,

I know, don't worry, that's why I'm trying to be careful. But I also had
cases already where someone (even a proper subsystem maintainer) said
"this is not a regression, it's a userspace bug" and it clearly was a
kernel regression (and Linus wasn't happy when he found out). That's why I
was trying to evaluate the situation to get an impression of whether this is
really something that can/should be ignored. But I guess my
approach/wording here might not have been the best and needs to be improved.

> as it may change something
> that an app unknowingly depends on. For this case, the multishot ended
> up being downgraded to single shot on older kernels, so you'd never see
> multiple triggers of it. And multiple triggers is a natural effect of
> the level triggered poll that io_uring does. The app didn't handle
> multiple events in between reading them, which was an oversight in how
> that was done.
> 
> Hence I do think this one can be safely closed.

Many thx for clarifying.

Ciao, Thorsten

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 19:07                                         ` Thorsten Leemhuis
@ 2022-05-16 19:14                                           ` Jens Axboe
  0 siblings, 0 replies; 27+ messages in thread
From: Jens Axboe @ 2022-05-16 19:14 UTC (permalink / raw)
  To: Thorsten Leemhuis, Daniel Harding, Pavel Begunkov
  Cc: regressions, io-uring, linux-kernel, Christian Brauner

On 5/16/22 1:07 PM, Thorsten Leemhuis wrote:
> 
> 
> On 16.05.22 20:39, Jens Axboe wrote:
>> On 5/16/22 12:34 PM, Thorsten Leemhuis wrote:
>>> On 16.05.22 20:22, Jens Axboe wrote:
>>>> On 5/16/22 12:17 PM, Thorsten Leemhuis wrote:
>>>>>>> Pavel, I had actually just started a draft email with the same theory
>>>>>>> (although you stated it much more clearly than I could have).  I'm
>>>>>>> working on debugging the LXC side, but I'm pretty sure the issue is
>>>>>>> due to LXC using blocking reads and getting stuck exactly as you
>>>>>>> describe.  If I can confirm this, I'll go ahead and mark this
>>>>>>> regression as invalid and file an issue with LXC. Thanks for your help
>>>>>>> and patience.
>>>>>>
>>>>>> Yes, it does appear that was the problem.  The attached POC patch against
>>>>>> LXC fixes the hang.  The kernel is working as intended.
>>>>>>
>>>>>> #regzbot invalid:  userspace programming error
>>>>>
>>>>> Hmmm, not sure if I like this. So yes, this might be a bug in LXC, but
>>>>> afaics it's a bug that was exposed by a kernel change in 5.17 (correct me
>>>>> if I'm wrong!). The problem thus still qualifies as a kernel regression
>>>>> that normally needs to be fixed, as can be seen by some of the quotes
>>>>> from Linus in this file:
>>>>> https://www.kernel.org/doc/html/latest/process/handling-regressions.html
>>>>
>>>> Sorry, but that's really BS in this particular case. This could always
>>>> have triggered, it's the way multishot works. Will we count eg timing
>>>> changes as potential regressions, because an application relied on
>>>> something there? That does not make it ABI.
>>>>
>>>> In general I agree with Linus on this, a change in behavior breaking
>>>> something should be investigated and figured out (and reverted, if need
>>>> be). This is not that.
>>>
>>> Sorry, I have to deal with various subsystems and a lot of regression
>>> reports. I can't know the details of each issue and there are
>>> developers around that are not that familiar with all the practical
>>> implications of the "no regressions". That's why I was just trying to
>>> ensure that this is something safe to ignore. If you say it is, then I'm
>>> totally happy and now rest my case. :-D
>>
>> It's just a slippery slope that quickly leads to the fact that _any_
>> kernel change is a potential regression,
> 
> I know, don't worry, that's why I'm trying to be careful. But I also had
> cases already where someone (even a proper subsystem maintainer) said
> "this is not a regression, it's a userspace bug" and it clearly was a
> kernel regression (and Linus wasn't happy when he found out). That's why I

I get where you're coming from, and that is indeed what most maintainers
would say :-)

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-16 18:13                                 ` Pavel Begunkov
@ 2022-05-17  8:19                                   ` Christian Brauner
  2022-05-17 10:31                                     ` Pavel Begunkov
  0 siblings, 1 reply; 27+ messages in thread
From: Christian Brauner @ 2022-05-17  8:19 UTC (permalink / raw)
  To: Pavel Begunkov, Daniel Harding, Jens Axboe, Thorsten Leemhuis
  Cc: regressions, io-uring, linux-kernel

On Mon, May 16, 2022 at 07:13:05PM +0100, Pavel Begunkov wrote:
> On 5/16/22 16:13, Daniel Harding wrote:
> > On 5/16/22 16:57, Daniel Harding wrote:
> > > On 5/16/22 16:25, Pavel Begunkov wrote:
> > > > On 5/16/22 13:12, Pavel Begunkov wrote:
> > > > > On 5/15/22 19:34, Daniel Harding wrote:
> > > > > > On 5/15/22 11:20, Thorsten Leemhuis wrote:
> > > > > > > On 04.05.22 08:54, Daniel Harding wrote:
> > > > > > > > On 5/3/22 17:14, Pavel Begunkov wrote:
> > > > > > > > > On 5/3/22 08:37, Daniel Harding wrote:
> > > > > > > > > > [Resend with a smaller trace]
> > > > > > > > > > On 5/3/22 02:14, Pavel Begunkov wrote:
> > > > > > > > > > > On 5/2/22 19:49, Daniel Harding wrote:
> > > > > > > > > > > > On 5/2/22 20:40, Pavel Begunkov wrote:
> > > > > > > > > > > > > On 5/2/22 18:00, Jens Axboe wrote:
> > > > > > > > > > > > > > On 5/2/22 7:59 AM, Jens Axboe wrote:
> > > > > > > > > > > > > > > On 5/2/22 7:36 AM, Daniel Harding wrote:
> > > > > > > > > > > > > > > > On 5/2/22 16:26, Jens Axboe wrote:
> > > > > > > > > > > > > > > > > On 5/2/22 7:17 AM, Daniel Harding wrote:
> > > > > > > > > > > > > > > > > > I use lxc-4.0.12 on Gentoo, built with io-uring support
> > > > > > > > > > > > > > > > > > (--enable-liburing), targeting liburing-2.1.  My kernel
> > > > > > > > > > > > > > > > > > config is a
> > > > > > > > > > > > > > > > > > very lightly modified version of Fedora's generic kernel
> > > > > > > > > > > > > > > > > > config. After
> > > > > > > > > > > > > > > > > > moving from the 5.16.x series to the 5.17.x kernel series, I
> > > > > > > > > > > > > > > > > > started
> > > > > > > > > > > > > > > > > > noticed frequent hangs in lxc-stop. It doesn't happen 100%
> > > > > > > > > > > > > > > > > > of the
> > > > > > > > > > > > > > > > > > time, but definitely more than 50% of the time. Bisecting
> > > > > > > > > > > > > > > > > > narrowed
> > > > > > > > > > > > > > > > > > down the issue to commit
> > > > > > > > > > > > > > > > > > aa43477b040251f451db0d844073ac00a8ab66ee:
> > > > > > > > > > > > > > > > > > io_uring: poll rework. Testing indicates the problem is still
> > > > > > > > > > > > > > > > > > present
> > > > > > > > > > > > > > > > > > in 5.18-rc5. Unfortunately I do not have the expertise with the
> > > > > > > > > > > > > > > > > > codebases of either lxc or io-uring to try to debug the problem
> > > > > > > > > > > > > > > > > > further on my own, but I can easily apply patches to any of the
> > > > > > > > > > > > > > > > > > involved components (lxc, liburing, kernel) and rebuild for
> > > > > > > > > > > > > > > > > > testing or
> > > > > > > > > > > > > > > > > > validation.  I am also happy to provide any further
> > > > > > > > > > > > > > > > > > information that
> > > > > > > > > > > > > > > > > > would be helpful with reproducing or debugging the problem.
> > > > > > > > > > > > > > > > > Do you have a recipe to reproduce the hang? That would make it
> > > > > > > > > > > > > > > > > significantly easier to figure out.
> > > > > > > > > > > > > > > > I can reproduce it with just the following:
> > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > >       sudo lxc-create --n lxc-test --template download --bdev
> > > > > > > > > > > > > > > > dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
> > > > > > > > > > > > > > > > -a amd64
> > > > > > > > > > > > > > > >       sudo lxc-start -n lxc-test
> > > > > > > > > > > > > > > >       sudo lxc-stop -n lxc-test
> > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > > The lxc-stop command never exits and the container continues
> > > > > > > > > > > > > > > > running.
> > > > > > > > > > > > > > > > If that isn't sufficient to reproduce, please let me know.
> > > > > > > > > > > > > > > Thanks, that's useful! I'm at a conference this week and hence have
> > > > > > > > > > > > > > > limited amount of time to debug, hopefully Pavel has time to
> > > > > > > > > > > > > > > take a look
> > > > > > > > > > > > > > > at this.
> > > > > > > > > > > > > > Didn't manage to reproduce. Can you try, on both the good and bad
> > > > > > > > > > > > > > kernel, to do:
> > > > > > > > > > > > > Same here, it doesn't reproduce for me
> > > > > > > > > > > > OK, sorry it wasn't something simple.
> > > > > > > > > > > > > > # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
> > > > > > > > > > > > > > run lxc-stop
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > # cp /sys/kernel/debug/tracing/trace ~/iou-trace
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > so we can see what's going on? Looking at the source, lxc is just
> > > > > > > > > > > > > > using
> > > > > > > > > > > > > > plain POLL_ADD, so I'm guessing it's not getting a notification
> > > > > > > > > > > > > > when it
> > > > > > > > > > > > > > expects to, or it's POLL_REMOVE not doing its job. If we have a
> > > > > > > > > > > > > > trace
> > > > > > > > > > > > > > from both a working and broken kernel, that might shed some light
> > > > > > > > > > > > > > on it.
> > > > > > > > > > > > It's late in my timezone, but I'll try to work on getting those
> > > > > > > > > > > > traces tomorrow.
> > > > > > > > > > > I think I got it, I've attached a trace.
> > > > > > > > > > > 
> > > > > > > > > > > What's interesting is that it issues a multi shot poll but I don't
> > > > > > > > > > > see any kind of cancellation, neither cancel requests nor task/ring
> > > > > > > > > > > exit. Perhaps have to go look at lxc to see how it's supposed
> > > > > > > > > > > to work
> > > > > > > > > > Yes, that looks exactly like my bad trace.  I've attached good trace
> > > > > > > > > > (captured with linux-5.16.19) and a bad trace (captured with
> > > > > > > > > > linux-5.17.5).  These are the differences I noticed with just a
> > > > > > > > > > visual scan:
> > > > > > > > > > 
> > > > > > > > > > * Both traces have three io_uring_submit_sqe calls at the very
> > > > > > > > > > beginning, but in the good trace, there are further
> > > > > > > > > > io_uring_submit_sqe calls throughout the trace, while in the bad
> > > > > > > > > > trace, there are none.
> > > > > > > > > > * The good trace uses a mask of c3 for io_uring_task_add much more
> > > > > > > > > > often than the bad trace:  the bad trace uses a mask of c3 only for
> > > > > > > > > > the very last call to io_uring_task_add, but a mask of 41 for the
> > > > > > > > > > other calls.
> > > > > > > > > > * In the good trace, many of the io_uring_complete calls have a
> > > > > > > > > > result of 195, while in the bad trace, they all have a result of 1.
> > > > > > > > > > 
> > > > > > > > > > I don't know whether any of those things are significant or not, but
> > > > > > > > > > that's what jumped out at me.
> > > > > > > > > > 
> > > > > > > > > > I have also attached a copy of the script I used to generate the
> > > > > > > > > > traces.  If there is anything further I can do to help debug, please
> > > > > > > > > > let me know.
> > > > > > > > > Good observations! Thanks for the traces.
> > > > > > > > > 
> > > > > > > > > It sounds like multi-shot poll requests were getting downgraded
> > > > > > > > > to one-shot, which is a valid behaviour and was so because we
> > > > > > > > > didn't fully support some cases. If that's the reason, then
> > > > > > > > > the userspace/lxc is misusing the ABI. At least, that's the
> > > > > > > > > working hypothesis for now, need to check lxc.
> > > > > > > > So, I looked at the lxc source code, and it appears to at least try to
> > > > > > > > handle the case of multi-shot being downgraded to one-shot.  I don't
> > > > > > > > know enough to know if the code is actually correct however:
> > > > > > > > 
> > > > > > > > https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
> > > > > > > > https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
> > > > > > > > https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
> > > > > > > Hi, this is your Linux kernel regression tracker. Nothing happened here
> > > > > > > for round about ten days now afaics; or did the discussion continue
> > > > > > > somewhere else?
> > > > > > > 
> > > > > > > From what I gathered from this discussion it seems the root cause might
> > > > > > > be in LXC, but it was exposed by a kernel change. That makes it still a
> > > > > > > kernel regression that should be fixed; or is there a strong reason why
> > > > > > > we should let this one slip?
> > > > > > 
> > > > > > No, there hasn't been any discussion since the email you
> > > > > > replied to. I've done a bit more testing on my end, but
> > > > > > without anything conclusive.  The one thing I can say is
> > > > > > that my testing shows that LXC does correctly handle
> > > > > > multi-shot poll requests which were being downgraded to
> > > > > > one-shot in 5.16.x kernels, which I think invalidates
> > > > > > Pavel's theory.  In 5.17.x kernels, those same poll
> > > > > > requests are no longer being downgraded to one-shot
> > > > > > requests, and thus under 5.17.x LXC is no longer
> > > > > > re-arming those poll requests (but also shouldn't need
> > > > > > to, according to what is being returned by the kernel).
> > > > > > I don't know if this change in kernel behavior is
> > > > > > related to the hang, or if it is just a side effect of
> > > > > > other io-uring changes that made it into 5.17.  Nothing
> > > > > > in the LXC's usage of io-uring seems obviously incorrect
> > > > > > to me, but I am far from an expert.  I also did some
> > > > > > work toward creating a simpler reproducer, without
> > > > > > success (I was able to get a simple program using
> > > > > > io-uring running, but never could get it to hang).  ISTM
> > > > > > that this is still a kernel regression, unless someone
> > > > > > can point out a definite fault in the way LXC is using
> > > > > > io-uring.
> > > > > 
> > > > > Haven't had time to debug it. Apparently LXC is stuck on
> > > > > read(2) terminal fd. Not yet clear what is the reason.
> > > > 
> > > > How it was with oneshots:
> > > > 
> > > > 1: kernel: poll fires, add a CQE
> > > > 2: kernel: remove poll
> > > > 3: userspace: get CQE
> > > > 4: userspace: read(terminal_fd);
> > > > 5: userspace: add new poll
> > > > 6: goto 1)
> > > > 
> > > > What might happen and actually happens with multishot:
> > > > 
> > > > 1: kernel: poll fires, add CQE1
> > > > 2: kernel: poll fires again, add CQE2
> > > > 3: userspace: get CQE1
> > > > 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and CQE2
> > > > 5: userspace: get CQE2
> > > > 6: userspace: read(terminal_fd); // nothing to read, hangs here

Ah, gotcha.
So "5: userspace: get CQE2" what's the correct way to handle this
problem surfacing in 6? Is it simply to use non-blocking fds and then
handle EAGAIN/EWOULDBLOCK or is there a better way I'm missing?

> > > > 
> > > > It should be the read in lxc_terminal_ptx_io().
> > > > 
> > > > IMHO, it's not a regression but a not perfect feature API and/or
> > > > an API misuse.
> > > > 
> > > > Cc: Christian Brauner
> > > > 
> > > > Christian, in case you may have some input on the LXC side of things.
> > > > Daniel reported an LXC problem when it uses io_uring multishot poll requests.
> > > > Before aa43477b04025 ("io_uring: poll rework"), multishot poll requests for
> > > > tty/pty and some other files were always downgraded to oneshots, which had
> > > > been fixed by the commit and exposed the problem. I hope the example above
> > > > explains it, but please let me know if it needs more details
> > > 
> > > Pavel, I had actually just started a draft email with the same theory (although you stated it much more clearly than I could have).  I'm working on debugging the LXC side, but I'm pretty sure the issue is due to LXC using blocking reads and getting stuck exactly as you describe.  If I can confirm this, I'll go ahead and mark this regression as invalid and file an issue with LXC. Thanks for your help and patience.
> > 
> > Yes, it does appear that was the problem.  The attached POC patch against LXC fixes the hang.  The kernel is working as intended.
> 
> Daniel, that's great, thanks for confirming!

Daniel, Jens, Pavel, Thorsten,

Thanks for debugging this! I've received an issue on the LXC bug tracker
for this.

Just a little bit of background: LXC still defaults to epoll event loops,
so users must explicitly select at compile time that they want to use
io_uring. I expect that in the future we might simply switch to io_uring
completely.

But the fact that it's not the default might be the reason the issue
hasn't surfaced earlier if it could've always been triggered.

(Fwiw, the multishot to oneshot downgrade of pty/tty fds was a bit of a
problem originally and I only found out about it because of a Twitter
thread with Jens; but maybe I missed documentation around this.)

Christian

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [REGRESSION] lxc-stop hang on 5.17.x kernels
  2022-05-17  8:19                                   ` Christian Brauner
@ 2022-05-17 10:31                                     ` Pavel Begunkov
  0 siblings, 0 replies; 27+ messages in thread
From: Pavel Begunkov @ 2022-05-17 10:31 UTC (permalink / raw)
  To: Christian Brauner, Daniel Harding, Jens Axboe, Thorsten Leemhuis
  Cc: regressions, io-uring, linux-kernel

On 5/17/22 09:19, Christian Brauner wrote:
> On Mon, May 16, 2022 at 07:13:05PM +0100, Pavel Begunkov wrote:
>> On 5/16/22 16:13, Daniel Harding wrote:
>>> On 5/16/22 16:57, Daniel Harding wrote:
>>>> On 5/16/22 16:25, Pavel Begunkov wrote:
>>>>> On 5/16/22 13:12, Pavel Begunkov wrote:
>>>>>> On 5/15/22 19:34, Daniel Harding wrote:
>>>>>>> On 5/15/22 11:20, Thorsten Leemhuis wrote:
>>>>>>>> On 04.05.22 08:54, Daniel Harding wrote:
>>>>>>>>> On 5/3/22 17:14, Pavel Begunkov wrote:
>>>>>>>>>> On 5/3/22 08:37, Daniel Harding wrote:
>>>>>>>>>>> [Resend with a smaller trace]
>>>>>>>>>>> On 5/3/22 02:14, Pavel Begunkov wrote:
>>>>>>>>>>>> On 5/2/22 19:49, Daniel Harding wrote:
>>>>>>>>>>>>> On 5/2/22 20:40, Pavel Begunkov wrote:
>>>>>>>>>>>>>> On 5/2/22 18:00, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 5/2/22 7:59 AM, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 5/2/22 7:36 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>>>> On 5/2/22 16:26, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>> On 5/2/22 7:17 AM, Daniel Harding wrote:
>>>>>>>>>>>>>>>>>>> I use lxc-4.0.12 on Gentoo, built with io-uring support
>>>>>>>>>>>>>>>>>>> (--enable-liburing), targeting liburing-2.1.  My kernel
>>>>>>>>>>>>>>>>>>> config is a
>>>>>>>>>>>>>>>>>>> very lightly modified version of Fedora's generic kernel
>>>>>>>>>>>>>>>>>>> config. After
>>>>>>>>>>>>>>>>>>> moving from the 5.16.x series to the 5.17.x kernel series, I
>>>>>>>>>>>>>>>>>>> started
>>>>>>>>>>>>>>>>>>> noticing frequent hangs in lxc-stop. It doesn't happen 100%
>>>>>>>>>>>>>>>>>>> of the
>>>>>>>>>>>>>>>>>>> time, but definitely more than 50% of the time. Bisecting
>>>>>>>>>>>>>>>>>>> narrowed
>>>>>>>>>>>>>>>>>>> down the issue to commit
>>>>>>>>>>>>>>>>>>> aa43477b040251f451db0d844073ac00a8ab66ee:
>>>>>>>>>>>>>>>>>>> io_uring: poll rework. Testing indicates the problem is still
>>>>>>>>>>>>>>>>>>> present
>>>>>>>>>>>>>>>>>>> in 5.18-rc5. Unfortunately I do not have the expertise with the
>>>>>>>>>>>>>>>>>>> codebases of either lxc or io-uring to try to debug the problem
>>>>>>>>>>>>>>>>>>> further on my own, but I can easily apply patches to any of the
>>>>>>>>>>>>>>>>>>> involved components (lxc, liburing, kernel) and rebuild for
>>>>>>>>>>>>>>>>>>> testing or
>>>>>>>>>>>>>>>>>>> validation.  I am also happy to provide any further
>>>>>>>>>>>>>>>>>>> information that
>>>>>>>>>>>>>>>>>>> would be helpful with reproducing or debugging the problem.
>>>>>>>>>>>>>>>>>> Do you have a recipe to reproduce the hang? That would make it
>>>>>>>>>>>>>>>>>> significantly easier to figure out.
>>>>>>>>>>>>>>>>> I can reproduce it with just the following:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>        sudo lxc-create --n lxc-test --template download --bdev
>>>>>>>>>>>>>>>>> dir --dir /var/lib/lxc/lxc-test/rootfs -- -d ubuntu -r bionic
>>>>>>>>>>>>>>>>> -a amd64
>>>>>>>>>>>>>>>>>        sudo lxc-start -n lxc-test
>>>>>>>>>>>>>>>>>        sudo lxc-stop -n lxc-test
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The lxc-stop command never exits and the container continues
>>>>>>>>>>>>>>>>> running.
>>>>>>>>>>>>>>>>> If that isn't sufficient to reproduce, please let me know.
>>>>>>>>>>>>>>>> Thanks, that's useful! I'm at a conference this week and hence have
>>>>>>>>>>>>>>>> limited amount of time to debug, hopefully Pavel has time to
>>>>>>>>>>>>>>>> take a look
>>>>>>>>>>>>>>>> at this.
>>>>>>>>>>>>>>> Didn't manage to reproduce. Can you try, on both the good and bad
>>>>>>>>>>>>>>> kernel, to do:
>>>>>>>>>>>>>> Same here, it doesn't reproduce for me
>>>>>>>>>>>>> OK, sorry it wasn't something simple.
>>>>>>>>>>>>>>> # echo 1 > /sys/kernel/debug/tracing/events/io_uring/enable
>>>>>>>>>>>>>>> run lxc-stop
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> # cp /sys/kernel/debug/tracing/trace ~/iou-trace
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> so we can see what's going on? Looking at the source, lxc is just
>>>>>>>>>>>>>>> using
>>>>>>>>>>>>>>> plain POLL_ADD, so I'm guessing it's not getting a notification
>>>>>>>>>>>>>>> when it
>>>>>>>>>>>>>>> expects to, or it's POLL_REMOVE not doing its job. If we have a
>>>>>>>>>>>>>>> trace
>>>>>>>>>>>>>>> from both a working and broken kernel, that might shed some light
>>>>>>>>>>>>>>> on it.
>>>>>>>>>>>>> It's late in my timezone, but I'll try to work on getting those
>>>>>>>>>>>>> traces tomorrow.
>>>>>>>>>>>> I think I got it, I've attached a trace.
>>>>>>>>>>>>
>>>>>>>>>>>> What's interesting is that it issues a multi shot poll but I don't
>>>>>>>>>>>> see any kind of cancellation, neither cancel requests nor task/ring
>>>>>>>>>>>> exit. Perhaps have to go look at lxc to see how it's supposed
>>>>>>>>>>>> to work
>>>>>>>>>>> Yes, that looks exactly like my bad trace.  I've attached good trace
>>>>>>>>>>> (captured with linux-5.16.19) and a bad trace (captured with
>>>>>>>>>>> linux-5.17.5).  These are the differences I noticed with just a
>>>>>>>>>>> visual scan:
>>>>>>>>>>>
>>>>>>>>>>> * Both traces have three io_uring_submit_sqe calls at the very
>>>>>>>>>>> beginning, but in the good trace, there are further
>>>>>>>>>>> io_uring_submit_sqe calls throughout the trace, while in the bad
>>>>>>>>>>> trace, there are none.
>>>>>>>>>>> * The good trace uses a mask of c3 for io_uring_task_add much more
>>>>>>>>>>> often than the bad trace:  the bad trace uses a mask of c3 only for
>>>>>>>>>>> the very last call to io_uring_task_add, but a mask of 41 for the
>>>>>>>>>>> other calls.
>>>>>>>>>>> * In the good trace, many of the io_uring_complete calls have a
>>>>>>>>>>> result of 195, while in the bad trace, they all have a result of 1.
>>>>>>>>>>>
>>>>>>>>>>> I don't know whether any of those things are significant or not, but
>>>>>>>>>>> that's what jumped out at me.
>>>>>>>>>>>
>>>>>>>>>>> I have also attached a copy of the script I used to generate the
>>>>>>>>>>> traces.  If there is anything further I can do to help debug, please
>>>>>>>>>>> let me know.
>>>>>>>>>> Good observations! Thanks for the traces.
>>>>>>>>>>
>>>>>>>>>> It sounds like multi-shot poll requests were getting downgraded
>>>>>>>>>> to one-shot, which is a valid behaviour and was so because we
>>>>>>>>>> didn't fully support some cases. If that's the reason, then
>>>>>>>>>> the userspace/lxc is misusing the ABI. At least, that's the
>>>>>>>>>> working hypothesis for now, need to check lxc.
>>>>>>>>> So, I looked at the lxc source code, and it appears to at least try to
>>>>>>>>> handle the case of multi-shot being downgraded to one-shot.  I don't
>>>>>>>>> know enough to know if the code is actually correct however:
>>>>>>>>>
>>>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L165-L189
>>>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L254
>>>>>>>>> https://github.com/lxc/lxc/blob/7e37cc96bb94175a8e351025d26cc35dc2d10543/src/lxc/mainloop.c#L288-L290
>>>>>>>> Hi, this is your Linux kernel regression tracker. Nothing happened here
>>>>>>>> for round about ten days now afaics; or did the discussion continue
>>>>>>>> somewhere else?
>>>>>>>>
>>>>>>>> From what I gathered from this discussion it seems the root cause might
>>>>>>>> be in LXC, but it was exposed by a kernel change. That makes it still a
>>>>>>>> kernel regression that should be fixed; or is there a strong reason why
>>>>>>>> we should let this one slip?
>>>>>>>
>>>>>>> No, there hasn't been any discussion since the email you
>>>>>>> replied to. I've done a bit more testing on my end, but
>>>>>>> without anything conclusive.  The one thing I can say is
>>>>>>> that my testing shows that LXC does correctly handle
>>>>>>> multi-shot poll requests which were being downgraded to
>>>>>>> one-shot in 5.16.x kernels, which I think invalidates
>>>>>>> Pavel's theory.  In 5.17.x kernels, those same poll
>>>>>>> requests are no longer being downgraded to one-shot
>>>>>>> requests, and thus under 5.17.x LXC is no longer
>>>>>>> re-arming those poll requests (but also shouldn't need
>>>>>>> to, according to what is being returned by the kernel).
>>>>>>> I don't know if this change in kernel behavior is
>>>>>>> related to the hang, or if it is just a side effect of
>>>>>>> other io-uring changes that made it into 5.17.  Nothing
>>>>>>> in the LXC's usage of io-uring seems obviously incorrect
>>>>>>> to me, but I am far from an expert.  I also did some
>>>>>>> work toward creating a simpler reproducer, without
>>>>>>> success (I was able to get a simple program using
>>>>>>> io-uring running, but never could get it to hang).  ISTM
>>>>>>> that this is still a kernel regression, unless someone
>>>>>>> can point out a definite fault in the way LXC is using
>>>>>>> io-uring.
>>>>>>
>>>>>> Haven't had time to debug it. Apparently LXC is stuck on
>>>>>> read(2) terminal fd. Not yet clear what is the reason.
>>>>>
>>>>> How it was with oneshots:
>>>>>
>>>>> 1: kernel: poll fires, add a CQE
>>>>> 2: kernel: remove poll
>>>>> 3: userspace: get CQE
>>>>> 4: userspace: read(terminal_fd);
>>>>> 5: userspace: add new poll
>>>>> 6: goto 1)
>>>>>
>>>>> What might happen and actually happens with multishot:
>>>>>
>>>>> 1: kernel: poll fires, add CQE1
>>>>> 2: kernel: poll fires again, add CQE2
>>>>> 3: userspace: get CQE1
>>>>> 4: userspace: read(terminal_fd); // reads all data, for both CQE1 and CQE2
>>>>> 5: userspace: get CQE2
>>>>> 6: userspace: read(terminal_fd); // nothing to read, hangs here
> 
> Ah, gotcha.
> So "5: userspace: get CQE2" what's the correct way to handle this
> problem surfacing in 6? Is it simply to use non-blocking fds and then
> handle EAGAIN/EWOULDBLOCK or is there a better way I'm missing?

I don't see a better way, unfortunately. If you read via io_uring it'll
hide blocking from you, but it doesn't seem like a simple change and
won't be performance-optimal anyway as ttys don't support IOCB_NOWAIT


>>>>> It should be the read in lxc_terminal_ptx_io().
>>>>>
>>>>> IMHO, it's not a regression but a not perfect feature API and/or
>>>>> an API misuse.
>>>>>
>>>>> Cc: Christian Brauner
>>>>>
>>>>> Christian, in case you may have some input on the LXC side of things.
>>>>> Daniel reported an LXC problem when it uses io_uring multishot poll requests.
>>>>> Before aa43477b04025 ("io_uring: poll rework"), multishot poll requests for
>>>>> tty/pty and some other files were always downgraded to oneshots, which had
>>>>> been fixed by the commit and exposed the problem. I hope the example above
>>>>> explains it, but please let me know if it needs more details
>>>>
>>>> Pavel, I had actually just started a draft email with the same theory (although you stated it much more clearly than I could have).  I'm working on debugging the LXC side, but I'm pretty sure the issue is due to LXC using blocking reads and getting stuck exactly as you describe.  If I can confirm this, I'll go ahead and mark this regression as invalid and file an issue with LXC. Thanks for your help and patience.
>>>
>>> Yes, it does appear that was the problem.  The attached POC patch against LXC fixes the hang.  The kernel is working as intended.
>>
>> Daniel, that's great, thanks for confirming!
> 
> Daniel, Jens, Pavel, Thorsten,
> 
> Thanks for debugging this! I've received an issue on the LXC bug tracker
> for this.
> 
> Just a little bit of background: LXC still defaults to epoll event loops,
> so users must explicitly select at compile time that they want to use
> io_uring. I expect that in the future we might simply switch to io_uring
> completely.
> 
> But the fact that it's not the default might be the reason the issue
> hasn't surfaced earlier if it could've always been triggered.
> 
> (Fwiw, the multishot to oneshot downgrade of pty/tty fds was a bit of a
> problem originally and I only found out about it because of a Twitter
> thread with Jens; but maybe I missed documentation around this.)

I bet it was quite a pain!

"The CQE flags field will have IORING_CQE_F_MORE set on completion if
the application should expect further CQE entries from the original
request. If this flag isn't set on completion, then the poll request
has been terminated and no further events will be generated."

The rule still applies, though now we don't immediately downgrade it
for a bunch of common cases like polling files with multiple wait
queues.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2022-05-17 10:32 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-02 13:17 [REGRESSION] lxc-stop hang on 5.17.x kernels Daniel Harding
2022-05-02 13:26 ` Jens Axboe
2022-05-02 13:36   ` Daniel Harding
2022-05-02 13:59     ` Jens Axboe
2022-05-02 17:00       ` Jens Axboe
2022-05-02 17:40         ` Pavel Begunkov
2022-05-02 18:49           ` Daniel Harding
2022-05-02 23:14             ` Pavel Begunkov
2022-05-03  7:13               ` Daniel Harding
2022-05-03  7:37               ` Daniel Harding
2022-05-03 14:14                 ` Pavel Begunkov
2022-05-04  6:54                   ` Daniel Harding
2022-05-15  8:20                     ` Thorsten Leemhuis
2022-05-15 18:34                       ` Daniel Harding
2022-05-16 12:12                         ` Pavel Begunkov
2022-05-16 13:25                           ` Pavel Begunkov
2022-05-16 13:57                             ` Daniel Harding
2022-05-16 15:13                               ` Daniel Harding
2022-05-16 18:13                                 ` Pavel Begunkov
2022-05-17  8:19                                   ` Christian Brauner
2022-05-17 10:31                                     ` Pavel Begunkov
2022-05-16 18:17                                 ` Thorsten Leemhuis
2022-05-16 18:22                                   ` Jens Axboe
2022-05-16 18:34                                     ` Thorsten Leemhuis
2022-05-16 18:39                                       ` Jens Axboe
2022-05-16 19:07                                         ` Thorsten Leemhuis
2022-05-16 19:14                                           ` Jens Axboe
