From: "Stefan Bühler" <source@stbuehler.de>
To: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: io_uring: closing / release
Date: Sat, 11 May 2019 18:26:52 +0200	[thread overview]
Message-ID: <e57a384b-fa46-7caa-3800-bcc9bc6ced90@stbuehler.de> (raw)
In-Reply-To: <87f76da1-5525-086e-7a9c-3bdb2ad12188@stbuehler.de>

Hi,

On 27.04.19 23:07, Stefan Bühler wrote:
> Hi,
> 
> On 23.04.19 22:31, Jens Axboe wrote:
>> On 4/23/19 1:06 PM, Stefan Bühler wrote:
>>> I have one other question: is there any way to cancel an IO read/write
>>> operation? I don't think closing io_uring has any effect, what about
>>> closing the files I'm reading/writing?  (Adding cancellation to kiocb
>>> sounds like a non-trivial task; and I don't think it already supports it.)
>>
>> There is no way to do that. If you look at existing aio, nobody supports
>> that either. Hence io_uring doesn't export any sort of cancellation outside
>> of the poll case where we can handle it internally to io_uring.
>>
>> If you look at storage, then generally IO doesn't wait around in the stack,
>> it's issued. Most hardware only supports queue-abort-like cancellation,
>> which isn't useful at all.
>>
>> So I don't think that will ever happen.
>>
>>> So cleanup in general seems hard to me: do I have to wait for all
>>> read/write operations to complete so I can safely free all buffers
>>> before I close the event loop?
>>
>> The ring exit waits for IO to complete already.
> 
> I now understand at least how that part is working;
> io_ring_ctx_wait_and_kill calls wait_for_completion(&ctx->ctx_done),
> which only completes after all references are gone; each pending job
> keeps a reference.
> 
> But wait_for_completion is not interruptible; so if there are "stuck"
> jobs, even root can no longer kill the task (afaict).
> 
> Once e.g. readv is working on pipes/sockets (I have some local patches
> here for that), you can easily end up in a situation where a
> socketpair() or a pipe() is still alive, but the read will never finish
> (I can trigger this problem with an explicit close(uring_fd), not sure
> how to observe this on process exit).
> 
> For a socketpair() even both ends could be kept alive by never ending
> read jobs.
> 
> Using wait_for_completion seems like a really bad idea to me; this is a
> problem for io_uring_register too.

As far as I know this is not a problem in 5.1 yet, as reads on pipes and
sockets still block during submission, and SQPOLL is restricted to
CAP_SYS_ADMIN.

But once my "punt to workers if file doesn't support async" patch (or
something similar) goes in, this will become a real problem.

My current trigger looks like this: create socketpair(), submit read
from both ends (using the same iovec... who cares), wait for the workers
to pick up the reads, munmap everything and exit.

The kernel will then clean up the files: but the sockets are still in use
by io_uring, and it will only close the io_uring context, which then
gets stuck in io_ring_ctx_wait_and_kill.
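For reference, the release path in question looks roughly like this
(paraphrased from fs/io_uring.c in 5.1, trimmed to the parts relevant
here — the function and field names come from the thread above):

```c
static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
{
	percpu_ref_kill(&ctx->refs);		/* stop accepting new requests */
	io_poll_remove_all(ctx);		/* poll requests can be cancelled */
	io_iopoll_reap_events(ctx);
	wait_for_completion(&ctx->ctx_done);	/* uninterruptible: each pending
						 * job holds a reference, so a
						 * read that never finishes
						 * blocks here forever */
	io_ring_ctx_free(ctx);
}
```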

In my qemu test environment "nobody" can leak 16 contexts before hitting
some resource limits.

cheers,
Stefan
