From: Jens Axboe <axboe@kernel.dk>
To: Jann Horn <jannh@google.com>
Cc: linux-aio@kvack.org, linux-block@vger.kernel.org,
	Linux API <linux-api@vger.kernel.org>,
	hch@lst.de, jmoyer@redhat.com, Avi Kivity <avi@scylladb.com>,
	Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH 05/19] Add io_uring IO interface
Date: Tue, 12 Feb 2019 15:06:16 -0700
Message-ID: <f20b8e79-d10f-6316-561f-3c77cab71ee0@kernel.dk>
In-Reply-To: <1ca9f039-c6f0-cae7-8484-7db0a4e4e213@kernel.dk>

On 2/12/19 3:03 PM, Jens Axboe wrote:
> On 2/12/19 2:42 PM, Jann Horn wrote:
>> On Sat, Feb 9, 2019 at 5:15 AM Jens Axboe <axboe@kernel.dk> wrote:
>>> On 2/8/19 3:12 PM, Jann Horn wrote:
>>>> On Fri, Feb 8, 2019 at 6:34 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>>>> between the application and the kernel. This eliminates the need to
>>>>> copy data back and forth to submit and complete IO.
>>>>>
>>>>> IO submissions use the io_uring_sqe data structure, and completions
>>>>> are generated in the form of io_uring_cqe data structures. The SQ
>>>>> ring holds indices into the io_uring_sqe array, which makes it
>>>>> possible to submit a batch of IOs without them being contiguous in
>>>>> the ring.
>>>>> The CQ ring is always contiguous, as completion events are inherently
>>>>> unordered, and hence any io_uring_cqe entry can point back to an
>>>>> arbitrary submission.
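
As a sketch of that indirection in C11 userspace terms (the pointer
names sq_tail, sq_ring_mask, sq_array, and sqes are illustrative
stand-ins for the mmap'ed ring fields, not names from the patch):

	unsigned tail = *sq_tail;              /* private until published */
	unsigned idx  = tail & *sq_ring_mask;  /* wrap to a ring slot */

	sqes[idx].opcode = IORING_OP_READV;    /* fill in the SQE ... */
	sq_array[idx] = idx;                   /* ring entry -> SQE index */

	/* Publish: the release store orders the SQE and array writes
	 * before the tail update that makes them visible to the kernel. */
	atomic_store_explicit((_Atomic unsigned *)sq_tail, tail + 1,
			      memory_order_release);

Because the ring carries indices rather than the SQEs themselves, one
batch may reference non-contiguous SQE slots.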
>>>>>
>>>>> Two new system calls are added for this:
>>>>>
>>>>> io_uring_setup(entries, params)
>>>>>         Sets up an io_uring instance for doing async IO. On success,
>>>>>         returns a file descriptor that the application can mmap to
>>>>>         gain access to the SQ ring, CQ ring, and io_uring_sqes.
>>>>>
>>>>> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>>>>>         Initiates IO against the rings mapped to this fd, or waits for
>>>>>         them to complete, or both. The behavior is controlled by the
>>>>>         parameters passed in. If 'to_submit' is non-zero, then we'll
>>>>>         try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
>>>>>         kernel will wait for 'min_complete' events, if they aren't
>>>>>         already available. It's valid to set IORING_ENTER_GETEVENTS
>>>>>         with 'min_complete' == 0; this allows the kernel to return
>>>>>         already completed events without waiting for them. This is
>>>>>         useful only for polling, since for IRQ-driven IO the
>>>>>         application can just check the CQ ring without entering the
>>>>>         kernel.
>>>>>
>>>>> With this setup, it's possible to do async IO with a single system
>>>>> call. Future developments will enable polled IO with this interface,
>>>>> and polled submission as well. The latter will enable an application
>>>>> to do IO without doing ANY system calls at all.
>>>>>
>>>>> For IRQ driven IO, an application only needs to enter the kernel for
>>>>> completions if it wants to wait for them to occur.
>>>>>
>>>>> Each io_uring is backed by a workqueue, to support buffered async IO
>>>>> as well. We will only punt to an async context if the command would
>>>>> need to wait for IO on the device side. Any data that can be accessed
>>>>> directly in the page cache is done inline. This avoids the slowness
>>>>> issues of typical threadpools, since cached data is accessed just as
>>>>> quickly as with a sync interface.
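
To make the two-syscall flow concrete, a minimal sketch of driving it
from userspace. This assumes the UAPI as it was later merged (the
io_uring_params layout, the IORING_OFF_* mmap offsets, and syscall
numbers 425/426 on x86-64, which were still provisional at the time of
this posting); error handling is elided:

	#include <linux/io_uring.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct io_uring_params p;
		memset(&p, 0, sizeof(p));

		/* Create an instance with room for 8 SQEs */
		int fd = syscall(__NR_io_uring_setup, 8, &p);

		/* Map the SQ ring; head, tail, mask, and the index
		 * array live at the offsets reported in p.sq_off. */
		void *sq = mmap(NULL,
				p.sq_off.array + p.sq_entries * sizeof(__u32),
				PROT_READ | PROT_WRITE, MAP_SHARED,
				fd, IORING_OFF_SQ_RING);
		(void)sq;

		/* The SQE array and CQ ring are mapped the same way,
		 * via IORING_OFF_SQES and IORING_OFF_CQ_RING. */

		/* Submit one SQE and wait for one completion in a
		 * single system call. */
		syscall(__NR_io_uring_enter, fd, 1, 1,
			IORING_ENTER_GETEVENTS, NULL, 0);
		return 0;
	}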
>> [...]
>>>>> +static int io_submit_sqe(struct io_ring_ctx *ctx, const struct sqe_submit *s)
>>>>> +{
>>>>> +       struct io_kiocb *req;
>>>>> +       ssize_t ret;
>>>>> +
>>>>> +       /* enforce forwards compatibility on users */
>>>>> +       if (unlikely(s->sqe->flags))
>>>>> +               return -EINVAL;
>>>>> +
>>>>> +       req = io_get_req(ctx);
>>>>> +       if (unlikely(!req))
>>>>> +               return -EAGAIN;
>>>>> +
>>>>> +       req->rw.ki_filp = NULL;
>>>>> +
>>>>> +       ret = __io_submit_sqe(ctx, req, s, true);
>>>>> +       if (ret == -EAGAIN) {
>>>>> +               memcpy(&req->submit, s, sizeof(*s));
>>>>> +               INIT_WORK(&req->work, io_sq_wq_submit_work);
>>>>> +               queue_work(ctx->sqo_wq, &req->work);
>>>>> +               ret = 0;
>>>>> +       }
>>>>> +       if (ret)
>>>>> +               io_free_req(req);
>>>>> +
>>>>> +       return ret;
>>>>> +}
>>>>> +
>>>>> +static void io_commit_sqring(struct io_ring_ctx *ctx)
>>>>> +{
>>>>> +       struct io_sq_ring *ring = ctx->sq_ring;
>>>>> +
>>>>> +       if (ctx->cached_sq_head != ring->r.head) {
>>>>> +               WRITE_ONCE(ring->r.head, ctx->cached_sq_head);
>>>>> +               /* write side barrier of head update, app has read side */
>>>>> +               smp_wmb();
>>>>
>>>> Can you elaborate on what this memory barrier is doing? Don't you need
>>>> some sort of memory barrier *before* the WRITE_ONCE(), to ensure that
>>>> nobody sees the updated head before you're done reading the submission
>>>> queue entry? Or is that barrier elsewhere?
>>>
>>> The matching read barrier is in the application, it must do that before
>>> reading ->head for the SQ ring.
>>>
>>> For the other barrier, since the ring->r.head now has a READ_ONCE(),
>>> that should be all we need to ensure that loads are done.
>>
>> READ_ONCE() / WRITE_ONCE() are not hardware memory barriers that enforce
>> ordering with regard to concurrent execution on other cores. They are
>> only compiler barriers, influencing the order in which the compiler
>> emits things. (Well, unless you're on alpha, where READ_ONCE() implies
>> a memory barrier that prevents reordering of dependent reads.)
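
The gap can be seen with the standard store-buffering litmus test (a
generic sketch, not code from the patch), where x == y == 0 initially
and each column runs on its own CPU:

	/* CPU 0 */                    /* CPU 1 */
	WRITE_ONCE(x, 1);              WRITE_ONCE(y, 1);
	r0 = READ_ONCE(y);             r1 = READ_ONCE(x);

	/* r0 == 0 && r1 == 0 is a permitted outcome: the CPU may let
	 * each load run ahead of the preceding store (store-load
	 * reordering is allowed even on x86). An smp_mb() between the
	 * store and the load on both sides forbids it; WRITE_ONCE()
	 * and READ_ONCE() alone do not. */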
>>
>> As far as I can tell, between the READ_ONCE(ring->array[...]) in
>> io_get_sqring() and the WRITE_ONCE() in io_commit_sqring(), you have
>> no *hardware* memory barrier that prevents reordering against
>> concurrently running userspace code. As far as I can tell, the
>> following could happen:
>>
>>  - The kernel reads from ring->array in io_get_sqring(), then updates
>> the head in io_commit_sqring(). The CPU reorders the memory accesses
>> such that the write to the head becomes visible before the read from
>> ring->array has completed.
>>  - Userspace observes the write to the head and reuses the array slots
>> the kernel has freed with the write, clobbering ring->array before the
>> kernel reads from ring->array.
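
One way to close that window, sketched along the lines io_uring later
adopted, is to publish the head with a release store, which orders the
earlier ring->array loads before the head update becomes visible:

	static void io_commit_sqring(struct io_ring_ctx *ctx)
	{
		struct io_sq_ring *ring = ctx->sq_ring;

		if (ctx->cached_sq_head != ring->r.head)
			/* Orders all prior loads/stores before the head
			 * update; pairs with the application's acquire
			 * load of head. */
			smp_store_release(&ring->r.head,
					  ctx->cached_sq_head);
	}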
> 
> I'd say this is highly theoretical for the normal use case, as we
> will have submitted IO in between. Hence the load must have been done.
> The only case that needs it is the sq thread case, since we bundle
> those up. This should do it:

Actually, I take that back: in this particular case the sq thread is
the only one that reads it, hence it will have done a full submission
of the SQE entries it read before starting a new round. Not that it
matters for that case, as a preempt would have implied a full barrier
anyway.

The non-sq thread case does not need the store-vs-load ordering
barrier, as SQEs are either discarded or submitted before we commit
the sqring. Since that's the case, by definition all loads are done.
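
For completeness, the matching application side in C11 atomics, as a
sketch (sq_head, sq_tail, sq_ring_entries, and sq_ring_mask are
illustrative names for the mmap'ed fields):

	#include <stdatomic.h>

	/* Acquire pairs with the kernel's release store of head. */
	unsigned head = atomic_load_explicit(
			(_Atomic unsigned *)sq_head,
			memory_order_acquire);
	unsigned tail = *sq_tail;  /* only the app writes the tail */

	if (tail - head < *sq_ring_entries) {
		/* Slot (tail & *sq_ring_mask) is free to reuse: the
		 * acquire above guarantees the kernel finished reading
		 * the old SQE before we observed the head update that
		 * freed the slot. */
	}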

-- 
Jens Axboe


Thread overview: 66+ messages

2019-02-08 17:34 [PATCHSET v13] io_uring IO interface Jens Axboe
2019-02-08 17:34 ` [PATCH 01/19] fs: add an iopoll method to struct file_operations Jens Axboe
2019-02-09  9:20   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 02/19] block: wire up block device iopoll method Jens Axboe
2019-02-09  9:22   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 03/19] block: add bio_set_polled() helper Jens Axboe
2019-02-09  9:24   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 04/19] iomap: wire up the iopoll method Jens Axboe
2019-02-09  9:25   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 05/19] Add io_uring IO interface Jens Axboe
2019-02-08 22:12   ` Jann Horn
2019-02-09  4:15     ` Jens Axboe
2019-02-12 21:42       ` Jann Horn
2019-02-12 22:03         ` Jens Axboe
2019-02-12 22:06           ` Jens Axboe [this message]
2019-02-12 22:40             ` Jann Horn
2019-02-12 22:45               ` Jens Axboe
2019-02-12 22:52                 ` Jens Axboe
2019-02-12 22:57                   ` Jann Horn
2019-02-12 23:00                     ` Jens Axboe
2019-02-12 23:11                       ` Jann Horn
2019-02-12 23:19                         ` Jens Axboe
2019-02-12 23:28                           ` Jann Horn
2019-02-12 23:46                             ` Jens Axboe
2019-02-12 23:53                               ` Jens Axboe
2019-02-13  0:07                                 ` Andy Lutomirski
2019-02-13  0:14                                   ` Jann Horn
2019-02-13  0:24                                   ` Jens Axboe
2019-02-09  9:35   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 06/19] io_uring: add fsync support Jens Axboe
2019-02-08 22:36   ` Jann Horn
2019-02-08 23:31     ` Jens Axboe
2019-02-09  9:37   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 07/19] io_uring: support for IO polling Jens Axboe
2019-02-09  9:39   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 08/19] fs: add fget_many() and fput_many() Jens Axboe
2019-02-09  9:41   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 09/19] io_uring: use fget/fput_many() for file references Jens Axboe
2019-02-09  9:42   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 10/19] io_uring: batch io_kiocb allocation Jens Axboe
2019-02-09  9:43   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio Jens Axboe
2019-02-09  9:45   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 12/19] io_uring: add support for pre-mapped user IO buffers Jens Axboe
2019-02-08 22:54   ` Jann Horn
2019-02-08 23:38     ` Jens Axboe
2019-02-09 16:50       ` Jens Axboe
2019-02-09  9:48   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 13/19] net: split out functions related to registering inflight socket files Jens Axboe
2019-02-08 19:49   ` David Miller
2019-02-08 19:51     ` Jens Axboe
2019-02-09  9:49   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 14/19] io_uring: add file set registration Jens Axboe
2019-02-08 20:26   ` Jann Horn
2019-02-09  0:16     ` Jens Axboe
2019-02-09  9:50   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 15/19] io_uring: add submission polling Jens Axboe
2019-02-09  9:53   ` Hannes Reinecke
2019-02-08 17:34 ` [PATCH 16/19] io_uring: add io_kiocb ref count Jens Axboe
2019-02-08 17:34 ` [PATCH 17/19] io_uring: add support for IORING_OP_POLL Jens Axboe
2019-02-08 17:34 ` [PATCH 18/19] io_uring: allow workqueue item to handle multiple buffered requests Jens Axboe
2019-02-08 17:34 ` [PATCH 19/19] io_uring: add io_uring_event cache hit information Jens Axboe
2019-02-09 21:13 [PATCHSET v14] io_uring IO interface Jens Axboe
2019-02-09 21:13 ` [PATCH 05/19] Add " Jens Axboe
2019-02-10 12:03   ` Thomas Gleixner
2019-02-10 14:19     ` Jens Axboe
2019-02-11 19:00 [PATCHSET v15] " Jens Axboe
2019-02-11 19:00 ` [PATCH 05/19] Add " Jens Axboe
