From: Jann Horn <jannh@google.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-aio@kvack.org, linux-block@vger.kernel.org,
	Linux API <linux-api@vger.kernel.org>,
	hch@lst.de, jmoyer@redhat.com, Avi Kivity <avi@scylladb.com>,
	Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH 05/19] Add io_uring IO interface
Date: Fri, 8 Feb 2019 23:12:03 +0100	[thread overview]
Message-ID: <CAG48ez2Qc9XOApLRb5fnNiOjxaURO8vjZ-EHX7g25gje3weZ6A@mail.gmail.com> (raw)
In-Reply-To: <20190208173423.27014-6-axboe@kernel.dk>

On Fri, Feb 8, 2019 at 6:34 PM Jens Axboe <axboe@kernel.dk> wrote:
> The submission queue (SQ) and completion queue (CQ) rings are shared
> between the application and the kernel. This eliminates the need to
> copy data back and forth to submit and complete IO.
>
> IO submissions use the io_uring_sqe data structure, and completions
> are generated in the form of io_uring_cqe data structures. The SQ
> ring is an index into the io_uring_sqe array, which makes it possible
> to submit a batch of IOs without them being contiguous in the ring.
> The CQ ring is always contiguous, as completion events are inherently
> unordered, and hence any io_uring_cqe entry can point back to an
> arbitrary submission.
>
> Two new system calls are added for this:
>
> io_uring_setup(entries, params)
>         Sets up an io_uring instance for doing async IO. On success,
>         returns a file descriptor that the application can mmap to
>         gain access to the SQ ring, CQ ring, and io_uring_sqes.
>
> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>         Initiates IO against the rings mapped to this fd, or waits for
>         them to complete, or both. The behavior is controlled by the
>         parameters passed in. If 'to_submit' is non-zero, then we'll
>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>         kernel will wait for 'min_complete' events, if they aren't
>         already available. It's valid to set IORING_ENTER_GETEVENTS
>         and 'min_complete' == 0 at the same time, this allows the
>         kernel to return already completed events without waiting
>         for them. This is useful only for polling, as for IRQ
>         driven IO, the application can just check the CQ ring
>         without entering the kernel.
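
(Just to check that I'm reading the io_uring_enter() semantics right: a
"submit a batch and wait for at least one completion" call would look
roughly like

        io_uring_enter(ring_fd, to_submit, 1, IORING_ENTER_GETEVENTS,
                       NULL, 0);

with ring_fd being the fd returned by io_uring_setup(), and the
IORING_ENTER_GETEVENTS + min_complete == 0 combination is purely a
"flush whatever has already completed" operation for the polled case?)
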
>
> With this setup, it's possible to do async IO with a single system
> call. Future developments will enable polled IO with this interface,
> and polled submission as well. The latter will enable an application
> to do IO without doing ANY system calls at all.
>
> For IRQ driven IO, an application only needs to enter the kernel for
> completions if it wants to wait for them to occur.
>
> Each io_uring is backed by a workqueue, to support buffered async IO
> as well. We will only punt to an async context if the command would
> need to wait for IO on the device side. Any data that can be accessed
> directly in the page cache is done inline. This avoids the slowness
> issue of usual threadpools, since cached data is accessed as quickly
> as a sync interface.
[...]
> +static void io_commit_cqring(struct io_ring_ctx *ctx)
> +{
> +       struct io_cq_ring *ring = ctx->cq_ring;
> +
> +       if (ctx->cached_cq_tail != ring->r.tail) {

I know this is very unlikely to actually matter, but can we please
have READ_ONCE() on *every* shared-memory read, not just in the places
that look like they might plausibly blow up otherwise? Partly because
I'm not comfortable relying on compiler internals for when the
compiler might decide to generate dangerous double-reads (if switch()
can blow up, why shouldn't the compiler be able to make if() blow up
too, if it wants to?), and partly because I'd like it to be as clear
as possible to the reader which memory is shared with userspace.
Sorry, shared memory is a bit of a pet peeve of mine.
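
To make it concrete, what I'm asking for is essentially this (sketch
only, with the same treatment for every other load from the shared
rings):

        /* ring->r.tail is shared with userspace, so annotate the load */
        if (ctx->cached_cq_tail != READ_ONCE(ring->r.tail)) {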

> +               /* order cqe stores with ring update */
> +               smp_wmb();
> +               WRITE_ONCE(ring->r.tail, ctx->cached_cq_tail);
> +               /* write side barrier of tail update, app has read side */
> +               smp_wmb();
> +
> +               if (wq_has_sleeper(&ctx->cq_wait)) {
> +                       wake_up_interruptible(&ctx->cq_wait);
> +                       kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
> +               }
> +       }
> +}
[...]
> +static void io_cqring_fill_event(struct io_ring_ctx *ctx, u64 ki_user_data,
> +                                long res, unsigned ev_flags)
> +{
> +       struct io_uring_cqe *cqe;
> +
> +       /*
> +        * If we can't get a cq entry, userspace overflowed the
> +        * submission (by quite a lot). Increment the overflow count in
> +        * the ring.
> +        */
> +       cqe = io_get_cqring(ctx);
> +       if (cqe) {
> +               cqe->user_data = ki_user_data;
> +               cqe->res = res;
> +               cqe->flags = ev_flags;

Please use WRITE_ONCE() for stores like these.

> +       } else
> +               ctx->cq_ring->overflow++;
> +}
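
Something like this is what I have in mind (untested sketch; I've also
annotated the overflow bump, since that counter lives in the shared CQ
ring as well):

        cqe = io_get_cqring(ctx);
        if (cqe) {
                WRITE_ONCE(cqe->user_data, ki_user_data);
                WRITE_ONCE(cqe->res, res);
                WRITE_ONCE(cqe->flags, ev_flags);
        } else {
                WRITE_ONCE(ctx->cq_ring->overflow,
                           READ_ONCE(ctx->cq_ring->overflow) + 1);
        }
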
[...]
> +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> +                          const struct sqe_submit *s, bool force_nonblock)
> +{
> +       ssize_t ret;
> +       int opcode;
> +
> +       if (unlikely(s->index >= ctx->sq_entries))
> +               return -EINVAL;
> +       req->user_data = READ_ONCE(s->sqe->user_data);
> +
> +       opcode = READ_ONCE(s->sqe->opcode);

There might be a sneaky bug here. Consider the following scenario:

1. request gets submitted from io_sq_wq_submit_work() with opcode
IORING_OP_READV, io_read() is invoked
2. io_read() looks up the file, taking a reference to it
3. call_read_iter() returns -EAGAIN
4. io_read() returns -EAGAIN without dropping its reference on the
file (because it expects that it'll be called again)
5. __io_submit_sqe() returns -EAGAIN
6. io_sq_wq_submit_work() loops back and retries __io_submit_sqe()
7. __io_submit_sqe() reads opcode again, this time it's IORING_OP_NOP
8. io_nop() gets called
9. io_nop() uses io_free_req() to delete the request without dropping
its reference on the file

So that's a file reference leak, I think?
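
One way to close that hole might be to read the opcode exactly once,
when the sqe is first picked up, and stash it in the request so the
retry path never looks at the shared memory again, something like
(rough sketch, the req->opcode field is made up for illustration):

        /* in io_submit_sqe(), before the first __io_submit_sqe() call */
        req->opcode = READ_ONCE(s->sqe->opcode);

        /* ... and __io_submit_sqe() then switches on req->opcode
         * instead of re-reading s->sqe->opcode from shared memory */
        switch (req->opcode) {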

> +       switch (opcode) {
> +       case IORING_OP_NOP:
> +               ret = io_nop(req, req->user_data);
> +               break;
> +       case IORING_OP_READV:
> +               ret = io_read(req, s, force_nonblock);
> +               break;
> +       case IORING_OP_WRITEV:
> +               ret = io_write(req, s, force_nonblock);
> +               break;
> +       default:
> +               ret = -EINVAL;
> +               break;
> +       }
> +
> +       return ret;
> +}
[...]
> +static int io_submit_sqe(struct io_ring_ctx *ctx, const struct sqe_submit *s)
> +{
> +       struct io_kiocb *req;
> +       ssize_t ret;
> +
> +       /* enforce forwards compatibility on users */
> +       if (unlikely(s->sqe->flags))
> +               return -EINVAL;
> +
> +       req = io_get_req(ctx);
> +       if (unlikely(!req))
> +               return -EAGAIN;
> +
> +       req->rw.ki_filp = NULL;
> +
> +       ret = __io_submit_sqe(ctx, req, s, true);
> +       if (ret == -EAGAIN) {
> +               memcpy(&req->submit, s, sizeof(*s));
> +               INIT_WORK(&req->work, io_sq_wq_submit_work);
> +               queue_work(ctx->sqo_wq, &req->work);
> +               ret = 0;
> +       }
> +       if (ret)
> +               io_free_req(req);
> +
> +       return ret;
> +}
> +
> +static void io_commit_sqring(struct io_ring_ctx *ctx)
> +{
> +       struct io_sq_ring *ring = ctx->sq_ring;
> +
> +       if (ctx->cached_sq_head != ring->r.head) {
> +               WRITE_ONCE(ring->r.head, ctx->cached_sq_head);
> +               /* write side barrier of head update, app has read side */
> +               smp_wmb();

Can you elaborate on what this memory barrier is doing? Don't you need
some sort of memory barrier *before* the WRITE_ONCE(), to ensure that
nobody sees the updated head before you're done reading the submission
queue entry? Or is that barrier elsewhere?
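
Naively I would have expected either an smp_mb() before the
WRITE_ONCE(), or something like

        /* order the sqe loads before publishing the new head */
        smp_store_release(&ring->r.head, ctx->cached_sq_head);

so that the loads from the sqe can't be reordered past the point where
the slot is handed back to userspace; but maybe that ordering is
provided somewhere I'm not seeing.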

> +       }
> +}
> +
> +/*
> + * Undo last io_get_sqring()
> + */
> +static void io_drop_sqring(struct io_ring_ctx *ctx)
> +{
> +       ctx->cached_sq_head--;
> +}
[...]
> +static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
> +{
> +       if (capable(CAP_IPC_LOCK))
> +               return;

Hrm... what happens if root creates a uring, drops CAP_IPC_LOCK, and
then destroys the uring? Will the pages get subtracted from
->locked_vm even though they were never added to it, causing a
wraparound?

You might want to make sure that ctx->user is set if and only if the
creator didn't have CAP_IPC_LOCK, and then just do a `user == NULL`
check instead of a `capable(...)` check. Or you could do what BPF is
doing (AFAICS) and not treat root specially - root can just bump the
rlimit if necessary.
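
In other words, roughly (sketch only, assuming ctx->user is the
pointer used for the accounting):

        /* at io_uring_setup() time */
        if (!capable(CAP_IPC_LOCK))
                ctx->user = get_uid(current_user());

        /* at teardown */
        if (ctx->user) {
                io_unaccount_mem(ctx->user, nr_pages);
                free_uid(ctx->user);
        }

That ties the unaccounting to whether accounting actually happened,
instead of to whatever capabilities the task destroying the ring
happens to have.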

> +       atomic_long_sub(nr_pages, &user->locked_vm);
> +}
> +
> +static int io_account_mem(struct user_struct *user, unsigned long nr_pages)
> +{
> +       unsigned long page_limit, cur_pages, new_pages;
> +
> +       if (capable(CAP_IPC_LOCK))
> +               return 0;
> +
> +       /* Don't allow more pages than we can safely lock */
> +       page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> +
> +       do {
> +               cur_pages = atomic_long_read(&user->locked_vm);
> +               new_pages = cur_pages + nr_pages;
> +               if (new_pages > page_limit)
> +                       return -ENOMEM;
> +       } while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
> +                                       new_pages) != cur_pages);
> +
> +       return 0;
> +}
[...]
> +config IO_URING
> +       bool "Enable IO uring support" if EXPERT
> +       select ANON_INODES
> +       default y
> +       help
> +         This option enables support for the io_uring interface, enabling
> +         applications to submit and completion IO through submission and
> +         completion rings that are shared between the kernel and application.

Nit: I can't parse this part: "enabling applications to submit and
completion IO"
