* [GIT PULL] io_uring updates for 5.16-rc1
@ 2021-10-31 19:41 Jens Axboe
  2021-11-01 16:49 ` Linus Torvalds
  2021-11-01 17:28 ` pr-tracker-bot
  0 siblings, 2 replies; 4+ messages in thread
From: Jens Axboe @ 2021-10-31 19:41 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: io-uring

Hi Linus,

Sitting on top of the core block branch due to the polling changes, here
are the io_uring updates for the 5.16-rc1 merge window. Light on new
features (basically just the hybrid async mode support); outside of
that, it's fixes, cleanups, and performance improvements. In detail:

- Add ring related information to the fdinfo output (Hao)

- Hybrid async mode (Hao)

- Support for batched issue on block (me)

- sqe error trace improvement (me)

- IOPOLL efficiency improvements (Pavel)

- submit state cleanups and improvements (Pavel)

- Completion side improvements (Pavel)

- Drain improvements (Pavel)

- Buffer selection cleanups (Pavel)

- Fixed file node improvements (Pavel)

- io-wq setup cancelation fix (Pavel)

- Various other performance improvements and cleanups (Pavel)

- Misc fixes (Arnd, Bixuan, Changcheng, Hao, me, Noah)

This will throw two merge conflicts; see below for how I resolved them.
There are two spots: one is trivial, and the other needs
io_queue_linked_timeout() moved into io_queue_sqe_arm_apoll().

Please pull!


The following changes since commit e0d78afeb8d190164a823d5ef5821b0b3802af33:

  block: fix too broad elevator check in blk_mq_free_request() (2021-10-19 05:48:15 -0600)

are available in the Git repository at:

  git://git.kernel.dk/linux-block.git tags/for-5.16/io_uring-2021-10-29

for you to fetch changes up to 1d5f5ea7cb7d15b9fb1cc82673ebb054f02cd7d2:

  io-wq: remove worker to owner tw dependency (2021-10-29 09:49:33 -0600)

----------------------------------------------------------------
for-5.16/io_uring-2021-10-29

----------------------------------------------------------------
Arnd Bergmann (1):
      io_uring: warning about unused-but-set parameter

Bixuan Cui (1):
      io-wq: Remove duplicate code in io_workqueue_create()

Changcheng Deng (1):
      io_uring: Use ERR_CAST() instead of ERR_PTR(PTR_ERR())

Hao Xu (4):
      io_uring: add more uring info to fdinfo for debug
      io_uring: return boolean value for io_alloc_async_data
      io_uring: split logic of force_nonblock
      io_uring: implement async hybrid mode for pollable requests

Jens Axboe (4):
      io_uring: dump sqe contents if issue fails
      io_uring: inform block layer of how many requests we are submitting
      io_uring: don't assign write hint in the read path
      io_uring: harder fdinfo sq/cq ring iterating

Noah Goldstein (1):
      fs/io_uring: Prioritise checking faster conditions first in io_write

Pavel Begunkov (85):
      io_uring: kill off ios_left
      io_uring: inline io_dismantle_req
      io_uring: inline linked part of io_req_find_next
      io_uring: dedup CQE flushing non-empty checks
      io_uring: kill extra wake_up_process in tw add
      io_uring: remove ctx referencing from complete_post
      io_uring: optimise io_req_init() sqe flags checks
      io_uring: mark having different creds unlikely
      io_uring: force_nonspin
      io_uring: make io_do_iopoll return number of reqs
      io_uring: use slist for completion batching
      io_uring: remove allocation cache array
      io-wq: add io_wq_work_node based stack
      io_uring: replace list with stack for req caches
      io_uring: split iopoll loop
      io_uring: use single linked list for iopoll
      io_uring: add a helper for batch free
      io_uring: convert iopoll_completed to store_release
      io_uring: optimise batch completion
      io_uring: inline completion batching helpers
      io_uring: don't pass tail into io_free_batch_list
      io_uring: don't pass state to io_submit_state_end
      io_uring: deduplicate io_queue_sqe() call sites
      io_uring: remove drain_active check from hot path
      io_uring: split slow path from io_queue_sqe
      io_uring: inline hot path of __io_queue_sqe()
      io_uring: reshuffle queue_sqe completion handling
      io_uring: restructure submit sqes to_submit checks
      io_uring: kill off ->inflight_entry field
      io_uring: comment why inline complete calls io_clean_op()
      io_uring: disable draining earlier
      io_uring: extra a helper for drain init
      io_uring: don't return from io_drain_req()
      io_uring: init opcode in io_init_req()
      io_uring: clean up buffer select
      io_uring: add flag to not fail link after timeout
      io_uring: optimise kiocb layout
      io_uring: add more likely/unlikely() annotations
      io_uring: delay req queueing into compl-batch list
      io_uring: optimise request allocation
      io_uring: optimise INIT_WQ_LIST
      io_uring: don't wake sqpoll in io_cqring_ev_posted
      io_uring: merge CQ and poll waitqueues
      io_uring: optimise ctx referencing by requests
      io_uring: mark cold functions
      io_uring: optimise io_free_batch_list()
      io_uring: control ->async_data with a REQ_F flag
      io_uring: remove struct io_completion
      io_uring: inline io_req_needs_clean()
      io_uring: inline io_poll_complete
      io_uring: correct fill events helpers types
      io_uring: optimise plugging
      io_uring: safer fallback_work free
      io_uring: reshuffle io_submit_state bits
      io_uring: optimise out req->opcode reloading
      io_uring: remove extra io_ring_exit_work wake up
      io_uring: fix io_free_batch_list races
      io_uring: optimise io_req_set_rsrc_node()
      io_uring: optimise rsrc referencing
      io_uring: consistent typing for issue_flags
      io_uring: prioritise read success path over fails
      io_uring: optimise rw comletion handlers
      io_uring: encapsulate rw state
      io_uring: optimise read/write iov state storing
      io_uring: optimise io_import_iovec nonblock passing
      io_uring: clean up io_import_iovec
      io_uring: rearrange io_read()/write()
      io_uring: optimise req->ctx reloads
      io_uring: kill io_wq_current_is_worker() in iopoll
      io_uring: optimise io_import_iovec fixed path
      io_uring: return iovec from __io_import_iovec
      io_uring: optimise fixed rw rsrc node setting
      io_uring: clean io_prep_rw()
      io_uring: arm poll for non-nowait files
      io_uring: combine REQ_F_NOWAIT_{READ,WRITE} flags
      io_uring: simplify io_file_supports_nowait()
      io-wq: use helper for worker refcounting
      io_uring: clean io_wq_submit_work()'s main loop
      io_uring: clean iowq submit work cancellation
      io_uring: check if opcode needs poll first on arming
      io_uring: don't try io-wq polling if not supported
      io_uring: clean up timeout async_data allocation
      io_uring: kill unused param from io_file_supports_nowait
      io_uring: clusterise ki_flags access in rw_prep
      io-wq: remove worker to owner tw dependency

 fs/io-wq.c                      |   58 +-
 fs/io-wq.h                      |   59 +-
 fs/io_uring.c                   | 1714 ++++++++++++++++++++-------------------
 include/trace/events/io_uring.h |   61 ++
 include/uapi/linux/io_uring.h   |    1 +
 5 files changed, 1045 insertions(+), 848 deletions(-)


diff --cc fs/io_uring.c
index 5c11b8442743,c887e4e19e9e..5d7234363b9f
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@@ -6945,7 -6906,33 +6903,33 @@@ static void io_queue_linked_timeout(str
  	io_put_req(req);
  }
  
- static void __io_queue_sqe(struct io_kiocb *req)
+ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
+ 	__must_hold(&req->ctx->uring_lock)
+ {
+ 	struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
+ 
+ 	switch (io_arm_poll_handler(req)) {
+ 	case IO_APOLL_READY:
+ 		if (linked_timeout) {
 -			io_unprep_linked_timeout(req);
++			io_queue_linked_timeout(linked_timeout);
+ 			linked_timeout = NULL;
+ 		}
+ 		io_req_task_queue(req);
+ 		break;
+ 	case IO_APOLL_ABORTED:
+ 		/*
+ 		 * Queued up for async execution, worker will release
+ 		 * submit reference when the iocb is actually submitted.
+ 		 */
+ 		io_queue_async_work(req, NULL);
+ 		break;
+ 	}
+ 
+ 	if (linked_timeout)
+ 		io_queue_linked_timeout(linked_timeout);
+ }
+ 
+ static inline void __io_queue_sqe(struct io_kiocb *req)
  	__must_hold(&req->ctx->uring_lock)
  {
  	struct io_kiocb *linked_timeout;
@@@ -10645,11 -10697,9 +10703,11 @@@ static __cold int io_unregister_iowq_af
  	return io_wq_cpu_affinity(tctx->io_wq, NULL);
  }
  
- static int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
- 					void __user *arg)
+ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+ 					       void __user *arg)
 +	__must_hold(&ctx->uring_lock)
  {
 +	struct io_tctx_node *node;
  	struct io_uring_task *tctx = NULL;
  	struct io_sq_data *sqd = NULL;
  	__u32 new_count[2];

-- 
Jens Axboe



* Re: [GIT PULL] io_uring updates for 5.16-rc1
  2021-10-31 19:41 [GIT PULL] io_uring updates for 5.16-rc1 Jens Axboe
@ 2021-11-01 16:49 ` Linus Torvalds
  2021-11-01 17:06   ` Jens Axboe
  2021-11-01 17:28 ` pr-tracker-bot
  1 sibling, 1 reply; 4+ messages in thread
From: Linus Torvalds @ 2021-11-01 16:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring

On Sun, Oct 31, 2021 at 12:41 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> This will throw two merge conflicts; see below for how I resolved them.
> There are two spots: one is trivial, and the other needs
> io_queue_linked_timeout() moved into io_queue_sqe_arm_apoll().

So I ended up resolving it the same way you did, because that was the
mindless direct thing.

But I don't much like it.

Basically, io_queue_sqe_arm_apoll() now ends up doing

        case IO_APOLL_READY:
                if (linked_timeout) {
                        io_queue_linked_timeout(linked_timeout);
                        linked_timeout = NULL;
                }
                io_req_task_queue(req);
                break;
    ...
        if (linked_timeout)
                io_queue_linked_timeout(linked_timeout);

and that really seems *completely* pointless. Notice how it does that

        if (linked_timeout)
                io_queue_linked_timeout()

basically twice, and sets linked_timeout to NULL just to avoid the second one...

Why isn't it just

        case IO_APOLL_READY:
                io_req_task_queue(req);
                break;
  ...
        if (linked_timeout)
                io_queue_linked_timeout(linked_timeout);

where the only difference would seem to be the order of operations
between io_req_task_queue() and io_queue_linked_timeout()?

Does the order of operations really matter here? As far as I can tell,
io_req_task_queue() really just queues up work for later, so it's not
really very ordered wrt that io_queue_linked_timeout(), and in the
_other_ case statement it's apparently fine to do that
io_queue_async_work() before the io_queue_linked_timeout()..

Again - I ended up resolving this the same way you had done, because I
don't know the exact rules here well enough to do anything else. But
it _looks_ a bit messy.

Hmm?

                 Linus


* Re: [GIT PULL] io_uring updates for 5.16-rc1
  2021-11-01 16:49 ` Linus Torvalds
@ 2021-11-01 17:06   ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2021-11-01 17:06 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: io-uring

On 11/1/21 10:49 AM, Linus Torvalds wrote:
> On Sun, Oct 31, 2021 at 12:41 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> This will throw two merge conflicts; see below for how I resolved them.
>> There are two spots: one is trivial, and the other needs
>> io_queue_linked_timeout() moved into io_queue_sqe_arm_apoll().
> 
> So I ended up resolving it the same way you did, because that was the
> mindless direct thing.
> 
> But I don't much like it.
> 
> Basically, io_queue_sqe_arm_apoll() now ends up doing
> 
>         case IO_APOLL_READY:
>                 if (linked_timeout) {
>                         io_queue_linked_timeout(linked_timeout);
>                         linked_timeout = NULL;
>                 }
>                 io_req_task_queue(req);
>                 break;
>     ...
>         if (linked_timeout)
>                 io_queue_linked_timeout(linked_timeout);
> 
> and that really seems *completely* pointless. Notice how it does that
> 
>         if (linked_timeout)
>                 io_queue_linked_timeout()
> 
> basically twice, and sets linked_timeout to NULL just to avoid the second one...
> 
> Why isn't it just
> 
>         case IO_APOLL_READY:
>                 io_req_task_queue(req);
>                 break;
>   ...
>         if (linked_timeout)
>                 io_queue_linked_timeout(linked_timeout);
> 
> where the only difference would seem to be the order of operations
> between io_req_task_queue() and io_queue_linked_timeout()?
> 
> Does the order of operations really matter here? As far as I can tell,
> io_req_task_queue() really just queues up work for later, so it's not
> really very ordered wrt that io_queue_linked_timeout(), and in the
> _other_ case statement it's apparently fine to do that
> io_queue_async_work() before the io_queue_linked_timeout()..
> 
> Again - I ended up resolving this the same way you had done, because I
> don't know the exact rules here well enough to do anything else. But
> it _looks_ a bit messy.

Yes I agree, and it's mostly just to keep the resolution simpler, as I
don't think the current construct makes much sense when both branches
end up queueing the linked timeout. I think the cleanup done here made
more sense in the earlier context, not now.

We'll get a cleanup done for this shortly.

-- 
Jens Axboe



* Re: [GIT PULL] io_uring updates for 5.16-rc1
  2021-10-31 19:41 [GIT PULL] io_uring updates for 5.16-rc1 Jens Axboe
  2021-11-01 16:49 ` Linus Torvalds
@ 2021-11-01 17:28 ` pr-tracker-bot
  1 sibling, 0 replies; 4+ messages in thread
From: pr-tracker-bot @ 2021-11-01 17:28 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Linus Torvalds, io-uring

The pull request you sent on Sun, 31 Oct 2021 13:41:47 -0600:

> git://git.kernel.dk/linux-block.git tags/for-5.16/io_uring-2021-10-29

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/8d1f01775f8ead7ee313403158be95bffdbb3638

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html
