[PATCH for-next v2 00/25] 5.20 cleanups and poll optimisations
From: Pavel Begunkov @ 2022-06-14 14:36 UTC
To: io-uring
Cc: Jens Axboe, asml.silence

Patches 1-13 are cleanups after splitting io_uring into files.

Patch 14 from Hao should remove some overhead from poll requests.

Patch 15 from Hao adds per-bucket spinlocks, and 16-19 do a bit of
cleanup on top. The downside of per-bucket spinlocks is that they add
an extra spin lock/unlock pair on the poll request completion side,
which shouldn't matter much once 20/25 is applied.
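
For illustration, the bucket layout is roughly the following (a sketch
of the idea, not the exact code from 15/25; struct and helper names are
illustrative):

#include <linux/spinlock.h>
#include <linux/list.h>

struct io_hash_bucket {
	spinlock_t		lock;
	struct hlist_head	list;
};

/* insert under the bucket's own lock; nr_buckets is a power of two */
static void io_hash_insert(struct io_hash_bucket *table,
			   unsigned int nr_buckets, u32 key,
			   struct hlist_node *node)
{
	struct io_hash_bucket *hb = &table[key & (nr_buckets - 1)];

	spin_lock(&hb->lock);
	hlist_add_head(node, &hb->list);
	spin_unlock(&hb->lock);
}

Contending cancellations then only serialise when they hash to the same
bucket, instead of all of them fighting over one ctx-wide lock.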

Patch 20 uses the inline completion infrastructure for poll requests,
which nicely improves performance when tw batching is good.
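
The gist, as a hedged sketch (the helper name below is illustrative,
not the actual function from the patch): when the task-work callback
already runs with the ring locked, the request is queued into the batch
that gets flushed once per tw run, rather than taking ->completion_lock
and committing the CQ ring for every single CQE:

static void poll_task_func(struct io_kiocb *req, bool *locked)
{
	if (*locked) {
		/* ring locked: defer into the per-run batch, which costs
		 * one lock + CQ commit for the whole batch */
		io_req_add_to_batch(req);	/* illustrative */
	} else {
		/* fallback: lock, post the CQE, commit per request */
		io_req_complete_post(req);
	}
}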

Patch 21 implements the userspace-visible side of
IORING_SETUP_SINGLE_ISSUER; it'll be used for poll requests and
later for spinlock optimisations.
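
From userspace it is just another setup flag; a minimal example,
assuming a liburing recent enough to define the flag:

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	/* promise the kernel that only this task will ever submit */
	int ret = io_uring_queue_init(8, &ring, IORING_SETUP_SINGLE_ISSUER);

	if (ret < 0) {
		/* older kernels reject unknown setup flags */
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}
	io_uring_queue_exit(&ring);
	return 0;
}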

Patches 22-25 introduce ->uring_lock protected cancellation hashing.
It requires us to grab ->uring_lock on the completion side, but saves
two spin lock/unlock pairs. We apply it automatically in cases where
the mutex is already likely to be held (see the 25/25 description), so
there is no additional mutex overhead and no potential latency problems.
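
Schematically it looks like the following (a sketch only, with
illustrative helpers; see 25/25 for the real logic):

static void io_poll_req_hash(struct io_kiocb *req, unsigned int issue_flags)
{
	struct io_ring_ctx *ctx = req->ctx;

	if (!(issue_flags & IO_URING_F_UNLOCKED)) {
		/* submitter already holds ->uring_lock, hashing is free */
		req->flags |= REQ_F_HASH_LOCKED;
		hash_under_uring_lock(ctx, req);	/* illustrative */
	} else {
		hash_under_bucket_lock(ctx, req);	/* illustrative */
	}
}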


Numbers:

Each iteration of the poll benchmark used here queues a batch of 32
POLLIN poll requests and then triggers all of them with a read (+write).
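
One iteration looks roughly like this (a liburing sketch of the same
pattern, not the benchmark itself):

#include <liburing.h>
#include <poll.h>
#include <unistd.h>

#define BATCH	32

static void iteration(struct io_uring *ring, int pipes[BATCH][2])
{
	struct io_uring_cqe *cqe;
	char c = 0;
	int i;

	for (i = 0; i < BATCH; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		io_uring_prep_poll_add(sqe, pipes[i][0], POLLIN);
	}
	io_uring_submit(ring);

	for (i = 0; i < BATCH; i++)	/* trigger every poll request */
		write(pipes[i][1], &c, 1);

	for (i = 0; i < BATCH; i++) {
		io_uring_wait_cqe(ring, &cqe);
		io_uring_cqe_seen(ring, cqe);
		read(pipes[i][0], &c, 1);	/* drain for the next iteration */
	}
}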

baseline (patches 1-19):
    11720 K req/s
base + 20 (+ inline completion infra):
    12419 K req/s, ~+6%
base + 20-25 (+ uring_lock hashing):
    12804 K req/s, +9.2% from the baseline, or +3.2% relative to patch 20.

Note that patch 20 only helps the performance of poll-add requests,
whereas 25/25 also improves apoll.

v2:
  don't move ->cancel_seq out of iowq work struct
  fix up single-issuer

Hao Xu (2):
  io_uring: poll: remove unnecessary req->ref set
  io_uring: switch cancel_hash to use per entry spinlock

Pavel Begunkov (23):
  io_uring: make reg buf init consistent
  io_uring: move defer_list to slow data
  io_uring: better caching for ctx timeout fields
  io_uring: refactor ctx slow data placement
  io_uring: move small helpers to headers
  io_uring: explain io_wq_work::cancel_seq placement
  io_uring: inline ->registered_rings
  io_uring: don't set REQ_F_COMPLETE_INLINE in tw
  io_uring: never defer-complete multi-apoll
  io_uring: kill REQ_F_COMPLETE_INLINE
  io_uring: refactor io_req_task_complete()
  io_uring: don't inline io_put_kbuf
  io_uring: remove check_cq checking from hot paths
  io_uring: pass poll_find lock back
  io_uring: clean up io_try_cancel
  io_uring: limit number hash buckets
  io_uring: clean up io_ring_ctx_alloc
  io_uring: use state completion infra for poll reqs
  io_uring: add IORING_SETUP_SINGLE_ISSUER
  io_uring: pass hash table into poll_find
  io_uring: introduce a struct for hash table
  io_uring: propagate locking state to poll cancel
  io_uring: mutex locked poll hashing

 include/uapi/linux/io_uring.h |   5 +-
 io_uring/cancel.c             |  23 +++-
 io_uring/cancel.h             |   4 +-
 io_uring/fdinfo.c             |  11 +-
 io_uring/io-wq.h              |   1 +
 io_uring/io_uring.c           | 145 +++++++++++-----------
 io_uring/io_uring.h           |  17 +++
 io_uring/io_uring_types.h     | 108 +++++++++--------
 io_uring/kbuf.c               |  33 +++++
 io_uring/kbuf.h               |  38 +-----
 io_uring/poll.c               | 219 +++++++++++++++++++++++++---------
 io_uring/poll.h               |   3 +-
 io_uring/rsrc.c               |   9 +-
 io_uring/tctx.c               |  36 ++++--
 io_uring/tctx.h               |   7 +-
 io_uring/timeout.c            |   3 +-
 16 files changed, 416 insertions(+), 246 deletions(-)

-- 
2.36.1


