From: Dylan Yudaken <dylany@fb.com>
To: "axboe@kernel.dk" <axboe@kernel.dk>,
	"hao.xu@linux.dev" <hao.xu@linux.dev>,
	"asml.silence@gmail.com" <asml.silence@gmail.com>,
	"io-uring@vger.kernel.org" <io-uring@vger.kernel.org>
Cc: Kernel Team <Kernel-team@fb.com>
Subject: Re: [PATCH RFC for-next 0/8] io_uring: tw contention improvements
Date: Tue, 21 Jun 2022 07:03:19 +0000
Message-ID: <f8c8e52996aaa8fb8c72ae46f0e87e733a9053aa.camel@fb.com>
In-Reply-To: <15e36a76-65d5-2acb-8cb7-3952d9d8f7d1@linux.dev>

On Tue, 2022-06-21 at 13:10 +0800, Hao Xu wrote:
> On 6/21/22 00:18, Dylan Yudaken wrote:
> > Task work currently uses a spin lock to guard task_list and
> > task_running. Some use cases such as networking can trigger
> > task_work_add from multiple threads all at once, which suffers from
> > contention here.
> > 
> > This can be changed to use a lockless list, which seems to have
> > better performance. Running the micro benchmark in [1] I see a 20%
> > improvement in multithreaded task work add. It required removing the
> > priority tw list optimisation, however it isn't clear how important
> > that optimisation is. Additionally, its semantics are fairly easy to
> > break.
> > 
> > Patches 1-2 remove the priority tw list optimisation
> > Patches 3-5 add lockless lists for task work
> > Patch 6 fixes a bug I noticed in io_uring event tracing
> > Patches 7-8 add tracing for task_work_run
> > 
> 
> Compared to the spinlock overhead, the prio task list optimization is
> definitely unimportant, so I agree with removing it here.
> Replacing the task list with an llist was something I considered, but
> I gave it up since it changes the list to a stack, which means we have
> to handle the tasks in reverse order. This may affect latency; do you
> have some numbers for it, like avg and 99%/95% latency?
> 
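
For reference, the shape of the lockless approach (an illustrative
sketch against the generic <linux/llist.h> API, not the actual patches;
the tw_* names are made up): producers push with llist_add(), the
consumer takes the whole batch with llist_del_all(), and since llist
hands the batch back in LIFO order, one llist_reverse_order() per batch
restores submission order before running it.

/* Illustrative sketch only, not the io_uring patches. */
#include <linux/llist.h>

struct tw_item {
        struct llist_node node;
        void (*func)(struct tw_item *);
};

static LLIST_HEAD(tw_list);

/* producer side: no spinlock, safe from many threads at once */
static void tw_add(struct tw_item *item)
{
        llist_add(&item->node, &tw_list);
}

/* consumer side: grab everything, restore FIFO order, run as a batch */
static void tw_run(void)
{
        struct llist_node *node = llist_del_all(&tw_list);
        struct tw_item *item, *next;

        node = llist_reverse_order(node);       /* single O(n) walk */
        llist_for_each_entry_safe(item, next, node, node)
                item->func(item);
}

The reversal is one extra O(n) pass per batch; whether that, or
skipping it and running in LIFO order, shows up in latency is
essentially the question above.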

Do you have an idea for how to test that latency? I used a
microbenchmark as well as a network benchmark [1] to verify that
overall throughput is higher. TW latency sounds a lot more complicated
to measure, as it is difficult to trigger accurately.

My feeling is that with reasonable batching (say 8-16 items) the
latency will be low, as TW is generally very quick. But if you have an
idea for benchmarking, I can take a look; one rough option is sketched
below.

[1]: https://github.com/DylanZA/netbench
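
One rough option, in case it is useful: a userspace sketch (a
hypothetical standalone file, not part of the series) that times NOPs
from submit to completion with liburing and reports avg/p95/p99. It
measures end-to-end completion latency rather than tw-run latency in
isolation, so it is only a proxy, but it is cheap to run before and
after the series.

/* Rough sketch: end-to-end NOP completion latency, not tw-run latency. */
#include <inttypes.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS 100000

static uint64_t lat_ns[ITERS];

static int cmp_u64(const void *a, const void *b)
{
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

        return (x > y) - (x < y);
}

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
        struct io_uring ring;
        uint64_t sum = 0;
        int i;

        if (io_uring_queue_init(8, &ring, 0))
                return 1;

        for (i = 0; i < ITERS; i++) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                struct io_uring_cqe *cqe;
                uint64_t t0 = now_ns();

                io_uring_prep_nop(sqe);
                io_uring_submit(&ring);
                io_uring_wait_cqe(&ring, &cqe);
                io_uring_cqe_seen(&ring, cqe);

                lat_ns[i] = now_ns() - t0;
                sum += lat_ns[i];
        }

        qsort(lat_ns, ITERS, sizeof(lat_ns[0]), cmp_u64);
        printf("avg %" PRIu64 " p95 %" PRIu64 " p99 %" PRIu64 " (ns)\n",
               sum / ITERS, lat_ns[ITERS * 95 / 100],
               lat_ns[ITERS * 99 / 100]);

        io_uring_queue_exit(&ring);
        return 0;
}

(build with something like: gcc -O2 nop_lat.c -luring)

A more targeted measurement might hang bpftrace off the task_work_run
trace events added in patches 7-8 and diff queue vs run timestamps, but
that obviously needs the series applied first.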


Thread overview: 20+ messages
2022-06-20 16:18 [PATCH RFC for-next 0/8] io_uring: tw contention improvements Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 1/8] io_uring: remove priority tw list optimisation Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 2/8] io_uring: remove __io_req_task_work_add Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 3/8] io_uring: lockless task list Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 4/8] io_uring: introduce llist helpers Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 5/8] io_uring: batch task_work Dylan Yudaken
2022-06-20 16:18 ` [PATCH RFC for-next 6/8] io_uring: move io_uring_get_opcode out of TP_printk Dylan Yudaken
2022-06-20 16:19 ` [PATCH RFC for-next 7/8] io_uring: add trace event for running task work Dylan Yudaken
2022-06-20 16:19 ` [PATCH RFC for-next 8/8] io_uring: trace task_work_run Dylan Yudaken
2022-06-21  5:10 ` [PATCH RFC for-next 0/8] io_uring: tw contention improvements Hao Xu
2022-06-21  7:03   ` Dylan Yudaken [this message]
2022-06-21  7:34     ` Hao Xu
2022-06-22  9:31       ` Dylan Yudaken
2022-06-22 11:16         ` Hao Xu
2022-06-22 11:24           ` Hao Xu
2022-06-22 11:51             ` Dylan Yudaken
2022-06-22 12:28               ` Hao Xu
2022-06-22 12:29                 ` Hao Xu
2022-06-22 11:52             ` Hao Xu
2022-06-21  7:38     ` Hao Xu
