From: Dmitry Sychov <dmitry.sychov@gmail.com>
To: Sergiy Yevtushenko <sergiy.yevtushenko@gmail.com>
Cc: Mark Papadakis <markuspapadakis@icloud.com>,
	"H. de Vries" <hdevries@fastmail.com>,
	io-uring <io-uring@vger.kernel.org>
Subject: Re: Any performance gains from using per thread(thread local) urings?
Date: Wed, 13 May 2020 17:31:17 +0300	[thread overview]
Message-ID: <CADPKF+fW3Yj28PAWBqUO8s9ztkU9sRzTLsLXeh0qgUhE8oWzDg@mail.gmail.com> (raw)
In-Reply-To: <CADPKF+dR=uQx9Dnu83ADghgei4KxwqnfBwONvp-ou--aePq0xg@mail.gmail.com>

> Sharing state should be avoided as much as possible.

It's more about freely moving state between threads (e.g. via
io_uring_cqe::user_data), not about sharing it...
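
To illustrate what I mean - a rough sketch only, assuming liburing's
helpers; the request struct and the read call are made up for
illustration:

#include <liburing.h>
#include <stdlib.h>

struct request {                          /* hypothetical per-request state */
    int  fd;
    char buf[4096];
};

/* Submitting thread: attach the per-request state to the SQE. */
static void submit_read(struct io_uring *ring, int fd)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    if (!sqe)                             /* SQ full - error handling elided */
        return;

    struct request *req = malloc(sizeof(*req));
    req->fd = fd;
    io_uring_prep_read(sqe, fd, req->buf, sizeof(req->buf), 0);
    io_uring_sqe_set_data(sqe, req);      /* lands in cqe->user_data */
    io_uring_submit(ring);
}

/* Reaping thread (possibly a different one): recover the state. */
static void reap_one(struct io_uring *ring)
{
    struct io_uring_cqe *cqe;

    if (io_uring_wait_cqe(ring, &cqe) == 0) {
        struct request *req = io_uring_cqe_get_data(cqe);
        /* ... process req->buf / cqe->res, then free(req) ... */
        io_uring_cqe_seen(ring, cqe);
    }
}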

On Wed, May 13, 2020 at 5:22 PM Dmitry Sychov <dmitry.sychov@gmail.com> wrote:
>
> Anyone could shed some light on the inner implementation of uring please? :)
>
> Specifically, how well does the kernel scale as the number of
> user-created urings increases?
>
> > If kernel implementation will change from single to multiple queues,
> > user space is already prepared for this change.
>
> That's +1 for per-thread urings, with the expectation that the kernel
> will keep getting better at scaling across multiple urings in the future.
>
> On Wed, May 13, 2020 at 4:52 PM Sergiy Yevtushenko
> <sergiy.yevtushenko@gmail.com> wrote:
> >
> > Completely agree. Sharing state should be avoided as much as possible.
> > Returning to the original question: I believe the uring-per-thread scheme is better regardless of how the queue is managed inside the kernel.
> > - If there is only one queue inside the kernel, then it's more efficient to multiplex/demultiplex requests in kernel space.
> > - If there are several queues inside the kernel, then user-space code better matches kernel-space code.
> > - If the kernel implementation changes from a single queue to multiple queues, user space is already prepared for that change.
> >
> >
> > On Wed, May 13, 2020 at 3:30 PM Mark Papadakis <markuspapadakis@icloud.com> wrote:
> >>
> >>
> >>
> >> > On 13 May 2020, at 4:15 PM, Dmitry Sychov <dmitry.sychov@gmail.com> wrote:
> >> >
> >> > Hey Mark,
> >> >
> >> > Or we could share one SQ and one CQ between multiple threads (bounded by
> >> > the max number of CPU cores) for direct read/write access, using a very
> >> > light mutex to sync.
> >> >
> >> > This also solves the thread starvation issue - thread A submits a job
> >> > into the shared SQ while thread B both collects and _processes_ the result
> >> > from the shared CQ, instead of waiting on its own unique CQ for the next
> >> > completion event.
> >> >
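
Roughly what I have in mind here - just a sketch, assuming liburing and
plain pthread mutexes; every name below is made up for illustration:

#include <liburing.h>
#include <pthread.h>

static struct io_uring shared_ring;  /* set up once via io_uring_queue_init() */
static pthread_mutex_t sq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cq_lock = PTHREAD_MUTEX_INITIALIZER;

/* Any thread (e.g. thread A): queue one job and submit. */
static void submit_job(void *job_state)
{
    pthread_mutex_lock(&sq_lock);
    struct io_uring_sqe *sqe = io_uring_get_sqe(&shared_ring);
    if (sqe) {
        io_uring_prep_nop(sqe);                  /* stand-in for real work */
        io_uring_sqe_set_data(sqe, job_state);   /* state travels via user_data */
        io_uring_submit(&shared_ring);
    }
    pthread_mutex_unlock(&sq_lock);
}

/* Any other thread (e.g. thread B): collect and process whatever is ready. */
static void reap_jobs(void (*process)(void *))
{
    struct io_uring_cqe *cqe;

    pthread_mutex_lock(&cq_lock);
    while (io_uring_peek_cqe(&shared_ring, &cqe) == 0) {
        void *job_state = io_uring_cqe_get_data(cqe);
        io_uring_cqe_seen(&shared_ring, cqe);
        process(job_state);          /* could also drop the lock around this */
    }
    pthread_mutex_unlock(&cq_lock);
}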
> >>
> >>
> >> Well, if an SQE submitted by A has its matching CQE consumed by B, and A needs access to that CQE because it is tightly coupled to state A owns exclusively (for example), or for other reasons, then you'd still need to move that CQE from B back to A, or share it somehow, which seems expensive-ish.
> >>
> >> It depends on what kind of roles your threads have though; I am personally very much against sharing state between threads unless there is a really good reason for it.
> >>
> >>
> >>
> >>
> >>
> >>
> >> > On Wed, May 13, 2020 at 2:56 PM Mark Papadakis
> >> > <markuspapadakis@icloud.com> wrote:
> >> >>
> >> >> For what it's worth, I am (also) using multiple "reactor" (i.e. event-driven) cores, each associated with one OS thread, and each reactor core manages its own io_uring context/queues.
> >> >>
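
A bare-bones sketch of that per-thread-reactor layout (my reading of it,
assuming liburing; the work hooks are placeholders):

#include <liburing.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int stop;                   /* set by whoever shuts things down */

static void *reactor_main(void *arg)
{
    struct io_uring ring;                 /* private to this thread, no locking */
    struct io_uring_cqe *cqe;
    (void)arg;

    if (io_uring_queue_init(256, &ring, 0) < 0)
        return NULL;

    while (!atomic_load(&stop)) {
        /* ... prepare SQEs for work generated on this thread ... */
        io_uring_submit(&ring);

        if (io_uring_wait_cqe(&ring, &cqe) == 0) {
            /* ... handle a completion this same thread submitted ... */
            io_uring_cqe_seen(&ring, cqe);
        }
    }

    io_uring_queue_exit(&ring);
    return NULL;
}

/* One reactor per core, e.g.: pthread_create(&tid[i], NULL, reactor_main, NULL); */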
> >> >> Even if scheduling all SQEs through a single io_uring SQ -- by e.g. collecting all such SQEs in every OS thread and then somehow "moving" them to the one OS thread that manages the SQ so that it can enqueue them all -- is very cheap, you'd still need to drain the CQ from that thread and presumably process those CQEs in a single OS thread, which will definitely be more work than having each reactor/OS thread dequeue CQEs for SQEs that it itself submitted.
> >> >> You could have a single OS thread just for I/O and all other threads could do something else, but you'd presumably need to serialize access/share state between them and the one OS thread doing I/O, which may be a scalability bottleneck.
> >> >>
> >> >> ( if you are curious, you can read about it here https://medium.com/@markpapadakis/building-high-performance-services-in-2020-e2dea272f6f6 )
> >> >>
> >> >> If you experiment with the various possible designs though, I’d love it if you were to share your findings.
> >> >>
> >> >> —
> >> >> @markpapapdakis
> >> >>
> >> >>
> >> >>> On 13 May 2020, at 2:01 PM, Dmitry Sychov <dmitry.sychov@gmail.com> wrote:
> >> >>>
> >> >>> Hi Hielke,
> >> >>>
> >> >>>> If you want max performance, what you generally will see in non-blocking servers is one event loop per core/thread.
> >> >>>> This means one ring per core/thread. Of course there is no simple answer to this.
> >> >>>> See how thread-based servers work vs non-blocking servers. E.g. Apache vs Nginx or Tomcat vs Netty.
> >> >>>
> >> >>> I think a lot depends on the internal uring implementation: to what
> >> >>> degree the kernel is able to handle multiple urings independently,
> >> >>> without too many congestion points (like updates of the same memory
> >> >>> locations from multiple threads), and thus take advantage of one ring
> >> >>> per CPU core.
> >> >>>
> >> >>> For example, if the tasks from multiple rings are later combined into a
> >> >>> single kernel input queue (effectively forming a congestion point), I see
> >> >>> no reason to use an exclusive ring per core in user space.
> >> >>>
> >> >>> [BTW, on Windows, IOCP always uses one input+output queue for all (active) threads.]
> >> >>>
> >> >>> Also, we could pop multiple completion events off a single CQ at
> >> >>> once to spread the handling across core-bound threads.
> >> >>>
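
That batch-pop could look roughly like this (a sketch, assuming
liburing's io_uring_peek_batch_cqe; dispatch_to_worker() is a
placeholder for whatever hands work to the core-bound threads):

#include <liburing.h>

#define BATCH 64

static void drain_batch(struct io_uring *ring,
                        void (*dispatch_to_worker)(void *))
{
    struct io_uring_cqe *cqes[BATCH];
    unsigned n = io_uring_peek_batch_cqe(ring, cqes, BATCH);

    for (unsigned i = 0; i < n; i++)
        dispatch_to_worker(io_uring_cqe_get_data(cqes[i]));

    io_uring_cq_advance(ring, n);     /* mark all n CQEs as seen in one go */
}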
> >> >>> I thought about one uring per core at first, but now I'm not sure -
> >> >>> maybe the kernel devs have something to add to the discussion?
> >> >>>
> >> >>> P.S. uring is the main reason I'm switching from Windows to Linux dev
> >> >>> for a client-server app, so I want to extract the max performance possible
> >> >>> out of this new exciting uring stuff. :)
> >> >>>
> >> >>> Thanks, Dmitry
> >> >>
> >>

Thread overview: 14+ messages
2020-05-12 20:20 Dmitry Sychov
2020-05-13  6:07 ` H. de Vries
2020-05-13 11:01   ` Dmitry Sychov
2020-05-13 11:56     ` Mark Papadakis
2020-05-13 13:15       ` Dmitry Sychov
2020-05-13 13:27         ` Mark Papadakis
2020-05-13 13:48           ` Dmitry Sychov
2020-05-13 14:12           ` Sergiy Yevtushenko
     [not found]           ` <CAO5MNut+nD-OqsKgae=eibWYuPim1f8-NuwqVpD87eZQnrwscA@mail.gmail.com>
2020-05-13 14:22             ` Dmitry Sychov
2020-05-13 14:31               ` Dmitry Sychov [this message]
2020-05-13 16:02               ` Pavel Begunkov
2020-05-13 19:23                 ` Dmitry Sychov
2020-05-14 10:06                   ` Pavel Begunkov
2020-05-14 11:35                     ` Dmitry Sychov
