* User questions: client code and SQE/CQE starvation
@ 2022-01-11 20:39 dormando
       [not found] ` <CAAss7+q_qjYBbiN+RaGrd3ngOPPGRwJiQU+Gkq1YPzfy7X8wqg@mail.gmail.com>
  2022-01-15 23:32 ` Noah Goldstein
  0 siblings, 2 replies; 4+ messages in thread
From: dormando @ 2022-01-11 20:39 UTC (permalink / raw)
  To: io-uring

Hey,

I've been integrating io_uring into my stack, which has been going
well-ish. Have you folks seen any client library implementations that
feel clean and user-friendly?

I.e.: with poll/select/epoll/kqueue, most client libraries (like
libcurl) implement functions like "client_send_data(ctx, etc)", which
return -WANT_READ/-WANT_WRITE/etc. and an fd if they need more data to
move forward. With the syscalls themselves externalized in io_uring,
I'm struggling to come up with abstractions I like and haven't found
much public on a googlin'. Do any public ones exist yet?

On implementing networked servers, it feels natural to do a core loop
like:

      while (1) {
          io_uring_submit_and_wait(&t->ring, 1);

          struct io_uring_cqe *cqe;
          uint32_t head = 0;
          uint32_t count = 0;

          io_uring_for_each_cqe(&t->ring, head, cqe) {
              event *pe = io_uring_cqe_get_data(cqe);
              pe->callback(pe->udata, cqe);
              count++;
          }
          io_uring_cq_advance(&t->ring, count);
      }

... but: A) you can run out of SQEs if they're generated from within
the callbacks (retries, fetching further data, writes after reads,
etc.), and B) with IORING_FEAT_NODROP you can run out of CQEs, at
which point you can no longer submit to free up SQEs.

So this loop doesn't work under pressure :)

I see that qemu's implementation walks an object queue and calls
io_uring_submit() if SQEs are exhausted, but I don't recall it trying
to do anything if submit returns -EBUSY because of CQE exhaustion.
I've not found other merged code implementing non-toy network servers,
and most examples are rewrites of CLI tooling, which are much more
constrained problems. Have I missed anything?

I can make this work, but a lot of the solutions involve
double-walking lists (fetching all CQEs into an array, advancing the
ring, then processing), or giving up on the batching APIs entirely.
Hoping the community's got some better examples to untwist my brain a
bit :)
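
For reference, the double-walk shape I mean looks roughly like this (a
sketch, assuming liburing's io_uring_peek_batch_cqe(); the batch size
and the pending struct are made up for illustration):

      struct io_uring_cqe *cqes[64];
      struct { event *pe; struct io_uring_cqe cqe; } batch[64];
      unsigned n = io_uring_peek_batch_cqe(&t->ring, cqes, 64);

      /* copy out what the callbacks need, then advance the CQ ring
         before running them, so callback-driven submits don't land
         on a full CQ */
      for (unsigned i = 0; i < n; i++) {
          batch[i].pe = io_uring_cqe_get_data(cqes[i]);
          batch[i].cqe = *cqes[i];
      }
      io_uring_cq_advance(&t->ring, n);

      for (unsigned i = 0; i < n; i++)
          batch[i].pe->callback(batch[i].pe->udata, &batch[i].cqe);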

For now I have things working, but I want to do a cleanup pass before
making my client/server bits public-facing.

Thanks!
-Dormando


* Re: User questions: client code and SQE/CQE starvation
       [not found] ` <CAAss7+q_qjYBbiN+RaGrd3ngOPPGRwJiQU+Gkq1YPzfy7X8wqg@mail.gmail.com>
@ 2022-01-14  9:19   ` Josef
  2022-01-14 21:25     ` dormando
  0 siblings, 1 reply; 4+ messages in thread
From: Josef @ 2022-01-14  9:19 UTC (permalink / raw)
  To: dormando, io-uring

Sorry, I accidentally pressed send...

Running out of SQEs should not be a problem: when io_uring_get_sqe
(https://github.com/axboe/liburing/blob/master/src/queue.c#L409)
returns NULL, you can run io_uring_submit(). In netty we do that
automatically when the SQ is full:
https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringSubmissionQueue.java#L117
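
In C with liburing that pattern would look roughly like this (a sketch
of the idea, not the actual netty code, which is Java; assumes a
struct io_uring named ring):

      struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
      if (sqe == NULL) {
          /* SQ ring is full: flush pending entries to the kernel,
             then try again */
          io_uring_submit(&ring);
          sqe = io_uring_get_sqe(&ring);
      }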

In theory you could run out of CQEs; netty's io_uring approach is a
little bit different:
https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringCompletionQueue.java#L86
(similar to io_uring_for_each_cqe) to make sure the kernel sees what
has been consumed, and the process function is called here:
https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringEventLoop.java#L203
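
A rough C equivalent of that per-completion loop (again just a sketch,
reusing the event struct from your mail):

      struct io_uring_cqe *cqe;
      while (io_uring_peek_cqe(&ring, &cqe) == 0) {
          event *pe = io_uring_cqe_get_data(cqe);
          pe->callback(pe->udata, cqe);
          /* mark each CQE seen as soon as it's handled, so CQ slots
             free up while later callbacks are still running */
          io_uring_cqe_seen(&ring, cqe);
      }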




--
Josef Grieb


* Re: User questions: client code and SQE/CQE starvation
  2022-01-14  9:19   ` Josef
@ 2022-01-14 21:25     ` dormando
  0 siblings, 0 replies; 4+ messages in thread
From: dormando @ 2022-01-14 21:25 UTC (permalink / raw)
  To: Josef; +Cc: io-uring



On Fri, 14 Jan 2022, Josef wrote:

> Running out of SQEs should not be a problem: when io_uring_get_sqe
> (https://github.com/axboe/liburing/blob/master/src/queue.c#L409)
> returns NULL, you can run io_uring_submit(). In netty we do that
> automatically when the SQ is full:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringSubmissionQueue.java#L117

Thanks! Unless I'm completely misreading the liburing code,
io_uring_submit() can return -EBUSY and fail to submit the SQEs if
there is currently a backlog of CQEs beyond the limit (i.e.
FEAT_NODROP), which would mean you can't reliably submit when
get_sqe() returns NULL? I hope I have this wrong, since everything
would be much simpler otherwise :)
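
If I do have it right, it seems like the submit path has to be ready
to drain completions itself, something like (a hedged sketch;
submit_hard is my name for it, and event is the struct from my
original loop):

      static int submit_hard(struct io_uring *ring)
      {
          int ret;
          while ((ret = io_uring_submit(ring)) == -EBUSY) {
              /* the CQ side is backed up: reap a completion so the
                 kernel can flush its overflow list, then retry */
              struct io_uring_cqe *cqe;
              if (io_uring_wait_cqe(ring, &cqe) != 0)
                  break;
              event *pe = io_uring_cqe_get_data(cqe);
              pe->callback(pe->udata, cqe);
              io_uring_cqe_seen(ring, cqe);
          }
          return ret;
      }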

> In theory you could run out of CQEs; netty's io_uring approach is a
> little bit different:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringCompletionQueue.java#L86
> (similar to io_uring_for_each_cqe) to make sure the kernel sees what
> has been consumed, and the process function is called here:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringEventLoop.java#L203

Thanks. I'll study these a bit more.


* Re: User questions: client code and SQE/CQE starvation
  2022-01-11 20:39 User questions: client code and SQE/CQE starvation dormando
       [not found] ` <CAAss7+q_qjYBbiN+RaGrd3ngOPPGRwJiQU+Gkq1YPzfy7X8wqg@mail.gmail.com>
@ 2022-01-15 23:32 ` Noah Goldstein
  1 sibling, 0 replies; 4+ messages in thread
From: Noah Goldstein @ 2022-01-15 23:32 UTC (permalink / raw)
  To: dormando; +Cc: open list:IO_URING

On Wed, Jan 12, 2022 at 3:17 PM dormando <dormando@rydia.net> wrote:
>
> Hey,
>
> I've been integrating io_uring into my stack, which has been going
> well-ish. Have you folks seen any client library implementations
> that feel clean and user-friendly?

libev: http://cvs.schmorp.de/libev/
has an io_uring backend: http://cvs.schmorp.de/libev/ev_iouring.c?view=markup
