io-uring.vger.kernel.org archive mirror
From: Pavel Begunkov <asml.silence@gmail.com>
To: "Carter Li 李通洲" <carter.li@eoitek.com>, "Jens Axboe" <axboe@kernel.dk>
Cc: io-uring <io-uring@vger.kernel.org>
Subject: Re: [ISSUE] The time cost of IOSQE_IO_LINK
Date: Thu, 13 Feb 2020 18:08:05 +0300	[thread overview]
Message-ID: <9a8e4c8a-f8b2-900d-92b6-cc69b6adf324@gmail.com> (raw)
In-Reply-To: <ADF462D7-A381-4314-8931-DDB0A2C18761@eoitek.com>

On 2/13/2020 3:33 AM, Carter Li 李通洲 wrote:
> Thanks for your reply.
> 
> You are right that the nop isn't really a good test case. But I actually
> found this issue when benchmarking my echo server, which of course
> didn't use NOP.

If there are no hidden subtle issues in io_uring, your benchmark, or the
pattern itself, it's probably due to the overhead of async punting
(copying iovecs, several extra context switches, refcounting, grabbing
mm/fs/etc., and io-wq itself).

I was going to tune the async punting path anyway, so I'll look into this.
And of course, there is always a good chance Jens has some bright insights.

BTW, what's the benefit of doing poll(fd)->read(fd) instead of read() directly?
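
For illustration, the two patterns compared — a minimal, untested sketch
assuming an initialized `ring`, a connected socket `fd`, and a buffer `buf`:

    struct io_uring_sqe *sqe;

    /* Pattern A: poll for readability, then recv, as a linked pair. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_poll_add(sqe, fd, POLLIN);
    io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_recv(sqe, fd, buf, sizeof(buf), 0);

    /* Pattern B: issue the recv directly; if the socket isn't ready,
     * the kernel waits internally (on kernels of this era, possibly by
     * punting the request to an io-wq worker). */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_recv(sqe, fd, buf, sizeof(buf), 0);

    io_uring_submit(&ring);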

> Test case attached below. Use rust_echo_bench for benchmarking.
> https://github.com/haraldh/rust_echo_bench
> 
> 
> $ gcc link_recv.c -o link_recv -luring -O3 -DUSE_LINK=0
> $ ./link_recv 12345
> $ cargo run --release # On another console
> Benchmarking: 127.0.0.1:12345
> 50 clients, running 512 bytes, 60 sec.
> 
> Speed: 168264 request/sec, 168264 response/sec
> Requests: 10095846
> Responses: 10095844
> 
> $ gcc link_recv.c -o link_recv -luring -O3 -DUSE_LINK=1
> $ ./link_recv 12345
> $ cargo run --release # On another console
> Benchmarking: 127.0.0.1:12345
> 50 clients, running 512 bytes, 60 sec.
> 
> Speed: 112666 request/sec, 112666 response/sec
> Requests: 6760009
> Responses: 6759975
> 
> 
> I think `POLL_ADD(POLLIN)->RECV` and `POLL_ADD(POLLOUT)->SEND` are common
> use cases for networking (for some reason a short read for SEND is not
> considered an error, so `RECV->SEND` cannot be used in a link chain).
> RECV/SEND won't block once the poll has fired, and I expect better
> performance from the fewer io_uring_enter syscalls. Could you please
> look into it?
> 
> I have posted another, more complex test case, `POLL_ADD->READ_FIXED->WRITE_FIXED`,
> on GitHub; it currently results in a freeze.
> 
> https://github.com/axboe/liburing/issues/71
> 
> Carter
> 
> ---
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> 
> #include <sys/socket.h>
> #include <sys/poll.h>
> #include <netinet/in.h>
> 
> #include <liburing.h>
> 
> #define BACKLOG 128
> #define MAX_MESSAGE_LEN 1024
> #define MAX_CONNECTIONS 1024
> #ifndef USE_LINK
> #   define USE_LINK 0
> #endif
> 
> enum { ACCEPT, POLL, READ, WRITE };
> 
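> /* Per-request context packed into the 64-bit sqe->user_data:
>  * which fd and which operation completed. */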
> struct conn_info {
>     __u32 fd;
>     __u32 type;
> };
> 
> typedef char buf_type[MAX_CONNECTIONS][MAX_MESSAGE_LEN];
> 
> static struct io_uring ring;
> static unsigned cqe_count = 0;
> 
> int init_socket(int portno) {
>     int sock_listen_fd = socket(AF_INET, SOCK_STREAM, 0);
>     if (sock_listen_fd < 0) {
>         perror("socket");
>         return -1;
>     }
> 
>     struct sockaddr_in server_addr = {
>         .sin_family = AF_INET,
>         .sin_port = htons(portno),
>         .sin_addr = {
>             .s_addr = INADDR_ANY,
>         },
>     };
> 
>     if (bind(sock_listen_fd, (struct sockaddr *)&server_addr, sizeof(server_addr)) < 0) {
>         perror("bind");
>         return -1;
>     }
> 
>     if (listen(sock_listen_fd, BACKLOG) < 0) {
>         perror("listen");
>         return -1;
>     }
> 
>     return sock_listen_fd;
> }
> 
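> /* Get an SQE; if the SQ ring is full, consume the CQEs seen so far and
>  * submit pending entries to free space. */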
> static struct io_uring_sqe* get_sqe_safe() {
>     struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
>     if (__builtin_expect(!!sqe, 1)) {
>         return sqe;
>     } else {
>         io_uring_cq_advance(&ring, cqe_count);
>         cqe_count = 0;
>         io_uring_submit(&ring);
>         return io_uring_get_sqe(&ring);
>     }
> }
> 
> static void add_accept(int fd, struct sockaddr *client_addr, socklen_t *client_len) {
>     struct io_uring_sqe *sqe = get_sqe_safe();
>     struct conn_info conn_i = {
>         .fd = fd,
>         .type = ACCEPT,
>     };
> 
>     io_uring_prep_accept(sqe, fd, client_addr, client_len, 0);
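>     /* struct conn_info is 8 bytes, exactly the size of the u64 user_data field. */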
>     memcpy(&sqe->user_data, &conn_i, sizeof(conn_i));
> }
> 
> static void add_poll(int fd, int poll_mask, unsigned flags) {
>     struct io_uring_sqe *sqe = get_sqe_safe();
>     struct conn_info conn_i = {
>         .fd = fd,
>         .type = POLL,
>     };
> 
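>     /* POLL_ADD is one-shot here: it completes once the fd becomes ready. */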
>     io_uring_prep_poll_add(sqe, fd, poll_mask);
>     io_uring_sqe_set_flags(sqe, flags);
>     memcpy(&sqe->user_data, &conn_i, sizeof(conn_i));
> }
> 
> static void add_socket_read(int fd, size_t size, buf_type *bufs) {
>     struct io_uring_sqe *sqe = get_sqe_safe();
>     struct conn_info conn_i = {
>         .fd = fd,
>         .type = READ,
>     };
> 
>     io_uring_prep_recv(sqe, fd, (*bufs)[fd], size, MSG_NOSIGNAL);
>     memcpy(&sqe->user_data, &conn_i, sizeof(conn_i));
> }
> 
> static void add_socket_write(int fd, size_t size, buf_type *bufs, unsigned flags) {
>     struct io_uring_sqe *sqe = get_sqe_safe();
>     struct conn_info conn_i = {
>         .fd = fd,
>         .type = WRITE,
>     };
> 
>     io_uring_prep_send(sqe, fd, (*bufs)[fd], size, MSG_NOSIGNAL);
>     io_uring_sqe_set_flags(sqe, flags);
>     memcpy(&sqe->user_data, &conn_i, sizeof(conn_i));
> }
> 
> int main(int argc, char *argv[]) {
>     if (argc < 2) {
>         fprintf(stderr, "Please give a port number: %s [port]\n", argv[0]);
>         return 1;
>     }
> 
>     int portno = strtol(argv[1], NULL, 10);
>     int sock_listen_fd = init_socket(portno);
>     if (sock_listen_fd < 0) return -1;
>     printf("io_uring echo server listening for connections on port: %d\n", portno);
> 
> 
>     int ret = io_uring_queue_init(BACKLOG, &ring, 0);
>     if (ret < 0) {
>         fprintf(stderr, "queue_init: %s\n", strerror(-ret));
>         return -1;
>     }
> 
>     buf_type *bufs = (buf_type *)malloc(sizeof(*bufs));
> 
>     struct sockaddr_in client_addr;
>     socklen_t client_len = sizeof(client_addr);
>     add_accept(sock_listen_fd, (struct sockaddr *)&client_addr, &client_len);
> 
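>     /* Main loop: submit all queued SQEs and block until at least one
>      * completion is available. */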
>     while (1) {
>         io_uring_submit_and_wait(&ring, 1);
> 
>         struct io_uring_cqe *cqe;
>         unsigned head;
> 
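>         /* io_uring_for_each_cqe() only peeks; the whole batch is marked
>          * consumed below via io_uring_cq_advance(). */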
>         io_uring_for_each_cqe(&ring, head, cqe) {
>             ++cqe_count;
> 
>             struct conn_info conn_i;
>             memcpy(&conn_i, &cqe->user_data, sizeof(conn_i));
>             int result = cqe->res;
> 
>             switch (conn_i.type) {
>             case ACCEPT:
> #if USE_LINK
>                 add_poll(result, POLLIN, IOSQE_IO_LINK);
>                 add_socket_read(result, MAX_MESSAGE_LEN, bufs);
> #else
>                 add_poll(result, POLLIN, 0);
> #endif
>                 add_accept(sock_listen_fd, (struct sockaddr *)&client_addr, &client_len);
>                 break;
> 
> #if !USE_LINK
>             case POLL:
>                 add_socket_read(conn_i.fd, MAX_MESSAGE_LEN, bufs);
>                 break;
> #endif
> 
>             case READ:
>                 if (__builtin_expect(result <= 0, 0)) {
>                     shutdown(conn_i.fd, SHUT_RDWR);
>                 } else {
>                     add_socket_write(conn_i.fd, result, bufs, 0);
>                 }
>                 break;
> 
>             case WRITE:
> #if USE_LINK
>                 add_poll(conn_i.fd, POLLIN, IOSQE_IO_LINK);
>                 add_socket_read(conn_i.fd, MAX_MESSAGE_LEN, bufs);
> #else
>                 add_poll(conn_i.fd, POLLIN, 0);
> #endif
>                 break;
>             }
>         }
> 
>         io_uring_cq_advance(&ring, cqe_count);
>         cqe_count = 0;
>     }
> 
> 
>     close(sock_listen_fd);
>     free(bufs);
> }
> 
> 
> 
>> On 2020-02-13, at 1:11 AM, Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 2/12/20 9:31 AM, Carter Li 李通洲 wrote:
>>> Hi everyone,
>>>
>>> IOSQE_IO_LINK seems to have a very high cost, even greater than the
>>> io_uring_enter syscall itself.
>>>
>>> Test code attached below. The program completes after getting 100000000 CQEs.
>>>
>>> $ gcc test.c -luring -o test0 -g -O3 -DUSE_LINK=0
>>> $ time ./test0
>>> USE_LINK: 0, count: 100000000, submit_count: 1562500
>>> 0.99user 9.99system 0:11.02elapsed 99%CPU (0avgtext+0avgdata 1608maxresident)k
>>> 0inputs+0outputs (0major+72minor)pagefaults 0swaps
>>>
>>> $ gcc test.c -luring -o test1 -g -O3 -DUSE_LINK=1
>>> $ time ./test1
>>> USE_LINK: 1, count: 100000110, submit_count: 799584
>>> 0.83user 19.21system 0:20.90elapsed 95%CPU (0avgtext+0avgdata 1632maxresident)k
>>> 0inputs+0outputs (0major+72minor)pagefaults 0swaps
>>>
>>> As you can see, the `-DUSE_LINK=1` version issues only about half as many
>>> io_uring_submit calls as the other version, but takes twice as long. That
>>> makes IOSQE_IO_LINK almost useless; please have a look.
>>
>> The nop isn't really a good test case, as it doesn't contain any smarts
>> in terms of executing a link fast. So it doesn't say a whole lot outside
>> of "we could make nop links faster", which is also kind of pointless.
>>
>> "Normal" commands will work better. Where the link is really a win is if
>> the first request needs to go async to complete. For that case, the
>> next link can execute directly from that context. This saves an async
>> punt for the common case.
>>
>> -- 
>> Jens Axboe
>>
> 

-- 
Pavel Begunkov


Thread overview: 59+ messages
2020-02-12 16:31 [ISSUE] The time cost of IOSQE_IO_LINK Carter Li 李通洲
2020-02-12 17:11 ` Jens Axboe
2020-02-12 17:22   ` Jens Axboe
2020-02-12 17:29     ` Jens Axboe
2020-02-13  0:33   ` Carter Li 李通洲
2020-02-13 15:08     ` Pavel Begunkov [this message]
2020-02-13 15:14       ` Jens Axboe
2020-02-13 15:51         ` Carter Li 李通洲
2020-02-14  1:25           ` Carter Li 李通洲
2020-02-14  2:45             ` Jens Axboe
2020-02-14  5:03               ` Jens Axboe
2020-02-14 15:32                 ` Peter Zijlstra
2020-02-14 15:47                   ` Jens Axboe
2020-02-14 16:18                     ` Jens Axboe
2020-02-14 17:52                       ` Jens Axboe
2020-02-14 20:44                         ` Jens Axboe
2020-02-15  0:16                           ` Carter Li 李通洲
2020-02-15  1:10                             ` Jens Axboe
2020-02-15  1:25                               ` Carter Li 李通洲
2020-02-15  1:27                                 ` Jens Axboe
2020-02-15  6:01                                   ` Jens Axboe
2020-02-15  6:32                                     ` Carter Li 李通洲
2020-02-15 15:11                                       ` Jens Axboe
2020-02-16 19:06                                     ` Pavel Begunkov
2020-02-16 22:23                                       ` Jens Axboe
2020-02-17 10:30                                         ` Pavel Begunkov
2020-02-17 19:30                                           ` Jens Axboe
2020-02-16 23:06                                       ` Jens Axboe
2020-02-16 23:07                                         ` Jens Axboe
2020-02-17 12:09                           ` Peter Zijlstra
2020-02-17 16:12                             ` Jens Axboe
2020-02-17 17:16                               ` Jens Axboe
2020-02-17 17:46                                 ` Peter Zijlstra
2020-02-17 18:16                                   ` Jens Axboe
2020-02-18 13:13                                     ` Peter Zijlstra
2020-02-18 14:27                                       ` [PATCH] asm-generic/atomic: Add try_cmpxchg() fallbacks Peter Zijlstra
2020-02-18 14:40                                         ` Peter Zijlstra
2020-02-20 10:30                                         ` Will Deacon
2020-02-20 10:37                                           ` Peter Zijlstra
2020-02-20 10:39                                             ` Will Deacon
2020-02-18 14:56                                       ` [ISSUE] The time cost of IOSQE_IO_LINK Oleg Nesterov
2020-02-18 15:07                                         ` Oleg Nesterov
2020-02-18 15:38                                           ` Peter Zijlstra
2020-02-18 16:33                                             ` Jens Axboe
2020-02-18 15:07                                         ` Peter Zijlstra
2020-02-18 15:50                                           ` [PATCH] task_work_run: don't take ->pi_lock unconditionally Oleg Nesterov
2020-02-20 16:39                                             ` Peter Zijlstra
2020-02-20 17:22                                               ` Oleg Nesterov
2020-02-20 17:49                                                 ` Peter Zijlstra
2020-02-21 14:52                                                   ` Oleg Nesterov
2020-02-24 18:47                                                     ` Jens Axboe
2020-02-28 19:17                                                       ` Jens Axboe
2020-02-28 19:25                                                         ` Peter Zijlstra
2020-02-28 19:28                                                           ` Jens Axboe
2020-02-28 20:06                                                             ` Peter Zijlstra
2020-02-28 20:15                                                               ` Jens Axboe
2020-02-18 16:46                                       ` [ISSUE] The time cost of IOSQE_IO_LINK Jens Axboe
2020-02-18 16:52                                         ` Jens Axboe
2020-02-18 13:13                               ` Peter Zijlstra
