From: Andy Lutomirski <luto@kernel.org>
To: Marek Majkowski <marek@cloudflare.com>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	Linux API <linux-api@vger.kernel.org>,
	Jason Baron <jbaron@akamai.com>
Subject: Re: Resurrecting EPOLLROUNDROBIN
Date: Mon, 25 Mar 2019 17:23:04 -0700
Message-ID: <CALCETrUP7UcNtbvHw0SW3S+DW3LZNCav5cujdmXbFa9rZn+Tiw@mail.gmail.com>
In-Reply-To: <CAJPywTLRxP4P6J8c4pzpwtZ1NhYwiRQ_P1dbCX00UYrBK7hg2Q@mail.gmail.com>

On Mon, Mar 25, 2019 at 4:38 AM Marek Majkowski <marek@cloudflare.com> wrote:
>
> Hi,
>
> Recently we noticed epoll is not helpful for load balancing when
> called on a listen TCP socket. I described this in a blog post:
>
> https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
>
> The short explanation: new connections arriving at a listen socket are
> not evenly distributed across the processes waiting on EPOLLIN. In
> practice, the last process to have called epoll_wait() will get the
> new connection. See this trivial program to reproduce it:
>
> https://github.com/cloudflare/cloudflare-blog/blob/master/2017-10-accept-balancing/epoll-and-accept.py
>
>    $ ./epoll-and-accept.py &
>    $ for i in `seq 6`; do echo | nc localhost 1024; done
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>
> Worker #0 did all the accept() calls. This is because the listen
> socket's wait queue is a LIFO (not a FIFO!). With the current
> behaviour, the process that called epoll_wait() most recently is
> woken first. That is usually the busiest process, which leads to
> uneven load distribution across the worker processes.

I recall a discussion of this at a conference several years ago, but
the details have faded.  Anyway:

I read the blog post, and I looked at your example, and the kernel
behavior actually seems quite sane to me.  From the kernel's
perspective, if you're calling accept() in a loop in a bunch of
threads (mediated by epoll or otherwise), and one of those threads is
able to call accept() fast enough, then that thread *should* get all
the sockets.  That thread is cache-hot, and bouncing the work across
CPUs is expensive.
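
For concreteness, here is a minimal sketch of that pattern (assuming
Python 3 on Linux; it is modeled loosely on the linked
epoll-and-accept.py, so the details are illustrative rather than a
copy of the real script).  Every worker blocks in epoll_wait() on the
same listen socket, and the most recent waiter tends to be woken
first:

  #!/usr/bin/env python3
  # Repro sketch: N workers share one listen socket and all wait for
  # EPOLLIN on it.  The wait queue is LIFO, so the most recent waiter
  # tends to win every accept().
  import os
  import select
  import socket

  sd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  sd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sd.bind(('127.0.0.1', 1024))
  sd.listen(16)
  sd.setblocking(False)

  for worker in range(4):
      if os.fork() == 0:
          ep = select.epoll()
          ep.register(sd.fileno(), select.EPOLLIN)
          while True:
              ep.poll()          # block until a connection is pending
              try:
                  conn, _ = sd.accept()
              except BlockingIOError:
                  continue       # another worker got there first
              print('worker %d' % worker)
              conn.close()

  os.wait()  # parent: just sit on the (non-terminating) children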

Now obviously the overall behavior here is suboptimal, but that's
arguably because the user process is being silly, not because the
kernel is doing it wrong.  Shouldn't the user process take the newly
accepted socket and hand it off to an appropriate thread for
servicing?  If I were doing this, I'd take each freshly accepted
socket and forward it either to a thread (or process) that is
appropriately lightly loaded or, even better, to one pinned to the CPU
that RFS has assigned to the flow, assuming that thread isn't
overloaded.  If the program is using threads, this doesn't need to
involve the kernel at all; if it's using processes, SCM_RIGHTS would
do the trick (see the sketch below).  But asking the kernel to
arbitrarily and awkwardly round-robin the sockets, and then keeping
each flow on whichever thread happened to be picked, means that, at
best, each thread gets an arbitrary selection of flows and the
balancing isn't particularly good.
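
Here is a rough sketch of that hand-off (assuming Python 3.9+ for
socket.send_fds()/recv_fds(), which wrap SCM_RIGHTS; the round-robin
worker choice below is just a placeholder for whatever load- or
RFS-aware policy the application actually wants):

  #!/usr/bin/env python3
  # Hand-off sketch: one acceptor process accepts every connection and
  # passes the fd to a worker over a Unix socketpair via SCM_RIGHTS.
  import os
  import socket

  NUM_WORKERS = 4
  channels = []

  for worker in range(NUM_WORKERS):
      parent_end, child_end = socket.socketpair(socket.AF_UNIX,
                                                socket.SOCK_DGRAM)
      if os.fork() == 0:
          parent_end.close()
          while True:
              # Receive one accepted-socket fd and serve the connection.
              _, fds, _, _ = socket.recv_fds(child_end, 1, 1)
              conn = socket.socket(fileno=fds[0])
              conn.sendall(b'worker %d\n' % worker)
              conn.close()
      child_end.close()
      channels.append(parent_end)

  sd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  sd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sd.bind(('127.0.0.1', 1024))
  sd.listen(16)

  next_worker = 0
  while True:
      conn, _ = sd.accept()
      # SCM_RIGHTS wants at least one byte of ordinary data with the fd.
      socket.send_fds(channels[next_worker], [b'x'], [conn.fileno()])
      conn.close()  # the worker now holds its own reference to the flow
      next_worker = (next_worker + 1) % NUM_WORKERS

The line that picks next_worker is exactly where a smarter policy
(least-loaded, or "the worker pinned to the flow's RFS CPU") would
plug in, all without any new kernel mechanism.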

Now, if someone were to actually try doing this in userspace and it
was too slow, I could see adding some kernel mechanisms to accelerate
the process.  Perhaps a mechanism for asking to accept only new
connections that are RFSified to the calling CPU would be useful.  But
this shouldn't be an *epoll* mechanism, since there is no actual
guarantee that the CPU that returns first from epoll_wait() is the
same CPU that calls accept() under load.  (Under load, multiple new
connections could come in and wake multiple CPUs before any of them
manage to call accept().)

So I think that EPOLLROUNDROBIN is not a great solution to the
problem, and I think that the problem isn't obviously a *kernel*
problem in the first place.

Thread overview: 5+ messages
2019-03-25 11:38 Resurrecting EPOLLROUNDROBIN Marek Majkowski
2019-03-25 16:54 ` Jason Baron
2019-03-26  0:23 ` Andy Lutomirski [this message]
2019-03-26 15:00   ` Jason Baron
2019-03-27 15:57     ` Marek Majkowski
