From: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
To: <zenczykowski@gmail.com>
Cc: <andrii@kernel.org>, <ast@kernel.org>, <benh@amazon.com>,
	<bpf@vger.kernel.org>, <daniel@iogearbox.net>,
	<davem@davemloft.net>, <edumazet@google.com>, <kafai@fb.com>,
	<kuba@kernel.org>, <kuni1840@gmail.com>, <kuniyu@amazon.co.jp>,
	<linux-kernel@vger.kernel.org>, <netdev@vger.kernel.org>
Subject: Re: [PATCH v4 bpf-next 00/11] Socket migration for SO_REUSEPORT.
Date: Wed, 28 Apr 2021 17:18:30 +0900	[thread overview]
Message-ID: <20210428081830.2292-1-kuniyu@amazon.co.jp> (raw)
In-Reply-To: <CAHo-Ooz252rnWZ=9k6nO0vjGKFkQDoaLxZ1jxiTomtckq9DbYA@mail.gmail.com>

From:   Maciej Żenczykowski <zenczykowski@gmail.com>
Date:   Tue, 27 Apr 2021 15:00:12 -0700
> On Tue, Apr 27, 2021 at 2:55 PM Maciej Żenczykowski
> <zenczykowski@gmail.com> wrote:
> > On Mon, Apr 26, 2021 at 8:47 PM Kuniyuki Iwashima <kuniyu@amazon.co.jp> wrote:
> > > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > > accept connections evenly. However, there is a defect in the current
> > > implementation [1]. When a SYN packet is received, the connection is tied
> > > to a listening socket. Accordingly, when the listener is closed, in-flight
> > > requests during the three-way handshake and child sockets in the accept
> > > queue are dropped even if other listeners on the same port could accept
> > > such connections.
> > >
> > > This situation can happen when various server management tools restart
> > > server processes (such as nginx). For instance, when we change the nginx
> > > configuration and restart it, it spins up new workers that respect the
> > > new configuration and closes all listeners on the old workers, so the
> > > in-flight final ACK of the 3WHS is answered with a RST.
> >
> > This is IMHO a userspace bug.

I do not think so.

If the kernel selected another listener for such incoming connections, they
could be accept()ed. There is no way for userspace to change this behaviour
without an in-kernel tool, namely eBPF. A feature that fails stochastically
because of kernel behaviour cannot be a userspace bug.


> >
> > You should never be closing or creating new SO_REUSEPORT sockets on a
> > running server (listening port).
> >
> > There's at least 3 ways to accomplish this.
> >
> > One involves a shim parent process that takes care of creating the
> > sockets (without close-on-exec),
> > then fork-exec's the actual server process[es] (which will use the
> > already opened listening fds),
> > and can thus re-fork-exec a new child while using the same set of sockets.
> > Here the old server can terminate before the new one starts.
> >
> > (one could even envision systemd being modified to support this...)
> >
> > The second involves the old running server fork-execing the new server
> > and handing off the non-CLOEXEC sockets that way.
> 
> (this doesn't even need to be fork-exec -- can just be exec -- and is
> potentially easier)
> 
> > The third approach involves unix fd passing of sockets to hand off the
> > listening sockets from the old process/thread(s) to the new
> > process/thread(s).  Once handed off the old server can stop accept'ing
> > on the listening sockets and close them (the real copies are in the
> > child), finish processing any still active connections (or time them
> 
> (this doesn't actually need to be a child, in can be an entirely new
> parallel instance of the server,
> potentially running in an entirely new container/cgroup setup, though
> in the same network namespace)
> 
> > out) and terminate.
> >
> > Either way you're never creating new SO_REUSEPORT sockets (dup doesn't
> > count), nor closing the final copy of a given socket.
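The third approach relies on SCM_RIGHTS ancillary data over an AF_UNIX
socket. A minimal sketch (helper names are illustrative): the old server
sends the listening fd to the new one, which receives its own reference
before the old copy is closed.

```c
/* Pass a file descriptor between processes over a connected AF_UNIX
 * socket using SCM_RIGHTS. The receiver gets a fresh descriptor that
 * refers to the same open socket, so the listener stays alive across
 * the handoff. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one fd over the AF_UNIX socket 'chan'; returns 0 on success. */
static int send_fd(int chan, int fd)
{
	char ping = 'F';
	struct iovec iov = { .iov_base = &ping, .iov_len = 1 };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg = {0};
	struct cmsghdr *cmsg;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
	return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}

/* Receive one fd from 'chan'; returns the new descriptor or -1. */
static int recv_fd(int chan)
{
	char byte;
	struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg = {0};
	struct cmsghdr *cmsg;
	int fd;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	if (recvmsg(chan, &msg, 0) != 1)
		return -1;
	cmsg = CMSG_FIRSTHDR(&msg);
	if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
		return -1;
	memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
	return fd;
}
```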

Indeed, each approach can be an option, but they all make the application
more complicated. Also, if the process holding the listener fd dies, there
could be downtime.

I do not think any one of these approaches works well everywhere for everyone.


> >
> > This is basically the same thing that was needed not to lose incoming
> > connections in a pre-SO_REUSEPORT world.
> > (no SO_REUSEADDR by itself doesn't prevent an incoming SYN from
> > triggering a RST during the server restart, it just makes the window
> > when RSTs happen shorter)

SO_REUSEPORT makes each process/listener independent, so we need not pass
fds around, which makes the application much simpler. Even with SO_REUSEPORT,
one listener might crash, but losing that listener's connections is more
tolerable than losing all connections at once.

To enjoy such merits, isn't it natural to improve the existing feature in
this post-SO_REUSEPORT world?


> >
> > This was from day one (I reported to Tom and worked with him on the
> > very initial distribution function) envisioned to work like this,
> > and we (Google) have always used it with unix fd handoff to support
> > transparent restart.


Thread overview: 29+ messages
2021-04-27  3:46 [PATCH v4 bpf-next 00/11] Socket migration for SO_REUSEPORT Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 01/11] net: Introduce net.ipv4.tcp_migrate_req Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 02/11] tcp: Add num_closed_socks to struct sock_reuseport Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 04/11] tcp: Add reuseport_migrate_sock() to select a new listener Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 05/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 06/11] tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs Kuniyuki Iwashima
2021-05-05  4:56   ` Martin KaFai Lau
2021-05-05 23:16     ` Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 07/11] tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 08/11] bpf: Support BPF_FUNC_get_socket_cookie() for BPF_PROG_TYPE_SK_REUSEPORT Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 09/11] bpf: Support socket migration by eBPF Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 10/11] libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT Kuniyuki Iwashima
2021-04-27  3:46 ` [PATCH v4 bpf-next 11/11] bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE Kuniyuki Iwashima
2021-05-05  5:14   ` Martin KaFai Lau
2021-05-05 23:19     ` Kuniyuki Iwashima
2021-04-27 16:38 ` [PATCH v4 bpf-next 00/11] Socket migration for SO_REUSEPORT Jason Baron
2021-04-28  1:27   ` Martin KaFai Lau
2021-04-28 14:18     ` Eric Dumazet
2021-04-28 15:49       ` Kuniyuki Iwashima
2021-04-28  8:13   ` Kuniyuki Iwashima
2021-04-28 14:44     ` Jason Baron
2021-04-28 15:52       ` Kuniyuki Iwashima
2021-04-28 16:33         ` Eric Dumazet
2021-04-29  3:16           ` Kuniyuki Iwashima
2021-05-05  6:54             ` Martin KaFai Lau
2021-04-27 21:55 ` Maciej Żenczykowski
2021-04-27 22:00   ` Maciej Żenczykowski
2021-04-28  8:18     ` Kuniyuki Iwashima [this message]
