On 4/26/21 11:46 PM, Kuniyuki Iwashima wrote:
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation [1]. When a SYN packet is received, the connection is tied
> to a listening socket. Accordingly, when the listener is closed, in-flight
> requests during the three-way handshake and child sockets in the accept
> queue are dropped even if other listeners on the same port could accept
> such connections.
>
> This situation can happen when various server management tools restart
> server (such as nginx) processes. For instance, when we change nginx
> configurations and restart it, it spins up new workers that respect the
> new configuration and closes all listeners on the old workers, resulting
> in the in-flight ACK of the 3WHS being answered with RST.

Hi Kuniyuki,

I had implemented a different approach to this that I wanted to get your
thoughts about. The idea is to use unix sockets and SCM_RIGHTS to pass the
listen fd (or any other fd) around. Currently, if you have an 'old'
webserver that you want to replace with a 'new' webserver, and they are not
running concurrently, you need a separate process to receive the listen fd
and then have that process send the fd on to the new webserver.

So instead what I'm proposing is a 'delayed close' for a unix socket. That
is, one could do:

1) bind a unix socket to the path '/sockets'
2) sendmsg() the listen fd via the unix socket
3) setsockopt() some 'timeout' on the unix socket (maybe 10 seconds or so)
4) exit/close the old webserver and the listen socket
5) start the new webserver
6) create a new unix socket and bind it to '/sockets' (allowed if it has
   MAY_WRITE file permissions)
7) recvmsg() the listen fd

So the idea is that we set a timeout on the unix socket. If the new process
does not start and bind to the unix socket in time, the unix socket simply
closes, thus releasing the listen socket.
However, if the new process does bind in time, it can now call recvmsg()
and use the listen fd as normal. It can then simply continue to use the old
listen fds and/or create new ones and drain the old ones. Thus, the old and
new webservers do not have to run concurrently. This doesn't involve any
changes to the tcp layer and can be used to pass any type of fd. I'm not
sure if it's actually useful for anything else, though.

I'm not sure if this solves your use-case or not, but I thought I'd share
it. One can also inherit the fds as in systemd's socket activation model,
but that again requires another process to hold open the listen fd.

I have a very rough patch (emphasis on rough) that implements this idea,
attached below to explain it better. It would need a bunch of fixups and
it's against an older kernel, but hopefully it gives this direction a
better explanation.

Thanks,

-Jason

>
> To avoid such a situation, users have to know deeply how the kernel
> handles SYN packets and implement connection draining by eBPF [2]:
>
>   1. Stop routing SYN packets to the listener by eBPF.
>   2. Wait for all timers to expire to complete requests.
>   3. Accept connections until EAGAIN, then close the listener.
>
> or
>
>   1. Start counting SYN packets and accept() syscalls using the eBPF map.
>   2. Stop routing SYN packets.
>   3. Accept connections up to the count, then close the listener.
>
> In either way, we cannot close a listener immediately. However, ideally,
> the application need not drain the not-yet-accepted sockets, because the
> 3WHS and tying a connection to a listener are just kernel behaviour. The
> root cause is within the kernel, so the issue should be addressed in
> kernel space and should not be visible to user space. This patchset fixes
> it so that users need not take care of the kernel implementation and
> connection draining.
> With this patchset, the kernel redistributes requests and connections
> from a listener to the others in the same reuseport group at/after close
> or shutdown syscalls.
>
> Although some software does connection draining, there are still merits
> in migration. For some security reasons, such as replacing TLS
> certificates, we may want to apply new settings as soon as possible
> and/or we may not be able to wait for connection draining. The sockets in
> the accept queue have not started application sessions yet. So, if we do
> not drain such sockets, they can be handled by the newer listeners and
> could have a longer lifetime. It is difficult to drain all connections in
> every case, but we can decrease such aborted connections by migration. In
> that sense, migration is always better than draining.
>
> Moreover, auto-migration simplifies user-space logic and also works well
> in a case where we cannot modify and rebuild a server program to
> implement the workaround.
>
> Note that the source and destination listeners MUST have the same
> settings at the socket API level; otherwise, applications may face
> inconsistency and errors. In such a case, we have to use an eBPF program
> to select a specific listener or to cancel migration.
>
> Special thanks to Martin KaFai Lau for bouncing ideas and exchanging code
> snippets along the way.
>
>
> Link:
> [1] The SO_REUSEPORT socket option
> https://lwn.net/Articles/542629/
>
> [2] Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as drain mode
> https://lore.kernel.org/netdev/1458828813.10868.65.camel@edumazet-glaptop3.roam.corp.google.com/
>
>
> Changelog:
> v4:
>   * Make some functions and variables 'static' in the selftest
>   * Remove 'scalability' from the cover letter because it is not the
>     primary reason to use SO_REUSEPORT
>
> v3:
> https://lore.kernel.org/bpf/20210420154140.80034-1-kuniyu@amazon.co.jp/
>   * Add the sysctl back for reuseport_grow()
>   * Add helper functions to manage socks[]
>   * Separate migration-related logic into functions: reuseport_resurrect(),
>     reuseport_stop_listen_sock(), reuseport_migrate_sock()
>   * Clone the request_sock to be migrated
>   * Migrate requests one by one
>   * Pass the child socket to the eBPF prog
>
> v2:
> https://lore.kernel.org/netdev/20201207132456.65472-1-kuniyu@amazon.co.jp/
>   * Do not save closed sockets in socks[]
>   * Revert 607904c357c61adf20b8fd18af765e501d61a385
>   * Extract inet_csk_reqsk_queue_migrate() into a single patch
>   * Change the spin_lock order to avoid a lockdep warning
>   * Add static to __reuseport_select_sock
>   * Use refcount_inc_not_zero() in reuseport_select_migrated_sock()
>   * Set the default attach type in bpf_prog_load_check_attach()
>   * Define a new proto of BPF_FUNC_get_socket_cookie
>   * Fix the test so that it compiles successfully
>   * Update commit messages
>
> v1:
> https://lore.kernel.org/netdev/20201201144418.35045-1-kuniyu@amazon.co.jp/
>   * Remove the sysctl option
>   * Enable migration if no eBPF program is attached
>   * Add expected_attach_type to check if the eBPF program can migrate sockets
>   * Add a field to tell the migration type to the eBPF program
>   * Support BPF_FUNC_get_socket_cookie to get the cookie of sk
>   * Allocate an empty skb if skb is NULL
>   * Pass req_to_sk(req)->sk_hash because the listener's hash is zero
>   * Update commit messages and cover letter
>
> RFC:
> https://lore.kernel.org/netdev/20201117094023.3685-1-kuniyu@amazon.co.jp/
>
>
> Kuniyuki Iwashima (11):
>   net: Introduce net.ipv4.tcp_migrate_req.
>   tcp: Add num_closed_socks to struct sock_reuseport.
>   tcp: Keep TCP_CLOSE sockets in the reuseport group.
>   tcp: Add reuseport_migrate_sock() to select a new listener.
>   tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.
>   tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.
>   tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.
>   bpf: Support BPF_FUNC_get_socket_cookie() for BPF_PROG_TYPE_SK_REUSEPORT.
>   bpf: Support socket migration by eBPF.
>   libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT.
>   bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.
>
>  Documentation/networking/ip-sysctl.rst        |  20 +
>  include/linux/bpf.h                           |   1 +
>  include/linux/filter.h                        |   2 +
>  include/net/netns/ipv4.h                      |   1 +
>  include/net/request_sock.h                    |   2 +
>  include/net/sock_reuseport.h                  |   9 +-
>  include/uapi/linux/bpf.h                      |  16 +
>  kernel/bpf/syscall.c                          |  13 +
>  net/core/filter.c                             |  23 +-
>  net/core/request_sock.c                       |  38 ++
>  net/core/sock_reuseport.c                     | 337 ++++++++++--
>  net/ipv4/inet_connection_sock.c               | 147 +++++-
>  net/ipv4/inet_hashtables.c                    |   2 +-
>  net/ipv4/sysctl_net_ipv4.c                    |   9 +
>  net/ipv4/tcp_ipv4.c                           |  20 +-
>  net/ipv6/tcp_ipv6.c                           |  14 +-
>  tools/include/uapi/linux/bpf.h                |  16 +
>  tools/lib/bpf/libbpf.c                        |   5 +-
>  tools/testing/selftests/bpf/network_helpers.c |   2 +-
>  tools/testing/selftests/bpf/network_helpers.h |   1 +
>  .../bpf/prog_tests/migrate_reuseport.c        | 484 ++++++++++++++++++
>  .../bpf/progs/test_migrate_reuseport.c        |  51 ++
>  22 files changed, 1151 insertions(+), 62 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_migrate_reuseport.c