From: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
To: "David S . Miller" <davem@davemloft.net>,
Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
"Jakub Kicinski" <kuba@kernel.org>
Cc: Willem de Bruijn <willemb@google.com>,
Eric Dumazet <edumazet@google.com>,
Craig Gallek <kraig@google.com>, Paolo Abeni <pabeni@redhat.com>,
<netdev@vger.kernel.org>, Kuniyuki Iwashima <kuniyu@amazon.co.jp>,
"Kuniyuki Iwashima" <kuni1840@gmail.com>,
Benjamin Herrenschmidt <benh@amazon.com>,
<osa-contribution-log@amazon.com>
Subject: [PATCH net 2/2] udp: Improve load balancing for SO_REUSEPORT.
Date: Tue, 21 Jul 2020 15:15:31 +0900
Message-ID: <20200721061531.94236-3-kuniyu@amazon.co.jp>
In-Reply-To: <20200721061531.94236-1-kuniyu@amazon.co.jp>

Currently, SO_REUSEPORT does not work well if connected sockets are in a
UDP reuseport group.

In that case, reuseport_has_conns() returns true and the result of
reuseport_select_sock() is discarded. Also, since all unconnected sockets
have the same score, only the first unconnected socket in udp_hslot
receives every packet sent to unconnected sockets.

So, the result of reuseport_select_sock() should be used for load
balancing instead.
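As an illustration only (not part of this patch), here is a minimal
userspace sketch of how such a group comes about: several UDP sockets
bind the same address with SO_REUSEPORT, and some of them later
connect() to a peer. The port and peer addresses below are arbitrary,
and error checking is omitted for brevity.

  #include <arpa/inet.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
  	struct sockaddr_in addr, peer;
  	int i, on = 1, fds[5];

  	memset(&addr, 0, sizeof(addr));
  	addr.sin_family = AF_INET;
  	addr.sin_addr.s_addr = htonl(INADDR_ANY);
  	addr.sin_port = htons(7777);	/* arbitrary port */

  	/* All five sockets join the same reuseport group. */
  	for (i = 0; i < 5; i++) {
  		fds[i] = socket(AF_INET, SOCK_DGRAM, 0);
  		setsockopt(fds[i], SOL_SOCKET, SO_REUSEPORT,
  			   &on, sizeof(on));
  		bind(fds[i], (struct sockaddr *)&addr, sizeof(addr));
  	}

  	/* Connecting some members sets has_conns for the whole group,
  	 * so every later lookup takes the scoring path in
  	 * udp4_lib_lookup2().
  	 */
  	memset(&peer, 0, sizeof(peer));
  	peer.sin_family = AF_INET;
  	peer.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  	peer.sin_port = htons(8888);	/* arbitrary peers */
  	connect(fds[2], (struct sockaddr *)&peer, sizeof(peer));
  	peer.sin_port = htons(9999);
  	connect(fds[4], (struct sockaddr *)&peer, sizeof(peer));

  	pause();	/* receive traffic, observe the distribution */
  	return 0;
  }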
One noteworthy point is that unconnected sockets placed after connected
ones in sock_reuseport.socks will receive more packets than the others.
This is because the hash-based selection in reuseport_select_sock()
skips connected (TCP_ESTABLISHED) sockets by advancing to the next
index, so each connected socket's share goes to the next unconnected
socket in the array, wrapping around at the end:

  index | connected | reciprocal_scale | result
  ---------------------------------------------
    0   |    no     |       20%        |  40%
    1   |    no     |       20%        |  20%
    2   |    yes    |       20%        |   0%
    3   |    no     |       20%        |  40%
    4   |    yes    |       20%        |   0%

If most of the sockets in the group are connected, this skew can be a
problem, but it still works better than the current behaviour, where one
socket receives everything.
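The skew in the table can be reproduced with a small userspace
simulation (again an illustration only, not part of this patch). It
reimplements the reciprocal_scale() formula from include/linux/kernel.h
and the connected-socket skip that commit acdcecc61285 added to
reuseport_select_sock(); feeding it uniformly spread hashes is an
assumption made for the demonstration.

  #include <stdint.h>
  #include <stdio.h>

  /* Same formula as reciprocal_scale() in include/linux/kernel.h. */
  static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
  {
  	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
  }

  int main(void)
  {
  	/* Indexes 2 and 4 are connected, as in the table above. */
  	const int connected[5] = { 0, 0, 1, 0, 1 };
  	unsigned long n, hits[5] = { 0 };
  	int i;

  	for (n = 0; n < 1000000; n++) {
  		/* Spread sequential values over the 32-bit range
  		 * (Knuth's multiplicative hash) to mimic a flow hash.
  		 */
  		uint32_t hash = (uint32_t)n * 2654435761u;

  		i = reciprocal_scale(hash, 5);

  		/* reuseport_select_sock() skips established sockets,
  		 * advancing (and wrapping) to the next index.
  		 */
  		while (connected[i])
  			i = (i + 1) % 5;
  		hits[i]++;
  	}

  	for (i = 0; i < 5; i++)	/* expect ~40%, ~20%, 0%, ~40%, 0% */
  		printf("index %d: %.0f%%\n", i, 100.0 * hits[i] / 1000000);
  	return 0;
  }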
Fixes: acdcecc61285 ("udp: correct reuseport selection with connected sockets")
CC: Willem de Bruijn <willemb@google.com>
Reviewed-by: Benjamin Herrenschmidt <benh@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
---
net/ipv4/udp.c | 15 +++++++++------
net/ipv6/udp.c | 15 +++++++++------
2 files changed, 18 insertions(+), 12 deletions(-)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 1b7ebbcae497..99251d3c70d0 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -416,7 +416,7 @@ static struct sock *udp4_lib_lookup2(struct net *net,
struct udp_hslot *hslot2,
struct sk_buff *skb)
{
- struct sock *sk, *result;
+ struct sock *sk, *result, *reuseport_result;
int score, badness;
u32 hash = 0;
@@ -426,17 +426,20 @@ static struct sock *udp4_lib_lookup2(struct net *net,
score = compute_score(sk, net, saddr, sport,
daddr, hnum, dif, sdif);
if (score > badness) {
+ reuseport_result = NULL;
+
if (sk->sk_reuseport &&
sk->sk_state != TCP_ESTABLISHED) {
hash = udp_ehashfn(net, daddr, hnum,
saddr, sport);
- result = reuseport_select_sock(sk, hash, skb,
- sizeof(struct udphdr));
- if (result && !reuseport_has_conns(sk, false))
- return result;
+ reuseport_result = reuseport_select_sock(sk, hash, skb,
+ sizeof(struct udphdr));
+ if (reuseport_result && !reuseport_has_conns(sk, false))
+ return reuseport_result;
}
+
+ result = reuseport_result ? : sk;
badness = score;
- result = sk;
}
}
return result;
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 7d4151747340..9503c87ac0b3 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -148,7 +148,7 @@ static struct sock *udp6_lib_lookup2(struct net *net,
int dif, int sdif, struct udp_hslot *hslot2,
struct sk_buff *skb)
{
- struct sock *sk, *result;
+ struct sock *sk, *result, *reuseport_result;
int score, badness;
u32 hash = 0;
@@ -158,17 +158,20 @@ static struct sock *udp6_lib_lookup2(struct net *net,
score = compute_score(sk, net, saddr, sport,
daddr, hnum, dif, sdif);
if (score > badness) {
+ reuseport_result = NULL;
+
if (sk->sk_reuseport &&
sk->sk_state != TCP_ESTABLISHED) {
hash = udp6_ehashfn(net, daddr, hnum,
saddr, sport);
- result = reuseport_select_sock(sk, hash, skb,
- sizeof(struct udphdr));
- if (result && !reuseport_has_conns(sk, false))
- return result;
+ reuseport_result = reuseport_select_sock(sk, hash, skb,
+ sizeof(struct udphdr));
+ if (reuseport_result && !reuseport_has_conns(sk, false))
+ return reuseport_result;
}
- result = sk;
+
+ result = reuseport_result ? : sk;
badness = score;
}
}
--
2.17.2 (Apple Git-113)