* [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
@ 2021-07-19  9:12 Eric Dumazet
  2021-07-19 15:40 ` Wei Wang
  2021-07-19 17:20 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Eric Dumazet @ 2021-07-19  9:12 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski
  Cc: netdev, Eric Dumazet, Eric Dumazet, Wei Wang, Yuchung Cheng,
	Neal Cardwell

From: Eric Dumazet <edumazet@google.com>

tfo_active_disable_stamp is read and written locklessly.
We need to annotate these accesses appropriately.

Then, we need to perform the atomic_inc(tfo_active_disable_times)
after the timestamp has been updated, and thus add barriers
to make sure tcp_fastopen_active_should_disable() won't read
a stale timestamp.
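
For readers less familiar with this pattern, below is a minimal
user-space sketch of the same publish/observe ordering, written with
C11 atomics rather than the kernel's WRITE_ONCE()/smp_mb__before_atomic()
and smp_rmb()/READ_ONCE() primitives. The names, the time()-based stamp
and the 3600s window are illustrative assumptions, not kernel APIs:

#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

static _Atomic unsigned long disable_stamp;  /* stands in for tfo_active_disable_stamp */
static atomic_int disable_times;             /* stands in for tfo_active_disable_times */

static unsigned long now_seconds(void)
{
	return (unsigned long)time(NULL);    /* stands in for jiffies */
}

/* Writer: publish the timestamp before the counter becomes visible,
 * mirroring WRITE_ONCE() + smp_mb__before_atomic() + atomic_inc() below.
 */
static void blackhole_disable(void)
{
	atomic_store_explicit(&disable_stamp, now_seconds(),
			      memory_order_relaxed);
	atomic_fetch_add_explicit(&disable_times, 1, memory_order_release);
}

/* Reader: observe the counter first, then the timestamp it guards,
 * mirroring atomic_read() + smp_rmb() + READ_ONCE() below.
 */
static bool blackhole_active(void)
{
	int times = atomic_load_explicit(&disable_times, memory_order_acquire);

	if (!times)
		return false;
	return now_seconds() - atomic_load_explicit(&disable_stamp,
						    memory_order_relaxed) < 3600;
}

The release increment guarantees that a reader whose acquire load sees
the new count also sees the updated stamp, which is exactly the ordering
the barriers in this patch enforce.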

Fixes: cf1ef3f0719b ("net/tcp_fastopen: Disable active side TFO in certain scenarios")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Wei Wang <weiwan@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
---
 net/ipv4/tcp_fastopen.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index 47c32604d38fca960d2cd56f3588bfd2e390b789..b32af76e21325373126b51423496e3b8d47d97ff 100644
--- a/net/ipv4/tcp_fastopen.c
+++ b/net/ipv4/tcp_fastopen.c
@@ -507,8 +507,15 @@ void tcp_fastopen_active_disable(struct sock *sk)
 {
 	struct net *net = sock_net(sk);
 
+	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
+	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
+
+	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
+	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
+	 */
+	smp_mb__before_atomic();
 	atomic_inc(&net->ipv4.tfo_active_disable_times);
-	net->ipv4.tfo_active_disable_stamp = jiffies;
+
 	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
 }
 
@@ -526,10 +533,16 @@ bool tcp_fastopen_active_should_disable(struct sock *sk)
 	if (!tfo_da_times)
 		return false;
 
+	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
+	smp_rmb();
+
 	/* Limit timeout to max: 2^6 * initial timeout */
 	multiplier = 1 << min(tfo_da_times - 1, 6);
-	timeout = multiplier * tfo_bh_timeout * HZ;
-	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
+
+	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
+	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
+		  multiplier * tfo_bh_timeout * HZ;
+	if (time_before(jiffies, timeout))
 		return true;
 
 	/* Mark check bit so we can check for successful active TFO
-- 
2.32.0.402.g57bb445576-goog
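
An aside, not part of the patch: the hunk in
tcp_fastopen_active_should_disable() above caps the backoff multiplier
at 2^6. Below is a standalone sketch of that arithmetic, assuming a
3600-second base timeout purely for illustration (in the kernel,
tfo_bh_timeout comes from a sysctl):

#include <stdio.h>

int main(void)
{
	unsigned int tfo_bh_timeout = 3600;	/* assumed base, in seconds */
	unsigned int tfo_da_times;

	for (tfo_da_times = 1; tfo_da_times <= 8; tfo_da_times++) {
		unsigned int shift = tfo_da_times - 1;
		unsigned int multiplier = 1u << (shift < 6 ? shift : 6);

		printf("disable #%u -> blackhole window %u s\n",
		       tfo_da_times, multiplier * tfo_bh_timeout);
	}
	return 0;
}

With that assumed base, the window doubles from 3600 s and saturates at
64 * 3600 = 230400 s from the seventh consecutive disable event onward.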



* Re: [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
  2021-07-19  9:12 [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp Eric Dumazet
@ 2021-07-19 15:40 ` Wei Wang
  2021-07-19 17:20 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Wei Wang @ 2021-07-19 15:40 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, netdev, Eric Dumazet,
	Yuchung Cheng, Neal Cardwell

On Mon, Jul 19, 2021 at 2:12 AM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> tfo_active_disable_stamp is read and written locklessly.
> We need to annotate these accesses appropriately.
>
> Then, we need to perform the atomic_inc(tfo_active_disable_times)
> after the timestamp has been updated, and thus add barriers
> to make sure tcp_fastopen_active_should_disable() won't read
> a stale timestamp.
>
> Fixes: cf1ef3f0719b ("net/tcp_fastopen: Disable active side TFO in certain scenarios")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Wei Wang <weiwan@google.com>
> Cc: Yuchung Cheng <ycheng@google.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---

Thanks Eric!

Acked-by: Wei Wang <weiwan@google.com>

>  net/ipv4/tcp_fastopen.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
> index 47c32604d38fca960d2cd56f3588bfd2e390b789..b32af76e21325373126b51423496e3b8d47d97ff 100644
> --- a/net/ipv4/tcp_fastopen.c
> +++ b/net/ipv4/tcp_fastopen.c
> @@ -507,8 +507,15 @@ void tcp_fastopen_active_disable(struct sock *sk)
>  {
>         struct net *net = sock_net(sk);
>
> +       /* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
> +       WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
> +
> +       /* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
> +        * We want net->ipv4.tfo_active_disable_stamp to be updated first.
> +        */
> +       smp_mb__before_atomic();
>         atomic_inc(&net->ipv4.tfo_active_disable_times);
> -       net->ipv4.tfo_active_disable_stamp = jiffies;
> +
>         NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
>  }
>
> @@ -526,10 +533,16 @@ bool tcp_fastopen_active_should_disable(struct sock *sk)
>         if (!tfo_da_times)
>                 return false;
>
> +       /* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
> +       smp_rmb();
> +
>         /* Limit timeout to max: 2^6 * initial timeout */
>         multiplier = 1 << min(tfo_da_times - 1, 6);
> -       timeout = multiplier * tfo_bh_timeout * HZ;
> -       if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
> +
> +       /* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
> +       timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
> +                 multiplier * tfo_bh_timeout * HZ;
> +       if (time_before(jiffies, timeout))
>                 return true;
>
>         /* Mark check bit so we can check for successful active TFO
> --
> 2.32.0.402.g57bb445576-goog
>


* Re: [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
  2021-07-19  9:12 [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp Eric Dumazet
  2021-07-19 15:40 ` Wei Wang
@ 2021-07-19 17:20 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-07-19 17:20 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, kuba, netdev, edumazet, weiwan, ycheng, ncardwell

Hello:

This patch was applied to netdev/net.git (refs/heads/master):

On Mon, 19 Jul 2021 02:12:18 -0700 you wrote:
> From: Eric Dumazet <edumazet@google.com>
> 
> tfo_active_disable_stamp is read and written locklessly.
> We need to annotate these accesses appropriately.
> 
> Then, we need to perform the atomic_inc(tfo_active_disable_times)
> after the timestamp has been updated, and thus add barriers
> to make sure tcp_fastopen_active_should_disable() won't read
> a stale timestamp.
> 
> [...]

Here is the summary with links:
  - [net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
    https://git.kernel.org/netdev/net/c/6f20c8adb181

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




