netdev.vger.kernel.org archive mirror
* [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
@ 2023-01-12  6:53 Jason Xing
  2023-01-14  2:14 ` Jason Xing
  2023-01-14  9:45 ` Eric Dumazet
  0 siblings, 2 replies; 8+ messages in thread
From: Jason Xing @ 2023-01-12  6:53 UTC
  To: edumazet, davem, yoshfuji, dsahern, kuba, pabeni
  Cc: netdev, linux-kernel, kerneljasonxing, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

While one CPU is looking up the right socket in the ehash table,
another CPU may have already deleted the request socket and be about
to add (or be adding) the full socket to the table. During that
window a lookup can miss both of them, even though the chance is
small.

Here is a call-trace diagram of the server side:
   CPU 0                           CPU 1
   -----                           -----
tcp_v4_rcv()                  syn_recv_sock()
                            inet_ehash_insert()
                            -> sk_nulls_del_node_init_rcu(osk)
__inet_lookup_established()
                            -> __sk_nulls_add_node_rcu(sk, list)

Note that CPU 0 is receiving data sent after the final ACK of the
3-way handshake, while CPU 1 is still handling that final ACK.

Why could this be a real problem?
It happens only when the final ACK and the first data segment are
received by different CPUs. The server, receiving data with the ACK
flag set, tries to find the proper established socket in the ehash
table, but fails as the diagram above shows. It then falls back to a
listener socket and sends a RST, because it finds an ACK flag in the
skb (data), following the RST rules of RFC 793.

Many thanks to Eric for his great help from beginning to end.

Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 net/ipv4/inet_hashtables.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 24a38b56fab9..18f88cb4efcb 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+		if (sk_hashed(osk))
+			/* Before deleting the node, we insert a new one to
+			 * make sure that the lookup process will not miss
+			 * either of them and that at least one node exists
+			 * in the ehash table at all times. Otherwise, a
+			 * lookup could briefly find nothing in the table.
+			 */
+			__sk_nulls_add_node_rcu(sk, list);
 		ret = sk_nulls_del_node_init_rcu(osk);
+		goto unlock;
 	} else if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
@@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
+unlock:
 	spin_unlock(lock);
 
 	return ret;
-- 
2.37.3
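
To make the window concrete, here is a minimal userspace sketch
(hypothetical code, not part of the patch) that models a single ehash
bucket: a writer replaces the old node (osk) with a new one (sk) while a
reader keeps looking the key up. With the old delete-then-insert order
the bucket is briefly empty and lookups can miss; with the patch's
insert-then-delete order the bucket is never empty.

/* Build: cc -O2 -pthread ehash_race.c && ./a.out */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define KEY   42
#define ITERS 500000

struct node {
	int key;
	_Atomic(struct node *) next;
};

static _Atomic(struct node *) head;	/* one hash bucket */
static atomic_bool stop;
static atomic_long misses;

static void *reader(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		bool found = false;
		struct node *n;

		for (n = atomic_load(&head); n; n = atomic_load(&n->next))
			if (n->key == KEY)
				found = true;
		if (!found)
			atomic_fetch_add(&misses, 1);
	}
	return NULL;
}

static long run(bool insert_before_delete)
{
	struct node *osk = calloc(1, sizeof(*osk));
	pthread_t t;
	int i;

	osk->key = KEY;
	atomic_store(&head, osk);
	atomic_store(&stop, false);
	atomic_store(&misses, 0);
	pthread_create(&t, NULL, reader, NULL);

	for (i = 0; i < ITERS; i++) {
		/* Nodes are intentionally never freed: this sketch has no
		 * safe memory reclamation (the kernel relies on RCU). */
		struct node *sk = calloc(1, sizeof(*sk));

		sk->key = KEY;
		if (insert_before_delete) {
			/* The patch's order: publish sk, then unlink osk,
			 * so the bucket never goes empty. */
			atomic_store(&sk->next, osk);
			atomic_store(&head, sk);
			atomic_store(&sk->next, NULL);
		} else {
			/* The old order: unlink osk first, then insert sk,
			 * leaving a window where the bucket is empty. */
			atomic_store(&head, NULL);
			atomic_store(&head, sk);
		}
		osk = sk;
	}
	atomic_store(&stop, true);
	pthread_join(t, NULL);
	return atomic_load(&misses);
}

int main(void)
{
	/* Timing dependent: the first count is typically nonzero on a
	 * multicore machine, the second should always be zero. */
	printf("delete-then-insert misses: %ld\n", run(false));
	printf("insert-then-delete misses: %ld\n", run(true));
	return 0;
}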



* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-12  6:53 [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table Jason Xing
@ 2023-01-14  2:14 ` Jason Xing
  2023-01-14  9:26   ` Eric Dumazet
  2023-01-14  9:45 ` Eric Dumazet
  1 sibling, 1 reply; 8+ messages in thread
From: Jason Xing @ 2023-01-14  2:14 UTC
  To: edumazet, davem, yoshfuji, dsahern, pabeni, kuba
  Cc: netdev, linux-kernel, Jason Xing

On Thu, Jan 12, 2023 at 2:54 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> [...]
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")

I extracted one part from the commit 5e0724d027f0 as follows.

@@ -423,30 +423,41 @@ int inet_ehash_insert(struct sock *sk, struct sock *osk)
...
-	__sk_nulls_add_node_rcu(sk, list);
 	if (osk) {
-		WARN_ON(sk->sk_hash != osk->sk_hash);
-		sk_nulls_del_node_init_rcu(osk);
+		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+		ret = sk_nulls_del_node_init_rcu(osk);
 	}
+	if (ret)
+		__sk_nulls_add_node_rcu(sk, list);
...

In this patch, I reverse, or rather restore, the original order of
inserting and deleting that existed before commit 5e0724d027f0, as
Eric suggested.

I believe it does not impact other use cases. I just want this issue
fixed as soon as possible, no matter exactly which patch gets merged
or who ends up writing it, if there is a better one.
Then I can go back to my customers, who complain about this issue
fairly often, and tell them that the kernel community has settled it
completely.

So could someone please take some time to help me review the patch?
It's not complicated. Thank you from the bottom of my heart in
advance.

Jason


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-14  2:14 ` Jason Xing
@ 2023-01-14  9:26   ` Eric Dumazet
  2023-01-14  9:48     ` Jason Xing
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2023-01-14  9:26 UTC
  To: Jason Xing
  Cc: davem, yoshfuji, dsahern, pabeni, kuba, netdev, linux-kernel, Jason Xing

On Sat, Jan 14, 2023 at 3:15 AM Jason Xing <kerneljasonxing@gmail.com> wrote:

>
> So could someone please take some time to help me review the patch?
> It's not complicated. Thank you from the bottom of my heart in
> advance.


Sure.

Please be patient, and accept the fact that maintainers are
overwhelmed by mixes of patches and company work.

In the meantime, can you double-check if the transition from
established to timewait socket is also covered by your patch?


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-12  6:53 [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table Jason Xing
  2023-01-14  2:14 ` Jason Xing
@ 2023-01-14  9:45 ` Eric Dumazet
  2023-01-14 12:05   ` Jason Xing
  1 sibling, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2023-01-14  9:45 UTC
  To: Jason Xing
  Cc: davem, yoshfuji, dsahern, kuba, pabeni, netdev, linux-kernel, Jason Xing

On Thu, Jan 12, 2023 at 7:54 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> [...]
>
> @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         spin_lock(lock);
>         if (osk) {
>                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> +               if (sk_hashed(osk))
> +                       /* Before deleting the node, we insert a new one to
> +                        * make sure that the lookup process will not miss
> +                        * either of them and that at least one node exists
> +                        * in the ehash table at all times. Otherwise, a
> +                        * lookup could briefly find nothing in the table.
> +                        */
> +                       __sk_nulls_add_node_rcu(sk, list);

In our private email exchange, I suggested inserting sk at the _tail_
of the hash bucket.

Inserting it at the _head_ would still leave a race condition, because
a concurrent reader might have already started the bucket traversal,
and would not see 'sk'.

Thanks.
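
Eric's point can be shown deterministically. In the sketch below
(hypothetical userspace code, not from this thread), the bucket holds a
node for another connection followed by osk, and the reader's cursor has
already inspected the first node. A head-inserted sk lands behind the
cursor and is never seen; a tail-inserted sk stays ahead of every
in-flight traversal.

#include <stdio.h>

struct node { const char *name; int key; struct node *next; };

/* Resume a lookup from wherever the reader's cursor already is. */
static const struct node *resume(const struct node *cursor, int key)
{
	for (; cursor; cursor = cursor->next)
		if (cursor->key == key)
			return cursor;
	return NULL;
}

int main(void)
{
	/* Bucket: other -> osk.  The reader has checked 'other' (a
	 * different connection) and is about to follow other->next. */
	struct node osk   = { "osk",   42, NULL };
	struct node other = { "other",  7, &osk };
	const struct node *hit;

	/* Head insertion: sk becomes the new list head, behind the
	 * reader's cursor, and then osk is unlinked. */
	struct node sk_head = { "sk", 42, &other };
	(void)sk_head;		/* the in-flight reader never reaches it */
	other.next = NULL;	/* unlink osk */
	hit = resume(other.next, 42);
	printf("head insert: %s\n", hit ? hit->name : "MISS");

	/* Reset the bucket, then tail insertion: sk is appended after
	 * osk, ahead of the cursor, and then osk is unlinked. */
	other.next = &osk;
	struct node sk_tail = { "sk", 42, NULL };
	osk.next = &sk_tail;
	other.next = &sk_tail;
	hit = resume(other.next, 42);
	printf("tail insert: %s\n", hit ? hit->name : "MISS");
	return 0;
}

This prints "head insert: MISS" and then "tail insert: sk".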


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-14  9:26   ` Eric Dumazet
@ 2023-01-14  9:48     ` Jason Xing
  0 siblings, 0 replies; 8+ messages in thread
From: Jason Xing @ 2023-01-14  9:48 UTC
  To: Eric Dumazet
  Cc: davem, yoshfuji, dsahern, pabeni, kuba, netdev, linux-kernel, Jason Xing

On Sat, Jan 14, 2023 at 5:27 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Sat, Jan 14, 2023 at 3:15 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> >
> > So could someone please take some time to help me review the patch?
> > It's not complicated. Thank you from the bottom of my heart in
> > advance.
>
>
> Sure.
>
> Please be patient, and accept the fact that maintainers are
> overwhelmed by mixes of patches and company work.

I'm sorry; I was too anxious. Thanks for all you have contributed
to the networking code.

>
> In the meantime, can you double-check if the transition from
> established to timewait socket is also covered by your patch?

Yeah, I'll do that.

Thanks,
Jason


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-14  9:45 ` Eric Dumazet
@ 2023-01-14 12:05   ` Jason Xing
  2023-01-14 12:30     ` Eric Dumazet
  0 siblings, 1 reply; 8+ messages in thread
From: Jason Xing @ 2023-01-14 12:05 UTC
  To: Eric Dumazet
  Cc: davem, yoshfuji, dsahern, kuba, pabeni, netdev, linux-kernel, Jason Xing

On Sat, Jan 14, 2023 at 5:45 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Thu, Jan 12, 2023 at 7:54 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
> > [...]
>
> In our private email exchange, I suggested inserting sk at the _tail_
> of the hash bucket.
>

Yes, I noticed that. At the time, I was focused on the race condition
within RCU itself, not the scenario you describe below.

> Inserting it at the _head_ would still leave a race condition, because
> a concurrent reader might have already started the bucket traversal,
> and would not see 'sk'.

Thanks for the detailed explanation; now I see why. I'll switch to
__sk_nulls_add_node_tail_rcu() and send a v2 patch.
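
For reference, a v2 along those lines might look like the sketch below
(hypothetical and untested; it assumes a __sk_nulls_add_node_tail_rcu()
helper mirroring the existing head-insert variant):

@@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+		if (sk_hashed(osk))
+			/* Insert sk at the tail before deleting osk, so
+			 * that a reader already traversing the chain still
+			 * sees at least one of them.
+			 */
+			__sk_nulls_add_node_tail_rcu(sk, list);
 		ret = sk_nulls_del_node_init_rcu(osk);
+		goto unlock;
 	} else if (found_dup_sk) {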

By the way, I checked the removal of the TIME_WAIT socket, which is
covered by this patch. Here is the call trace:
inet_hash_connect()
    -> __inet_hash_connect()
        -> if (sk_unhashed(sk)) {
                inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
                    -> inet_ehash_insert(sk, osk, found_dup_sk);
Therefore, this patch covers the TIME_WAIT case.

Thanks,
Jason


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-14 12:05   ` Jason Xing
@ 2023-01-14 12:30     ` Eric Dumazet
  2023-01-14 12:48       ` Jason Xing
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2023-01-14 12:30 UTC
  To: Jason Xing
  Cc: davem, yoshfuji, dsahern, kuba, pabeni, netdev, linux-kernel, Jason Xing


On Sat, Jan 14, 2023 at 1:06 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> [...]
>
> By the way, I checked the removal of the TIME_WAIT socket, which is
> covered by this patch. Here is the call trace:
> inet_hash_connect()
>     -> __inet_hash_connect()
>         -> if (sk_unhashed(sk)) {
>                 inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
>                     -> inet_ehash_insert(sk, osk, found_dup_sk);
> Therefore, this patch covers the TIME_WAIT case.

This is the path handling the TIME_WAIT ---> ESTABLISH case.

I was referring to the more common opposite case, which is where the
race could possibly happen.

This is inet_twsk_hashdance, and I suspect we want something like:

diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 1d77d992e6e77f7d96bd061be6dbb802c2566b3f..6d681ef52bb24b984a9dbda25b19291fc4393914 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -91,10 +91,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
 }
 EXPORT_SYMBOL_GPL(inet_twsk_put);

-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
                                   struct hlist_nulls_head *list)
 {
-       hlist_nulls_add_head_rcu(&tw->tw_node, list);
+       hlist_nulls_add_tail_rcu(&tw->tw_node, list);
 }

 static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
@@ -147,7 +147,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,

        spin_lock(lock);

-       inet_twsk_add_node_rcu(tw, &ehead->chain);
+       inet_twsk_add_node_tail_rcu(tw, &ehead->chain);

        /* Step 3: Remove SK from hash chain */
        if (__sk_nulls_del_node_init_rcu(sk))


* Re: [PATCH net] tcp: avoid the lookup process failing to get sk in ehash table
  2023-01-14 12:30     ` Eric Dumazet
@ 2023-01-14 12:48       ` Jason Xing
  0 siblings, 0 replies; 8+ messages in thread
From: Jason Xing @ 2023-01-14 12:48 UTC
  To: Eric Dumazet
  Cc: davem, yoshfuji, dsahern, pabeni, netdev, linux-kernel, Jason Xing, kuba

On Sat, Jan 14, 2023 at 8:31 PM Eric Dumazet <edumazet@google.com> wrote:
>
> [...]
>
> This is the path handling the TIME_WAIT ---> ESTABLISH case.
>
> I was referring to the more common opposite case, which is where the
> race could possibly happen.
>
> This is inet_twsk_hashdance, and I suspect we want something like:
>

Thanks, Eric. I learned :)

> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> index 1d77d992e6e77f7d96bd061be6dbb802c2566b3f..6d681ef52bb24b984a9dbda25b19291fc4393914 100644
> --- a/net/ipv4/inet_timewait_sock.c
> +++ b/net/ipv4/inet_timewait_sock.c
> @@ -91,10 +91,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
>  }
>  EXPORT_SYMBOL_GPL(inet_twsk_put);
>
> -static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
> +static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
>                                    struct hlist_nulls_head *list)
>  {
> -       hlist_nulls_add_head_rcu(&tw->tw_node, list);
> +       hlist_nulls_add_tail_rcu(&tw->tw_node, list);
>  }
>
>  static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
> @@ -147,7 +147,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
>
>         spin_lock(lock);
>
> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
> +       inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
>
>         /* Step 3: Remove SK from hash chain */
>         if (__sk_nulls_del_node_init_rcu(sk))

I'll put this part of the code into my next submission and add more
comments about it.

Thanks,
Jason

