* [PATCH net-next] tcp: avoid possible arithmetic overflows
@ 2014-09-20 17:19 Eric Dumazet
  2014-09-20 18:01 ` Joe Perches
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2014-09-20 17:19 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Neal Cardwell, Yuchung Cheng

From: Eric Dumazet <edumazet@google.com>

icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
or more if some sysctl (eg tcp_retries2) are changed.

Better use 64bit to perform icsk_rto << icsk_backoff operations
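
To see the wrap-around concretely, a minimal userspace sketch (illustrative
values only, assuming HZ=1000 so TCP_RTO_MAX is 120000 jiffies; this is not
kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t icsk_rto = 120 * 1000;	/* TCP_RTO_MAX, in jiffies */
	unsigned int icsk_backoff = 16;	/* reachable once tcp_retries2 is raised */

	/* the 32bit shift is reduced mod 2^32 ... */
	uint32_t wrapped = icsk_rto << icsk_backoff;

	/* ... while widening to 64bit before the shift keeps the real value */
	uint64_t wide = (uint64_t)icsk_rto << icsk_backoff;

	printf("32bit: %u\n", wrapped);			   /* 3569352704 */
	printf("64bit: %llu\n", (unsigned long long)wide); /* 7864320000 */
	return 0;
}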

From: Eric Dumazet <edumazet@google.com>
---
 net/ipv4/tcp_input.c  |    7 +++++--
 net/ipv4/tcp_output.c |   13 ++++++-------
 net/ipv4/tcp_timer.c  |    5 +++--
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 02fb66d4a018..1ea3847c62fc 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
 		 * This function is not for random using!
 		 */
 	} else {
+		unsigned long when;
+
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RTO_MAX);
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+					  when, TCP_RTO_MAX);
 	}
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7f1280dcad57..2231b400f3ce 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3279,6 +3279,7 @@ void tcp_send_probe0(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
+	unsigned long when;
 	int err;
 
 	err = tcp_write_wakeup(sk);
@@ -3294,9 +3295,8 @@ void tcp_send_probe0(struct sock *sk)
 		if (icsk->icsk_backoff < sysctl_tcp_retries2)
 			icsk->icsk_backoff++;
 		icsk->icsk_probes_out++;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RTO_MAX);
 	} else {
 		/* If packet was not sent due to local congestion,
 		 * do not backoff and do not remember icsk_probes_out.
@@ -3306,11 +3306,10 @@ void tcp_send_probe0(struct sock *sk)
 		 */
 		if (!icsk->icsk_probes_out)
 			icsk->icsk_probes_out = 1;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff,
-					      TCP_RESOURCE_PROBE_INTERVAL),
-					  TCP_RTO_MAX);
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RESOURCE_PROBE_INTERVAL);
 	}
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
 }
 
 int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index a339e7ba05a4..05e1d0723233 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -180,7 +180,7 @@ static int tcp_write_timeout(struct sock *sk)
 
 		retry_until = sysctl_tcp_retries2;
 		if (sock_flag(sk, SOCK_DEAD)) {
-			const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
+			const int alive = icsk->icsk_rto < TCP_RTO_MAX;
 
 			retry_until = tcp_orphan_retries(sk, alive);
 			do_reset = alive ||
@@ -294,7 +294,8 @@ static void tcp_probe_timer(struct sock *sk)
 	max_probes = sysctl_tcp_retries2;
 
 	if (sock_flag(sk, SOCK_DEAD)) {
-		const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
+		u64 exp_rto = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+		const int alive = exp_rto < TCP_RTO_MAX;
 
 		max_probes = tcp_orphan_retries(sk, alive);
 


* Re: [PATCH net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 17:19 [PATCH net-next] tcp: avoid possible arithmetic overflows Eric Dumazet
@ 2014-09-20 18:01 ` Joe Perches
  2014-09-20 19:46   ` Yuchung Cheng
  0 siblings, 1 reply; 13+ messages in thread
From: Joe Perches @ 2014-09-20 18:01 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, Neal Cardwell, Yuchung Cheng

On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
> 
> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> or more if some sysctl (eg tcp_retries2) are changed.
> 
> Better use 64bit to perform icsk_rto << icsk_backoff operations

Maybe better to use a helper function for this?

something like:

static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
{
	u64 when = (u64)icsk->icsk_rto;

	return when << icsk->icsk_backoff;
}

> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
[]
> @@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
>  		 * This function is not for random using!
>  		 */
>  	} else {
> +		unsigned long when;
> +
> +		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
> +			   (u64)TCP_RTO_MAX);

Maybe:
	u32 when = (u32)min_t(u64, icsk_rto_backoff(icsk), TCP_RTO_MAX);


* Re: [PATCH net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 18:01 ` Joe Perches
@ 2014-09-20 19:46   ` Yuchung Cheng
  2014-09-20 19:55     ` Eric Dumazet
  0 siblings, 1 reply; 13+ messages in thread
From: Yuchung Cheng @ 2014-09-20 19:46 UTC (permalink / raw)
  To: Joe Perches; +Cc: Eric Dumazet, David Miller, netdev, Neal Cardwell

On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
>> From: Eric Dumazet <edumazet@google.com>
>>
>> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
>> or more if some sysctl (eg tcp_retries2) are changed.
>>
>> Better use 64bit to perform icsk_rto << icsk_backoff operations
>
> Maybe better to use a helper function for this?
>
> something like:
>
> static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
>         u64 when = (u64)icsk->icsk_rto;
>
>         return when << icsk->icsk_backoff;
> }
Thanks for the fix Eric. I second Joe's idea to use a helper function.

>
>> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> []
>> @@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
>>                * This function is not for random using!
>>                */
>>       } else {
>> +             unsigned long when;
>> +
>> +             when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
>> +                        (u64)TCP_RTO_MAX);
>
> Maybe:
>         u32 when = (u32)min_t(u64, icsk_rto_backoff(icsk), TCP_RTO_MAX);
>
>


* Re: [PATCH net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 19:46   ` Yuchung Cheng
@ 2014-09-20 19:55     ` Eric Dumazet
  2014-09-20 20:19       ` Joe Perches
  2014-09-22 11:13       ` [PATCH " David Laight
  0 siblings, 2 replies; 13+ messages in thread
From: Eric Dumazet @ 2014-09-20 19:55 UTC (permalink / raw)
  To: Yuchung Cheng; +Cc: Joe Perches, David Miller, netdev, Neal Cardwell

On Sat, 2014-09-20 at 12:46 -0700, Yuchung Cheng wrote:
> On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> > On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> >> From: Eric Dumazet <edumazet@google.com>
> >>
> >> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> >> or more if some sysctl (eg tcp_retries2) are changed.
> >>
> >> Better use 64bit to perform icsk_rto << icsk_backoff operations
> >
> > Maybe better to use a helper function for this?
> >
> > something like:
> >
> > static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> > {
> >         u64 when = (u64)icsk->icsk_rto;
> >
> >         return when << icsk->icsk_backoff;
> > }
> Thanks for the fix Eric. I second Joe's idea to use a helper function.
> 

Yep.

Given the timeout functions in the kernel use 'unsigned long', I prefer
to keep the u64 magic private to this helper.

I will probably use 

static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
{
	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;

	return min_t(u64, when, ~0UL);
}

On 64bit arches, the min_t() should be a nop.
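
As a standalone illustration (plain C stand-ins, assuming the usual LP64 and
ILP32 models where unsigned long is 64 and 32 bits wide respectively):

#include <stdint.h>
#include <limits.h>

/* userspace stand-in for the helper above, with min_t(u64, ...) spelled out */
unsigned long rto_backoff_demo(uint32_t icsk_rto, unsigned int icsk_backoff)
{
	uint64_t when = (uint64_t)icsk_rto << icsk_backoff;

	/* On LP64, ULONG_MAX == UINT64_MAX, so the clamp never fires;
	 * on ILP32 it caps 'when' at 0xffffffff before the narrowing return.
	 */
	return when < ULONG_MAX ? (unsigned long)when : ULONG_MAX;
}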


* Re: [PATCH net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 19:55     ` Eric Dumazet
@ 2014-09-20 20:19       ` Joe Perches
  2014-09-21  0:29         ` [PATCH v2 " Eric Dumazet
  2014-09-22 11:13       ` [PATCH " David Laight
  1 sibling, 1 reply; 13+ messages in thread
From: Joe Perches @ 2014-09-20 20:19 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Yuchung Cheng, David Miller, netdev, Neal Cardwell

On Sat, 2014-09-20 at 12:55 -0700, Eric Dumazet wrote:
> On Sat, 2014-09-20 at 12:46 -0700, Yuchung Cheng wrote:
> > On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> > > On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> > >> From: Eric Dumazet <edumazet@google.com>
> > >>
> > >> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> > >> or more if some sysctl (eg tcp_retries2) are changed.
> > >>
> > >> Better use 64bit to perform icsk_rto << icsk_backoff operations
> > >
> > > Maybe better to use a helper function for this?
> > >
> > > something like:
> > >
> > > static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> > > {
> > >         u64 when = (u64)icsk->icsk_rto;
> > >
> > >         return when << icsk->icsk_backoff;
> > > }
> > Thanks for the fix Eric. I second Joe's idea to use a helper function.
> > 
> 
> Yep.
> 
> Given the timeout functions in the kernel use 'unsigned long', I prefer
> to keep the u64 magic private to this helper.
> 
> I will probably use 
> 
> static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
> 	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
> 
> 	return min_t(u64, when, ~0UL);

OK.

I think an explicit cast to unsigned long after the min_t
to avoid the implicit downcast would be better

	return (unsigned long)min_t(etc...)

so that no warning is produced if someone does make W=3


* [PATCH v2 net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 20:19       ` Joe Perches
@ 2014-09-21  0:29         ` Eric Dumazet
  2014-09-21 17:46           ` Yuchung Cheng
  2014-09-22  3:56           ` [PATCH v2 " Joe Perches
  0 siblings, 2 replies; 13+ messages in thread
From: Eric Dumazet @ 2014-09-21  0:29 UTC (permalink / raw)
  To: Joe Perches; +Cc: Yuchung Cheng, David Miller, netdev, Neal Cardwell

From: Eric Dumazet <edumazet@google.com>

icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
or more if some sysctl (eg tcp_retries2) are changed.

Better use 64bit to perform icsk_rto << icsk_backoff operations

As Joe Perches suggested, add a helper for this.

From: Eric Dumazet <edumazet@google.com>
---
 include/net/inet_connection_sock.h |    9 +++++++++
 net/ipv4/tcp_input.c               |    5 +++--
 net/ipv4/tcp_output.c              |   13 ++++++-------
 net/ipv4/tcp_timer.c               |    4 ++--
 4 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 5fbe6568c3cf..7551b402d6fe 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -242,6 +242,15 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
 #endif
 }
 
+static inline unsigned long
+inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
+		     unsigned long max_when)
+{
+        u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+
+        return (unsigned long)min_t(u64, when, max_when);
+}
+
 struct sock *inet_csk_accept(struct sock *sk, int flags, int *err);
 
 struct request_sock *inet_csk_search_req(const struct sock *sk,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 02fb66d4a018..13f3da4762e3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3208,9 +3208,10 @@ static void tcp_ack_probe(struct sock *sk)
 		 * This function is not for random using!
 		 */
 	} else {
+		unsigned long when = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
+
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+					  when, TCP_RTO_MAX);
 	}
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7f1280dcad57..8c61a7c0c889 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3279,6 +3279,7 @@ void tcp_send_probe0(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
+	unsigned long probe_max;
 	int err;
 
 	err = tcp_write_wakeup(sk);
@@ -3294,9 +3295,7 @@ void tcp_send_probe0(struct sock *sk)
 		if (icsk->icsk_backoff < sysctl_tcp_retries2)
 			icsk->icsk_backoff++;
 		icsk->icsk_probes_out++;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+		probe_max = TCP_RTO_MAX;
 	} else {
 		/* If packet was not sent due to local congestion,
 		 * do not backoff and do not remember icsk_probes_out.
@@ -3306,11 +3305,11 @@ void tcp_send_probe0(struct sock *sk)
 		 */
 		if (!icsk->icsk_probes_out)
 			icsk->icsk_probes_out = 1;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff,
-					      TCP_RESOURCE_PROBE_INTERVAL),
-					  TCP_RTO_MAX);
+		probe_max = TCP_RESOURCE_PROBE_INTERVAL;
 	}
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
+				  inet_csk_rto_backoff(icsk, probe_max),
+				  TCP_RTO_MAX);
 }
 
 int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index a339e7ba05a4..b24360f6e293 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -180,7 +180,7 @@ static int tcp_write_timeout(struct sock *sk)
 
 		retry_until = sysctl_tcp_retries2;
 		if (sock_flag(sk, SOCK_DEAD)) {
-			const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
+			const int alive = icsk->icsk_rto < TCP_RTO_MAX;
 
 			retry_until = tcp_orphan_retries(sk, alive);
 			do_reset = alive ||
@@ -294,7 +294,7 @@ static void tcp_probe_timer(struct sock *sk)
 	max_probes = sysctl_tcp_retries2;
 
 	if (sock_flag(sk, SOCK_DEAD)) {
-		const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
+		const int alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
 
 		max_probes = tcp_orphan_retries(sk, alive);
 


* Re: [PATCH v2 net-next] tcp: avoid possible arithmetic overflows
  2014-09-21  0:29         ` [PATCH v2 " Eric Dumazet
@ 2014-09-21 17:46           ` Yuchung Cheng
  2014-09-22 12:42             ` Eric Dumazet
  2014-09-22  3:56           ` [PATCH v2 " Joe Perches
  1 sibling, 1 reply; 13+ messages in thread
From: Yuchung Cheng @ 2014-09-21 17:46 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Joe Perches, David Miller, netdev, Neal Cardwell

On Sat, Sep 20, 2014 at 5:29 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
> or more if some sysctl (eg tcp_retries2) are changed.
>
> Better use 64bit to perform icsk_rto << icsk_backoff operations
>
> As Joe Perches suggested, add a helper for this.
>
> From: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>

Maybe also use the new helper for the backoff in tcp_v4_err()?

> ---
>  include/net/inet_connection_sock.h |    9 +++++++++
>  net/ipv4/tcp_input.c               |    5 +++--
>  net/ipv4/tcp_output.c              |   13 ++++++-------
>  net/ipv4/tcp_timer.c               |    4 ++--
>  4 files changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
> index 5fbe6568c3cf..7551b402d6fe 100644
> --- a/include/net/inet_connection_sock.h
> +++ b/include/net/inet_connection_sock.h
> @@ -242,6 +242,15 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
>  #endif
>  }
>
> +static inline unsigned long
> +inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
> +                    unsigned long max_when)
> +{
> +        u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
> +
> +        return (unsigned long)min_t(u64, when, max_when);
> +}
> +
>  struct sock *inet_csk_accept(struct sock *sk, int flags, int *err);
>
>  struct request_sock *inet_csk_search_req(const struct sock *sk,
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 02fb66d4a018..13f3da4762e3 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -3208,9 +3208,10 @@ static void tcp_ack_probe(struct sock *sk)
>                  * This function is not for random using!
>                  */
>         } else {
> +               unsigned long when = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
> +
>                 inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
> -                                         min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
> -                                         TCP_RTO_MAX);
> +                                         when, TCP_RTO_MAX);
>         }
>  }
>
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index 7f1280dcad57..8c61a7c0c889 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -3279,6 +3279,7 @@ void tcp_send_probe0(struct sock *sk)
>  {
>         struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
> +       unsigned long probe_max;
>         int err;
>
>         err = tcp_write_wakeup(sk);
> @@ -3294,9 +3295,7 @@ void tcp_send_probe0(struct sock *sk)
>                 if (icsk->icsk_backoff < sysctl_tcp_retries2)
>                         icsk->icsk_backoff++;
>                 icsk->icsk_probes_out++;
> -               inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
> -                                         min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
> -                                         TCP_RTO_MAX);
> +               probe_max = TCP_RTO_MAX;
>         } else {
>                 /* If packet was not sent due to local congestion,
>                  * do not backoff and do not remember icsk_probes_out.
> @@ -3306,11 +3305,11 @@ void tcp_send_probe0(struct sock *sk)
>                  */
>                 if (!icsk->icsk_probes_out)
>                         icsk->icsk_probes_out = 1;
> -               inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
> -                                         min(icsk->icsk_rto << icsk->icsk_backoff,
> -                                             TCP_RESOURCE_PROBE_INTERVAL),
> -                                         TCP_RTO_MAX);
> +               probe_max = TCP_RESOURCE_PROBE_INTERVAL;
>         }
> +       inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
> +                                 inet_csk_rto_backoff(icsk, probe_max),
> +                                 TCP_RTO_MAX);
>  }
>
>  int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
> diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
> index a339e7ba05a4..b24360f6e293 100644
> --- a/net/ipv4/tcp_timer.c
> +++ b/net/ipv4/tcp_timer.c
> @@ -180,7 +180,7 @@ static int tcp_write_timeout(struct sock *sk)
>
>                 retry_until = sysctl_tcp_retries2;
>                 if (sock_flag(sk, SOCK_DEAD)) {
> -                       const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
> +                       const int alive = icsk->icsk_rto < TCP_RTO_MAX;
>
>                         retry_until = tcp_orphan_retries(sk, alive);
>                         do_reset = alive ||
> @@ -294,7 +294,7 @@ static void tcp_probe_timer(struct sock *sk)
>         max_probes = sysctl_tcp_retries2;
>
>         if (sock_flag(sk, SOCK_DEAD)) {
> -               const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
> +               const int alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
>
>                 max_probes = tcp_orphan_retries(sk, alive);
>
>
>


* Re: [PATCH v2 net-next] tcp: avoid possible arithmetic overflows
  2014-09-21  0:29         ` [PATCH v2 " Eric Dumazet
  2014-09-21 17:46           ` Yuchung Cheng
@ 2014-09-22  3:56           ` Joe Perches
  1 sibling, 0 replies; 13+ messages in thread
From: Joe Perches @ 2014-09-22  3:56 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Yuchung Cheng, David Miller, netdev, Neal Cardwell

On Sat, 2014-09-20 at 17:29 -0700, Eric Dumazet wrote:
> icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
> or more if some sysctl (eg tcp_retries2) are changed.
> 
> Better use 64bit to perform icsk_rto << icsk_backoff operations

Thanks Eric.

> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
[]
> @@ -3208,9 +3208,10 @@ static void tcp_ack_probe(struct sock *sk)
>  		 * This function is not for random using!
>  		 */
>  	} else {
> +		unsigned long when = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
> +
>  		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
> -					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
> -					  TCP_RTO_MAX);
> +					  when, TCP_RTO_MAX);

Pity about the possible extra compare to TCP_RTO_MAX here.
I hope gcc is smart enough to optimize it away.
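
Roughly, the two compares in question (a simplified userspace sketch; it
assumes inet_csk_reset_xmit_timer bounds its 'when' argument by the max_when
it is given, which is where the second compare comes from):

#include <stdio.h>
#include <stdint.h>

#define TCP_RTO_MAX	(120 * 1000)	/* jiffies, assuming HZ = 1000 */

int main(void)
{
	uint64_t backed_off = (uint64_t)TCP_RTO_MAX << 10;

	/* clamp #1: what inet_csk_rto_backoff(icsk, TCP_RTO_MAX) returns */
	unsigned long when = backed_off < TCP_RTO_MAX ? backed_off : TCP_RTO_MAX;

	/* clamp #2: the check inet_csk_reset_xmit_timer applies against its
	 * max_when argument (TCP_RTO_MAX here); after clamp #1 this branch
	 * can never be taken, which is the compare gcc would hopefully
	 * eliminate.
	 */
	if (when > TCP_RTO_MAX)
		when = TCP_RTO_MAX;

	printf("when = %lu\n", when);
	return 0;
}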


* RE: [PATCH net-next] tcp: avoid possible arithmetic overflows
  2014-09-20 19:55     ` Eric Dumazet
  2014-09-20 20:19       ` Joe Perches
@ 2014-09-22 11:13       ` David Laight
  1 sibling, 0 replies; 13+ messages in thread
From: David Laight @ 2014-09-22 11:13 UTC (permalink / raw)
  To: 'Eric Dumazet', Yuchung Cheng
  Cc: Joe Perches, David Miller, netdev, Neal Cardwell

From: Eric Dumazet
> On Sat, 2014-09-20 at 12:46 -0700, Yuchung Cheng wrote:
> > On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> > > On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> > >> From: Eric Dumazet <edumazet@google.com>
> > >>
> > >> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> > >> or more if some sysctl (eg tcp_retries2) are changed.
> > >>
> > >> Better use 64bit to perform icsk_rto << icsk_backoff operations
> > >
> > > Maybe better to use a helper function for this?
> > >
> > > something like:
> > >
> > > static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> > > {
> > >         u64 when = (u64)icsk->icsk_rto;
> > >
> > >         return when << icsk->icsk_backoff;
> > > }
> > Thanks for the fix Eric. I second Joe's idea to use a helper function.
> >
> 
> Yep.
> 
> Given the timeout functions in the kernel use 'unsigned long', I prefer
> to keep the u64 magic private to this helper.
> 
> I will probably use
> 
> static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
> 	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
> 
> 	return min_t(u64, when, ~0UL);
> }
> 
> On 64bit arches, the min_t() should be a nop.

Isn't it likely to generate a 'condition is always true/false' warning?
(that may depend on whether min_t contains < or <=)

	David



* Re: [PATCH v2 net-next] tcp: avoid possible arithmetic overflows
  2014-09-21 17:46           ` Yuchung Cheng
@ 2014-09-22 12:42             ` Eric Dumazet
  2014-09-22 20:19               ` [PATCH v3 " Eric Dumazet
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2014-09-22 12:42 UTC (permalink / raw)
  To: Yuchung Cheng; +Cc: Joe Perches, David Miller, netdev, Neal Cardwell

On Sun, 2014-09-21 at 10:46 -0700, Yuchung Cheng wrote:
> On Sat, Sep 20, 2014 at 5:29 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > From: Eric Dumazet <edumazet@google.com>
> >
> > icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
> > or more if some sysctl (eg tcp_retries2) are changed.
> >
> > Better use 64bit to perform icsk_rto << icsk_backoff operations
> >
> > As Joe Perches suggested, add a helper for this.
> >
> > From: Eric Dumazet <edumazet@google.com>
> Acked-by: Yuchung Cheng <ycheng@google.com>
> 
> Maybe also use the new helper for the backoff in tcp_v4_err()?

You mean the following bit? If yes, I'll incorporate it in v3.

Thanks

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 006b045716d8..3b2e49cb2b61 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -430,9 +430,9 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
 			break;
 
 		icsk->icsk_backoff--;
-		inet_csk(sk)->icsk_rto = (tp->srtt_us ? __tcp_set_rto(tp) :
-			TCP_TIMEOUT_INIT) << icsk->icsk_backoff;
-		tcp_bound_rto(sk);
+		icsk->icsk_rto = tp->srtt_us ? __tcp_set_rto(tp) :
+					       TCP_TIMEOUT_INIT;
+		icsk->icsk_rto = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
 
 		skb = tcp_write_queue_head(sk);
 		BUG_ON(!skb);


* [PATCH v3 net-next] tcp: avoid possible arithmetic overflows
  2014-09-22 12:42             ` Eric Dumazet
@ 2014-09-22 20:19               ` Eric Dumazet
  2014-09-22 20:27                 ` David Miller
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2014-09-22 20:19 UTC (permalink / raw)
  To: Yuchung Cheng; +Cc: Joe Perches, David Miller, netdev, Neal Cardwell

From: Eric Dumazet <edumazet@google.com>

icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
or more if some sysctl (eg tcp_retries2) are changed.

Better use 64bit to perform icsk_rto << icsk_backoff operations

As Joe Perches suggested, add a helper for this.

Yuchung spotted the tcp_v4_err() case.

From: Eric Dumazet <edumazet@google.com>
---
 include/net/inet_connection_sock.h |    9 +++++++++
 net/ipv4/tcp_input.c               |    5 +++--
 net/ipv4/tcp_ipv4.c                |    6 +++---
 net/ipv4/tcp_output.c              |   13 ++++++-------
 net/ipv4/tcp_timer.c               |    4 ++--
 5 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 5fbe6568c3cf..848e85cb5c61 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -242,6 +242,15 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
 #endif
 }
 
+static inline unsigned long
+inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
+		     unsigned long max_when)
+{
+        u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+
+        return (unsigned long)min_t(u64, when, max_when);
+}
+
 struct sock *inet_csk_accept(struct sock *sk, int flags, int *err);
 
 struct request_sock *inet_csk_search_req(const struct sock *sk,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 02fb66d4a018..13f3da4762e3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3208,9 +3208,10 @@ static void tcp_ack_probe(struct sock *sk)
 		 * This function is not for random using!
 		 */
 	} else {
+		unsigned long when = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
+
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+					  when, TCP_RTO_MAX);
 	}
 }
 
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 006b045716d8..3b2e49cb2b61 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -430,9 +430,9 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
 			break;
 
 		icsk->icsk_backoff--;
-		inet_csk(sk)->icsk_rto = (tp->srtt_us ? __tcp_set_rto(tp) :
-			TCP_TIMEOUT_INIT) << icsk->icsk_backoff;
-		tcp_bound_rto(sk);
+		icsk->icsk_rto = tp->srtt_us ? __tcp_set_rto(tp) :
+					       TCP_TIMEOUT_INIT;
+		icsk->icsk_rto = inet_csk_rto_backoff(icsk, TCP_RTO_MAX);
 
 		skb = tcp_write_queue_head(sk);
 		BUG_ON(!skb);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7f1280dcad57..8c61a7c0c889 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3279,6 +3279,7 @@ void tcp_send_probe0(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
+	unsigned long probe_max;
 	int err;
 
 	err = tcp_write_wakeup(sk);
@@ -3294,9 +3295,7 @@ void tcp_send_probe0(struct sock *sk)
 		if (icsk->icsk_backoff < sysctl_tcp_retries2)
 			icsk->icsk_backoff++;
 		icsk->icsk_probes_out++;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+		probe_max = TCP_RTO_MAX;
 	} else {
 		/* If packet was not sent due to local congestion,
 		 * do not backoff and do not remember icsk_probes_out.
@@ -3306,11 +3305,11 @@ void tcp_send_probe0(struct sock *sk)
 		 */
 		if (!icsk->icsk_probes_out)
 			icsk->icsk_probes_out = 1;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff,
-					      TCP_RESOURCE_PROBE_INTERVAL),
-					  TCP_RTO_MAX);
+		probe_max = TCP_RESOURCE_PROBE_INTERVAL;
 	}
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
+				  inet_csk_rto_backoff(icsk, probe_max),
+				  TCP_RTO_MAX);
 }
 
 int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index a339e7ba05a4..b24360f6e293 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -180,7 +180,7 @@ static int tcp_write_timeout(struct sock *sk)
 
 		retry_until = sysctl_tcp_retries2;
 		if (sock_flag(sk, SOCK_DEAD)) {
-			const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
+			const int alive = icsk->icsk_rto < TCP_RTO_MAX;
 
 			retry_until = tcp_orphan_retries(sk, alive);
 			do_reset = alive ||
@@ -294,7 +294,7 @@ static void tcp_probe_timer(struct sock *sk)
 	max_probes = sysctl_tcp_retries2;
 
 	if (sock_flag(sk, SOCK_DEAD)) {
-		const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
+		const int alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX;
 
 		max_probes = tcp_orphan_retries(sk, alive);
 


* Re: [PATCH v3 net-next] tcp: avoid possible arithmetic overflows
  2014-09-22 20:19               ` [PATCH v3 " Eric Dumazet
@ 2014-09-22 20:27                 ` David Miller
  2014-09-22 20:28                   ` Eric Dumazet
  0 siblings, 1 reply; 13+ messages in thread
From: David Miller @ 2014-09-22 20:27 UTC (permalink / raw)
  To: eric.dumazet; +Cc: ycheng, joe, netdev, ncardwell

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Mon, 22 Sep 2014 13:19:44 -0700

> From: Eric Dumazet <edumazet@google.com>
> 
> icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
> or more if some sysctl (eg tcp_retries2) are changed.
> 
> Better use 64bit to perform icsk_rto << icsk_backoff operations
> 
> As Joe Perches suggested, add a helper for this.
> 
> Yuchung spotted the tcp_v4_err() case.
> 
> From: Eric Dumazet <edumazet@google.com>

I assume you meant "Signed-off-by: " on that last line, not "From: " :-)

Applied,


* Re: [PATCH v3 net-next] tcp: avoid possible arithmetic overflows
  2014-09-22 20:27                 ` David Miller
@ 2014-09-22 20:28                   ` Eric Dumazet
  0 siblings, 0 replies; 13+ messages in thread
From: Eric Dumazet @ 2014-09-22 20:28 UTC (permalink / raw)
  To: David Miller; +Cc: ycheng, joe, netdev, ncardwell

On Mon, 2014-09-22 at 16:27 -0400, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Mon, 22 Sep 2014 13:19:44 -0700
> 
> > From: Eric Dumazet <edumazet@google.com>
> > 
> > icsk_rto is a 32bit field, and icsk_backoff can reach 15 by default,
> > or more if some sysctl (eg tcp_retries2) are changed.
> > 
> > Better use 64bit to perform icsk_rto << icsk_backoff operations
> > 
> > As Joe Perches suggested, add a helper for this.
> > 
> > Yuchung spotted the tcp_v4_err() case.
> > 
> > From: Eric Dumazet <edumazet@google.com>
> 
> I assume you meant "Signed-off-by: " on that last line, not "From: " :-)
> 
> Applied,

Arg, thanks David ;)


