* [PATCH net-next v2 0/2] tcp: ensure rate sample to use the most recently sent skb
@ 2022-04-16  9:19 Pengcheng Yang
  2022-04-16  9:19 ` [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample Pengcheng Yang
  2022-04-16  9:19 ` [PATCH net-next v2 2/2] tcp: use tcp_skb_sent_after() instead in RACK Pengcheng Yang
  0 siblings, 2 replies; 7+ messages in thread
From: Pengcheng Yang @ 2022-04-16  9:19 UTC (permalink / raw)
  To: Eric Dumazet, Neal Cardwell, Yuchung Cheng, netdev
  Cc: David S. Miller, Jakub Kicinski, Pengcheng Yang

This series ensures that the most recently sent skb is used
when filling the rate sample, and makes RACK and the rate
sampling code share the tcp_skb_sent_after() helper.

v2: introduce a new helper function, tcp_skb_sent_after()

Pengcheng Yang (2):
  tcp: ensure to use the most recently sent skb when filling the rate
    sample
  tcp: use tcp_skb_sent_after() instead in RACK

 include/net/tcp.h       |  6 ++++++
 net/ipv4/tcp_rate.c     | 11 ++++++++---
 net/ipv4/tcp_recovery.c | 15 +++++----------
 3 files changed, 19 insertions(+), 13 deletions(-)

-- 
1.8.3.1


* [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample
  2022-04-16  9:19 [PATCH net-next v2 0/2] tcp: ensure rate sample to use the most recently sent skb Pengcheng Yang
@ 2022-04-16  9:19 ` Pengcheng Yang
  2022-04-17 18:51   ` Neal Cardwell
  2022-04-16  9:19 ` [PATCH net-next v2 2/2] tcp: use tcp_skb_sent_after() instead in RACK Pengcheng Yang
  1 sibling, 1 reply; 7+ messages in thread
From: Pengcheng Yang @ 2022-04-16  9:19 UTC (permalink / raw)
  To: Eric Dumazet, Neal Cardwell, Yuchung Cheng, netdev
  Cc: David S. Miller, Jakub Kicinski, Pengcheng Yang

If an ACK (s)acks multiple skbs, we favor the information
from the most recently sent skb by choosing the skb with
the highest prior_delivered count. But in the interval
between receiving ACKs, we send multiple skbs with the same
prior_delivered, because tp->delivered only changes when we
receive an ACK, so the prior_delivered comparison cannot tell
which of those skbs was sent last.

We use RACK's solution, copying tcp_rack_sent_after() as a
tcp_skb_sent_after() helper to determine "which packet was
sent last?". A later patch will convert RACK to use
tcp_skb_sent_after() as well.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
Cc: Neal Cardwell <ncardwell@google.com>
---
 include/net/tcp.h   |  6 ++++++
 net/ipv4/tcp_rate.c | 11 ++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 6d50a66..fcd69fc 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1042,6 +1042,7 @@ struct rate_sample {
 	int  losses;		/* number of packets marked lost upon ACK */
 	u32  acked_sacked;	/* number of packets newly (S)ACKed upon ACK */
 	u32  prior_in_flight;	/* in flight before this ACK */
+	u32  last_end_seq;	/* end_seq of most recently ACKed packet */
 	bool is_app_limited;	/* is sample from packet with bubble in pipe? */
 	bool is_retrans;	/* is sample from retransmission? */
 	bool is_ack_delayed;	/* is this (likely) a delayed ACK? */
@@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
 		  bool is_sack_reneg, struct rate_sample *rs);
 void tcp_rate_check_app_limited(struct sock *sk);
 
+static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
+{
+	return t1 > t2 || (t1 == t2 && after(seq1, seq2));
+}
+
 /* These functions determine how the current flow behaves in respect of SACK
  * handling. SACK is negotiated with the peer, and therefore it can vary
  * between different flows.
diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
index 617b818..a8f6d9d 100644
--- a/net/ipv4/tcp_rate.c
+++ b/net/ipv4/tcp_rate.c
@@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
  *
  * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
  * called multiple times. We favor the information from the most recently
- * sent skb, i.e., the skb with the highest prior_delivered count.
+ * sent skb, i.e., the skb with the most recently sent time and the highest
+ * sequence.
  */
 void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
 			    struct rate_sample *rs)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
+	u64 tx_tstamp;
 
 	if (!scb->tx.delivered_mstamp)
 		return;
 
+	tx_tstamp = tcp_skb_timestamp_us(skb);
 	if (!rs->prior_delivered ||
-	    after(scb->tx.delivered, rs->prior_delivered)) {
+	    tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
+			       scb->end_seq, rs->last_end_seq)) {
 		rs->prior_delivered_ce  = scb->tx.delivered_ce;
 		rs->prior_delivered  = scb->tx.delivered;
 		rs->prior_mstamp     = scb->tx.delivered_mstamp;
 		rs->is_app_limited   = scb->tx.is_app_limited;
 		rs->is_retrans	     = scb->sacked & TCPCB_RETRANS;
+		rs->last_end_seq     = scb->end_seq;
 
 		/* Record send time of most recently ACKed packet: */
-		tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
+		tp->first_tx_mstamp  = tx_tstamp;
 		/* Find the duration of the "send phase" of this window: */
 		rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
 						     scb->tx.first_tx_mstamp);
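
As a side note on the hunk above: tp->first_tx_mstamp now holds the send
time of the most recently ACKed skb, and rs->interval_us measures the
"send phase" of this sample window from its first skb's send time. A tiny
userspace sketch with hypothetical microsecond timestamps (not kernel
code; tcp_stamp_us_delta() simplified):

#include <stdint.h>
#include <stdio.h>

/* Simplified from tcp_stamp_us_delta(): microseconds from t0 to t1. */
static int64_t tcp_stamp_us_delta(uint64_t t1, uint64_t t0)
{
	return (int64_t)(t1 - t0);
}

int main(void)
{
	uint64_t window_first_tx = 10000;	/* scb->tx.first_tx_mstamp */
	uint64_t last_acked_tx   = 12500;	/* tx_tstamp of the most recently
						 * ACKed skb, stored into
						 * tp->first_tx_mstamp above */

	/* rs->interval_us: duration of the "send phase" of this window. */
	printf("interval_us = %lld\n",
	       (long long)tcp_stamp_us_delta(last_acked_tx,
					     window_first_tx));	/* 2500 */
	return 0;
}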
-- 
1.8.3.1


* [PATCH net-next v2 2/2] tcp: use tcp_skb_sent_after() instead in RACK
  2022-04-16  9:19 [PATCH net-next v2 0/2] tcp: ensure rate sample to use the most recently sent skb Pengcheng Yang
  2022-04-16  9:19 ` [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample Pengcheng Yang
@ 2022-04-16  9:19 ` Pengcheng Yang
  1 sibling, 0 replies; 7+ messages in thread
From: Pengcheng Yang @ 2022-04-16  9:19 UTC (permalink / raw)
  To: Eric Dumazet, Neal Cardwell, Yuchung Cheng, netdev
  Cc: David S. Miller, Jakub Kicinski, Pengcheng Yang

Convert RACK to use the tcp_skb_sent_after() helper introduced in the
previous patch. This patch doesn't change any functionality.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
---
 net/ipv4/tcp_recovery.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
index fd113f6..48f30e7 100644
--- a/net/ipv4/tcp_recovery.c
+++ b/net/ipv4/tcp_recovery.c
@@ -2,11 +2,6 @@
 #include <linux/tcp.h>
 #include <net/tcp.h>
 
-static bool tcp_rack_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
-{
-	return t1 > t2 || (t1 == t2 && after(seq1, seq2));
-}
-
 static u32 tcp_rack_reo_wnd(const struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
@@ -77,9 +72,9 @@ static void tcp_rack_detect_loss(struct sock *sk, u32 *reo_timeout)
 		    !(scb->sacked & TCPCB_SACKED_RETRANS))
 			continue;
 
-		if (!tcp_rack_sent_after(tp->rack.mstamp,
-					 tcp_skb_timestamp_us(skb),
-					 tp->rack.end_seq, scb->end_seq))
+		if (!tcp_skb_sent_after(tp->rack.mstamp,
+					tcp_skb_timestamp_us(skb),
+					tp->rack.end_seq, scb->end_seq))
 			break;
 
 		/* A packet is lost if it has not been s/acked beyond
@@ -140,8 +135,8 @@ void tcp_rack_advance(struct tcp_sock *tp, u8 sacked, u32 end_seq,
 	}
 	tp->rack.advanced = 1;
 	tp->rack.rtt_us = rtt_us;
-	if (tcp_rack_sent_after(xmit_time, tp->rack.mstamp,
-				end_seq, tp->rack.end_seq)) {
+	if (tcp_skb_sent_after(xmit_time, tp->rack.mstamp,
+			       end_seq, tp->rack.end_seq)) {
 		tp->rack.mstamp = xmit_time;
 		tp->rack.end_seq = end_seq;
 	}
-- 
1.8.3.1


* Re: [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample
  2022-04-16  9:19 ` [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample Pengcheng Yang
@ 2022-04-17 18:51   ` Neal Cardwell
  2022-04-19 13:59     ` Paolo Abeni
  0 siblings, 1 reply; 7+ messages in thread
From: Neal Cardwell @ 2022-04-17 18:51 UTC (permalink / raw)
  To: Pengcheng Yang
  Cc: Eric Dumazet, Yuchung Cheng, netdev, David S. Miller, Jakub Kicinski

On Sat, Apr 16, 2022 at 5:20 AM Pengcheng Yang <yangpc@wangsu.com> wrote:
>
> If an ACK (s)acks multiple skbs, we favor the information
> from the most recently sent skb by choosing the skb with
> the highest prior_delivered count. But in the interval
> between receiving ACKs, we send multiple skbs with the same
> prior_delivered, because the tp->delivered only changes
> when we receive an ACK.
>
> We used RACK's solution, copying tcp_rack_sent_after() as
> tcp_skb_sent_after() helper to determine "which packet was
> sent last?". Later, we will use tcp_skb_sent_after() instead
> in RACK.
>
> Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> Cc: Neal Cardwell <ncardwell@google.com>
> ---
>  include/net/tcp.h   |  6 ++++++
>  net/ipv4/tcp_rate.c | 11 ++++++++---
>  2 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 6d50a66..fcd69fc 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -1042,6 +1042,7 @@ struct rate_sample {
>         int  losses;            /* number of packets marked lost upon ACK */
>         u32  acked_sacked;      /* number of packets newly (S)ACKed upon ACK */
>         u32  prior_in_flight;   /* in flight before this ACK */
> +       u32  last_end_seq;      /* end_seq of most recently ACKed packet */
>         bool is_app_limited;    /* is sample from packet with bubble in pipe? */
>         bool is_retrans;        /* is sample from retransmission? */
>         bool is_ack_delayed;    /* is this (likely) a delayed ACK? */
> @@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
>                   bool is_sack_reneg, struct rate_sample *rs);
>  void tcp_rate_check_app_limited(struct sock *sk);
>
> +static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
> +{
> +       return t1 > t2 || (t1 == t2 && after(seq1, seq2));
> +}
> +
>  /* These functions determine how the current flow behaves in respect of SACK
>   * handling. SACK is negotiated with the peer, and therefore it can vary
>   * between different flows.
> diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
> index 617b818..a8f6d9d 100644
> --- a/net/ipv4/tcp_rate.c
> +++ b/net/ipv4/tcp_rate.c
> @@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
>   *
>   * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
>   * called multiple times. We favor the information from the most recently
> - * sent skb, i.e., the skb with the highest prior_delivered count.
> + * sent skb, i.e., the skb with the most recently sent time and the highest
> + * sequence.
>   */
>  void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
>                             struct rate_sample *rs)
>  {
>         struct tcp_sock *tp = tcp_sk(sk);
>         struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
> +       u64 tx_tstamp;
>
>         if (!scb->tx.delivered_mstamp)
>                 return;
>
> +       tx_tstamp = tcp_skb_timestamp_us(skb);
>         if (!rs->prior_delivered ||
> -           after(scb->tx.delivered, rs->prior_delivered)) {
> +           tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
> +                              scb->end_seq, rs->last_end_seq)) {
>                 rs->prior_delivered_ce  = scb->tx.delivered_ce;
>                 rs->prior_delivered  = scb->tx.delivered;
>                 rs->prior_mstamp     = scb->tx.delivered_mstamp;
>                 rs->is_app_limited   = scb->tx.is_app_limited;
>                 rs->is_retrans       = scb->sacked & TCPCB_RETRANS;
> +               rs->last_end_seq     = scb->end_seq;
>
>                 /* Record send time of most recently ACKed packet: */
> -               tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
> +               tp->first_tx_mstamp  = tx_tstamp;
>                 /* Find the duration of the "send phase" of this window: */
>                 rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
>                                                      scb->tx.first_tx_mstamp);
> --

Thanks for the patch! The change looks good to me, and it passes our
team's packetdrill tests.

One suggestion: currently this patch seems to be targeted to the
net-next branch. However, since it's a bug fix my sense is that it
would be best to target this to the net branch, so that it gets
backported to stable releases.

One complication is that the follow-on patch in this series ("tcp: use
tcp_skb_sent_after() instead in RACK") is a pure re-factor/cleanup,
which is more appropriate for net-next. So the plan I was trying to
describe in the previous thread was that this series could be
implemented as:

(1) first, submit "tcp: ensure to use the most recently sent skb when
filling the rate sample" to the net branch
(2) wait for the fix in the net branch to be merged into the net-next branch
(3) second, submit "tcp: use tcp_skb_sent_after() instead in RACK" to
the net-next branch

What do folks think?

thanks,
neal

* Re: [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample
  2022-04-17 18:51   ` Neal Cardwell
@ 2022-04-19 13:59     ` Paolo Abeni
  2022-04-20  1:48       ` Pengcheng Yang
  0 siblings, 1 reply; 7+ messages in thread
From: Paolo Abeni @ 2022-04-19 13:59 UTC (permalink / raw)
  To: Neal Cardwell, Pengcheng Yang
  Cc: Eric Dumazet, Yuchung Cheng, netdev, David S. Miller, Jakub Kicinski

On Sun, 2022-04-17 at 14:51 -0400, Neal Cardwell wrote:
> On Sat, Apr 16, 2022 at 5:20 AM Pengcheng Yang <yangpc@wangsu.com> wrote:
> > 
> > If an ACK (s)acks multiple skbs, we favor the information
> > from the most recently sent skb by choosing the skb with
> > the highest prior_delivered count. But in the interval
> > between receiving ACKs, we send multiple skbs with the same
> > prior_delivered, because the tp->delivered only changes
> > when we receive an ACK.
> > 
> > We used RACK's solution, copying tcp_rack_sent_after() as
> > tcp_skb_sent_after() helper to determine "which packet was
> > sent last?". Later, we will use tcp_skb_sent_after() instead
> > in RACK.
> > 
> > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > Cc: Neal Cardwell <ncardwell@google.com>
> > ---
> >  include/net/tcp.h   |  6 ++++++
> >  net/ipv4/tcp_rate.c | 11 ++++++++---
> >  2 files changed, 14 insertions(+), 3 deletions(-)
> > 
> > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > index 6d50a66..fcd69fc 100644
> > --- a/include/net/tcp.h
> > +++ b/include/net/tcp.h
> > @@ -1042,6 +1042,7 @@ struct rate_sample {
> >         int  losses;            /* number of packets marked lost upon ACK */
> >         u32  acked_sacked;      /* number of packets newly (S)ACKed upon ACK */
> >         u32  prior_in_flight;   /* in flight before this ACK */
> > +       u32  last_end_seq;      /* end_seq of most recently ACKed packet */
> >         bool is_app_limited;    /* is sample from packet with bubble in pipe? */
> >         bool is_retrans;        /* is sample from retransmission? */
> >         bool is_ack_delayed;    /* is this (likely) a delayed ACK? */
> > @@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
> >                   bool is_sack_reneg, struct rate_sample *rs);
> >  void tcp_rate_check_app_limited(struct sock *sk);
> > 
> > +static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
> > +{
> > +       return t1 > t2 || (t1 == t2 && after(seq1, seq2));
> > +}
> > +
> >  /* These functions determine how the current flow behaves in respect of SACK
> >   * handling. SACK is negotiated with the peer, and therefore it can vary
> >   * between different flows.
> > diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
> > index 617b818..a8f6d9d 100644
> > --- a/net/ipv4/tcp_rate.c
> > +++ b/net/ipv4/tcp_rate.c
> > @@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
> >   *
> >   * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
> >   * called multiple times. We favor the information from the most recently
> > - * sent skb, i.e., the skb with the highest prior_delivered count.
> > + * sent skb, i.e., the skb with the most recently sent time and the highest
> > + * sequence.
> >   */
> >  void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
> >                             struct rate_sample *rs)
> >  {
> >         struct tcp_sock *tp = tcp_sk(sk);
> >         struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
> > +       u64 tx_tstamp;
> > 
> >         if (!scb->tx.delivered_mstamp)
> >                 return;
> > 
> > +       tx_tstamp = tcp_skb_timestamp_us(skb);
> >         if (!rs->prior_delivered ||
> > -           after(scb->tx.delivered, rs->prior_delivered)) {
> > +           tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
> > +                              scb->end_seq, rs->last_end_seq)) {
> >                 rs->prior_delivered_ce  = scb->tx.delivered_ce;
> >                 rs->prior_delivered  = scb->tx.delivered;
> >                 rs->prior_mstamp     = scb->tx.delivered_mstamp;
> >                 rs->is_app_limited   = scb->tx.is_app_limited;
> >                 rs->is_retrans       = scb->sacked & TCPCB_RETRANS;
> > +               rs->last_end_seq     = scb->end_seq;
> > 
> >                 /* Record send time of most recently ACKed packet: */
> > -               tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
> > +               tp->first_tx_mstamp  = tx_tstamp;
> >                 /* Find the duration of the "send phase" of this window: */
> >                 rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
> >                                                      scb->tx.first_tx_mstamp);
> > --
> 
> Thanks for the patch! The change looks good to me, and it passes our
> team's packetdrill tests.
> 
> One suggestion: currently this patch seems to be targeted to the
> net-next branch. However, since it's a bug fix my sense is that it
> would be best to target this to the net branch, so that it gets
> backported to stable releases.
> 
> One complication is that the follow-on patch in this series ("tcp: use
> tcp_skb_sent_after() instead in RACK") is a pure re-factor/cleanup,
> which is more appropriate for net-next. So the plan I was trying to
> describe in the previous thread was that this series could be
> implemented as:
> 
> (1) first, submit "tcp: ensure to use the most recently sent skb when
> filling the rate sample" to the net branch
> (2) wait for the fix in the net branch to be merged into the net-next branch
> (3) second, submit "tcp: use tcp_skb_sent_after() instead in RACK" to
> the net-next branch
> 
> What do folks think?

+1 for the above.

@Pengcheng: please additionally provide a suitable 'fixes' tag for
patch 1/2.

Thanks!

Paolo	


* Re: [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample
  2022-04-19 13:59     ` Paolo Abeni
@ 2022-04-20  1:48       ` Pengcheng Yang
  2022-04-20  2:11         ` Neal Cardwell
  0 siblings, 1 reply; 7+ messages in thread
From: Pengcheng Yang @ 2022-04-20  1:48 UTC (permalink / raw)
  To: 'Paolo Abeni', 'Neal Cardwell'
  Cc: 'Eric Dumazet', 'Yuchung Cheng',
	netdev, 'David S. Miller', 'Jakub Kicinski'

On Tue, Apr 19, 2022 at 10:00 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Sun, 2022-04-17 at 14:51 -0400, Neal Cardwell wrote:
> > On Sat, Apr 16, 2022 at 5:20 AM Pengcheng Yang <yangpc@wangsu.com> wrote:
> > >
> > > If an ACK (s)acks multiple skbs, we favor the information
> > > from the most recently sent skb by choosing the skb with
> > > the highest prior_delivered count. But in the interval
> > > between receiving ACKs, we send multiple skbs with the same
> > > prior_delivered, because the tp->delivered only changes
> > > when we receive an ACK.
> > >
> > > We used RACK's solution, copying tcp_rack_sent_after() as
> > > tcp_skb_sent_after() helper to determine "which packet was
> > > sent last?". Later, we will use tcp_skb_sent_after() instead
> > > in RACK.
> > >
> > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > > Cc: Neal Cardwell <ncardwell@google.com>
> > > ---
> > >  include/net/tcp.h   |  6 ++++++
> > >  net/ipv4/tcp_rate.c | 11 ++++++++---
> > >  2 files changed, 14 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > > index 6d50a66..fcd69fc 100644
> > > --- a/include/net/tcp.h
> > > +++ b/include/net/tcp.h
> > > @@ -1042,6 +1042,7 @@ struct rate_sample {
> > >         int  losses;            /* number of packets marked lost upon ACK */
> > >         u32  acked_sacked;      /* number of packets newly (S)ACKed upon ACK */
> > >         u32  prior_in_flight;   /* in flight before this ACK */
> > > +       u32  last_end_seq;      /* end_seq of most recently ACKed packet */
> > >         bool is_app_limited;    /* is sample from packet with bubble in pipe? */
> > >         bool is_retrans;        /* is sample from retransmission? */
> > >         bool is_ack_delayed;    /* is this (likely) a delayed ACK? */
> > > @@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
> > >                   bool is_sack_reneg, struct rate_sample *rs);
> > >  void tcp_rate_check_app_limited(struct sock *sk);
> > >
> > > +static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
> > > +{
> > > +       return t1 > t2 || (t1 == t2 && after(seq1, seq2));
> > > +}
> > > +
> > >  /* These functions determine how the current flow behaves in respect of SACK
> > >   * handling. SACK is negotiated with the peer, and therefore it can vary
> > >   * between different flows.
> > > diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
> > > index 617b818..a8f6d9d 100644
> > > --- a/net/ipv4/tcp_rate.c
> > > +++ b/net/ipv4/tcp_rate.c
> > > @@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
> > >   *
> > >   * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
> > >   * called multiple times. We favor the information from the most recently
> > > - * sent skb, i.e., the skb with the highest prior_delivered count.
> > > + * sent skb, i.e., the skb with the most recently sent time and the highest
> > > + * sequence.
> > >   */
> > >  void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
> > >                             struct rate_sample *rs)
> > >  {
> > >         struct tcp_sock *tp = tcp_sk(sk);
> > >         struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
> > > +       u64 tx_tstamp;
> > >
> > >         if (!scb->tx.delivered_mstamp)
> > >                 return;
> > >
> > > +       tx_tstamp = tcp_skb_timestamp_us(skb);
> > >         if (!rs->prior_delivered ||
> > > -           after(scb->tx.delivered, rs->prior_delivered)) {
> > > +           tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
> > > +                              scb->end_seq, rs->last_end_seq)) {
> > >                 rs->prior_delivered_ce  = scb->tx.delivered_ce;
> > >                 rs->prior_delivered  = scb->tx.delivered;
> > >                 rs->prior_mstamp     = scb->tx.delivered_mstamp;
> > >                 rs->is_app_limited   = scb->tx.is_app_limited;
> > >                 rs->is_retrans       = scb->sacked & TCPCB_RETRANS;
> > > +               rs->last_end_seq     = scb->end_seq;
> > >
> > >                 /* Record send time of most recently ACKed packet: */
> > > -               tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
> > > +               tp->first_tx_mstamp  = tx_tstamp;
> > >                 /* Find the duration of the "send phase" of this window: */
> > >                 rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
> > >                                                      scb->tx.first_tx_mstamp);
> > > --
> >
> > Thanks for the patch! The change looks good to me, and it passes our
> > team's packetdrill tests.
> >
> > One suggestion: currently this patch seems to be targeted to the
> > net-next branch. However, since it's a bug fix my sense is that it
> > would be best to target this to the net branch, so that it gets
> > backported to stable releases.
> >
> > One complication is that the follow-on patch in this series ("tcp: use
> > tcp_skb_sent_after() instead in RACK") is a pure re-factor/cleanup,
> > which is more appropriate for net-next. So the plan I was trying to
> > describe in the previous thread was that this series could be
> > implemented as:
> >
> > (1) first, submit "tcp: ensure to use the most recently sent skb when
> > filling the rate sample" to the net branch
> > (2) wait for the fix in the net branch to be merged into the net-next branch
> > (3) second, submit "tcp: use tcp_skb_sent_after() instead in RACK" to
> > the net-next branch
> >
> > What do folks think?
> 
> +1 for the above.
> 
> @Pengcheng: please additionally provide a suitable 'fixes' tag for
> patch 1/2.

Fixes: b9f64820fb22 ("tcp: track data delivery rate for a TCP connection")

> 
> Thanks!
> 
> Paolo


* Re: [PATCH net-next v2 1/2] tcp: ensure to use the most recently sent skb when filling the rate sample
  2022-04-20  1:48       ` Pengcheng Yang
@ 2022-04-20  2:11         ` Neal Cardwell
  0 siblings, 0 replies; 7+ messages in thread
From: Neal Cardwell @ 2022-04-20  2:11 UTC (permalink / raw)
  To: Pengcheng Yang
  Cc: Paolo Abeni, Eric Dumazet, Yuchung Cheng, netdev,
	David S. Miller, Jakub Kicinski

On Tue, Apr 19, 2022 at 9:48 PM Pengcheng Yang <yangpc@wangsu.com> wrote:
>
> On Tue, Apr 19, 2022 at 10:00 PM Paolo Abeni <pabeni@redhat.com> wrote:
> >
> > On Sun, 2022-04-17 at 14:51 -0400, Neal Cardwell wrote:
> > > On Sat, Apr 16, 2022 at 5:20 AM Pengcheng Yang <yangpc@wangsu.com> wrote:
> > > >
> > > > If an ACK (s)acks multiple skbs, we favor the information
> > > > from the most recently sent skb by choosing the skb with
> > > > the highest prior_delivered count. But in the interval
> > > > between receiving ACKs, we send multiple skbs with the same
> > > > prior_delivered, because the tp->delivered only changes
> > > > when we receive an ACK.
> > > >
> > > > We used RACK's solution, copying tcp_rack_sent_after() as
> > > > tcp_skb_sent_after() helper to determine "which packet was
> > > > sent last?". Later, we will use tcp_skb_sent_after() instead
> > > > in RACK.
> > > >
> > > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > > > Cc: Neal Cardwell <ncardwell@google.com>
> > > > ---
> > > >  include/net/tcp.h   |  6 ++++++
> > > >  net/ipv4/tcp_rate.c | 11 ++++++++---
> > > >  2 files changed, 14 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > > > index 6d50a66..fcd69fc 100644
> > > > --- a/include/net/tcp.h
> > > > +++ b/include/net/tcp.h
> > > > @@ -1042,6 +1042,7 @@ struct rate_sample {
> > > >         int  losses;            /* number of packets marked lost upon ACK */
> > > >         u32  acked_sacked;      /* number of packets newly (S)ACKed upon ACK */
> > > >         u32  prior_in_flight;   /* in flight before this ACK */
> > > > +       u32  last_end_seq;      /* end_seq of most recently ACKed packet */
> > > >         bool is_app_limited;    /* is sample from packet with bubble in pipe? */
> > > >         bool is_retrans;        /* is sample from retransmission? */
> > > >         bool is_ack_delayed;    /* is this (likely) a delayed ACK? */
> > > > @@ -1158,6 +1159,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
> > > >                   bool is_sack_reneg, struct rate_sample *rs);
> > > >  void tcp_rate_check_app_limited(struct sock *sk);
> > > >
> > > > +static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
> > > > +{
> > > > +       return t1 > t2 || (t1 == t2 && after(seq1, seq2));
> > > > +}
> > > > +
> > > >  /* These functions determine how the current flow behaves in respect of SACK
> > > >   * handling. SACK is negotiated with the peer, and therefore it can vary
> > > >   * between different flows.
> > > > diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
> > > > index 617b818..a8f6d9d 100644
> > > > --- a/net/ipv4/tcp_rate.c
> > > > +++ b/net/ipv4/tcp_rate.c
> > > > @@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
> > > >   *
> > > >   * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
> > > >   * called multiple times. We favor the information from the most recently
> > > > - * sent skb, i.e., the skb with the highest prior_delivered count.
> > > > + * sent skb, i.e., the skb with the most recently sent time and the highest
> > > > + * sequence.
> > > >   */
> > > >  void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
> > > >                             struct rate_sample *rs)
> > > >  {
> > > >         struct tcp_sock *tp = tcp_sk(sk);
> > > >         struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
> > > > +       u64 tx_tstamp;
> > > >
> > > >         if (!scb->tx.delivered_mstamp)
> > > >                 return;
> > > >
> > > > +       tx_tstamp = tcp_skb_timestamp_us(skb);
> > > >         if (!rs->prior_delivered ||
> > > > -           after(scb->tx.delivered, rs->prior_delivered)) {
> > > > +           tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
> > > > +                              scb->end_seq, rs->last_end_seq)) {
> > > >                 rs->prior_delivered_ce  = scb->tx.delivered_ce;
> > > >                 rs->prior_delivered  = scb->tx.delivered;
> > > >                 rs->prior_mstamp     = scb->tx.delivered_mstamp;
> > > >                 rs->is_app_limited   = scb->tx.is_app_limited;
> > > >                 rs->is_retrans       = scb->sacked & TCPCB_RETRANS;
> > > > +               rs->last_end_seq     = scb->end_seq;
> > > >
> > > >                 /* Record send time of most recently ACKed packet: */
> > > > -               tp->first_tx_mstamp  = tcp_skb_timestamp_us(skb);
> > > > +               tp->first_tx_mstamp  = tx_tstamp;
> > > >                 /* Find the duration of the "send phase" of this window: */
> > > >                 rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
> > > >                                                      scb->tx.first_tx_mstamp);
> > > > --
> > >
> > > Thanks for the patch! The change looks good to me, and it passes our
> > > team's packetdrill tests.
> > >
> > > One suggestion: currently this patch seems to be targeted to the
> > > net-next branch. However, since it's a bug fix my sense is that it
> > > would be best to target this to the net branch, so that it gets
> > > backported to stable releases.
> > >
> > > One complication is that the follow-on patch in this series ("tcp: use
> > > tcp_skb_sent_after() instead in RACK") is a pure re-factor/cleanup,
> > > which is more appropriate for net-next. So the plan I was trying to
> > > describe in the previous thread was that this series could be
> > > implemented as:
> > >
> > > (1) first, submit "tcp: ensure to use the most recently sent skb when
> > > filling the rate sample" to the net branch
> > > (2) wait for the fix in the net branch to be merged into the net-next branch
> > > (3) second, submit "tcp: use tcp_skb_sent_after() instead in RACK" to
> > > the net-next branch
> > >
> > > What do folks think?
> >
> > +1 for the above.
> >
> > @Pengcheng: please additionally provide a suitable 'fixes' tag for
> > patch 1/2.
>
> Fixes: b9f64820fb22 ("tcp: track data delivery rate for a TCP connection")

Thanks. That looks like the correct SHA1. However, I think there may
be a miscommunication. :-)

I think what Paolo and I are suggesting is:

(1) e-mail the patch "tcp: ensure to use the most recently sent skb
when filling the rate sample" as a submission to the net branch
("[PATCH net v3] tcp: ensure to use the most recently sent skb when
filling the rate sample"), with the "Fixes:" footer in the commit
message  in the line above your "Signed-off-by:" footer.

(2) wait for the fix in the net branch to be merged into the net-next branch

(3) submit "tcp: use tcp_skb_sent_after() instead in RACK" to the
net-next branch

thanks,
neal
