* Re: [PATCH 1/3] net: TCP thin-stream detection
@ 2009-10-30 13:53 apetlund
  2009-10-30 15:24 ` Arnd Hannemann
  0 siblings, 1 reply; 13+ messages in thread
From: apetlund @ 2009-10-30 13:53 UTC (permalink / raw)
  To: Arnd Hannemann
  Cc: Andreas Petlund, William Allen Simpson, netdev, linux-kernel,
	shemminger, ilpo.jarvinen, davem

> Andreas Petlund wrote:
>> On 28 Oct 2009, at 04:09, William Allen Simpson wrote:
>>> Andreas Petlund wrote:
>>>> +/* Determines whether this is a thin stream (which may suffer from
>>>> + * increased latency). Used to trigger latency-reducing mechanisms.
>>>> + */
>>>> +static inline unsigned int tcp_stream_is_thin(const struct tcp_sock *tp)
>>>> +{
>>>> +	return tp->packets_out < 4;
>>>> +}
>>>> +
>>> This bothers me a bit.  Having just looked at your Linux presentation,
>>> and not (yet) read your papers, it seems much of your justification was
>>> with 1 packet per RTT.  Here, you seem to be concentrating on 4, probably
>>> because many implementations quickly ramp up to 4.
>> The limit of 4 packets in flight is based on the fact that less than 4
>> packets in flight makes fast retransmissions impossible, thus limiting the
>> retransmit options to timeout-retransmissions. The criterion is
>
> There is Limited Transmit! So this is generally not true.
>
>> therefore as conservative as possible while still serving its purpose.
>> If further losses occur, the exponential backoff will increase latency
>> further. The concept of using this limit is also discussed in the
>> Internet draft for Early Retransmit by Allman et al.:
>> http://www.icir.org/mallman/papers/draft-ietf-tcpm-early-rexmt-01.txt
>
> This ID is covering exactly the cases which Limited Transmit does not
> cover and works "automagically" without help of application. So why not
> just implement this ID?

As Ilpo writes, the mechanism we propose is simpler than the ID, and
slightly more aggressive. The reasons why we chose this are as follows: 1)
The ID and Limited Transmit try to prevent retransmission timeouts by
retransmitting more aggressively, thus keeping the congestion window open
even though congestion may be the limiting factor. If their limiting
conditions change, they still have higher sending rates available.
Thin-stream applications are not limited by congestion control, so there
is no motivation to prevent retransmission timeouts in order to keep the
congestion window open: in the thin-stream scenario a larger window is
not needed, and we retransmit early only to reduce application-layer
latencies. 2) Our suggested implementation is simpler. 3) I believe that
the reason why the ID has not been implemented in Linux is that the
motivation did not justify the achieved result. We have analysed a wide
range of time-dependent applications and found that they very often
produce thin streams because transmissions are triggered by human
interaction. This changes the motivational picture, since a thin stream
is an indicator of time-dependency.
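
To make the difference concrete, here is a minimal, self-contained sketch of
the kind of gating being discussed: a thin stream reacts to the first
duplicate ACK instead of waiting for the usual three. The struct, field and
function names are invented for the illustration; this is not the patch code
itself.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the relevant tcp_sock state; not kernel code. */
struct toy_tcp_sock {
	unsigned int packets_out;	/* segments currently in flight */
	unsigned int dupacks;		/* dupACKs seen for the first unacked segment */
};

static bool toy_stream_is_thin(const struct toy_tcp_sock *tp)
{
	return tp->packets_out < 4;	/* same criterion as tcp_stream_is_thin() */
}

/* Thin streams fast-retransmit on the first dupACK; everything else keeps
 * the standard three-dupACK threshold.
 */
static bool toy_should_fast_retransmit(const struct toy_tcp_sock *tp)
{
	unsigned int thresh = toy_stream_is_thin(tp) ? 1 : 3;

	return tp->dupacks >= thresh;
}

int main(void)
{
	struct toy_tcp_sock thin = { .packets_out = 2, .dupacks = 1 };
	struct toy_tcp_sock bulk = { .packets_out = 20, .dupacks = 1 };

	printf("thin stream, 1 dupACK -> retransmit: %d\n",
	       toy_should_fast_retransmit(&thin));
	printf("bulk stream, 1 dupACK -> retransmit: %d\n",
	       toy_should_fast_retransmit(&bulk));
	return 0;
}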

Regards,
Andreas









* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-30 13:53 [PATCH 1/3] net: TCP thin-stream detection apetlund
@ 2009-10-30 15:24 ` Arnd Hannemann
  2009-11-05 13:34   ` Andreas Petlund
  0 siblings, 1 reply; 13+ messages in thread
From: Arnd Hannemann @ 2009-10-30 15:24 UTC (permalink / raw)
  To: apetlund
  Cc: William Allen Simpson, netdev, linux-kernel, shemminger,
	ilpo.jarvinen, davem

apetlund@simula.no wrote:
> As Ilpo writes, the mechanism we propose is simpler than the ID, and
> slightly more aggressive. The reasons why we chose this are as follows: 1)
> The ID and Limited Transmit try to prevent retransmission timeouts by
> retransmitting more aggressively, thus keeping the congestion window open
> even though congestion may be the limiting factor. If their limiting
> conditions change, they still have higher sending rates available.
> Thin-stream applications are not limited by congestion control, so there
> is no motivation to prevent retransmission timeouts in order to keep the
> congestion window open: in the thin-stream scenario a larger window is
> not needed, and we retransmit early only to reduce application-layer
> latencies. 2) Our suggested implementation is simpler. 3) I believe that
> the reason why the ID has not been implemented in Linux is that the
> motivation did not justify the achieved result. We have analysed a wide
> range of time-dependent applications and found that they very often
> produce thin streams because transmissions are triggered by human
> interaction. This changes the motivational picture, since a thin stream
> is an indicator of time-dependency.


Both mechanisms prevent retransmission timeouts, thereby reducing latency.
Who cares that they were motivated by performance?

I agree that you are more aggressive, and that your scheme may have
latency advantages, at least for the Limited Transmit case. And there are
probably good reasons for your proposal. But I really think you should
bring your proposal up in the IETF TCPM WG. I have the feeling that there are
a lot of corner cases we didn't think of.

One example: Consider a standard NewReno non-SACK-enabled flow:
For some reason, two data packets get reordered.
The TCP receiver will produce a dupACK and an ACK.
The dupACK will trigger (because of your logic) a spurious retransmit.
The spurious retransmit will trigger a dupACK.
This dupACK will again trigger a spurious retransmit.
And this game will continue, unless a packet is dropped by coincidence.

P.S.: The Early-Rexmit ID has not been implemented in Linux,
because our student who was working on that is busy with something
else...

Best regards,
Arnd Hannemann


* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-30 15:24 ` Arnd Hannemann
@ 2009-11-05 13:34   ` Andreas Petlund
  2009-11-05 13:45     ` Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Andreas Petlund @ 2009-11-05 13:34 UTC (permalink / raw)
  To: Arnd Hannemann
  Cc: William Allen Simpson, netdev, linux-kernel, shemminger,
	ilpo.jarvinen, davem

Arnd Hannemann wrote:
> Both mechanisms prevent retransmission timeouts, thereby reducing latency.
> Who cares that they were motivated by performance?

The essence of motivation is that there exists an incentive for performing an
action. If the motivation for fast retransmitting earlier is to keep the cwnd
open for a greedy application with little time-dependency, the question may be
posed whether the proposed changes are worth the effort. For the
thin-stream applications, we have confirmed that a thin stream is very often an
indication of a time-dependent/interactive application (like SSH text sessions,
RDP, sensor networks, stock trading systems, interactive games etc.). We have
further shown that such applications are prone to lag upon retransmissions due
to TCP's inadequacies in dealing with thin streams. We have also shown that
by performing the proposed adjustments, we can drastically improve the
situation.

Since we now know that the modifications can drastically improve the user 
experience, the motivation/incentive for implementing the modifications is 
increased. 

> I agree that you are more aggressive, and that your scheme may have
> latency advantages, at least for the Limited Transmit case. And there are
> probably good reasons for your proposal. But I really think you should
> bring your proposal up in the IETF TCPM WG. I have the feeling that there are
> a lot of corner cases we didn't think of.
> 
> One example: Consider a standard NewReno non-SACK-enabled flow:
> For some reason, two data packets get reordered.
> The TCP receiver will produce a dupACK and an ACK.
> The dupACK will trigger (because of your logic) a spurious retransmit.
> The spurious retransmit will trigger a dupACK.
> This dupACK will again trigger a spurious retransmit.
> And this game will continue, unless a packet is dropped by coincidence.

Such an effect will be extremely rare. It will depend on the application
producing an extremely even flow of packets with just the right
interarrival time, and also on reordering of data (which will also
happen very seldom when the number of packets in flight is so low).
Even though it can happen, the data flow will progress (with spurious
retransmissions). The effect will stop as soon as the application sends
more than 4 segments in an RTT (which will disable the thin-stream
modifications) or fewer than 1 (which will cause all segments to be
successfully ACKed), or if, as you say, a packet is dropped.

I would be thankful for more input on possible corner cases, and also on
test cases that we could perform to evaluate the modifications in
scenarios that are of concern.

Best regards,
Andreas






* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-11-05 13:34   ` Andreas Petlund
@ 2009-11-05 13:45     ` Ilpo Järvinen
  2009-11-09 15:24       ` Andreas Petlund
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2009-11-05 13:45 UTC (permalink / raw)
  To: Andreas Petlund
  Cc: Arnd Hannemann, William Allen Simpson, netdev, linux-kernel,
	shemminger, davem

On Thu, 5 Nov 2009, Andreas Petlund wrote:

> Arnd Hannemann wrote:
> > 
> > One example: Consider a standard NewReno non-SACK-enabled flow:
> > For some reason, two data packets get reordered.
> > The TCP receiver will produce a dupACK and an ACK.
> > The dupACK will trigger (because of your logic) a spurious retransmit.
> > The spurious retransmit will trigger a dupACK.
> > This dupACK will again trigger a spurious retransmit.
> > And this game will continue, unless a packet is dropped by coincidence.
> 
> Such an effect will be extremely rare. It will depend on the application 
> producing an extremely even flow of packets with just the right 
> interarrival time, and also on reordering of data (which will also 
> happen very seldom when the number of packets in flight is so low). 
> Even though it can happen, the data flow will progress (with spurious 
> retransmissions). The effect will stop as soon as the application sends 
> more than 4 segments in an RTT (which will disable the thin-stream 
> modifications) or fewer than 1 (which will cause all segments to be 
> successfully ACKed), or if, as you say, a packet is dropped.

I'd simply work around this problem by requiring SACK to be enabled for
such a connection. This is reinforced by the fact that small-windowed
transfers certainly want it to be on anyway, to get the best out of the ACK
flow even if there are some ACK losses.
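
A minimal sketch of that gate, with invented names rather than the actual
kernel symbols (in the kernel this would build on the existing SACK state in
struct tcp_sock):

#include <stdbool.h>

/* Toy connection state for the illustration; not kernel code. */
struct toy_tcp_sock {
	unsigned int packets_out;	/* segments currently in flight */
	bool sack_ok;			/* SACK negotiated on this connection */
};

static bool toy_stream_is_thin(const struct toy_tcp_sock *tp)
{
	return tp->packets_out < 4;
}

/* Only apply the thin-stream dupACK modification when SACK is available,
 * so the non-SACK reordering corner case above can never trigger it.
 */
static bool toy_thin_dupack_applies(const struct toy_tcp_sock *tp)
{
	return tp->sack_ok && toy_stream_is_thin(tp);
}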


-- 
 i.


* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-11-05 13:45     ` Ilpo Järvinen
@ 2009-11-09 15:24       ` Andreas Petlund
  0 siblings, 0 replies; 13+ messages in thread
From: Andreas Petlund @ 2009-11-09 15:24 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Arnd Hannemann, William Allen Simpson, netdev, linux-kernel,
	shemminger, davem

Ilpo Järvinen wrote:
> On Thu, 5 Nov 2009, Andreas Petlund wrote:
> 
>> Arnd Hannemann wrote:
>>> One example: Consider a standard NewReno non-SACK-enabled flow:
>>> For some reason, two data packets get reordered.
>>> The TCP receiver will produce a dupACK and an ACK.
>>> The dupACK will trigger (because of your logic) a spurious retransmit.
>>> The spurious retransmit will trigger a dupACK.
>>> This dupACK will again trigger a spurious retransmit.
>>> And this game will continue, unless a packet is dropped by coincidence.
>> Such an effect will be extremely rare. It will depend on the application 
>> producing an extremely even flow of packets with just the right 
>> interarrival time, and also on reordering of data (which will also 
>> happen very seldom when the number of packets in flight is so low). 
>> Even though it can happen, the data flow will progress (with spurious 
>> retransmissions). The effect will stop as soon as the application sends 
>> more than 4 segments in an RTT (which will disable the thin-stream 
>> modifications) or fewer than 1 (which will cause all segments to be 
>> successfully ACKed), or if, as you say, a packet is dropped.
> 
> I'd simply work around this problem by requiring SACK to be enabled for 
> such a connection. This is reinforced by the fact that small-windowed 
> transfers certainly want it to be on anyway, to get the best out of the ACK 
> flow even if there are some ACK losses.
> 

Thanks. I will revise the patches based on all the feedback I have gotten
and get back to the list with a new version when I have done some more
testing.

Best regards,
Andreas





* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-30 16:13 ` William Allen Simpson
@ 2009-11-05 13:36   ` Andreas Petlund
  0 siblings, 0 replies; 13+ messages in thread
From: Andreas Petlund @ 2009-11-05 13:36 UTC (permalink / raw)
  To: William Allen Simpson
  Cc: Linux Kernel Network Developers, Ilpo Järvinen,
	Arnd Hannemann, linux-kernel, shemminger, davem,
	Christian Samsel

William Allen Simpson wrote:
> I'm finding it hard to follow 3 threads, for the 3 parts of the patch.
> 
> As I mentioned in one of these threads, I've plenty of experience with
> designing and implementing protocols for gaming.  And it seems to me that
> you're making changes to the entire TCP stack to make up for shortcomings
> in the implementor's design.  Yet, these changes require application
> implementors to set a sockopt that's only available in Linux.  Unlikely,
> as they probably don't even keep track of such things....
>

The target is not only games, but also, for instance, SSH sessions, RDP or VNC,
stock trading services, sensor networks and so forth. There are a lot of
time-dependent applications that show thin-stream properties. Many of
these use TCP, and will continue to use it. Some of these applications
use UDP by default, but fall back to TCP if there is a problem with the
UDP connection (for instance Skype). By providing better latency for thin
streams, we can increase the service level for all these applications.

Our experience is that at least some designers of interactive/time-dependent 
applications are skilled enough and concerned enough to investigate whether 
options exist that may improve the applications they are designing. Of 
course there are exceptions, but for open-source software, there will be 
people who can provide this input. If the argument is that there is no need 
for customised options because developers are stupid, we could strip away a 
lot of the existing network code.
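
For illustration, this is roughly what the opt-in looks like from the
application's side. The option name and value below are placeholders standing
in for the per-socket knob the patches add; they are not defined anywhere in
this thread:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_THIN_DUPACK
#define TCP_THIN_DUPACK 17	/* placeholder name and value for the sketch */
#endif

/* The application declares "this connection will be thin and
 * latency-sensitive", enabling the modified retransmission behaviour
 * for this one socket only.
 */
static int enable_thin_stream_mods(int sock)
{
	int on = 1;

	return setsockopt(sock, IPPROTO_TCP, TCP_THIN_DUPACK, &on, sizeof(on));
}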

> I've already suggested the end-to-end interest list, where you'll find many
> of us with a strong interest in this topic.

I've been reading end-to-end for several years, and I think I will take this 
discussion to that list eventually. We have discussed whether we should take 
this to end-to-end first, and netdev after, but decided to go here for the 
following reasons: 1) We have working patches that we wanted to contribute. 
2) The modifications are implemented as optional. 3) When active, the 
modifications handle a special case of TCP streams that we have shown to 
have minimal impact on general TCP behaviour.

Also, in my experience, the end-to-end list discussions tend to digress, 
making it difficult to keep the discussion to the special case that we 
address. Since we wanted technical and practical feedback that would help us
to refine the modifications in the patches in addition to the discussion on 
transport protocols, we chose to go to netdev first.

> The IETF has two related working groups:
>   tcpm -- tcp modifications
>   tsvwg -- general transport, including sctp modifications

There are plenty of examples of TCP mechanisms present in the Linux 
kernel that have not been standardised, for instance TCP CUBIC, the 
default congestion control for many Linux distributions at this time.

We have a set of patches, and a large body of experiments that shows them to
be effective for the thin-stream scenario without any significant disadvantages.
Please consider this before discarding the proposition based on a general 
principle of standardisation. We believe that the thin-stream modifications 
will provide extra value to Linux networking.

Best regards,
Andreas




* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-30 15:23 apetlund
@ 2009-10-30 16:13 ` William Allen Simpson
  2009-11-05 13:36   ` Andreas Petlund
  0 siblings, 1 reply; 13+ messages in thread
From: William Allen Simpson @ 2009-10-30 16:13 UTC (permalink / raw)
  To: Linux Kernel Network Developers
  Cc: apetlund, Ilpo Järvinen, Arnd Hannemann, linux-kernel,
	shemminger, davem, Christian Samsel

apetlund@simula.no wrote:
> I share the opinion that the linear timeouts should be limited, and back
> off exponentially after the limit, as Eric suggested. I believe this will
> be a sufficient safety-valve for the black-hole scenario, although I would
> like to run some tests.
> 
> As I wrote to Arnd, there are many similarities between the EFR approach and
> what our patch does. The largest difference is that the thin-stream
> patterns are identified as an indication of time-dependent/interactive
> apps. This is the reason why the proposed patch does not try to keep an
> inflated cwnd open, but only focuses on the cases with few packets in
> flight. The target is time-dependent/interactive applications, and as such
> we don't want a generally enabled mechanism, but want to give the option
> of enabling it only in the cases where it is most needed (in contrast
> to a generally enabled, "automagically" triggered EFR).
> 
> Below is a link to a table presenting some of the applications whose
> packet interarrival times we have traced and analysed:
> 
> http://folk.uio.no/apetlund/lktmp/thin_apps_table.pdf
> 
> We were surprised to see how many cases of "thin-stream" traffic patterns
> were indicative of time-dependent/interactive apps.
> 
I'm finding it hard to follow 3 threads, for the 3 parts of the patch.

As I mentioned in one of these threads, I've plenty of experience with
designing and implementing protocols for gaming.  And it seems to me that
you're making changes to the entire TCP stack to make up for shortcomings
in the implementor's design.  Yet, these changes require application
implementors to set a sockopt that's only available in Linux.  Unlikely,
as they probably don't even keep track of such things....

There are other efforts in this area, they've been mentioned.

I'm new to this list, so I'm not entirely sure that protocol design is
regularly discussed here.  But I'd prefer that the discussion moved to
one of the lists that's dedicated to such protocol design and testing.

I've already suggested the end-to-end interest list, where you'll find many
of us with a strong interest in this topic.

List-Subscribe: <http://mailman.postel.org/mailman/listinfo/end2end-interest>,
	<mailto:end2end-interest-request@postel.org?subject=subscribe>

The IETF has two related working groups:
   tcpm -- tcp modifications
   tsvwg -- general transport, including sctp modifications

Without further ado, just count me as opposed at this time.


* Re: [PATCH 1/3] net: TCP thin-stream detection
@ 2009-10-30 15:23 apetlund
  2009-10-30 16:13 ` William Allen Simpson
  0 siblings, 1 reply; 13+ messages in thread
From: apetlund @ 2009-10-30 15:23 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Arnd Hannemann, Andreas Petlund, William Allen Simpson, netdev,
	linux-kernel, shemminger, davem, Christian Samsel

> On Thu, 29 Oct 2009, Arnd Hannemann wrote:
>
>> Andreas Petlund wrote:
>> > On 28 Oct 2009, at 04:09, William Allen Simpson wrote:
>> >
>> >> Andreas Petlund wrote:
>> >>> +/* Determines whether this is a thin stream (which may suffer from
>> >>> + * increased latency). Used to trigger latency-reducing mechanisms.
>> >>> + */
>> >>> +static inline unsigned int tcp_stream_is_thin(const struct tcp_sock *tp)
>> >>> +{
>> >>> +	return tp->packets_out < 4;
>> >>> +}
>> >>> +
>> >> This bothers me a bit.  Having just looked at your Linux presentation,
>> >> and not (yet) read your papers, it seems much of your justification was
>> >> with 1 packet per RTT.  Here, you seem to be concentrating on 4, probably
>> >> because many implementations quickly ramp up to 4.
>> >>
>> >
>> > The limit of 4 packets in flight is based on the fact that less than 4
>> > packets in flight makes fast retransmissions impossible, thus limiting
>> > the retransmit options to timeout-retransmissions. The criterion is
>>
>> There is Limited Transmit! So this is generally not true.
>>
>> > therefore as conservative as possible while still serving its purpose.
>> > If further losses occur, the exponential backoff will increase latency
>> > further. The concept of using this limit is also discussed in the
>> > Internet draft for Early Retransmit by Allman et al.:
>> > http://www.icir.org/mallman/papers/draft-ietf-tcpm-early-rexmt-01.txt
>>
>> This ID is covering exactly the cases which Limited Transmit does not
>> cover and works "automagically" without help of application. So why not
>> just implement this ID?
>
> I even gave some advice recently to one guy on how to polish up the early
> retransmit implementation of his. ...However, I think we haven't heard
> from that since then... I added him as CC if he happens to have it already
> done.
>
> It is actually so that patches 1+3 implement sort of an early retransmit,
> just slightly more aggressive of it than what is given in the ID, but I find
> the difference in the aggressiveness rather insignificant. ...Whereas the
> RTO stuff is more questionable.
>

I share the opinion that the linear timeouts should be limited, and back
off exponentially after the limit, as Eric suggested. I believe this will
be a sufficient safety-valve for the black-hole scenario, although I would
like to run some tests.
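
To sketch the timeout policy being discussed (an illustration, not the patch
itself; the limit of six linear retries and the helper name are assumptions
made for the example):

/* Keep a constant RTO for the first few retransmissions of a thin stream,
 * then fall back to ordinary exponential backoff as a safety valve for
 * black-hole situations.  A real implementation would also clamp the
 * result to the maximum RTO.
 */
#define THIN_LINEAR_RETRIES 6

static unsigned int thin_backoff_rto(unsigned int base_rto,
				     unsigned int retransmits)
{
	if (retransmits <= THIN_LINEAR_RETRIES)
		return base_rto;		/* linear phase: no doubling */

	/* exponential backoff from here on, as for any other stream */
	return base_rto << (retransmits - THIN_LINEAR_RETRIES);
}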

As I wrote to Arnd, there are many similarities between the EFR approach and
what our patch does. The largest difference is that the thin-stream
patterns are identified as an indication of time-dependent/interactive
apps. This is the reason why the proposed patch does not try to keep an
inflated cwnd open, but only focuses on the cases with few packets in
flight. The target is time-dependent/interactive applications, and as such
we don't want a generally enabled mechanism, but want to give the option
of enabling it only in the cases where it is most needed (in contrast
to a generally enabled, "automagically" triggered EFR).

Below is a link to a table presenting some of the applications whose
packet interarrival times we have traced and analysed:

http://folk.uio.no/apetlund/lktmp/thin_apps_table.pdf

We were surprised to see how many cases of "thin-stream" traffic patterns
were indicative of time-dependent/interactive apps.

Regards,
Andreas








* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-29 16:32     ` Arnd Hannemann
@ 2009-10-29 20:26       ` Ilpo Järvinen
  0 siblings, 0 replies; 13+ messages in thread
From: Ilpo Järvinen @ 2009-10-29 20:26 UTC (permalink / raw)
  To: Arnd Hannemann
  Cc: Andreas Petlund, William Allen Simpson, netdev, linux-kernel,
	shemminger, davem, Christian Samsel

On Thu, 29 Oct 2009, Arnd Hannemann wrote:

> Andreas Petlund wrote:
> > On 28 Oct 2009, at 04:09, William Allen Simpson wrote:
> > 
> >> Andreas Petlund wrote:
> >>> +/* Determines whether this is a thin stream (which may suffer from
> >>> + * increased latency). Used to trigger latency-reducing mechanisms.
> >>> + */
> >>> +static inline unsigned int tcp_stream_is_thin(const struct  
> >>> tcp_sock *tp)
> >>> +{
> >>> +	return tp->packets_out < 4;
> >>> +}
> >>> +
> >> This bothers me a bit.  Having just looked at your Linux presentation,
> >> and not (yet) read your papers, it seems much of your justification  
> >> was
> >> with 1 packet per RTT.  Here, you seem to be concentrating on 4,  
> >> probably
> >> because many implementations quickly ramp up to 4.
> >>
> > 
> > The limit of 4 packets in flight is based on the fact that less than 4  
> > packets in flight makes fast retransmissions impossible, thus limiting  
> > the retransmit options to timeout-retransmissions. The criterion is  
> 
> There is Limited Transmit! So this is generally not true.
> 
> > therefore as conservative as possible while still serving its purpose.  
> > If further losses occur, the exponential backoff will increase latency  
> > further. The concept of using this limit is also discussed in the  
> > Internet draft for Early Retransmit by Allman et al.:
> > http://www.icir.org/mallman/papers/draft-ietf-tcpm-early-rexmt-01.txt
> 
> This ID is covering exactly the cases which Limited Transmit does not
> cover and works "automagically" without help of application. So why not
> just implement this ID?

I even gave some advice recently to one guy on how to polish up the early 
retransmit implementation of his. ...However, I think we haven't heard 
from that since then... I added him as CC if he happens to have it already 
done.

It is actually so that patches 1+3 implement sort of an early retransmit, 
just slightly more aggressive of it than what is given in the ID, but I find 
the difference in the aggressiveness rather insignificant. ...Whereas the 
RTO stuff is more questionable.


-- 
 i.


* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-29 13:51   ` Andreas Petlund
@ 2009-10-29 16:32     ` Arnd Hannemann
  2009-10-29 20:26       ` Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Arnd Hannemann @ 2009-10-29 16:32 UTC (permalink / raw)
  To: Andreas Petlund
  Cc: William Allen Simpson, netdev, linux-kernel, shemminger,
	ilpo.jarvinen, davem

Andreas Petlund wrote:
> On 28 Oct 2009, at 04:09, William Allen Simpson wrote:
> 
>> Andreas Petlund wrote:
>>> +/* Determines whether this is a thin stream (which may suffer from
>>> + * increased latency). Used to trigger latency-reducing mechanisms.
>>> + */
>>> +static inline unsigned int tcp_stream_is_thin(const struct  
>>> tcp_sock *tp)
>>> +{
>>> +	return tp->packets_out < 4;
>>> +}
>>> +
>> This bothers me a bit.  Having just looked at your Linux presentation,
>> and not (yet) read your papers, it seems much of your justification  
>> was
>> with 1 packet per RTT.  Here, you seem to be concentrating on 4,  
>> probably
>> because many implementations quickly ramp up to 4.
>>
> 
> The limit of 4 packets in flight is based on the fact that less than 4  
> packets in flight makes fast retransmissions impossible, thus limiting  
> the retransmit options to timeout-retransmissions. The criterion is  

There is Limited Transmit! So this is generally not true.

> therefore as conservative as possible while still serving its purpose.  
> If further losses occur, the exponential backoff will increase latency  
> further. The concept of using this limit is also discussed in the  
> Internet draft for Early Retransmit by Allman et al.:
> http://www.icir.org/mallman/papers/draft-ietf-tcpm-early-rexmt-01.txt

This ID is covering exactly the cases which Limited Transmit does not
cover and works "automagically" without help of application. So why not
just implement this ID?

Best regards,
Arnd


* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-28  3:09 ` William Allen Simpson
@ 2009-10-29 13:51   ` Andreas Petlund
  2009-10-29 16:32     ` Arnd Hannemann
  0 siblings, 1 reply; 13+ messages in thread
From: Andreas Petlund @ 2009-10-29 13:51 UTC (permalink / raw)
  To: William Allen Simpson
  Cc: netdev, linux-kernel, shemminger, ilpo.jarvinen, davem


On 28 Oct 2009, at 04:09, William Allen Simpson wrote:

> Andreas Petlund wrote:
>> +/* Determines whether this is a thin stream (which may suffer from
>> + * increased latency). Used to trigger latency-reducing mechanisms.
>> + */
>> +static inline unsigned int tcp_stream_is_thin(const struct  
>> tcp_sock *tp)
>> +{
>> +	return tp->packets_out < 4;
>> +}
>> +
> This bothers me a bit.  Having just looked at your Linux presentation,
> and not (yet) read your papers, it seems much of your justification  
> was
> with 1 packet per RTT.  Here, you seem to be concentrating on 4,  
> probably
> because many implementations quickly ramp up to 4.
>

The limit of 4 packets in flight is based on the fact that less than 4  
packets in flight makes fast retransmissions impossible, thus limiting  
the retransmit options to timeout-retransmissions. The criterion is  
therefore as conservative as possible while still serving its purpose.  
If further losses occur, the exponential backoff will increase latency  
further. The concept of using this limit is also discussed in the  
Internet draft for Early Retransmit by Allman et al.:
http://www.icir.org/mallman/papers/draft-ietf-tcpm-early-rexmt-01.txt
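
The arithmetic behind the threshold: if one of N packets in flight is lost,
the remaining segments can generate at most N - 1 duplicate ACKs, so the
standard three-dupACK fast retransmit needs N >= 4. A trivial illustration
(not patch code):

/* With one loss among 'packets_in_flight' original segments, each of the
 * remaining segments can generate at most one duplicate ACK, so fewer than
 * 4 packets in flight cannot reach the usual three-dupACK threshold.
 */
static unsigned int max_dupacks_after_one_loss(unsigned int packets_in_flight)
{
	return packets_in_flight ? packets_in_flight - 1 : 0;
}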

> But there's a fair amount of experience showing that ramping to 4 is
> problematic on congested paths, especially wireless networks.  Fast
> retransmit in that case would be disastrous.

First, the modifications implemented in the patch are explicitly  
enabled only for applications where the developer knows that streams  
will be thin, so only a small subset of the streams will apply the  
modifications. Second, experiments we have performed to map the  
effect on a congested bottleneck, both with and without the  
modifications, show no measurable effect.

Graphs presenting results from experiments performed to analyse  
latency and fairness issues can be found here:
http://folk.uio.no/apetlund/lktmp/

> Once upon a time, I worked on a fair number of interactive games a  
> decade
> or so ago.  And agree that this can be a problem, although I've never
> been a fan of turning off the Nagle algorithm.  My solution has always
> been a heartbeat, rather than trying to shoehorn this into TCP.
>

This patch began with an analysis of game traffic from the  
Norwegian game company Funcom. They use TCP for all their MMOGs, as  
does, for example, Blizzard for WoW. Our analysis showed that many  
players experienced extreme latencies, and the source of this was  
tracked down to the effects that we discuss here. As long as a wide range  
of time-dependent applications choose to use TCP, and we can improve  
conditions for their needs without jeopardising other functionality,  
we think that this will add value to the TCP stack.

> Also, I've not seen any discussion on the end-to-end interest list.

It will be enlightening to have a discussion on end-to-end about this  
topic.


* Re: [PATCH 1/3] net: TCP thin-stream detection
  2009-10-27 16:31 Andreas Petlund
@ 2009-10-28  3:09 ` William Allen Simpson
  2009-10-29 13:51   ` Andreas Petlund
  0 siblings, 1 reply; 13+ messages in thread
From: William Allen Simpson @ 2009-10-28  3:09 UTC (permalink / raw)
  To: Andreas Petlund; +Cc: netdev, linux-kernel, shemminger, ilpo.jarvinen, davem

Andreas Petlund wrote:
> +/* Determines whether this is a thin stream (which may suffer from
> + * increased latency). Used to trigger latency-reducing mechanisms.
> + */
> +static inline unsigned int tcp_stream_is_thin(const struct tcp_sock *tp)
> +{
> +	return tp->packets_out < 4;
> +}
> +
This bothers me a bit.  Having just looked at your Linux presentation,
and not (yet) read your papers, it seems much of your justification was
with 1 packet per RTT.  Here, you seem to be concentrating on 4, probably
because many implementations quickly ramp up to 4.

But there's a fair amount of experience showing that ramping to 4 is
problematic on congested paths, especially wireless networks.  Fast
retransmit in that case would be disastrous.

Once upon a time, I worked on a fair number of interactive games a decade
or so ago.  And agree that this can be a problem, although I've never
been a fan of turning off the Nagle algorithm.  My solution has always
been a heartbeat, rather than trying to shoehorn this into TCP.

Also, I've not seen any discussion on the end-to-end interest list.



* [PATCH 1/3] net: TCP thin-stream detection
@ 2009-10-27 16:31 Andreas Petlund
  2009-10-28  3:09 ` William Allen Simpson
  0 siblings, 1 reply; 13+ messages in thread
From: Andreas Petlund @ 2009-10-27 16:31 UTC (permalink / raw)
  To: netdev; +Cc: linux-kernel, shemminger, ilpo.jarvinen, davem

Inline function to dynamically detect thin streams based on the number of packets in flight. Used to trigger thin-stream mechanisms.


Signed-off-by: Andreas Petlund <apetlund@simula.no>
---
 include/net/tcp.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 03a49c7..7c4482f 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -800,6 +800,14 @@ static inline bool tcp_in_initial_slowstart(const struct tcp_sock *tp)
 	return tp->snd_ssthresh >= TCP_INFINITE_SSTHRESH;
 }
 
+/* Determines whether this is a thin stream (which may suffer from
+ * increased latency). Used to trigger latency-reducing mechanisms.
+ */
+static inline unsigned int tcp_stream_is_thin(const struct tcp_sock *tp)
+{
+	return tp->packets_out < 4;
+}
+
 /* If cwnd > ssthresh, we may raise ssthresh to be half-way to cwnd.
  * The exception is rate halving phase, when cwnd is decreasing towards
  * ssthresh.
-- 
1.6.0.4




