linux-kernel.vger.kernel.org archive mirror
* Re: Linux TCP impotency
  2001-05-13 19:38 Linux TCP impotency clock
@ 2001-05-13 19:38 ` Alan Cox
  2001-05-13 20:54 ` bert hubert
  1 sibling, 0 replies; 4+ messages in thread
From: Alan Cox @ 2001-05-13 19:38 UTC (permalink / raw)
  To: clock; +Cc: linux-kernel

> causes the earlier-started one to survive and the later one to starve.
> Running bcp (which uses UDP) instead of the second scp, at 11000 bytes
> per second, brought the utilization in both directions up to nearly 100%.
> 
> Is this normal TCP stack behaviour?

Yes. TCP is not fair. Look up 'capture effect' if you want to know more.


* Linux TCP impotency
@ 2001-05-13 19:38 clock
  2001-05-13 19:38 ` Alan Cox
  2001-05-13 20:54 ` bert hubert
  0 siblings, 2 replies; 4+ messages in thread
From: clock @ 2001-05-13 19:38 UTC (permalink / raw)
  To: linux-kernel

Using 2.2.19 I discovered that running two simultaneous scp's (each of
which tries to use the whole link capacity over TCP) on a 115200 bps
full-duplex null-modem serial cable causes the earlier-started one to
survive and the later one to starve. Running bcp (which uses UDP)
instead of the second scp, at 11000 bytes per second, brought the
utilization in both directions up to nearly 100%.

Is this normal TCP stack behaviour?

-- 
Karel Kulhavy                     http://atrey.karlin.mff.cuni.cz/~clock


* Re: Linux TCP impotency
  2001-05-13 19:38 Linux TCP impotency clock
  2001-05-13 19:38 ` Alan Cox
@ 2001-05-13 20:54 ` bert hubert
  2001-05-14 16:19   ` tdanis
  1 sibling, 1 reply; 4+ messages in thread
From: bert hubert @ 2001-05-13 20:54 UTC (permalink / raw)
  To: linux-kernel

On Sun, May 13, 2001 at 09:38:53PM +0200, clock@ghost.btnet.cz wrote:
> Using 2.2.19 I discovered that running two simultaneous scp's (each of
> which tries to use the whole link capacity over TCP) on a 115200 bps
> full-duplex null-modem serial cable causes the earlier-started one to
> survive and the later one to starve. Running bcp (which uses UDP)
> instead of the second scp, at 11000 bytes per second, brought the
> utilization in both directions up to nearly 100%.
> 
> Is this normal TCP stack behaviour?

Might very well be. Read about different forms of (class based) queuing
which try (and succeed) to improve IP in this respect. TCP is not fair and
IP has no intrinsic features to help you. http://ds9a.nl/2.4Routing contains
some explanations and links.

SFQ sounds like it might fit your bill.
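Under 2.4 with iproute2, attaching SFQ is a one-liner. A minimal sketch,
assuming the serial link shows up as a network device named sl0
(substitute ppp0 or whatever your setup uses), that the kernel was built
with CONFIG_NET_SCH_SFQ, and that you run it as root:

```shell
# Attach a stochastic fairness queueing discipline as the root qdisc,
# re-hashing flows every 10 seconds so no flow stays unlucky for long.
tc qdisc add dev sl0 root sfq perturb 10

# Inspect the installed qdisc and its statistics.
tc -s qdisc show dev sl0
```

With SFQ in place, each TCP flow gets its own hash bucket and the
queue is drained round-robin, so a second scp should no longer starve.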

Regards,

bert

-- 
http://www.PowerDNS.com      Versatile DNS Services  
Trilab                       The Technology People   
'SYN! .. SYN|ACK! .. ACK!' - the mating call of the internet


* Re: Linux TCP impotency
  2001-05-13 20:54 ` bert hubert
@ 2001-05-14 16:19   ` tdanis
  0 siblings, 0 replies; 4+ messages in thread
From: tdanis @ 2001-05-14 16:19 UTC (permalink / raw)
  To: linux-kernel

On Sun, May 13, 2001 at 10:54:15PM +0200, ahu@ds9a.nl wrote:
> On Sun, May 13, 2001 at 09:38:53PM +0200, clock@ghost.btnet.cz wrote:
> > Using 2.2.19 I discovered that running two simultaneous scp's (each of
> > which tries to use the whole link capacity over TCP) on a 115200 bps
> > full-duplex null-modem serial cable causes the earlier-started one to
> > survive and the later one to starve. Running bcp (which uses UDP)
> > instead of the second scp, at 11000 bytes per second, brought the
> > utilization in both directions up to nearly 100%.
> > 
> > Is this normal TCP stack behaviour?
> 
> Might very well be. Read about different forms of (class based) queuing
> which try (and succeed) to improve IP in this respect. TCP is not fair and
> IP has no intrinsic features to help you. http://ds9a.nl/2.4Routing contains
> some explanations and links.
> 
> SFQ sounds like it might fit your bill.
> 
> Regards,
> 
> bert
> 
> -- 
> http://www.PowerDNS.com      Versatile DNS Services  
> Trilab                       The Technology People   
> 'SYN! .. SYN|ACK! .. ACK!' - the mating call of the internet

	I was about to report the same 'odd' behaviour: I have 3
	machines, connected via a hub at 100 Mb/s half duplex.

	From machine A : rsh B dd if=/dev/zero bs=8192 | dd of=/dev/null

	=> transfer around 10/11 MB/s (B => A)

	Now, I start a second transfer from machine C :

	rsh B dd if=/dev/zero bs=8192 | dd of=/dev/null

	=> transfer around 10/11 MB/s between B and C, almost nothing
	between A and B (ie, the connection between A and B is stalled).

	If I stop the second transfer, it takes many seconds for
	the transfer between A and B to resume.

	On a highly saturated network, I have already seen such
	behavior.

	Is that related to the IP addresses, the lowest being served
	first ?
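	The dd-over-rsh measurement above can also be reproduced
	in-process. A minimal sketch (hypothetical port number, two
	senders over loopback, so it demonstrates the per-flow
	measurement harness rather than the half-duplex contention
	itself):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5801  # hypothetical local test port
DURATION = 1.0                  # seconds each sender transmits
received = {}                   # bytes counted per client address

def sink(server):
    # Accept two connections and count the bytes received on each,
    # the role played by "dd of=/dev/null" in the rsh test.
    def drain(conn, addr):
        total = 0
        while True:
            data = conn.recv(65536)
            if not data:
                break
            total += len(data)
        received[addr] = total
        conn.close()
    threads = []
    for _ in range(2):
        conn, addr = server.accept()
        t = threading.Thread(target=drain, args=(conn, addr))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

def sender():
    # Blast zero-filled 8 KiB blocks, like "dd if=/dev/zero bs=8192".
    s = socket.create_connection((HOST, PORT))
    chunk = b"\x00" * 8192
    deadline = time.time() + DURATION
    while time.time() < deadline:
        s.sendall(chunk)
    s.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(2)

sink_thread = threading.Thread(target=sink, args=(server,))
sink_thread.start()
senders = [threading.Thread(target=sender) for _ in range(2)]
for t in senders:
    t.start()
for t in senders:
    t.join()
sink_thread.join()
server.close()

for addr, total in sorted(received.items()):
    print(f"{addr}: {total / DURATION / 1e6:.1f} MB/s")
```

	Run one sender from each machine against the sink on B to get
	per-flow throughput numbers comparable to the dd figures.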

A+,
-- 
	Thierry Danis

