* Re: [MPTCP] MPTCP layer processing overhead?!
@ 2018-10-09 22:17 Christoph Paasch
  0 siblings, 0 replies; 4+ messages in thread
From: Christoph Paasch @ 2018-10-09 22:17 UTC (permalink / raw)
  To: mptcp


On 09/10/18 - 23:59:19, Dejene Boru wrote:
> Hi all,
> 
> My understanding of the MPTCP protocol is that if MPTCP has only one
> subflow, it behaves almost identically to regular TCP with Reno congestion
> control. This means that if I enable MPTCP but set the congestion control
> to, for example, BBR or DCTCP (configuring the parameters required by the
> respective congestion control), it should achieve the same throughput as
> when MPTCP is disabled and the corresponding congestion control is
> configured. I have tested MPTCP with one subflow using DCTCP congestion
> control and obtained 10% lower throughput than with MPTCP disabled. Which
> part of the MPTCP code results in this kind of overhead?

Are you CPU-limited in this scenario?

What throughput are you getting exactly?


Christoph


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [MPTCP] MPTCP layer processing overhead?!
@ 2018-10-09 22:44 Christoph Paasch
  0 siblings, 0 replies; 4+ messages in thread
From: Christoph Paasch @ 2018-10-09 22:44 UTC (permalink / raw)
  To: mptcp


(adding back the mailing-list in CC)

On 10/10/18 - 00:25:55, Dejene Boru wrote:
> The CPU does not seem to be the bottleneck: I have a Core i7, and it is
> only running iperf on the server and client. The throughput is 935 Mbps
> with MPTCP disabled and 922 Mbps with MPTCP enabled. By the way, I have
> repeated the same test with the congestion control set to Reno and
> obtained a similar result. I compiled MPTCP from source, v94 (kernel
> 4.14.73).

Ok, that's not a 10% reduction but rather a 1.3% reduction :)

MPTCP adds additional TCP options to the header; these consume 20 bytes.
With a 1500-byte MTU, that means you are effectively adding 20 / 1500 ≈ 1.3%
of overhead relative to the payload.

Thus, a 1.3% throughput reduction is expected here.
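The option-overhead arithmetic above can be sanity-checked against the
reported numbers with a few lines of Python (a back-of-the-envelope sketch;
the 1500-byte MTU and 20-byte option figure are taken from the explanation
above, not measured on the wire):

```python
# Back-of-the-envelope check of the MPTCP option overhead.
MTU = 1500        # bytes per frame (standard Ethernet MTU, as assumed above)
MPTCP_OPTS = 20   # extra bytes of MPTCP TCP options per segment

overhead = MPTCP_OPTS / MTU                 # fraction of payload lost to options
print(f"overhead: {overhead:.1%}")          # -> overhead: 1.3%

plain_tcp = 935.0                           # Mbps measured with MPTCP disabled
predicted = plain_tcp * (1 - overhead)      # expected MPTCP throughput
print(f"predicted: {predicted:.0f} Mbps")   # -> predicted: 923 Mbps
```

The prediction (about 923 Mbps) lines up well with the 922 Mbps actually
measured with MPTCP enabled.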


MPTCP's benefit is mostly visible when pooling resources across multiple
interfaces (or paths), or in mobility scenarios.


Christoph


> 
> Dejene Boru
> 
> 
> On Wed, Oct 10, 2018 at 12:17 AM Christoph Paasch <cpaasch(a)apple.com> wrote:
> 
> > On 09/10/18 - 23:59:19, Dejene Boru wrote:
> > > Hi all,
> > >
> > > My understanding of the MPTCP protocol is that if MPTCP has only one
> > > subflow, it behaves almost identically to regular TCP with Reno
> > > congestion control. This means that if I enable MPTCP but set the
> > > congestion control to, for example, BBR or DCTCP (configuring the
> > > parameters required by the respective congestion control), it should
> > > achieve the same throughput as when MPTCP is disabled and the
> > > corresponding congestion control is configured. I have tested MPTCP
> > > with one subflow using DCTCP congestion control and obtained 10% lower
> > > throughput than with MPTCP disabled. Which part of the MPTCP code
> > > results in this kind of overhead?
> >
> > Are you CPU-limited in this scenario?
> >
> > What throughput are you getting exactly?
> >
> >
> > Christoph
> >
> >


* [MPTCP] MPTCP layer processing overhead ?!
@ 2018-10-09 22:03 Dejene Boru
  0 siblings, 0 replies; 4+ messages in thread
From: Dejene Boru @ 2018-10-09 22:03 UTC (permalink / raw)
  To: mptcp


Hi all,

My understanding of the MPTCP protocol is that if MPTCP has only one
subflow, it behaves almost identically to regular TCP with Reno congestion
control. This means that if I enable MPTCP but set the congestion control
to, for example, BBR or DCTCP (configuring the parameters required by the
respective congestion control), it should achieve the same throughput as
when MPTCP is disabled and the corresponding congestion control is
configured. I have tested MPTCP with one subflow using DCTCP congestion
control and obtained 10% lower throughput than with MPTCP disabled. Which
part of the MPTCP code results in this kind of overhead?

Regards,



Dejene Boru
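For reference, the single-subflow comparison described above can be
sketched as a shell configuration. The sysctl names below are assumptions
based on the out-of-tree multipath-tcp.org kernel and should be verified
against your build; SERVER_IP is a placeholder:

```shell
# Toggle MPTCP on or off between the two comparison runs
# (assumed sysctl from the multipath-tcp.org MPTCP kernel)
sysctl -w net.mptcp.mptcp_enabled=1      # set to 0 for the plain-TCP run

# Use the same congestion control in both runs (e.g. dctcp, bbr, reno)
sysctl -w net.ipv4.tcp_congestion_control=dctcp

# Measure throughput with iperf between the two hosts
iperf -s &                 # on the server
iperf -c SERVER_IP -t 30   # on the client (SERVER_IP is a placeholder)
```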



* [MPTCP] MPTCP layer processing overhead?!
@ 2018-10-09 21:59 Dejene Boru
  0 siblings, 0 replies; 4+ messages in thread
From: Dejene Boru @ 2018-10-09 21:59 UTC (permalink / raw)
  To: mptcp


Hi all,

My understanding of the MPTCP protocol is that if MPTCP has only one
subflow, it behaves almost identically to regular TCP with Reno congestion
control. This means that if I enable MPTCP but set the congestion control
to, for example, BBR or DCTCP (configuring the parameters required by the
respective congestion control), it should achieve the same throughput as
when MPTCP is disabled and the corresponding congestion control is
configured. I have tested MPTCP with one subflow using DCTCP congestion
control and obtained 10% lower throughput than with MPTCP disabled. Which
part of the MPTCP code results in this kind of overhead?

Regards,

Dejene Boru



end of thread, other threads:[~2018-10-09 22:44 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-09 22:17 [MPTCP] MPTCP layer processing overhead?! Christoph Paasch
  -- strict thread matches above, loose matches on Subject: below --
2018-10-09 22:44 Christoph Paasch
2018-10-09 22:03 [MPTCP] MPTCP layer processing overhead ?! Dejene Boru
2018-10-09 21:59 [MPTCP] MPTCP layer processing overhead?! Dejene Boru
