From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Cc: fwestpha@redhat.com
Subject: Re: [RFC PATCH 0/4] mptcp: just another receive path refactor
Date: Fri, 28 May 2021 17:18:55 +0200	[thread overview]
Message-ID: <9af85be18465b0f9595459764013568456453ce9.camel@redhat.com> (raw)
In-Reply-To: <cover.1621963632.git.pabeni@redhat.com>

On Tue, 2021-05-25 at 19:37 +0200, Paolo Abeni wrote:
> This could have some negative performance effects, as on average more
> locking is required for each packet. I'm doing some perf test and will
> report the results.

There are several different possible scenarios:

1) single subflow, ksoftirq && user-space process run on the same CPU
2) multiple subflows, ksoftirqs && user-space process run on the same
CPU
3) single subflow, ksoftirq && user-space process run on different CPUs
4) multiple subflows, ksoftirqs && user-space process run on different
CPUs

With a single subflow, the most common scenario is ksoftirq && the
user-space process running on the same CPU. With multiple subflows on
reasonable server H/W we should likely observe a more mixed situation:
softirqs running on multiple CPUs, one of them also hosting the user-
space process. I don't have data for that yet.
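
For reference, pinning the user-space process for the scenarios above
boils down to a plain sched_setaffinity() call (illustrative snippet,
not part of the series); the softirq side is steered separately, e.g.
via the NIC IRQ affinity in /proc/irq/<nr>/smp_affinity:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* pin the calling (receiver) process to a single CPU */
static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		exit(1);
	}
}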

The figures:

scenario	export branch 		RX path refactor	delta
1)		23Mbps			21Mbps			-8%
2)		30Mbps			19Mbps			-37%
3)		17.8Mbps		17.5Mbps		noise range
4)		1-3Mbps			1-3Mbps			???

The last scenario exposed a bug: we likely don't send MPTCP-level ACKs
frequently enough under some conditions. That *could* possibly be
related to:

https://github.com/multipath-tcp/mptcp_net-next/issues/137

but I'm unsure about that.

The delta in scenario 2) is quite significant.

The root cause is that in such a scenario the user-space process is the
bottleneck: it keeps a CPU fully busy, spending most of the available
cycles memcpying the data into user space.
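
To make the point concrete, here is a much simplified sketch of that
process-context copy loop (illustrative only, not the actual
mptcp_recvmsg(); partial reads and locking are ignored):

#include <linux/skbuff.h>
#include <net/sock.h>

static int rx_copy_to_user_sketch(struct sock *sk, struct msghdr *msg,
				  size_t len)
{
	struct sk_buff *skb;
	int copied = 0;

	while (copied < len &&
	       (skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
		int chunk = min_t(int, skb->len, len - copied);

		/* this copy is where the reader burns most of its cycles */
		if (skb_copy_datagram_msg(skb, 0, msg, chunk)) {
			kfree_skb(skb);
			return -EFAULT;
		}

		copied += chunk;
		kfree_skb(skb);
	}
	return copied;
}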

With the current export branch, the skb movement/enqueuing happens
completely inside the ksoftirqd processes.
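
Roughly speaking (hypothetical names, not the real subflow code), the
data_ready callback running in softirq context already does all the
queue shuffling there:

/* runs in softirq/ksoftirqd context: the enqueuing cost is billed here */
static void subflow_data_ready_sketch(struct sock *ssk,
				      struct sk_buff_head *msk_rx_queue)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&ssk->sk_receive_queue)) != NULL)
		skb_queue_tail(msk_rx_queue, skb);
}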

On top of the RX path refactor, some skb handling is performed by
mptcp_release_cb() inside the scope of the user-space process. That
reduces the number of CPU cycles available for memcpying the data and
thus also reduces the overall throughput.
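
The deferred work ends up billed to the reader because of the usual
release_cb pattern: when the user-space task drops the msk socket lock,
the core invokes the protocol callback in that task's context (abridged
sketch of release_sock(), with mptcp_release_cb() hooked as
sk_prot->release_cb):

void release_sock(struct sock *sk)
{
	spin_lock_bh(&sk->sk_lock.slock);
	if (sk->sk_backlog.tail)
		__release_sock(sk);	/* drain the backlog in process context */

	if (sk->sk_prot->release_cb)
		sk->sk_prot->release_cb(sk);	/* -> mptcp_release_cb() */

	sock_release_ownership(sk);
	if (waitqueue_active(&sk->sk_lock.wq))
		wake_up(&sk->sk_lock.wq);
	spin_unlock_bh(&sk->sk_lock.slock);
}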

I experimented with a different approach - e.g. keeping the skbs
accounted to the incoming subflows - but that does not look feasible.

Input wanted: WDYT of the above?

Thanks!

Paolo


Thread overview: 10+ messages
2021-05-25 17:37 [RFC PATCH 0/4] mptcp: just another receive path refactor Paolo Abeni
2021-05-25 17:37 ` [RFC PATCH 1/4] mptcp: wake-up readers only for in sequence data Paolo Abeni
2021-05-25 17:37 ` [RFC PATCH 2/4] mptcp: don't clear MPTCP_DATA_READY in sk_wait_event() Paolo Abeni
2021-05-25 17:37 ` [RFC PATCH 3/4] mptcp: move the whole rx path under msk socket lock protection Paolo Abeni
2021-05-26  0:06   ` Mat Martineau
2021-05-26 10:50     ` Paolo Abeni
2021-05-25 17:37 ` [RFC PATCH 4/4] mptcp: cleanup mem accounting Paolo Abeni
2021-05-26  0:12   ` Mat Martineau
2021-05-26 10:42     ` Paolo Abeni
2021-05-28 15:18 ` Paolo Abeni [this message]
