From: Paolo Abeni <pabeni@redhat.com>
To: Paul Moore <paul@paul-moore.com>, Florian Westphal <fw@strlen.de>
Cc: Stephen Smalley <stephen.smalley.work@gmail.com>,
	selinux@vger.kernel.org, mptcp@lists.01.org,
	linux-security-module@vger.kernel.org
Subject: [MPTCP] Re: [RFC PATCH] selinux: handle MPTCP consistently with TCP
Date: Fri, 04 Dec 2020 11:04:36 +0100
Message-ID: <8c844984eaa92413066367af69b56194b111ad8f.camel@redhat.com>
In-Reply-To: <CAHC9VhT-rj=tJwVycS19TgJDQ766oUH6ng+Uv=wu+WDrgE0AHA@mail.gmail.com>


On Thu, 2020-12-03 at 21:24 -0500, Paul Moore wrote:
> On Thu, Dec 3, 2020 at 6:54 PM Florian Westphal <fw@strlen.de> wrote:
> > Paul Moore <paul@paul-moore.com> wrote:
> > > I'm not very well versed in MPTCP, but this *seems* okay to me, minus
> > > the else-crud chunk.  Just to confirm my understanding, while MPTCP
> > > allows one TCP connection/stream to be subdivided and distributed
> > > across multiple interfaces, it does not allow multiple TCP streams to
> > > be multiplexed on a single connection, yes?
> > 
> > It's the latter.  The application sees a TCP interface (socket), but
> > data may be carried over multiple individual TCP streams on the wire.
> 
> Hmm, that may complicate things a bit from a SELinux perspective.  Maybe not.
> 
> Just to make sure I understand, with MPTCP, a client that
> traditionally opened multiple TCP sockets to talk to a server would
> now just open a single MPTCP socket and create multiple sub-flows
> instead of multiple TCP sockets?

I expect most clients will not be updated specifically for MPTCP,
except for changing the protocol number at socket creation time - and
we would like to avoid even that.
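
For illustration, a minimal user-space sketch of that single change
(IPPROTO_MPTCP is the protocol value Linux defines since v5.6; the
fallback to plain TCP is just a common pattern, not something this
patch requires):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* value used by Linux since v5.6 */
#endif

int main(void)
{
	/* The only MPTCP-specific bit: the protocol number. */
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	if (fd < 0) {
		/* Kernels without MPTCP return EPROTONOSUPPORT;
		 * falling back to plain TCP is the usual reaction.
		 */
		perror("socket(IPPROTO_MPTCP)");
		fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
	}

	/* From here on, connect()/send()/recv() behave as on any TCP socket. */
	return fd < 0 ? 1 : 0;
}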

If a given application creates multiple sockets, it will still do that
with MPTCP. The kernel, according to the configuration provided by
user space and/or by the peer, may try to create additional subflows
for each MPTCP socket, using different local or remote addresses
and/or port numbers. Each subflow is represented inside the kernel as
a TCP 'struct sock' with specific ULP operations. No related 'struct
socket' is exposed to user space.
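
As a rough illustration of that last point, the subflow side hooks
into TCP through the kernel's ULP interface. The sketch below is a
simplified paraphrase of net/mptcp/subflow.c (callback bodies and
some fields elided), not the exact code:

/* Simplified sketch: MPTCP subflows attach to a TCP 'struct sock'
 * via the upper layer protocol (ULP) hooks declared as
 * struct tcp_ulp_ops in include/net/tcp.h.
 */
static struct tcp_ulp_ops subflow_ulp_ops __read_mostly = {
	.name    = "mptcp",
	.owner   = THIS_MODULE,
	.init    = subflow_ulp_init,	/* attach subflow context to the sock */
	.release = subflow_ulp_release,	/* detach it when the subflow goes away */
};

/* hypothetical wrapper name, used here only to show where the
 * registration happens (once, at MPTCP init time)
 */
void __init mptcp_subflow_init_sketch(void)
{
	tcp_register_ulp(&subflow_ulp_ops);
}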

Cheers,

Paolo

Thread overview:
2020-12-04 10:04 Paolo Abeni [this message]
2020-12-10  2:43 Paul Moore
2020-12-09 10:02 Paolo Abeni
2020-12-08 23:35 Paul Moore
2020-12-08 15:35 Paolo Abeni
2020-12-04 23:22 Paul Moore
2020-12-04  2:24 Paul Moore
2020-12-03 23:54 Florian Westphal
2020-12-03 23:30 Paul Moore
2020-12-03 17:24 Mat Martineau
2020-12-02 11:17 Paolo Abeni
2020-12-02 10:31 Paolo Abeni
