From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen developer discussion <xen-devel@lists.xenproject.org>,
	netdev@vger.kernel.org
Subject: Re: Layer 3 (point-to-point) netfront and netback drivers
Date: Mon, 19 Sep 2022 19:32:57 -0400	[thread overview]
Message-ID: <Yyj8K0OL/M2L/Ts1@itl-email> (raw)
In-Reply-To: <Yyj5d0uTeXLGmvLK@mattapan.m5p.com>


On Mon, Sep 19, 2022 at 04:21:27PM -0700, Elliott Mitchell wrote:
> On Mon, Sep 19, 2022 at 05:41:05PM -0400, Demi Marie Obenour wrote:
> > On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> > > On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > > > How difficult would it be to provide layer 3 (point-to-point) versions
> > > > of the existing netfront and netback drivers?  Ideally, these would
> > > > share almost all of the code with the existing drivers, with the only
> > > > difference being how they are registered with the kernel.  Advantages
> > > > compared to the existing drivers include less attack surface (since the
> > > > peer is no longer network-adjacent), slightly better performance, and no
> > > > need for ARP or NDP traffic.
> > > 
> > > I've actually been wondering about a similar idea.  How about breaking
> > > the entire network stack off and placing /that/ in a separate VM?
> > 
> > This is going to be very hard to do without awesome but difficult
> > changes to applications.  Switching to layer 3 links is a much smaller
> > change that should be transparent to applications.
> 
> Indeed for ones which modify network settings, but not for ones which
> merely use the sockets API.  Isn't this the same issue for what you're
> suggesting?

No.  What I am referring to is having netfront and netback carry IP
packets instead of Ethernet frames.  This is transparent to applications
that use the sockets API.  What you are talking about, if I understand
correctly, requires changing the implementation of the sockets API,
which is much harder.

> > > The other use is network cards which are increasingly able to handle more
> > > of the network stack.  The Linux network team have been resistant to
> > > allowing more offloading, so perhaps it is time to break *everything*
> > > off.
> > 
> > Do you have any particular examples?  The only one I can think of is
> > that Linux is not okay with TCP offload engines.
> 
> That is precisely what I was thinking of.  While I understand the desire
> for control, when it comes down to it a network card which lies could
> simply transparently proxy everything.  Anything not protected by
> cryptography is vulnerable, so worrying about raw packets doesn't seem
> useful.

IIRC the problems with TCP offload engines are that they do not support
all of Linux’s features (such as netfilter), would require invasive
hooks so that configuration could still be handled with standard Linux
tools, and have closed-source firmware with substantial remote attack
surface.

> > > I'm unsure the benefits would justify the effort, but I keep thinking of
> > > this as the solution to some interesting issues.  Filtering becomes more
> > > interesting, but BPF could work across VMs.
> > 
> > Classic BPF perhaps, but eBPF's attack surface is far too large for this
> > to be viable.  Unprivileged eBPF is already disabled by default.
> 
> I was thinking of classic BPF.  If everything below the sockets layer
> was in a separate VM, filtering rules could still work by pushing BPF
> rules to the other side.
> 
> 
> Your idea is to push less into a separate VM than I was thinking.  I
> wanted to bring up it might be worthwhile pushing more.  If your project
> launches I imagine eventually you'll be trying to encompass more, so it
> may be easier to consider what the future will hold.

I don’t actually plan to go beyond this, although you are of course free
to do so.  This change is simply to reduce attack surface and complexity
in Qubes OS, which uses layer 2 links where layer 3 links would do.  I
am hoping this is just a matter of how the netback and netfront drivers
register with Linux.  I also don’t have the time to implement the change
right now.  My question is about what the change would involve.
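For illustration only, a hedged sketch of what "registering differently"
might look like on the Linux side, modeled on how the tun driver sets up
its layer-3 devices; the function names and field choices here are my
guesses at a possible shape, not an actual patch:

```c
/* Hypothetical setup callback for a layer-3 netfront, in place of the
 * ether_setup() the existing driver gets via alloc_etherdev().
 * Modeled on drivers/net/tun.c's IFF_TUN setup; names are illustrative. */
static void xennet_l3_setup(struct net_device *dev)
{
	dev->type            = ARPHRD_NONE;	/* no link-layer header */
	dev->flags           = IFF_POINTOPOINT | IFF_NOARP; /* no ARP/NDP */
	dev->hard_header_len = 0;
	dev->addr_len        = 0;		/* no MAC address */
	dev->mtu             = ETH_DATA_LEN;	/* keep 1500 for compatibility */
	dev->netdev_ops      = &xennet_netdev_ops; /* reuse existing rx/tx paths */
}

/* The existing driver does roughly:
 *	alloc_etherdev(sizeof(struct netfront_info));
 * a layer-3 variant might instead do:
 *	alloc_netdev(sizeof(struct netfront_info), "xnl3%d",
 *		     NET_NAME_ENUM, xennet_l3_setup);
 */
```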
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

