* Layer 3 (point-to-point) netfront and netback drivers
From: Demi Marie Obenour @ 2022-09-18 12:41 UTC
  To: Xen developer discussion, netdev


How difficult would it be to provide layer 3 (point-to-point) versions
of the existing netfront and netback drivers?  Ideally, these would
share almost all of the code with the existing drivers, with the only
difference being how they are registered with the kernel.  Advantages
compared to the existing drivers include less attack surface (since the
peer is no longer network-adjacent), slightly better performance, and no
need for ARP or NDP traffic.
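
To make this concrete, here is a rough sketch of the only part I would
expect to differ (names like xennet_l3_setup and "xnl3%d" are
hypothetical, and this is not the actual netfront code, which uses the
Ethernet helpers):

#include <linux/netdevice.h>
#include <linux/if_arp.h>

/* Hypothetical setup for an L3 point-to-point variant; the existing
 * drivers use ether_setup() and get ARPHRD_ETHER semantics instead. */
static void xennet_l3_setup(struct net_device *dev)
{
        dev->type            = ARPHRD_NONE;  /* no link-layer header */
        dev->flags           = IFF_POINTOPOINT | IFF_NOARP;
        dev->hard_header_len = 0;  /* payload starts at the IP header */
        dev->addr_len        = 0;  /* no MAC address */
}

/* ...and at device creation, instead of the Ethernet allocator: */
netdev = alloc_netdev(sizeof(struct netfront_info), "xnl3%d",
                      NET_NAME_ENUM, xennet_l3_setup);

The ring handling, grant tables, and event channels would be untouched.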
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



* Re: Layer 3 (point-to-point) netfront and netback drivers
From: Elliott Mitchell @ 2022-09-19 20:46 UTC
  To: Demi Marie Obenour; +Cc: Xen developer discussion, netdev

On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> How difficult would it be to provide layer 3 (point-to-point) versions
> of the existing netfront and netback drivers?  Ideally, these would
> share almost all of the code with the existing drivers, with the only
> difference being how they are registered with the kernel.  Advantages
> compared to the existing drivers include less attack surface (since the
> peer is no longer network-adjacent), slightly better performance, and no
> need for ARP or NDP traffic.

I've actually been wondering about a similar idea.  How about breaking
the entire network stack off and placing /that/ in a separate VM?

One use for this is that a VM could be constrained to *exclusively* have
network access via Tor.  This would allow a better hidden service as it
would have no network topology knowledge.

The other use is network cards which are increasingly able to handle more
of the network stack.  The Linux network team have been resistant to
allowing more offloading, so perhaps it is time to break *everything*
off.

I'm unsure the benefits would justify the effort, but I keep thinking of
this as the solution to some interesting issues.  Filtering becomes more
interesting, but BPF could work across VMs.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




* Re: Layer 3 (point-to-point) netfront and netback drivers
From: Demi Marie Obenour @ 2022-09-19 21:41 UTC
  To: Elliott Mitchell; +Cc: Xen developer discussion, netdev


On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > How difficult would it be to provide layer 3 (point-to-point) versions
> > of the existing netfront and netback drivers?  Ideally, these would
> > share almost all of the code with the existing drivers, with the only
> > difference being how they are registered with the kernel.  Advantages
> > compared to the existing drivers include less attack surface (since the
> > peer is no longer network-adjacent), slightly better performance, and no
> > need for ARP or NDP traffic.
> 
> I've actually been wondering about a similar idea.  How about breaking
> the entire network stack off and placing /that/ in a separate VM?

This is going to be very hard to do without far-reaching and difficult
changes to applications.  Switching to layer 3 links is a much smaller
change that should be transparent to applications.

> One use for this is that a VM could be constrained to *exclusively* have
> network access via Tor.  This would allow a better hidden service as it
> would have no network topology knowledge.

That is great in theory, but in practice programs will expect to use
network protocols to connect to Tor.  Whonix already implements this
with the current Xen netfront/netback.

> The other use is network cards which are increasingly able to handle more
> of the network stack.  The Linux network team have been resistant to
> allowing more offloading, so perhaps it is time to break *everything*
> off.

Do you have any particular examples?  The only one I can think of is
that Linux is not okay with TCP offload engines.

> I'm unsure the benefits would justify the effort, but I keep thinking of
> this as the solution to some interesting issues.  Filtering becomes more
> interesting, but BPF could work across VMs.

Classic BPF perhaps, but eBPF's attack surface is far too large for this
to be viable.  Unprivileged eBPF is already disabled by default.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



* Re: Layer 3 (point-to-point) netfront and netback drivers
From: Elliott Mitchell @ 2022-09-19 23:21 UTC
  To: Demi Marie Obenour; +Cc: Xen developer discussion, netdev

On Mon, Sep 19, 2022 at 05:41:05PM -0400, Demi Marie Obenour wrote:
> On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> > On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > > How difficult would it be to provide layer 3 (point-to-point) versions
> > > of the existing netfront and netback drivers?  Ideally, these would
> > > share almost all of the code with the existing drivers, with the only
> > > difference being how they are registered with the kernel.  Advantages
> > > compared to the existing drivers include less attack surface (since the
> > > peer is no longer network-adjacent), slightly better performance, and no
> > > need for ARP or NDP traffic.
> > 
> > I've actually been wondering about a similar idea.  How about breaking
> > the entire network stack off and placing /that/ in a separate VM?
> 
> This is going to be very hard to do without far-reaching and difficult
> changes to applications.  Switching to layer 3 links is a much smaller
> change that should be transparent to applications.

Indeed for ones which modify network settings, but not for ones which
merely use the sockets API.  Isn't this the same issue for what you're
suggesting?

(I'm suggesting pushing more into a separate VM, but the principle is the
same)


> > One use for this is that a VM could be constrained to *exclusively* have
> > network access via Tor.  This would allow a better hidden service as it
> > would have no network topology knowledge.
> 
> That is great in theory, but in practice programs will expect to use
> network protocols to connect to Tor.  Whonix already implements this
> with the current Xen netfront/netback.

Whonix is wrapping at layer 2 and simply NATing everything.  I'm
suggesting cutting at a higher layer.

> > The other use is network cards which are increasingly able to handle more
> > of the network stack.  The Linux network team have been resistant to
> > allowing more offloading, so perhaps it is time to break *everything*
> > off.
> 
> Do you have any particular examples?  The only one I can think of is
> that Linux is not okay with TCP offload engines.

That is precisely what I was thinking of.  While I understand the desire
for control, when it comes down to it, a network card which lies could
simply transparently proxy everything.  Anything not protected by
cryptography is vulnerable, so worrying about raw packets doesn't seem
useful.

> > I'm unsure the benefits would justify the effort, but I keep thinking of
> > this as the solution to some interesting issues.  Filtering becomes more
> > interesting, but BPF could work across VMs.
> 
> Classic BPF perhaps, but eBPF's attack surface is far too large for this
> to be viable.  Unprivileged eBPF is already disabled by default.

I was thinking of classic BPF.  If everything below the sockets layer
was in a separate VM, filtering rules could still work by pushing BPF
rules to the other side.
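
As a rough sketch of the sort of program I mean (plain classic BPF as
used with SO_ATTACH_FILTER; nothing Xen-specific, and the mechanism for
pushing it across to the other VM is the part which would need
inventing):

#include <linux/filter.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Classic BPF: on a raw IP link the packet starts at the IP header,
 * so the protocol field sits at offset 9.  Accept IPv4 TCP, drop the
 * rest. */
static struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS, 9),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_TCP, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, 0xffffffff),  /* accept */
        BPF_STMT(BPF_RET | BPF_K, 0),           /* drop */
};
static struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
};

/* Locally this is setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog,
 * sizeof(prog)); across VMs the same bytes would be handed to the
 * other side to run there. */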


Your idea is to push less into a separate VM than I was thinking.  I
wanted to bring up that it might be worthwhile pushing more.  If your
project launches, I imagine you'll eventually be trying to encompass
more, so it may be easier to consider now what the future will hold.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




* Re: Layer 3 (point-to-point) netfront and netback drivers
From: Demi Marie Obenour @ 2022-09-19 23:32 UTC
  To: Elliott Mitchell; +Cc: Xen developer discussion, netdev


On Mon, Sep 19, 2022 at 04:21:27PM -0700, Elliott Mitchell wrote:
> On Mon, Sep 19, 2022 at 05:41:05PM -0400, Demi Marie Obenour wrote:
> > On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> > > On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > > > How difficult would it be to provide layer 3 (point-to-point) versions
> > > > of the existing netfront and netback drivers?  Ideally, these would
> > > > share almost all of the code with the existing drivers, with the only
> > > > difference being how they are registered with the kernel.  Advantages
> > > > compared to the existing drivers include less attack surface (since the
> > > > peer is no longer network-adjacent), slightly better performance, and no
> > > > need for ARP or NDP traffic.
> > > 
> > > I've actually been wondering about a similar idea.  How about breaking
> > > the entire network stack off and placing /that/ in a separate VM?
> > 
> > This is going to be very hard to do without far-reaching and difficult
> > changes to applications.  Switching to layer 3 links is a much smaller
> > change that should be transparent to applications.
> 
> Indeed for ones which modify network settings, but not for ones which
> merely use the sockets API.  Isn't this the same issue for what you're
> suggesting?

No.  What I am referring to is having netfront and netback carry IP
packets instead of Ethernet frames.  This is transparent to applications
that use the sockets API.  What you are talking about, if I understand
correctly, requires changing the implementation of the sockets API,
which is much harder.
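
To illustrate how small the delta is, here is a sketch of the receive
path on such a link (hypothetical name, not actual netfront code; this
is the same trick tun(4) uses in IFF_TUN mode).  On an ARPHRD_NONE
device there is no Ethertype, so the protocol comes from the IP version
nibble instead of eth_type_trans():

#include <linux/skbuff.h>
#include <linux/if_ether.h>

/* Recover skb->protocol from the first nibble of the IP header. */
static __be16 xnl3_type_trans(const struct sk_buff *skb)
{
        switch (skb->data[0] >> 4) {
        case 4:
                return htons(ETH_P_IP);
        case 6:
                return htons(ETH_P_IPV6);
        default:
                return 0;       /* unknown; the caller drops it */
        }
}

Everything above this, including the sockets API, is unchanged.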

> > > The other use is network cards which are increasingly able to handle more
> > > of the network stack.  The Linux network team have been resistant to
> > > allowing more offloading, so perhaps it is time to break *everything*
> > > off.
> > 
> > Do you have any particular examples?  The only one I can think of is
> > that Linux is not okay with TCP offload engines.
> 
> That is precisely what I was thinking of.  While I understand the desire
> for control, when it comes down to it, a network card which lies could
> simply transparently proxy everything.  Anything not protected by
> cryptography is vulnerable, so worrying about raw packets doesn't seem
> useful.

IIRC the problems with TCP offload engines are that they do not support
all of Linux’s features (such as netfilter), require invasive hooks so
that various configuration can be handled using standard Linux tools,
and have closed-source firmware with substantial remote attack surface.

> > > I'm unsure the benefits would justify the effort, but I keep thinking of
> > > this as the solution to some interesting issues.  Filtering becomes more
> > > interesting, but BPF could work across VMs.
> > 
> > Classic BPF perhaps, but eBPF's attack surface is far too large for this
> > to be viable.  Unprivileged eBPF is already disabled by default.
> 
> I was thinking of classic BPF.  If everything below the sockets layer
> was in a separate VM, filtering rules could still work by pushing BPF
> rules to the other side.
> 
> 
> Your idea is to push less into a separate VM than I was thinking.  I
> wanted to bring up that it might be worthwhile pushing more.  If your
> project launches, I imagine you'll eventually be trying to encompass
> more, so it may be easier to consider now what the future will hold.

I don’t actually plan to go beyond this, although you are of course free
to do so.  This change is simply to reduce attack surface and complexity
in Qubes OS, which uses layer 2 links where layer 3 links would do.  I
am hoping this is just a matter of how the netback and netfront drivers
register with Linux.  I also don’t have the time to implement the change
right now.  My question is about what the change would involve.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


