* [Qemu-devel] Qemu-KVM VETH
@ 2013-09-19 21:31 Tim Epkes
  2013-09-20 11:58 ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Tim Epkes @ 2013-09-19 21:31 UTC (permalink / raw)
  To: qemu-devel

Are there any plans to provide VETH support for Qemu-KVM?  It is a great
point-to-point tie when connecting KVM guests on the same machine.  I have
multiple reasons for wanting it (one is educational).  Thanks

Tim

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-19 21:31 [Qemu-devel] Qemu-KVM VETH Tim Epkes
@ 2013-09-20 11:58 ` Stefan Hajnoczi
  2013-09-20 15:48   ` Tim Epkes
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2013-09-20 11:58 UTC (permalink / raw)
  To: Tim Epkes; +Cc: qemu-devel

On Thu, Sep 19, 2013 at 05:31:01PM -0400, Tim Epkes wrote:
> Are there any plans to provide VETH support for Qemu-KVM?  It is a great
> point-to-point tie when connecting KVM guests on the same machine.  I have
> multiple reasons for wanting it (one is educational).  Thanks

QEMU already supports -netdev tap (if you want to use the host Linux
networking stack) and -netdev socket (if you just want point-to-point
tunneling).
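
For example (the device model, interface names, and port numbers below
are only illustrative):

  # tap: frames pass through the host network stack via tap0
  qemu-system-x86_64 ... -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
                         -device virtio-net-pci,netdev=net0

  # socket: one guest listens, the other connects; frames tunnel over TCP
  qemu-system-x86_64 ... -netdev socket,id=net0,listen=:1234 \
                         -device virtio-net-pci,netdev=net0
  qemu-system-x86_64 ... -netdev socket,id=net0,connect=127.0.0.1:1234 \
                         -device virtio-net-pci,netdev=net0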

The veth driver isn't suitable for QEMU's use case.  QEMU is a userspace
process that wants to inject/extract Ethernet frames.  That's exactly
what the tun (tap) driver does.  veth is useful for containers where you
want a Linux network interface that is handled by the host network stack.

Two solutions for point-to-point:

1. Run two guests with -netdev tap.  Put the interfaces on a software
   bridge (see brctl(8)).  Or you could also use IP forwarding instead
   of a bridge if you like.

2. Run two guests with -netdev socket.  They send Ethernet frames
   directly to each other.

See the qemu man page for configuration details.
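
For option 1, the host-side setup is roughly as follows (the bridge and
tap names are just placeholders):

  # create two tap devices and attach them to a software bridge
  ip tuntap add dev tap0 mode tap
  ip tuntap add dev tap1 mode tap
  brctl addbr br0
  brctl addif br0 tap0
  brctl addif br0 tap1
  ip link set dev br0 up
  ip link set dev tap0 up
  ip link set dev tap1 up

Each guest then uses -netdev tap,ifname=tap0 (or tap1) as sketched above.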

Stefan

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-20 11:58 ` Stefan Hajnoczi
@ 2013-09-20 15:48   ` Tim Epkes
  2013-09-20 17:26     ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Tim Epkes @ 2013-09-20 15:48 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

Stefan,

The problem I face is that a bridge in the middle (using taps) breaks
LLDP (the L2 discovery protocol) and should break IS-IS as well.  Sockets
aren't bad, but if for some reason I take the listener VM down and bring
it back up, then I have to bring down all the connector VMs, which chains
on itself if there is a lot of connectivity defined.  When UDP was
available that wasn't an issue.

Anyway, that is how I came to VETHs.  I am aware that you can use UDP
multicast, but performance past one hop is extremely poor (resulting in 3
of 5 pings being lost).  Thanks

Tim

On Friday, September 20, 2013, Stefan Hajnoczi wrote:

> On Thu, Sep 19, 2013 at 05:31:01PM -0400, Tim Epkes wrote:
> > Are there any plans to provide VETH support for Qemu-KVM?  It is a great
> > point-to-point tie when connecting KVM guests on the same machine.  I have
> > multiple reasons for wanting it (one is educational).  Thanks
>
> QEMU already supports -netdev tap (if you want to use the host Linux
> networking stack) and -netdev socket (if you just want point-to-point
> tunneling).
>
> The veth driver isn't suitable for QEMU's use case.  QEMU is a userspace
> process that wants to inject/extract Ethernet frames.  That's exactly
> what the tun (tap) driver does.  veth is useful for containers where you
> want a Linux network interface that is handled by the host network stack.
>
> Two solutions for point-to-point:
>
> 1. Run two guests with -netdev tap.  Put the interfaces on a software
>    bridge (see brctl(8)).  Or you could also use IP forwarding instead
>    of a bridge if you like.
>
> 2. Run two guests with -netdev socket.  They send Ethernet frames
>    directly to each other.
>
> See the qemu man page for configuration details.
>
> Stefan
>

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-20 15:48   ` Tim Epkes
@ 2013-09-20 17:26     ` Stefan Hajnoczi
  2013-09-20 20:03       ` Tim Epkes
  2013-09-24 19:55       ` Tim Epkes
  0 siblings, 2 replies; 8+ messages in thread
From: Stefan Hajnoczi @ 2013-09-20 17:26 UTC (permalink / raw)
  To: Tim Epkes; +Cc: qemu-devel

On Fri, Sep 20, 2013 at 11:48:50AM -0400, Tim Epkes wrote:
> The problem I face is that a bridge in the middle (using taps) breaks
> LLDP (the L2 discovery protocol) and should break IS-IS as well.  Sockets
> aren't bad, but if for some reason I take the listener VM down and bring
> it back up, then I have to bring down all the connector VMs, which chains
> on itself if there is a lot of connectivity defined.  When UDP was
> available that wasn't an issue.

I just checked linux.git but this patch has not been applied (although
it's trivial if you're willing to rebuild your kernel from source):
http://comments.gmane.org/gmane.linux.network/208908

It sounds like improving net/socket.c might be the right place to look.

> Anyway, that is how I came to VETHs.  I am aware that you can use UDP
> multicast, but performance past one hop is extremely poor (resulting in 3
> of 5 pings being lost).  Thanks

Unfortunately the veth driver does not hand Ethernet frames to/from
userspace.  We really need something tap-like where userspace can
inject/extract packets.

Stefan

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-20 17:26     ` Stefan Hajnoczi
@ 2013-09-20 20:03       ` Tim Epkes
  2013-09-24 19:55       ` Tim Epkes
  1 sibling, 0 replies; 8+ messages in thread
From: Tim Epkes @ 2013-09-20 20:03 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

Agreed on all points.  Thanks for the information.  I'll give the patch a try.

Tim

Sent from my iPhone

On Sep 20, 2013, at 1:26 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Fri, Sep 20, 2013 at 11:48:50AM -0400, Tim Epkes wrote:
>> The problem I face is that a bridge in the middle (using taps) breaks
>> LLDP (the L2 discovery protocol) and should break IS-IS as well.  Sockets
>> aren't bad, but if for some reason I take the listener VM down and bring
>> it back up, then I have to bring down all the connector VMs, which chains
>> on itself if there is a lot of connectivity defined.  When UDP was
>> available that wasn't an issue.
> 
> I just checked linux.git but this patch has not been applied (although
> it's trivial if you're willing to rebuild your kernel from source):
> http://comments.gmane.org/gmane.linux.network/208908
> 
> It sounds like improving net/socket.c might be the right place to look.
> 
>> Anyway, that is how I came to VETHs.  I am aware that you can use UDP
>> multicast, but performance past one hop is extremely poor (resulting in 3
>> of 5 pings being lost).  Thanks
> 
> Unfortunately the veth driver does not hand Ethernet frames to/from
> userspace.  We really need something tap-like where userspace can
> inject/extract packets.
> 
> Stefan

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-20 17:26     ` Stefan Hajnoczi
  2013-09-20 20:03       ` Tim Epkes
@ 2013-09-24 19:55       ` Tim Epkes
  2013-09-25  8:10         ` Stefan Hajnoczi
  1 sibling, 1 reply; 8+ messages in thread
From: Tim Epkes @ 2013-09-24 19:55 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

Stefan,

I just tested the most recent version of qemu (1.4).  I have two virtual
images back to back via a TCP socket.  They can ping each other, but when I
take the listener down and bring it back up (meaning killing the KVM process
and relaunching it), they cannot ping anymore.  The connector never retries
the connection.  This is the most viable solution for the LLDP and IS-IS
issues.
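
In case it helps, the two invocations are roughly of this form (the port
number is only an example):

  # listener VM
  qemu-system-x86_64 ... -netdev socket,id=net0,listen=:5555 \
                         -device virtio-net-pci,netdev=net0
  # connector VM
  qemu-system-x86_64 ... -netdev socket,id=net0,connect=127.0.0.1:5555 \
                         -device virtio-net-pci,netdev=net0
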
 Thanks

Tim


On Fri, Sep 20, 2013 at 1:26 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Fri, Sep 20, 2013 at 11:48:50AM -0400, Tim Epkes wrote:
> > The problem I face is that a bridge in the middle (using taps) breaks
> > LLDP (the L2 discovery protocol) and should break IS-IS as well.  Sockets
> > aren't bad, but if for some reason I take the listener VM down and bring
> > it back up, then I have to bring down all the connector VMs, which chains
> > on itself if there is a lot of connectivity defined.  When UDP was
> > available that wasn't an issue.
>
> I just checked linux.git but this patch has not been applied (although
> it's trivial if you're willing to rebuild your kernel from source):
> http://comments.gmane.org/gmane.linux.network/208908
>
> It sounds like improving net/socket.c might be the right place to look.
>
> > Anyway, that is how I came to VETHs.  I am aware that you can use UDP
> > multicast, but performance past one hop is extremely poor (resulting in 3
> > of 5 pings being lost).  Thanks
>
> Unfortunately the veth driver does not hand Ethernet frames to/from
> userspace.  We really need something tap-like where userspace can
> inject/extract packets.
>
> Stefan
>

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-24 19:55       ` Tim Epkes
@ 2013-09-25  8:10         ` Stefan Hajnoczi
  2013-09-25 11:35           ` Tim Epkes
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2013-09-25  8:10 UTC (permalink / raw)
  To: Tim Epkes; +Cc: qemu-devel

On Tue, Sep 24, 2013 at 03:55:04PM -0400, Tim Epkes wrote:
> I just tested the most recent version of qemu (1.4).  I have two virtual
> images back to back via a TCP socket.  They can ping each other, but when I
> take the listener down and bring it back up (meaning killing the KVM process
> and relaunching it), they cannot ping anymore.  The connector never retries
> the connection.  This is the most viable solution for the LLDP and IS-IS
> issues.

If the VDE bridge forwards LLDP then that might be your best bet:
http://vde.sourceforge.net/

The VDE bridge process stays alive.  VMs can come and go.
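
Something along these lines, assuming a QEMU built with VDE support
(the socket path and device model are just examples):

  # start a persistent VDE switch on the host
  vde_switch -sock /tmp/vde0.ctl -daemon

  # attach each guest to the switch
  qemu-system-x86_64 ... -netdev vde,id=net0,sock=/tmp/vde0.ctl \
                         -device virtio-net-pci,netdev=net0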

Or if you're able and willing to modify net/socket.c you could add
reconnect logic.

Stefan

* Re: [Qemu-devel] Qemu-KVM VETH
  2013-09-25  8:10         ` Stefan Hajnoczi
@ 2013-09-25 11:35           ` Tim Epkes
  0 siblings, 0 replies; 8+ messages in thread
From: Tim Epkes @ 2013-09-25 11:35 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

Well, it's been a long time since I have coded anything in C and time is a
factor (I just don't have any to spare), so I'll have to figure something
else out.  The modifications would be more than net/socket.c: the client
connects to the TCP server/listener, and when the listener goes away there
is nothing in TCP to tell the client it is gone.  There would have to be a
keepalive mechanism in both the client and the server to know when to tear
down a connection, and then another function to handle connection
re-establishment and retries.  This is why the UDP method of doing
connections in earlier versions of qemu worked so well, and I wish we would
bring it back.  It didn't worry about state.  Thanks anyway.

Tim


On Wed, Sep 25, 2013 at 4:10 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Tue, Sep 24, 2013 at 03:55:04PM -0400, Tim Epkes wrote:
> > I just tested the most recent version of qemu (1.4).  I have two virtual
> > images back to back via a TCP socket.  They can ping each other, but when I
> > take the listener down and bring it back up (meaning killing the KVM process
> > and relaunching it), they cannot ping anymore.  The connector never retries
> > the connection.  This is the most viable solution for the LLDP and IS-IS
> > issues.
>
> If the VDE bridge forwards LLDP then that might be your best bet:
> http://vde.sourceforge.net/
>
> The VDE bridge process stays alive.  VMs can come and go.
>
> Or if you're able and willing to modify net/socket.c you could add
> reconnect logic.
>
> Stefan
>
