From: Luke Gorrie <lukego@gmail.com>
Date: Tue, 4 Jun 2013 14:19:23 +0200
Subject: Re: [Qemu-devel] [snabb-devel:300] Re: snabbswitch integration with QEMU for userspace ethernet I/O
To: "snabb-devel@googlegroups.com"
Cc: Stefano Stabellini, "Michael S. Tsirkin", qemu-devel, julien.grall@citrix.com, Anthony Liguori, Julian Stecklina

Howdy,

My brain is slowly catching up with all of the information shared in this thread. Here is my first attempt to tease out a way forward for Snabb Switch.

The idea that excites me is to implement a complete PCI device in Snabb Switch and expose it to the guest at the basic PCI/MMIO/DMA level. The device would be a Virtio network adapter based on Rusty Russell's specification. The switch<->VM interface would be based on PCI rather than vhost.
I _think_ this is the basic idea that Stefano Stabellini and Julian Stecklina are talking about.

I like this because:

- The abstraction level is primarily PCI hardware devices (hardware) rather than system calls (kernel) as with vhost/socket/splice/etc. This is a much better fit for the Snabb Switch code, which is already doing physical network I/O with built-in drivers based on PCI MMIO/DMA. I invest my energy in learning more about PCI and Virtio rather than Linux and QEMU.

- The code feels more generic. The software we develop is a standard Virtio PCI network device rather than a specific QEMU-vhost interface. In principle (...) we could reuse the same code with more hypervisors in the future.

- The code that I am not well positioned to write myself - the hypervisor side - may already have been written/prototyped by others and be available for testing, even though it's not available in mainline QEMU.

I have some questions, if you don't mind:

1. Have I understood the idea correctly above? (Or what do I have wrong?)
2. Is this PCI integration available in some code base that I could test with? e.g. non-mainline QEMU, Xen, vbox, VMware, etc?
3. If I hack up a proof-of-concept, what is most likely to go wrong in an OpenStack context? I mean the "memory hotplug" and "track what is dirty" issues that are alluded to. Is my code going to run slowly? Drop packets? Break during migration? Crash VMs?

Long-term I do need a solution that works with standard mainline QEMU, but I could also start with something more custom and revisit the whole issue next year. The most important thing now is to start making forward progress and have something working and performant this summer/autumn.

Cheers & thanks for all the information,
-Luke