From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cam Macdonell
Subject: Re: [PATCH v2] Shared memory device with interrupt support
Date: Mon, 18 May 2009 10:50:30 -0600
Message-ID: <4A1191D6.6090105@cs.ualberta.ca>
References: <3D9CB4061D1EB3408D4A0B910433453C030BCA8892@inbmail01.lsi.com>
 <5A459472-C496-49ED-93D8-0C4CC391F50A@cs.ualberta.ca>
 <4A1086E6.30405@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: "kvm@vger.kernel.org list" , Gregory Haskins
To: Avi Kivity
Return-path:
Received: from fleet.cs.ualberta.ca ([129.128.22.22]:48077 "EHLO fleet.cs.ualberta.ca" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751689AbZERQua (ORCPT ); Mon, 18 May 2009 12:50:30 -0400
In-Reply-To: <4A1086E6.30405@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Avi Kivity wrote:
> Cam Macdonell wrote:
>>>
>>> If my understanding is correct, both of the VMs that want to
>>> communicate would give this path on the command line, with one of
>>> them specified as "server".
>>
>> Exactly, the one with "server" in the parameter list will wait for
>> a connection before booting.
>
> Hm, we may be able to eliminate the server from the fast path, at the
> cost of some complexity.
>
> When a guest connects to the server, the server creates an eventfd and
> passes it using SCM_RIGHTS to all other connected guests. The server
> also passes the eventfds of the currently connected guests to the new
> guest. From then on, the server does not participate in anything; when
> a guest wants to send an interrupt to one or more other guests, its
> qemu just writes to the eventfds of the corresponding guests; their
> qemus will inject the interrupt, without any server involvement.
>
> Now, anyone who has been paying attention will have their alarms going
> off at the word eventfd. And yes, if the host supports irqfd, the
> various qemus can associate those eventfds with an irq and pretty much
> forget about them.
> When a qemu triggers an irqfd, the interrupt will be injected
> directly without the target qemu's involvement.
>
> I like it.

That certainly sounds like the right direction for a multi-VM setup.
I'm currently working on the shmem PCI card server discussed in the
first patch's thread to support broadcast and multicast, which will now
be simpler if qemu handles the *casting.

My usual noob questions: Do I need to run Greg's tree on the host for
the necessary irqfd/eventfd support? Are there any examples to work
from aside from Greg's unit tests?

Thanks,
Cam