From: Cam Macdonell
Subject: Re: [PATCH] Add shared memory PCI device that shares a memory object between VMs
Date: Thu, 23 Apr 2009 10:28:00 -0600
Message-ID: <49F09710.5000405@cs.ualberta.ca>
References: <1238600608-9120-1-git-send-email-cam@cs.ualberta.ca>
 <49D3965C.1030503@codemonkey.ws> <49D3AD79.7080708@redhat.com>
 <49D3B7ED.4030303@codemonkey.ws> <49EAFC44.9080809@redhat.com>
 <49EF9D31.4030605@cs.ualberta.ca>
To: subbu kl
Cc: kvm@vger.kernel.org

subbu kl wrote:
> Cam,
>
> just a wild thought about an alternative approach.

Ideas are always good.

> Once a specific address range of one guest is visible to the other
> guest, it is just a matter of a DMA or a single memcpy to transfer
> the data across.

My idea is to eliminate unnecessary copying.  This introduces one.

> Usually non-transparent PCIe bridges (NTBs) are used for
> inter-processor data communication.  A physical PCIe NTB between two
> processors just sets up a PCIe data channel with some address
> translation.
>
> So I was just wondering: if we can write this non-transparent bridge
> (a qemu PCI device) with address translation capability, then guests
> can just mmap and start accessing each other's memory :)

I think your concept is similar to what Anthony suggested: using virtio
to export and import other VMs' memory.  However, RAM and shared memory
are not the same thing, and having one guest access another guest's RAM
could confuse that guest.  With the approach of mapping a BAR, the
shared memory is separate from the guest RAM but can still be mapped by
guest processes.

Cam

> ~subbu
>
> On Thu, Apr 23, 2009 at 4:11 AM, Cam Macdonell wrote:
>
>     subbu kl wrote:
>
>         correct me if wrong,
>         can we do the sharing business by writing a non-transparent
>         qemu PCI device in the host so that guests can access each
>         other's address space?
>
>     Hi Subbu,
>
>     I'm a bit confused by your question.  Are you asking how this
>     device works or suggesting an alternative approach?  I'm not sure
>     what you mean by a non-transparent qemu device.
>
>     Cam
>
>         ~subbu
>
>         On Sun, Apr 19, 2009 at 3:56 PM, Avi Kivity wrote:
>
>             Cameron Macdonell wrote:
>
>                 Hi Avi and Anthony,
>
>                 Sorry for the top-reply, but we haven't discussed this
>                 aspect here before.
>
>                 I've been thinking about how to implement interrupts.
>                 As far as I can tell, unix domain sockets in Qemu/KVM
>                 are used point-to-point with one VM being the server
>                 by specifying "server" along with the unix: option.
>                 This works simply for two VMs, but I'm unsure how this
>                 can extend to multiple VMs.  How would a server VM
>                 know how many clients to wait for?  How can messages
>                 then be multicast or broadcast?  Is a separate
>                 "interrupt server" necessary?
>
>             I don't think unix provides a reliable multicast RPC.  So
>             yes, an interrupt server seems necessary.
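
To make that concrete, below is a rough, untested sketch of one way
such an interrupt server could look: each qemu instance connects over
a unix domain socket, and any interrupt message a VM writes is
forwarded to every other connected VM.  The socket path and the
one-byte message format are only illustrative, not what the patch
implements.

/*
 * Rough sketch of an "interrupt server" (untested; error handling and
 * limits trimmed).  Each qemu instance connects to SHM_SOCK_PATH; any
 * byte a client writes is forwarded to every other client, which the
 * shared memory device model could treat as "raise an interrupt".
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SHM_SOCK_PATH "/tmp/ivshmem_socket"   /* illustrative path */
#define MAX_VMS 16

int main(void)
{
    struct pollfd fds[MAX_VMS + 1];
    struct sockaddr_un addr;
    int listener, nfds = 1;

    listener = socket(AF_UNIX, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SHM_SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SHM_SOCK_PATH);
    if (listener < 0 ||
        bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(listener, MAX_VMS) < 0) {
        perror("socket/bind/listen");
        return 1;
    }

    fds[0].fd = listener;
    fds[0].events = POLLIN;

    for (;;) {
        int i, j;

        poll(fds, nfds, -1);

        /* A new VM is connecting. */
        if ((fds[0].revents & POLLIN) && nfds < MAX_VMS + 1) {
            fds[nfds].fd = accept(listener, NULL, NULL);
            fds[nfds].events = POLLIN;
            fds[nfds].revents = 0;
            nfds++;
        }

        /* Forward each "interrupt" byte to all of the other VMs. */
        for (i = 1; i < nfds; i++) {
            char buf;

            if (!(fds[i].revents & POLLIN))
                continue;
            if (read(fds[i].fd, &buf, 1) <= 0) {  /* VM disconnected */
                close(fds[i].fd);
                nfds--;
                fds[i] = fds[nfds];
                i--;                /* re-check the slot we moved in */
                continue;
            }
            for (j = 1; j < nfds; j++)
                if (j != i)
                    write(fds[j].fd, &buf, 1);
        }
    }
}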
>             You could expand its role and make it a "shared memory
>             PCI card server", and have it also be responsible for
>             providing the backing file using an SCM_RIGHTS fd.  That
>             would reduce setup headaches for users (setting up a file
>             for which all VMs have permissions).
>
>             --
>             Do not meddle in the internals of kernels, for they are
>             subtle and quick to panic.
>
>         --
>         ~subbu
>
> --
> ~subbu
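
P.S.  For completeness, receiving the backing file descriptor that
such a server would pass with SCM_RIGHTS boils down to the usual
recvmsg() ancillary-data dance.  A rough, untested sketch (the helper
name is mine, not from the patch):

/*
 * Untested sketch: receive the shared memory backing fd that the
 * server passes over an already-connected unix domain socket using
 * SCM_RIGHTS.  Returns the fd, or -1 on error.  The device model can
 * then mmap() this fd and use the mapping to back the PCI BAR.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int recv_shm_fd(int sock)
{
    struct msghdr msg;
    struct iovec iov;
    struct cmsghdr *cmsg;
    char dummy;                              /* one byte of real data */
    char control[CMSG_SPACE(sizeof(int))];
    int fd = -1;

    memset(&msg, 0, sizeof(msg));
    memset(control, 0, sizeof(control));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET &&
            cmsg->cmsg_type == SCM_RIGHTS) {
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
            break;
        }
    }
    return fd;
}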