From: Eric Northup
Subject: Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest
Date: Thu, 25 Aug 2011 15:08:52 -0700
To: David Evensky
Cc: Sasha Levin, Stefan Hajnoczi, Pekka Enberg, Alexander Graf, kvm@vger.kernel.org, cam@cs.ualberta.ca
In-Reply-To: <20110825150806.GF24996@dancer.ca.sandia.gov>

Just FYI, one issue that I found with exposing host memory regions as
a PCI BAR (including via a very old version of the ivshmem driver...
haven't tried a newer one) is that x86's pci_mmap_page_range doesn't
want to set up a write-back cacheable mapping of a BAR. It may not
matter for your requirements, but the uncached access reduced
guest<->host bandwidth via the shared memory driver by a lot.

If you need the physical address to be fixed, you might be better off
reserving a memory region in the e820 map rather than using a PCI
BAR, since BARs can move around.
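Roughly what I mean by the e820 route, as a minimal sketch (the entry
layout follows the x86 boot protocol; the helper, address, and size
are made up for illustration, not kvm tool's actual code):

#include <stdint.h>

/* x86 boot-protocol e820 entry. */
struct e820entry {
	uint64_t addr;	/* start of segment */
	uint64_t size;	/* size of segment */
	uint32_t type;	/* E820_RAM, E820_RESERVED, ... */
} __attribute__((packed));

#define E820_RAM	1
#define E820_RESERVED	2

#define SHMEM_ADDR	0xc8000000ULL	/* example fixed guest-physical address */
#define SHMEM_SIZE	(16ULL << 20)	/* example 16 MB window */

/*
 * Hypothetical helper: append a reserved entry to the e820 map that
 * the tool hands to the guest, so the kernel never claims the window
 * as RAM and its address can't move the way a PCI BAR can.
 */
static void e820_reserve_shmem(struct e820entry *map, uint32_t *nr_entries)
{
	struct e820entry *e = &map[(*nr_entries)++];

	e->addr = SHMEM_ADDR;
	e->size = SHMEM_SIZE;
	e->type = E820_RESERVED;
}

A guest driver should then be able to ioremap_cache() that window and
get the write-back mapping the BAR path refuses.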
On Thu, Aug 25, 2011 at 8:08 AM, David Evensky wrote:
>
> Adding in the rest of what ivshmem does shouldn't affect our use, *I
> think*. I hadn't intended this to do everything that ivshmem does,
> but I can see how that would be useful. It would be cool if it could
> grow into that.
>
> Our requirements for the driver in kvm tool are that another program
> on the host can create a shared segment (anonymous, non-file backed)
> with a specified handle, size, and contents; that this segment is
> available to the guest at boot time at a specified address; and that
> no driver will change the contents of the memory except under direct
> user action. Also, when the guest goes away the shared memory segment
> shouldn't be affected (e.g. contents changed). Finally, we cannot
> change the lightweight nature of kvm tool.
>
> This is the feature of ivshmem that I need to check today. I did some
> testing a month ago, but it wasn't detailed enough to check this out.
>
> \dae
>
> On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> > On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg wrote:
> > > > Hi Stefan,
> > > >
> > > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi wrote:
> > > >>> It's obviously not competing. One thing you might want to
> > > >>> consider is making the guest interface compatible with ivshmem.
> > > >>> Is there any reason we shouldn't do that? I don't consider that
> > > >>> a requirement, just nice to have.
> > > >>
> > > >> The point of implementing the same interface as ivshmem is that
> > > >> users don't need to rejig guests or applications in order to
> > > >> switch between hypervisors. A different interface also prevents
> > > >> same-to-same benchmarks.
> > > >>
> > > >> There is little benefit to creating another virtual device
> > > >> interface when a perfectly good one already exists. The question
> > > >> should be: how is this shmem device different and better than
> > > >> ivshmem? If there is no justification then implement the ivshmem
> > > >> interface.
> > > >
> > > > So which interface are we actually talking about? Userspace/kernel
> > > > in the guest or hypervisor/guest kernel?
> > >
> > > The hardware interface. Same PCI BAR layout and semantics.
> > >
> > > > Either way, while it would be nice to share the interface, it's
> > > > not a *requirement* for tools/kvm unless ivshmem is specified in
> > > > the virtio spec or the driver is in mainline Linux. We don't
> > > > intend to require people to implement non-standard and non-Linux
> > > > QEMU interfaces. OTOH, ivshmem would make the PCI ID problem go
> > > > away.
> > >
> > > Introducing yet another non-standard and non-Linux interface doesn't
> > > help though. If there is no significant improvement over ivshmem
> > > then it makes sense to let ivshmem gain critical mass and more users
> > > instead of fragmenting the space.
> >
> > I support doing it ivshmem-compatible, though it doesn't have to be a
> > requirement right now (that is, use this patch as a base and build it
> > towards ivshmem - which shouldn't be an issue since this patch
> > provides the PCI+SHM parts which are required by ivshmem anyway).
> >
> > ivshmem is a good, documented, stable interface with a lot of
> > research and testing behind it. Looking at the spec it's obvious that
> > Cam had KVM in mind when designing it and that's exactly what we want
> > to have in the KVM tool.
> >
> > David, did you have any plans to extend it to become
> > ivshmem-compatible? If not, would turning it into such break any code
> > that depends on it horribly?
> >
> > --
> >
> > Sasha.
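Re David's host-side requirement above (another program creating the
segment with a given handle, size, and contents), a minimal sketch of
one way that could look, assuming the "handle" is a POSIX shared-memory
name (strictly a named tmpfs object rather than fully anonymous memory;
the name and size below are illustrative):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const char *handle = "/kvm_shmem";	/* illustrative segment name */
	const size_t size = 16 << 20;		/* illustrative 16 MB */

	/* Create (or open) the named segment and set its size. */
	int fd = shm_open(handle, O_CREAT | O_RDWR, 0600);
	if (fd < 0)
		return 1;
	if (ftruncate(fd, size) < 0)
		return 1;

	/* Fill in the initial contents; the hypervisor would mmap the
	 * same name and expose those pages to the guest. */
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, size);

	/* The segment outlives both this process and the guest; it only
	 * goes away on shm_unlink(), which matches the requirement that
	 * guest exit must not disturb the contents. */
	munmap(p, size);
	close(fd);
	return 0;
}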
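And for concreteness on "same PCI BAR layout and semantics": ivshmem
puts its registers in BAR0 and the shared memory itself in BAR2 (BAR1
carries the MSI-X table when interrupts are used). A sketch of the
BAR0 block as I read Cam's spec (the struct and field names are my
transcription, not code from either project):

#include <stdint.h>

/* ivshmem BAR0 registers, 32 bits each, per the ivshmem device spec. */
struct ivshmem_regs {
	uint32_t intr_mask;	/* 0x00: interrupt mask */
	uint32_t intr_status;	/* 0x04: interrupt status */
	uint32_t iv_position;	/* 0x08: read-only; this guest's peer ID */
	uint32_t doorbell;	/* 0x0c: write-only; (peer ID << 16) | vector */
};

Matching those four registers, even as stubs, would be the minimum for
guests to treat the device as ivshmem.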