Date: Fri, 20 May 2011 15:08:41 +0300
From: Gleb Natapov
Subject: Re: [Qemu-devel] [RFC] Memory API
To: Avi Kivity
Cc: Jan Kiszka, qemu-devel
Message-ID: <20110520120841.GO27310@redhat.com>
In-Reply-To: <4DD62FFE.8020103@redhat.com>
References: <4DD3C5B9.1080908@redhat.com> <4DD420A5.2020606@web.de>
 <4DD51CF3.5040306@codemonkey.ws> <4DD51D36.7040504@siemens.com>
 <20110519173923.GF27310@redhat.com> <4DD55D5B.30008@web.de>
 <20110519182203.GH27310@redhat.com> <4DD62FFE.8020103@redhat.com>

On Fri, May 20, 2011 at 12:10:22PM +0300, Avi Kivity wrote:
> On 05/19/2011 09:22 PM, Gleb Natapov wrote:
> >>
> >> BARs may overlap with other BARs or with RAM. That's well-known, so PCI
> >> bridges need to register their regions with the _overlap variant
> >> unconditionally. In contrast to the current PhysPageDesc mechanism, the
> > With what priority?
>
> It doesn't matter, since the spec doesn't define priorities among PCI BARs.
>
And between a PCI BAR and memory (the case the question above referred to).

> > If it needs to call _overlap unconditionally, why not always call
> > _overlap and drop the non-_overlap variant?
>
> Other uses need non-overlapping registration.
>
And who prohibits them from creating one?

> >>
> >> And they do not need to. The APIC regions will be managed by the per-CPU
> >> region management, reusing the toolbox we need for all bridges. It will
> >> register the APIC page with a priority higher than the default one, thus
> >> overriding everything that comes from the host bridge. I think that
> >> reflects real machine behaviour pretty well.
> >>
> > What is "higher"? How does it know that the priority is high enough?
>
> It is well known that 1 > 0, for example.
>
That is, if you have a global scale. In the case I am asking about you do
not. Even if PCI registers a memory region that overlaps the APIC address
with priority 1000, the APIC memory region should still be able to override
it even with priority 0. Voila, 1000 < 0? Where is your sarcasm now? :)
But Jan already answered this one. Actually, what really matters is the
place of the node in the topology, not the priority. But then, for all of
this to make sense, registration has to be hierarchical.

> > I thought, from reading other replies, that priorities are meaningful
> > only on the same hierarchy level (which kinda makes sense), but now you
> > are saying that you will override a PCI address from another part of
> > the topology?
>
>  -- per-cpu memory
>    |
>    +--- apic page (prio 1)
>    |
>    +--- global memory (prio 0)
>
> --
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.

--
			Gleb.
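
[Editorial sketch, not part of the thread and not QEMU code: a toy model of
the hierarchical registration being discussed, under the assumption that
priorities only order siblings within one container. The names
region_add_subregion_overlap, per_cpu, global, apic and bar are invented for
this illustration. The point it demonstrates is the one Gleb and Avi converge
on above: a PCI BAR registered deep inside "global memory" with priority 1000
never competes with the APIC page registered one level up with priority 1.]

/*
 * Toy model of hierarchical region registration with per-level priorities.
 * Priorities are compared between siblings of the same container only; the
 * place of a node in the topology, not its raw priority, decides conflicts
 * across levels.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct Region Region;
struct Region {
    const char *name;
    uint64_t    offset;       /* offset within the parent container */
    uint64_t    size;
    int         priority;     /* compared against siblings only */
    Region     *children[8];
    int         nchildren;
};

/* _overlap-style registration: children may overlap, priority decides. */
static void region_add_subregion_overlap(Region *parent, Region *child,
                                         uint64_t offset, int priority)
{
    child->offset = offset;
    child->priority = priority;
    parent->children[parent->nchildren++] = child;
}

/* Resolve an address: pick the highest-priority child covering it at this
 * level, then recurse into that child; fall back to the container itself. */
static const Region *region_resolve(const Region *r, uint64_t addr)
{
    const Region *best = NULL;

    for (int i = 0; i < r->nchildren; i++) {
        const Region *c = r->children[i];
        if (addr >= c->offset && addr < c->offset + c->size &&
            (!best || c->priority > best->priority)) {
            best = c;
        }
    }
    return best ? region_resolve(best, addr - best->offset) : r;
}

int main(void)
{
    Region per_cpu = { "per-cpu memory", 0, 1ULL << 32 };
    Region global  = { "global memory",  0, 1ULL << 32 };
    Region apic    = { "apic page",      0, 0x1000 };
    Region bar     = { "pci bar",        0, 0x1000 };

    /* A PCI BAR claims 0xfee00000 inside global memory with priority 1000. */
    region_add_subregion_overlap(&global, &bar, 0xfee00000, 1000);

    /* Per-CPU level: global memory at prio 0, APIC page on top at prio 1. */
    region_add_subregion_overlap(&per_cpu, &global, 0, 0);
    region_add_subregion_overlap(&per_cpu, &apic, 0xfee00000, 1);

    /* Prints "0xfee00000 -> apic page": the BAR's 1000 never competes. */
    printf("0xfee00000 -> %s\n", region_resolve(&per_cpu, 0xfee00000)->name);
    return 0;
}

[The registration call shape here is close to the
memory_region_add_subregion_overlap(parent, offset, child, priority) form
that QEMU's memory API eventually adopted.]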