Date: Thu, 19 May 2011 21:22:03 +0300
From: Gleb Natapov
Message-ID: <20110519182203.GH27310@redhat.com>
References: <4DD3C5B9.1080908@redhat.com> <4DD420A5.2020606@web.de> <4DD51CF3.5040306@codemonkey.ws> <4DD51D36.7040504@siemens.com> <20110519173923.GF27310@redhat.com> <4DD55D5B.30008@web.de>
In-Reply-To: <4DD55D5B.30008@web.de>
Subject: Re: [Qemu-devel] [RFC] Memory API
To: Jan Kiszka
Cc: Avi Kivity, qemu-devel

On Thu, May 19, 2011 at 08:11:39PM +0200, Jan Kiszka wrote:
> On 2011-05-19 19:39, Gleb Natapov wrote:
> > On Thu, May 19, 2011 at 03:37:58PM +0200, Jan Kiszka wrote:
> >> On 2011-05-19 15:36, Anthony Liguori wrote:
> >>> On 05/18/2011 02:40 PM, Jan Kiszka wrote:
> >>>> On 2011-05-18 15:12, Avi Kivity wrote:
> >>>>> void cpu_register_memory_region(MemoryRegion *mr,
> >>>>>                                 target_phys_addr_t addr);
> >>>>
> >>>> OK, let's allow overlapping, but make it explicit:
> >>>>
> >>>> void cpu_register_memory_region_overlap(MemoryRegion *mr,
> >>>>                                         target_phys_addr_t addr,
> >>>>                                         int priority);
> >>>
> >>> The device doesn't actually know how overlapping is handled. This is
> >>> based on the bus hierarchy.
> >>
> >> Devices don't register their regions, buses do.
> >>
> > Today a PCI device may register a region that overlaps any other
> > registered memory region without even knowing it. The guest can write
> > any RAM address into a PCI BAR, and that RAM address becomes an MMIO
> > area. More interesting is what happens when the guest reprograms the
> > PCI BAR to another address - the RAM that was at the previous address
> > just disappears. Obviously this is crazy behaviour, but the question
> > is how do we want to handle it? One option is to disallow such
> > overlapping registration; another is to restore the RAM mapping after
> > the PCI BAR is reprogrammed. If we choose the second one, PCI will
> > not know that _overlap() should be called.
>
> BARs may overlap with other BARs or with RAM. That's well-known, so PCI
> bridges need to register their regions with the _overlap variant
> unconditionally. In contrast to the current PhysPageDesc mechanism, the

With what priority? If it needs to call _overlap unconditionally, why not
always call _overlap and drop the non-_overlap variant?

> new region management will not cause any harm to overlapping regions so
> that they can "recover" when the overlap is gone.
>
> >
> > Another example may be the APIC region and PCI. They overlap, but
> > neither the CPU nor PCI knows about it.
>
> And they do not need to. The APIC regions will be managed by the per-CPU
> region management, reusing the tool box we need for all bridges. It will
> register the APIC page with a priority higher than the default one, thus
> overriding everything that comes from the host bridge. I think that
> reflects pretty well real machine behaviour.
>
What is "higher"? How does it know that the priority is high enough? I
thought, from reading other replies, that priorities are meaningful only
on the same hierarchy level (which kinda makes sense), but now you are
saying that you will override a PCI address from another part of the
topology?

--
			Gleb.