Date: Fri, 20 May 2011 10:23:58 +0300
From: Gleb Natapov
To: Jan Kiszka
Cc: Avi Kivity, qemu-devel
Subject: Re: [Qemu-devel] [RFC] Memory API
Message-ID: <20110520072358.GL27310@redhat.com>
In-Reply-To: <4DD567B5.1060205@web.de>

On Thu, May 19, 2011 at 08:55:49PM +0200, Jan Kiszka wrote:
> >>>> Because we should catch accidental overlaps in all those non-PCI devices
> >>>> with hard-wired addressing. That's a bug in the device/machine model and
> >>>> should be reported as such by QEMU.
> >>>
> >>> Why should we complicate the API to catch unlikely errors? If you want to
> >>> debug that, add a capability to dump the memory map from the monitor.
> >>
> >> Because we need to switch tons of code that so far saw a fairly
> >> different reaction of the core to overlapping regions.
> >>
> > How so? Today, if there is an accidental overlap, the device will not
> > function properly. With the new API it will be the same.
>
> I rather expect subtle differences, as overlapping registration changes
> existing regions; in the future those will recover.
>
Where do you expect the differences to come from? Conversion to the new API
shouldn't change the order of registration, and if the last registration
overrides the previous one, the end result should be the same as we have
today.

> >>>>>> new region management will not cause any harm to overlapping regions so
> >>>>>> that they can "recover" when the overlap is gone.
> >>>>>>
> >>>>>>>
> >>>>>>> Another example may be the APIC region and PCI. They overlap, but
> >>>>>>> neither the CPU nor PCI knows about it.
> >>>>>>
> >>>>>> And they do not need to. The APIC regions will be managed by the per-CPU
> >>>>>> region management, reusing the tool box we need for all bridges. It will
> >>>>>> register the APIC page with a priority higher than the default one, thus
> >>>>>> overriding everything that comes from the host bridge. I think that
> >>>>>> reflects real machine behaviour pretty well.
> >>>>>>
> >>>>> What is "higher"? How does it know that the priority is high enough?
> >>>>
> >>>> Because no one else manages priorities at a specific hierarchy level.
> >>>> There is only one.
> >>>>
> >>> PCI and the CPU are on different hierarchy levels. PCI is under the PIIX
> >>> and the CPU is on the system bus.
> >>
> >> The priority for the APIC mapping will be applied at CPU level, of
> >> course. So it will override everything, not just PCI.
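
For illustration only, a minimal sketch of what that per-CPU override could
look like, assuming the registration calls from the RFC (memory_region_init,
memory_region_init_io, memory_region_add_subregion_overlap) survive roughly
as proposed. All other names here (cpu_view, apic_ops, apic_state,
system_memory) are made up for the example:

    /* Sketch, not a patch: the per-CPU container registers the APIC page
     * with a higher priority than the rest of the address space, so it
     * wins over whatever the host bridge maps at 0xfee00000.  Signatures
     * are assumed from the RFC and may still change. */
    MemoryRegion cpu_view;     /* what this CPU actually sees */
    MemoryRegion apic_mmio;    /* the 4K per-CPU APIC page    */

    memory_region_init(&cpu_view, "cpu-view", UINT64_MAX);

    /* Everything coming from the host bridge, at the default priority 0. */
    memory_region_add_subregion_overlap(&cpu_view, 0, system_memory, 0);

    /* The APIC page wins simply by being registered at the same (CPU)
     * level with priority 1 -- no global priority scheme involved. */
    memory_region_init_io(&apic_mmio, &apic_ops, apic_state, "apic", 0x1000);
    memory_region_add_subregion_overlap(&cpu_view, 0xfee00000, &apic_mmio, 1);

Whether an explicit priority argument is needed at all is exactly the
question in the rest of this exchange.
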
> > So you do not need explicit priority because the place in the hierarchy
> > implicitly provides you with one.
>
> Yes.
>
OK :) So you agree that we can do without priorities :)

> Alternatively, you could add a prio offset to all mappings when
> climbing one level up, provided that offset is smaller than the prio
> range locally available to each level.
>
Then a memory region's final priority will depend on the height of the tree.
If two disjoint tree branches of different heights claim the same memory
region, the taller one will have the higher priority. I think that kind of
priority management is a can of worms.

Only the lowest level (aka the system bus) will use the memory API directly.
A PCI device will call into the PCI subsystem. The PCI subsystem, instead of
assigning arbitrary priorities to all overlaps, may just resolve them and
pass a flattened view to the chipset. The chipset in turn will look for
overlaps between PCI memory areas and the RAM/ISA/other memory areas that lie
outside the PCI windows, resolve all of those, and pass the flattened view on
to the system bus, where the APIC/PCI conflict is resolved and the memory API
is finally used to create the memory map.

In such a model I do not see the need for priorities. Every overlap is
resolved in the most logical place, the one with the best knowledge of how to
resolve the conflict. There will be no code duplication either: the overlap
resolution code will live in a separate library used by all layers (see the
sketch below).

--
			Gleb.
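
To make the "separate library" idea concrete, here is a minimal,
self-contained sketch of overlap resolution by flattening. It is not part of
the proposed memory API; every name in it (FlatRange, flatview_add, ...) is
invented for the example. A later registration overrides whatever it
overlaps, matching the "last one wins" behaviour relied on today:

    #include <stddef.h>
    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    /* One non-overlapping piece of the flattened view. */
    typedef struct FlatRange {
        uint64_t start, end;   /* [start, end), bus addresses           */
        void *owner;           /* device/region that wins in this piece */
    } FlatRange;

    /*
     * Merge the registration [start, end) owned by 'owner' into the already
     * flattened, sorted view in[0..n).  The result goes to out[] (room for
     * at most n + 2 entries is enough) and the new length is returned.
     */
    static size_t flatview_add(const FlatRange *in, size_t n, FlatRange *out,
                               uint64_t start, uint64_t end, void *owner)
    {
        size_t m = 0;
        int inserted = 0;

        for (size_t i = 0; i < n; i++) {
            const FlatRange *r = &in[i];

            /* Keep the part of the old range left of the new one. */
            if (r->start < start) {
                out[m++] = (FlatRange){ r->start, MIN(r->end, start), r->owner };
            }
            /* The new range goes in front of the first old range that
             * extends past its start, keeping the output sorted. */
            if (!inserted && r->end > start) {
                out[m++] = (FlatRange){ start, end, owner };
                inserted = 1;
            }
            /* Keep the part of the old range right of the new one. */
            if (r->end > end) {
                out[m++] = (FlatRange){ MAX(r->start, end), r->end, r->owner };
            }
        }
        if (!inserted) {
            out[m++] = (FlatRange){ start, end, owner };
        }
        return m;
    }

The PCI subsystem could run its BAR mappings through something like this and
hand the resulting list to the chipset; the chipset merges in RAM/ISA and the
PCI windows and hands the result to the system bus, and only that final,
already conflict-free list is fed to the memory API -- no priorities
anywhere.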