Date: Thu, 18 Nov 2010 18:04:56 +0200
From: "Michael S. Tsirkin"
Subject: Re: [Qemu-devel] Re: [PATCH] spice: add qxl device
Message-ID: <20101118160456.GA11273@redhat.com>
References: <20101118093026.GG16832@redhat.com> <20101118095751.GA7948@redhat.com> <20101118113308.GC31261@redhat.com> <20101118115529.GJ7948@redhat.com> <20101118120311.GA31987@redhat.com> <20101118122726.GM7948@redhat.com> <20101118140414.GE8247@redhat.com> <20101118145755.GT7948@redhat.com> <20101118152548.GA10273@redhat.com> <20101118154241.GX7948@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20101118154241.GX7948@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Gleb Natapov
Cc: Gerd Hoffmann, qemu-devel@nongnu.org

On Thu, Nov 18, 2010 at 05:42:41PM +0200, Gleb Natapov wrote:
> On Thu, Nov 18, 2010 at 05:25:48PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 18, 2010 at 04:57:55PM +0200, Gleb Natapov wrote:
> > > On Thu, Nov 18, 2010 at 04:04:14PM +0200, Michael S. Tsirkin wrote:
> > > > > > What do you want to know?
> > > > > How it claims access to the framebuffer. Legacy VGA has not only IO space
> > > > > but MMIO space too.
> > > >
> > > > There's a separate bit to enable memory, if that is what you
> > > > are asking about.
> > > >
> > > > The spec specifies the address ranges:
> > > >
> > > > Base class 03. Sub-class 00. Interface 000 0000b:
> > > >
> > > > VGA-compatible controller. Memory addresses 0A 0000h through 0B FFFFh.
> > > > I/O addresses 3B0h to 3BBh and 3C0h to 3DFh and all aliases of these
> > > > addresses.
> > > >
> > > So the MMIO space is also not configurable? In short, you can't insert two
> > > VGA cards in two PCI slots and use both simultaneously.
> >
> > I think so, yes. But you can switch between them by disabling
> > one and enabling another one.
> >
> Sure. At init time.
> > > > > > > > > This wouldn't be backwards
> > > > > > > > > compatible to ISA machines, so old software may not run properly back in
> > > > > > > > > the days when the transition from ISA to PCI happened.
> > > > > > > >
> > > > > > > > Initialization software could be the BIOS.
> > > > > > > > So maybe a BIOS update was needed in the transition.
> > > > > > >
> > > > > > > That is possible.
> > > > > > >
> > > > > > > > > So my guess is that
> > > > > > > > > old ISA ports work in a backwards compatible way.
> > > > > > > >
> > > > > > > > The spec seems to contradict this.
> > > > > > > >
> > > > > > > > > > When qemu is started, it works correctly: the io memory is disabled and the card does
> > > > > > > > > > not claim any io. Then the BIOS comes along and enables io. At this point
> > > > > > > > > > the map callback is invoked and maps io memory, and the card starts claiming io.
> > > > > > > > > Looking at the code I see that cirrus claims all IO ports and
> > > > > > > > > framebuffer memory during its init function, unconditionally.
> > > > > > > >
> > > > > > > > So that may be OK for ISA, but not for PCI.
> > > > > > >
> > > > > > > The code does it for both.
> > > > > >
> > > > > > Yep. So it's a bug.
> > > > > >
> > > > > > > > > > What is broken is that if the BIOS/guest then disables IO memory,
> > > > > > > > > > (I think - even if the guest is rebooted!) we will keep claiming IO transactions.
> > > > > > > > > > That our emulation does this seems to be a clear spec violation; we are
> > > > > > > > > > just lucky that the BIOS/guest does not do this at the moment.
> > > > > > > > > > > > >
> > > > > > > > > > > > > So what will "fixing" this buy us?
> > > > > > > > > > > >
> > > > > > > > > > > > Besides spec compliance, you mean? The ability to support multiple VGA
> > > > > > > > > > > > cards. That's how it works, I think: the BIOS enables IO on the primary
> > > > > > > > > > > > VGA device only.
> > > > > > > > > > >
> > > > > > > > > > > What spec defines hot-plug for the primary VGA adapter?
> > > > > > > > > >
> > > > > > > > > > No idea about hotplug. I am talking about multiple VGA cards;
> > > > > > > > > > enabling/disabling them dynamically should be possible.
> > > > > > > > > Of course. With a properly designed VGA card you should be able to have
> > > > > > > > > more than one,
> > > > > > > >
> > > > > > > > And, for that to have a chance to work when all cards are identical, you
> > > > > > > > don't claim IO when IO is disabled.
> > > > > > >
> > > > > > > But then only one card will be able to use IO, since enabling IO on more
> > > > > > > than one card will cause a conflict.
> > > > > >
> > > > > > Sure. That's life for legacy io though.
> > > > >
> > > > > But that is the point. You can't have two regular VGA cards
> > > > > simultaneously.
> > > >
> > > > You can't *enable* them simultaneously. The fact that we can't create
> > > > them in qemu is a bug.
> > > You can insert two of them on real HW too.
> >
> > Yes. You can insert any number of VGA cards on a PCI bus;
> > the BIOS can configure one and disable the rest.
> >
> The BIOS can configure one as legacy. And all the others can be happily used by
> the guest OS with proper drivers.

Maybe. There's no standard way to disable 'legacy mode' though.
So a card might or might not keep claiming the legacy ranges
(assuming io/memory is enabled). Maybe if you want to use
multiple cards, all of them need drivers.

> > > > > > > > > but one of them will provide legacy functionality
> > > > > > > > > and is not removable.
> > > > > > > >
> > > > > > > > The guest might not support hotplug. But there's no way
> > > > > > > > it can prevent surprise removal. qemu should not crash
> > > > > > > > when this happens.
> > > > > > > Qemu can prevent any removal, surprise or not. Qemu can just
> > > > > > > disallow device removal.
> > > > > >
> > > > > > Yes, but that won't emulate real hardware faithfully.
> > > > > To the letter. There is no HW with a hot-unpluggable primary
> > > > > vga card. You are welcome to surprise-remove the vga card from your
> > > > > machine and see what will happen.
> > > >
> > > > How is this different from removing any other card while the hotplug module
> > > > is unloaded in the OS? The OS might crash, but so what? You can always reboot
> > > > it, hardware won't be damaged, so qemu shouldn't crash either.
> > > If the guest can't handle unplug there is no point in qemu providing it.
> >
> > Sure there is. Giving the guest access to a backend
> > has security implications. We must have a way to revoke that
> > access even if the guest misbehaves.
> >
> I am not sure what you mean by "giving guest access to backend".
> The guest shouldn't be able to surprise-remove the VGA card, or any card at all
> for that matter.

WHQL includes surprise removal tests. So any card that passed
those will work with surprise removal.

> > I expect surprise removal to be of most use for
> > assigned devices. But even for emulated devices, we have a small
> > number of slots available, so it would still be useful to free up the PCI slot,
> > even if the guest then needs to be rebooted.
> >
> We are talking about whether we should require the primary VGA to be
> hot-unpluggable. The last thing you want to remove to free PCI slots is
> the primary VGA card, especially if no guest OS can handle it ;)

What I am saying is that we need surprise removal generally. It won't
be too bad if we make these commands fail for VGA.

> > > You can have a development mobo where the chipset can be unplugged. Should we
> > > allow hot-unplugging the chipset?
> >
> > If it's useful, we could :)
> On real HW you used to have the memory controller there :)

> > > > > > On real hardware, with a hotplug-supporting slot
> > > > > > (and without an EM lock :) ) you can yank the card out
> > > > > > and the guest can do nothing about it.
> > > > > >
> > > > > And you will not find the primary vga there.
> > > >
> > > > Where? The PCI spec explicitly allows VGA cards behind expansion slots;
> > > > stick it there, and you can remove it too.
> > > >
> > > You can't remove a PCI card from just any PCI slot IIRC. The slot and the add-on
> > > card should be specifically designed to be hot-unpluggable.
> > > (Something
> > > to do with when the power and data lines are disconnected during unplug
> > > IIRC.) Not all things you can yank from a mobo while the OS is running are
> > > "unpluggable" :)
> >
> > That's true for PCI/PCI-X. I didn't check PCI Express; maybe there all cards
> > are hotpluggable. Do you know?
> >
> Nope, no idea. On PCI Express you have far fewer lines to care about
> though.

> --
> Gleb.