Date: Thu, 18 Nov 2010 17:25:48 +0200
From: "Michael S. Tsirkin"
Subject: Re: [Qemu-devel] Re: [PATCH] spice: add qxl device
Message-ID: <20101118152548.GA10273@redhat.com>
In-Reply-To: <20101118145755.GT7948@redhat.com>
To: Gleb Natapov
Cc: Gerd Hoffmann, qemu-devel@nongnu.org

On Thu, Nov 18, 2010 at 04:57:55PM +0200, Gleb Natapov wrote:
> On Thu, Nov 18, 2010 at 04:04:14PM +0200, Michael S. Tsirkin wrote:
> > > > What do you want to know?
> > > How it claims access to the framebuffer. Legacy VGA has not only IO space
> > > but MMIO space too.
> >
> > There's a separate bit to enable memory, if that is what you
> > are asking about.
> >
> > The spec specifies the address ranges:
> >
> > Base class 03. Sub-class 00. Interface 000 0000b:
> >
> > VGA-compatible controller. Memory addresses 0A 0000h through 0B FFFFh.
> > I/O addresses 3B0h to 3BBh and 3C0h to 3DFh and all aliases of these
> > addresses.
> >
> So MMIO space is also not configurable? In short, you can't insert two
> VGA cards in two PCI slots and use both simultaneously.

I think so, yes. But you can switch between them by disabling one and
enabling the other.

> > > > > > > This wouldn't be backwards
> > > > > > > compatible to ISA machines, so old software may not run properly back in
> > > > > > > the days when the transition from ISA to PCI happened.
> > > > > >
> > > > > > The initialization software could be the BIOS.
> > > > > > So maybe a BIOS update was needed in the transition.
> > > > >
> > > > > That is possible.
> > > > >
> > > > > > > So my guess is that
> > > > > > > old ISA ports work in a backwards-compatible way.
> > > > > >
> > > > > > The spec seems to contradict this.
> > > > > >
> > > > > > > > When qemu is started, it works correctly: the io memory is disabled and the card does
> > > > > > > > not claim any io. Then the BIOS comes along and enables io. At this point
> > > > > > > > the map callback is invoked and maps io memory, and the card starts claiming io.
> > > > > > > Looking at the code I see that cirrus claims all IO ports and
> > > > > > > framebuffer memory during its init function unconditionally.
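For reference, the behaviour described above boils down to the following
minimal standalone sketch in C (illustrative only, not the actual
qemu/cirrus code; the struct and helper names are made up): the card
claims the fixed legacy VGA ranges only while the corresponding enable
bits in its PCI command register are set.

/* Standalone sketch: a PCI VGA function should only claim the legacy
 * VGA ranges while the corresponding enable bits in its PCI command
 * register are set.  Bit 0 enables I/O decoding, bit 1 memory decoding;
 * the legacy ranges themselves are fixed by the spec. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PCI_COMMAND_IO      0x1   /* I/O space decode enable */
#define PCI_COMMAND_MEMORY  0x2   /* memory space decode enable */

struct vga_dev {
    uint16_t command;             /* PCI command register */
};

/* Would this card claim an I/O transaction at 'port'? */
static bool claims_io(const struct vga_dev *d, uint16_t port)
{
    if (!(d->command & PCI_COMMAND_IO))
        return false;             /* I/O decode disabled: claim nothing */
    return (port >= 0x3b0 && port <= 0x3bb) ||
           (port >= 0x3c0 && port <= 0x3df);
}

/* Would this card claim a memory access at 'addr'? */
static bool claims_mem(const struct vga_dev *d, uint32_t addr)
{
    if (!(d->command & PCI_COMMAND_MEMORY))
        return false;             /* memory decode disabled */
    return addr >= 0xa0000 && addr <= 0xbffff;
}

int main(void)
{
    struct vga_dev card = { .command = 0 };    /* as at reset/qemu startup */
    printf("disabled: %d\n", claims_io(&card, 0x3c0));    /* prints 0 */
    card.command |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY;  /* BIOS enables it */
    printf("enabled:  %d\n", claims_io(&card, 0x3c0));    /* prints 1 */
    printf("fb:       %d\n", claims_mem(&card, 0xa0000)); /* prints 1 */
    return 0;
}

The cirrus issue discussed above is, in these terms, that the legacy
decode is set up at init time regardless of the command register bits.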
> > > > > > >
> > > > > > So that may be OK for ISA, but not for PCI.
> > > > >
> > > > > The code does it for both.
> > > >
> > > > Yep. So it's a bug.
> > > >
> > > > > > > > What is broken is that if the BIOS/guest then disables IO memory,
> > > > > > > > (I think - even if the guest is rebooted!) we will keep claiming IO transactions.
> > > > > > > > That our emulation does this seems to be a clear spec violation; we are
> > > > > > > > just lucky that the BIOS/guest does not do this at the moment.
> > > > > > > > > > >
> > > > > > > > > > > So what "fixing" this will buy us?
> > > > > > > > > >
> > > > > > > > > > Besides spec compliancy, you mean? Ability to support multiple VGA
> > > > > > > > > > cards. That's how it works I think: the BIOS enables IO on the primary
> > > > > > > > > > VGA device only.
> > > > > > > > >
> > > > > > > > > What spec defines hot-plug for a primary VGA adapter?
> > > > > > > >
> > > > > > > > No idea about hotplug. I am talking about multiple VGA cards;
> > > > > > > > enabling/disabling them dynamically should be possible.
> > > > > > >
> > > > > > > Of course. With a properly designed VGA card you should be able to have
> > > > > > > more than one,
> > > > > >
> > > > > > And, for that to have a chance to work when all cards are identical, you
> > > > > > don't claim IO when IO is disabled.
> > > > >
> > > > > But then only one card will be able to use IO, since enabling IO on more
> > > > > than one card will cause a conflict.
> > > >
> > > > Sure. That's life for legacy io though.
> > >
> > > But that is the point. You can't have two regular VGA cards
> > > simultaneously.
> >
> > You can't *enable* them simultaneously. The fact that we can't create
> > them in qemu is a bug.
>
> You can insert two of them on real HW too.

Yes. You can insert any number of VGA cards on a PCI bus; the BIOS can
configure one and disable the rest.

> > > The card should be designed to work in legacy
> > > mode and non-legacy mode. Then one of them will be used by legacy
> > > software like the BIOS and another will be driven from an OS by a driver
> > > written specifically for the card.
> >
> > There's nothing in the spec that says so. IMO it
> > should be possible to have two cards and have the BIOS (or maybe even the
> > OS) select which one to use.
>
> Use two for what? If you have two cards you most probably want to use both
> of them, otherwise why have you put two of them in the same computer?

There could be a variety of reasons, such as dual boot where one card
would work better with Linux, another with Windows.

> But
> only one of them can work in legacy mode and be usable without a specialized
> driver. This shouldn't be spelled out by the spec.

Yes. But guest software can control which.

> > > > > > > but one of them will provide legacy functionality
> > > > > > > and is not removable.
> > > > > >
> > > > > > The guest might not support hotplug. But there's no way
> > > > > > it can prevent surprise removal. qemu should not crash
> > > > > > when this happens.
> > > > >
> > > > > Qemu can prevent any removal, surprise or not. Qemu can just
> > > > > disallow device removal.
> > > >
> > > > Yes, but that won't emulate real hardware faithfully.
> > >
> > > To the letter. There is no HW with a hot-unpluggable primary
> > > vga card. You are welcome to surprise-remove a vga card from your
> > > machine and see what will happen.
> >
> > This is different from removing any other card with the hotplug module
> > unloaded in the OS how? The OS might crash, but so what?
> > You can always reboot
> > it, hardware won't be damaged, so qemu shouldn't crash either.
>
> If the guest can't handle unplug there is no meaning in qemu providing it.

Sure there's meaning. Giving the guest access to a backend has security
implications. We must have a way to revoke that access even if the guest
misbehaves. I expect surprise removal to be of most use for assigned
devices. But even for emulated devices, we have a small number of slots
available, so it would still be useful to free up the PCI slot, even if
the guest then needs to be rebooted.

> You can have a development mobo where the chipset can be unplugged. Should we
> allow hot-unplugging the chipset?

If it's useful, we could :)

> > > > On real hardware with a hotplug-supporting slot
> > > > (and without an EM lock :) ) you can yank the card out
> > > > and the guest can do nothing about it.
> > >
> > > And you will not find a primary vga there.
> >
> > Where? The PCI spec explicitly allows VGA cards behind expansion slots;
> > stick one there, and you can remove it too.
> >
> You can't remove a PCI card from just any PCI slot IIRC. The slot and the
> add-on card have to be specifically designed to be hot-unpluggable.
> (Something to do with when the power and data lines are disconnected during
> unplug, IIRC.) Not everything you can yank from the mobo while the OS is
> running is "unpluggable" :)

That's true for PCI/PCI-X. I didn't check PCI Express; maybe there all
cards are hot-pluggable. Do you know?

> --
> Gleb.
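Coming back to the multiple-cards point above: assuming cards that decode
the legacy ranges only while enabled, the BIOS (or the guest) selects the
primary VGA card simply by setting the command register bits on exactly
one of them, and switching the primary is just clearing one card's bits
and setting the other's. A toy continuation of the earlier sketch (again
illustrative only, with made-up helper names, not qemu code):

/* Two identical VGA cards behind PCI.  Firmware enables legacy decode on
 * exactly one of them; switching the "primary" card means clearing one
 * command register and setting the other.  Enabling both at once would
 * make both claim 3C0h-3DFh etc., i.e. the conflict discussed above. */
#include <stdint.h>
#include <stdio.h>

#define PCI_COMMAND_IO      0x1
#define PCI_COMMAND_MEMORY  0x2

struct vga_dev { uint16_t command; };

static void enable_legacy(struct vga_dev *d)
{
    d->command |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
}

static void disable_legacy(struct vga_dev *d)
{
    d->command &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY);
}

int main(void)
{
    struct vga_dev card[2] = { { 0 }, { 0 } };  /* both disabled at reset */

    enable_legacy(&card[0]);        /* BIOS picks card 0 as the primary */
    /* ... later, software switches the primary over to card 1 ... */
    disable_legacy(&card[0]);
    enable_legacy(&card[1]);

    for (int i = 0; i < 2; i++)
        printf("card %d decodes legacy VGA: %s\n", i,
               (card[i].command & PCI_COMMAND_IO) ? "yes" : "no");
    return 0;
}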