From mboxrd@z Thu Jan  1 00:00:00 1970
From: Laszlo Ersek
Subject: Re: [PATCH v1 14/21] Ovmf/Xen: allow non-PCI usage of XenBusDxe
Date: Mon, 26 Jan 2015 10:27:31 +0100
Message-ID: <54C60883.8030706__16270.9232572981$1422264621$gmane$org@redhat.com>
References: <1422025390-8036-1-git-send-email-ard.biesheuvel@linaro.org>
 <1422025390-8036-15-git-send-email-ard.biesheuvel@linaro.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1422025390-8036-15-git-send-email-ard.biesheuvel@linaro.org>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Ard Biesheuvel
Cc: ian.campbell@citrix.com, olivier.martin@arm.com,
 stefano.stabellini@eu.citrix.com, edk2-devel@lists.sourceforge.net,
 leif.lindholm@linaro.org, xen-devel@lists.xen.org, roy.franz@linaro.org,
 ilias.biris@linaro.org, anthony.perard@citrix.com,
 christoffer.dall@linaro.org
List-Id: xen-devel@lists.xenproject.org

On 01/23/15 16:03, Ard Biesheuvel wrote:
> While Xen on Intel uses a virtual PCI device to communicate the
> base address of the grant table, the ARM implementation uses a DT
> node, which is fundamentally incompatible with the way XenBusDxe is
> implemented, i.e., as a UEFI Driver Model implementation for a PCI
> device.
>
> To allow the non-PCI implementations to use this driver anyway, this
> patch introduces an abstract XENIO_PROTOCOL protocol, which contains
> just the grant table base address. The Intel implementation is adapted
> to allocate such a protocol on the fly based on the PCI config space
> metadata, so it operates as before.
> Other users can invoke the driver
> by installing a XENIO_PROTOCOL instance on a handle, and invoking
> ConnectController()
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel
> ---
>  OvmfPkg/Include/Protocol/XenIo.h  |  48 ++++++++++
>  OvmfPkg/OvmfPkg.dec               |   1 +
>  OvmfPkg/XenBusDxe/ComponentName.c |   2 +-
>  OvmfPkg/XenBusDxe/GrantTable.c    |   5 +-
>  OvmfPkg/XenBusDxe/GrantTable.h    |   3 +-
>  OvmfPkg/XenBusDxe/XenBus.c        |   6 +-
>  OvmfPkg/XenBusDxe/XenBusDxe.c     | 190 ++++++++++++++++++++++++++++++++------
>  OvmfPkg/XenBusDxe/XenBusDxe.h     |   2 +
>  OvmfPkg/XenBusDxe/XenBusDxe.inf   |   1 +
>  9 files changed, 220 insertions(+), 38 deletions(-)
>  create mode 100644 OvmfPkg/Include/Protocol/XenIo.h

This is what we have with virtio:

  EFI_BLOCK_IO_PROTOCOL        EFI_SIMPLE_NETWORK_PROTOCOL
  [OvmfPkg/VirtioBlkDxe]       [OvmfPkg/VirtioNetDxe]
          |                               |
          |   EFI_EXT_SCSI_PASS_THRU_PROTOCOL
          |   [OvmfPkg/VirtioScsiDxe]     |
          |              |                |
          +--------------+----------------+
                         |
               VIRTIO_DEVICE_PROTOCOL
                         |
            +------------+-------------------+
            |                                |
  [OvmfPkg/VirtioPciDeviceDxe]    [custom platform drivers]
            |                                |
  EFI_PCI_IO_PROTOCOL             [OvmfPkg/Library/VirtioMmioDeviceLib]
  [MdeModulePkg/Bus/Pci/PciBusDxe]           |
                                  direct MMIO register access

And this is what we should have for Xen, I think:

  EFI_BLOCK_IO_PROTOCOL
  [OvmfPkg/XenPvBlkDxe]
          |
   XENBUS_PROTOCOL
  [OvmfPkg/XenBusDxe]
          |
    XENIO_PROTOCOL
          |
     +----+-------------------+
     |                        |
  [OvmfPkg/XenIoPciDxe]   [ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe]
     |                        |
  EFI_PCI_IO_PROTOCOL     [ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib]
  [MdeModulePkg/Bus/Pci/PciBusDxe]

XENIO_PROTOCOL should abstract both of the following:
- hypercall implementation
- grant table base address

A new driver, OvmfPkg/XenIoPciDxe, would take over the "bottom half" of
the current OvmfPkg/XenBusDxe driver (discovery, hypercalls, etc.). It
would install the XENIO_PROTOCOL on the PCI device handle. (Meaning, it
would be a device driver.)
A new library, ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib, would
allow its caller, ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe, to
discover the grant table address in the DTB, and to pass that base
address to the library. The library would then create a new handle,
with instances of the XENIO_PROTOCOL and the EFI_DEVICE_PATH_PROTOCOL
on it. Again, the XENIO_PROTOCOL instance would abstract the hypercall
implementation as well.

OvmfPkg/XenBusDxe would preserve only its current "top half". All
hypercalls and all PciIo accesses would be rebased to XENIO_PROTOCOL
member calls. I.e., you first have to collect all those accesses and
distill them into the member functions of XENIO_PROTOCOL. This is
probably the hardest part of the design.

When Olivier did the same for VIRTIO_DEVICE_PROTOCOL, he was certainly
helped by the virtio specification (and my original, PCI-only source
code that carefully referenced each step in the spec), which had
formalized the virtio operations in human language. This could be a
good opportunity to force the Xen guys to produce a similar document (a
text file would suffice).

No changes for XenPvBlkDxe.

If you want, you can still factor out the hypercall implementation into
a separate library. Then use the Intel library instance with the
XenIoPciDxe driver, and the ARM library instance with VirtFdtDxe +
XenIoMmioLib. Then, once we have PCI in ArmVirtualizationPkg, you can
use XenIoPciDxe even there, just with the ARM-specific hypercall
library.

... I'm uncertain how much sense the above makes for Xen specifics, and
how much it overlaps with your current patchset, but if I understood
correctly, these layers were your main question, and I think we should
go with the above structure. It follows how virtio is split into
layers, and it puts the UEFI driver model to good use. (One difference
is that Xen has one more layer, AIUI.)
If I understand the current patch right, looking at the
XenBusDxeDriverBindingSupported() hunk, it would make OvmfPkg/XenBusDxe
capable of using both PciIo and XenIo. I don't like that (nor do you, I
believe). :)

Thanks
Laszlo