From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ard Biesheuvel
Subject: Re: [PATCH v1 14/21] Ovmf/Xen: allow non-PCI usage of XenBusDxe
Date: Mon, 26 Jan 2015 09:46:03 +0000
References: <1422025390-8036-1-git-send-email-ard.biesheuvel@linaro.org> <1422025390-8036-15-git-send-email-ard.biesheuvel@linaro.org> <54C60883.8030706@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <54C60883.8030706@redhat.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Laszlo Ersek
Cc: Ian Campbell, Olivier Martin, Stefano Stabellini, "edk2-devel@lists.sourceforge.net", Leif Lindholm, xen-devel@lists.xen.org, Roy Franz, Ilias Biris, Anthony PERARD, Christoffer Dall
List-Id: xen-devel@lists.xenproject.org

On 26 January 2015 at 09:27, Laszlo Ersek wrote:
> On 01/23/15 16:03, Ard Biesheuvel wrote:
>> While Xen on Intel uses a virtual PCI device to communicate the
>> base address of the grant table, the ARM implementation uses a DT
>> node, which is fundamentally incompatible with the way XenBusDxe is
>> implemented, i.e., as a UEFI Driver Model implementation for a PCI
>> device.
>>
>> To allow the non-PCI implementations to use this driver anyway, this
>> patch introduces an abstract XENIO_PROTOCOL protocol, which contains
>> just the grant table base address. The Intel implementation is adapted
>> to allocate such a protocol on the fly based on the PCI config space
>> metadata, so it operates as before.
>> Other users can invoke the driver
>> by installing a XENIO_PROTOCOL instance on a handle, and invoking
>> ConnectController().
>>
>> Contributed-under: TianoCore Contribution Agreement 1.0
>> Signed-off-by: Ard Biesheuvel
>> ---
>>  OvmfPkg/Include/Protocol/XenIo.h  |  48 ++++++++++
>>  OvmfPkg/OvmfPkg.dec               |   1 +
>>  OvmfPkg/XenBusDxe/ComponentName.c |   2 +-
>>  OvmfPkg/XenBusDxe/GrantTable.c    |   5 +-
>>  OvmfPkg/XenBusDxe/GrantTable.h    |   3 +-
>>  OvmfPkg/XenBusDxe/XenBus.c        |   6 +-
>>  OvmfPkg/XenBusDxe/XenBusDxe.c     | 190 ++++++++++++++++++++++++++++++++------
>>  OvmfPkg/XenBusDxe/XenBusDxe.h     |   2 +
>>  OvmfPkg/XenBusDxe/XenBusDxe.inf   |   1 +
>>  9 files changed, 220 insertions(+), 38 deletions(-)
>>  create mode 100644 OvmfPkg/Include/Protocol/XenIo.h
>
> This is what we have with virtio:
>
>   EFI_BLOCK_IO_PROTOCOL                EFI_SIMPLE_NETWORK_PROTOCOL
>   [OvmfPkg/VirtioBlkDxe]               [OvmfPkg/VirtioNetDxe]
>     |                                    |
>     |   EFI_EXT_SCSI_PASS_THRU_PROTOCOL  |
>     |   [OvmfPkg/VirtioScsiDxe]          |
>     |                |                   |
>     +----------------+-------------------+
>                      |
>             VIRTIO_DEVICE_PROTOCOL
>                      |
>     +----------------+--------------------+
>     |                                     |
>   [OvmfPkg/VirtioPciDeviceDxe]     [custom platform drivers]
>     |                                     |
>   EFI_PCI_IO_PROTOCOL         [OvmfPkg/Library/VirtioMmioDeviceLib]
>   [MdeModulePkg/Bus/Pci/PciBusDxe]  direct MMIO register access
>
> And this is what we should have for Xen, I think:
>
>             EFI_BLOCK_IO_PROTOCOL
>             [OvmfPkg/XenPvBlkDxe]
>                      |
>               XENBUS_PROTOCOL
>             [OvmfPkg/XenBusDxe]
>                      |
>               XENIO_PROTOCOL
>                      |
>     +----------------+----------------------+
>     |                                       |
>   [OvmfPkg/XenIoPciDxe]   [ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe]
>     |                                       |
>   EFI_PCI_IO_PROTOCOL     [ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib]
>   [MdeModulePkg/Bus/Pci/PciBusDxe]
>

Very helpful, thanks!
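As a rough illustration of the abstraction discussed above (a sketch, not code from the patch): XENIO_PROTOCOL carries only the grant table base address, so a PCI producer and a DT/MMIO producer can hand XenBusDxe the same thing. The plain-C types below stand in for the EDK2 ones (UINT64 etc.), and the helper names are made up for this example:

```c
#include <stdint.h>

/* Stand-in for the abstract protocol: just the grant table base. */
typedef struct {
  uint64_t GrantTableAddress;   /* base address of the grant table frames */
} XENIO_PROTOCOL;

/* PCI producer: the Intel implementation allocates an instance on the
 * fly from PCI config space metadata, i.e. the address read out of BAR1. */
static XENIO_PROTOCOL
XenIoFromPciBar1 (uint64_t Bar1Base)
{
  XENIO_PROTOCOL XenIo = { Bar1Base };
  return XenIo;
}

/* MMIO/DT producer: on ARM the same instance would be filled in from
 * the grant table address parsed out of the device tree node instead;
 * the consumer cannot tell the two producers apart. */
static XENIO_PROTOCOL
XenIoFromDtNode (uint64_t GrantFrameBase)
{
  XENIO_PROTOCOL XenIo = { GrantFrameBase };
  return XenIo;
}
```

In the real flow, the producer would install such an instance on a handle (gBS->InstallMultipleProtocolInterfaces()) and then invoke gBS->ConnectController() on that handle so the UEFI driver model lets XenBusDxe bind to it.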
> XENIO_PROTOCOL should abstract both of the following:
> - hypercall implementation
> - grant table base address
>
> A new driver, OvmfPkg/XenIoPciDxe, would take over the "bottom half" of
> the current OvmfPkg/XenBusDxe driver (discovery, hypercalls, etc.). It
> would install the XENIO_PROTOCOL on the PCI device handle. (Meaning, it
> would be a device driver.)
>

Exactly. This is what I started implementing, but I decided to get
something working first before spending a lot of time on this. (It is
always easier to get comments on working patches than on abstract
architectural design questions.)

So would it be sufficient to install the XENIO_PROTOCOL on the existing
ControllerHandle that carries the EFI_PCI_IO_PROTOCOL? Would the
recursion in the UEFI driver model then ensure that the subordinate bus
and devices are discovered as well?

> A new library, ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib, would
> allow its caller, ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe, to
> discover the grant table address in the DTB, and call it with the grant
> table base address. The library would then create a new handle, with
> instances of the XENIO_PROTOCOL and the EFI_DEVICE_PATH_PROTOCOL on it.
> The XENIO_PROTOCOL would again abstract the hypercall implementation as
> well.
>

Regarding the hypercall (as you observe below), this is actually
orthogonal, i.e., not tied to the PCI vs MMIO question.

> OvmfPkg/XenBusDxe would preserve only its current "top half". All
> hypercalls and all PciIo accesses would be rebased to XENIO_PROTOCOL
> member calls. I.e., first you have to collect all those accesses and
> distill them into the member functions of XENIO_PROTOCOL.
>
> This is probably the hardest part of the design. When Olivier did the
> same for VIRTIO_DEVICE_PROTOCOL, he was certainly helped by the virtio
> specification (and my original, PCI-only source code that carefully
> referenced each step in the spec), which had formalized the virtio
> operations in human language.
> This could be a good opportunity to force the Xen guys to produce a
> similar document (a text file would suffice).
>

Well, the point here is that the Xen PCI device is only used as a
vehicle to convey the grant table address, nothing else. After reading
the grant table address from BAR1, no other calls into the
EFI_PCI_IO_PROTOCOL are made at all.

> No changes for XenPvBlkDxe.
>
> If you want, you can still factor out the hypercall implementation to a
> separate library. Then use the Intel library instance with the
> XenIoPciDxe driver, and the ARM library instance with VirtFdtDxe +
> XenIoMmioLib. Then, once we have PCI in ArmVirtualizationPkg, you can
> use XenIoPciDxe even there, just with the ARM-specific hypercall
> library.
>

Yes. I agree I need to rework that patch, but the existing
XenHypercallLib works pretty well, I think.

> ... I'm uncertain how much sense the above makes for Xen specifics, and
> how much it overlaps with your current patchset, but if I understood
> correctly, these layers were your main question, and I think we should
> go with the above structure. It follows how virtio is split into
> layers, and it puts the UEFI driver model to good use. (One difference
> is that Xen has one more layer, AIUI.)
>
> If I understand the current patch right, looking at the
> XenBusDxeDriverBindingSupported() hunk, it would make OvmfPkg/XenBusDxe
> capable of using both PciIo and XenIo. I don't like that (nor do you, I
> believe). :)
>

Indeed. As I said, the idea was to produce something working to get the
discussion going.

I will proceed and split off XenIoPciDxe from XenBusDxe, which will
install my existing XENIO_PROTOCOL on the existing ControllerHandle if
its EFI_PCI_IO_PROTOCOL produces the correct vid/pid and BAR1 mem type.

Thanks again for taking the time to look into these patches.

--
Ard.
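P.S. For illustration only, the vid/pid and BAR1 mem-type check mentioned above could look roughly like this. The 0x5853:0x0001 IDs are those of the Xen platform PCI device; everything else (the PCI_CONFIG struct, helper names) is a plain-C stand-in for what the eventual XenIoPciDxe Supported() routine would read through EFI_PCI_IO_PROTOCOL, not actual EDK2 code:

```c
#include <stdint.h>
#include <stdbool.h>

/* Xen platform PCI device identifiers (vendor "XenSource"). */
#define XEN_VENDOR_ID  0x5853
#define XEN_DEVICE_ID  0x0001

/* Simplified view of the config space fields the check needs;
 * a real driver would read these via EFI_PCI_IO_PROTOCOL. */
typedef struct {
  uint16_t VendorId;
  uint16_t DeviceId;
  uint32_t Bar1;      /* raw BAR1 register value */
} PCI_CONFIG;

/* Bit 0 of a BAR distinguishes memory space (0) from I/O space (1). */
static bool
Bar1IsMemType (uint32_t Bar)
{
  return (Bar & 0x1) == 0;
}

/* Rough shape of the Supported() decision: bind only when the vid/pid
 * match the Xen platform device and BAR1 is a memory BAR. */
static bool
XenIoPciSupported (const PCI_CONFIG *Cfg)
{
  return Cfg->VendorId == XEN_VENDOR_ID &&
         Cfg->DeviceId == XEN_DEVICE_ID &&
         Bar1IsMemType (Cfg->Bar1);
}
```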