From: Icenowy Zheng <email@example.com>
To: Maxime Ripard <firstname.lastname@example.org>
Cc: Lorenzo Pieralisi <email@example.com>,
Andrew Murray <firstname.lastname@example.org>,
Bjorn Helgaas <email@example.com>, Chen-Yu Tsai <firstname.lastname@example.org>,
Rob Herring <email@example.com>
Subject: Re: [RFC PATCH] PCI: dwc: add support for Allwinner SoCs' PCIe controller
Date: Mon, 20 Apr 2020 16:18:58 +0800 [thread overview]
Message-ID: <firstname.lastname@example.org> (raw)
On Mon, 2020-04-06 at 10:27 +0200, Maxime Ripard wrote:
> On Fri, Apr 03, 2020 at 12:05:49AM +0800, Icenowy Zheng wrote:
> > The Allwinner H6 SoC uses DesignWare's PCIe controller to provide a
> > PCIe host. However, on Allwinner H6 the PCIe host has broken MMIO,
> > which needs to be worked around. A workaround using the EL2
> > hypervisor functionality of ARM Cortex cores is now available, which
> > wraps MMIO operations.
> > 
> > This patch adds a driver for the DWC PCIe controller available in
> > Allwinner SoCs, either the H6 one when wrapped by the hypervisor (so
> > that the driver can treat it as an ordinary PCIe controller) or
> > later, non-buggy ones.
> > Signed-off-by: Icenowy Zheng <email@example.com>
> > ---
> > There's no device tree binding patch available, because I still have
> > questions about the device tree compatible string. I want to use it
> > to describe that this driver doesn't support the native Allwinner H6
> > PCIe controller, but rather a version wrapped by my hypervisor.
> > 
> > I think supporting a "para-physical" device is something new, so
> > this patch is an RFC.
> > 
> > My hypervisor is at
> > https://github.com/Icenowy/aw-el2-barebone
> > and some basic usage documentation is at
> > https://forum.armbian.com/topic/13529-a-try-on-utilizing-h6-pcie-with-virtualization/
> I'm a bit concerned about throwing yet another mandatory, difficult
> to update component into the already quite long boot chain.
> 
> Getting fixes deployed in ATF or U-Boot already takes a long time;
> another component in there will just make it worse, and it's another
> hard-to-debug component that we throw into the mix.
> 
> And this prevents any use of virtualisation on the platform.
> 
> I haven't found an explanation of what that hypervisor is doing
> exactly, but from a look at it, it seems that it traps all the
> accesses to the PCIe memory region to emulate a regular space on top
> of the restricted one we have?
> 
> If so, can't we do that from the kernel directly, using a memory
> region that always faults, with a fault handler like the one the
> framebuffer's deferred_io uses (drivers/video/fbdev/core/fb_defio.c)?
I don't know the kernel's memory management well. However, for PCIe
memory space the kernel allows a plain ioremap() on it, so drivers get
a raw mapping and wrapping the accesses would not be easy.

And I think the maintainer of pcie-tango suffered from an even simpler
issue -- the PCI config space and MMIO space are muxed. They could not
wrap the MMIO accesses, so the driver just prints a warning and taints
the kernel. pcie-tango is mentioned in my previous discussion on H6
PCIe, see .
Thread overview: 4+ messages
2020-04-02 16:05 [RFC PATCH] PCI: dwc: add support for Allwinner SoCs' PCIe controller Icenowy Zheng
2020-04-06 8:27 ` Maxime Ripard
2020-04-20 8:18 ` Icenowy Zheng [this message]
2020-05-06 15:36 ` Maxime Ripard