qemu-devel.nongnu.org archive mirror
From: "Philippe Mathieu-Daudé" <f4bug@amsat.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "open list:RISC-V" <qemu-riscv@nongnu.org>,
	Arnd Bergmann <arnd@arndb.de>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	qemu-arm <qemu-arm@nongnu.org>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
Date: Tue, 20 Apr 2021 13:52:03 +0200	[thread overview]
Message-ID: <bb3cc932-5111-c388-2770-3c1110dbc89f@amsat.org> (raw)
In-Reply-To: <CAFEAcA_TuKCJ31xsv_j49oQfOFuEipmMnsNb2czPZRMPTN=wxg@mail.gmail.com>

On 4/19/21 3:42 PM, Peter Maydell wrote:
> On Thu, 25 Mar 2021 at 18:14, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>>
>> Hi Peter,
>>
>> On 3/25/21 5:33 PM, Peter Maydell wrote:
>>> Currently the gpex PCI controller implements no special behaviour for
>>> guest accesses to areas of the PIO and MMIO where it has not mapped
>>> any PCI devices, which means that for Arm you end up with a CPU
>>> exception due to a data abort.
>>>
>>> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
>>> like this return -1 for reads and ignore writes.  In the interests of
>>> not being surprising, make host CPU accesses to these windows behave
>>> as -1/discard where there's no mapped PCI device.
>>>
>>> The old behaviour generally didn't cause any problems, because
>>> almost always the guest OS will map the PCI devices and then only
>>> access where it has mapped them. One corner case where you will see
>>> this kind of access is if Linux attempts to probe legacy ISA
>>> devices via a PIO window access. So far the only case where we've
>>> seen this has been via the syzkaller fuzzer.
>>>
>>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>>> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
>>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>>> ---
>>> v1->v2 changes: put in the hw_compat machinery.
>>>
>>> Still not sure if I want to put this in 6.0 or not.
>>>
>>>  include/hw/pci-host/gpex.h |  4 +++
>>>  hw/core/machine.c          |  1 +
>>>  hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
>>>  3 files changed, 58 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
>>> index d48a020a952..fcf8b638200 100644
>>> --- a/include/hw/pci-host/gpex.h
>>> +++ b/include/hw/pci-host/gpex.h
>>> @@ -49,8 +49,12 @@ struct GPEXHost {
>>>
>>>      MemoryRegion io_ioport;
>>>      MemoryRegion io_mmio;
>>> +    MemoryRegion io_ioport_window;
>>> +    MemoryRegion io_mmio_window;
>>>      qemu_irq irq[GPEX_NUM_IRQS];
>>>      int irq_num[GPEX_NUM_IRQS];
>>> +
>>> +    bool allow_unmapped_accesses;
>>>  };
>>>
>>>  struct GPEXConfig {
>>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>>> index 257a664ea2e..9750fad7435 100644
>>> --- a/hw/core/machine.c
>>> +++ b/hw/core/machine.c
>>> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>>>      { "PIIX4_PM", "smm-compat", "on"},
>>>      { "virtio-blk-device", "report-discard-granularity", "off" },
>>>      { "virtio-net-pci", "vectors", "3"},
>>> +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
>>>  };
>>>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>>>
>>> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
>>> index 2bdbe7b4561..a6752fac5e8 100644
>>> --- a/hw/pci-host/gpex.c
>>> +++ b/hw/pci-host/gpex.c
>>> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>>>      int i;
>>>
>>>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
>>> +    sysbus_init_mmio(sbd, &pex->mmio);
>>> +
>>> +    /*
>>> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
>>> +     * to pci_register_root_bus() are not the same as the
>>> +     * MemoryRegions io_mmio_window and io_ioport_window that we
>>> +     * expose as SysBus MRs. The difference is in the behaviour of
>>> +     * accesses to addresses where no PCI device has been mapped.
>>> +     *
>>> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
>>> +     * address space, and when a PCI device does a bus master access
>>> +     * to a bad address this is reported back to it as a transaction
>>> +     * failure.
>>> +     *
>>> +     * io_mmio_window and io_ioport_window implement "unmapped
>>> +     * addresses read as -1 and ignore writes"; this is traditional
>>> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
>>> +     * but expected by much PCI-using guest software, including Linux.
>>
>> I suspect PCI-ISA bridges to provide an EISA bus.
> 
> I'm not sure what you mean here -- there isn't an ISA bridge
> or an EISA bus involved here. This is purely about the behaviour
> of the memory window the PCI host controller exposes to the CPU
> (and in particular the window for when a PCI device's BAR is
> set to "IO" rather than "MMIO"), though we change both here.

I guess I always interpreted the IO BARs as being there for ISA
backward compatibility. I don't know PCI well, so I'll study it
more. Sorry for my confused comment.
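(For background on the IO-vs-memory distinction being discussed: per the PCI specification, bit 0 of a BAR indicates its space type, 1 for I/O space and 0 for memory space, and for I/O BARs the low two bits are masked off to recover the base address. A minimal illustrative decoder, not taken from QEMU's sources:)

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * PCI spec: bit 0 of a BAR distinguishes I/O space (1) from
 * memory space (0). For I/O BARs, bit 1 is reserved and the
 * base address lives in bits [31:2].
 */
static bool bar_is_io(uint32_t bar)
{
    return bar & 0x1;
}

static uint32_t bar_io_base(uint32_t bar)
{
    return bar & ~0x3u;  /* mask off the type/reserved bits */
}
```

So a BAR value of 0x0000c001 is an I/O BAR with base 0x0000c000, while 0xfebf0000 is a memory BAR.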

>>> +     * In the interests of not being unnecessarily surprising, we
>>> +     * implement it in the gpex PCI host controller, by providing the
>>> +     * _window MRs, which are containers with io ops that implement
>>> +     * the 'background' behaviour and which hold the real PCI MRs as
>>> +     * subregions.
>>> +     */
>>>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>>>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>>>
>>> -    sysbus_init_mmio(sbd, &pex->mmio);
>>> -    sysbus_init_mmio(sbd, &s->io_mmio);
>>> -    sysbus_init_mmio(sbd, &s->io_ioport);
>>> +    if (s->allow_unmapped_accesses) {
>>> +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
>>> +                              &unassigned_io_ops, OBJECT(s),
>>> +                              "gpex_mmio_window", UINT64_MAX);
>>
>> EISA -> 4 * GiB
>>
>> unassigned_io_ops allows 64-bit accesses. Here we want up to 32.
>>
>> Maybe we don't care.
>>
>>> +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
>>> +                              &unassigned_io_ops, OBJECT(s),
>>> +                              "gpex_ioport_window", 64 * 1024);
>>
>> Ditto, unassigned_io_ops accepts 64-bit accesses.
> 
> These are just using the same sizes as the io_mmio and io_ioport
> MRs which the existing code creates.
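(The "-1 on read, discard on write" background behaviour under discussion can be modelled in plain C as follows. This is only a sketch of the semantics, sized to the access width in bytes; it is not QEMU's actual unassigned_io_ops implementation:)

```c
#include <stdint.h>

/*
 * Background handlers for an unmapped window: reads return
 * all-ones for the access width, writes are silently discarded.
 */
static uint64_t unmapped_read(unsigned size)
{
    /* size is the access width in bytes: 1, 2, 4 or 8 */
    if (size >= 8) {
        return UINT64_MAX;
    }
    return (UINT64_C(1) << (size * 8)) - 1;
}

static void unmapped_write(uint64_t val, unsigned size)
{
    /* discard the write */
    (void)val;
    (void)size;
}
```

A 1-byte read thus returns 0xff, a 4-byte read 0xffffffff, and so on, which is the "like an x86 PC" behaviour the commit message describes.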
> 
>>>  static void gpex_host_class_init(ObjectClass *klass, void *data)
>>>  {
>>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>> @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
>>>      dc->realize = gpex_host_realize;
>>>      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
>>>      dc->fw_name = "pci";
>>> +    device_class_set_props(dc, gpex_host_properties);
>>
>> IMO this change belongs to the parent bridges,
>> TYPE_PCI_HOST_BRIDGE and TYPE_PCIE_HOST_BRIDGE.
> 
> Arnd had a look through the kernel sources and apparently not
> all PCI host controllers do this -- there are a few SoCs where the
> kernel has to put in special case code to allow for the fact that
> it will get a bus error for accesses to unmapped parts of the
> window. So I concluded that the specific controller implementation
> was the right place for it.

Yes, the changes are simple. I'm certainly not NAcking the patch,
but I can't review it either :( So please ignore my comments.


Thread overview: 10+ messages
2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
2021-03-25 17:01 ` Richard Henderson
2021-03-25 17:03 ` Michael S. Tsirkin
2021-03-25 18:14 ` Philippe Mathieu-Daudé
2021-04-19 13:42   ` Peter Maydell
2021-04-20 11:52     ` Philippe Mathieu-Daudé [this message]
2021-04-20 12:26       ` Arnd Bergmann
2021-04-20 12:31         ` Philippe Mathieu-Daudé
2021-04-20 10:24 ` Michael S. Tsirkin
2021-04-20 12:39   ` Peter Maydell
