QEMU-Devel Archive on lore.kernel.org
* [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
@ 2021-03-25 16:33 Peter Maydell
  2021-03-25 17:01 ` Richard Henderson
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Peter Maydell @ 2021-03-25 16:33 UTC (permalink / raw)
  To: qemu-arm, qemu-devel
  Cc: qemu-riscv, Arnd Bergmann, Dmitry Vyukov, Michael S. Tsirkin

Currently the gpex PCI controller implements no special behaviour for
guest accesses to areas of the PIO and MMIO windows where it has not
mapped any PCI devices, which means that for Arm you end up with a CPU
exception due to a data abort.

Most host OSes expect "like an x86 PC" behaviour, where bad accesses
like this return -1 for reads and ignore writes.  In the interests of
not being surprising, make host CPU accesses to these windows behave
as -1/discard where there's no mapped PCI device.

The old behaviour generally didn't cause any problems, because
almost always the guest OS will map the PCI devices and then only
access where it has mapped them. One corner case where you will see
this kind of access is if Linux attempts to probe legacy ISA
devices via a PIO window access. So far the only case where we've
seen this has been via the syzkaller fuzzer.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
v1->v2 changes: put in the hw_compat machinery.

Still not sure if I want to put this in 6.0 or not.

 include/hw/pci-host/gpex.h |  4 +++
 hw/core/machine.c          |  1 +
 hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
 3 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
index d48a020a952..fcf8b638200 100644
--- a/include/hw/pci-host/gpex.h
+++ b/include/hw/pci-host/gpex.h
@@ -49,8 +49,12 @@ struct GPEXHost {
 
     MemoryRegion io_ioport;
     MemoryRegion io_mmio;
+    MemoryRegion io_ioport_window;
+    MemoryRegion io_mmio_window;
     qemu_irq irq[GPEX_NUM_IRQS];
     int irq_num[GPEX_NUM_IRQS];
+
+    bool allow_unmapped_accesses;
 };
 
 struct GPEXConfig {
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 257a664ea2e..9750fad7435 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
     { "PIIX4_PM", "smm-compat", "on"},
     { "virtio-blk-device", "report-discard-granularity", "off" },
     { "virtio-net-pci", "vectors", "3"},
+    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
 };
 const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
 
diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
index 2bdbe7b4561..a6752fac5e8 100644
--- a/hw/pci-host/gpex.c
+++ b/hw/pci-host/gpex.c
@@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
     int i;
 
     pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
+    sysbus_init_mmio(sbd, &pex->mmio);
+
+    /*
+     * Note that the MemoryRegions io_mmio and io_ioport that we pass
+     * to pci_register_root_bus() are not the same as the
+     * MemoryRegions io_mmio_window and io_ioport_window that we
+     * expose as SysBus MRs. The difference is in the behaviour of
+     * accesses to addresses where no PCI device has been mapped.
+     *
+     * io_mmio and io_ioport are the underlying PCI view of the PCI
+     * address space, and when a PCI device does a bus master access
+     * to a bad address this is reported back to it as a transaction
+     * failure.
+     *
+     * io_mmio_window and io_ioport_window implement "unmapped
+     * addresses read as -1 and ignore writes"; this is traditional
+     * x86 PC behaviour, which is not mandated by the PCI spec proper
+     * but expected by much PCI-using guest software, including Linux.
+     *
+     * In the interests of not being unnecessarily surprising, we
+     * implement it in the gpex PCI host controller, by providing the
+     * _window MRs, which are containers with io ops that implement
+     * the 'background' behaviour and which hold the real PCI MRs as
+     * subregions.
+     */
     memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
     memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
 
-    sysbus_init_mmio(sbd, &pex->mmio);
-    sysbus_init_mmio(sbd, &s->io_mmio);
-    sysbus_init_mmio(sbd, &s->io_ioport);
+    if (s->allow_unmapped_accesses) {
+        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
+                              &unassigned_io_ops, OBJECT(s),
+                              "gpex_mmio_window", UINT64_MAX);
+        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
+                              &unassigned_io_ops, OBJECT(s),
+                              "gpex_ioport_window", 64 * 1024);
+
+        memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
+        memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
+        sysbus_init_mmio(sbd, &s->io_mmio_window);
+        sysbus_init_mmio(sbd, &s->io_ioport_window);
+    } else {
+        sysbus_init_mmio(sbd, &s->io_mmio);
+        sysbus_init_mmio(sbd, &s->io_ioport);
+    }
+
     for (i = 0; i < GPEX_NUM_IRQS; i++) {
         sysbus_init_irq(sbd, &s->irq[i]);
         s->irq_num[i] = -1;
@@ -108,6 +147,16 @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
     return "0000:00";
 }
 
+static Property gpex_host_properties[] = {
+    /*
+     * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
+     * (discarding writes and returning -1 for reads) rather than aborting.
+     */
+    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
+                     allow_unmapped_accesses, true),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
 static void gpex_host_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
     dc->realize = gpex_host_realize;
     set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
     dc->fw_name = "pci";
+    device_class_set_props(dc, gpex_host_properties);
 }
 
 static void gpex_host_initfn(Object *obj)
-- 
2.20.1




* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
@ 2021-03-25 17:01 ` Richard Henderson
  2021-03-25 17:03 ` Michael S. Tsirkin
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Richard Henderson @ 2021-03-25 17:01 UTC (permalink / raw)
  To: Peter Maydell, qemu-arm, qemu-devel
  Cc: qemu-riscv, Dmitry Vyukov, Arnd Bergmann, Michael S. Tsirkin

On 3/25/21 10:33 AM, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
> 
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes.  In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
> 
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
> 
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> v1->v2 changes: put in the hw_compat machinery.
> 
> Still not sure if I want to put this in 6.0 or not.

I know what you mean.

> 
>   include/hw/pci-host/gpex.h |  4 +++
>   hw/core/machine.c          |  1 +
>   hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
>   3 files changed, 58 insertions(+), 3 deletions(-)

That said, the code looks fine, so,

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
  2021-03-25 17:01 ` Richard Henderson
@ 2021-03-25 17:03 ` Michael S. Tsirkin
  2021-03-25 18:14 ` Philippe Mathieu-Daudé
  2021-04-20 10:24 ` Michael S. Tsirkin
  3 siblings, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-03-25 17:03 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Arnd Bergmann, qemu-arm, qemu-riscv, qemu-devel, Dmitry Vyukov

On Thu, Mar 25, 2021 at 04:33:15PM +0000, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
> 
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes.  In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
> 
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
> 
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

Acked-by: Michael S. Tsirkin <mst@redhat.com>





* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
  2021-03-25 17:01 ` Richard Henderson
  2021-03-25 17:03 ` Michael S. Tsirkin
@ 2021-03-25 18:14 ` Philippe Mathieu-Daudé
  2021-04-19 13:42   ` Peter Maydell
  2021-04-20 10:24 ` Michael S. Tsirkin
  3 siblings, 1 reply; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-03-25 18:14 UTC (permalink / raw)
  To: Peter Maydell, qemu-arm, qemu-devel
  Cc: qemu-riscv, Dmitry Vyukov, Arnd Bergmann, Michael S. Tsirkin

Hi Peter,

On 3/25/21 5:33 PM, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
> 
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes.  In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
> 
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
> 
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> v1->v2 changes: put in the hw_compat machinery.
> 
> Still not sure if I want to put this in 6.0 or not.
> 
>  include/hw/pci-host/gpex.h |  4 +++
>  hw/core/machine.c          |  1 +
>  hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
>  3 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> index d48a020a952..fcf8b638200 100644
> --- a/include/hw/pci-host/gpex.h
> +++ b/include/hw/pci-host/gpex.h
> @@ -49,8 +49,12 @@ struct GPEXHost {
>  
>      MemoryRegion io_ioport;
>      MemoryRegion io_mmio;
> +    MemoryRegion io_ioport_window;
> +    MemoryRegion io_mmio_window;
>      qemu_irq irq[GPEX_NUM_IRQS];
>      int irq_num[GPEX_NUM_IRQS];
> +
> +    bool allow_unmapped_accesses;
>  };
>  
>  struct GPEXConfig {
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 257a664ea2e..9750fad7435 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>      { "PIIX4_PM", "smm-compat", "on"},
>      { "virtio-blk-device", "report-discard-granularity", "off" },
>      { "virtio-net-pci", "vectors", "3"},
> +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
>  };
>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>  
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> index 2bdbe7b4561..a6752fac5e8 100644
> --- a/hw/pci-host/gpex.c
> +++ b/hw/pci-host/gpex.c
> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>      int i;
>  
>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> +    sysbus_init_mmio(sbd, &pex->mmio);
> +
> +    /*
> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
> +     * to pci_register_root_bus() are not the same as the
> +     * MemoryRegions io_mmio_window and io_ioport_window that we
> +     * expose as SysBus MRs. The difference is in the behaviour of
> +     * accesses to addresses where no PCI device has been mapped.
> +     *
> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
> +     * address space, and when a PCI device does a bus master access
> +     * to a bad address this is reported back to it as a transaction
> +     * failure.
> +     *
> +     * io_mmio_window and io_ioport_window implement "unmapped
> +     * addresses read as -1 and ignore writes"; this is traditional
> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
> +     * but expected by much PCI-using guest software, including Linux.

I suspect PCI-ISA bridges provide an EISA bus.

The 'IEEE P996' ISA spec doesn't seem to be public. Per the Intel
ISA spec:
https://archive.org/details/bitsavers_intelbusSpep89_3342148/page/n31/mode/2up
the data lines are tri-stated, but I couldn't find the default logic
for when no add-on card owns the bus at the requested address (to
confirm the "read as -1").

> +     * In the interests of not being unnecessarily surprising, we
> +     * implement it in the gpex PCI host controller, by providing the
> +     * _window MRs, which are containers with io ops that implement
> +     * the 'background' behaviour and which hold the real PCI MRs as
> +     * subregions.
> +     */
>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>  
> -    sysbus_init_mmio(sbd, &pex->mmio);
> -    sysbus_init_mmio(sbd, &s->io_mmio);
> -    sysbus_init_mmio(sbd, &s->io_ioport);
> +    if (s->allow_unmapped_accesses) {
> +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_mmio_window", UINT64_MAX);

EISA -> 4 * GiB

unassigned_io_ops allows 64-bit accesses. Here we want up to 32.

Maybe we don't care.

> +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_ioport_window", 64 * 1024);

Ditto, unassigned_io_ops accepts 64-bit accesses.

> +
> +        memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
> +        memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
> +        sysbus_init_mmio(sbd, &s->io_mmio_window);
> +        sysbus_init_mmio(sbd, &s->io_ioport_window);
> +    } else {
> +        sysbus_init_mmio(sbd, &s->io_mmio);
> +        sysbus_init_mmio(sbd, &s->io_ioport);
> +    }
> +
>      for (i = 0; i < GPEX_NUM_IRQS; i++) {
>          sysbus_init_irq(sbd, &s->irq[i]);
>          s->irq_num[i] = -1;
> @@ -108,6 +147,16 @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
>      return "0000:00";
>  }
>  
> +static Property gpex_host_properties[] = {
> +    /*
> +     * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
> +     * (discarding writes and returning -1 for reads) rather than aborting.
> +     */
> +    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
> +                     allow_unmapped_accesses, true),
> +    DEFINE_PROP_END_OF_LIST(),
> +};
> +
>  static void gpex_host_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
>      dc->realize = gpex_host_realize;
>      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
>      dc->fw_name = "pci";
> +    device_class_set_props(dc, gpex_host_properties);

IMO this change belongs in the parent bridges,
TYPE_PCI_HOST_BRIDGE and TYPE_PCIE_HOST_BRIDGE.

Again, 6.0 is at the door, so this can be discussed /
updated later.

Regards,

Phil.



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-03-25 18:14 ` Philippe Mathieu-Daudé
@ 2021-04-19 13:42   ` Peter Maydell
  2021-04-20 11:52     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Maydell @ 2021-04-19 13:42 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: open list:RISC-V, Arnd Bergmann, Michael S. Tsirkin,
	QEMU Developers, qemu-arm, Dmitry Vyukov

On Thu, 25 Mar 2021 at 18:14, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Hi Peter,
>
> On 3/25/21 5:33 PM, Peter Maydell wrote:
> > Currently the gpex PCI controller implements no special behaviour for
> > guest accesses to areas of the PIO and MMIO where it has not mapped
> > any PCI devices, which means that for Arm you end up with a CPU
> > exception due to a data abort.
> >
> > Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> > like this return -1 for reads and ignore writes.  In the interests of
> > not being surprising, make host CPU accesses to these windows behave
> > as -1/discard where there's no mapped PCI device.
> >
> > The old behaviour generally didn't cause any problems, because
> > almost always the guest OS will map the PCI devices and then only
> > access where it has mapped them. One corner case where you will see
> > this kind of access is if Linux attempts to probe legacy ISA
> > devices via a PIO window access. So far the only case where we've
> > seen this has been via the syzkaller fuzzer.
> >
> > Reported-by: Dmitry Vyukov <dvyukov@google.com>
> > Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> > ---
> > v1->v2 changes: put in the hw_compat machinery.
> >
> > Still not sure if I want to put this in 6.0 or not.
> >
> >  include/hw/pci-host/gpex.h |  4 +++
> >  hw/core/machine.c          |  1 +
> >  hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
> >  3 files changed, 58 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> > index d48a020a952..fcf8b638200 100644
> > --- a/include/hw/pci-host/gpex.h
> > +++ b/include/hw/pci-host/gpex.h
> > @@ -49,8 +49,12 @@ struct GPEXHost {
> >
> >      MemoryRegion io_ioport;
> >      MemoryRegion io_mmio;
> > +    MemoryRegion io_ioport_window;
> > +    MemoryRegion io_mmio_window;
> >      qemu_irq irq[GPEX_NUM_IRQS];
> >      int irq_num[GPEX_NUM_IRQS];
> > +
> > +    bool allow_unmapped_accesses;
> >  };
> >
> >  struct GPEXConfig {
> > diff --git a/hw/core/machine.c b/hw/core/machine.c
> > index 257a664ea2e..9750fad7435 100644
> > --- a/hw/core/machine.c
> > +++ b/hw/core/machine.c
> > @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
> >      { "PIIX4_PM", "smm-compat", "on"},
> >      { "virtio-blk-device", "report-discard-granularity", "off" },
> >      { "virtio-net-pci", "vectors", "3"},
> > +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
> >  };
> >  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
> >
> > diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> > index 2bdbe7b4561..a6752fac5e8 100644
> > --- a/hw/pci-host/gpex.c
> > +++ b/hw/pci-host/gpex.c
> > @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
> >      int i;
> >
> >      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> > +    sysbus_init_mmio(sbd, &pex->mmio);
> > +
> > +    /*
> > +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
> > +     * to pci_register_root_bus() are not the same as the
> > +     * MemoryRegions io_mmio_window and io_ioport_window that we
> > +     * expose as SysBus MRs. The difference is in the behaviour of
> > +     * accesses to addresses where no PCI device has been mapped.
> > +     *
> > +     * io_mmio and io_ioport are the underlying PCI view of the PCI
> > +     * address space, and when a PCI device does a bus master access
> > +     * to a bad address this is reported back to it as a transaction
> > +     * failure.
> > +     *
> > +     * io_mmio_window and io_ioport_window implement "unmapped
> > +     * addresses read as -1 and ignore writes"; this is traditional
> > +     * x86 PC behaviour, which is not mandated by the PCI spec proper
> > +     * but expected by much PCI-using guest software, including Linux.
>
> I suspect PCI-ISA bridges to provide an EISA bus.

I'm not sure what you mean here -- there isn't an ISA bridge
or an EISA bus involved here. This is purely about the behaviour
of the memory window the PCI host controller exposes to the CPU
(and in particular the window for when a PCI device's BAR is
set to "IO" rather than "MMIO"), though we change both here.

> > +     * In the interests of not being unnecessarily surprising, we
> > +     * implement it in the gpex PCI host controller, by providing the
> > +     * _window MRs, which are containers with io ops that implement
> > +     * the 'background' behaviour and which hold the real PCI MRs as
> > +     * subregions.
> > +     */
> >      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
> >      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
> >
> > -    sysbus_init_mmio(sbd, &pex->mmio);
> > -    sysbus_init_mmio(sbd, &s->io_mmio);
> > -    sysbus_init_mmio(sbd, &s->io_ioport);
> > +    if (s->allow_unmapped_accesses) {
> > +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> > +                              &unassigned_io_ops, OBJECT(s),
> > +                              "gpex_mmio_window", UINT64_MAX);
>
> EISA -> 4 * GiB
>
> unassigned_io_ops allows 64-bit accesses. Here we want up to 32.
>
> Maybe we don't care.
>
> > +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> > +                              &unassigned_io_ops, OBJECT(s),
> > +                              "gpex_ioport_window", 64 * 1024);
>
> Ditto, unassigned_io_ops accepts 64-bit accesses.

These are just using the same sizes as the io_mmio and io_ioport
MRs which the existing code creates.

> >  static void gpex_host_class_init(ObjectClass *klass, void *data)
> >  {
> >      DeviceClass *dc = DEVICE_CLASS(klass);
> > @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
> >      dc->realize = gpex_host_realize;
> >      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> >      dc->fw_name = "pci";
> > +    device_class_set_props(dc, gpex_host_properties);
>
> IMO this change belongs to the parent bridges,
> TYPE_PCI_HOST_BRIDGE and TYPE_PCIE_HOST_BRIDGE.

Arnd had a look through the kernel sources and apparently not
all PCI host controllers do this -- there are a few SoCs where the
kernel has to put in special case code to allow for the fact that
it will get a bus error for accesses to unmapped parts of the
window. So I concluded that the specific controller implementation
was the right place for it.

thanks
-- PMM



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
                   ` (2 preceding siblings ...)
  2021-03-25 18:14 ` Philippe Mathieu-Daudé
@ 2021-04-20 10:24 ` Michael S. Tsirkin
  2021-04-20 12:39   ` Peter Maydell
  3 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-04-20 10:24 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Arnd Bergmann, qemu-arm, qemu-riscv, qemu-devel, Dmitry Vyukov

On Thu, Mar 25, 2021 at 04:33:15PM +0000, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
> 
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes.  In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
> 
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
> 
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


Looks ok superficially

Acked-by: Michael S. Tsirkin <mst@redhat.com>

Peter pls merge if appropriate.

> ---
> v1->v2 changes: put in the hw_compat machinery.
> 
> Still not sure if I want to put this in 6.0 or not.
> 
>  include/hw/pci-host/gpex.h |  4 +++
>  hw/core/machine.c          |  1 +
>  hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
>  3 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> index d48a020a952..fcf8b638200 100644
> --- a/include/hw/pci-host/gpex.h
> +++ b/include/hw/pci-host/gpex.h
> @@ -49,8 +49,12 @@ struct GPEXHost {
>  
>      MemoryRegion io_ioport;
>      MemoryRegion io_mmio;
> +    MemoryRegion io_ioport_window;
> +    MemoryRegion io_mmio_window;
>      qemu_irq irq[GPEX_NUM_IRQS];
>      int irq_num[GPEX_NUM_IRQS];
> +
> +    bool allow_unmapped_accesses;
>  };
>  
>  struct GPEXConfig {
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 257a664ea2e..9750fad7435 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>      { "PIIX4_PM", "smm-compat", "on"},
>      { "virtio-blk-device", "report-discard-granularity", "off" },
>      { "virtio-net-pci", "vectors", "3"},
> +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
>  };
>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>  
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> index 2bdbe7b4561..a6752fac5e8 100644
> --- a/hw/pci-host/gpex.c
> +++ b/hw/pci-host/gpex.c
> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>      int i;
>  
>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> +    sysbus_init_mmio(sbd, &pex->mmio);
> +
> +    /*
> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
> +     * to pci_register_root_bus() are not the same as the
> +     * MemoryRegions io_mmio_window and io_ioport_window that we
> +     * expose as SysBus MRs. The difference is in the behaviour of
> +     * accesses to addresses where no PCI device has been mapped.
> +     *
> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
> +     * address space, and when a PCI device does a bus master access
> +     * to a bad address this is reported back to it as a transaction
> +     * failure.
> +     *
> +     * io_mmio_window and io_ioport_window implement "unmapped
> +     * addresses read as -1 and ignore writes"; this is traditional
> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
> +     * but expected by much PCI-using guest software, including Linux.
> +     *
> +     * In the interests of not being unnecessarily surprising, we
> +     * implement it in the gpex PCI host controller, by providing the
> +     * _window MRs, which are containers with io ops that implement
> +     * the 'background' behaviour and which hold the real PCI MRs as
> +     * subregions.
> +     */
>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>  
> -    sysbus_init_mmio(sbd, &pex->mmio);
> -    sysbus_init_mmio(sbd, &s->io_mmio);
> -    sysbus_init_mmio(sbd, &s->io_ioport);
> +    if (s->allow_unmapped_accesses) {
> +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_mmio_window", UINT64_MAX);
> +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_ioport_window", 64 * 1024);
> +
> +        memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
> +        memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
> +        sysbus_init_mmio(sbd, &s->io_mmio_window);
> +        sysbus_init_mmio(sbd, &s->io_ioport_window);
> +    } else {
> +        sysbus_init_mmio(sbd, &s->io_mmio);
> +        sysbus_init_mmio(sbd, &s->io_ioport);
> +    }
> +
>      for (i = 0; i < GPEX_NUM_IRQS; i++) {
>          sysbus_init_irq(sbd, &s->irq[i]);
>          s->irq_num[i] = -1;
> @@ -108,6 +147,16 @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
>      return "0000:00";
>  }
>  
> +static Property gpex_host_properties[] = {
> +    /*
> +     * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
> +     * (discarding writes and returning -1 for reads) rather than aborting.
> +     */
> +    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
> +                     allow_unmapped_accesses, true),
> +    DEFINE_PROP_END_OF_LIST(),
> +};
> +
>  static void gpex_host_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
>      dc->realize = gpex_host_realize;
>      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
>      dc->fw_name = "pci";
> +    device_class_set_props(dc, gpex_host_properties);
>  }
>  
>  static void gpex_host_initfn(Object *obj)
> -- 
> 2.20.1




* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-04-19 13:42   ` Peter Maydell
@ 2021-04-20 11:52     ` Philippe Mathieu-Daudé
  2021-04-20 12:26       ` Arnd Bergmann
  0 siblings, 1 reply; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-04-20 11:52 UTC (permalink / raw)
  To: Peter Maydell
  Cc: open list:RISC-V, Arnd Bergmann, Michael S. Tsirkin,
	QEMU Developers, qemu-arm, Dmitry Vyukov

On 4/19/21 3:42 PM, Peter Maydell wrote:
> On Thu, 25 Mar 2021 at 18:14, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>>
>> Hi Peter,
>>
>> On 3/25/21 5:33 PM, Peter Maydell wrote:
>>> Currently the gpex PCI controller implements no special behaviour for
>>> guest accesses to areas of the PIO and MMIO where it has not mapped
>>> any PCI devices, which means that for Arm you end up with a CPU
>>> exception due to a data abort.
>>>
>>> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
>>> like this return -1 for reads and ignore writes.  In the interests of
>>> not being surprising, make host CPU accesses to these windows behave
>>> as -1/discard where there's no mapped PCI device.
>>>
>>> The old behaviour generally didn't cause any problems, because
>>> almost always the guest OS will map the PCI devices and then only
>>> access where it has mapped them. One corner case where you will see
>>> this kind of access is if Linux attempts to probe legacy ISA
>>> devices via a PIO window access. So far the only case where we've
>>> seen this has been via the syzkaller fuzzer.
>>>
>>> Reported-by: Dmitry Vyukov <dvyukov@google.com>
>>> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
>>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>>> ---
>>> v1->v2 changes: put in the hw_compat machinery.
>>>
>>> Still not sure if I want to put this in 6.0 or not.
>>>
>>>  include/hw/pci-host/gpex.h |  4 +++
>>>  hw/core/machine.c          |  1 +
>>>  hw/pci-host/gpex.c         | 56 ++++++++++++++++++++++++++++++++++++--
>>>  3 files changed, 58 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
>>> index d48a020a952..fcf8b638200 100644
>>> --- a/include/hw/pci-host/gpex.h
>>> +++ b/include/hw/pci-host/gpex.h
>>> @@ -49,8 +49,12 @@ struct GPEXHost {
>>>
>>>      MemoryRegion io_ioport;
>>>      MemoryRegion io_mmio;
>>> +    MemoryRegion io_ioport_window;
>>> +    MemoryRegion io_mmio_window;
>>>      qemu_irq irq[GPEX_NUM_IRQS];
>>>      int irq_num[GPEX_NUM_IRQS];
>>> +
>>> +    bool allow_unmapped_accesses;
>>>  };
>>>
>>>  struct GPEXConfig {
>>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>>> index 257a664ea2e..9750fad7435 100644
>>> --- a/hw/core/machine.c
>>> +++ b/hw/core/machine.c
>>> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>>>      { "PIIX4_PM", "smm-compat", "on"},
>>>      { "virtio-blk-device", "report-discard-granularity", "off" },
>>>      { "virtio-net-pci", "vectors", "3"},
>>> +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
>>>  };
>>>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>>>
>>> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
>>> index 2bdbe7b4561..a6752fac5e8 100644
>>> --- a/hw/pci-host/gpex.c
>>> +++ b/hw/pci-host/gpex.c
>>> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>>>      int i;
>>>
>>>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
>>> +    sysbus_init_mmio(sbd, &pex->mmio);
>>> +
>>> +    /*
>>> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
>>> +     * to pci_register_root_bus() are not the same as the
>>> +     * MemoryRegions io_mmio_window and io_ioport_window that we
>>> +     * expose as SysBus MRs. The difference is in the behaviour of
>>> +     * accesses to addresses where no PCI device has been mapped.
>>> +     *
>>> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
>>> +     * address space, and when a PCI device does a bus master access
>>> +     * to a bad address this is reported back to it as a transaction
>>> +     * failure.
>>> +     *
>>> +     * io_mmio_window and io_ioport_window implement "unmapped
>>> +     * addresses read as -1 and ignore writes"; this is traditional
>>> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
>>> +     * but expected by much PCI-using guest software, including Linux.
>>
>> I suspect PCI-ISA bridges to provide an EISA bus.
> 
> I'm not sure what you mean here -- there isn't an ISA bridge
> or an EISA bus involved here. This is purely about the behaviour
> of the memory window the PCI host controller exposes to the CPU
> (and in particular the window for when a PCI device's BAR is
> set to "IO" rather than "MMIO"), though we change both here.

I guess I always assumed the IO BARs were there to address ISA
backward compatibility. I don't know PCI well, so I'll study it
more. Sorry for my confused comment.

>>> +     * In the interests of not being unnecessarily surprising, we
>>> +     * implement it in the gpex PCI host controller, by providing the
>>> +     * _window MRs, which are containers with io ops that implement
>>> +     * the 'background' behaviour and which hold the real PCI MRs as
>>> +     * subregions.
>>> +     */
>>>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>>>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>>>
>>> -    sysbus_init_mmio(sbd, &pex->mmio);
>>> -    sysbus_init_mmio(sbd, &s->io_mmio);
>>> -    sysbus_init_mmio(sbd, &s->io_ioport);
>>> +    if (s->allow_unmapped_accesses) {
>>> +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
>>> +                              &unassigned_io_ops, OBJECT(s),
>>> +                              "gpex_mmio_window", UINT64_MAX);
>>
>> EISA -> 4 * GiB
>>
>> unassigned_io_ops allows 64-bit accesses. Here we want up to 32.
>>
>> Maybe we don't care.
>>
>>> +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
>>> +                              &unassigned_io_ops, OBJECT(s),
>>> +                              "gpex_ioport_window", 64 * 1024);
>>
>> Ditto, unassigned_io_ops accepts 64-bit accesses.
> 
> These are just using the same sizes as the io_mmio and io_ioport
> MRs which the existing code creates.
> 
>>>  static void gpex_host_class_init(ObjectClass *klass, void *data)
>>>  {
>>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>> @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
>>>      dc->realize = gpex_host_realize;
>>>      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
>>>      dc->fw_name = "pci";
>>> +    device_class_set_props(dc, gpex_host_properties);
>>
>> IMO this change belongs to the parent bridges,
>> TYPE_PCI_HOST_BRIDGE and TYPE_PCIE_HOST_BRIDGE.
> 
> Arnd had a look through the kernel sources and apparently not
> all PCI host controllers do this -- there are a few SoCs where the
> kernel has to put in special case code to allow for the fact that
> it will get a bus error for accesses to unmapped parts of the
> window. So I concluded that the specific controller implementation
> was the right place for it.

Yes, the changes are simple. I'm certainly not NAcking the patch,
but I can't review it either :( So please ignore my comments.



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-04-20 11:52     ` Philippe Mathieu-Daudé
@ 2021-04-20 12:26       ` Arnd Bergmann
  2021-04-20 12:31         ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 10+ messages in thread
From: Arnd Bergmann @ 2021-04-20 12:26 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Peter Maydell, open list:RISC-V, Michael S. Tsirkin,
	QEMU Developers, qemu-arm, Dmitry Vyukov

On Tue, Apr 20, 2021 at 1:52 PM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> On 4/19/21 3:42 PM, Peter Maydell wrote:
> >>
> >> I suspect PCI-ISA bridges to provide an EISA bus.
> >
> > I'm not sure what you mean here -- there isn't an ISA bridge
> > or an EISA bus involved here. This is purely about the behaviour
> > of the memory window the PCI host controller exposes to the CPU
> > (and in particular the window for when a PCI device's BAR is
> > set to "IO" rather than "MMIO"), though we change both here.
>
> I guess I always assumed the IO BARs were there to address ISA
> backward compatibility. I don't know PCI well, so I'll study it
> more. Sorry for my confused comment.

It is mostly for compatibility, but there are many layers of it:

- PCI supports actual ISA/EISA/VLB/PCMCIA/LPC/PC104/... style
  devices behind a bridge, using I/O ports at their native address.

- PCI devices themselves can have fixed I/O ports at well-known
  addresses, e.g. VGA or IDE/ATA adapters

- PCI devices can behave like legacy devices using port I/O
  but use PCI resource assignment to pick an arbitrary port
  number outside of the legacy range

- PCIe can support all of the above by virtue of being backwards
  compatible with PCI and allowing PCI buses behind bridges,
  though port I/O is deprecated here and often not supported at all

The first two are very rare these days, but Linux still supports them
in order to run on old hardware, and any driver for these that
assumes a hardcoded port number can crash the kernel if the
PCI host bridge causes an asynchronous external abort or
similar.

       Arnd



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-04-20 12:26       ` Arnd Bergmann
@ 2021-04-20 12:31         ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-04-20 12:31 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Peter Maydell, open list:RISC-V, Michael S. Tsirkin,
	QEMU Developers, qemu-arm, Dmitry Vyukov

On 4/20/21 2:26 PM, Arnd Bergmann wrote:
> On Tue, Apr 20, 2021 at 1:52 PM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>> On 4/19/21 3:42 PM, Peter Maydell wrote:
>>>>
>>>> I suspect PCI-ISA bridges to provide an EISA bus.
>>>
>>> I'm not sure what you mean here -- there isn't an ISA bridge
>>> or an EISA bus involved here. This is purely about the behaviour
>>> of the memory window the PCI host controller exposes to the CPU
>>> (and in particular the window for when a PCI device's BAR is
>>> set to "IO" rather than "MMIO"), though we change both here.
>>
>> I guess I always assumed the IO BARs were there to address ISA
>> backward compatibility. I don't know PCI well, so I'll study it
>> more. Sorry for my confused comment.
> 
> It is mostly for compatibility, but there are many layers of it:
> 
> - PCI supports actual ISA/EISA/VLB/PCMCIA/LPC/PC104/... style
>   devices behind a bridge, using I/O ports at their native address.
> 
> - PCI devices themselves can have fixed I/O ports at well-known
>   addresses, e.g. VGA or IDE/ATA adapters
> 
> - PCI devices can behave like legacy devices using port I/O
>   but use PCI resource assignment to pick an arbitrary port
>   number outside of the legacy range
> 
> - PCIe can support all of the above by virtue of being backwards
>   compatible with PCI and allowing PCI buses behind bridges,
>   though port I/O is deprecated here and often not supported at all
> 
> The first two are very rare these days, but Linux still supports them
> in order to run on old hardware, and any driver for these that
> assumes a hardcoded port number can crash the kernel if the
> PCI host bridge causes an asynchronous external abort or
> similar.

Thanks for the clarification and enumeration Arnd!



* Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
  2021-04-20 10:24 ` Michael S. Tsirkin
@ 2021-04-20 12:39   ` Peter Maydell
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Maydell @ 2021-04-20 12:39 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Arnd Bergmann, qemu-arm, open list:RISC-V, QEMU Developers,
	Dmitry Vyukov

On Tue, 20 Apr 2021 at 11:24, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Mar 25, 2021 at 04:33:15PM +0000, Peter Maydell wrote:
> > Currently the gpex PCI controller implements no special behaviour for
> > guest accesses to areas of the PIO and MMIO where it has not mapped
> > any PCI devices, which means that for Arm you end up with a CPU
> > exception due to a data abort.
> >
> > Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> > like this return -1 for reads and ignore writes.  In the interests of
> > not being surprising, make host CPU accesses to these windows behave
> > as -1/discard where there's no mapped PCI device.
> >
> > The old behaviour generally didn't cause any problems, because
> > almost always the guest OS will map the PCI devices and then only
> > access where it has mapped them. One corner case where you will see
> > this kind of access is if Linux attempts to probe legacy ISA
> > devices via a PIO window access. So far the only case where we've
> > seen this has been via the syzkaller fuzzer.
> >
> > Reported-by: Dmitry Vyukov <dvyukov@google.com>
> > Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>
>
> Looks ok superficially
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> Peter pls merge if appropriate.

Thanks; I'll take it via target-arm.next for 6.1 (it'll need
a tweak to use hw_compat_6_0 rather than hw_compat_5_2 so it might
need to wait until the patch adding hw_compat_6_0 hits master.)

-- PMM



end of thread

Thread overview: 10+ messages
2021-03-25 16:33 [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows Peter Maydell
2021-03-25 17:01 ` Richard Henderson
2021-03-25 17:03 ` Michael S. Tsirkin
2021-03-25 18:14 ` Philippe Mathieu-Daudé
2021-04-19 13:42   ` Peter Maydell
2021-04-20 11:52     ` Philippe Mathieu-Daudé
2021-04-20 12:26       ` Arnd Bergmann
2021-04-20 12:31         ` Philippe Mathieu-Daudé
2021-04-20 10:24 ` Michael S. Tsirkin
2021-04-20 12:39   ` Peter Maydell
