All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-10-17 12:19 Karol Herbst
  2019-11-14 19:17   ` Karol Herbst
                   ` (2 more replies)
  0 siblings, 3 replies; 150+ messages in thread
From: Karol Herbst @ 2019-10-17 12:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: Karol Herbst, Bjorn Helgaas, Lyude Paul, Rafael J . Wysocki,
	Mika Westerberg, linux-pci, linux-pm, dri-devel, nouveau

Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
states.

v2: convert to pci_dev quirk
    put a proper technical explanation of the issue as a in-code comment
v3: disable it only for certain combinations of intel and nvidia hardware
v4: simplify quirk by setting flag on the GPU itself

Signed-off-by: Karol Herbst <kherbst@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Mika Westerberg <mika.westerberg@intel.com>
Cc: linux-pci@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
---
 drivers/pci/pci.c    |  7 ++++++
 drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/pci.h  |  1 +
 3 files changed, 61 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index b97d9e10c9cc..02e71e0bcdd7 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	   || (state == PCI_D2 && !dev->d2_support))
 		return -EIO;
 
+	/*
+	 * check if we have a bad combination of bridge controller and nvidia
+         * GPU, see quirk_broken_nv_runpm for more info
+	 */
+	if (state != PCI_D0 && dev->broken_nv_runpm)
+		return 0;
+
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 
 	/*
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 44c4ae1abd00..0006c9e37b6f 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
 			      PCI_CLASS_DISPLAY_VGA, 8,
 			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+
+/*
+ * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
+ * those were put into D3cold state if they were put into a non D0 PCI PM
+ * device state before doing so.
+ *
+ * This leads to various issue different issues which all manifest differently,
+ * but have the same root cause:
+ *  - AIML code execution hits an infinite loop (as the coe waits on device
+ *    memory to change).
+ *  - kernel crashes, as all pci reads return -1, which most code isn't able
+ *    to handle well enough.
+ *  - sudden shutdowns, as the kernel identified an unrecoverable error after
+ *    userspace tries to access the GPU.
+ *
+ * In all cases dmesg will contain at least one line like this:
+ * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
+ * followed by a lot of nouveau timeouts.
+ *
+ * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
+ * PCIe bridge controller in order to power down the GPU.
+ * Nonetheless, there are other code paths inside the ACPI firmware which use
+ * other registers, which seem to work fine:
+ *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
+ *  - 0xb0 bit 0x10 (link disable)
+ * Changing the conditions inside the firmware by poking into the relevant
+ * addresses does resolve the issue, but it seemed to be ACPI private memory
+ * and not any device accessible memory at all, so there is no portable way of
+ * changing the conditions.
+ *
+ * The only systems where this behavior can be seen are hybrid graphics laptops
+ * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
+ * only occurs in combination with listed Intel PCIe bridge controllers and
+ * the mentioned GPUs or if it's only a hw bug in the bridge controller.
+ *
+ * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
+ * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
+ * in the bridge controller rather than in the GPU.
+ *
+ * This issue was not able to be reproduced on non laptop systems.
+ */
+
+static void quirk_broken_nv_runpm(struct pci_dev *dev)
+{
+	struct pci_dev *bridge = pci_upstream_bridge(dev);
+
+	if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
+	    bridge->device == 0x1901)
+		dev->broken_nv_runpm = 1;
+}
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+			      PCI_BASE_CLASS_DISPLAY, 16,
+			      quirk_broken_nv_runpm);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index ac8a6c4e1792..903a0b3a39ec 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -416,6 +416,7 @@ struct pci_dev {
 	unsigned int	__aer_firmware_first_valid:1;
 	unsigned int	__aer_firmware_first:1;
 	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
+	unsigned int	broken_nv_runpm:1;	/* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
 	unsigned int	io_window_1k:1;		/* Intel bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
 	unsigned int	has_secondary_link:1;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-14 19:17   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-14 19:17 UTC (permalink / raw)
  To: LKML
  Cc: Bjorn Helgaas, Lyude Paul, Rafael J . Wysocki, Mika Westerberg,
	Linux PCI, Linux PM, dri-devel, nouveau

ping on the patch.

I wasn't able to verify this issue on any other bridge controller, so
it really might be only this one.

On Thu, Oct 17, 2019 at 2:19 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.
>
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself
>
> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>            || (state == PCI_D2 && !dev->d2_support))
>                 return -EIO;
>
> +       /*
> +        * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info
> +        */
> +       if (state != PCI_D0 && dev->broken_nv_runpm)
> +               return 0;
> +
>         pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>
>         /*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>                               PCI_CLASS_DISPLAY_VGA, 8,
>                               quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.
> + *
> + * This leads to various issue different issues which all manifest differently,
> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).
> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.
> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.
> + *
> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)
> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.
> + *
> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +       struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +       if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +           bridge->device == 0x1901)
> +               dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +                             PCI_BASE_CLASS_DISPLAY, 16,
> +                             quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>         unsigned int    __aer_firmware_first_valid:1;
>         unsigned int    __aer_firmware_first:1;
>         unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> +       unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>         unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
>         unsigned int    irq_managed:1;
>         unsigned int    has_secondary_link:1;
> --
> 2.21.0
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-14 19:17   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-14 19:17 UTC (permalink / raw)
  To: LKML
  Cc: Linux PM, Linux PCI, Mika Westerberg, Rafael J . Wysocki,
	dri-devel, nouveau, Bjorn Helgaas

ping on the patch.

I wasn't able to verify this issue on any other bridge controller, so
it really might be only this one.

On Thu, Oct 17, 2019 at 2:19 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.
>
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself
>
> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>            || (state == PCI_D2 && !dev->d2_support))
>                 return -EIO;
>
> +       /*
> +        * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info
> +        */
> +       if (state != PCI_D0 && dev->broken_nv_runpm)
> +               return 0;
> +
>         pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>
>         /*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>                               PCI_CLASS_DISPLAY_VGA, 8,
>                               quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.
> + *
> + * This leads to various issue different issues which all manifest differently,
> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).
> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.
> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.
> + *
> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)
> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.
> + *
> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +       struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +       if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +           bridge->device == 0x1901)
> +               dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +                             PCI_BASE_CLASS_DISPLAY, 16,
> +                             quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>         unsigned int    __aer_firmware_first_valid:1;
>         unsigned int    __aer_firmware_first:1;
>         unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> +       unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>         unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
>         unsigned int    irq_managed:1;
>         unsigned int    has_secondary_link:1;
> --
> 2.21.0
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 20:06   ` Dave Airlie
  0 siblings, 0 replies; 150+ messages in thread
From: Dave Airlie @ 2019-11-19 20:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: LKML, Linux PM list, Linux PCI, Mika Westerberg,
	Rafael J . Wysocki, dri-devel, nouveau, Bjorn Helgaas

On Thu, 17 Oct 2019 at 22:19, Karol Herbst <kherbst@redhat.com> wrote:
>
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.


Can we get this acked/committed? At this stage I think we've done all
we can unless Intel actually escalate this internally and work out how
the hw is broken.

Dave.
>
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself
>
> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>            || (state == PCI_D2 && !dev->d2_support))
>                 return -EIO;
>
> +       /*
> +        * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info
> +        */
> +       if (state != PCI_D0 && dev->broken_nv_runpm)
> +               return 0;
> +
>         pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>
>         /*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>                               PCI_CLASS_DISPLAY_VGA, 8,
>                               quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.
> + *
> + * This leads to various issue different issues which all manifest differently,
> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).
> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.
> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.
> + *
> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)
> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.
> + *
> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +       struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +       if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +           bridge->device == 0x1901)
> +               dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +                             PCI_BASE_CLASS_DISPLAY, 16,
> +                             quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>         unsigned int    __aer_firmware_first_valid:1;
>         unsigned int    __aer_firmware_first:1;
>         unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> +       unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>         unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
>         unsigned int    irq_managed:1;
>         unsigned int    has_secondary_link:1;
> --
> 2.21.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 20:06   ` Dave Airlie
  0 siblings, 0 replies; 150+ messages in thread
From: Dave Airlie @ 2019-11-19 20:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Linux PM list, Linux PCI, Mika Westerberg, Rafael J . Wysocki,
	LKML, dri-devel, nouveau, Bjorn Helgaas

On Thu, 17 Oct 2019 at 22:19, Karol Herbst <kherbst@redhat.com> wrote:
>
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.


Can we get this acked/committed? At this stage I think we've done all
we can unless Intel actually escalate this internally and work out how
the hw is broken.

Dave.
>
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself
>
> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>            || (state == PCI_D2 && !dev->d2_support))
>                 return -EIO;
>
> +       /*
> +        * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info
> +        */
> +       if (state != PCI_D0 && dev->broken_nv_runpm)
> +               return 0;
> +
>         pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>
>         /*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>                               PCI_CLASS_DISPLAY_VGA, 8,
>                               quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.
> + *
> + * This leads to various issue different issues which all manifest differently,
> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).
> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.
> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.
> + *
> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)
> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.
> + *
> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +       struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +       if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +           bridge->device == 0x1901)
> +               dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +                             PCI_BASE_CLASS_DISPLAY, 16,
> +                             quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>         unsigned int    __aer_firmware_first_valid:1;
>         unsigned int    __aer_firmware_first:1;
>         unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> +       unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>         unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
>         unsigned int    irq_managed:1;
>         unsigned int    has_secondary_link:1;
> --
> 2.21.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 20:06   ` Dave Airlie
  0 siblings, 0 replies; 150+ messages in thread
From: Dave Airlie @ 2019-11-19 20:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Linux PM list, Linux PCI, Mika Westerberg, Rafael J . Wysocki,
	LKML, dri-devel, nouveau, Bjorn Helgaas

On Thu, 17 Oct 2019 at 22:19, Karol Herbst <kherbst@redhat.com> wrote:
>
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.


Can we get this acked/committed? At this stage I think we've done all
we can unless Intel actually escalate this internally and work out how
the hw is broken.

Dave.
>
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself
>
> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>            || (state == PCI_D2 && !dev->d2_support))
>                 return -EIO;
>
> +       /*
> +        * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info
> +        */
> +       if (state != PCI_D0 && dev->broken_nv_runpm)
> +               return 0;
> +
>         pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>
>         /*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>                               PCI_CLASS_DISPLAY_VGA, 8,
>                               quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.
> + *
> + * This leads to various issue different issues which all manifest differently,
> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).
> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.
> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.
> + *
> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)
> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.
> + *
> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +       struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +       if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +           bridge->device == 0x1901)
> +               dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +                             PCI_BASE_CLASS_DISPLAY, 16,
> +                             quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>         unsigned int    __aer_firmware_first_valid:1;
>         unsigned int    __aer_firmware_first:1;
>         unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> +       unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>         unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
>         unsigned int    irq_managed:1;
>         unsigned int    has_secondary_link:1;
> --
> 2.21.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 21:49   ` Bjorn Helgaas
  0 siblings, 0 replies; 150+ messages in thread
From: Bjorn Helgaas @ 2019-11-19 21:49 UTC (permalink / raw)
  To: Karol Herbst
  Cc: linux-kernel, Lyude Paul, Rafael J . Wysocki, Mika Westerberg,
	linux-pci, linux-pm, dri-devel, nouveau, Dave Airlie

[+cc Dave]

On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.
> 
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself

I have zero confidence that we understand the real problem, but we do
need to do something with this.  I'll merge it for v5.5 if we get the
minor procedural stuff below straightened out.

> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>  	   || (state == PCI_D2 && !dev->d2_support))
>  		return -EIO;
>  
> +	/*
> +	 * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info

Whitespace damage.  Capitalized incorrectly (see other comments
nearby).

> +	 */
> +	if (state != PCI_D0 && dev->broken_nv_runpm)
> +		return 0;
> +
>  	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>  
>  	/*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>  			      PCI_CLASS_DISPLAY_VGA, 8,
>  			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.

A device in D3cold is off the bus by definition.

IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
works fine.

So I guess the problem is that we can put the device in D3cold with no
problem, but if we put in D3hot before going to D3cold, the device
never comes back to D0.  Right?

> + * This leads to various issue different issues which all manifest differently,

s/issue different//

Actually, I think there's only one underlying issue with several
manifestations.

> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).

s/AIML/AML/
s/coe/code/

> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.

s/pci/PCI/

More details about these crashes would be useful as we look at places
that *should* be able to handle errors like this.

> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.

This doesn't fit with the others and more details might be
informative here as well.

> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)

All these register addresses are device-specific, so they're useless
without identifying the device.  "lspci -vvnn" output would let us at
least connect this with something.  It would be nice to have that info
archived along with your acpidump and python repro scripts in a
bugzilla with the URL in the commit log.

These are likely in PCI capabilities.  If I make the leap of assuming
the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
Control register is at 0xb0 and the register at 0xbc would be the Root
Control register, and indeed 0x20 in Root Control is reserved.

I don't know what the relevance of all this is, though.  It's not
remarkable that accesses to these registers work.

Unless you mean you can access these registers *after* trying to put
the device back in D0, but other accesses to the device fail.  That
would indeed be very interesting.

> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.

I don't think we can conclude anything about where the defect is and I
don't think speculating here will help future readers of this code.

I *would* still like to see a bugzilla listing the systems where this
problem has been seen with the "lspci -vvnn", dmesg logs, and at least
one acpidump.  I think there's more to this story, and I suspect we
may be revisiting this in the future.

> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +	struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +	if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +	    bridge->device == 0x1901)

pci_upstream_bridge() may return NULL, so you need

  if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...

https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
says Skylake and Kaby Lake SoCs are affected.  But here you only check
for one Device ID?

> +		dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +			      PCI_BASE_CLASS_DISPLAY, 16,
> +			      quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>  	unsigned int	__aer_firmware_first_valid:1;
>  	unsigned int	__aer_firmware_first:1;
>  	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
> +	unsigned int	broken_nv_runpm:1;	/* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>  	unsigned int	io_window_1k:1;		/* Intel bridge 1K I/O windows */
>  	unsigned int	irq_managed:1;
>  	unsigned int	has_secondary_link:1;
> -- 
> 2.21.0
> 

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 21:49   ` Bjorn Helgaas
  0 siblings, 0 replies; 150+ messages in thread
From: Bjorn Helgaas @ 2019-11-19 21:49 UTC (permalink / raw)
  To: Karol Herbst
  Cc: linux-pm, linux-pci, Mika Westerberg, Rafael J . Wysocki,
	linux-kernel, dri-devel, nouveau

[+cc Dave]

On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> states.
> 
> v2: convert to pci_dev quirk
>     put a proper technical explanation of the issue as a in-code comment
> v3: disable it only for certain combinations of intel and nvidia hardware
> v4: simplify quirk by setting flag on the GPU itself

I have zero confidence that we understand the real problem, but we do
need to do something with this.  I'll merge it for v5.5 if we get the
minor procedural stuff below straightened out.

> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> Cc: Mika Westerberg <mika.westerberg@intel.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c    |  7 ++++++
>  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/pci.h  |  1 +
>  3 files changed, 61 insertions(+)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index b97d9e10c9cc..02e71e0bcdd7 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>  	   || (state == PCI_D2 && !dev->d2_support))
>  		return -EIO;
>  
> +	/*
> +	 * check if we have a bad combination of bridge controller and nvidia
> +         * GPU, see quirk_broken_nv_runpm for more info

Whitespace damage.  Capitalized incorrectly (see other comments
nearby).

> +	 */
> +	if (state != PCI_D0 && dev->broken_nv_runpm)
> +		return 0;
> +
>  	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
>  
>  	/*
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 44c4ae1abd00..0006c9e37b6f 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
>  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
>  			      PCI_CLASS_DISPLAY_VGA, 8,
>  			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
> +
> +/*
> + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> + * those were put into D3cold state if they were put into a non D0 PCI PM
> + * device state before doing so.

A device in D3cold is off the bus by definition.

IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
works fine.

So I guess the problem is that we can put the device in D3cold with no
problem, but if we put in D3hot before going to D3cold, the device
never comes back to D0.  Right?

> + * This leads to various issue different issues which all manifest differently,

s/issue different//

Actually, I think there's only one underlying issue with several
manifestations.

> + * but have the same root cause:
> + *  - AIML code execution hits an infinite loop (as the coe waits on device
> + *    memory to change).

s/AIML/AML/
s/coe/code/

> + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> + *    to handle well enough.

s/pci/PCI/

More details about these crashes would be useful as we look at places
that *should* be able to handle errors like this.

> + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> + *    userspace tries to access the GPU.

This doesn't fit with the others and more details might be
informative here as well.

> + * In all cases dmesg will contain at least one line like this:
> + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> + * followed by a lot of nouveau timeouts.
> + *
> + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> + * PCIe bridge controller in order to power down the GPU.
> + * Nonetheless, there are other code paths inside the ACPI firmware which use
> + * other registers, which seem to work fine:
> + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> + *  - 0xb0 bit 0x10 (link disable)

All these register addresses are device-specific, so they're useless
without identifying the device.  "lspci -vvnn" output would let us at
least connect this with something.  It would be nice to have that info
archived along with your acpidump and python repro scripts in a
bugzilla with the URL in the commit log.

These are likely in PCI capabilities.  If I make the leap of assuming
the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
Control register is at 0xb0 and the register at 0xbc would be the Root
Control register, and indeed 0x20 in Root Control is reserved.

I don't know what the relevance of all this is, though.  It's not
remarkable that accesses to these registers work.

Unless you mean you can access these registers *after* trying to put
the device back in D0, but other accesses to the device fail.  That
would indeed be very interesting.

> + * Changing the conditions inside the firmware by poking into the relevant
> + * addresses does resolve the issue, but it seemed to be ACPI private memory
> + * and not any device accessible memory at all, so there is no portable way of
> + * changing the conditions.
> + *
> + * The only systems where this behavior can be seen are hybrid graphics laptops
> + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> + * only occurs in combination with listed Intel PCIe bridge controllers and
> + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> + *
> + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> + * in the bridge controller rather than in the GPU.

I don't think we can conclude anything about where the defect is and I
don't think speculating here will help future readers of this code.

I *would* still like to see a bugzilla listing the systems where this
problem has been seen with the "lspci -vvnn", dmesg logs, and at least
one acpidump.  I think there's more to this story, and I suspect we
may be revisiting this in the future.

> + * This issue was not able to be reproduced on non laptop systems.
> + */
> +
> +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> +{
> +	struct pci_dev *bridge = pci_upstream_bridge(dev);
> +
> +	if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> +	    bridge->device == 0x1901)

pci_upstream_bridge() may return NULL, so you need

  if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...

https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
says Skylake and Kaby Lake SoCs are affected.  But here you only check
for one Device ID?

> +		dev->broken_nv_runpm = 1;
> +}
> +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> +			      PCI_BASE_CLASS_DISPLAY, 16,
> +			      quirk_broken_nv_runpm);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index ac8a6c4e1792..903a0b3a39ec 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -416,6 +416,7 @@ struct pci_dev {
>  	unsigned int	__aer_firmware_first_valid:1;
>  	unsigned int	__aer_firmware_first:1;
>  	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
> +	unsigned int	broken_nv_runpm:1;	/* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
>  	unsigned int	io_window_1k:1;		/* Intel bridge 1K I/O windows */
>  	unsigned int	irq_managed:1;
>  	unsigned int	has_secondary_link:1;
> -- 
> 2.21.0
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 22:26     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-19 22:26 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: LKML, Lyude Paul, Rafael J . Wysocki, Mika Westerberg, Linux PCI,
	Linux PM, dri-devel, nouveau, Dave Airlie

On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Dave]
>
> On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > states.
> >
> > v2: convert to pci_dev quirk
> >     put a proper technical explanation of the issue as a in-code comment
> > v3: disable it only for certain combinations of intel and nvidia hardware
> > v4: simplify quirk by setting flag on the GPU itself
>
> I have zero confidence that we understand the real problem, but we do
> need to do something with this.  I'll merge it for v5.5 if we get the
> minor procedural stuff below straightened out.
>

Thanks, and I agree with your statement, but at this point I think
only Intel can help out digging deeper as I see no way to debug this
further.

> > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > Cc: Lyude Paul <lyude@redhat.com>
> > Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> > Cc: Mika Westerberg <mika.westerberg@intel.com>
> > Cc: linux-pci@vger.kernel.org
> > Cc: linux-pm@vger.kernel.org
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: nouveau@lists.freedesktop.org
> > ---
> >  drivers/pci/pci.c    |  7 ++++++
> >  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/pci.h  |  1 +
> >  3 files changed, 61 insertions(+)
> >
> > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > index b97d9e10c9cc..02e71e0bcdd7 100644
> > --- a/drivers/pci/pci.c
> > +++ b/drivers/pci/pci.c
> > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> >          || (state == PCI_D2 && !dev->d2_support))
> >               return -EIO;
> >
> > +     /*
> > +      * check if we have a bad combination of bridge controller and nvidia
> > +         * GPU, see quirk_broken_nv_runpm for more info
>
> Whitespace damage.  Capitalized incorrectly (see other comments
> nearby).
>
> > +      */
> > +     if (state != PCI_D0 && dev->broken_nv_runpm)
> > +             return 0;
> > +
> >       pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
> >
> >       /*
> > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > index 44c4ae1abd00..0006c9e37b6f 100644
> > --- a/drivers/pci/quirks.c
> > +++ b/drivers/pci/quirks.c
> > @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
> >  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
> >                             PCI_CLASS_DISPLAY_VGA, 8,
> >                             quirk_reset_lenovo_thinkpad_p50_nvgpu);
> > +
> > +/*
> > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> > + * those were put into D3cold state if they were put into a non D0 PCI PM
> > + * device state before doing so.
>
> A device in D3cold is off the bus by definition.
>
> IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
> the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
> works fine.
>
> So I guess the problem is that we can put the device in D3cold with no
> problem, but if we put in D3hot before going to D3cold, the device
> never comes back to D0.  Right?
>

correct. It By the way, it doesn't matter if I put the device into D1
instead, as long as the device is not in D0 state before putting it
into D3cold, it fails.

> > + * This leads to various issue different issues which all manifest differently,
>
> s/issue different//
>
> Actually, I think there's only one underlying issue with several
> manifestations.
>
> > + * but have the same root cause:
> > + *  - AIML code execution hits an infinite loop (as the coe waits on device
> > + *    memory to change).
>
> s/AIML/AML/
> s/coe/code/
>
> > + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> > + *    to handle well enough.
>
> s/pci/PCI/
>
> More details about these crashes would be useful as we look at places
> that *should* be able to handle errors like this.
>

makes sense, I could ,orthogonal to this, make the code more robust if
we hit issues like this in the future. What I am mostly wondering
about is, why pci core doesn't give up if the device doesn't come back
from D3cold? It sounds like, that the most sane thing to do here is to
just give up and fail runtime_resume and report errors back to
userspace trying to make use of the devices.

> > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > + *    userspace tries to access the GPU.
>
> This doesn't fit with the others and more details might be
> informative here as well.
>

yeah.. I try to get more infos on that. But at least for me (and it
might be a distribution thing) if I execute lspci, the system shuts
down, or at least tries to and might fail.

> > + * In all cases dmesg will contain at least one line like this:
> > + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> > + * followed by a lot of nouveau timeouts.
> > + *
> > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> > + * PCIe bridge controller in order to power down the GPU.
> > + * Nonetheless, there are other code paths inside the ACPI firmware which use
> > + * other registers, which seem to work fine:
> > + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> > + *  - 0xb0 bit 0x10 (link disable)
>
> All these register addresses are device-specific, so they're useless
> without identifying the device.  "lspci -vvnn" output would let us at
> least connect this with something.  It would be nice to have that info
> archived along with your acpidump and python repro scripts in a
> bugzilla with the URL in the commit log.
>
> These are likely in PCI capabilities.  If I make the leap of assuming
> the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
> Control register is at 0xb0 and the register at 0xbc would be the Root
> Control register, and indeed 0x20 in Root Control is reserved.
>
> I don't know what the relevance of all this is, though.  It's not
> remarkable that accesses to these registers work.
>

those are registers on the bridge controller and are using inside ACPI
to power down the link. Depending on the OS detected other methods are
used afaik.

> Unless you mean you can access these registers *after* trying to put
> the device back in D0, but other accesses to the device fail.  That
> would indeed be very interesting.
>
> > + * Changing the conditions inside the firmware by poking into the relevant
> > + * addresses does resolve the issue, but it seemed to be ACPI private memory
> > + * and not any device accessible memory at all, so there is no portable way of
> > + * changing the conditions.
> > + *
> > + * The only systems where this behavior can be seen are hybrid graphics laptops
> > + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> > + * only occurs in combination with listed Intel PCIe bridge controllers and
> > + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> > + *
> > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> > + * in the bridge controller rather than in the GPU.
>
> I don't think we can conclude anything about where the defect is and I
> don't think speculating here will help future readers of this code.
>
> I *would* still like to see a bugzilla listing the systems where this
> problem has been seen with the "lspci -vvnn", dmesg logs, and at least
> one acpidump.  I think there's more to this story, and I suspect we
> may be revisiting this in the future.
>

one big one is https://bugzilla.kernel.org/show_bug.cgi?id=156341 but
it's filled with a lot of different reports, but I am sure most of
them point to this very issue.

Sadly nobody thought of checking lspci with runpm disabled.. but I
could check for other bugs.

> > + * This issue was not able to be reproduced on non laptop systems.
> > + */
> > +
> > +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> > +{
> > +     struct pci_dev *bridge = pci_upstream_bridge(dev);
> > +
> > +     if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> > +         bridge->device == 0x1901)
>
> pci_upstream_bridge() may return NULL, so you need
>
>   if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...
>
> https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
> says Skylake and Kaby Lake SoCs are affected.  But here you only check
> for one Device ID?
>

yes, I found this bridge controllers on skylake and kaby lake SoCs,
but I could verify there are systems with a different architecture
(using the "PCI Express Root Port" devices instead of "Processor PCIe
Controller") do not show this issue, so I think it might indeed be
just this one bridge controller. I couldn't verify this issue on any
other so far.

But I could verify this issue with this one bridge controller in
combination with Maxwell, Pascal and Turing Nvidia GPUs.

> > +             dev->broken_nv_runpm = 1;
> > +}
> > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> > +                           PCI_BASE_CLASS_DISPLAY, 16,
> > +                           quirk_broken_nv_runpm);
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index ac8a6c4e1792..903a0b3a39ec 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -416,6 +416,7 @@ struct pci_dev {
> >       unsigned int    __aer_firmware_first_valid:1;
> >       unsigned int    __aer_firmware_first:1;
> >       unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> > +     unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
> >       unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
> >       unsigned int    irq_managed:1;
> >       unsigned int    has_secondary_link:1;
> > --
> > 2.21.0
> >
>

Will send out a v5 later addressing you review. Thanks!


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 22:26     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-19 22:26 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Linux PM, Linux PCI, Mika Westerberg, Rafael J . Wysocki, LKML,
	dri-devel, nouveau

On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Dave]
>
> On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > states.
> >
> > v2: convert to pci_dev quirk
> >     put a proper technical explanation of the issue as a in-code comment
> > v3: disable it only for certain combinations of intel and nvidia hardware
> > v4: simplify quirk by setting flag on the GPU itself
>
> I have zero confidence that we understand the real problem, but we do
> need to do something with this.  I'll merge it for v5.5 if we get the
> minor procedural stuff below straightened out.
>

Thanks, and I agree with your statement, but at this point I think
only Intel can help out digging deeper as I see no way to debug this
further.

> > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > Cc: Lyude Paul <lyude@redhat.com>
> > Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> > Cc: Mika Westerberg <mika.westerberg@intel.com>
> > Cc: linux-pci@vger.kernel.org
> > Cc: linux-pm@vger.kernel.org
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: nouveau@lists.freedesktop.org
> > ---
> >  drivers/pci/pci.c    |  7 ++++++
> >  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/pci.h  |  1 +
> >  3 files changed, 61 insertions(+)
> >
> > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > index b97d9e10c9cc..02e71e0bcdd7 100644
> > --- a/drivers/pci/pci.c
> > +++ b/drivers/pci/pci.c
> > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> >          || (state == PCI_D2 && !dev->d2_support))
> >               return -EIO;
> >
> > +     /*
> > +      * check if we have a bad combination of bridge controller and nvidia
> > +         * GPU, see quirk_broken_nv_runpm for more info
>
> Whitespace damage.  Capitalized incorrectly (see other comments
> nearby).
>
> > +      */
> > +     if (state != PCI_D0 && dev->broken_nv_runpm)
> > +             return 0;
> > +
> >       pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
> >
> >       /*
> > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > index 44c4ae1abd00..0006c9e37b6f 100644
> > --- a/drivers/pci/quirks.c
> > +++ b/drivers/pci/quirks.c
> > @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
> >  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
> >                             PCI_CLASS_DISPLAY_VGA, 8,
> >                             quirk_reset_lenovo_thinkpad_p50_nvgpu);
> > +
> > +/*
> > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> > + * those were put into D3cold state if they were put into a non D0 PCI PM
> > + * device state before doing so.
>
> A device in D3cold is off the bus by definition.
>
> IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
> the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
> works fine.
>
> So I guess the problem is that we can put the device in D3cold with no
> problem, but if we put in D3hot before going to D3cold, the device
> never comes back to D0.  Right?
>

correct. It By the way, it doesn't matter if I put the device into D1
instead, as long as the device is not in D0 state before putting it
into D3cold, it fails.

> > + * This leads to various issue different issues which all manifest differently,
>
> s/issue different//
>
> Actually, I think there's only one underlying issue with several
> manifestations.
>
> > + * but have the same root cause:
> > + *  - AIML code execution hits an infinite loop (as the coe waits on device
> > + *    memory to change).
>
> s/AIML/AML/
> s/coe/code/
>
> > + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> > + *    to handle well enough.
>
> s/pci/PCI/
>
> More details about these crashes would be useful as we look at places
> that *should* be able to handle errors like this.
>

makes sense, I could ,orthogonal to this, make the code more robust if
we hit issues like this in the future. What I am mostly wondering
about is, why pci core doesn't give up if the device doesn't come back
from D3cold? It sounds like, that the most sane thing to do here is to
just give up and fail runtime_resume and report errors back to
userspace trying to make use of the devices.

> > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > + *    userspace tries to access the GPU.
>
> This doesn't fit with the others and more details might be
> informative here as well.
>

yeah.. I try to get more infos on that. But at least for me (and it
might be a distribution thing) if I execute lspci, the system shuts
down, or at least tries to and might fail.

> > + * In all cases dmesg will contain at least one line like this:
> > + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> > + * followed by a lot of nouveau timeouts.
> > + *
> > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> > + * PCIe bridge controller in order to power down the GPU.
> > + * Nonetheless, there are other code paths inside the ACPI firmware which use
> > + * other registers, which seem to work fine:
> > + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> > + *  - 0xb0 bit 0x10 (link disable)
>
> All these register addresses are device-specific, so they're useless
> without identifying the device.  "lspci -vvnn" output would let us at
> least connect this with something.  It would be nice to have that info
> archived along with your acpidump and python repro scripts in a
> bugzilla with the URL in the commit log.
>
> These are likely in PCI capabilities.  If I make the leap of assuming
> the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
> Control register is at 0xb0 and the register at 0xbc would be the Root
> Control register, and indeed 0x20 in Root Control is reserved.
>
> I don't know what the relevance of all this is, though.  It's not
> remarkable that accesses to these registers work.
>

those are registers on the bridge controller and are using inside ACPI
to power down the link. Depending on the OS detected other methods are
used afaik.

> Unless you mean you can access these registers *after* trying to put
> the device back in D0, but other accesses to the device fail.  That
> would indeed be very interesting.
>
> > + * Changing the conditions inside the firmware by poking into the relevant
> > + * addresses does resolve the issue, but it seemed to be ACPI private memory
> > + * and not any device accessible memory at all, so there is no portable way of
> > + * changing the conditions.
> > + *
> > + * The only systems where this behavior can be seen are hybrid graphics laptops
> > + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> > + * only occurs in combination with listed Intel PCIe bridge controllers and
> > + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> > + *
> > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> > + * in the bridge controller rather than in the GPU.
>
> I don't think we can conclude anything about where the defect is and I
> don't think speculating here will help future readers of this code.
>
> I *would* still like to see a bugzilla listing the systems where this
> problem has been seen with the "lspci -vvnn", dmesg logs, and at least
> one acpidump.  I think there's more to this story, and I suspect we
> may be revisiting this in the future.
>

one big one is https://bugzilla.kernel.org/show_bug.cgi?id=156341 but
it's filled with a lot of different reports, but I am sure most of
them point to this very issue.

Sadly nobody thought of checking lspci with runpm disabled.. but I
could check for other bugs.

> > + * This issue was not able to be reproduced on non laptop systems.
> > + */
> > +
> > +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> > +{
> > +     struct pci_dev *bridge = pci_upstream_bridge(dev);
> > +
> > +     if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> > +         bridge->device == 0x1901)
>
> pci_upstream_bridge() may return NULL, so you need
>
>   if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...
>
> https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
> says Skylake and Kaby Lake SoCs are affected.  But here you only check
> for one Device ID?
>

yes, I found this bridge controllers on skylake and kaby lake SoCs,
but I could verify there are systems with a different architecture
(using the "PCI Express Root Port" devices instead of "Processor PCIe
Controller") do not show this issue, so I think it might indeed be
just this one bridge controller. I couldn't verify this issue on any
other so far.

But I could verify this issue with this one bridge controller in
combination with Maxwell, Pascal and Turing Nvidia GPUs.

> > +             dev->broken_nv_runpm = 1;
> > +}
> > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> > +                           PCI_BASE_CLASS_DISPLAY, 16,
> > +                           quirk_broken_nv_runpm);
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index ac8a6c4e1792..903a0b3a39ec 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -416,6 +416,7 @@ struct pci_dev {
> >       unsigned int    __aer_firmware_first_valid:1;
> >       unsigned int    __aer_firmware_first:1;
> >       unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> > +     unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
> >       unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
> >       unsigned int    irq_managed:1;
> >       unsigned int    has_secondary_link:1;
> > --
> > 2.21.0
> >
>

Will send out a v5 later addressing you review. Thanks!

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 22:26     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-19 22:26 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Linux PM, Linux PCI, Mika Westerberg, Rafael J . Wysocki, LKML,
	dri-devel, nouveau

On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Dave]
>
> On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > states.
> >
> > v2: convert to pci_dev quirk
> >     put a proper technical explanation of the issue as a in-code comment
> > v3: disable it only for certain combinations of intel and nvidia hardware
> > v4: simplify quirk by setting flag on the GPU itself
>
> I have zero confidence that we understand the real problem, but we do
> need to do something with this.  I'll merge it for v5.5 if we get the
> minor procedural stuff below straightened out.
>

Thanks, and I agree with your statement, but at this point I think
only Intel can help out digging deeper as I see no way to debug this
further.

> > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > Cc: Lyude Paul <lyude@redhat.com>
> > Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> > Cc: Mika Westerberg <mika.westerberg@intel.com>
> > Cc: linux-pci@vger.kernel.org
> > Cc: linux-pm@vger.kernel.org
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: nouveau@lists.freedesktop.org
> > ---
> >  drivers/pci/pci.c    |  7 ++++++
> >  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/pci.h  |  1 +
> >  3 files changed, 61 insertions(+)
> >
> > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > index b97d9e10c9cc..02e71e0bcdd7 100644
> > --- a/drivers/pci/pci.c
> > +++ b/drivers/pci/pci.c
> > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> >          || (state == PCI_D2 && !dev->d2_support))
> >               return -EIO;
> >
> > +     /*
> > +      * check if we have a bad combination of bridge controller and nvidia
> > +         * GPU, see quirk_broken_nv_runpm for more info
>
> Whitespace damage.  Capitalized incorrectly (see other comments
> nearby).
>
> > +      */
> > +     if (state != PCI_D0 && dev->broken_nv_runpm)
> > +             return 0;
> > +
> >       pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
> >
> >       /*
> > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > index 44c4ae1abd00..0006c9e37b6f 100644
> > --- a/drivers/pci/quirks.c
> > +++ b/drivers/pci/quirks.c
> > @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
> >  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
> >                             PCI_CLASS_DISPLAY_VGA, 8,
> >                             quirk_reset_lenovo_thinkpad_p50_nvgpu);
> > +
> > +/*
> > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> > + * those were put into D3cold state if they were put into a non D0 PCI PM
> > + * device state before doing so.
>
> A device in D3cold is off the bus by definition.
>
> IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
> the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
> works fine.
>
> So I guess the problem is that we can put the device in D3cold with no
> problem, but if we put in D3hot before going to D3cold, the device
> never comes back to D0.  Right?
>

correct. It By the way, it doesn't matter if I put the device into D1
instead, as long as the device is not in D0 state before putting it
into D3cold, it fails.

> > + * This leads to various issue different issues which all manifest differently,
>
> s/issue different//
>
> Actually, I think there's only one underlying issue with several
> manifestations.
>
> > + * but have the same root cause:
> > + *  - AIML code execution hits an infinite loop (as the coe waits on device
> > + *    memory to change).
>
> s/AIML/AML/
> s/coe/code/
>
> > + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> > + *    to handle well enough.
>
> s/pci/PCI/
>
> More details about these crashes would be useful as we look at places
> that *should* be able to handle errors like this.
>

makes sense, I could ,orthogonal to this, make the code more robust if
we hit issues like this in the future. What I am mostly wondering
about is, why pci core doesn't give up if the device doesn't come back
from D3cold? It sounds like, that the most sane thing to do here is to
just give up and fail runtime_resume and report errors back to
userspace trying to make use of the devices.

> > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > + *    userspace tries to access the GPU.
>
> This doesn't fit with the others and more details might be
> informative here as well.
>

yeah.. I try to get more infos on that. But at least for me (and it
might be a distribution thing) if I execute lspci, the system shuts
down, or at least tries to and might fail.

> > + * In all cases dmesg will contain at least one line like this:
> > + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> > + * followed by a lot of nouveau timeouts.
> > + *
> > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> > + * PCIe bridge controller in order to power down the GPU.
> > + * Nonetheless, there are other code paths inside the ACPI firmware which use
> > + * other registers, which seem to work fine:
> > + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> > + *  - 0xb0 bit 0x10 (link disable)
>
> All these register addresses are device-specific, so they're useless
> without identifying the device.  "lspci -vvnn" output would let us at
> least connect this with something.  It would be nice to have that info
> archived along with your acpidump and python repro scripts in a
> bugzilla with the URL in the commit log.
>
> These are likely in PCI capabilities.  If I make the leap of assuming
> the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
> Control register is at 0xb0 and the register at 0xbc would be the Root
> Control register, and indeed 0x20 in Root Control is reserved.
>
> I don't know what the relevance of all this is, though.  It's not
> remarkable that accesses to these registers work.
>

those are registers on the bridge controller and are using inside ACPI
to power down the link. Depending on the OS detected other methods are
used afaik.

> Unless you mean you can access these registers *after* trying to put
> the device back in D0, but other accesses to the device fail.  That
> would indeed be very interesting.
>
> > + * Changing the conditions inside the firmware by poking into the relevant
> > + * addresses does resolve the issue, but it seemed to be ACPI private memory
> > + * and not any device accessible memory at all, so there is no portable way of
> > + * changing the conditions.
> > + *
> > + * The only systems where this behavior can be seen are hybrid graphics laptops
> > + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> > + * only occurs in combination with listed Intel PCIe bridge controllers and
> > + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> > + *
> > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> > + * in the bridge controller rather than in the GPU.
>
> I don't think we can conclude anything about where the defect is and I
> don't think speculating here will help future readers of this code.
>
> I *would* still like to see a bugzilla listing the systems where this
> problem has been seen with the "lspci -vvnn", dmesg logs, and at least
> one acpidump.  I think there's more to this story, and I suspect we
> may be revisiting this in the future.
>

one big one is https://bugzilla.kernel.org/show_bug.cgi?id=156341 but
it's filled with a lot of different reports, but I am sure most of
them point to this very issue.

Sadly nobody thought of checking lspci with runpm disabled.. but I
could check for other bugs.

> > + * This issue was not able to be reproduced on non laptop systems.
> > + */
> > +
> > +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> > +{
> > +     struct pci_dev *bridge = pci_upstream_bridge(dev);
> > +
> > +     if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> > +         bridge->device == 0x1901)
>
> pci_upstream_bridge() may return NULL, so you need
>
>   if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...
>
> https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
> says Skylake and Kaby Lake SoCs are affected.  But here you only check
> for one Device ID?
>

yes, I found this bridge controllers on skylake and kaby lake SoCs,
but I could verify there are systems with a different architecture
(using the "PCI Express Root Port" devices instead of "Processor PCIe
Controller") do not show this issue, so I think it might indeed be
just this one bridge controller. I couldn't verify this issue on any
other so far.

But I could verify this issue with this one bridge controller in
combination with Maxwell, Pascal and Turing Nvidia GPUs.

> > +             dev->broken_nv_runpm = 1;
> > +}
> > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> > +                           PCI_BASE_CLASS_DISPLAY, 16,
> > +                           quirk_broken_nv_runpm);
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index ac8a6c4e1792..903a0b3a39ec 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -416,6 +416,7 @@ struct pci_dev {
> >       unsigned int    __aer_firmware_first_valid:1;
> >       unsigned int    __aer_firmware_first:1;
> >       unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> > +     unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
> >       unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
> >       unsigned int    irq_managed:1;
> >       unsigned int    has_secondary_link:1;
> > --
> > 2.21.0
> >
>

Will send out a v5 later addressing you review. Thanks!

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 22:57       ` Bjorn Helgaas
  0 siblings, 0 replies; 150+ messages in thread
From: Bjorn Helgaas @ 2019-11-19 22:57 UTC (permalink / raw)
  To: Karol Herbst
  Cc: LKML, Lyude Paul, Rafael J . Wysocki, Mika Westerberg, Linux PCI,
	Linux PM, dri-devel, nouveau, Dave Airlie, Daniel Drake,
	Vidya Sagar, Thierry Reding

[+cc Daniel, Vidya, Thierry; thread starts at
https://lore.kernel.org/r/20191017121901.13699-1-kherbst@redhat.com]

On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into
> > > higher device states.
> > > ...

> > > + *  - kernel crashes, as all pci reads return -1, which most code
> > > + *    isn't able to handle well enough.
> >
> > More details about these crashes would be useful as we look at
> > places that *should* be able to handle errors like this.
> 
> makes sense, I could ,orthogonal to this, make the code more robust
> if we hit issues like this in the future. What I am mostly wondering
> about is, why pci core doesn't give up if the device doesn't come
> back from D3cold? It sounds like, that the most sane thing to do
> here is to just give up and fail runtime_resume and report errors
> back to userspace trying to make use of the devices.

It's possible there's something the core could do here.  It's
interesting that we have at least three issues in this area in this
release:

  1) This NVIDIA GPU issue
  2) Daniel's issue with AMD Ryzen5/7 XHCI power-on
     (https://lore.kernel.org/r/20191014061355.29072-1-drake@endlessm.com)
  3) Vidya's issue with Intel 750 NVMe power-on
     (https://lore.kernel.org/r/20191118172310.21373-1-vidyas@nvidia.com)

Vidya's current patch is the most generic (calling pci_dev_wait() from
pci_power_up()).  That will print a warning if the device doesn't
respond, but we still don't go as far as returning errors to
runtime_resume.

This is definitely something we should consider improving in the
future.

> > > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > > + *    userspace tries to access the GPU.
> >
> > This doesn't fit with the others and more details might be
> > informative here as well.
> 
> yeah.. I try to get more infos on that. But at least for me (and it
> might be a distribution thing) if I execute lspci, the system shuts
> down, or at least tries to and might fail.

The lspci doesn't need to be after the failure occurs.  You can do it
immediately after boot.

If you can capture any part of the dmesg or console log of these
sudden shutdowns, that's all I'm interested in at this point.

Bjorn

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-19 22:57       ` Bjorn Helgaas
  0 siblings, 0 replies; 150+ messages in thread
From: Bjorn Helgaas @ 2019-11-19 22:57 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Thierry Reding, Linux PM, Linux PCI, Mika Westerberg,
	Rafael J . Wysocki, LKML, dri-devel, Daniel Drake, nouveau,
	Vidya Sagar

[+cc Daniel, Vidya, Thierry; thread starts at
https://lore.kernel.org/r/20191017121901.13699-1-kherbst@redhat.com]

On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into
> > > higher device states.
> > > ...

> > > + *  - kernel crashes, as all pci reads return -1, which most code
> > > + *    isn't able to handle well enough.
> >
> > More details about these crashes would be useful as we look at
> > places that *should* be able to handle errors like this.
> 
> makes sense, I could ,orthogonal to this, make the code more robust
> if we hit issues like this in the future. What I am mostly wondering
> about is, why pci core doesn't give up if the device doesn't come
> back from D3cold? It sounds like, that the most sane thing to do
> here is to just give up and fail runtime_resume and report errors
> back to userspace trying to make use of the devices.

It's possible there's something the core could do here.  It's
interesting that we have at least three issues in this area in this
release:

  1) This NVIDIA GPU issue
  2) Daniel's issue with AMD Ryzen5/7 XHCI power-on
     (https://lore.kernel.org/r/20191014061355.29072-1-drake@endlessm.com)
  3) Vidya's issue with Intel 750 NVMe power-on
     (https://lore.kernel.org/r/20191118172310.21373-1-vidyas@nvidia.com)

Vidya's current patch is the most generic (calling pci_dev_wait() from
pci_power_up()).  That will print a warning if the device doesn't
respond, but we still don't go as far as returning errors to
runtime_resume.

This is definitely something we should consider improving in the
future.

> > > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > > + *    userspace tries to access the GPU.
> >
> > This doesn't fit with the others and more details might be
> > informative here as well.
> 
> yeah.. I try to get more infos on that. But at least for me (and it
> might be a distribution thing) if I execute lspci, the system shuts
> down, or at least tries to and might fail.

The lspci doesn't need to be after the failure occurs.  You can do it
immediately after boot.

If you can capture any part of the dmesg or console log of these
sudden shutdowns, that's all I'm interested in at this point.

Bjorn
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 10:18       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 10:18 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Rafael J . Wysocki, Linux PCI,
	Linux PM, dri-devel, nouveau, Dave Airlie, Mario.Limonciello

Hi Karol,

On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Dave]
> >
> > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > states.
> > >
> > > v2: convert to pci_dev quirk
> > >     put a proper technical explanation of the issue as a in-code comment
> > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > v4: simplify quirk by setting flag on the GPU itself
> >
> > I have zero confidence that we understand the real problem, but we do
> > need to do something with this.  I'll merge it for v5.5 if we get the
> > minor procedural stuff below straightened out.
> >
> 
> Thanks, and I agree with your statement, but at this point I think
> only Intel can help out digging deeper as I see no way to debug this
> further.

I don't have anything against this patch, as long as the quirk stays
limited to the particular root port leading to the NVIDIA GPU. The
reason why I think it should to be limited is that I'm pretty certain
the problem is not in the root port itself. I have here a KBL based
Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
(it is connected to PCH root port) and it wakes up there just fine, so
don't want to break that.

Now, PCIe devices cannot go into D3cold all by themselves. They always
need help from the platform side which is ACPI in this case. This is
done by having the device to have _PR3 method that returns one or more
power resources that the OS is supposed to turn off when the device is
put into D3cold. All of that is implemented as form of ACPI methods that
pretty much do the hardware specific things that are outside of PCIe
spec to get the device into D3cold. At high level the _OFF() method
causes the root port to broadcast PME_Turn_Off message that results the
link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
can be GPIOs) and finally removes power (if the link goes into L3,
otherwise it goes into L2).

I think this is where the problem actually lies - the ASL methods that
are used to put the device into D3cold and back. We know that in Windows
this all works fine so unless Windows quirks the root port the same way
there is another reason behind this.

In case of Dell XPS 9560 (IIRC that's the machine you have) the
corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
_ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
itself do lots of things and it is hard to follow the dissassembled
ASL which does not have any comments but there are couple of things that
stand out where we may go into a different path. One of them is this in
the PGOF() method:

   If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))

The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
(see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
_REV")) so it might be that Dell people tested this at some point in
Linux as well. Added Mario in case he has any ideas.

Previously I suggested you to try the ACPI method tracing to see what
happens inside PGOF(). Did you have time to try it? It may provide more
information about that is happening inside those methods and hopefully
point us to the root cause.

Also if you haven't tried already passing acpi_rev_override in the
command line makes the _REV to return 5 so it should go into the "Linux"
path in PGOF().

[1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev

> > > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > > Cc: Lyude Paul <lyude@redhat.com>
> > > Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> > > Cc: Mika Westerberg <mika.westerberg@intel.com>
> > > Cc: linux-pci@vger.kernel.org
> > > Cc: linux-pm@vger.kernel.org
> > > Cc: dri-devel@lists.freedesktop.org
> > > Cc: nouveau@lists.freedesktop.org
> > > ---
> > >  drivers/pci/pci.c    |  7 ++++++
> > >  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> > >  include/linux/pci.h  |  1 +
> > >  3 files changed, 61 insertions(+)
> > >
> > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > > index b97d9e10c9cc..02e71e0bcdd7 100644
> > > --- a/drivers/pci/pci.c
> > > +++ b/drivers/pci/pci.c
> > > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> > >          || (state == PCI_D2 && !dev->d2_support))
> > >               return -EIO;
> > >
> > > +     /*
> > > +      * check if we have a bad combination of bridge controller and nvidia
> > > +         * GPU, see quirk_broken_nv_runpm for more info
> >
> > Whitespace damage.  Capitalized incorrectly (see other comments
> > nearby).
> >
> > > +      */
> > > +     if (state != PCI_D0 && dev->broken_nv_runpm)
> > > +             return 0;
> > > +
> > >       pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
> > >
> > >       /*
> > > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > > index 44c4ae1abd00..0006c9e37b6f 100644
> > > --- a/drivers/pci/quirks.c
> > > +++ b/drivers/pci/quirks.c
> > > @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
> > >  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
> > >                             PCI_CLASS_DISPLAY_VGA, 8,
> > >                             quirk_reset_lenovo_thinkpad_p50_nvgpu);
> > > +
> > > +/*
> > > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> > > + * those were put into D3cold state if they were put into a non D0 PCI PM
> > > + * device state before doing so.
> >
> > A device in D3cold is off the bus by definition.
> >
> > IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
> > the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
> > works fine.
> >
> > So I guess the problem is that we can put the device in D3cold with no
> > problem, but if we put in D3hot before going to D3cold, the device
> > never comes back to D0.  Right?
> >
> 
> correct. It By the way, it doesn't matter if I put the device into D1
> instead, as long as the device is not in D0 state before putting it
> into D3cold, it fails.
> 
> > > + * This leads to various issue different issues which all manifest differently,
> >
> > s/issue different//
> >
> > Actually, I think there's only one underlying issue with several
> > manifestations.
> >
> > > + * but have the same root cause:
> > > + *  - AIML code execution hits an infinite loop (as the coe waits on device
> > > + *    memory to change).
> >
> > s/AIML/AML/
> > s/coe/code/
> >
> > > + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> > > + *    to handle well enough.
> >
> > s/pci/PCI/
> >
> > More details about these crashes would be useful as we look at places
> > that *should* be able to handle errors like this.
> >
> 
> makes sense, I could ,orthogonal to this, make the code more robust if
> we hit issues like this in the future. What I am mostly wondering
> about is, why pci core doesn't give up if the device doesn't come back
> from D3cold? It sounds like, that the most sane thing to do here is to
> just give up and fail runtime_resume and report errors back to
> userspace trying to make use of the devices.
> 
> > > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > > + *    userspace tries to access the GPU.
> >
> > This doesn't fit with the others and more details might be
> > informative here as well.
> >
> 
> yeah.. I try to get more infos on that. But at least for me (and it
> might be a distribution thing) if I execute lspci, the system shuts
> down, or at least tries to and might fail.
> 
> > > + * In all cases dmesg will contain at least one line like this:
> > > + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> > > + * followed by a lot of nouveau timeouts.
> > > + *
> > > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> > > + * PCIe bridge controller in order to power down the GPU.
> > > + * Nonetheless, there are other code paths inside the ACPI firmware which use
> > > + * other registers, which seem to work fine:
> > > + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> > > + *  - 0xb0 bit 0x10 (link disable)
> >
> > All these register addresses are device-specific, so they're useless
> > without identifying the device.  "lspci -vvnn" output would let us at
> > least connect this with something.  It would be nice to have that info
> > archived along with your acpidump and python repro scripts in a
> > bugzilla with the URL in the commit log.
> >
> > These are likely in PCI capabilities.  If I make the leap of assuming
> > the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
> > Control register is at 0xb0 and the register at 0xbc would be the Root
> > Control register, and indeed 0x20 in Root Control is reserved.
> >
> > I don't know what the relevance of all this is, though.  It's not
> > remarkable that accesses to these registers work.
> >
> 
> those are registers on the bridge controller and are using inside ACPI
> to power down the link. Depending on the OS detected other methods are
> used afaik.
> 
> > Unless you mean you can access these registers *after* trying to put
> > the device back in D0, but other accesses to the device fail.  That
> > would indeed be very interesting.
> >
> > > + * Changing the conditions inside the firmware by poking into the relevant
> > > + * addresses does resolve the issue, but it seemed to be ACPI private memory
> > > + * and not any device accessible memory at all, so there is no portable way of
> > > + * changing the conditions.
> > > + *
> > > + * The only systems where this behavior can be seen are hybrid graphics laptops
> > > + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> > > + * only occurs in combination with listed Intel PCIe bridge controllers and
> > > + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> > > + *
> > > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> > > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> > > + * in the bridge controller rather than in the GPU.
> >
> > I don't think we can conclude anything about where the defect is and I
> > don't think speculating here will help future readers of this code.
> >
> > I *would* still like to see a bugzilla listing the systems where this
> > problem has been seen with the "lspci -vvnn", dmesg logs, and at least
> > one acpidump.  I think there's more to this story, and I suspect we
> > may be revisiting this in the future.
> >
> 
> one big one is https://bugzilla.kernel.org/show_bug.cgi?id=156341 but
> it's filled with a lot of different reports, but I am sure most of
> them point to this very issue.
> 
> Sadly nobody thought of checking lspci with runpm disabled.. but I
> could check for other bugs.
> 
> > > + * This issue was not able to be reproduced on non laptop systems.
> > > + */
> > > +
> > > +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> > > +{
> > > +     struct pci_dev *bridge = pci_upstream_bridge(dev);
> > > +
> > > +     if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> > > +         bridge->device == 0x1901)
> >
> > pci_upstream_bridge() may return NULL, so you need
> >
> >   if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...
> >
> > https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
> > says Skylake and Kaby Lake SoCs are affected.  But here you only check
> > for one Device ID?
> >
> 
> yes, I found this bridge controllers on skylake and kaby lake SoCs,
> but I could verify there are systems with a different architecture
> (using the "PCI Express Root Port" devices instead of "Processor PCIe
> Controller") do not show this issue, so I think it might indeed be
> just this one bridge controller. I couldn't verify this issue on any
> other so far.
> 
> But I could verify this issue with this one bridge controller in
> combination with Maxwell, Pascal and Turing Nvidia GPUs.
> 
> > > +             dev->broken_nv_runpm = 1;
> > > +}
> > > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> > > +                           PCI_BASE_CLASS_DISPLAY, 16,
> > > +                           quirk_broken_nv_runpm);
> > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > index ac8a6c4e1792..903a0b3a39ec 100644
> > > --- a/include/linux/pci.h
> > > +++ b/include/linux/pci.h
> > > @@ -416,6 +416,7 @@ struct pci_dev {
> > >       unsigned int    __aer_firmware_first_valid:1;
> > >       unsigned int    __aer_firmware_first:1;
> > >       unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> > > +     unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
> > >       unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
> > >       unsigned int    irq_managed:1;
> > >       unsigned int    has_secondary_link:1;
> > > --
> > > 2.21.0
> > >
> >
> 
> Will send out a v5 later addressing you review. Thanks!

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 10:18       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 10:18 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Linux PM, Linux PCI, Mario.Limonciello, Rafael J . Wysocki, LKML,
	dri-devel, Bjorn Helgaas, nouveau

Hi Karol,

On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Dave]
> >
> > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > states.
> > >
> > > v2: convert to pci_dev quirk
> > >     put a proper technical explanation of the issue as a in-code comment
> > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > v4: simplify quirk by setting flag on the GPU itself
> >
> > I have zero confidence that we understand the real problem, but we do
> > need to do something with this.  I'll merge it for v5.5 if we get the
> > minor procedural stuff below straightened out.
> >
> 
> Thanks, and I agree with your statement, but at this point I think
> only Intel can help out digging deeper as I see no way to debug this
> further.

I don't have anything against this patch, as long as the quirk stays
limited to the particular root port leading to the NVIDIA GPU. The
reason why I think it should to be limited is that I'm pretty certain
the problem is not in the root port itself. I have here a KBL based
Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
(it is connected to PCH root port) and it wakes up there just fine, so
don't want to break that.

Now, PCIe devices cannot go into D3cold all by themselves. They always
need help from the platform side which is ACPI in this case. This is
done by having the device to have _PR3 method that returns one or more
power resources that the OS is supposed to turn off when the device is
put into D3cold. All of that is implemented as form of ACPI methods that
pretty much do the hardware specific things that are outside of PCIe
spec to get the device into D3cold. At high level the _OFF() method
causes the root port to broadcast PME_Turn_Off message that results the
link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
can be GPIOs) and finally removes power (if the link goes into L3,
otherwise it goes into L2).

I think this is where the problem actually lies - the ASL methods that
are used to put the device into D3cold and back. We know that in Windows
this all works fine so unless Windows quirks the root port the same way
there is another reason behind this.

In case of Dell XPS 9560 (IIRC that's the machine you have) the
corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
_ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
itself do lots of things and it is hard to follow the dissassembled
ASL which does not have any comments but there are couple of things that
stand out where we may go into a different path. One of them is this in
the PGOF() method:

   If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))

The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
(see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
_REV")) so it might be that Dell people tested this at some point in
Linux as well. Added Mario in case he has any ideas.

Previously I suggested you to try the ACPI method tracing to see what
happens inside PGOF(). Did you have time to try it? It may provide more
information about that is happening inside those methods and hopefully
point us to the root cause.

Also if you haven't tried already passing acpi_rev_override in the
command line makes the _REV to return 5 so it should go into the "Linux"
path in PGOF().

[1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev

> > > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > > Cc: Lyude Paul <lyude@redhat.com>
> > > Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
> > > Cc: Mika Westerberg <mika.westerberg@intel.com>
> > > Cc: linux-pci@vger.kernel.org
> > > Cc: linux-pm@vger.kernel.org
> > > Cc: dri-devel@lists.freedesktop.org
> > > Cc: nouveau@lists.freedesktop.org
> > > ---
> > >  drivers/pci/pci.c    |  7 ++++++
> > >  drivers/pci/quirks.c | 53 ++++++++++++++++++++++++++++++++++++++++++++
> > >  include/linux/pci.h  |  1 +
> > >  3 files changed, 61 insertions(+)
> > >
> > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > > index b97d9e10c9cc..02e71e0bcdd7 100644
> > > --- a/drivers/pci/pci.c
> > > +++ b/drivers/pci/pci.c
> > > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> > >          || (state == PCI_D2 && !dev->d2_support))
> > >               return -EIO;
> > >
> > > +     /*
> > > +      * check if we have a bad combination of bridge controller and nvidia
> > > +         * GPU, see quirk_broken_nv_runpm for more info
> >
> > Whitespace damage.  Capitalized incorrectly (see other comments
> > nearby).
> >
> > > +      */
> > > +     if (state != PCI_D0 && dev->broken_nv_runpm)
> > > +             return 0;
> > > +
> > >       pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
> > >
> > >       /*
> > > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > > index 44c4ae1abd00..0006c9e37b6f 100644
> > > --- a/drivers/pci/quirks.c
> > > +++ b/drivers/pci/quirks.c
> > > @@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
> > >  DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
> > >                             PCI_CLASS_DISPLAY_VGA, 8,
> > >                             quirk_reset_lenovo_thinkpad_p50_nvgpu);
> > > +
> > > +/*
> > > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
> > > + * those were put into D3cold state if they were put into a non D0 PCI PM
> > > + * device state before doing so.
> >
> > A device in D3cold is off the bus by definition.
> >
> > IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for
> > the GPU fails in the transition back to D0, while D0 -> D3cold -> D0
> > works fine.
> >
> > So I guess the problem is that we can put the device in D3cold with no
> > problem, but if we put in D3hot before going to D3cold, the device
> > never comes back to D0.  Right?
> >
> 
> correct. It By the way, it doesn't matter if I put the device into D1
> instead, as long as the device is not in D0 state before putting it
> into D3cold, it fails.
> 
> > > + * This leads to various issue different issues which all manifest differently,
> >
> > s/issue different//
> >
> > Actually, I think there's only one underlying issue with several
> > manifestations.
> >
> > > + * but have the same root cause:
> > > + *  - AIML code execution hits an infinite loop (as the coe waits on device
> > > + *    memory to change).
> >
> > s/AIML/AML/
> > s/coe/code/
> >
> > > + *  - kernel crashes, as all pci reads return -1, which most code isn't able
> > > + *    to handle well enough.
> >
> > s/pci/PCI/
> >
> > More details about these crashes would be useful as we look at places
> > that *should* be able to handle errors like this.
> >
> 
> makes sense, I could ,orthogonal to this, make the code more robust if
> we hit issues like this in the future. What I am mostly wondering
> about is, why pci core doesn't give up if the device doesn't come back
> from D3cold? It sounds like, that the most sane thing to do here is to
> just give up and fail runtime_resume and report errors back to
> userspace trying to make use of the devices.
> 
> > > + *  - sudden shutdowns, as the kernel identified an unrecoverable error after
> > > + *    userspace tries to access the GPU.
> >
> > This doesn't fit with the others and more details might be
> > informative here as well.
> >
> 
> yeah.. I try to get more infos on that. But at least for me (and it
> might be a distribution thing) if I execute lspci, the system shuts
> down, or at least tries to and might fail.
> 
> > > + * In all cases dmesg will contain at least one line like this:
> > > + * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
> > > + * followed by a lot of nouveau timeouts.
> > > + *
> > > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
> > > + * PCIe bridge controller in order to power down the GPU.
> > > + * Nonetheless, there are other code paths inside the ACPI firmware which use
> > > + * other registers, which seem to work fine:
> > > + *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
> > > + *  - 0xb0 bit 0x10 (link disable)
> >
> > All these register addresses are device-specific, so they're useless
> > without identifying the device.  "lspci -vvnn" output would let us at
> > least connect this with something.  It would be nice to have that info
> > archived along with your acpidump and python repro scripts in a
> > bugzilla with the URL in the commit log.
> >
> > These are likely in PCI capabilities.  If I make the leap of assuming
> > the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link
> > Control register is at 0xb0 and the register at 0xbc would be the Root
> > Control register, and indeed 0x20 in Root Control is reserved.
> >
> > I don't know what the relevance of all this is, though.  It's not
> > remarkable that accesses to these registers work.
> >
> 
> those are registers on the bridge controller and are using inside ACPI
> to power down the link. Depending on the OS detected other methods are
> used afaik.
> 
> > Unless you mean you can access these registers *after* trying to put
> > the device back in D0, but other accesses to the device fail.  That
> > would indeed be very interesting.
> >
> > > + * Changing the conditions inside the firmware by poking into the relevant
> > > + * addresses does resolve the issue, but it seemed to be ACPI private memory
> > > + * and not any device accessible memory at all, so there is no portable way of
> > > + * changing the conditions.
> > > + *
> > > + * The only systems where this behavior can be seen are hybrid graphics laptops
> > > + * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
> > > + * only occurs in combination with listed Intel PCIe bridge controllers and
> > > + * the mentioned GPUs or if it's only a hw bug in the bridge controller.
> > > + *
> > > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
> > > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
> > > + * in the bridge controller rather than in the GPU.
> >
> > I don't think we can conclude anything about where the defect is and I
> > don't think speculating here will help future readers of this code.
> >
> > I *would* still like to see a bugzilla listing the systems where this
> > problem has been seen with the "lspci -vvnn", dmesg logs, and at least
> > one acpidump.  I think there's more to this story, and I suspect we
> > may be revisiting this in the future.
> >
> 
> one big one is https://bugzilla.kernel.org/show_bug.cgi?id=156341 but
> it's filled with a lot of different reports, but I am sure most of
> them point to this very issue.
> 
> Sadly nobody thought of checking lspci with runpm disabled.. but I
> could check for other bugs.
> 
> > > + * This issue was not able to be reproduced on non laptop systems.
> > > + */
> > > +
> > > +static void quirk_broken_nv_runpm(struct pci_dev *dev)
> > > +{
> > > +     struct pci_dev *bridge = pci_upstream_bridge(dev);
> > > +
> > > +     if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
> > > +         bridge->device == 0x1901)
> >
> > pci_upstream_bridge() may return NULL, so you need
> >
> >   if (bridge && bridge->vendor == PCI_VENDOR_ID_INTEL ...
> >
> > https://lore.kernel.org/r/20190927144421.22608-1-kherbst@redhat.com
> > says Skylake and Kaby Lake SoCs are affected.  But here you only check
> > for one Device ID?
> >
> 
> yes, I found this bridge controllers on skylake and kaby lake SoCs,
> but I could verify there are systems with a different architecture
> (using the "PCI Express Root Port" devices instead of "Processor PCIe
> Controller") do not show this issue, so I think it might indeed be
> just this one bridge controller. I couldn't verify this issue on any
> other so far.
> 
> But I could verify this issue with this one bridge controller in
> combination with Maxwell, Pascal and Turing Nvidia GPUs.
> 
> > > +             dev->broken_nv_runpm = 1;
> > > +}
> > > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
> > > +                           PCI_BASE_CLASS_DISPLAY, 16,
> > > +                           quirk_broken_nv_runpm);
> > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > index ac8a6c4e1792..903a0b3a39ec 100644
> > > --- a/include/linux/pci.h
> > > +++ b/include/linux/pci.h
> > > @@ -416,6 +416,7 @@ struct pci_dev {
> > >       unsigned int    __aer_firmware_first_valid:1;
> > >       unsigned int    __aer_firmware_first:1;
> > >       unsigned int    broken_intx_masking:1;  /* INTx masking can't be used */
> > > +     unsigned int    broken_nv_runpm:1;      /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
> > >       unsigned int    io_window_1k:1;         /* Intel bridge 1K I/O windows */
> > >       unsigned int    irq_managed:1;
> > >       unsigned int    has_secondary_link:1;
> > > --
> > > 2.21.0
> > >
> >
> 
> Will send out a v5 later addressing you review. Thanks!
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 10:52         ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 10:52 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> Hi Karol,
>
> On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Dave]
> > >
> > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > states.
> > > >
> > > > v2: convert to pci_dev quirk
> > > >     put a proper technical explanation of the issue as a in-code comment
> > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > v4: simplify quirk by setting flag on the GPU itself
> > >
> > > I have zero confidence that we understand the real problem, but we do
> > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > minor procedural stuff below straightened out.
> > >
> >
> > Thanks, and I agree with your statement, but at this point I think
> > only Intel can help out digging deeper as I see no way to debug this
> > further.
>
> I don't have anything against this patch, as long as the quirk stays
> limited to the particular root port leading to the NVIDIA GPU. The
> reason why I think it should to be limited is that I'm pretty certain
> the problem is not in the root port itself. I have here a KBL based
> Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> (it is connected to PCH root port) and it wakes up there just fine, so
> don't want to break that.
>
> Now, PCIe devices cannot go into D3cold all by themselves. They always
> need help from the platform side which is ACPI in this case. This is
> done by having the device to have _PR3 method that returns one or more
> power resources that the OS is supposed to turn off when the device is
> put into D3cold. All of that is implemented as form of ACPI methods that
> pretty much do the hardware specific things that are outside of PCIe
> spec to get the device into D3cold. At high level the _OFF() method
> causes the root port to broadcast PME_Turn_Off message that results the
> link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> can be GPIOs) and finally removes power (if the link goes into L3,
> otherwise it goes into L2).
>
> I think this is where the problem actually lies - the ASL methods that
> are used to put the device into D3cold and back. We know that in Windows
> this all works fine so unless Windows quirks the root port the same way
> there is another reason behind this.
>
> In case of Dell XPS 9560 (IIRC that's the machine you have) the
> corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> itself do lots of things and it is hard to follow the dissassembled
> ASL which does not have any comments but there are couple of things that
> stand out where we may go into a different path. One of them is this in
> the PGOF() method:
>
>    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
>
> The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> _REV")) so it might be that Dell people tested this at some point in
> Linux as well. Added Mario in case he has any ideas.
>
> Previously I suggested you to try the ACPI method tracing to see what
> happens inside PGOF(). Did you have time to try it? It may provide more
> information about that is happening inside those methods and hopefully
> point us to the root cause.
>
> Also if you haven't tried already passing acpi_rev_override in the
> command line makes the _REV to return 5 so it should go into the "Linux"
> path in PGOF().

Oh, so does it look like we are trying to work around AML that tried
to work around some problematic behavior in Linux at one point?

> [1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 10:52         ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 10:52 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Linux PM, Linux PCI, Mario Limonciello, Rafael J . Wysocki, LKML,
	dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> Hi Karol,
>
> On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Dave]
> > >
> > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > states.
> > > >
> > > > v2: convert to pci_dev quirk
> > > >     put a proper technical explanation of the issue as a in-code comment
> > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > v4: simplify quirk by setting flag on the GPU itself
> > >
> > > I have zero confidence that we understand the real problem, but we do
> > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > minor procedural stuff below straightened out.
> > >
> >
> > Thanks, and I agree with your statement, but at this point I think
> > only Intel can help out digging deeper as I see no way to debug this
> > further.
>
> I don't have anything against this patch, as long as the quirk stays
> limited to the particular root port leading to the NVIDIA GPU. The
> reason why I think it should to be limited is that I'm pretty certain
> the problem is not in the root port itself. I have here a KBL based
> Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> (it is connected to PCH root port) and it wakes up there just fine, so
> don't want to break that.
>
> Now, PCIe devices cannot go into D3cold all by themselves. They always
> need help from the platform side which is ACPI in this case. This is
> done by having the device to have _PR3 method that returns one or more
> power resources that the OS is supposed to turn off when the device is
> put into D3cold. All of that is implemented as form of ACPI methods that
> pretty much do the hardware specific things that are outside of PCIe
> spec to get the device into D3cold. At high level the _OFF() method
> causes the root port to broadcast PME_Turn_Off message that results the
> link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> can be GPIOs) and finally removes power (if the link goes into L3,
> otherwise it goes into L2).
>
> I think this is where the problem actually lies - the ASL methods that
> are used to put the device into D3cold and back. We know that in Windows
> this all works fine so unless Windows quirks the root port the same way
> there is another reason behind this.
>
> In case of Dell XPS 9560 (IIRC that's the machine you have) the
> corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> itself do lots of things and it is hard to follow the dissassembled
> ASL which does not have any comments but there are couple of things that
> stand out where we may go into a different path. One of them is this in
> the PGOF() method:
>
>    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
>
> The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> _REV")) so it might be that Dell people tested this at some point in
> Linux as well. Added Mario in case he has any ideas.
>
> Previously I suggested you to try the ACPI method tracing to see what
> happens inside PGOF(). Did you have time to try it? It may provide more
> information about that is happening inside those methods and hopefully
> point us to the root cause.
>
> Also if you haven't tried already passing acpi_rev_override in the
> command line makes the _REV to return 5 so it should go into the "Linux"
> path in PGOF().

Oh, so does it look like we are trying to work around AML that tried
to work around some problematic behavior in Linux at one point?

> [1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 10:52         ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 10:52 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> Hi Karol,
>
> On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Dave]
> > >
> > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > states.
> > > >
> > > > v2: convert to pci_dev quirk
> > > >     put a proper technical explanation of the issue as a in-code comment
> > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > v4: simplify quirk by setting flag on the GPU itself
> > >
> > > I have zero confidence that we understand the real problem, but we do
> > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > minor procedural stuff below straightened out.
> > >
> >
> > Thanks, and I agree with your statement, but at this point I think
> > only Intel can help out digging deeper as I see no way to debug this
> > further.
>
> I don't have anything against this patch, as long as the quirk stays
> limited to the particular root port leading to the NVIDIA GPU. The
> reason why I think it should to be limited is that I'm pretty certain
> the problem is not in the root port itself. I have here a KBL based
> Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> (it is connected to PCH root port) and it wakes up there just fine, so
> don't want to break that.
>
> Now, PCIe devices cannot go into D3cold all by themselves. They always
> need help from the platform side which is ACPI in this case. This is
> done by having the device to have _PR3 method that returns one or more
> power resources that the OS is supposed to turn off when the device is
> put into D3cold. All of that is implemented as form of ACPI methods that
> pretty much do the hardware specific things that are outside of PCIe
> spec to get the device into D3cold. At high level the _OFF() method
> causes the root port to broadcast PME_Turn_Off message that results the
> link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> can be GPIOs) and finally removes power (if the link goes into L3,
> otherwise it goes into L2).
>
> I think this is where the problem actually lies - the ASL methods that
> are used to put the device into D3cold and back. We know that in Windows
> this all works fine so unless Windows quirks the root port the same way
> there is another reason behind this.
>
> In case of Dell XPS 9560 (IIRC that's the machine you have) the
> corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> itself do lots of things and it is hard to follow the dissassembled
> ASL which does not have any comments but there are couple of things that
> stand out where we may go into a different path. One of them is this in
> the PGOF() method:
>
>    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
>
> The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> _REV")) so it might be that Dell people tested this at some point in
> Linux as well. Added Mario in case he has any ideas.
>
> Previously I suggested you to try the ACPI method tracing to see what
> happens inside PGOF(). Did you have time to try it? It may provide more
> information about that is happening inside those methods and hopefully
> point us to the root cause.
>
> Also if you haven't tried already passing acpi_rev_override in the
> command line makes the _REV to return 5 so it should go into the "Linux"
> path in PGOF().

Oh, so does it look like we are trying to work around AML that tried
to work around some problematic behavior in Linux at one point?

> [1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:22           ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 11:22 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > Hi Karol,
> >
> > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > >
> > > > [+cc Dave]
> > > >
> > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > states.
> > > > >
> > > > > v2: convert to pci_dev quirk
> > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > v4: simplify quirk by setting flag on the GPU itself
> > > >
> > > > I have zero confidence that we understand the real problem, but we do
> > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > minor procedural stuff below straightened out.
> > > >
> > >
> > > Thanks, and I agree with your statement, but at this point I think
> > > only Intel can help out digging deeper as I see no way to debug this
> > > further.
> >
> > I don't have anything against this patch, as long as the quirk stays
> > limited to the particular root port leading to the NVIDIA GPU. The
> > reason why I think it should to be limited is that I'm pretty certain
> > the problem is not in the root port itself. I have here a KBL based
> > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > (it is connected to PCH root port) and it wakes up there just fine, so
> > don't want to break that.
> >
> > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > need help from the platform side which is ACPI in this case. This is
> > done by having the device to have _PR3 method that returns one or more
> > power resources that the OS is supposed to turn off when the device is
> > put into D3cold. All of that is implemented as form of ACPI methods that
> > pretty much do the hardware specific things that are outside of PCIe
> > spec to get the device into D3cold. At high level the _OFF() method
> > causes the root port to broadcast PME_Turn_Off message that results the
> > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > can be GPIOs) and finally removes power (if the link goes into L3,
> > otherwise it goes into L2).
> >
> > I think this is where the problem actually lies - the ASL methods that
> > are used to put the device into D3cold and back. We know that in Windows
> > this all works fine so unless Windows quirks the root port the same way
> > there is another reason behind this.
> >
> > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > itself do lots of things and it is hard to follow the dissassembled
> > ASL which does not have any comments but there are couple of things that
> > stand out where we may go into a different path. One of them is this in
> > the PGOF() method:
> >
> >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> >
> > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > _REV")) so it might be that Dell people tested this at some point in
> > Linux as well. Added Mario in case he has any ideas.
> >
> > Previously I suggested you to try the ACPI method tracing to see what
> > happens inside PGOF(). Did you have time to try it? It may provide more
> > information about that is happening inside those methods and hopefully
> > point us to the root cause.
> >
> > Also if you haven't tried already passing acpi_rev_override in the
> > command line makes the _REV to return 5 so it should go into the "Linux"
> > path in PGOF().
> 
> Oh, so does it look like we are trying to work around AML that tried
> to work around some problematic behavior in Linux at one point?

Yes, it looks like so if I read the ASL right. The whole method looks
like below (the full acpidump was shared by Karol in v3 thread) and
there is similar check in the _ON (PGON) method:

        Method (PGOF, 1, Serialized)
        {
            PIOF = Arg0
            If ((PIOF == Zero))
            {
                If ((SGGP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == One))
            {
                If ((P1GP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == 0x02))
            {
                If ((P2GP == Zero))
                {
                    Return (Zero)
                }
            }

            PEBA = \XBAS /* External reference */
            PDEV = GDEV (PIOF)
            PFUN = GFUN (PIOF)
            Name (SCLK, Package (0x03)
            {
                One, 
                0x80, 
                Zero
            })
            If ((CCHK (PIOF, Zero) == Zero))
            {
                Return (Zero)
            }

            \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
            If ((Arg0 == Zero))
            {
                ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
                H0VI = S0VI /* \_SB_.PCI0.S0VI */
                H0DI = S0DI /* \_SB_.PCI0.S0DI */
                ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
            }
            ElseIf ((Arg0 == One))
            {
                ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
                H1VI = S1VI /* \_SB_.PCI0.S1VI */
                H1DI = S1DI /* \_SB_.PCI0.S1DI */
                ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
            }
            ElseIf ((Arg0 == 0x02))
            {
                ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
                H2VI = S2VI /* \_SB_.PCI0.S2VI */
                H2DI = S2DI /* \_SB_.PCI0.S2DI */
                ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
            }

            If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
                0x05))))
            {
                If ((PIOF == Zero))
                {
                    P0LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P0LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P0RM = One
                    P0AP = 0x03
                }
                ElseIf ((PIOF == One))
                {
                    P1LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P1LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P1RM = One
                    P1AP = 0x03
                }
                ElseIf ((PIOF == 0x02))
                {
                    P2LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P2LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P2RM = One
                    P2AP = 0x03
                }

                If ((PBGE != Zero))
                {
                    If (SBDL (PIOF))
                    {
                        MBDL = GMXB (PIOF)
                        PDUB (PIOF, MBDL)
                    }
                }
            }
            Else
            {
                LKDS (PIOF)
            }

            If ((DerefOf (SCLK [Zero]) != Zero))
            {
                PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
                Sleep (0x10)
            }

            GPPR (PIOF, Zero)
            If ((OSYS != 0x07D9))
            {
                DIWK (PIOF)
            }

            \_SB.SGOV (0x01010004, Zero)
            Sleep (0x14)
            Return (Zero)
        }

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:22           ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 11:22 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > Hi Karol,
> >
> > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > >
> > > > [+cc Dave]
> > > >
> > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > states.
> > > > >
> > > > > v2: convert to pci_dev quirk
> > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > v4: simplify quirk by setting flag on the GPU itself
> > > >
> > > > I have zero confidence that we understand the real problem, but we do
> > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > minor procedural stuff below straightened out.
> > > >
> > >
> > > Thanks, and I agree with your statement, but at this point I think
> > > only Intel can help out digging deeper as I see no way to debug this
> > > further.
> >
> > I don't have anything against this patch, as long as the quirk stays
> > limited to the particular root port leading to the NVIDIA GPU. The
> > reason why I think it should to be limited is that I'm pretty certain
> > the problem is not in the root port itself. I have here a KBL based
> > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > (it is connected to PCH root port) and it wakes up there just fine, so
> > don't want to break that.
> >
> > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > need help from the platform side which is ACPI in this case. This is
> > done by having the device to have _PR3 method that returns one or more
> > power resources that the OS is supposed to turn off when the device is
> > put into D3cold. All of that is implemented as form of ACPI methods that
> > pretty much do the hardware specific things that are outside of PCIe
> > spec to get the device into D3cold. At high level the _OFF() method
> > causes the root port to broadcast PME_Turn_Off message that results the
> > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > can be GPIOs) and finally removes power (if the link goes into L3,
> > otherwise it goes into L2).
> >
> > I think this is where the problem actually lies - the ASL methods that
> > are used to put the device into D3cold and back. We know that in Windows
> > this all works fine so unless Windows quirks the root port the same way
> > there is another reason behind this.
> >
> > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > itself do lots of things and it is hard to follow the dissassembled
> > ASL which does not have any comments but there are couple of things that
> > stand out where we may go into a different path. One of them is this in
> > the PGOF() method:
> >
> >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> >
> > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > _REV")) so it might be that Dell people tested this at some point in
> > Linux as well. Added Mario in case he has any ideas.
> >
> > Previously I suggested you to try the ACPI method tracing to see what
> > happens inside PGOF(). Did you have time to try it? It may provide more
> > information about that is happening inside those methods and hopefully
> > point us to the root cause.
> >
> > Also if you haven't tried already passing acpi_rev_override in the
> > command line makes the _REV to return 5 so it should go into the "Linux"
> > path in PGOF().
> 
> Oh, so does it look like we are trying to work around AML that tried
> to work around some problematic behavior in Linux at one point?

Yes, it looks like so if I read the ASL right. The whole method looks
like below (the full acpidump was shared by Karol in v3 thread) and
there is similar check in the _ON (PGON) method:

        Method (PGOF, 1, Serialized)
        {
            PIOF = Arg0
            If ((PIOF == Zero))
            {
                If ((SGGP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == One))
            {
                If ((P1GP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == 0x02))
            {
                If ((P2GP == Zero))
                {
                    Return (Zero)
                }
            }

            PEBA = \XBAS /* External reference */
            PDEV = GDEV (PIOF)
            PFUN = GFUN (PIOF)
            Name (SCLK, Package (0x03)
            {
                One, 
                0x80, 
                Zero
            })
            If ((CCHK (PIOF, Zero) == Zero))
            {
                Return (Zero)
            }

            \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
            If ((Arg0 == Zero))
            {
                ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
                H0VI = S0VI /* \_SB_.PCI0.S0VI */
                H0DI = S0DI /* \_SB_.PCI0.S0DI */
                ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
            }
            ElseIf ((Arg0 == One))
            {
                ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
                H1VI = S1VI /* \_SB_.PCI0.S1VI */
                H1DI = S1DI /* \_SB_.PCI0.S1DI */
                ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
            }
            ElseIf ((Arg0 == 0x02))
            {
                ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
                H2VI = S2VI /* \_SB_.PCI0.S2VI */
                H2DI = S2DI /* \_SB_.PCI0.S2DI */
                ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
            }

            If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
                0x05))))
            {
                If ((PIOF == Zero))
                {
                    P0LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P0LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P0RM = One
                    P0AP = 0x03
                }
                ElseIf ((PIOF == One))
                {
                    P1LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P1LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P1RM = One
                    P1AP = 0x03
                }
                ElseIf ((PIOF == 0x02))
                {
                    P2LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P2LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P2RM = One
                    P2AP = 0x03
                }

                If ((PBGE != Zero))
                {
                    If (SBDL (PIOF))
                    {
                        MBDL = GMXB (PIOF)
                        PDUB (PIOF, MBDL)
                    }
                }
            }
            Else
            {
                LKDS (PIOF)
            }

            If ((DerefOf (SCLK [Zero]) != Zero))
            {
                PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
                Sleep (0x10)
            }

            GPPR (PIOF, Zero)
            If ((OSYS != 0x07D9))
            {
                DIWK (PIOF)
            }

            \_SB.SGOV (0x01010004, Zero)
            Sleep (0x14)
            Return (Zero)
        }
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-20 11:22           ` Mika Westerberg
@ 2019-11-20 11:48             ` Rafael J. Wysocki
  -1 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 11:48 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > Hi Karol,
> > >
> > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > >
> > > > > [+cc Dave]
> > > > >
> > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > > states.
> > > > > >
> > > > > > v2: convert to pci_dev quirk
> > > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > > v4: simplify quirk by setting flag on the GPU itself
> > > > >
> > > > > I have zero confidence that we understand the real problem, but we do
> > > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > > minor procedural stuff below straightened out.
> > > > >
> > > >
> > > > Thanks, and I agree with your statement, but at this point I think
> > > > only Intel can help out digging deeper as I see no way to debug this
> > > > further.
> > >
> > > I don't have anything against this patch, as long as the quirk stays
> > > limited to the particular root port leading to the NVIDIA GPU. The
> > > reason why I think it should to be limited is that I'm pretty certain
> > > the problem is not in the root port itself. I have here a KBL based
> > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > > (it is connected to PCH root port) and it wakes up there just fine, so
> > > don't want to break that.
> > >
> > > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > > need help from the platform side which is ACPI in this case. This is
> > > done by having the device to have _PR3 method that returns one or more
> > > power resources that the OS is supposed to turn off when the device is
> > > put into D3cold. All of that is implemented as form of ACPI methods that
> > > pretty much do the hardware specific things that are outside of PCIe
> > > spec to get the device into D3cold. At high level the _OFF() method
> > > causes the root port to broadcast PME_Turn_Off message that results the
> > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > > can be GPIOs) and finally removes power (if the link goes into L3,
> > > otherwise it goes into L2).
> > >
> > > I think this is where the problem actually lies - the ASL methods that
> > > are used to put the device into D3cold and back. We know that in Windows
> > > this all works fine so unless Windows quirks the root port the same way
> > > there is another reason behind this.
> > >
> > > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > > itself do lots of things and it is hard to follow the dissassembled
> > > ASL which does not have any comments but there are couple of things that
> > > stand out where we may go into a different path. One of them is this in
> > > the PGOF() method:
> > >
> > >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> > >
> > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > > _REV")) so it might be that Dell people tested this at some point in
> > > Linux as well. Added Mario in case he has any ideas.
> > >
> > > Previously I suggested you to try the ACPI method tracing to see what
> > > happens inside PGOF(). Did you have time to try it? It may provide more
> > > information about that is happening inside those methods and hopefully
> > > point us to the root cause.
> > >
> > > Also if you haven't tried already passing acpi_rev_override in the
> > > command line makes the _REV to return 5 so it should go into the "Linux"
> > > path in PGOF().
> >
> > Oh, so does it look like we are trying to work around AML that tried
> > to work around some problematic behavior in Linux at one point?
>
> Yes, it looks like so if I read the ASL right.

OK, so that would call for a DMI-based quirk as the real cause for the
issue seems to be the AML in question, which means a firmware problem.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:48             ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 11:48 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > Hi Karol,
> > >
> > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > >
> > > > > [+cc Dave]
> > > > >
> > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > > states.
> > > > > >
> > > > > > v2: convert to pci_dev quirk
> > > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > > v4: simplify quirk by setting flag on the GPU itself
> > > > >
> > > > > I have zero confidence that we understand the real problem, but we do
> > > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > > minor procedural stuff below straightened out.
> > > > >
> > > >
> > > > Thanks, and I agree with your statement, but at this point I think
> > > > only Intel can help out digging deeper as I see no way to debug this
> > > > further.
> > >
> > > I don't have anything against this patch, as long as the quirk stays
> > > limited to the particular root port leading to the NVIDIA GPU. The
> > > reason why I think it should to be limited is that I'm pretty certain
> > > the problem is not in the root port itself. I have here a KBL based
> > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > > (it is connected to PCH root port) and it wakes up there just fine, so
> > > don't want to break that.
> > >
> > > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > > need help from the platform side which is ACPI in this case. This is
> > > done by having the device to have _PR3 method that returns one or more
> > > power resources that the OS is supposed to turn off when the device is
> > > put into D3cold. All of that is implemented as form of ACPI methods that
> > > pretty much do the hardware specific things that are outside of PCIe
> > > spec to get the device into D3cold. At high level the _OFF() method
> > > causes the root port to broadcast PME_Turn_Off message that results the
> > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > > can be GPIOs) and finally removes power (if the link goes into L3,
> > > otherwise it goes into L2).
> > >
> > > I think this is where the problem actually lies - the ASL methods that
> > > are used to put the device into D3cold and back. We know that in Windows
> > > this all works fine so unless Windows quirks the root port the same way
> > > there is another reason behind this.
> > >
> > > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > > itself do lots of things and it is hard to follow the dissassembled
> > > ASL which does not have any comments but there are couple of things that
> > > stand out where we may go into a different path. One of them is this in
> > > the PGOF() method:
> > >
> > >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> > >
> > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > > _REV")) so it might be that Dell people tested this at some point in
> > > Linux as well. Added Mario in case he has any ideas.
> > >
> > > Previously I suggested you to try the ACPI method tracing to see what
> > > happens inside PGOF(). Did you have time to try it? It may provide more
> > > information about that is happening inside those methods and hopefully
> > > point us to the root cause.
> > >
> > > Also if you haven't tried already passing acpi_rev_override in the
> > > command line makes the _REV to return 5 so it should go into the "Linux"
> > > path in PGOF().
> >
> > Oh, so does it look like we are trying to work around AML that tried
> > to work around some problematic behavior in Linux at one point?
>
> Yes, it looks like so if I read the ASL right.

OK, so that would call for a DMI-based quirk as the real cause for the
issue seems to be the AML in question, which means a firmware problem.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:51               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > Hi Karol,
> > > >
> > > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > >
> > > > > > [+cc Dave]
> > > > > >
> > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > > > states.
> > > > > > >
> > > > > > > v2: convert to pci_dev quirk
> > > > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > > > v4: simplify quirk by setting flag on the GPU itself
> > > > > >
> > > > > > I have zero confidence that we understand the real problem, but we do
> > > > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > > > minor procedural stuff below straightened out.
> > > > > >
> > > > >
> > > > > Thanks, and I agree with your statement, but at this point I think
> > > > > only Intel can help out digging deeper as I see no way to debug this
> > > > > further.
> > > >
> > > > I don't have anything against this patch, as long as the quirk stays
> > > > limited to the particular root port leading to the NVIDIA GPU. The
> > > > reason why I think it should to be limited is that I'm pretty certain
> > > > the problem is not in the root port itself. I have here a KBL based
> > > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > > > (it is connected to PCH root port) and it wakes up there just fine, so
> > > > don't want to break that.
> > > >
> > > > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > > > need help from the platform side which is ACPI in this case. This is
> > > > done by having the device to have _PR3 method that returns one or more
> > > > power resources that the OS is supposed to turn off when the device is
> > > > put into D3cold. All of that is implemented as form of ACPI methods that
> > > > pretty much do the hardware specific things that are outside of PCIe
> > > > spec to get the device into D3cold. At high level the _OFF() method
> > > > causes the root port to broadcast PME_Turn_Off message that results the
> > > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > > > can be GPIOs) and finally removes power (if the link goes into L3,
> > > > otherwise it goes into L2).
> > > >
> > > > I think this is where the problem actually lies - the ASL methods that
> > > > are used to put the device into D3cold and back. We know that in Windows
> > > > this all works fine so unless Windows quirks the root port the same way
> > > > there is another reason behind this.
> > > >
> > > > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > > > itself do lots of things and it is hard to follow the dissassembled
> > > > ASL which does not have any comments but there are couple of things that
> > > > stand out where we may go into a different path. One of them is this in
> > > > the PGOF() method:
> > > >
> > > >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> > > >
> > > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > > > _REV")) so it might be that Dell people tested this at some point in
> > > > Linux as well. Added Mario in case he has any ideas.
> > > >
> > > > Previously I suggested you to try the ACPI method tracing to see what
> > > > happens inside PGOF(). Did you have time to try it? It may provide more
> > > > information about that is happening inside those methods and hopefully
> > > > point us to the root cause.
> > > >
> > > > Also if you haven't tried already passing acpi_rev_override in the
> > > > command line makes the _REV to return 5 so it should go into the "Linux"
> > > > path in PGOF().
> > >
> > > Oh, so does it look like we are trying to work around AML that tried
> > > to work around some problematic behavior in Linux at one point?
> >
> > Yes, it looks like so if I read the ASL right.
>
> OK, so that would call for a DMI-based quirk as the real cause for the
> issue seems to be the AML in question, which means a firmware problem.
>

And I disagree as this is a linux specific workaround and windows goes
that path and succeeds. This firmware based workaround was added,
because it broke on Linux.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:51               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > Hi Karol,
> > > >
> > > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> > > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > >
> > > > > > [+cc Dave]
> > > > > >
> > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
> > > > > > > states.
> > > > > > >
> > > > > > > v2: convert to pci_dev quirk
> > > > > > >     put a proper technical explanation of the issue as a in-code comment
> > > > > > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > > > > > v4: simplify quirk by setting flag on the GPU itself
> > > > > >
> > > > > > I have zero confidence that we understand the real problem, but we do
> > > > > > need to do something with this.  I'll merge it for v5.5 if we get the
> > > > > > minor procedural stuff below straightened out.
> > > > > >
> > > > >
> > > > > Thanks, and I agree with your statement, but at this point I think
> > > > > only Intel can help out digging deeper as I see no way to debug this
> > > > > further.
> > > >
> > > > I don't have anything against this patch, as long as the quirk stays
> > > > limited to the particular root port leading to the NVIDIA GPU. The
> > > > reason why I think it should to be limited is that I'm pretty certain
> > > > the problem is not in the root port itself. I have here a KBL based
> > > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold
> > > > (it is connected to PCH root port) and it wakes up there just fine, so
> > > > don't want to break that.
> > > >
> > > > Now, PCIe devices cannot go into D3cold all by themselves. They always
> > > > need help from the platform side which is ACPI in this case. This is
> > > > done by having the device to have _PR3 method that returns one or more
> > > > power resources that the OS is supposed to turn off when the device is
> > > > put into D3cold. All of that is implemented as form of ACPI methods that
> > > > pretty much do the hardware specific things that are outside of PCIe
> > > > spec to get the device into D3cold. At high level the _OFF() method
> > > > causes the root port to broadcast PME_Turn_Off message that results the
> > > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both
> > > > can be GPIOs) and finally removes power (if the link goes into L3,
> > > > otherwise it goes into L2).
> > > >
> > > > I think this is where the problem actually lies - the ASL methods that
> > > > are used to put the device into D3cold and back. We know that in Windows
> > > > this all works fine so unless Windows quirks the root port the same way
> > > > there is another reason behind this.
> > > >
> > > > In case of Dell XPS 9560 (IIRC that's the machine you have) the
> > > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its
> > > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods
> > > > itself do lots of things and it is hard to follow the dissassembled
> > > > ASL which does not have any comments but there are couple of things that
> > > > stand out where we may go into a different path. One of them is this in
> > > > the PGOF() method:
> > > >
> > > >    If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
> > > >
> > > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux
> > > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override
> > > > _REV")) so it might be that Dell people tested this at some point in
> > > > Linux as well. Added Mario in case he has any ideas.
> > > >
> > > > Previously I suggested you to try the ACPI method tracing to see what
> > > > happens inside PGOF(). Did you have time to try it? It may provide more
> > > > information about that is happening inside those methods and hopefully
> > > > point us to the root cause.
> > > >
> > > > Also if you haven't tried already passing acpi_rev_override in the
> > > > command line makes the _REV to return 5 so it should go into the "Linux"
> > > > path in PGOF().
> > >
> > > Oh, so does it look like we are trying to work around AML that tried
> > > to work around some problematic behavior in Linux at one point?
> >
> > Yes, it looks like so if I read the ASL right.
>
> OK, so that would call for a DMI-based quirk as the real cause for the
> issue seems to be the AML in question, which means a firmware problem.
>

And I disagree as this is a linux specific workaround and windows goes
that path and succeeds. This firmware based workaround was added,
because it broke on Linux.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:51             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 11:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
>             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
>                 0x05))))
>             {

The OSYS comes from this (in DSDT):

                If (_OSI ("Windows 2009"))
                {
                    OSYS = 0x07D9
                }

                If (_OSI ("Windows 2012"))
                {
                    OSYS = 0x07DC
                }

                If (_OSI ("Windows 2013"))
                {
                    OSYS = 0x07DD
                }

                If (_OSI ("Windows 2015"))
                {
                    OSYS = 0x07DF
                }

So I guess this particular check tries to identify Windows 7 and older,
and Linux.

>                 If ((PIOF == Zero))
>                 {
>                     P0LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P0LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P0RM = One
>                     P0AP = 0x03
>                 }
>                 ElseIf ((PIOF == One))
>                 {
>                     P1LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P1LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P1RM = One
>                     P1AP = 0x03
>                 }
>                 ElseIf ((PIOF == 0x02))
>                 {
>                     P2LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P2LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P2RM = One
>                     P2AP = 0x03
>                 }
> 
>                 If ((PBGE != Zero))
>                 {
>                     If (SBDL (PIOF))
>                     {
>                         MBDL = GMXB (PIOF)
>                         PDUB (PIOF, MBDL)
>                     }
>                 }
>             }
>             Else
>             {
>                 LKDS (PIOF)
>             }
> 
>             If ((DerefOf (SCLK [Zero]) != Zero))
>             {
>                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
>                 Sleep (0x10)
>             }
> 
>             GPPR (PIOF, Zero)
>             If ((OSYS != 0x07D9))
>             {
>                 DIWK (PIOF)
>             }
> 
>             \_SB.SGOV (0x01010004, Zero)
>             Sleep (0x14)
>             Return (Zero)
>         }

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:51             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 11:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
>             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
>                 0x05))))
>             {

The OSYS comes from this (in DSDT):

                If (_OSI ("Windows 2009"))
                {
                    OSYS = 0x07D9
                }

                If (_OSI ("Windows 2012"))
                {
                    OSYS = 0x07DC
                }

                If (_OSI ("Windows 2013"))
                {
                    OSYS = 0x07DD
                }

                If (_OSI ("Windows 2015"))
                {
                    OSYS = 0x07DF
                }

So I guess this particular check tries to identify Windows 7 and older,
and Linux.

>                 If ((PIOF == Zero))
>                 {
>                     P0LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P0LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P0RM = One
>                     P0AP = 0x03
>                 }
>                 ElseIf ((PIOF == One))
>                 {
>                     P1LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P1LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P1RM = One
>                     P1AP = 0x03
>                 }
>                 ElseIf ((PIOF == 0x02))
>                 {
>                     P2LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P2LT == 0x08))
>                         {
>                             Break
>                         }
> 
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
> 
>                     P2RM = One
>                     P2AP = 0x03
>                 }
> 
>                 If ((PBGE != Zero))
>                 {
>                     If (SBDL (PIOF))
>                     {
>                         MBDL = GMXB (PIOF)
>                         PDUB (PIOF, MBDL)
>                     }
>                 }
>             }
>             Else
>             {
>                 LKDS (PIOF)
>             }
> 
>             If ((DerefOf (SCLK [Zero]) != Zero))
>             {
>                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
>                 Sleep (0x10)
>             }
> 
>             GPPR (PIOF, Zero)
>             If ((OSYS != 0x07D9))
>             {
>                 DIWK (PIOF)
>             }
> 
>             \_SB.SGOV (0x01010004, Zero)
>             Sleep (0x14)
>             Return (Zero)
>         }
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:54               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:54 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
field name) on the bridge controller to turn of the device, on other
versions it uses the "older"? 0xb0 register and the P0LD field, which
is documented, where the former is not.

On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> >                 0x05))))
> >             {
>
> The OSYS comes from this (in DSDT):
>
>                 If (_OSI ("Windows 2009"))
>                 {
>                     OSYS = 0x07D9
>                 }
>
>                 If (_OSI ("Windows 2012"))
>                 {
>                     OSYS = 0x07DC
>                 }
>
>                 If (_OSI ("Windows 2013"))
>                 {
>                     OSYS = 0x07DD
>                 }
>
>                 If (_OSI ("Windows 2015"))
>                 {
>                     OSYS = 0x07DF
>                 }
>
> So I guess this particular check tries to identify Windows 7 and older,
> and Linux.
>
> >                 If ((PIOF == Zero))
> >                 {
> >                     P0LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P0LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P0RM = One
> >                     P0AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == One))
> >                 {
> >                     P1LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P1LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P1RM = One
> >                     P1AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == 0x02))
> >                 {
> >                     P2LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P2LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P2RM = One
> >                     P2AP = 0x03
> >                 }
> >
> >                 If ((PBGE != Zero))
> >                 {
> >                     If (SBDL (PIOF))
> >                     {
> >                         MBDL = GMXB (PIOF)
> >                         PDUB (PIOF, MBDL)
> >                     }
> >                 }
> >             }
> >             Else
> >             {
> >                 LKDS (PIOF)
> >             }
> >
> >             If ((DerefOf (SCLK [Zero]) != Zero))
> >             {
> >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> >                 Sleep (0x10)
> >             }
> >
> >             GPPR (PIOF, Zero)
> >             If ((OSYS != 0x07D9))
> >             {
> >                 DIWK (PIOF)
> >             }
> >
> >             \_SB.SGOV (0x01010004, Zero)
> >             Sleep (0x14)
> >             Return (Zero)
> >         }
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:54               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:54 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
field name) on the bridge controller to turn of the device, on other
versions it uses the "older"? 0xb0 register and the P0LD field, which
is documented, where the former is not.

On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> >                 0x05))))
> >             {
>
> The OSYS comes from this (in DSDT):
>
>                 If (_OSI ("Windows 2009"))
>                 {
>                     OSYS = 0x07D9
>                 }
>
>                 If (_OSI ("Windows 2012"))
>                 {
>                     OSYS = 0x07DC
>                 }
>
>                 If (_OSI ("Windows 2013"))
>                 {
>                     OSYS = 0x07DD
>                 }
>
>                 If (_OSI ("Windows 2015"))
>                 {
>                     OSYS = 0x07DF
>                 }
>
> So I guess this particular check tries to identify Windows 7 and older,
> and Linux.
>
> >                 If ((PIOF == Zero))
> >                 {
> >                     P0LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P0LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P0RM = One
> >                     P0AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == One))
> >                 {
> >                     P1LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P1LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P1RM = One
> >                     P1AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == 0x02))
> >                 {
> >                     P2LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P2LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P2RM = One
> >                     P2AP = 0x03
> >                 }
> >
> >                 If ((PBGE != Zero))
> >                 {
> >                     If (SBDL (PIOF))
> >                     {
> >                         MBDL = GMXB (PIOF)
> >                         PDUB (PIOF, MBDL)
> >                     }
> >                 }
> >             }
> >             Else
> >             {
> >                 LKDS (PIOF)
> >             }
> >
> >             If ((DerefOf (SCLK [Zero]) != Zero))
> >             {
> >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> >                 Sleep (0x10)
> >             }
> >
> >             GPPR (PIOF, Zero)
> >             If ((OSYS != 0x07D9))
> >             {
> >                 DIWK (PIOF)
> >             }
> >
> >             \_SB.SGOV (0x01010004, Zero)
> >             Sleep (0x14)
> >             Return (Zero)
> >         }
>

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:54               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:54 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
field name) on the bridge controller to turn of the device, on other
versions it uses the "older"? 0xb0 register and the P0LD field, which
is documented, where the former is not.

On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> >                 0x05))))
> >             {
>
> The OSYS comes from this (in DSDT):
>
>                 If (_OSI ("Windows 2009"))
>                 {
>                     OSYS = 0x07D9
>                 }
>
>                 If (_OSI ("Windows 2012"))
>                 {
>                     OSYS = 0x07DC
>                 }
>
>                 If (_OSI ("Windows 2013"))
>                 {
>                     OSYS = 0x07DD
>                 }
>
>                 If (_OSI ("Windows 2015"))
>                 {
>                     OSYS = 0x07DF
>                 }
>
> So I guess this particular check tries to identify Windows 7 and older,
> and Linux.
>
> >                 If ((PIOF == Zero))
> >                 {
> >                     P0LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P0LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P0RM = One
> >                     P0AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == One))
> >                 {
> >                     P1LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P1LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P1RM = One
> >                     P1AP = 0x03
> >                 }
> >                 ElseIf ((PIOF == 0x02))
> >                 {
> >                     P2LD = One
> >                     TCNT = Zero
> >                     While ((TCNT < LDLY))
> >                     {
> >                         If ((P2LT == 0x08))
> >                         {
> >                             Break
> >                         }
> >
> >                         Sleep (0x10)
> >                         TCNT += 0x10
> >                     }
> >
> >                     P2RM = One
> >                     P2AP = 0x03
> >                 }
> >
> >                 If ((PBGE != Zero))
> >                 {
> >                     If (SBDL (PIOF))
> >                     {
> >                         MBDL = GMXB (PIOF)
> >                         PDUB (PIOF, MBDL)
> >                     }
> >                 }
> >             }
> >             Else
> >             {
> >                 LKDS (PIOF)
> >             }
> >
> >             If ((DerefOf (SCLK [Zero]) != Zero))
> >             {
> >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> >                 Sleep (0x10)
> >             }
> >
> >             GPPR (PIOF, Zero)
> >             If ((OSYS != 0x07D9))
> >             {
> >                 DIWK (PIOF)
> >             }
> >
> >             \_SB.SGOV (0x01010004, Zero)
> >             Sleep (0x14)
> >             Return (Zero)
> >         }
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:58                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:58 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

overall, what I really want to know is, _why_ does it work on windows?
Or what are we doing differently on Linux so that it doesn't work? If
anybody has any idea on how we could dig into this and figure it out
on this level, this would probably allow us to get closer to the root
cause? no?

On Wed, Nov 20, 2019 at 12:54 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
> field name) on the bridge controller to turn of the device, on other
> versions it uses the "older"? 0xb0 register and the P0LD field, which
> is documented, where the former is not.
>
> On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> > >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> > >                 0x05))))
> > >             {
> >
> > The OSYS comes from this (in DSDT):
> >
> >                 If (_OSI ("Windows 2009"))
> >                 {
> >                     OSYS = 0x07D9
> >                 }
> >
> >                 If (_OSI ("Windows 2012"))
> >                 {
> >                     OSYS = 0x07DC
> >                 }
> >
> >                 If (_OSI ("Windows 2013"))
> >                 {
> >                     OSYS = 0x07DD
> >                 }
> >
> >                 If (_OSI ("Windows 2015"))
> >                 {
> >                     OSYS = 0x07DF
> >                 }
> >
> > So I guess this particular check tries to identify Windows 7 and older,
> > and Linux.
> >
> > >                 If ((PIOF == Zero))
> > >                 {
> > >                     P0LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P0LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P0RM = One
> > >                     P0AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == One))
> > >                 {
> > >                     P1LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P1LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P1RM = One
> > >                     P1AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == 0x02))
> > >                 {
> > >                     P2LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P2LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P2RM = One
> > >                     P2AP = 0x03
> > >                 }
> > >
> > >                 If ((PBGE != Zero))
> > >                 {
> > >                     If (SBDL (PIOF))
> > >                     {
> > >                         MBDL = GMXB (PIOF)
> > >                         PDUB (PIOF, MBDL)
> > >                     }
> > >                 }
> > >             }
> > >             Else
> > >             {
> > >                 LKDS (PIOF)
> > >             }
> > >
> > >             If ((DerefOf (SCLK [Zero]) != Zero))
> > >             {
> > >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> > >                 Sleep (0x10)
> > >             }
> > >
> > >             GPPR (PIOF, Zero)
> > >             If ((OSYS != 0x07D9))
> > >             {
> > >                 DIWK (PIOF)
> > >             }
> > >
> > >             \_SB.SGOV (0x01010004, Zero)
> > >             Sleep (0x14)
> > >             Return (Zero)
> > >         }
> >


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:58                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:58 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

overall, what I really want to know is, _why_ does it work on windows?
Or what are we doing differently on Linux so that it doesn't work? If
anybody has any idea on how we could dig into this and figure it out
on this level, this would probably allow us to get closer to the root
cause? no?

On Wed, Nov 20, 2019 at 12:54 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
> field name) on the bridge controller to turn of the device, on other
> versions it uses the "older"? 0xb0 register and the P0LD field, which
> is documented, where the former is not.
>
> On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> > >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> > >                 0x05))))
> > >             {
> >
> > The OSYS comes from this (in DSDT):
> >
> >                 If (_OSI ("Windows 2009"))
> >                 {
> >                     OSYS = 0x07D9
> >                 }
> >
> >                 If (_OSI ("Windows 2012"))
> >                 {
> >                     OSYS = 0x07DC
> >                 }
> >
> >                 If (_OSI ("Windows 2013"))
> >                 {
> >                     OSYS = 0x07DD
> >                 }
> >
> >                 If (_OSI ("Windows 2015"))
> >                 {
> >                     OSYS = 0x07DF
> >                 }
> >
> > So I guess this particular check tries to identify Windows 7 and older,
> > and Linux.
> >
> > >                 If ((PIOF == Zero))
> > >                 {
> > >                     P0LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P0LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P0RM = One
> > >                     P0AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == One))
> > >                 {
> > >                     P1LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P1LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P1RM = One
> > >                     P1AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == 0x02))
> > >                 {
> > >                     P2LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P2LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P2RM = One
> > >                     P2AP = 0x03
> > >                 }
> > >
> > >                 If ((PBGE != Zero))
> > >                 {
> > >                     If (SBDL (PIOF))
> > >                     {
> > >                         MBDL = GMXB (PIOF)
> > >                         PDUB (PIOF, MBDL)
> > >                     }
> > >                 }
> > >             }
> > >             Else
> > >             {
> > >                 LKDS (PIOF)
> > >             }
> > >
> > >             If ((DerefOf (SCLK [Zero]) != Zero))
> > >             {
> > >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> > >                 Sleep (0x10)
> > >             }
> > >
> > >             GPPR (PIOF, Zero)
> > >             If ((OSYS != 0x07D9))
> > >             {
> > >                 DIWK (PIOF)
> > >             }
> > >
> > >             \_SB.SGOV (0x01010004, Zero)
> > >             Sleep (0x14)
> > >             Return (Zero)
> > >         }
> >

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 11:58                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 11:58 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

overall, what I really want to know is, _why_ does it work on windows?
Or what are we doing differently on Linux so that it doesn't work? If
anybody has any idea on how we could dig into this and figure it out
on this level, this would probably allow us to get closer to the root
cause? no?

On Wed, Nov 20, 2019 at 12:54 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> for newer Windows the firmware uses bit  0x80 on 0x248 (Q0L2 being the
> field name) on the bridge controller to turn of the device, on other
> versions it uses the "older"? 0xb0 register and the P0LD field, which
> is documented, where the former is not.
>
> On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote:
> > >             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
> > >                 0x05))))
> > >             {
> >
> > The OSYS comes from this (in DSDT):
> >
> >                 If (_OSI ("Windows 2009"))
> >                 {
> >                     OSYS = 0x07D9
> >                 }
> >
> >                 If (_OSI ("Windows 2012"))
> >                 {
> >                     OSYS = 0x07DC
> >                 }
> >
> >                 If (_OSI ("Windows 2013"))
> >                 {
> >                     OSYS = 0x07DD
> >                 }
> >
> >                 If (_OSI ("Windows 2015"))
> >                 {
> >                     OSYS = 0x07DF
> >                 }
> >
> > So I guess this particular check tries to identify Windows 7 and older,
> > and Linux.
> >
> > >                 If ((PIOF == Zero))
> > >                 {
> > >                     P0LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P0LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P0RM = One
> > >                     P0AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == One))
> > >                 {
> > >                     P1LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P1LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P1RM = One
> > >                     P1AP = 0x03
> > >                 }
> > >                 ElseIf ((PIOF == 0x02))
> > >                 {
> > >                     P2LD = One
> > >                     TCNT = Zero
> > >                     While ((TCNT < LDLY))
> > >                     {
> > >                         If ((P2LT == 0x08))
> > >                         {
> > >                             Break
> > >                         }
> > >
> > >                         Sleep (0x10)
> > >                         TCNT += 0x10
> > >                     }
> > >
> > >                     P2RM = One
> > >                     P2AP = 0x03
> > >                 }
> > >
> > >                 If ((PBGE != Zero))
> > >                 {
> > >                     If (SBDL (PIOF))
> > >                     {
> > >                         MBDL = GMXB (PIOF)
> > >                         PDUB (PIOF, MBDL)
> > >                     }
> > >                 }
> > >             }
> > >             Else
> > >             {
> > >                 LKDS (PIOF)
> > >             }
> > >
> > >             If ((DerefOf (SCLK [Zero]) != Zero))
> > >             {
> > >                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
> > >                 Sleep (0x10)
> > >             }
> > >
> > >             GPPR (PIOF, Zero)
> > >             If ((OSYS != 0x07D9))
> > >             {
> > >                 DIWK (PIOF)
> > >             }
> > >
> > >             \_SB.SGOV (0x01010004, Zero)
> > >             Sleep (0x14)
> > >             Return (Zero)
> > >         }
> >

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:06                 ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >

[cut]

> > > >
> > > > Oh, so does it look like we are trying to work around AML that tried
> > > > to work around some problematic behavior in Linux at one point?
> > >
> > > Yes, it looks like so if I read the ASL right.
> >
> > OK, so that would call for a DMI-based quirk as the real cause for the
> > issue seems to be the AML in question, which means a firmware problem.
> >
>
> And I disagree as this is a linux specific workaround and windows goes
> that path and succeeds. This firmware based workaround was added,
> because it broke on Linux.

Apparently so at the time it was added, but would it still break after
the kernel changes made since then?

Moreover, has it not become harmful now?  IOW, wouldn't it work after
removing the "Linux workaround" from the AML?

The only way to verify that I can see would be to run the system with
custom ACPI tables without the "Linux workaround" in the AML in
question.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:06                 ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >

[cut]

> > > >
> > > > Oh, so does it look like we are trying to work around AML that tried
> > > > to work around some problematic behavior in Linux at one point?
> > >
> > > Yes, it looks like so if I read the ASL right.
> >
> > OK, so that would call for a DMI-based quirk as the real cause for the
> > issue seems to be the AML in question, which means a firmware problem.
> >
>
> And I disagree as this is a linux specific workaround and windows goes
> that path and succeeds. This firmware based workaround was added,
> because it broke on Linux.

Apparently so at the time it was added, but would it still break after
the kernel changes made since then?

Moreover, has it not become harmful now?  IOW, wouldn't it work after
removing the "Linux workaround" from the AML?

The only way to verify that I can see would be to run the system with
custom ACPI tables without the "Linux workaround" in the AML in
question.
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:06                 ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:06 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >

[cut]

> > > >
> > > > Oh, so does it look like we are trying to work around AML that tried
> > > > to work around some problematic behavior in Linux at one point?
> > >
> > > Yes, it looks like so if I read the ASL right.
> >
> > OK, so that would call for a DMI-based quirk as the real cause for the
> > issue seems to be the AML in question, which means a firmware problem.
> >
>
> And I disagree as this is a linux specific workaround and windows goes
> that path and succeeds. This firmware based workaround was added,
> because it broke on Linux.

Apparently so at the time it was added, but would it still break after
the kernel changes made since then?

Moreover, has it not become harmful now?  IOW, wouldn't it work after
removing the "Linux workaround" from the AML?

The only way to verify that I can see would be to run the system with
custom ACPI tables without the "Linux workaround" in the AML in
question.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:09                   ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 12:09 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> overall, what I really want to know is, _why_ does it work on windows?

So do I ;-)

> Or what are we doing differently on Linux so that it doesn't work? If
> anybody has any idea on how we could dig into this and figure it out
> on this level, this would probably allow us to get closer to the root
> cause? no?

Have you tried to use the acpi_rev_override parameter in your system and
does it have any effect?

Also did you try to trace the ACPI _ON/_OFF() methods? I think that
should hopefully reveal something.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:09                   ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 12:09 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> overall, what I really want to know is, _why_ does it work on windows?

So do I ;-)

> Or what are we doing differently on Linux so that it doesn't work? If
> anybody has any idea on how we could dig into this and figure it out
> on this level, this would probably allow us to get closer to the root
> cause? no?

Have you tried to use the acpi_rev_override parameter in your system and
does it have any effect?

Also did you try to trace the ACPI _ON/_OFF() methods? I think that
should hopefully reveal something.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:09                   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
>
> [cut]
>
> > > > >
> > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > to work around some problematic behavior in Linux at one point?
> > > >
> > > > Yes, it looks like so if I read the ASL right.
> > >
> > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > issue seems to be the AML in question, which means a firmware problem.
> > >
> >
> > And I disagree as this is a linux specific workaround and windows goes
> > that path and succeeds. This firmware based workaround was added,
> > because it broke on Linux.
>
> Apparently so at the time it was added, but would it still break after
> the kernel changes made since then?
>
> Moreover, has it not become harmful now?  IOW, wouldn't it work after
> removing the "Linux workaround" from the AML?
>
> The only way to verify that I can see would be to run the system with
> custom ACPI tables without the "Linux workaround" in the AML in
> question.
>

the workaround is not enabled by default, because it has to be
explicitly enabled by the user.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:09                   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
>
> [cut]
>
> > > > >
> > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > to work around some problematic behavior in Linux at one point?
> > > >
> > > > Yes, it looks like so if I read the ASL right.
> > >
> > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > issue seems to be the AML in question, which means a firmware problem.
> > >
> >
> > And I disagree as this is a linux specific workaround and windows goes
> > that path and succeeds. This firmware based workaround was added,
> > because it broke on Linux.
>
> Apparently so at the time it was added, but would it still break after
> the kernel changes made since then?
>
> Moreover, has it not become harmful now?  IOW, wouldn't it work after
> removing the "Linux workaround" from the AML?
>
> The only way to verify that I can see would be to run the system with
> custom ACPI tables without the "Linux workaround" in the AML in
> question.
>

the workaround is not enabled by default, because it has to be
explicitly enabled by the user.

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:09                   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
>
> [cut]
>
> > > > >
> > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > to work around some problematic behavior in Linux at one point?
> > > >
> > > > Yes, it looks like so if I read the ASL right.
> > >
> > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > issue seems to be the AML in question, which means a firmware problem.
> > >
> >
> > And I disagree as this is a linux specific workaround and windows goes
> > that path and succeeds. This firmware based workaround was added,
> > because it broke on Linux.
>
> Apparently so at the time it was added, but would it still break after
> the kernel changes made since then?
>
> Moreover, has it not become harmful now?  IOW, wouldn't it work after
> removing the "Linux workaround" from the AML?
>
> The only way to verify that I can see would be to run the system with
> custom ACPI tables without the "Linux workaround" in the AML in
> question.
>

the workaround is not enabled by default, because it has to be
explicitly enabled by the user.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:11                   ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Karol Herbst, Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Rafael J . Wysocki, Linux PCI,
	Linux PM, dri-devel, nouveau, Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
>
> [cut]
>
> > > > >
> > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > to work around some problematic behavior in Linux at one point?
> > > >
> > > > Yes, it looks like so if I read the ASL right.
> > >
> > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > issue seems to be the AML in question, which means a firmware problem.
> > >
> >
> > And I disagree as this is a linux specific workaround and windows goes
> > that path and succeeds. This firmware based workaround was added,
> > because it broke on Linux.
>
> Apparently so at the time it was added, but would it still break after
> the kernel changes made since then?
>
> Moreover, has it not become harmful now?  IOW, wouldn't it work after
> removing the "Linux workaround" from the AML?
>
> The only way to verify that I can see would be to run the system with
> custom ACPI tables without the "Linux workaround" in the AML in
> question.

Or running it with acpi_rev_override as suggested by Mika, which
effectively would be the same thing.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:11                   ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Karol Herbst, Mika Westerberg
  Cc: Linux PM, Linux PCI, Mario Limonciello, Rafael J . Wysocki, LKML,
	dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
>
> [cut]
>
> > > > >
> > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > to work around some problematic behavior in Linux at one point?
> > > >
> > > > Yes, it looks like so if I read the ASL right.
> > >
> > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > issue seems to be the AML in question, which means a firmware problem.
> > >
> >
> > And I disagree as this is a linux specific workaround and windows goes
> > that path and succeeds. This firmware based workaround was added,
> > because it broke on Linux.
>
> Apparently so at the time it was added, but would it still break after
> the kernel changes made since then?
>
> Moreover, has it not become harmful now?  IOW, wouldn't it work after
> removing the "Linux workaround" from the AML?
>
> The only way to verify that I can see would be to run the system with
> custom ACPI tables without the "Linux workaround" in the AML in
> question.

Or running it with acpi_rev_override as suggested by Mika, which
effectively would be the same thing.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:11                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > overall, what I really want to know is, _why_ does it work on windows?
>
> So do I ;-)
>
> > Or what are we doing differently on Linux so that it doesn't work? If
> > anybody has any idea on how we could dig into this and figure it out
> > on this level, this would probably allow us to get closer to the root
> > cause? no?
>
> Have you tried to use the acpi_rev_override parameter in your system and
> does it have any effect?
>
> Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> should hopefully reveal something.
>

I think I did in the past and it seemed to have worked, there is just
one big issue with this: it's a Dell specific workaround afaik, and
this issue plagues not just Dell, but we've seen it on HP and Lenovo
laptops as well, and I've heard about users having the same issues on
Asus and MSI laptops as well.

I will spend some time to collect all the necessary information,
create a bug to put it all in there and send out a v5 with the updated
information and references to this bug.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:11                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > overall, what I really want to know is, _why_ does it work on windows?
>
> So do I ;-)
>
> > Or what are we doing differently on Linux so that it doesn't work? If
> > anybody has any idea on how we could dig into this and figure it out
> > on this level, this would probably allow us to get closer to the root
> > cause? no?
>
> Have you tried to use the acpi_rev_override parameter in your system and
> does it have any effect?
>
> Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> should hopefully reveal something.
>

I think I did in the past and it seemed to have worked, there is just
one big issue with this: it's a Dell specific workaround afaik, and
this issue plagues not just Dell, but we've seen it on HP and Lenovo
laptops as well, and I've heard about users having the same issues on
Asus and MSI laptops as well.

I will spend some time to collect all the necessary information,
create a bug to put it all in there and send out a v5 with the updated
information and references to this bug.

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:11                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > overall, what I really want to know is, _why_ does it work on windows?
>
> So do I ;-)
>
> > Or what are we doing differently on Linux so that it doesn't work? If
> > anybody has any idea on how we could dig into this and figure it out
> > on this level, this would probably allow us to get closer to the root
> > cause? no?
>
> Have you tried to use the acpi_rev_override parameter in your system and
> does it have any effect?
>
> Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> should hopefully reveal something.
>

I think I did in the past and it seemed to have worked, there is just
one big issue with this: it's a Dell specific workaround afaik, and
this issue plagues not just Dell, but we've seen it on HP and Lenovo
laptops as well, and I've heard about users having the same issues on
Asus and MSI laptops as well.

I will spend some time to collect all the necessary information,
create a bug to put it all in there and send out a v5 with the updated
information and references to this bug.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:14                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:14 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> >
> > [cut]
> >
> > > > > >
> > > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > > to work around some problematic behavior in Linux at one point?
> > > > >
> > > > > Yes, it looks like so if I read the ASL right.
> > > >
> > > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > > issue seems to be the AML in question, which means a firmware problem.
> > > >
> > >
> > > And I disagree as this is a linux specific workaround and windows goes
> > > that path and succeeds. This firmware based workaround was added,
> > > because it broke on Linux.
> >
> > Apparently so at the time it was added, but would it still break after
> > the kernel changes made since then?
> >
> > Moreover, has it not become harmful now?  IOW, wouldn't it work after
> > removing the "Linux workaround" from the AML?
> >
> > The only way to verify that I can see would be to run the system with
> > custom ACPI tables without the "Linux workaround" in the AML in
> > question.
> >
>
> the workaround is not enabled by default, because it has to be
> explicitly enabled by the user.

I'm not sure what you are talking about.

I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05))
check mentioned by Mika which doesn't seem to depend on user input in
any way.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:14                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 12:14 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> >
> > [cut]
> >
> > > > > >
> > > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > > to work around some problematic behavior in Linux at one point?
> > > > >
> > > > > Yes, it looks like so if I read the ASL right.
> > > >
> > > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > > issue seems to be the AML in question, which means a firmware problem.
> > > >
> > >
> > > And I disagree as this is a linux specific workaround and windows goes
> > > that path and succeeds. This firmware based workaround was added,
> > > because it broke on Linux.
> >
> > Apparently so at the time it was added, but would it still break after
> > the kernel changes made since then?
> >
> > Moreover, has it not become harmful now?  IOW, wouldn't it work after
> > removing the "Linux workaround" from the AML?
> >
> > The only way to verify that I can see would be to run the system with
> > custom ACPI tables without the "Linux workaround" in the AML in
> > question.
> >
>
> the workaround is not enabled by default, because it has to be
> explicitly enabled by the user.

I'm not sure what you are talking about.

I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05))
check mentioned by Mika which doesn't seem to depend on user input in
any way.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:19                       ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:19 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

It depends on the kernel being built with ACPI_REV_OVERRIDE_POSSIBLE=y
and acpi_rev_override=1 being set on the kernel command line

On Wed, Nov 20, 2019 at 1:15 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > >
> > > [cut]
> > >
> > > > > > >
> > > > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > > > to work around some problematic behavior in Linux at one point?
> > > > > >
> > > > > > Yes, it looks like so if I read the ASL right.
> > > > >
> > > > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > > > issue seems to be the AML in question, which means a firmware problem.
> > > > >
> > > >
> > > > And I disagree as this is a linux specific workaround and windows goes
> > > > that path and succeeds. This firmware based workaround was added,
> > > > because it broke on Linux.
> > >
> > > Apparently so at the time it was added, but would it still break after
> > > the kernel changes made since then?
> > >
> > > Moreover, has it not become harmful now?  IOW, wouldn't it work after
> > > removing the "Linux workaround" from the AML?
> > >
> > > The only way to verify that I can see would be to run the system with
> > > custom ACPI tables without the "Linux workaround" in the AML in
> > > question.
> > >
> >
> > the workaround is not enabled by default, because it has to be
> > explicitly enabled by the user.
>
> I'm not sure what you are talking about.
>
> I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05))
> check mentioned by Mika which doesn't seem to depend on user input in
> any way.
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 12:19                       ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 12:19 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

It depends on the kernel being built with ACPI_REV_OVERRIDE_POSSIBLE=y
and acpi_rev_override=1 being set on the kernel command line

On Wed, Nov 20, 2019 at 1:15 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst <kherbst@redhat.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > >
> > > [cut]
> > >
> > > > > > >
> > > > > > > Oh, so does it look like we are trying to work around AML that tried
> > > > > > > to work around some problematic behavior in Linux at one point?
> > > > > >
> > > > > > Yes, it looks like so if I read the ASL right.
> > > > >
> > > > > OK, so that would call for a DMI-based quirk as the real cause for the
> > > > > issue seems to be the AML in question, which means a firmware problem.
> > > > >
> > > >
> > > > And I disagree as this is a linux specific workaround and windows goes
> > > > that path and succeeds. This firmware based workaround was added,
> > > > because it broke on Linux.
> > >
> > > Apparently so at the time it was added, but would it still break after
> > > the kernel changes made since then?
> > >
> > > Moreover, has it not become harmful now?  IOW, wouldn't it work after
> > > removing the "Linux workaround" from the AML?
> > >
> > > The only way to verify that I can see would be to run the system with
> > > custom ACPI tables without the "Linux workaround" in the AML in
> > > question.
> > >
> >
> > the workaround is not enabled by default, because it has to be
> > explicitly enabled by the user.
>
> I'm not sure what you are talking about.
>
> I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05))
> check mentioned by Mika which doesn't seem to depend on user input in
> any way.
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 15:15                       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 15:15 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > overall, what I really want to know is, _why_ does it work on windows?
> >
> > So do I ;-)
> >
> > > Or what are we doing differently on Linux so that it doesn't work? If
> > > anybody has any idea on how we could dig into this and figure it out
> > > on this level, this would probably allow us to get closer to the root
> > > cause? no?
> >
> > Have you tried to use the acpi_rev_override parameter in your system and
> > does it have any effect?
> >
> > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > should hopefully reveal something.
> >
> 
> I think I did in the past and it seemed to have worked, there is just
> one big issue with this: it's a Dell specific workaround afaik, and
> this issue plagues not just Dell, but we've seen it on HP and Lenovo
> laptops as well, and I've heard about users having the same issues on
> Asus and MSI laptops as well.

Maybe it is not a workaround at all but instead it simply determines
whether the system supports RTD3 or something like that (IIRC Windows 8
started supporting it). Maybe Dell added check for Linux because at that
time Linux did not support it.

In case RTD3 is supported it invokes LKDS() which probably does the L2
or L3 entry and this is for some reason does not work the same way in
Linux than it does with Windows 8+.

I don't remember if this happens only with nouveau or with the
proprietary driver as well but looking at the nouveau runtime PM suspend
hook (assuming I'm looking at the correct code):

static int
nouveau_pmops_runtime_suspend(struct device *dev)
{       
        struct pci_dev *pdev = to_pci_dev(dev);
        struct drm_device *drm_dev = pci_get_drvdata(pdev);
        int ret;

        if (!nouveau_pmops_runtime()) {
                pm_runtime_forbid(dev);
                return -EBUSY;
        }

        nouveau_switcheroo_optimus_dsm();
        ret = nouveau_do_suspend(drm_dev, true);
        pci_save_state(pdev);
        pci_disable_device(pdev);
        pci_ignore_hotplug(pdev);
        pci_set_power_state(pdev, PCI_D3cold);
        drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
        return ret;
}

Normally PCI drivers leave the PCI bus PM things to PCI core but here
the driver does these. So I wonder if it makes any difference if we let
the core handle all that:

static int
nouveau_pmops_runtime_suspend(struct device *dev)
{       
        struct pci_dev *pdev = to_pci_dev(dev);
        struct drm_device *drm_dev = pci_get_drvdata(pdev);
        int ret;

        if (!nouveau_pmops_runtime()) {
                pm_runtime_forbid(dev);
                return -EBUSY;
        }

        nouveau_switcheroo_optimus_dsm();
        ret = nouveau_do_suspend(drm_dev, true);
        pci_ignore_hotplug(pdev);
        drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
        return ret;
}

and similar for the nouveau_pmops_runtime_resume().

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 15:15                       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 15:15 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > overall, what I really want to know is, _why_ does it work on windows?
> >
> > So do I ;-)
> >
> > > Or what are we doing differently on Linux so that it doesn't work? If
> > > anybody has any idea on how we could dig into this and figure it out
> > > on this level, this would probably allow us to get closer to the root
> > > cause? no?
> >
> > Have you tried to use the acpi_rev_override parameter in your system and
> > does it have any effect?
> >
> > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > should hopefully reveal something.
> >
> 
> I think I did in the past and it seemed to have worked, there is just
> one big issue with this: it's a Dell specific workaround afaik, and
> this issue plagues not just Dell, but we've seen it on HP and Lenovo
> laptops as well, and I've heard about users having the same issues on
> Asus and MSI laptops as well.

Maybe it is not a workaround at all but instead it simply determines
whether the system supports RTD3 or something like that (IIRC Windows 8
started supporting it). Maybe Dell added check for Linux because at that
time Linux did not support it.

In case RTD3 is supported it invokes LKDS() which probably does the L2
or L3 entry and this is for some reason does not work the same way in
Linux than it does with Windows 8+.

I don't remember if this happens only with nouveau or with the
proprietary driver as well but looking at the nouveau runtime PM suspend
hook (assuming I'm looking at the correct code):

static int
nouveau_pmops_runtime_suspend(struct device *dev)
{       
        struct pci_dev *pdev = to_pci_dev(dev);
        struct drm_device *drm_dev = pci_get_drvdata(pdev);
        int ret;

        if (!nouveau_pmops_runtime()) {
                pm_runtime_forbid(dev);
                return -EBUSY;
        }

        nouveau_switcheroo_optimus_dsm();
        ret = nouveau_do_suspend(drm_dev, true);
        pci_save_state(pdev);
        pci_disable_device(pdev);
        pci_ignore_hotplug(pdev);
        pci_set_power_state(pdev, PCI_D3cold);
        drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
        return ret;
}

Normally PCI drivers leave the PCI bus PM things to PCI core but here
the driver does these. So I wonder if it makes any difference if we let
the core handle all that:

static int
nouveau_pmops_runtime_suspend(struct device *dev)
{       
        struct pci_dev *pdev = to_pci_dev(dev);
        struct drm_device *drm_dev = pci_get_drvdata(pdev);
        int ret;

        if (!nouveau_pmops_runtime()) {
                pm_runtime_forbid(dev);
                return -EBUSY;
        }

        nouveau_switcheroo_optimus_dsm();
        ret = nouveau_do_suspend(drm_dev, true);
        pci_ignore_hotplug(pdev);
        drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
        return ret;
}

and similar for the nouveau_pmops_runtime_resume().
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-20 15:15                       ` Mika Westerberg
@ 2019-11-20 15:37                         ` Karol Herbst
  -1 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 15:37 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > overall, what I really want to know is, _why_ does it work on windows?
> > >
> > > So do I ;-)
> > >
> > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > anybody has any idea on how we could dig into this and figure it out
> > > > on this level, this would probably allow us to get closer to the root
> > > > cause? no?
> > >
> > > Have you tried to use the acpi_rev_override parameter in your system and
> > > does it have any effect?
> > >
> > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > should hopefully reveal something.
> > >
> >
> > I think I did in the past and it seemed to have worked, there is just
> > one big issue with this: it's a Dell specific workaround afaik, and
> > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > laptops as well, and I've heard about users having the same issues on
> > Asus and MSI laptops as well.
>
> Maybe it is not a workaround at all but instead it simply determines
> whether the system supports RTD3 or something like that (IIRC Windows 8
> started supporting it). Maybe Dell added check for Linux because at that
> time Linux did not support it.
>

the point is, it's not checking it by default, so by default you still
run into the windows 8 codepath.

> In case RTD3 is supported it invokes LKDS() which probably does the L2
> or L3 entry and this is for some reason does not work the same way in
> Linux than it does with Windows 8+.
>
> I don't remember if this happens only with nouveau or with the
> proprietary driver as well but looking at the nouveau runtime PM suspend
> hook (assuming I'm looking at the correct code):
>
> static int
> nouveau_pmops_runtime_suspend(struct device *dev)
> {
>         struct pci_dev *pdev = to_pci_dev(dev);
>         struct drm_device *drm_dev = pci_get_drvdata(pdev);
>         int ret;
>
>         if (!nouveau_pmops_runtime()) {
>                 pm_runtime_forbid(dev);
>                 return -EBUSY;
>         }
>
>         nouveau_switcheroo_optimus_dsm();
>         ret = nouveau_do_suspend(drm_dev, true);
>         pci_save_state(pdev);
>         pci_disable_device(pdev);
>         pci_ignore_hotplug(pdev);
>         pci_set_power_state(pdev, PCI_D3cold);
>         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
>         return ret;
> }
>
> Normally PCI drivers leave the PCI bus PM things to PCI core but here
> the driver does these. So I wonder if it makes any difference if we let
> the core handle all that:
>
> static int
> nouveau_pmops_runtime_suspend(struct device *dev)
> {
>         struct pci_dev *pdev = to_pci_dev(dev);
>         struct drm_device *drm_dev = pci_get_drvdata(pdev);
>         int ret;
>
>         if (!nouveau_pmops_runtime()) {
>                 pm_runtime_forbid(dev);
>                 return -EBUSY;
>         }
>
>         nouveau_switcheroo_optimus_dsm();
>         ret = nouveau_do_suspend(drm_dev, true);
>         pci_ignore_hotplug(pdev);
>         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
>         return ret;
> }
>
> and similar for the nouveau_pmops_runtime_resume().
>

yeah, I tried that at some point and it didn't help either. The reason
we call those from inside Nouveau is to support systems pre _PR where
nouveau invokes custom _DSM calls on its own. We could potentially
check for that though.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 15:37                         ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 15:37 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > overall, what I really want to know is, _why_ does it work on windows?
> > >
> > > So do I ;-)
> > >
> > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > anybody has any idea on how we could dig into this and figure it out
> > > > on this level, this would probably allow us to get closer to the root
> > > > cause? no?
> > >
> > > Have you tried to use the acpi_rev_override parameter in your system and
> > > does it have any effect?
> > >
> > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > should hopefully reveal something.
> > >
> >
> > I think I did in the past and it seemed to have worked, there is just
> > one big issue with this: it's a Dell specific workaround afaik, and
> > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > laptops as well, and I've heard about users having the same issues on
> > Asus and MSI laptops as well.
>
> Maybe it is not a workaround at all but instead it simply determines
> whether the system supports RTD3 or something like that (IIRC Windows 8
> started supporting it). Maybe Dell added check for Linux because at that
> time Linux did not support it.
>

the point is, it's not checking it by default, so by default you still
run into the windows 8 codepath.

> In case RTD3 is supported it invokes LKDS() which probably does the L2
> or L3 entry and this is for some reason does not work the same way in
> Linux than it does with Windows 8+.
>
> I don't remember if this happens only with nouveau or with the
> proprietary driver as well but looking at the nouveau runtime PM suspend
> hook (assuming I'm looking at the correct code):
>
> static int
> nouveau_pmops_runtime_suspend(struct device *dev)
> {
>         struct pci_dev *pdev = to_pci_dev(dev);
>         struct drm_device *drm_dev = pci_get_drvdata(pdev);
>         int ret;
>
>         if (!nouveau_pmops_runtime()) {
>                 pm_runtime_forbid(dev);
>                 return -EBUSY;
>         }
>
>         nouveau_switcheroo_optimus_dsm();
>         ret = nouveau_do_suspend(drm_dev, true);
>         pci_save_state(pdev);
>         pci_disable_device(pdev);
>         pci_ignore_hotplug(pdev);
>         pci_set_power_state(pdev, PCI_D3cold);
>         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
>         return ret;
> }
>
> Normally PCI drivers leave the PCI bus PM things to PCI core but here
> the driver does these. So I wonder if it makes any difference if we let
> the core handle all that:
>
> static int
> nouveau_pmops_runtime_suspend(struct device *dev)
> {
>         struct pci_dev *pdev = to_pci_dev(dev);
>         struct drm_device *drm_dev = pci_get_drvdata(pdev);
>         int ret;
>
>         if (!nouveau_pmops_runtime()) {
>                 pm_runtime_forbid(dev);
>                 return -EBUSY;
>         }
>
>         nouveau_switcheroo_optimus_dsm();
>         ret = nouveau_do_suspend(drm_dev, true);
>         pci_ignore_hotplug(pdev);
>         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
>         return ret;
> }
>
> and similar for the nouveau_pmops_runtime_resume().
>

yeah, I tried that at some point and it didn't help either. The reason
we call those from inside Nouveau is to support systems pre _PR where
nouveau invokes custom _DSM calls on its own. We could potentially
check for that though.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 15:53                           ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 15:53 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > >
> > > > So do I ;-)
> > > >
> > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > on this level, this would probably allow us to get closer to the root
> > > > > cause? no?
> > > >
> > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > does it have any effect?
> > > >
> > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > should hopefully reveal something.
> > > >
> > >
> > > I think I did in the past and it seemed to have worked, there is just
> > > one big issue with this: it's a Dell specific workaround afaik, and
> > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > laptops as well, and I've heard about users having the same issues on
> > > Asus and MSI laptops as well.
> >
> > Maybe it is not a workaround at all but instead it simply determines
> > whether the system supports RTD3 or something like that (IIRC Windows 8
> > started supporting it). Maybe Dell added check for Linux because at that
> > time Linux did not support it.
> >
> 
> the point is, it's not checking it by default, so by default you still
> run into the windows 8 codepath.

Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
path by default. There are a bunch of similar entries for Dell machines.

Of course this does not help the non-Dell users so we would still need
to figure out the root cause.

> > In case RTD3 is supported it invokes LKDS() which probably does the L2
> > or L3 entry and this is for some reason does not work the same way in
> > Linux than it does with Windows 8+.
> >
> > I don't remember if this happens only with nouveau or with the
> > proprietary driver as well but looking at the nouveau runtime PM suspend
> > hook (assuming I'm looking at the correct code):
> >
> > static int
> > nouveau_pmops_runtime_suspend(struct device *dev)
> > {
> >         struct pci_dev *pdev = to_pci_dev(dev);
> >         struct drm_device *drm_dev = pci_get_drvdata(pdev);
> >         int ret;
> >
> >         if (!nouveau_pmops_runtime()) {
> >                 pm_runtime_forbid(dev);
> >                 return -EBUSY;
> >         }
> >
> >         nouveau_switcheroo_optimus_dsm();
> >         ret = nouveau_do_suspend(drm_dev, true);
> >         pci_save_state(pdev);
> >         pci_disable_device(pdev);
> >         pci_ignore_hotplug(pdev);
> >         pci_set_power_state(pdev, PCI_D3cold);
> >         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
> >         return ret;
> > }
> >
> > Normally PCI drivers leave the PCI bus PM things to PCI core but here
> > the driver does these. So I wonder if it makes any difference if we let
> > the core handle all that:
> >
> > static int
> > nouveau_pmops_runtime_suspend(struct device *dev)
> > {
> >         struct pci_dev *pdev = to_pci_dev(dev);
> >         struct drm_device *drm_dev = pci_get_drvdata(pdev);
> >         int ret;
> >
> >         if (!nouveau_pmops_runtime()) {
> >                 pm_runtime_forbid(dev);
> >                 return -EBUSY;
> >         }
> >
> >         nouveau_switcheroo_optimus_dsm();
> >         ret = nouveau_do_suspend(drm_dev, true);
> >         pci_ignore_hotplug(pdev);
> >         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
> >         return ret;
> > }
> >
> > and similar for the nouveau_pmops_runtime_resume().
> >
> 
> yeah, I tried that at some point and it didn't help either. The reason
> we call those from inside Nouveau is to support systems pre _PR where
> nouveau invokes custom _DSM calls on its own. We could potentially
> check for that though.

OK.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 15:53                           ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 15:53 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > >
> > > > So do I ;-)
> > > >
> > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > on this level, this would probably allow us to get closer to the root
> > > > > cause? no?
> > > >
> > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > does it have any effect?
> > > >
> > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > should hopefully reveal something.
> > > >
> > >
> > > I think I did in the past and it seemed to have worked, there is just
> > > one big issue with this: it's a Dell specific workaround afaik, and
> > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > laptops as well, and I've heard about users having the same issues on
> > > Asus and MSI laptops as well.
> >
> > Maybe it is not a workaround at all but instead it simply determines
> > whether the system supports RTD3 or something like that (IIRC Windows 8
> > started supporting it). Maybe Dell added check for Linux because at that
> > time Linux did not support it.
> >
> 
> the point is, it's not checking it by default, so by default you still
> run into the windows 8 codepath.

Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
path by default. There are a bunch of similar entries for Dell machines.

Of course this does not help the non-Dell users so we would still need
to figure out the root cause.

> > In case RTD3 is supported it invokes LKDS() which probably does the L2
> > or L3 entry and this is for some reason does not work the same way in
> > Linux than it does with Windows 8+.
> >
> > I don't remember if this happens only with nouveau or with the
> > proprietary driver as well but looking at the nouveau runtime PM suspend
> > hook (assuming I'm looking at the correct code):
> >
> > static int
> > nouveau_pmops_runtime_suspend(struct device *dev)
> > {
> >         struct pci_dev *pdev = to_pci_dev(dev);
> >         struct drm_device *drm_dev = pci_get_drvdata(pdev);
> >         int ret;
> >
> >         if (!nouveau_pmops_runtime()) {
> >                 pm_runtime_forbid(dev);
> >                 return -EBUSY;
> >         }
> >
> >         nouveau_switcheroo_optimus_dsm();
> >         ret = nouveau_do_suspend(drm_dev, true);
> >         pci_save_state(pdev);
> >         pci_disable_device(pdev);
> >         pci_ignore_hotplug(pdev);
> >         pci_set_power_state(pdev, PCI_D3cold);
> >         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
> >         return ret;
> > }
> >
> > Normally PCI drivers leave the PCI bus PM things to PCI core but here
> > the driver does these. So I wonder if it makes any difference if we let
> > the core handle all that:
> >
> > static int
> > nouveau_pmops_runtime_suspend(struct device *dev)
> > {
> >         struct pci_dev *pdev = to_pci_dev(dev);
> >         struct drm_device *drm_dev = pci_get_drvdata(pdev);
> >         int ret;
> >
> >         if (!nouveau_pmops_runtime()) {
> >                 pm_runtime_forbid(dev);
> >                 return -EBUSY;
> >         }
> >
> >         nouveau_switcheroo_optimus_dsm();
> >         ret = nouveau_do_suspend(drm_dev, true);
> >         pci_ignore_hotplug(pdev);
> >         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
> >         return ret;
> > }
> >
> > and similar for the nouveau_pmops_runtime_resume().
> >
> 
> yeah, I tried that at some point and it didn't help either. The reason
> we call those from inside Nouveau is to support systems pre _PR where
> nouveau invokes custom _DSM calls on its own. We could potentially
> check for that though.

OK.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 16:23                             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 16:23 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote:
> On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > >
> > > > > So do I ;-)
> > > > >
> > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > cause? no?
> > > > >
> > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > does it have any effect?
> > > > >
> > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > should hopefully reveal something.
> > > > >
> > > >
> > > > I think I did in the past and it seemed to have worked, there is just
> > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > laptops as well, and I've heard about users having the same issues on
> > > > Asus and MSI laptops as well.
> > >
> > > Maybe it is not a workaround at all but instead it simply determines
> > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > started supporting it). Maybe Dell added check for Linux because at that
> > > time Linux did not support it.
> > >
> > 
> > the point is, it's not checking it by default, so by default you still
> > run into the windows 8 codepath.
> 
> Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> path by default. There are a bunch of similar entries for Dell machines.
> 
> Of course this does not help the non-Dell users so we would still need
> to figure out the root cause.

I think I asked you to test the PCIe delay patch and it did not help but
I wonder if it helps if we increase the delay. As an experiment could
you try Bjorn's pci/pm branch. The last two commits are for the delay.

If you could pull that branch and apply the following patch on top and
give it a try? Then post the dmesg somewhere so we can see whether it
did the delay at all.

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 1f319b1175da..1ad6f1372ed5 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 		return;
 	}
 
-	/* Take d3cold_delay requirements into account */
-	delay = pci_bus_max_d3cold_delay(dev->subordinate);
-	if (!delay) {
-		up_read(&pci_bus_sem);
-		return;
-	}
+	delay = 500;
 
 	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
 				 bus_list);
@@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	 * management for them (see pci_bridge_d3_possible()).
 	 */
 	if (!pci_is_pcie(dev)) {
-		pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
+		pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
 		msleep(1000 + delay);
 		return;
 	}
@@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 		return;
 
 	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
-		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+		pci_info(dev, "waiting %d ms for downstream link\n", delay);
 		msleep(delay);
 	} else {
-		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
+		pci_info(dev, "waiting %d ms for downstream link, after activation\n",
 			delay);
 		if (!pcie_wait_for_link_delay(dev, true, delay)) {
 			/* Did not train, no need to wait any further */
@@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	}
 
 	if (!pci_device_is_present(child)) {
-		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+		pci_info(child, "waiting additional %d ms to become accessible\n", delay);
 		msleep(delay);
 	}
 }

^ permalink raw reply related	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 16:23                             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-20 16:23 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote:
> On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > >
> > > > > So do I ;-)
> > > > >
> > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > cause? no?
> > > > >
> > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > does it have any effect?
> > > > >
> > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > should hopefully reveal something.
> > > > >
> > > >
> > > > I think I did in the past and it seemed to have worked, there is just
> > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > laptops as well, and I've heard about users having the same issues on
> > > > Asus and MSI laptops as well.
> > >
> > > Maybe it is not a workaround at all but instead it simply determines
> > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > started supporting it). Maybe Dell added check for Linux because at that
> > > time Linux did not support it.
> > >
> > 
> > the point is, it's not checking it by default, so by default you still
> > run into the windows 8 codepath.
> 
> Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> path by default. There are a bunch of similar entries for Dell machines.
> 
> Of course this does not help the non-Dell users so we would still need
> to figure out the root cause.

I think I asked you to test the PCIe delay patch and it did not help but
I wonder if it helps if we increase the delay. As an experiment could
you try Bjorn's pci/pm branch. The last two commits are for the delay.

If you could pull that branch and apply the following patch on top and
give it a try? Then post the dmesg somewhere so we can see whether it
did the delay at all.

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 1f319b1175da..1ad6f1372ed5 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 		return;
 	}
 
-	/* Take d3cold_delay requirements into account */
-	delay = pci_bus_max_d3cold_delay(dev->subordinate);
-	if (!delay) {
-		up_read(&pci_bus_sem);
-		return;
-	}
+	delay = 500;
 
 	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
 				 bus_list);
@@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	 * management for them (see pci_bridge_d3_possible()).
 	 */
 	if (!pci_is_pcie(dev)) {
-		pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
+		pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
 		msleep(1000 + delay);
 		return;
 	}
@@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 		return;
 
 	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
-		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+		pci_info(dev, "waiting %d ms for downstream link\n", delay);
 		msleep(delay);
 	} else {
-		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
+		pci_info(dev, "waiting %d ms for downstream link, after activation\n",
 			delay);
 		if (!pcie_wait_for_link_delay(dev, true, delay)) {
 			/* Did not train, no need to wait any further */
@@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	}
 
 	if (!pci_device_is_present(child)) {
-		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+		pci_info(child, "waiting additional %d ms to become accessible\n", delay);
 		msleep(delay);
 	}
 }
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:36                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 21:36 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

with the branch and patch applied:
https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt

On Wed, Nov 20, 2019 at 5:23 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote:
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
> >
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> I think I asked you to test the PCIe delay patch and it did not help but
> I wonder if it helps if we increase the delay. As an experiment could
> you try Bjorn's pci/pm branch. The last two commits are for the delay.
>
> If you could pull that branch and apply the following patch on top and
> give it a try? Then post the dmesg somewhere so we can see whether it
> did the delay at all.
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 1f319b1175da..1ad6f1372ed5 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>         }
>
> -       /* Take d3cold_delay requirements into account */
> -       delay = pci_bus_max_d3cold_delay(dev->subordinate);
> -       if (!delay) {
> -               up_read(&pci_bus_sem);
> -               return;
> -       }
> +       delay = 500;
>
>         child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
>                                  bus_list);
> @@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>          * management for them (see pci_bridge_d3_possible()).
>          */
>         if (!pci_is_pcie(dev)) {
> -               pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
> +               pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
>                 msleep(1000 + delay);
>                 return;
>         }
> @@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>
>         if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> -               pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> +               pci_info(dev, "waiting %d ms for downstream link\n", delay);
>                 msleep(delay);
>         } else {
> -               pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> +               pci_info(dev, "waiting %d ms for downstream link, after activation\n",
>                         delay);
>                 if (!pcie_wait_for_link_delay(dev, true, delay)) {
>                         /* Did not train, no need to wait any further */
> @@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>         }
>
>         if (!pci_device_is_present(child)) {
> -               pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
> +               pci_info(child, "waiting additional %d ms to become accessible\n", delay);
>                 msleep(delay);
>         }
>  }
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:36                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 21:36 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

with the branch and patch applied:
https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt

On Wed, Nov 20, 2019 at 5:23 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote:
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
> >
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> I think I asked you to test the PCIe delay patch and it did not help but
> I wonder if it helps if we increase the delay. As an experiment could
> you try Bjorn's pci/pm branch. The last two commits are for the delay.
>
> If you could pull that branch and apply the following patch on top and
> give it a try? Then post the dmesg somewhere so we can see whether it
> did the delay at all.
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 1f319b1175da..1ad6f1372ed5 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>         }
>
> -       /* Take d3cold_delay requirements into account */
> -       delay = pci_bus_max_d3cold_delay(dev->subordinate);
> -       if (!delay) {
> -               up_read(&pci_bus_sem);
> -               return;
> -       }
> +       delay = 500;
>
>         child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
>                                  bus_list);
> @@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>          * management for them (see pci_bridge_d3_possible()).
>          */
>         if (!pci_is_pcie(dev)) {
> -               pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
> +               pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
>                 msleep(1000 + delay);
>                 return;
>         }
> @@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>
>         if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> -               pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> +               pci_info(dev, "waiting %d ms for downstream link\n", delay);
>                 msleep(delay);
>         } else {
> -               pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> +               pci_info(dev, "waiting %d ms for downstream link, after activation\n",
>                         delay);
>                 if (!pcie_wait_for_link_delay(dev, true, delay)) {
>                         /* Did not train, no need to wait any further */
> @@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>         }
>
>         if (!pci_device_is_present(child)) {
> -               pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
> +               pci_info(child, "waiting additional %d ms to become accessible\n", delay);
>                 msleep(delay);
>         }
>  }
>

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:36                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 21:36 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

with the branch and patch applied:
https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt

On Wed, Nov 20, 2019 at 5:23 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote:
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
> >
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> I think I asked you to test the PCIe delay patch and it did not help but
> I wonder if it helps if we increase the delay. As an experiment could
> you try Bjorn's pci/pm branch. The last two commits are for the delay.
>
> If you could pull that branch and apply the following patch on top and
> give it a try? Then post the dmesg somewhere so we can see whether it
> did the delay at all.
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 1f319b1175da..1ad6f1372ed5 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>         }
>
> -       /* Take d3cold_delay requirements into account */
> -       delay = pci_bus_max_d3cold_delay(dev->subordinate);
> -       if (!delay) {
> -               up_read(&pci_bus_sem);
> -               return;
> -       }
> +       delay = 500;
>
>         child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
>                                  bus_list);
> @@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>          * management for them (see pci_bridge_d3_possible()).
>          */
>         if (!pci_is_pcie(dev)) {
> -               pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
> +               pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
>                 msleep(1000 + delay);
>                 return;
>         }
> @@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>                 return;
>
>         if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
> -               pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
> +               pci_info(dev, "waiting %d ms for downstream link\n", delay);
>                 msleep(delay);
>         } else {
> -               pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
> +               pci_info(dev, "waiting %d ms for downstream link, after activation\n",
>                         delay);
>                 if (!pcie_wait_for_link_delay(dev, true, delay)) {
>                         /* Did not train, no need to wait any further */
> @@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
>         }
>
>         if (!pci_device_is_present(child)) {
> -               pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
> +               pci_info(child, "waiting additional %d ms to become accessible\n", delay);
>                 msleep(delay);
>         }
>  }
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:37                             ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 21:37 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > >
> > > > > So do I ;-)
> > > > >
> > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > cause? no?
> > > > >
> > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > does it have any effect?
> > > > >
> > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > should hopefully reveal something.
> > > > >
> > > >
> > > > I think I did in the past and it seemed to have worked, there is just
> > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > laptops as well, and I've heard about users having the same issues on
> > > > Asus and MSI laptops as well.
> > >
> > > Maybe it is not a workaround at all but instead it simply determines
> > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > started supporting it). Maybe Dell added check for Linux because at that
> > > time Linux did not support it.
> > >
> >
> > the point is, it's not checking it by default, so by default you still
> > run into the windows 8 codepath.
>
> Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> path by default. There are a bunch of similar entries for Dell machines.

OK, so the "Linux path" works and the other doesn't.

I thought that this was the other way around, sorry for the confusion.

> Of course this does not help the non-Dell users so we would still need
> to figure out the root cause.

Right.

Whatever it is, though, AML appears to be involved in it and AFAICS
there's no evidence that it affects any root ports that are not
populated with NVidia GPUs.

Now, one thing is still not clear to me from the discussion so far: is
the _PR3 method you mentioned defined under the GPU device object or
under the port device object?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:37                             ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 21:37 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > >
> > > > > So do I ;-)
> > > > >
> > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > cause? no?
> > > > >
> > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > does it have any effect?
> > > > >
> > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > should hopefully reveal something.
> > > > >
> > > >
> > > > I think I did in the past and it seemed to have worked, there is just
> > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > laptops as well, and I've heard about users having the same issues on
> > > > Asus and MSI laptops as well.
> > >
> > > Maybe it is not a workaround at all but instead it simply determines
> > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > started supporting it). Maybe Dell added check for Linux because at that
> > > time Linux did not support it.
> > >
> >
> > the point is, it's not checking it by default, so by default you still
> > run into the windows 8 codepath.
>
> Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> path by default. There are a bunch of similar entries for Dell machines.

OK, so the "Linux path" works and the other doesn't.

I thought that this was the other way around, sorry for the confusion.

> Of course this does not help the non-Dell users so we would still need
> to figure out the root cause.

Right.

Whatever it is, though, AML appears to be involved in it and AFAICS
there's no evidence that it affects any root ports that are not
populated with NVidia GPUs.

Now, one thing is still not clear to me from the discussion so far: is
the _PR3 method you mentioned defined under the GPU device object or
under the port device object?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:40                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 21:40 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
>
> OK, so the "Linux path" works and the other doesn't.
>
> I thought that this was the other way around, sorry for the confusion.
>
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> Right.
>
> Whatever it is, though, AML appears to be involved in it and AFAICS
> there's no evidence that it affects any root ports that are not
> populated with NVidia GPUs.
>

last week or so I found systems where the GPU was under the "PCI
Express Root Port" (name from lspci) and on those systems all of that
seems to work. So I am wondering if it's indeed just the 0x1901 one,
which also explains Mikas case that Thunderbolt stuff works as devices
never get populated under this particular bridge controller, but under
those "Root Port"s

> Now, one thing is still not clear to me from the discussion so far: is
> the _PR3 method you mentioned defined under the GPU device object or
> under the port device object?
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 21:40                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-20 21:40 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
>
> OK, so the "Linux path" works and the other doesn't.
>
> I thought that this was the other way around, sorry for the confusion.
>
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> Right.
>
> Whatever it is, though, AML appears to be involved in it and AFAICS
> there's no evidence that it affects any root ports that are not
> populated with NVidia GPUs.
>

last week or so I found systems where the GPU was under the "PCI
Express Root Port" (name from lspci) and on those systems all of that
seems to work. So I am wondering if it's indeed just the 0x1901 one,
which also explains Mikas case that Thunderbolt stuff works as devices
never get populated under this particular bridge controller, but under
those "Root Port"s

> Now, one thing is still not clear to me from the discussion so far: is
> the _PR3 method you mentioned defined under the GPU device object or
> under the port device object?
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 22:29                                 ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 22:29 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 10:40 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > > >
> > > > > > > So do I ;-)
> > > > > > >
> > > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > > cause? no?
> > > > > > >
> > > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > > does it have any effect?
> > > > > > >
> > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > > should hopefully reveal something.
> > > > > > >
> > > > > >
> > > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > > laptops as well, and I've heard about users having the same issues on
> > > > > > Asus and MSI laptops as well.
> > > > >
> > > > > Maybe it is not a workaround at all but instead it simply determines
> > > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > > time Linux did not support it.
> > > > >
> > > >
> > > > the point is, it's not checking it by default, so by default you still
> > > > run into the windows 8 codepath.
> > >
> > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > > path by default. There are a bunch of similar entries for Dell machines.
> >
> > OK, so the "Linux path" works and the other doesn't.
> >
> > I thought that this was the other way around, sorry for the confusion.
> >
> > > Of course this does not help the non-Dell users so we would still need
> > > to figure out the root cause.
> >
> > Right.
> >
> > Whatever it is, though, AML appears to be involved in it and AFAICS
> > there's no evidence that it affects any root ports that are not
> > populated with NVidia GPUs.
> >
>
> last week or so I found systems where the GPU was under the "PCI
> Express Root Port" (name from lspci) and on those systems all of that
> seems to work. So I am wondering if it's indeed just the 0x1901 one,
> which also explains Mikas case that Thunderbolt stuff works as devices
> never get populated under this particular bridge controller, but under
> those "Root Port"s

It always is a PCIe port, but its location within the SoC may matter.

Also some custom AML-based power management is involved and that may
be making specific assumptions on the configuration of the SoC and the
GPU at the time of its invocation which unfortunately are not known to
us.

However, it looks like the AML invoked to power down the GPU from
acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
that point, so it looks like that AML tries to access device memory on
the GPU (beyond the PCI config space) or similar which is not
accessible in PCI power states below D0.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-20 22:29                                 ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-20 22:29 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 10:40 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > > >
> > > > > > > So do I ;-)
> > > > > > >
> > > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > > cause? no?
> > > > > > >
> > > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > > does it have any effect?
> > > > > > >
> > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > > should hopefully reveal something.
> > > > > > >
> > > > > >
> > > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > > laptops as well, and I've heard about users having the same issues on
> > > > > > Asus and MSI laptops as well.
> > > > >
> > > > > Maybe it is not a workaround at all but instead it simply determines
> > > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > > time Linux did not support it.
> > > > >
> > > >
> > > > the point is, it's not checking it by default, so by default you still
> > > > run into the windows 8 codepath.
> > >
> > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > > path by default. There are a bunch of similar entries for Dell machines.
> >
> > OK, so the "Linux path" works and the other doesn't.
> >
> > I thought that this was the other way around, sorry for the confusion.
> >
> > > Of course this does not help the non-Dell users so we would still need
> > > to figure out the root cause.
> >
> > Right.
> >
> > Whatever it is, though, AML appears to be involved in it and AFAICS
> > there's no evidence that it affects any root ports that are not
> > populated with NVidia GPUs.
> >
>
> last week or so I found systems where the GPU was under the "PCI
> Express Root Port" (name from lspci) and on those systems all of that
> seems to work. So I am wondering if it's indeed just the 0x1901 one,
> which also explains Mikas case that Thunderbolt stuff works as devices
> never get populated under this particular bridge controller, but under
> those "Root Port"s

It always is a PCIe port, but its location within the SoC may matter.

Also some custom AML-based power management is involved and that may
be making specific assumptions on the configuration of the SoC and the
GPU at the time of its invocation which unfortunately are not known to
us.

However, it looks like the AML invoked to power down the GPU from
acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
that point, so it looks like that AML tries to access device memory on
the GPU (beyond the PCI config space) or similar which is not
accessible in PCI power states below D0.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 10:14                                 ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 10:14 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> with the branch and patch applied:
> https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt

Thanks for testing. Too bad it did not help :( I suppose there is no
change if you increase the delay to say 1s?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 10:14                                 ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 10:14 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> with the branch and patch applied:
> https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt

Thanks for testing. Too bad it did not help :( I suppose there is no
change if you increase the delay to say 1s?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-21 10:14                                 ` Mika Westerberg
@ 2019-11-21 11:03                                   ` Rafael J. Wysocki
  -1 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:03 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > with the branch and patch applied:
> > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
>
> Thanks for testing. Too bad it did not help :( I suppose there is no
> change if you increase the delay to say 1s?

Well, look at the original patch in this thread.

What it does is to prevent the device (GPU in this particular case)
from going into a PCI low-power state before invoking AML to power it
down (the AML is still invoked after this patch AFAICS), so why would
that have anything to do with the delays?

The only reason would be the AML running too early, but that doesn't
seem likely.  IMO more likely is that the AML does something which
cannot be done to a device in a PCI low-power state.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:03                                   ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:03 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > with the branch and patch applied:
> > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
>
> Thanks for testing. Too bad it did not help :( I suppose there is no
> change if you increase the delay to say 1s?

Well, look at the original patch in this thread.

What it does is to prevent the device (GPU in this particular case)
from going into a PCI low-power state before invoking AML to power it
down (the AML is still invoked after this patch AFAICS), so why would
that have anything to do with the delays?

The only reason would be the AML running too early, but that doesn't
seem likely.  IMO more likely is that the AML does something which
cannot be done to a device in a PCI low-power state.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:08                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:08 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > with the branch and patch applied:
> > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> >
> > Thanks for testing. Too bad it did not help :( I suppose there is no
> > change if you increase the delay to say 1s?
>
> Well, look at the original patch in this thread.
>
> What it does is to prevent the device (GPU in this particular case)
> from going into a PCI low-power state before invoking AML to power it
> down (the AML is still invoked after this patch AFAICS), so why would
> that have anything to do with the delays?
>
> The only reason would be the AML running too early, but that doesn't
> seem likely.  IMO more likely is that the AML does something which
> cannot be done to a device in a PCI low-power state.

BTW, I'm wondering if anyone has tried to skip the AML instead of
skipping the PCI PM in this case (as of 5.4-rc that would be a similar
patch to skip the invocations of
__pci_start/complete_power_transition() in pci_set_power_state() for
the affected device).

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:08                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:08 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > with the branch and patch applied:
> > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> >
> > Thanks for testing. Too bad it did not help :( I suppose there is no
> > change if you increase the delay to say 1s?
>
> Well, look at the original patch in this thread.
>
> What it does is to prevent the device (GPU in this particular case)
> from going into a PCI low-power state before invoking AML to power it
> down (the AML is still invoked after this patch AFAICS), so why would
> that have anything to do with the delays?
>
> The only reason would be the AML running too early, but that doesn't
> seem likely.  IMO more likely is that the AML does something which
> cannot be done to a device in a PCI low-power state.

BTW, I'm wondering if anyone has tried to skip the AML instead of
skipping the PCI PM in this case (as of 5.4-rc that would be a similar
patch to skip the invocations of
__pci_start/complete_power_transition() in pci_set_power_state() for
the affected device).
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:08                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:08 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > with the branch and patch applied:
> > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> >
> > Thanks for testing. Too bad it did not help :( I suppose there is no
> > change if you increase the delay to say 1s?
>
> Well, look at the original patch in this thread.
>
> What it does is to prevent the device (GPU in this particular case)
> from going into a PCI low-power state before invoking AML to power it
> down (the AML is still invoked after this patch AFAICS), so why would
> that have anything to do with the delays?
>
> The only reason would be the AML running too early, but that doesn't
> seem likely.  IMO more likely is that the AML does something which
> cannot be done to a device in a PCI low-power state.

BTW, I'm wondering if anyone has tried to skip the AML instead of
skipping the PCI PM in this case (as of 5.4-rc that would be a similar
patch to skip the invocations of
__pci_start/complete_power_transition() in pci_set_power_state() for
the affected device).
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:15                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:15 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:08 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > > with the branch and patch applied:
> > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> > >
> > > Thanks for testing. Too bad it did not help :( I suppose there is no
> > > change if you increase the delay to say 1s?
> >
> > Well, look at the original patch in this thread.
> >
> > What it does is to prevent the device (GPU in this particular case)
> > from going into a PCI low-power state before invoking AML to power it
> > down (the AML is still invoked after this patch AFAICS), so why would
> > that have anything to do with the delays?
> >
> > The only reason would be the AML running too early, but that doesn't
> > seem likely.  IMO more likely is that the AML does something which
> > cannot be done to a device in a PCI low-power state.
>
> BTW, I'm wondering if anyone has tried to skip the AML instead of
> skipping the PCI PM in this case (as of 5.4-rc that would be a similar
> patch to skip the invocations of
> __pci_start/complete_power_transition() in pci_set_power_state() for
> the affected device).

Moving the dev->broken_nv_runpm test into
pci_platform_power_transition() (also for transitions into D0) would
be sufficient for that test if I'm not mistaken.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:15                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:15 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:08 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > > with the branch and patch applied:
> > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> > >
> > > Thanks for testing. Too bad it did not help :( I suppose there is no
> > > change if you increase the delay to say 1s?
> >
> > Well, look at the original patch in this thread.
> >
> > What it does is to prevent the device (GPU in this particular case)
> > from going into a PCI low-power state before invoking AML to power it
> > down (the AML is still invoked after this patch AFAICS), so why would
> > that have anything to do with the delays?
> >
> > The only reason would be the AML running too early, but that doesn't
> > seem likely.  IMO more likely is that the AML does something which
> > cannot be done to a device in a PCI low-power state.
>
> BTW, I'm wondering if anyone has tried to skip the AML instead of
> skipping the PCI PM in this case (as of 5.4-rc that would be a similar
> patch to skip the invocations of
> __pci_start/complete_power_transition() in pci_set_power_state() for
> the affected device).

Moving the dev->broken_nv_runpm test into
pci_platform_power_transition() (also for transitions into D0) would
be sufficient for that test if I'm not mistaken.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:17                                     ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:17 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > with the branch and patch applied:
> > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> >
> > Thanks for testing. Too bad it did not help :( I suppose there is no
> > change if you increase the delay to say 1s?
> 
> Well, look at the original patch in this thread.
> 
> What it does is to prevent the device (GPU in this particular case)
> from going into a PCI low-power state before invoking AML to power it
> down (the AML is still invoked after this patch AFAICS), so why would
> that have anything to do with the delays?

Yes, I know what it does :) I was just thinking that maybe it's still
the link that does not come up when we go back to D0 I guess that's not
the case here.

> The only reason would be the AML running too early, but that doesn't
> seem likely.  IMO more likely is that the AML does something which
> cannot be done to a device in a PCI low-power state.

It may very well be the case.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:17                                     ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:17 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > with the branch and patch applied:
> > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> >
> > Thanks for testing. Too bad it did not help :( I suppose there is no
> > change if you increase the delay to say 1s?
> 
> Well, look at the original patch in this thread.
> 
> What it does is to prevent the device (GPU in this particular case)
> from going into a PCI low-power state before invoking AML to power it
> down (the AML is still invoked after this patch AFAICS), so why would
> that have anything to do with the delays?

Yes, I know what it does :) I was just thinking that maybe it's still
the link that does not come up when we go back to D0 I guess that's not
the case here.

> The only reason would be the AML running too early, but that doesn't
> seem likely.  IMO more likely is that the AML does something which
> cannot be done to a device in a PCI low-power state.

It may very well be the case.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-20 22:29                                 ` Rafael J. Wysocki
@ 2019-11-21 11:28                                   ` Mika Westerberg
  -1 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:28 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > last week or so I found systems where the GPU was under the "PCI
> > Express Root Port" (name from lspci) and on those systems all of that
> > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > which also explains Mikas case that Thunderbolt stuff works as devices
> > never get populated under this particular bridge controller, but under
> > those "Root Port"s
> 
> It always is a PCIe port, but its location within the SoC may matter.

Exactly. Intel hardware has PCIe ports on CPU side (these are called
PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
still the same.

> Also some custom AML-based power management is involved and that may
> be making specific assumptions on the configuration of the SoC and the
> GPU at the time of its invocation which unfortunately are not known to
> us.
> 
> However, it looks like the AML invoked to power down the GPU from
> acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> that point, so it looks like that AML tries to access device memory on
> the GPU (beyond the PCI config space) or similar which is not
> accessible in PCI power states below D0.

Or the PCI config space of the GPU when the parent root port is in D3hot
(as it is the case here). Also then the GPU config space is not
accessible.

I took a look at the HP Omen ACPI tables which has similar problem and
there is also check for Windows 7 (but not Linux) so I think one
alternative workaround would be to add these devices into
acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
pass 'acpi_osi="!Windows 2012"' in the kernel command line).

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:28                                   ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:28 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > last week or so I found systems where the GPU was under the "PCI
> > Express Root Port" (name from lspci) and on those systems all of that
> > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > which also explains Mikas case that Thunderbolt stuff works as devices
> > never get populated under this particular bridge controller, but under
> > those "Root Port"s
> 
> It always is a PCIe port, but its location within the SoC may matter.

Exactly. Intel hardware has PCIe ports on CPU side (these are called
PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
still the same.

> Also some custom AML-based power management is involved and that may
> be making specific assumptions on the configuration of the SoC and the
> GPU at the time of its invocation which unfortunately are not known to
> us.
> 
> However, it looks like the AML invoked to power down the GPU from
> acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> that point, so it looks like that AML tries to access device memory on
> the GPU (beyond the PCI config space) or similar which is not
> accessible in PCI power states below D0.

Or the PCI config space of the GPU when the parent root port is in D3hot
(as it is the case here). Also then the GPU config space is not
accessible.

I took a look at the HP Omen ACPI tables which has similar problem and
there is also check for Windows 7 (but not Linux) so I think one
alternative workaround would be to add these devices into
acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
pass 'acpi_osi="!Windows 2012"' in the kernel command line).
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:31                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:31 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:17 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > > with the branch and patch applied:
> > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> > >
> > > Thanks for testing. Too bad it did not help :( I suppose there is no
> > > change if you increase the delay to say 1s?
> >
> > Well, look at the original patch in this thread.
> >
> > What it does is to prevent the device (GPU in this particular case)
> > from going into a PCI low-power state before invoking AML to power it
> > down (the AML is still invoked after this patch AFAICS), so why would
> > that have anything to do with the delays?
>
> Yes, I know what it does :) I was just thinking that maybe it's still
> the link that does not come up when we go back to D0 I guess that's not
> the case here.

I'm not sure why that would be related to putting the device into,
say, PCI D3 before invoking AML to remove power from it.  If it is not
in PCI D3 at this point, the AML still runs and still removes power
from it IIUC, so on the way back the situation is the same regardless:
the device has no power which (again) needs to be restored by AML.
That (in principle) should not depend on what happened to the device
before it lost power.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:31                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:31 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:17 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > > with the branch and patch applied:
> > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> > >
> > > Thanks for testing. Too bad it did not help :( I suppose there is no
> > > change if you increase the delay to say 1s?
> >
> > Well, look at the original patch in this thread.
> >
> > What it does is to prevent the device (GPU in this particular case)
> > from going into a PCI low-power state before invoking AML to power it
> > down (the AML is still invoked after this patch AFAICS), so why would
> > that have anything to do with the delays?
>
> Yes, I know what it does :) I was just thinking that maybe it's still
> the link that does not come up when we go back to D0 I guess that's not
> the case here.

I'm not sure why that would be related to putting the device into,
say, PCI D3 before invoking AML to remove power from it.  If it is not
in PCI D3 at this point, the AML still runs and still removes power
from it IIUC, so on the way back the situation is the same regardless:
the device has no power which (again) needs to be restored by AML.
That (in principle) should not depend on what happened to the device
before it lost power.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:34                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:34 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > last week or so I found systems where the GPU was under the "PCI
> > > Express Root Port" (name from lspci) and on those systems all of that
> > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > never get populated under this particular bridge controller, but under
> > > those "Root Port"s
> >
> > It always is a PCIe port, but its location within the SoC may matter.
>
> Exactly. Intel hardware has PCIe ports on CPU side (these are called
> PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> still the same.
>
> > Also some custom AML-based power management is involved and that may
> > be making specific assumptions on the configuration of the SoC and the
> > GPU at the time of its invocation which unfortunately are not known to
> > us.
> >
> > However, it looks like the AML invoked to power down the GPU from
> > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > that point, so it looks like that AML tries to access device memory on
> > the GPU (beyond the PCI config space) or similar which is not
> > accessible in PCI power states below D0.
>
> Or the PCI config space of the GPU when the parent root port is in D3hot
> (as it is the case here). Also then the GPU config space is not
> accessible.

Why would the parent port be in D3hot at that point?  Wouldn't that be
a suspend ordering violation?

> I took a look at the HP Omen ACPI tables which has similar problem and
> there is also check for Windows 7 (but not Linux) so I think one
> alternative workaround would be to add these devices into
> acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> pass 'acpi_osi="!Windows 2012"' in the kernel command line).

I'd like to understand the facts that have been established so far
before deciding what to do about them. :-)

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:34                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:34 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > last week or so I found systems where the GPU was under the "PCI
> > > Express Root Port" (name from lspci) and on those systems all of that
> > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > never get populated under this particular bridge controller, but under
> > > those "Root Port"s
> >
> > It always is a PCIe port, but its location within the SoC may matter.
>
> Exactly. Intel hardware has PCIe ports on CPU side (these are called
> PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> still the same.
>
> > Also some custom AML-based power management is involved and that may
> > be making specific assumptions on the configuration of the SoC and the
> > GPU at the time of its invocation which unfortunately are not known to
> > us.
> >
> > However, it looks like the AML invoked to power down the GPU from
> > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > that point, so it looks like that AML tries to access device memory on
> > the GPU (beyond the PCI config space) or similar which is not
> > accessible in PCI power states below D0.
>
> Or the PCI config space of the GPU when the parent root port is in D3hot
> (as it is the case here). Also then the GPU config space is not
> accessible.

Why would the parent port be in D3hot at that point?  Wouldn't that be
a suspend ordering violation?

> I took a look at the HP Omen ACPI tables which has similar problem and
> there is also check for Windows 7 (but not Linux) so I think one
> alternative workaround would be to add these devices into
> acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> pass 'acpi_osi="!Windows 2012"' in the kernel command line).

I'd like to understand the facts that have been established so far
before deciding what to do about them. :-)
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:34                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 11:34 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > last week or so I found systems where the GPU was under the "PCI
> > > Express Root Port" (name from lspci) and on those systems all of that
> > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > never get populated under this particular bridge controller, but under
> > > those "Root Port"s
> >
> > It always is a PCIe port, but its location within the SoC may matter.
>
> Exactly. Intel hardware has PCIe ports on CPU side (these are called
> PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> still the same.
>
> > Also some custom AML-based power management is involved and that may
> > be making specific assumptions on the configuration of the SoC and the
> > GPU at the time of its invocation which unfortunately are not known to
> > us.
> >
> > However, it looks like the AML invoked to power down the GPU from
> > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > that point, so it looks like that AML tries to access device memory on
> > the GPU (beyond the PCI config space) or similar which is not
> > accessible in PCI power states below D0.
>
> Or the PCI config space of the GPU when the parent root port is in D3hot
> (as it is the case here). Also then the GPU config space is not
> accessible.

Why would the parent port be in D3hot at that point?  Wouldn't that be
a suspend ordering violation?

> I took a look at the HP Omen ACPI tables which has similar problem and
> there is also check for Windows 7 (but not Linux) so I think one
> alternative workaround would be to add these devices into
> acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> pass 'acpi_osi="!Windows 2012"' in the kernel command line).

I'd like to understand the facts that have been established so far
before deciding what to do about them. :-)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:46                                       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > last week or so I found systems where the GPU was under the "PCI
> > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > never get populated under this particular bridge controller, but under
> > > > those "Root Port"s
> > >
> > > It always is a PCIe port, but its location within the SoC may matter.
> >
> > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > still the same.
> >
> > > Also some custom AML-based power management is involved and that may
> > > be making specific assumptions on the configuration of the SoC and the
> > > GPU at the time of its invocation which unfortunately are not known to
> > > us.
> > >
> > > However, it looks like the AML invoked to power down the GPU from
> > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > that point, so it looks like that AML tries to access device memory on
> > > the GPU (beyond the PCI config space) or similar which is not
> > > accessible in PCI power states below D0.
> >
> > Or the PCI config space of the GPU when the parent root port is in D3hot
> > (as it is the case here). Also then the GPU config space is not
> > accessible.
> 
> Why would the parent port be in D3hot at that point?  Wouldn't that be
> a suspend ordering violation?

No. We put the GPU into D3hot first, then the root port and then turn
off the power resource (which is attached to the root port) resulting
the topology entering D3cold.

> > I took a look at the HP Omen ACPI tables which has similar problem and
> > there is also check for Windows 7 (but not Linux) so I think one
> > alternative workaround would be to add these devices into
> > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> > pass 'acpi_osi="!Windows 2012"' in the kernel command line).
> 
> I'd like to understand the facts that have been established so far
> before deciding what to do about them. :-)

Yes, I agree :)

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 11:46                                       ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 11:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > last week or so I found systems where the GPU was under the "PCI
> > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > never get populated under this particular bridge controller, but under
> > > > those "Root Port"s
> > >
> > > It always is a PCIe port, but its location within the SoC may matter.
> >
> > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > still the same.
> >
> > > Also some custom AML-based power management is involved and that may
> > > be making specific assumptions on the configuration of the SoC and the
> > > GPU at the time of its invocation which unfortunately are not known to
> > > us.
> > >
> > > However, it looks like the AML invoked to power down the GPU from
> > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > that point, so it looks like that AML tries to access device memory on
> > > the GPU (beyond the PCI config space) or similar which is not
> > > accessible in PCI power states below D0.
> >
> > Or the PCI config space of the GPU when the parent root port is in D3hot
> > (as it is the case here). Also then the GPU config space is not
> > accessible.
> 
> Why would the parent port be in D3hot at that point?  Wouldn't that be
> a suspend ordering violation?

No. We put the GPU into D3hot first, then the root port and then turn
off the power resource (which is attached to the root port) resulting
the topology entering D3cold.

> > I took a look at the HP Omen ACPI tables which has similar problem and
> > there is also check for Windows 7 (but not Linux) so I think one
> > alternative workaround would be to add these devices into
> > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> > pass 'acpi_osi="!Windows 2012"' in the kernel command line).
> 
> I'd like to understand the facts that have been established so far
> before deciding what to do about them. :-)

Yes, I agree :)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-21 11:46                                       ` Mika Westerberg
@ 2019-11-21 12:52                                         ` Mika Westerberg
  -1 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 12:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > never get populated under this particular bridge controller, but under
> > > > > those "Root Port"s
> > > >
> > > > It always is a PCIe port, but its location within the SoC may matter.
> > >
> > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > still the same.
> > >
> > > > Also some custom AML-based power management is involved and that may
> > > > be making specific assumptions on the configuration of the SoC and the
> > > > GPU at the time of its invocation which unfortunately are not known to
> > > > us.
> > > >
> > > > However, it looks like the AML invoked to power down the GPU from
> > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > that point, so it looks like that AML tries to access device memory on
> > > > the GPU (beyond the PCI config space) or similar which is not
> > > > accessible in PCI power states below D0.
> > >
> > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > (as it is the case here). Also then the GPU config space is not
> > > accessible.
> > 
> > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > a suspend ordering violation?
> 
> No. We put the GPU into D3hot first, then the root port and then turn
> off the power resource (which is attached to the root port) resulting
> the topology entering D3cold.

I don't see that happening in the AML though.

Basically the difference is that when Windows 7 or Linux (the _REV==5
check) then we directly do link disable whereas in Windows 8+ we invoke
LKDS() method that puts the link into L2/L3. None of the fields they
access seem to touch the GPU itself.

LKDS() for the first PEG port looks like this:

   P0L2 = One
   Sleep (0x10)
   Local0 = Zero
   While (P0L2)
   {
	If ((Local0 > 0x04))
	{
	    Break
	}

	Sleep (0x10)
	Local0++
   }

One thing that comes to mind is that the loop can end even if P0L2 is
not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
Sleep() is implemented differently in Windows? I mean Linux may be
"faster" here and return prematurely and if we leave the port into D0
this does not happen, or something. I'm just throwing out ideas :)

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 12:52                                         ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 12:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > never get populated under this particular bridge controller, but under
> > > > > those "Root Port"s
> > > >
> > > > It always is a PCIe port, but its location within the SoC may matter.
> > >
> > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > still the same.
> > >
> > > > Also some custom AML-based power management is involved and that may
> > > > be making specific assumptions on the configuration of the SoC and the
> > > > GPU at the time of its invocation which unfortunately are not known to
> > > > us.
> > > >
> > > > However, it looks like the AML invoked to power down the GPU from
> > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > that point, so it looks like that AML tries to access device memory on
> > > > the GPU (beyond the PCI config space) or similar which is not
> > > > accessible in PCI power states below D0.
> > >
> > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > (as it is the case here). Also then the GPU config space is not
> > > accessible.
> > 
> > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > a suspend ordering violation?
> 
> No. We put the GPU into D3hot first, then the root port and then turn
> off the power resource (which is attached to the root port) resulting
> the topology entering D3cold.

I don't see that happening in the AML though.

Basically the difference is that when Windows 7 or Linux (the _REV==5
check) then we directly do link disable whereas in Windows 8+ we invoke
LKDS() method that puts the link into L2/L3. None of the fields they
access seem to touch the GPU itself.

LKDS() for the first PEG port looks like this:

   P0L2 = One
   Sleep (0x10)
   Local0 = Zero
   While (P0L2)
   {
	If ((Local0 > 0x04))
	{
	    Break
	}

	Sleep (0x10)
	Local0++
   }

One thing that comes to mind is that the loop can end even if P0L2 is
not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
Sleep() is implemented differently in Windows? I mean Linux may be
"faster" here and return prematurely and if we leave the port into D0
this does not happen, or something. I'm just throwing out ideas :)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 12:52                                         ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 12:52 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > never get populated under this particular bridge controller, but under
> > > > > those "Root Port"s
> > > >
> > > > It always is a PCIe port, but its location within the SoC may matter.
> > >
> > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > still the same.
> > >

yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
side. And if the Nvidia GPU is on a port on the PCH side it all seems
to work just fine.

> > > > Also some custom AML-based power management is involved and that may
> > > > be making specific assumptions on the configuration of the SoC and the
> > > > GPU at the time of its invocation which unfortunately are not known to
> > > > us.
> > > >
> > > > However, it looks like the AML invoked to power down the GPU from
> > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > that point, so it looks like that AML tries to access device memory on
> > > > the GPU (beyond the PCI config space) or similar which is not
> > > > accessible in PCI power states below D0.
> > >
> > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > (as it is the case here). Also then the GPU config space is not
> > > accessible.
> >
> > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > a suspend ordering violation?
>
> No. We put the GPU into D3hot first, then the root port and then turn
> off the power resource (which is attached to the root port) resulting
> the topology entering D3cold.
>

If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
the power savings are way lower, so I kind of prefer skipping D3hot
instead of D3cold. Skipping D3hot doesn't seem to make any difference
in power savings in my testing.

> > > I took a look at the HP Omen ACPI tables which has similar problem and
> > > there is also check for Windows 7 (but not Linux) so I think one
> > > alternative workaround would be to add these devices into
> > > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> > > pass 'acpi_osi="!Windows 2012"' in the kernel command line).
> >
> > I'd like to understand the facts that have been established so far
> > before deciding what to do about them. :-)
>
> Yes, I agree :)
>

Yeah.. and I think those would be too many as we know of several
models with this laptops from Lenovo, Dell and HP and random other
models from random other OEMs. I think we won't ever be able to
blacklist all models if we go that way as those might be just way too
many. Also I know from some reports on bumblebee bugs (hitting the
same issue essentially) that the acpi_osi overwrite didn't help every
user.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 12:52                                         ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 12:52 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > never get populated under this particular bridge controller, but under
> > > > > those "Root Port"s
> > > >
> > > > It always is a PCIe port, but its location within the SoC may matter.
> > >
> > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > still the same.
> > >

yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
side. And if the Nvidia GPU is on a port on the PCH side it all seems
to work just fine.

> > > > Also some custom AML-based power management is involved and that may
> > > > be making specific assumptions on the configuration of the SoC and the
> > > > GPU at the time of its invocation which unfortunately are not known to
> > > > us.
> > > >
> > > > However, it looks like the AML invoked to power down the GPU from
> > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > that point, so it looks like that AML tries to access device memory on
> > > > the GPU (beyond the PCI config space) or similar which is not
> > > > accessible in PCI power states below D0.
> > >
> > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > (as it is the case here). Also then the GPU config space is not
> > > accessible.
> >
> > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > a suspend ordering violation?
>
> No. We put the GPU into D3hot first, then the root port and then turn
> off the power resource (which is attached to the root port) resulting
> the topology entering D3cold.
>

If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
the power savings are way lower, so I kind of prefer skipping D3hot
instead of D3cold. Skipping D3hot doesn't seem to make any difference
in power savings in my testing.

> > > I took a look at the HP Omen ACPI tables which has similar problem and
> > > there is also check for Windows 7 (but not Linux) so I think one
> > > alternative workaround would be to add these devices into
> > > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or
> > > pass 'acpi_osi="!Windows 2012"' in the kernel command line).
> >
> > I'd like to understand the facts that have been established so far
> > before deciding what to do about them. :-)
>
> Yes, I agree :)
>

Yeah.. and I think those would be too many as we know of several
models with this laptops from Lenovo, Dell and HP and random other
models from random other OEMs. I think we won't ever be able to
blacklist all models if we go that way as those might be just way too
many. Also I know from some reports on bumblebee bugs (hitting the
same issue essentially) that the acpi_osi overwrite didn't help every
user.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 12:56                                           ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 12:56 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first, then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
>
> I don't see that happening in the AML though.
>
> Basically the difference is that when Windows 7 or Linux (the _REV==5
> check) then we directly do link disable whereas in Windows 8+ we invoke
> LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.
>
> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One
>    Sleep (0x10)
>    Local0 = Zero
>    While (P0L2)
>    {
>         If ((Local0 > 0x04))
>         {
>             Break
>         }
>
>         Sleep (0x10)
>         Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> Sleep() is implemented differently in Windows? I mean Linux may be
> "faster" here and return prematurely and if we leave the port into D0
> this does not happen, or something. I'm just throwing out ideas :)
>

keep in mind, that I am able to hit this bug with my python script:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 12:56                                           ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 12:56 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first, then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
>
> I don't see that happening in the AML though.
>
> Basically the difference is that when Windows 7 or Linux (the _REV==5
> check) then we directly do link disable whereas in Windows 8+ we invoke
> LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.
>
> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One
>    Sleep (0x10)
>    Local0 = Zero
>    While (P0L2)
>    {
>         If ((Local0 > 0x04))
>         {
>             Break
>         }
>
>         Sleep (0x10)
>         Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> Sleep() is implemented differently in Windows? I mean Linux may be
> "faster" here and return prematurely and if we leave the port into D0
> this does not happen, or something. I'm just throwing out ideas :)
>

keep in mind, that I am able to hit this bug with my python script:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 15:43                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 15:43 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first,

OK

Does this involve any AML, like a _PS3 under the GPU object?

> > then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
>
> I don't see that happening in the AML though.

Which AML do you mean, specifically?  The _OFF method for the root
port's _PR3 power resource or something else?

> Basically the difference is that when Windows 7 or Linux (the _REV==5
> check) then we directly do link disable whereas in Windows 8+ we invoke
> LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.

So that may be where the problem is.

Putting the downstream component into PCI D[1-3] is expected to put
the link into L1, so I'm not sure how that plays with the later
attempt to put it into L2/L3 Ready.

Also, L2/L3 Ready is expected to be transient, so finally power should
be removed somehow.

> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One
>    Sleep (0x10)
>    Local0 = Zero
>    While (P0L2)
>    {
>         If ((Local0 > 0x04))
>         {
>             Break
>         }
>
>         Sleep (0x10)
>         Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> Sleep() is implemented differently in Windows? I mean Linux may be
> "faster" here and return prematurely and if we leave the port into D0
> this does not happen, or something. I'm just throwing out ideas :)

But this actually works for the downstream component in D0, doesn't it?

Also, if the downstream component is in D0, the port actually should
stay in D0 too, so what would happen with the $subject patch applied?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 15:43                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 15:43 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first,

OK

Does this involve any AML, like a _PS3 under the GPU object?

> > then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
>
> I don't see that happening in the AML though.

Which AML do you mean, specifically?  The _OFF method for the root
port's _PR3 power resource or something else?

> Basically the difference is that when Windows 7 or Linux (the _REV==5
> check) then we directly do link disable whereas in Windows 8+ we invoke
> LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.

So that may be where the problem is.

Putting the downstream component into PCI D[1-3] is expected to put
the link into L1, so I'm not sure how that plays with the later
attempt to put it into L2/L3 Ready.

Also, L2/L3 Ready is expected to be transient, so finally power should
be removed somehow.

> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One
>    Sleep (0x10)
>    Local0 = Zero
>    While (P0L2)
>    {
>         If ((Local0 > 0x04))
>         {
>             Break
>         }
>
>         Sleep (0x10)
>         Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> Sleep() is implemented differently in Windows? I mean Linux may be
> "faster" here and return prematurely and if we leave the port into D0
> this does not happen, or something. I'm just throwing out ideas :)

But this actually works for the downstream component in D0, doesn't it?

Also, if the downstream component is in D0, the port actually should
stay in D0 too, so what would happen with the $subject patch applied?
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 15:43                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 15:43 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first,

OK

Does this involve any AML, like a _PS3 under the GPU object?

> > then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
>
> I don't see that happening in the AML though.

Which AML do you mean, specifically?  The _OFF method for the root
port's _PR3 power resource or something else?

> Basically the difference is that when Windows 7 or Linux (the _REV==5
> check) then we directly do link disable whereas in Windows 8+ we invoke
> LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.

So that may be where the problem is.

Putting the downstream component into PCI D[1-3] is expected to put
the link into L1, so I'm not sure how that plays with the later
attempt to put it into L2/L3 Ready.

Also, L2/L3 Ready is expected to be transient, so finally power should
be removed somehow.

> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One
>    Sleep (0x10)
>    Local0 = Zero
>    While (P0L2)
>    {
>         If ((Local0 > 0x04))
>         {
>             Break
>         }
>
>         Sleep (0x10)
>         Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> Sleep() is implemented differently in Windows? I mean Linux may be
> "faster" here and return prematurely and if we leave the port into D0
> this does not happen, or something. I'm just throwing out ideas :)

But this actually works for the downstream component in D0, doesn't it?

Also, if the downstream component is in D0, the port actually should
stay in D0 too, so what would happen with the $subject patch applied?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 15:47                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 15:47 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Mika Westerberg, Rafael J. Wysocki, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
>
> yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> side. And if the Nvidia GPU is on a port on the PCH side it all seems
> to work just fine.

But that may involve different AML too, may it not?

> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first, then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
> >
>
> If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> the power savings are way lower, so I kind of prefer skipping D3hot
> instead of D3cold. Skipping D3hot doesn't seem to make any difference
> in power savings in my testing.

OK

What exactly did you do to skip D3cold in your testing?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 15:47                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 15:47 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > never get populated under this particular bridge controller, but under
> > > > > > those "Root Port"s
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > still the same.
> > > >
>
> yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> side. And if the Nvidia GPU is on a port on the PCH side it all seems
> to work just fine.

But that may involve different AML too, may it not?

> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > that point, so it looks like that AML tries to access device memory on
> > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > (as it is the case here). Also then the GPU config space is not
> > > > accessible.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first, then the root port and then turn
> > off the power resource (which is attached to the root port) resulting
> > the topology entering D3cold.
> >
>
> If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> the power savings are way lower, so I kind of prefer skipping D3hot
> instead of D3cold. Skipping D3hot doesn't seem to make any difference
> in power savings in my testing.

OK

What exactly did you do to skip D3cold in your testing?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 16:06                                             ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 16:06 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > those "Root Port"s
> > > > > >
> > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > >
> > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > still the same.
> > > > >
> >
> > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> > side. And if the Nvidia GPU is on a port on the PCH side it all seems
> > to work just fine.
>
> But that may involve different AML too, may it not?
>
> > > > > > Also some custom AML-based power management is involved and that may
> > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > us.
> > > > > >
> > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > accessible in PCI power states below D0.
> > > > >
> > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > (as it is the case here). Also then the GPU config space is not
> > > > > accessible.
> > > >
> > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > a suspend ordering violation?
> > >
> > > No. We put the GPU into D3hot first, then the root port and then turn
> > > off the power resource (which is attached to the root port) resulting
> > > the topology entering D3cold.
> > >
> >
> > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> > the power savings are way lower, so I kind of prefer skipping D3hot
> > instead of D3cold. Skipping D3hot doesn't seem to make any difference
> > in power savings in my testing.
>
> OK
>
> What exactly did you do to skip D3cold in your testing?
>

For that I poked into the PCI registers directly and skipped doing the
ACPI calls and simply checked for the idle power consumption on my
laptop. But I guess I should retest with calling pci_d3cold_disable
from nouveau instead? Or is there a different preferable way of
testing this?


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 16:06                                             ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 16:06 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > those "Root Port"s
> > > > > >
> > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > >
> > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > still the same.
> > > > >
> >
> > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> > side. And if the Nvidia GPU is on a port on the PCH side it all seems
> > to work just fine.
>
> But that may involve different AML too, may it not?
>
> > > > > > Also some custom AML-based power management is involved and that may
> > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > us.
> > > > > >
> > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > accessible in PCI power states below D0.
> > > > >
> > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > (as it is the case here). Also then the GPU config space is not
> > > > > accessible.
> > > >
> > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > a suspend ordering violation?
> > >
> > > No. We put the GPU into D3hot first, then the root port and then turn
> > > off the power resource (which is attached to the root port) resulting
> > > the topology entering D3cold.
> > >
> >
> > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> > the power savings are way lower, so I kind of prefer skipping D3hot
> > instead of D3cold. Skipping D3hot doesn't seem to make any difference
> > in power savings in my testing.
>
> OK
>
> What exactly did you do to skip D3cold in your testing?
>

For that I poked into the PCI registers directly and skipped doing the
ACPI calls and simply checked for the idle power consumption on my
laptop. But I guess I should retest with calling pci_d3cold_disable
from nouveau instead? Or is there a different preferable way of
testing this?

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 16:39                                               ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 16:39 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 5:06 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > those "Root Port"s
> > > > > > >
> > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > >
> > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > still the same.
> > > > > >
> > >
> > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> > > side. And if the Nvidia GPU is on a port on the PCH side it all seems
> > > to work just fine.
> >
> > But that may involve different AML too, may it not?
> >
> > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > us.
> > > > > > >
> > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > accessible in PCI power states below D0.
> > > > > >
> > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > accessible.
> > > > >
> > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > a suspend ordering violation?
> > > >
> > > > No. We put the GPU into D3hot first, then the root port and then turn
> > > > off the power resource (which is attached to the root port) resulting
> > > > the topology entering D3cold.
> > > >
> > >
> > > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> > > the power savings are way lower, so I kind of prefer skipping D3hot
> > > instead of D3cold. Skipping D3hot doesn't seem to make any difference
> > > in power savings in my testing.
> >
> > OK
> >
> > What exactly did you do to skip D3cold in your testing?
> >
>
> For that I poked into the PCI registers directly and skipped doing the
> ACPI calls and simply checked for the idle power consumption on my
> laptop.

That doesn't involve the PCIe port PM, however.

> But I guess I should retest with calling pci_d3cold_disable
> from nouveau instead? Or is there a different preferable way of
> testing this?

There is a sysfs attribute called "d3cold_allowed" which can be used
for "blocking" D3cold, so can you please retest using that?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 16:39                                               ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 16:39 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 5:06 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > those "Root Port"s
> > > > > > >
> > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > >
> > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > still the same.
> > > > > >
> > >
> > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU
> > > side. And if the Nvidia GPU is on a port on the PCH side it all seems
> > > to work just fine.
> >
> > But that may involve different AML too, may it not?
> >
> > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > us.
> > > > > > >
> > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > accessible in PCI power states below D0.
> > > > > >
> > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > accessible.
> > > > >
> > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > a suspend ordering violation?
> > > >
> > > > No. We put the GPU into D3hot first, then the root port and then turn
> > > > off the power resource (which is attached to the root port) resulting
> > > > the topology entering D3cold.
> > > >
> > >
> > > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but
> > > the power savings are way lower, so I kind of prefer skipping D3hot
> > > instead of D3cold. Skipping D3hot doesn't seem to make any difference
> > > in power savings in my testing.
> >
> > OK
> >
> > What exactly did you do to skip D3cold in your testing?
> >
>
> For that I poked into the PCI registers directly and skipped doing the
> ACPI calls and simply checked for the idle power consumption on my
> laptop.

That doesn't involve the PCIe port PM, however.

> But I guess I should retest with calling pci_d3cold_disable
> from nouveau instead? Or is there a different preferable way of
> testing this?

There is a sysfs attribute called "d3cold_allowed" which can be used
for "blocking" D3cold, so can you please retest using that?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 19:49                                             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 19:49 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > those "Root Port"s
> > > > > >
> > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > >
> > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > still the same.
> > > > >
> > > > > > Also some custom AML-based power management is involved and that may
> > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > us.
> > > > > >
> > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > accessible in PCI power states below D0.
> > > > >
> > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > (as it is the case here). Also then the GPU config space is not
> > > > > accessible.
> > > >
> > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > a suspend ordering violation?
> > >
> > > No. We put the GPU into D3hot first,
> 
> OK
> 
> Does this involve any AML, like a _PS3 under the GPU object?

I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
itself is not described in ACPI tables at all.

> > > then the root port and then turn
> > > off the power resource (which is attached to the root port) resulting
> > > the topology entering D3cold.
> >
> > I don't see that happening in the AML though.
> 
> Which AML do you mean, specifically?  The _OFF method for the root
> port's _PR3 power resource or something else?

The root port's _OFF method for the power resource returned by its _PR3.

> > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > check) then we directly do link disable whereas in Windows 8+ we invoke
> > LKDS() method that puts the link into L2/L3. None of the fields they
> > access seem to touch the GPU itself.
> 
> So that may be where the problem is.
> 
> Putting the downstream component into PCI D[1-3] is expected to put
> the link into L1, so I'm not sure how that plays with the later
> attempt to put it into L2/L3 Ready.

That should be fine. What I've seen the link goes into L1 when
downstream component is put to D-state (not D0) and then it is put back
to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.

> Also, L2/L3 Ready is expected to be transient, so finally power should
> be removed somehow.

There is GPIO for both power and PERST, I think the line here:

  \_SB.SGOV (0x01010004, Zero)

is the one that removes power.

> > LKDS() for the first PEG port looks like this:
> >
> >    P0L2 = One
> >    Sleep (0x10)
> >    Local0 = Zero
> >    While (P0L2)
> >    {
> >         If ((Local0 > 0x04))
> >         {
> >             Break
> >         }
> >
> >         Sleep (0x10)
> >         Local0++
> >    }
> >
> > One thing that comes to mind is that the loop can end even if P0L2 is
> > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > Sleep() is implemented differently in Windows? I mean Linux may be
> > "faster" here and return prematurely and if we leave the port into D0
> > this does not happen, or something. I'm just throwing out ideas :)
> 
> But this actually works for the downstream component in D0, doesn't it?

It does and that leaves the link in L0 so it could be that then the
above AML works better or something.

That reminds me, ASPM may have something to do with this as well.

> Also, if the downstream component is in D0, the port actually should
> stay in D0 too, so what would happen with the $subject patch applied?

Parent port cannot be lower D-state than the child so I agree it should
stay in D0 as well. However, it seems that what happens is that the
issue goes away :)

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 19:49                                             ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-21 19:49 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > those "Root Port"s
> > > > > >
> > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > >
> > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > still the same.
> > > > >
> > > > > > Also some custom AML-based power management is involved and that may
> > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > us.
> > > > > >
> > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > accessible in PCI power states below D0.
> > > > >
> > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > (as it is the case here). Also then the GPU config space is not
> > > > > accessible.
> > > >
> > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > a suspend ordering violation?
> > >
> > > No. We put the GPU into D3hot first,
> 
> OK
> 
> Does this involve any AML, like a _PS3 under the GPU object?

I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
itself is not described in ACPI tables at all.

> > > then the root port and then turn
> > > off the power resource (which is attached to the root port) resulting
> > > the topology entering D3cold.
> >
> > I don't see that happening in the AML though.
> 
> Which AML do you mean, specifically?  The _OFF method for the root
> port's _PR3 power resource or something else?

The root port's _OFF method for the power resource returned by its _PR3.

> > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > check) then we directly do link disable whereas in Windows 8+ we invoke
> > LKDS() method that puts the link into L2/L3. None of the fields they
> > access seem to touch the GPU itself.
> 
> So that may be where the problem is.
> 
> Putting the downstream component into PCI D[1-3] is expected to put
> the link into L1, so I'm not sure how that plays with the later
> attempt to put it into L2/L3 Ready.

That should be fine. What I've seen the link goes into L1 when
downstream component is put to D-state (not D0) and then it is put back
to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.

> Also, L2/L3 Ready is expected to be transient, so finally power should
> be removed somehow.

There is GPIO for both power and PERST, I think the line here:

  \_SB.SGOV (0x01010004, Zero)

is the one that removes power.

> > LKDS() for the first PEG port looks like this:
> >
> >    P0L2 = One
> >    Sleep (0x10)
> >    Local0 = Zero
> >    While (P0L2)
> >    {
> >         If ((Local0 > 0x04))
> >         {
> >             Break
> >         }
> >
> >         Sleep (0x10)
> >         Local0++
> >    }
> >
> > One thing that comes to mind is that the loop can end even if P0L2 is
> > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > Sleep() is implemented differently in Windows? I mean Linux may be
> > "faster" here and return prematurely and if we leave the port into D0
> > this does not happen, or something. I'm just throwing out ideas :)
> 
> But this actually works for the downstream component in D0, doesn't it?

It does and that leaves the link in L0 so it could be that then the
above AML works better or something.

That reminds me, ASPM may have something to do with this as well.

> Also, if the downstream component is in D0, the port actually should
> stay in D0 too, so what would happen with the $subject patch applied?

Parent port cannot be lower D-state than the child so I agree it should
stay in D0 as well. However, it seems that what happens is that the
issue goes away :)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 22:39                                               ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 22:39 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > those "Root Port"s
> > > > > > >
> > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > >
> > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > still the same.
> > > > > >
> > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > us.
> > > > > > >
> > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > accessible in PCI power states below D0.
> > > > > >
> > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > accessible.
> > > > >
> > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > a suspend ordering violation?
> > > >
> > > > No. We put the GPU into D3hot first,
> >
> > OK
> >
> > Does this involve any AML, like a _PS3 under the GPU object?
>
> I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> itself is not described in ACPI tables at all.

OK

> > > > then the root port and then turn
> > > > off the power resource (which is attached to the root port) resulting
> > > > the topology entering D3cold.
> > >
> > > I don't see that happening in the AML though.
> >
> > Which AML do you mean, specifically?  The _OFF method for the root
> > port's _PR3 power resource or something else?
>
> The root port's _OFF method for the power resource returned by its _PR3.

OK, so without the $subject patch we (1) program the downstream
component (GPU) into D3hot, then we (2) program the port holding it
into D3hot and then we (3) let the AML (_OFF for the power resource
listed by _PR3 under the port object) run.

Something strange happens at this point (and I guess that _OFF doesn't
even reach the point where it removes power from the port which is why
we see a lock-up).

We know that skipping (1) makes things work and we kind of suspect
that skipping (3) would make things work either, but what about doing
(1) and (3) without (2)?

> > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > access seem to touch the GPU itself.
> >
> > So that may be where the problem is.
> >
> > Putting the downstream component into PCI D[1-3] is expected to put
> > the link into L1, so I'm not sure how that plays with the later
> > attempt to put it into L2/L3 Ready.
>
> That should be fine. What I've seen the link goes into L1 when
> downstream component is put to D-state (not D0) and then it is put back
> to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.

Well, that's the expected behavior, but the observed behavior isn't as
expected. :-)

> > Also, L2/L3 Ready is expected to be transient, so finally power should
> > be removed somehow.
>
> There is GPIO for both power and PERST, I think the line here:
>
>   \_SB.SGOV (0x01010004, Zero)
>
> is the one that removes power.

OK

> > > LKDS() for the first PEG port looks like this:
> > >
> > >    P0L2 = One
> > >    Sleep (0x10)
> > >    Local0 = Zero
> > >    While (P0L2)
> > >    {
> > >         If ((Local0 > 0x04))
> > >         {
> > >             Break
> > >         }
> > >
> > >         Sleep (0x10)
> > >         Local0++
> > >    }
> > >
> > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > "faster" here and return prematurely and if we leave the port into D0
> > > this does not happen, or something. I'm just throwing out ideas :)
> >
> > But this actually works for the downstream component in D0, doesn't it?
>
> It does and that leaves the link in L0 so it could be that then the
> above AML works better or something.

That would be my guess.

> That reminds me, ASPM may have something to do with this as well.

Not really if D-states are involved.

> > Also, if the downstream component is in D0, the port actually should
> > stay in D0 too, so what would happen with the $subject patch applied?
>
> Parent port cannot be lower D-state than the child so I agree it should
> stay in D0 as well. However, it seems that what happens is that the
> issue goes away :)

Well, at least this is kind of out of the spec.

Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
downstream device is in D0, so the $subject patch will not work as
expected in the suspend-to-idle case.

Also we really should make up our minds on whether or not to force
PCIe ports to stay in D0 when downstream devices are in D0 and be
consequent about that.  Right now we do one thing during system-wide
suspend and the other one in PM-runtime, which is confusing.

The current design is mostly based on the PCI PM Spec 1.2, so it would
be consequent to follow system-wide suspend in PM-runtime and avoid
putting PCIe ports holding devices in D0 into any low-power states.
but that would make the approach in the $subject patch ineffective.

Moreover, the fact that there are separate branches for "Windows 7"
and "Windows 8+" kind of suggest a change in the expected behavior
between Windows 7 and Windows 8, from the AML perspective.  I would
guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
does something else.  Now, the structure of the "Windows 8+" branch
described by you suggests that, at least in the cases when it is going
to remove power from the port eventually, it goes straight for the
link preparation (the L2/L3 Ready transition) and power removal
without bothering to program the downstream device and port into D3hot
(because that's kind of redundant).

That hypothetical "Windows 8+" approach may really work universally,
because it doesn't seem to break any rules (going straight from D0 to
D3cold is not disallowed and doing that for both a port and a
downstream device at the same time is kind of OK either, as long as
the link is ready for that).

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 22:39                                               ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 22:39 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > those "Root Port"s
> > > > > > >
> > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > >
> > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > still the same.
> > > > > >
> > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > us.
> > > > > > >
> > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > accessible in PCI power states below D0.
> > > > > >
> > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > accessible.
> > > > >
> > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > a suspend ordering violation?
> > > >
> > > > No. We put the GPU into D3hot first,
> >
> > OK
> >
> > Does this involve any AML, like a _PS3 under the GPU object?
>
> I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> itself is not described in ACPI tables at all.

OK

> > > > then the root port and then turn
> > > > off the power resource (which is attached to the root port) resulting
> > > > the topology entering D3cold.
> > >
> > > I don't see that happening in the AML though.
> >
> > Which AML do you mean, specifically?  The _OFF method for the root
> > port's _PR3 power resource or something else?
>
> The root port's _OFF method for the power resource returned by its _PR3.

OK, so without the $subject patch we (1) program the downstream
component (GPU) into D3hot, then we (2) program the port holding it
into D3hot and then we (3) let the AML (_OFF for the power resource
listed by _PR3 under the port object) run.

Something strange happens at this point (and I guess that _OFF doesn't
even reach the point where it removes power from the port which is why
we see a lock-up).

We know that skipping (1) makes things work and we kind of suspect
that skipping (3) would make things work either, but what about doing
(1) and (3) without (2)?

> > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > access seem to touch the GPU itself.
> >
> > So that may be where the problem is.
> >
> > Putting the downstream component into PCI D[1-3] is expected to put
> > the link into L1, so I'm not sure how that plays with the later
> > attempt to put it into L2/L3 Ready.
>
> That should be fine. What I've seen the link goes into L1 when
> downstream component is put to D-state (not D0) and then it is put back
> to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.

Well, that's the expected behavior, but the observed behavior isn't as
expected. :-)

> > Also, L2/L3 Ready is expected to be transient, so finally power should
> > be removed somehow.
>
> There is GPIO for both power and PERST, I think the line here:
>
>   \_SB.SGOV (0x01010004, Zero)
>
> is the one that removes power.

OK

> > > LKDS() for the first PEG port looks like this:
> > >
> > >    P0L2 = One
> > >    Sleep (0x10)
> > >    Local0 = Zero
> > >    While (P0L2)
> > >    {
> > >         If ((Local0 > 0x04))
> > >         {
> > >             Break
> > >         }
> > >
> > >         Sleep (0x10)
> > >         Local0++
> > >    }
> > >
> > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > "faster" here and return prematurely and if we leave the port into D0
> > > this does not happen, or something. I'm just throwing out ideas :)
> >
> > But this actually works for the downstream component in D0, doesn't it?
>
> It does and that leaves the link in L0 so it could be that then the
> above AML works better or something.

That would be my guess.

> That reminds me, ASPM may have something to do with this as well.

Not really if D-states are involved.

> > Also, if the downstream component is in D0, the port actually should
> > stay in D0 too, so what would happen with the $subject patch applied?
>
> Parent port cannot be lower D-state than the child so I agree it should
> stay in D0 as well. However, it seems that what happens is that the
> issue goes away :)

Well, at least this is kind of out of the spec.

Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
downstream device is in D0, so the $subject patch will not work as
expected in the suspend-to-idle case.

Also we really should make up our minds on whether or not to force
PCIe ports to stay in D0 when downstream devices are in D0 and be
consequent about that.  Right now we do one thing during system-wide
suspend and the other one in PM-runtime, which is confusing.

The current design is mostly based on the PCI PM Spec 1.2, so it would
be consequent to follow system-wide suspend in PM-runtime and avoid
putting PCIe ports holding devices in D0 into any low-power states.
but that would make the approach in the $subject patch ineffective.

Moreover, the fact that there are separate branches for "Windows 7"
and "Windows 8+" kind of suggest a change in the expected behavior
between Windows 7 and Windows 8, from the AML perspective.  I would
guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
does something else.  Now, the structure of the "Windows 8+" branch
described by you suggests that, at least in the cases when it is going
to remove power from the port eventually, it goes straight for the
link preparation (the L2/L3 Ready transition) and power removal
without bothering to program the downstream device and port into D3hot
(because that's kind of redundant).

That hypothetical "Windows 8+" approach may really work universally,
because it doesn't seem to break any rules (going straight from D0 to
D3cold is not disallowed and doing that for both a port and a
downstream device at the same time is kind of OK either, as long as
the link is ready for that).
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 22:39                                               ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-21 22:39 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > those "Root Port"s
> > > > > > >
> > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > >
> > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > still the same.
> > > > > >
> > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > us.
> > > > > > >
> > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > accessible in PCI power states below D0.
> > > > > >
> > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > accessible.
> > > > >
> > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > a suspend ordering violation?
> > > >
> > > > No. We put the GPU into D3hot first,
> >
> > OK
> >
> > Does this involve any AML, like a _PS3 under the GPU object?
>
> I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> itself is not described in ACPI tables at all.

OK

> > > > then the root port and then turn
> > > > off the power resource (which is attached to the root port) resulting
> > > > the topology entering D3cold.
> > >
> > > I don't see that happening in the AML though.
> >
> > Which AML do you mean, specifically?  The _OFF method for the root
> > port's _PR3 power resource or something else?
>
> The root port's _OFF method for the power resource returned by its _PR3.

OK, so without the $subject patch we (1) program the downstream
component (GPU) into D3hot, then we (2) program the port holding it
into D3hot and then we (3) let the AML (_OFF for the power resource
listed by _PR3 under the port object) run.

Something strange happens at this point (and I guess that _OFF doesn't
even reach the point where it removes power from the port which is why
we see a lock-up).

We know that skipping (1) makes things work and we kind of suspect
that skipping (3) would make things work either, but what about doing
(1) and (3) without (2)?

> > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > access seem to touch the GPU itself.
> >
> > So that may be where the problem is.
> >
> > Putting the downstream component into PCI D[1-3] is expected to put
> > the link into L1, so I'm not sure how that plays with the later
> > attempt to put it into L2/L3 Ready.
>
> That should be fine. What I've seen the link goes into L1 when
> downstream component is put to D-state (not D0) and then it is put back
> to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.

Well, that's the expected behavior, but the observed behavior isn't as
expected. :-)

> > Also, L2/L3 Ready is expected to be transient, so finally power should
> > be removed somehow.
>
> There is GPIO for both power and PERST, I think the line here:
>
>   \_SB.SGOV (0x01010004, Zero)
>
> is the one that removes power.

OK

> > > LKDS() for the first PEG port looks like this:
> > >
> > >    P0L2 = One
> > >    Sleep (0x10)
> > >    Local0 = Zero
> > >    While (P0L2)
> > >    {
> > >         If ((Local0 > 0x04))
> > >         {
> > >             Break
> > >         }
> > >
> > >         Sleep (0x10)
> > >         Local0++
> > >    }
> > >
> > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > "faster" here and return prematurely and if we leave the port into D0
> > > this does not happen, or something. I'm just throwing out ideas :)
> >
> > But this actually works for the downstream component in D0, doesn't it?
>
> It does and that leaves the link in L0 so it could be that then the
> above AML works better or something.

That would be my guess.

> That reminds me, ASPM may have something to do with this as well.

Not really if D-states are involved.

> > Also, if the downstream component is in D0, the port actually should
> > stay in D0 too, so what would happen with the $subject patch applied?
>
> Parent port cannot be lower D-state than the child so I agree it should
> stay in D0 as well. However, it seems that what happens is that the
> issue goes away :)

Well, at least this is kind of out of the spec.

Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
downstream device is in D0, so the $subject patch will not work as
expected in the suspend-to-idle case.

Also we really should make up our minds on whether or not to force
PCIe ports to stay in D0 when downstream devices are in D0 and be
consequent about that.  Right now we do one thing during system-wide
suspend and the other one in PM-runtime, which is confusing.

The current design is mostly based on the PCI PM Spec 1.2, so it would
be consequent to follow system-wide suspend in PM-runtime and avoid
putting PCIe ports holding devices in D0 into any low-power states.
but that would make the approach in the $subject patch ineffective.

Moreover, the fact that there are separate branches for "Windows 7"
and "Windows 8+" kind of suggest a change in the expected behavior
between Windows 7 and Windows 8, from the AML perspective.  I would
guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
does something else.  Now, the structure of the "Windows 8+" branch
described by you suggests that, at least in the cases when it is going
to remove power from the port eventually, it goes straight for the
link preparation (the L2/L3 Ready transition) and power removal
without bothering to program the downstream device and port into D3hot
(because that's kind of redundant).

That hypothetical "Windows 8+" approach may really work universally,
because it doesn't seem to break any rules (going straight from D0 to
D3cold is not disallowed and doing that for both a port and a
downstream device at the same time is kind of OK either, as long as
the link is ready for that).
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-21 22:39                                               ` Rafael J. Wysocki
@ 2019-11-21 22:50                                                 ` Karol Herbst
  -1 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 22:50 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > those "Root Port"s
> > > > > > > >
> > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > >
> > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > still the same.
> > > > > > >
> > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > us.
> > > > > > > >
> > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > accessible in PCI power states below D0.
> > > > > > >
> > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > accessible.
> > > > > >
> > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > a suspend ordering violation?
> > > > >
> > > > > No. We put the GPU into D3hot first,
> > >
> > > OK
> > >
> > > Does this involve any AML, like a _PS3 under the GPU object?
> >
> > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > itself is not described in ACPI tables at all.
>
> OK
>
> > > > > then the root port and then turn
> > > > > off the power resource (which is attached to the root port) resulting
> > > > > the topology entering D3cold.
> > > >
> > > > I don't see that happening in the AML though.
> > >
> > > Which AML do you mean, specifically?  The _OFF method for the root
> > > port's _PR3 power resource or something else?
> >
> > The root port's _OFF method for the power resource returned by its _PR3.
>
> OK, so without the $subject patch we (1) program the downstream
> component (GPU) into D3hot, then we (2) program the port holding it
> into D3hot and then we (3) let the AML (_OFF for the power resource
> listed by _PR3 under the port object) run.
>
> Something strange happens at this point (and I guess that _OFF doesn't
> even reach the point where it removes power from the port which is why
> we see a lock-up).
>

it does though. I will post the data shortly (with the change in power
consumption), as I also want to do the ACPI traces now.

> We know that skipping (1) makes things work and we kind of suspect
> that skipping (3) would make things work either, but what about doing
> (1) and (3) without (2)?
>
> > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > access seem to touch the GPU itself.
> > >
> > > So that may be where the problem is.
> > >
> > > Putting the downstream component into PCI D[1-3] is expected to put
> > > the link into L1, so I'm not sure how that plays with the later
> > > attempt to put it into L2/L3 Ready.
> >
> > That should be fine. What I've seen the link goes into L1 when
> > downstream component is put to D-state (not D0) and then it is put back
> > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
>
> Well, that's the expected behavior, but the observed behavior isn't as
> expected. :-)
>
> > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > be removed somehow.
> >
> > There is GPIO for both power and PERST, I think the line here:
> >
> >   \_SB.SGOV (0x01010004, Zero)
> >
> > is the one that removes power.
>
> OK
>
> > > > LKDS() for the first PEG port looks like this:
> > > >
> > > >    P0L2 = One
> > > >    Sleep (0x10)
> > > >    Local0 = Zero
> > > >    While (P0L2)
> > > >    {
> > > >         If ((Local0 > 0x04))
> > > >         {
> > > >             Break
> > > >         }
> > > >
> > > >         Sleep (0x10)
> > > >         Local0++
> > > >    }
> > > >
> > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > "faster" here and return prematurely and if we leave the port into D0
> > > > this does not happen, or something. I'm just throwing out ideas :)
> > >
> > > But this actually works for the downstream component in D0, doesn't it?
> >
> > It does and that leaves the link in L0 so it could be that then the
> > above AML works better or something.
>
> That would be my guess.
>
> > That reminds me, ASPM may have something to do with this as well.
>
> Not really if D-states are involved.
>
> > > Also, if the downstream component is in D0, the port actually should
> > > stay in D0 too, so what would happen with the $subject patch applied?
> >
> > Parent port cannot be lower D-state than the child so I agree it should
> > stay in D0 as well. However, it seems that what happens is that the
> > issue goes away :)
>
> Well, at least this is kind of out of the spec.
>
> Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> downstream device is in D0, so the $subject patch will not work as
> expected in the suspend-to-idle case.
>
> Also we really should make up our minds on whether or not to force
> PCIe ports to stay in D0 when downstream devices are in D0 and be
> consequent about that.  Right now we do one thing during system-wide
> suspend and the other one in PM-runtime, which is confusing.
>
> The current design is mostly based on the PCI PM Spec 1.2, so it would
> be consequent to follow system-wide suspend in PM-runtime and avoid
> putting PCIe ports holding devices in D0 into any low-power states.
> but that would make the approach in the $subject patch ineffective.
>
> Moreover, the fact that there are separate branches for "Windows 7"
> and "Windows 8+" kind of suggest a change in the expected behavior
> between Windows 7 and Windows 8, from the AML perspective.  I would
> guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> does something else.  Now, the structure of the "Windows 8+" branch
> described by you suggests that, at least in the cases when it is going
> to remove power from the port eventually, it goes straight for the
> link preparation (the L2/L3 Ready transition) and power removal
> without bothering to program the downstream device and port into D3hot
> (because that's kind of redundant).
>
> That hypothetical "Windows 8+" approach may really work universally,
> because it doesn't seem to break any rules (going straight from D0 to
> D3cold is not disallowed and doing that for both a port and a
> downstream device at the same time is kind of OK either, as long as
> the link is ready for that).
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-21 22:50                                                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-21 22:50 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > those "Root Port"s
> > > > > > > >
> > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > >
> > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > still the same.
> > > > > > >
> > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > us.
> > > > > > > >
> > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > accessible in PCI power states below D0.
> > > > > > >
> > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > accessible.
> > > > > >
> > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > a suspend ordering violation?
> > > > >
> > > > > No. We put the GPU into D3hot first,
> > >
> > > OK
> > >
> > > Does this involve any AML, like a _PS3 under the GPU object?
> >
> > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > itself is not described in ACPI tables at all.
>
> OK
>
> > > > > then the root port and then turn
> > > > > off the power resource (which is attached to the root port) resulting
> > > > > the topology entering D3cold.
> > > >
> > > > I don't see that happening in the AML though.
> > >
> > > Which AML do you mean, specifically?  The _OFF method for the root
> > > port's _PR3 power resource or something else?
> >
> > The root port's _OFF method for the power resource returned by its _PR3.
>
> OK, so without the $subject patch we (1) program the downstream
> component (GPU) into D3hot, then we (2) program the port holding it
> into D3hot and then we (3) let the AML (_OFF for the power resource
> listed by _PR3 under the port object) run.
>
> Something strange happens at this point (and I guess that _OFF doesn't
> even reach the point where it removes power from the port which is why
> we see a lock-up).
>

it does though. I will post the data shortly (with the change in power
consumption), as I also want to do the ACPI traces now.

> We know that skipping (1) makes things work and we kind of suspect
> that skipping (3) would make things work either, but what about doing
> (1) and (3) without (2)?
>
> > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > access seem to touch the GPU itself.
> > >
> > > So that may be where the problem is.
> > >
> > > Putting the downstream component into PCI D[1-3] is expected to put
> > > the link into L1, so I'm not sure how that plays with the later
> > > attempt to put it into L2/L3 Ready.
> >
> > That should be fine. What I've seen the link goes into L1 when
> > downstream component is put to D-state (not D0) and then it is put back
> > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
>
> Well, that's the expected behavior, but the observed behavior isn't as
> expected. :-)
>
> > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > be removed somehow.
> >
> > There is GPIO for both power and PERST, I think the line here:
> >
> >   \_SB.SGOV (0x01010004, Zero)
> >
> > is the one that removes power.
>
> OK
>
> > > > LKDS() for the first PEG port looks like this:
> > > >
> > > >    P0L2 = One
> > > >    Sleep (0x10)
> > > >    Local0 = Zero
> > > >    While (P0L2)
> > > >    {
> > > >         If ((Local0 > 0x04))
> > > >         {
> > > >             Break
> > > >         }
> > > >
> > > >         Sleep (0x10)
> > > >         Local0++
> > > >    }
> > > >
> > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > "faster" here and return prematurely and if we leave the port into D0
> > > > this does not happen, or something. I'm just throwing out ideas :)
> > >
> > > But this actually works for the downstream component in D0, doesn't it?
> >
> > It does and that leaves the link in L0 so it could be that then the
> > above AML works better or something.
>
> That would be my guess.
>
> > That reminds me, ASPM may have something to do with this as well.
>
> Not really if D-states are involved.
>
> > > Also, if the downstream component is in D0, the port actually should
> > > stay in D0 too, so what would happen with the $subject patch applied?
> >
> > Parent port cannot be lower D-state than the child so I agree it should
> > stay in D0 as well. However, it seems that what happens is that the
> > issue goes away :)
>
> Well, at least this is kind of out of the spec.
>
> Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> downstream device is in D0, so the $subject patch will not work as
> expected in the suspend-to-idle case.
>
> Also we really should make up our minds on whether or not to force
> PCIe ports to stay in D0 when downstream devices are in D0 and be
> consequent about that.  Right now we do one thing during system-wide
> suspend and the other one in PM-runtime, which is confusing.
>
> The current design is mostly based on the PCI PM Spec 1.2, so it would
> be consequent to follow system-wide suspend in PM-runtime and avoid
> putting PCIe ports holding devices in D0 into any low-power states.
> but that would make the approach in the $subject patch ineffective.
>
> Moreover, the fact that there are separate branches for "Windows 7"
> and "Windows 8+" kind of suggest a change in the expected behavior
> between Windows 7 and Windows 8, from the AML perspective.  I would
> guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> does something else.  Now, the structure of the "Windows 8+" branch
> described by you suggests that, at least in the cases when it is going
> to remove power from the port eventually, it goes straight for the
> link preparation (the L2/L3 Ready transition) and power removal
> without bothering to program the downstream device and port into D3hot
> (because that's kind of redundant).
>
> That hypothetical "Windows 8+" approach may really work universally,
> because it doesn't seem to break any rules (going straight from D0 to
> D3cold is not disallowed and doing that for both a port and a
> downstream device at the same time is kind of OK either, as long as
> the link is ready for that).
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22  0:13                                                   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22  0:13 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

so while trying to test with d3cold disabled, I noticed that I run
into the exact same error. And I verified that the
\_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
turned on.

On Thu, Nov 21, 2019 at 11:50 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > those "Root Port"s
> > > > > > > > >
> > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > >
> > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > still the same.
> > > > > > > >
> > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > us.
> > > > > > > > >
> > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > accessible in PCI power states below D0.
> > > > > > > >
> > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > accessible.
> > > > > > >
> > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > a suspend ordering violation?
> > > > > >
> > > > > > No. We put the GPU into D3hot first,
> > > >
> > > > OK
> > > >
> > > > Does this involve any AML, like a _PS3 under the GPU object?
> > >
> > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > itself is not described in ACPI tables at all.
> >
> > OK
> >
> > > > > > then the root port and then turn
> > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > the topology entering D3cold.
> > > > >
> > > > > I don't see that happening in the AML though.
> > > >
> > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > port's _PR3 power resource or something else?
> > >
> > > The root port's _OFF method for the power resource returned by its _PR3.
> >
> > OK, so without the $subject patch we (1) program the downstream
> > component (GPU) into D3hot, then we (2) program the port holding it
> > into D3hot and then we (3) let the AML (_OFF for the power resource
> > listed by _PR3 under the port object) run.
> >
> > Something strange happens at this point (and I guess that _OFF doesn't
> > even reach the point where it removes power from the port which is why
> > we see a lock-up).
> >
>
> it does though. I will post the data shortly (with the change in power
> consumption), as I also want to do the ACPI traces now.
>
> > We know that skipping (1) makes things work and we kind of suspect
> > that skipping (3) would make things work either, but what about doing
> > (1) and (3) without (2)?
> >
> > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > access seem to touch the GPU itself.
> > > >
> > > > So that may be where the problem is.
> > > >
> > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > the link into L1, so I'm not sure how that plays with the later
> > > > attempt to put it into L2/L3 Ready.
> > >
> > > That should be fine. What I've seen the link goes into L1 when
> > > downstream component is put to D-state (not D0) and then it is put back
> > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> >
> > Well, that's the expected behavior, but the observed behavior isn't as
> > expected. :-)
> >
> > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > be removed somehow.
> > >
> > > There is GPIO for both power and PERST, I think the line here:
> > >
> > >   \_SB.SGOV (0x01010004, Zero)
> > >
> > > is the one that removes power.
> >
> > OK
> >
> > > > > LKDS() for the first PEG port looks like this:
> > > > >
> > > > >    P0L2 = One
> > > > >    Sleep (0x10)
> > > > >    Local0 = Zero
> > > > >    While (P0L2)
> > > > >    {
> > > > >         If ((Local0 > 0x04))
> > > > >         {
> > > > >             Break
> > > > >         }
> > > > >
> > > > >         Sleep (0x10)
> > > > >         Local0++
> > > > >    }
> > > > >
> > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > >
> > > > But this actually works for the downstream component in D0, doesn't it?
> > >
> > > It does and that leaves the link in L0 so it could be that then the
> > > above AML works better or something.
> >
> > That would be my guess.
> >
> > > That reminds me, ASPM may have something to do with this as well.
> >
> > Not really if D-states are involved.
> >
> > > > Also, if the downstream component is in D0, the port actually should
> > > > stay in D0 too, so what would happen with the $subject patch applied?
> > >
> > > Parent port cannot be lower D-state than the child so I agree it should
> > > stay in D0 as well. However, it seems that what happens is that the
> > > issue goes away :)
> >
> > Well, at least this is kind of out of the spec.
> >
> > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > downstream device is in D0, so the $subject patch will not work as
> > expected in the suspend-to-idle case.
> >
> > Also we really should make up our minds on whether or not to force
> > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > consequent about that.  Right now we do one thing during system-wide
> > suspend and the other one in PM-runtime, which is confusing.
> >
> > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > be consequent to follow system-wide suspend in PM-runtime and avoid
> > putting PCIe ports holding devices in D0 into any low-power states.
> > but that would make the approach in the $subject patch ineffective.
> >
> > Moreover, the fact that there are separate branches for "Windows 7"
> > and "Windows 8+" kind of suggest a change in the expected behavior
> > between Windows 7 and Windows 8, from the AML perspective.  I would
> > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > does something else.  Now, the structure of the "Windows 8+" branch
> > described by you suggests that, at least in the cases when it is going
> > to remove power from the port eventually, it goes straight for the
> > link preparation (the L2/L3 Ready transition) and power removal
> > without bothering to program the downstream device and port into D3hot
> > (because that's kind of redundant).
> >
> > That hypothetical "Windows 8+" approach may really work universally,
> > because it doesn't seem to break any rules (going straight from D0 to
> > D3cold is not disallowed and doing that for both a port and a
> > downstream device at the same time is kind of OK either, as long as
> > the link is ready for that).
> >


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22  0:13                                                   ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22  0:13 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

so while trying to test with d3cold disabled, I noticed that I run
into the exact same error. And I verified that the
\_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
turned on.

On Thu, Nov 21, 2019 at 11:50 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > those "Root Port"s
> > > > > > > > >
> > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > >
> > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > still the same.
> > > > > > > >
> > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > us.
> > > > > > > > >
> > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > accessible in PCI power states below D0.
> > > > > > > >
> > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > accessible.
> > > > > > >
> > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > a suspend ordering violation?
> > > > > >
> > > > > > No. We put the GPU into D3hot first,
> > > >
> > > > OK
> > > >
> > > > Does this involve any AML, like a _PS3 under the GPU object?
> > >
> > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > itself is not described in ACPI tables at all.
> >
> > OK
> >
> > > > > > then the root port and then turn
> > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > the topology entering D3cold.
> > > > >
> > > > > I don't see that happening in the AML though.
> > > >
> > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > port's _PR3 power resource or something else?
> > >
> > > The root port's _OFF method for the power resource returned by its _PR3.
> >
> > OK, so without the $subject patch we (1) program the downstream
> > component (GPU) into D3hot, then we (2) program the port holding it
> > into D3hot and then we (3) let the AML (_OFF for the power resource
> > listed by _PR3 under the port object) run.
> >
> > Something strange happens at this point (and I guess that _OFF doesn't
> > even reach the point where it removes power from the port which is why
> > we see a lock-up).
> >
>
> it does though. I will post the data shortly (with the change in power
> consumption), as I also want to do the ACPI traces now.
>
> > We know that skipping (1) makes things work and we kind of suspect
> > that skipping (3) would make things work either, but what about doing
> > (1) and (3) without (2)?
> >
> > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > access seem to touch the GPU itself.
> > > >
> > > > So that may be where the problem is.
> > > >
> > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > the link into L1, so I'm not sure how that plays with the later
> > > > attempt to put it into L2/L3 Ready.
> > >
> > > That should be fine. What I've seen the link goes into L1 when
> > > downstream component is put to D-state (not D0) and then it is put back
> > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> >
> > Well, that's the expected behavior, but the observed behavior isn't as
> > expected. :-)
> >
> > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > be removed somehow.
> > >
> > > There is GPIO for both power and PERST, I think the line here:
> > >
> > >   \_SB.SGOV (0x01010004, Zero)
> > >
> > > is the one that removes power.
> >
> > OK
> >
> > > > > LKDS() for the first PEG port looks like this:
> > > > >
> > > > >    P0L2 = One
> > > > >    Sleep (0x10)
> > > > >    Local0 = Zero
> > > > >    While (P0L2)
> > > > >    {
> > > > >         If ((Local0 > 0x04))
> > > > >         {
> > > > >             Break
> > > > >         }
> > > > >
> > > > >         Sleep (0x10)
> > > > >         Local0++
> > > > >    }
> > > > >
> > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > >
> > > > But this actually works for the downstream component in D0, doesn't it?
> > >
> > > It does and that leaves the link in L0 so it could be that then the
> > > above AML works better or something.
> >
> > That would be my guess.
> >
> > > That reminds me, ASPM may have something to do with this as well.
> >
> > Not really if D-states are involved.
> >
> > > > Also, if the downstream component is in D0, the port actually should
> > > > stay in D0 too, so what would happen with the $subject patch applied?
> > >
> > > Parent port cannot be lower D-state than the child so I agree it should
> > > stay in D0 as well. However, it seems that what happens is that the
> > > issue goes away :)
> >
> > Well, at least this is kind of out of the spec.
> >
> > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > downstream device is in D0, so the $subject patch will not work as
> > expected in the suspend-to-idle case.
> >
> > Also we really should make up our minds on whether or not to force
> > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > consequent about that.  Right now we do one thing during system-wide
> > suspend and the other one in PM-runtime, which is confusing.
> >
> > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > be consequent to follow system-wide suspend in PM-runtime and avoid
> > putting PCIe ports holding devices in D0 into any low-power states.
> > but that would make the approach in the $subject patch ineffective.
> >
> > Moreover, the fact that there are separate branches for "Windows 7"
> > and "Windows 8+" kind of suggest a change in the expected behavior
> > between Windows 7 and Windows 8, from the AML perspective.  I would
> > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > does something else.  Now, the structure of the "Windows 8+" branch
> > described by you suggests that, at least in the cases when it is going
> > to remove power from the port eventually, it goes straight for the
> > link preparation (the L2/L3 Ready transition) and power removal
> > without bothering to program the downstream device and port into D3hot
> > (because that's kind of redundant).
> >
> > That hypothetical "Windows 8+" approach may really work universally,
> > because it doesn't seem to break any rules (going straight from D0 to
> > D3cold is not disallowed and doing that for both a port and a
> > downstream device at the same time is kind of OK either, as long as
> > the link is ready for that).
> >

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22  9:07                                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22  9:07 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst <kherbst@redhat.com> wrote:
>
> so while trying to test with d3cold disabled, I noticed that I run
> into the exact same error.

Does this mean that you disabled d3cold on the GPU via sysfs (the
"d3cold_allowed" attribute was 0) and the original problem still
occurred in that configuration?

> And I verified that the
> \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
> turned on.

I don't really understand this comment, so can you explain it a bit to
me, please?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22  9:07                                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22  9:07 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst <kherbst@redhat.com> wrote:
>
> so while trying to test with d3cold disabled, I noticed that I run
> into the exact same error.

Does this mean that you disabled d3cold on the GPU via sysfs (the
"d3cold_allowed" attribute was 0) and the original problem still
occurred in that configuration?

> And I verified that the
> \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
> turned on.

I don't really understand this comment, so can you explain it a bit to
me, please?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-21 22:39                                               ` Rafael J. Wysocki
@ 2019-11-22 10:36                                                 ` Mika Westerberg
  -1 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-22 10:36 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > those "Root Port"s
> > > > > > > >
> > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > >
> > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > still the same.
> > > > > > >
> > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > us.
> > > > > > > >
> > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > accessible in PCI power states below D0.
> > > > > > >
> > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > accessible.
> > > > > >
> > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > a suspend ordering violation?
> > > > >
> > > > > No. We put the GPU into D3hot first,
> > >
> > > OK
> > >
> > > Does this involve any AML, like a _PS3 under the GPU object?
> >
> > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > itself is not described in ACPI tables at all.
> 
> OK
> 
> > > > > then the root port and then turn
> > > > > off the power resource (which is attached to the root port) resulting
> > > > > the topology entering D3cold.
> > > >
> > > > I don't see that happening in the AML though.
> > >
> > > Which AML do you mean, specifically?  The _OFF method for the root
> > > port's _PR3 power resource or something else?
> >
> > The root port's _OFF method for the power resource returned by its _PR3.
> 
> OK, so without the $subject patch we (1) program the downstream
> component (GPU) into D3hot, then we (2) program the port holding it
> into D3hot and then we (3) let the AML (_OFF for the power resource
> listed by _PR3 under the port object) run.
> 
> Something strange happens at this point (and I guess that _OFF doesn't
> even reach the point where it removes power from the port which is why
> we see a lock-up).

It does not necessary lead to lock-up. Here is dmesg from Karol's
system:

  https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt

what seems to happen is that the GPU never "comes back" from D3cold so
the driver starts breaking apart as the hardware is gone now.

> We know that skipping (1) makes things work and we kind of suspect
> that skipping (3) would make things work either, but what about doing
> (1) and (3) without (2)?

You mean in this particular case or in general? Because if the port has
_PSx methods we need to put it into D3hot AFAIK.

> > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > access seem to touch the GPU itself.
> > >
> > > So that may be where the problem is.
> > >
> > > Putting the downstream component into PCI D[1-3] is expected to put
> > > the link into L1, so I'm not sure how that plays with the later
> > > attempt to put it into L2/L3 Ready.
> >
> > That should be fine. What I've seen the link goes into L1 when
> > downstream component is put to D-state (not D0) and then it is put back
> > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> 
> Well, that's the expected behavior, but the observed behavior isn't as
> expected. :-)

Right :)

> > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > be removed somehow.
> >
> > There is GPIO for both power and PERST, I think the line here:
> >
> >   \_SB.SGOV (0x01010004, Zero)
> >
> > is the one that removes power.
> 
> OK
> 
> > > > LKDS() for the first PEG port looks like this:
> > > >
> > > >    P0L2 = One
> > > >    Sleep (0x10)
> > > >    Local0 = Zero
> > > >    While (P0L2)
> > > >    {
> > > >         If ((Local0 > 0x04))
> > > >         {
> > > >             Break
> > > >         }
> > > >
> > > >         Sleep (0x10)
> > > >         Local0++
> > > >    }
> > > >
> > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > "faster" here and return prematurely and if we leave the port into D0
> > > > this does not happen, or something. I'm just throwing out ideas :)
> > >
> > > But this actually works for the downstream component in D0, doesn't it?
> >
> > It does and that leaves the link in L0 so it could be that then the
> > above AML works better or something.
> 
> That would be my guess.
> 
> > That reminds me, ASPM may have something to do with this as well.
> 
> Not really if D-states are involved.
> 
> > > Also, if the downstream component is in D0, the port actually should
> > > stay in D0 too, so what would happen with the $subject patch applied?
> >
> > Parent port cannot be lower D-state than the child so I agree it should
> > stay in D0 as well. However, it seems that what happens is that the
> > issue goes away :)
> 
> Well, at least this is kind of out of the spec.
> 
> Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> downstream device is in D0, so the $subject patch will not work as
> expected in the suspend-to-idle case.
> 
> Also we really should make up our minds on whether or not to force
> PCIe ports to stay in D0 when downstream devices are in D0 and be
> consequent about that.  Right now we do one thing during system-wide
> suspend and the other one in PM-runtime, which is confusing.
> 
> The current design is mostly based on the PCI PM Spec 1.2, so it would
> be consequent to follow system-wide suspend in PM-runtime and avoid
> putting PCIe ports holding devices in D0 into any low-power states.
> but that would make the approach in the $subject patch ineffective.
> 
> Moreover, the fact that there are separate branches for "Windows 7"
> and "Windows 8+" kind of suggest a change in the expected behavior
> between Windows 7 and Windows 8, from the AML perspective.  I would
> guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> does something else.

My understanding (which may not be correct) is that up to Windows 7 it
never put the devices into D3cold runtime. Only when the system entered
Sx states it evaluated the _OFF methods.

Starting from Windows 8 it started doing this runtime so devices can
enter D3cold even when system is in S0.

> Now, the structure of the "Windows 8+" branch
> described by you suggests that, at least in the cases when it is going
> to remove power from the port eventually, it goes straight for the
> link preparation (the L2/L3 Ready transition) and power removal
> without bothering to program the downstream device and port into D3hot
> (because that's kind of redundant).
> 
> That hypothetical "Windows 8+" approach may really work universally,
> because it doesn't seem to break any rules (going straight from D0 to
> D3cold is not disallowed and doing that for both a port and a
> downstream device at the same time is kind of OK either, as long as
> the link is ready for that).

I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
5.8 we can see the supported PM state transitions and it shows that you
get to D3cold through D3hot. Of course the device goes into D3cold if
you simply remove its power so I agree with you as well. However, if
there is _PS3 method we can't skip the D3hot phase.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 10:36                                                 ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-22 10:36 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > those "Root Port"s
> > > > > > > >
> > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > >
> > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > still the same.
> > > > > > >
> > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > us.
> > > > > > > >
> > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > accessible in PCI power states below D0.
> > > > > > >
> > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > accessible.
> > > > > >
> > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > a suspend ordering violation?
> > > > >
> > > > > No. We put the GPU into D3hot first,
> > >
> > > OK
> > >
> > > Does this involve any AML, like a _PS3 under the GPU object?
> >
> > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > itself is not described in ACPI tables at all.
> 
> OK
> 
> > > > > then the root port and then turn
> > > > > off the power resource (which is attached to the root port) resulting
> > > > > the topology entering D3cold.
> > > >
> > > > I don't see that happening in the AML though.
> > >
> > > Which AML do you mean, specifically?  The _OFF method for the root
> > > port's _PR3 power resource or something else?
> >
> > The root port's _OFF method for the power resource returned by its _PR3.
> 
> OK, so without the $subject patch we (1) program the downstream
> component (GPU) into D3hot, then we (2) program the port holding it
> into D3hot and then we (3) let the AML (_OFF for the power resource
> listed by _PR3 under the port object) run.
> 
> Something strange happens at this point (and I guess that _OFF doesn't
> even reach the point where it removes power from the port which is why
> we see a lock-up).

It does not necessary lead to lock-up. Here is dmesg from Karol's
system:

  https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt

what seems to happen is that the GPU never "comes back" from D3cold so
the driver starts breaking apart as the hardware is gone now.

> We know that skipping (1) makes things work and we kind of suspect
> that skipping (3) would make things work either, but what about doing
> (1) and (3) without (2)?

You mean in this particular case or in general? Because if the port has
_PSx methods we need to put it into D3hot AFAIK.

> > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > access seem to touch the GPU itself.
> > >
> > > So that may be where the problem is.
> > >
> > > Putting the downstream component into PCI D[1-3] is expected to put
> > > the link into L1, so I'm not sure how that plays with the later
> > > attempt to put it into L2/L3 Ready.
> >
> > That should be fine. What I've seen the link goes into L1 when
> > downstream component is put to D-state (not D0) and then it is put back
> > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> 
> Well, that's the expected behavior, but the observed behavior isn't as
> expected. :-)

Right :)

> > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > be removed somehow.
> >
> > There is GPIO for both power and PERST, I think the line here:
> >
> >   \_SB.SGOV (0x01010004, Zero)
> >
> > is the one that removes power.
> 
> OK
> 
> > > > LKDS() for the first PEG port looks like this:
> > > >
> > > >    P0L2 = One
> > > >    Sleep (0x10)
> > > >    Local0 = Zero
> > > >    While (P0L2)
> > > >    {
> > > >         If ((Local0 > 0x04))
> > > >         {
> > > >             Break
> > > >         }
> > > >
> > > >         Sleep (0x10)
> > > >         Local0++
> > > >    }
> > > >
> > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > "faster" here and return prematurely and if we leave the port into D0
> > > > this does not happen, or something. I'm just throwing out ideas :)
> > >
> > > But this actually works for the downstream component in D0, doesn't it?
> >
> > It does and that leaves the link in L0 so it could be that then the
> > above AML works better or something.
> 
> That would be my guess.
> 
> > That reminds me, ASPM may have something to do with this as well.
> 
> Not really if D-states are involved.
> 
> > > Also, if the downstream component is in D0, the port actually should
> > > stay in D0 too, so what would happen with the $subject patch applied?
> >
> > Parent port cannot be lower D-state than the child so I agree it should
> > stay in D0 as well. However, it seems that what happens is that the
> > issue goes away :)
> 
> Well, at least this is kind of out of the spec.
> 
> Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> downstream device is in D0, so the $subject patch will not work as
> expected in the suspend-to-idle case.
> 
> Also we really should make up our minds on whether or not to force
> PCIe ports to stay in D0 when downstream devices are in D0 and be
> consequent about that.  Right now we do one thing during system-wide
> suspend and the other one in PM-runtime, which is confusing.
> 
> The current design is mostly based on the PCI PM Spec 1.2, so it would
> be consequent to follow system-wide suspend in PM-runtime and avoid
> putting PCIe ports holding devices in D0 into any low-power states.
> but that would make the approach in the $subject patch ineffective.
> 
> Moreover, the fact that there are separate branches for "Windows 7"
> and "Windows 8+" kind of suggest a change in the expected behavior
> between Windows 7 and Windows 8, from the AML perspective.  I would
> guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> does something else.

My understanding (which may not be correct) is that up to Windows 7 it
never put the devices into D3cold runtime. Only when the system entered
Sx states it evaluated the _OFF methods.

Starting from Windows 8 it started doing this runtime so devices can
enter D3cold even when system is in S0.

> Now, the structure of the "Windows 8+" branch
> described by you suggests that, at least in the cases when it is going
> to remove power from the port eventually, it goes straight for the
> link preparation (the L2/L3 Ready transition) and power removal
> without bothering to program the downstream device and port into D3hot
> (because that's kind of redundant).
> 
> That hypothetical "Windows 8+" approach may really work universally,
> because it doesn't seem to break any rules (going straight from D0 to
> D3cold is not disallowed and doing that for both a port and a
> downstream device at the same time is kind of OK either, as long as
> the link is ready for that).

I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
5.8 we can see the supported PM state transitions and it shows that you
get to D3cold through D3hot. Of course the device goes into D3cold if
you simply remove its power so I agree with you as well. However, if
there is _PS3 method we can't skip the D3hot phase.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:30                                                   ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 11:30 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > those "Root Port"s
> > > > > > > > >
> > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > >
> > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > still the same.
> > > > > > > >
> > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > us.
> > > > > > > > >
> > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > accessible in PCI power states below D0.
> > > > > > > >
> > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > accessible.
> > > > > > >
> > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > a suspend ordering violation?
> > > > > >
> > > > > > No. We put the GPU into D3hot first,
> > > >
> > > > OK
> > > >
> > > > Does this involve any AML, like a _PS3 under the GPU object?
> > >
> > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > itself is not described in ACPI tables at all.
> >
> > OK
> >
> > > > > > then the root port and then turn
> > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > the topology entering D3cold.
> > > > >
> > > > > I don't see that happening in the AML though.
> > > >
> > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > port's _PR3 power resource or something else?
> > >
> > > The root port's _OFF method for the power resource returned by its _PR3.
> >
> > OK, so without the $subject patch we (1) program the downstream
> > component (GPU) into D3hot, then we (2) program the port holding it
> > into D3hot and then we (3) let the AML (_OFF for the power resource
> > listed by _PR3 under the port object) run.
> >
> > Something strange happens at this point (and I guess that _OFF doesn't
> > even reach the point where it removes power from the port which is why
> > we see a lock-up).
>
> It does not necessary lead to lock-up. Here is dmesg from Karol's
> system:
>
>   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
>
> what seems to happen is that the GPU never "comes back" from D3cold so
> the driver starts breaking apart as the hardware is gone now.

I meant a lock-up in hardware to be precise, that causes it to stop to
respond (which causes it to appear to be permanently in D3cold).

I wonder if the port accepts PCI config space writes then.

> > We know that skipping (1) makes things work and we kind of suspect
> > that skipping (3) would make things work either, but what about doing
> > (1) and (3) without (2)?
>
> You mean in this particular case or in general?

In this case in particular to start with.  Just do an experiment to
see what happens if we skip pci_raw_set_power_state() for the port
instead of skipping it for the downstream device.

> Because if the port has _PSx methods we need to put it into D3hot AFAIK.

Yes, we need to run _PS3 then, but maybe we don't need to write to its
PMCSR directly.

> > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > access seem to touch the GPU itself.
> > > >
> > > > So that may be where the problem is.
> > > >
> > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > the link into L1, so I'm not sure how that plays with the later
> > > > attempt to put it into L2/L3 Ready.
> > >
> > > That should be fine. What I've seen the link goes into L1 when
> > > downstream component is put to D-state (not D0) and then it is put back
> > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> >
> > Well, that's the expected behavior, but the observed behavior isn't as
> > expected. :-)
>
> Right :)
>
> > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > be removed somehow.
> > >
> > > There is GPIO for both power and PERST, I think the line here:
> > >
> > >   \_SB.SGOV (0x01010004, Zero)
> > >
> > > is the one that removes power.
> >
> > OK
> >
> > > > > LKDS() for the first PEG port looks like this:
> > > > >
> > > > >    P0L2 = One
> > > > >    Sleep (0x10)
> > > > >    Local0 = Zero
> > > > >    While (P0L2)
> > > > >    {
> > > > >         If ((Local0 > 0x04))
> > > > >         {
> > > > >             Break
> > > > >         }
> > > > >
> > > > >         Sleep (0x10)
> > > > >         Local0++
> > > > >    }
> > > > >
> > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > >
> > > > But this actually works for the downstream component in D0, doesn't it?
> > >
> > > It does and that leaves the link in L0 so it could be that then the
> > > above AML works better or something.
> >
> > That would be my guess.
> >
> > > That reminds me, ASPM may have something to do with this as well.
> >
> > Not really if D-states are involved.
> >
> > > > Also, if the downstream component is in D0, the port actually should
> > > > stay in D0 too, so what would happen with the $subject patch applied?
> > >
> > > Parent port cannot be lower D-state than the child so I agree it should
> > > stay in D0 as well. However, it seems that what happens is that the
> > > issue goes away :)
> >
> > Well, at least this is kind of out of the spec.
> >
> > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > downstream device is in D0, so the $subject patch will not work as
> > expected in the suspend-to-idle case.
> >
> > Also we really should make up our minds on whether or not to force
> > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > consequent about that.  Right now we do one thing during system-wide
> > suspend and the other one in PM-runtime, which is confusing.
> >
> > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > be consequent to follow system-wide suspend in PM-runtime and avoid
> > putting PCIe ports holding devices in D0 into any low-power states.
> > but that would make the approach in the $subject patch ineffective.
> >
> > Moreover, the fact that there are separate branches for "Windows 7"
> > and "Windows 8+" kind of suggest a change in the expected behavior
> > between Windows 7 and Windows 8, from the AML perspective.  I would
> > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > does something else.
>
> My understanding (which may not be correct) is that up to Windows 7 it
> never put the devices into D3cold runtime. Only when the system entered
> Sx states it evaluated the _OFF methods.

I see.

> Starting from Windows 8 it started doing this runtime so devices can
> enter D3cold even when system is in S0.

Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?

> > Now, the structure of the "Windows 8+" branch
> > described by you suggests that, at least in the cases when it is going
> > to remove power from the port eventually, it goes straight for the
> > link preparation (the L2/L3 Ready transition) and power removal
> > without bothering to program the downstream device and port into D3hot
> > (because that's kind of redundant).
> >
> > That hypothetical "Windows 8+" approach may really work universally,
> > because it doesn't seem to break any rules (going straight from D0 to
> > D3cold is not disallowed and doing that for both a port and a
> > downstream device at the same time is kind of OK either, as long as
> > the link is ready for that).
>
> I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> 5.8 we can see the supported PM state transitions and it shows that you
> get to D3cold through D3hot. Of course the device goes into D3cold if
> you simply remove its power so I agree with you as well. However, if
> there is _PS3 method we can't skip the D3hot phase.

That's my understanding too, but I'm wondering about direct PMCSR
writes.  It is unclear to me if they are necessary, or more precisely,
whether or not Windows 10, say, carries them out if ACPI PM is going
to be applied.

Maybe I'm going too far with my conclusions, but please let me know
what you think about the approach proposed at the end of
https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:30                                                   ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 11:30 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > >
> > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > >
> > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > those "Root Port"s
> > > > > > > > >
> > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > >
> > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > still the same.
> > > > > > > >
> > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > us.
> > > > > > > > >
> > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > accessible in PCI power states below D0.
> > > > > > > >
> > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > accessible.
> > > > > > >
> > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > a suspend ordering violation?
> > > > > >
> > > > > > No. We put the GPU into D3hot first,
> > > >
> > > > OK
> > > >
> > > > Does this involve any AML, like a _PS3 under the GPU object?
> > >
> > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > itself is not described in ACPI tables at all.
> >
> > OK
> >
> > > > > > then the root port and then turn
> > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > the topology entering D3cold.
> > > > >
> > > > > I don't see that happening in the AML though.
> > > >
> > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > port's _PR3 power resource or something else?
> > >
> > > The root port's _OFF method for the power resource returned by its _PR3.
> >
> > OK, so without the $subject patch we (1) program the downstream
> > component (GPU) into D3hot, then we (2) program the port holding it
> > into D3hot and then we (3) let the AML (_OFF for the power resource
> > listed by _PR3 under the port object) run.
> >
> > Something strange happens at this point (and I guess that _OFF doesn't
> > even reach the point where it removes power from the port which is why
> > we see a lock-up).
>
> It does not necessary lead to lock-up. Here is dmesg from Karol's
> system:
>
>   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
>
> what seems to happen is that the GPU never "comes back" from D3cold so
> the driver starts breaking apart as the hardware is gone now.

I meant a lock-up in hardware to be precise, that causes it to stop to
respond (which causes it to appear to be permanently in D3cold).

I wonder if the port accepts PCI config space writes then.

> > We know that skipping (1) makes things work and we kind of suspect
> > that skipping (3) would make things work either, but what about doing
> > (1) and (3) without (2)?
>
> You mean in this particular case or in general?

In this case in particular to start with.  Just do an experiment to
see what happens if we skip pci_raw_set_power_state() for the port
instead of skipping it for the downstream device.

> Because if the port has _PSx methods we need to put it into D3hot AFAIK.

Yes, we need to run _PS3 then, but maybe we don't need to write to its
PMCSR directly.

> > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > access seem to touch the GPU itself.
> > > >
> > > > So that may be where the problem is.
> > > >
> > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > the link into L1, so I'm not sure how that plays with the later
> > > > attempt to put it into L2/L3 Ready.
> > >
> > > That should be fine. What I've seen the link goes into L1 when
> > > downstream component is put to D-state (not D0) and then it is put back
> > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> >
> > Well, that's the expected behavior, but the observed behavior isn't as
> > expected. :-)
>
> Right :)
>
> > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > be removed somehow.
> > >
> > > There is GPIO for both power and PERST, I think the line here:
> > >
> > >   \_SB.SGOV (0x01010004, Zero)
> > >
> > > is the one that removes power.
> >
> > OK
> >
> > > > > LKDS() for the first PEG port looks like this:
> > > > >
> > > > >    P0L2 = One
> > > > >    Sleep (0x10)
> > > > >    Local0 = Zero
> > > > >    While (P0L2)
> > > > >    {
> > > > >         If ((Local0 > 0x04))
> > > > >         {
> > > > >             Break
> > > > >         }
> > > > >
> > > > >         Sleep (0x10)
> > > > >         Local0++
> > > > >    }
> > > > >
> > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > >
> > > > But this actually works for the downstream component in D0, doesn't it?
> > >
> > > It does and that leaves the link in L0 so it could be that then the
> > > above AML works better or something.
> >
> > That would be my guess.
> >
> > > That reminds me, ASPM may have something to do with this as well.
> >
> > Not really if D-states are involved.
> >
> > > > Also, if the downstream component is in D0, the port actually should
> > > > stay in D0 too, so what would happen with the $subject patch applied?
> > >
> > > Parent port cannot be lower D-state than the child so I agree it should
> > > stay in D0 as well. However, it seems that what happens is that the
> > > issue goes away :)
> >
> > Well, at least this is kind of out of the spec.
> >
> > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > downstream device is in D0, so the $subject patch will not work as
> > expected in the suspend-to-idle case.
> >
> > Also we really should make up our minds on whether or not to force
> > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > consequent about that.  Right now we do one thing during system-wide
> > suspend and the other one in PM-runtime, which is confusing.
> >
> > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > be consequent to follow system-wide suspend in PM-runtime and avoid
> > putting PCIe ports holding devices in D0 into any low-power states.
> > but that would make the approach in the $subject patch ineffective.
> >
> > Moreover, the fact that there are separate branches for "Windows 7"
> > and "Windows 8+" kind of suggest a change in the expected behavior
> > between Windows 7 and Windows 8, from the AML perspective.  I would
> > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > does something else.
>
> My understanding (which may not be correct) is that up to Windows 7 it
> never put the devices into D3cold runtime. Only when the system entered
> Sx states it evaluated the _OFF methods.

I see.

> Starting from Windows 8 it started doing this runtime so devices can
> enter D3cold even when system is in S0.

Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?

> > Now, the structure of the "Windows 8+" branch
> > described by you suggests that, at least in the cases when it is going
> > to remove power from the port eventually, it goes straight for the
> > link preparation (the L2/L3 Ready transition) and power removal
> > without bothering to program the downstream device and port into D3hot
> > (because that's kind of redundant).
> >
> > That hypothetical "Windows 8+" approach may really work universally,
> > because it doesn't seem to break any rules (going straight from D0 to
> > D3cold is not disallowed and doing that for both a port and a
> > downstream device at the same time is kind of OK either, as long as
> > the link is ready for that).
>
> I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> 5.8 we can see the supported PM state transitions and it shows that you
> get to D3cold through D3hot. Of course the device goes into D3cold if
> you simply remove its power so I agree with you as well. However, if
> there is _PS3 method we can't skip the D3hot phase.

That's my understanding too, but I'm wondering about direct PMCSR
writes.  It is unclear to me if they are necessary, or more precisely,
whether or not Windows 10, say, carries them out if ACPI PM is going
to be applied.

Maybe I'm going too far with my conclusions, but please let me know
what you think about the approach proposed at the end of
https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:30                                                       ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:30 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 10:07 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > so while trying to test with d3cold disabled, I noticed that I run
> > into the exact same error.
>
> Does this mean that you disabled d3cold on the GPU via sysfs (the
> "d3cold_allowed" attribute was 0) and the original problem still
> occurred in that configuration?
>

yes. In my previous testing I was poking into the PCI registers of the
bridge controller and the GPU directly and that never caused any
issues as long as I limited it to putting the devices into D3hot.

> > And I verified that the
> > \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
> > turned on.
>
> I don't really understand this comment, so can you explain it a bit to
> me, please?
>

that's the ACPI method to fetch the "status" of the power resource, it
should return 0 when the device was powered off (the GPU) and 1
otherwise.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:30                                                       ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:30 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 10:07 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > so while trying to test with d3cold disabled, I noticed that I run
> > into the exact same error.
>
> Does this mean that you disabled d3cold on the GPU via sysfs (the
> "d3cold_allowed" attribute was 0) and the original problem still
> occurred in that configuration?
>

yes. In my previous testing I was poking into the PCI registers of the
bridge controller and the GPU directly and that never caused any
issues as long as I limited it to putting the devices into D3hot.

> > And I verified that the
> > \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
> > turned on.
>
> I don't really understand this comment, so can you explain it a bit to
> me, please?
>

that's the ACPI method to fetch the "status" of the power resource, it
should return 0 when the device was powered off (the GPU) and 1
otherwise.

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:30                                                       ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:30 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 10:07 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > so while trying to test with d3cold disabled, I noticed that I run
> > into the exact same error.
>
> Does this mean that you disabled d3cold on the GPU via sysfs (the
> "d3cold_allowed" attribute was 0) and the original problem still
> occurred in that configuration?
>

yes. In my previous testing I was poking into the PCI registers of the
bridge controller and the GPU directly and that never caused any
issues as long as I limited it to putting the devices into D3hot.

> > And I verified that the
> > \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be
> > turned on.
>
> I don't really understand this comment, so can you explain it a bit to
> me, please?
>

that's the ACPI method to fetch the "status" of the power resource, it
should return 0 when the device was powered off (the GPU) and 1
otherwise.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:34                                                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:34 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > > those "Root Port"s
> > > > > > > > > >
> > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > > >
> > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > > still the same.
> > > > > > > > >
> > > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > > us.
> > > > > > > > > >
> > > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > > accessible in PCI power states below D0.
> > > > > > > > >
> > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > > accessible.
> > > > > > > >
> > > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > > a suspend ordering violation?
> > > > > > >
> > > > > > > No. We put the GPU into D3hot first,
> > > > >
> > > > > OK
> > > > >
> > > > > Does this involve any AML, like a _PS3 under the GPU object?
> > > >
> > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > > itself is not described in ACPI tables at all.
> > >
> > > OK
> > >
> > > > > > > then the root port and then turn
> > > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > > the topology entering D3cold.
> > > > > >
> > > > > > I don't see that happening in the AML though.
> > > > >
> > > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > > port's _PR3 power resource or something else?
> > > >
> > > > The root port's _OFF method for the power resource returned by its _PR3.
> > >
> > > OK, so without the $subject patch we (1) program the downstream
> > > component (GPU) into D3hot, then we (2) program the port holding it
> > > into D3hot and then we (3) let the AML (_OFF for the power resource
> > > listed by _PR3 under the port object) run.
> > >
> > > Something strange happens at this point (and I guess that _OFF doesn't
> > > even reach the point where it removes power from the port which is why
> > > we see a lock-up).
> >
> > It does not necessary lead to lock-up. Here is dmesg from Karol's
> > system:
> >
> >   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > what seems to happen is that the GPU never "comes back" from D3cold so
> > the driver starts breaking apart as the hardware is gone now.
>
> I meant a lock-up in hardware to be precise, that causes it to stop to
> respond (which causes it to appear to be permanently in D3cold).
>
> I wonder if the port accepts PCI config space writes then.
>

the issue is not AML related at all as I am able to reproduce this
issue without having to invoke any of that at all, I just need to poke
into the PCI register directly to cut the power. The register is not
documented, but effectively what the AML code is writing to as well.
Of course it might also be that the code I was testing it was doing
things in a non conformant way and I just hit a different issue as
well, but in the end I don't think that the AML code is the root cause
of all of that.

> > > We know that skipping (1) makes things work and we kind of suspect
> > > that skipping (3) would make things work either, but what about doing
> > > (1) and (3) without (2)?
> >
> > You mean in this particular case or in general?
>
> In this case in particular to start with.  Just do an experiment to
> see what happens if we skip pci_raw_set_power_state() for the port
> instead of skipping it for the downstream device.
>
> > Because if the port has _PSx methods we need to put it into D3hot AFAIK.
>
> Yes, we need to run _PS3 then, but maybe we don't need to write to its
> PMCSR directly.
>
> > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > > access seem to touch the GPU itself.
> > > > >
> > > > > So that may be where the problem is.
> > > > >
> > > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > > the link into L1, so I'm not sure how that plays with the later
> > > > > attempt to put it into L2/L3 Ready.
> > > >
> > > > That should be fine. What I've seen the link goes into L1 when
> > > > downstream component is put to D-state (not D0) and then it is put back
> > > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> > >
> > > Well, that's the expected behavior, but the observed behavior isn't as
> > > expected. :-)
> >
> > Right :)
> >
> > > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > > be removed somehow.
> > > >
> > > > There is GPIO for both power and PERST, I think the line here:
> > > >
> > > >   \_SB.SGOV (0x01010004, Zero)
> > > >
> > > > is the one that removes power.
> > >
> > > OK
> > >
> > > > > > LKDS() for the first PEG port looks like this:
> > > > > >
> > > > > >    P0L2 = One
> > > > > >    Sleep (0x10)
> > > > > >    Local0 = Zero
> > > > > >    While (P0L2)
> > > > > >    {
> > > > > >         If ((Local0 > 0x04))
> > > > > >         {
> > > > > >             Break
> > > > > >         }
> > > > > >
> > > > > >         Sleep (0x10)
> > > > > >         Local0++
> > > > > >    }
> > > > > >
> > > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > > >
> > > > > But this actually works for the downstream component in D0, doesn't it?
> > > >
> > > > It does and that leaves the link in L0 so it could be that then the
> > > > above AML works better or something.
> > >
> > > That would be my guess.
> > >
> > > > That reminds me, ASPM may have something to do with this as well.
> > >
> > > Not really if D-states are involved.
> > >
> > > > > Also, if the downstream component is in D0, the port actually should
> > > > > stay in D0 too, so what would happen with the $subject patch applied?
> > > >
> > > > Parent port cannot be lower D-state than the child so I agree it should
> > > > stay in D0 as well. However, it seems that what happens is that the
> > > > issue goes away :)
> > >
> > > Well, at least this is kind of out of the spec.
> > >
> > > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > > downstream device is in D0, so the $subject patch will not work as
> > > expected in the suspend-to-idle case.
> > >
> > > Also we really should make up our minds on whether or not to force
> > > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > > consequent about that.  Right now we do one thing during system-wide
> > > suspend and the other one in PM-runtime, which is confusing.
> > >
> > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > putting PCIe ports holding devices in D0 into any low-power states.
> > > but that would make the approach in the $subject patch ineffective.
> > >
> > > Moreover, the fact that there are separate branches for "Windows 7"
> > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > does something else.
> >
> > My understanding (which may not be correct) is that up to Windows 7 it
> > never put the devices into D3cold runtime. Only when the system entered
> > Sx states it evaluated the _OFF methods.
>
> I see.
>
> > Starting from Windows 8 it started doing this runtime so devices can
> > enter D3cold even when system is in S0.
>
> Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?
>
> > > Now, the structure of the "Windows 8+" branch
> > > described by you suggests that, at least in the cases when it is going
> > > to remove power from the port eventually, it goes straight for the
> > > link preparation (the L2/L3 Ready transition) and power removal
> > > without bothering to program the downstream device and port into D3hot
> > > (because that's kind of redundant).
> > >
> > > That hypothetical "Windows 8+" approach may really work universally,
> > > because it doesn't seem to break any rules (going straight from D0 to
> > > D3cold is not disallowed and doing that for both a port and a
> > > downstream device at the same time is kind of OK either, as long as
> > > the link is ready for that).
> >
> > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > 5.8 we can see the supported PM state transitions and it shows that you
> > get to D3cold through D3hot. Of course the device goes into D3cold if
> > you simply remove its power so I agree with you as well. However, if
> > there is _PS3 method we can't skip the D3hot phase.
>
> That's my understanding too, but I'm wondering about direct PMCSR
> writes.  It is unclear to me if they are necessary, or more precisely,
> whether or not Windows 10, say, carries them out if ACPI PM is going
> to be applied.
>
> Maybe I'm going too far with my conclusions, but please let me know
> what you think about the approach proposed at the end of
> https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> ?
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:34                                                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:34 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > > those "Root Port"s
> > > > > > > > > >
> > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > > >
> > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > > still the same.
> > > > > > > > >
> > > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > > us.
> > > > > > > > > >
> > > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > > accessible in PCI power states below D0.
> > > > > > > > >
> > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > > accessible.
> > > > > > > >
> > > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > > a suspend ordering violation?
> > > > > > >
> > > > > > > No. We put the GPU into D3hot first,
> > > > >
> > > > > OK
> > > > >
> > > > > Does this involve any AML, like a _PS3 under the GPU object?
> > > >
> > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > > itself is not described in ACPI tables at all.
> > >
> > > OK
> > >
> > > > > > > then the root port and then turn
> > > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > > the topology entering D3cold.
> > > > > >
> > > > > > I don't see that happening in the AML though.
> > > > >
> > > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > > port's _PR3 power resource or something else?
> > > >
> > > > The root port's _OFF method for the power resource returned by its _PR3.
> > >
> > > OK, so without the $subject patch we (1) program the downstream
> > > component (GPU) into D3hot, then we (2) program the port holding it
> > > into D3hot and then we (3) let the AML (_OFF for the power resource
> > > listed by _PR3 under the port object) run.
> > >
> > > Something strange happens at this point (and I guess that _OFF doesn't
> > > even reach the point where it removes power from the port which is why
> > > we see a lock-up).
> >
> > It does not necessary lead to lock-up. Here is dmesg from Karol's
> > system:
> >
> >   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > what seems to happen is that the GPU never "comes back" from D3cold so
> > the driver starts breaking apart as the hardware is gone now.
>
> I meant a lock-up in hardware to be precise, that causes it to stop to
> respond (which causes it to appear to be permanently in D3cold).
>
> I wonder if the port accepts PCI config space writes then.
>

the issue is not AML related at all as I am able to reproduce this
issue without having to invoke any of that at all, I just need to poke
into the PCI register directly to cut the power. The register is not
documented, but effectively what the AML code is writing to as well.
Of course it might also be that the code I was testing it was doing
things in a non conformant way and I just hit a different issue as
well, but in the end I don't think that the AML code is the root cause
of all of that.

> > > We know that skipping (1) makes things work and we kind of suspect
> > > that skipping (3) would make things work either, but what about doing
> > > (1) and (3) without (2)?
> >
> > You mean in this particular case or in general?
>
> In this case in particular to start with.  Just do an experiment to
> see what happens if we skip pci_raw_set_power_state() for the port
> instead of skipping it for the downstream device.
>
> > Because if the port has _PSx methods we need to put it into D3hot AFAIK.
>
> Yes, we need to run _PS3 then, but maybe we don't need to write to its
> PMCSR directly.
>
> > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > > access seem to touch the GPU itself.
> > > > >
> > > > > So that may be where the problem is.
> > > > >
> > > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > > the link into L1, so I'm not sure how that plays with the later
> > > > > attempt to put it into L2/L3 Ready.
> > > >
> > > > That should be fine. What I've seen the link goes into L1 when
> > > > downstream component is put to D-state (not D0) and then it is put back
> > > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> > >
> > > Well, that's the expected behavior, but the observed behavior isn't as
> > > expected. :-)
> >
> > Right :)
> >
> > > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > > be removed somehow.
> > > >
> > > > There is GPIO for both power and PERST, I think the line here:
> > > >
> > > >   \_SB.SGOV (0x01010004, Zero)
> > > >
> > > > is the one that removes power.
> > >
> > > OK
> > >
> > > > > > LKDS() for the first PEG port looks like this:
> > > > > >
> > > > > >    P0L2 = One
> > > > > >    Sleep (0x10)
> > > > > >    Local0 = Zero
> > > > > >    While (P0L2)
> > > > > >    {
> > > > > >         If ((Local0 > 0x04))
> > > > > >         {
> > > > > >             Break
> > > > > >         }
> > > > > >
> > > > > >         Sleep (0x10)
> > > > > >         Local0++
> > > > > >    }
> > > > > >
> > > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > > >
> > > > > But this actually works for the downstream component in D0, doesn't it?
> > > >
> > > > It does and that leaves the link in L0 so it could be that then the
> > > > above AML works better or something.
> > >
> > > That would be my guess.
> > >
> > > > That reminds me, ASPM may have something to do with this as well.
> > >
> > > Not really if D-states are involved.
> > >
> > > > > Also, if the downstream component is in D0, the port actually should
> > > > > stay in D0 too, so what would happen with the $subject patch applied?
> > > >
> > > > Parent port cannot be lower D-state than the child so I agree it should
> > > > stay in D0 as well. However, it seems that what happens is that the
> > > > issue goes away :)
> > >
> > > Well, at least this is kind of out of the spec.
> > >
> > > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > > downstream device is in D0, so the $subject patch will not work as
> > > expected in the suspend-to-idle case.
> > >
> > > Also we really should make up our minds on whether or not to force
> > > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > > consequent about that.  Right now we do one thing during system-wide
> > > suspend and the other one in PM-runtime, which is confusing.
> > >
> > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > putting PCIe ports holding devices in D0 into any low-power states.
> > > but that would make the approach in the $subject patch ineffective.
> > >
> > > Moreover, the fact that there are separate branches for "Windows 7"
> > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > does something else.
> >
> > My understanding (which may not be correct) is that up to Windows 7 it
> > never put the devices into D3cold runtime. Only when the system entered
> > Sx states it evaluated the _OFF methods.
>
> I see.
>
> > Starting from Windows 8 it started doing this runtime so devices can
> > enter D3cold even when system is in S0.
>
> Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?
>
> > > Now, the structure of the "Windows 8+" branch
> > > described by you suggests that, at least in the cases when it is going
> > > to remove power from the port eventually, it goes straight for the
> > > link preparation (the L2/L3 Ready transition) and power removal
> > > without bothering to program the downstream device and port into D3hot
> > > (because that's kind of redundant).
> > >
> > > That hypothetical "Windows 8+" approach may really work universally,
> > > because it doesn't seem to break any rules (going straight from D0 to
> > > D3cold is not disallowed and doing that for both a port and a
> > > downstream device at the same time is kind of OK either, as long as
> > > the link is ready for that).
> >
> > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > 5.8 we can see the supported PM state transitions and it shows that you
> > get to D3cold through D3hot. Of course the device goes into D3cold if
> > you simply remove its power so I agree with you as well. However, if
> > there is _PS3 method we can't skip the D3hot phase.
>
> That's my understanding too, but I'm wondering about direct PMCSR
> writes.  It is unclear to me if they are necessary, or more precisely,
> whether or not Windows 10, say, carries them out if ACPI PM is going
> to be applied.
>
> Maybe I'm going too far with my conclusions, but please let me know
> what you think about the approach proposed at the end of
> https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> ?
>

_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:34                                                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-22 11:34 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > > those "Root Port"s
> > > > > > > > > >
> > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > > >
> > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > > still the same.
> > > > > > > > >
> > > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > > us.
> > > > > > > > > >
> > > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > > accessible in PCI power states below D0.
> > > > > > > > >
> > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > > accessible.
> > > > > > > >
> > > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > > a suspend ordering violation?
> > > > > > >
> > > > > > > No. We put the GPU into D3hot first,
> > > > >
> > > > > OK
> > > > >
> > > > > Does this involve any AML, like a _PS3 under the GPU object?
> > > >
> > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > > itself is not described in ACPI tables at all.
> > >
> > > OK
> > >
> > > > > > > then the root port and then turn
> > > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > > the topology entering D3cold.
> > > > > >
> > > > > > I don't see that happening in the AML though.
> > > > >
> > > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > > port's _PR3 power resource or something else?
> > > >
> > > > The root port's _OFF method for the power resource returned by its _PR3.
> > >
> > > OK, so without the $subject patch we (1) program the downstream
> > > component (GPU) into D3hot, then we (2) program the port holding it
> > > into D3hot and then we (3) let the AML (_OFF for the power resource
> > > listed by _PR3 under the port object) run.
> > >
> > > Something strange happens at this point (and I guess that _OFF doesn't
> > > even reach the point where it removes power from the port which is why
> > > we see a lock-up).
> >
> > It does not necessary lead to lock-up. Here is dmesg from Karol's
> > system:
> >
> >   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > what seems to happen is that the GPU never "comes back" from D3cold so
> > the driver starts breaking apart as the hardware is gone now.
>
> I meant a lock-up in hardware to be precise, that causes it to stop to
> respond (which causes it to appear to be permanently in D3cold).
>
> I wonder if the port accepts PCI config space writes then.
>

the issue is not AML related at all as I am able to reproduce this
issue without having to invoke any of that at all, I just need to poke
into the PCI register directly to cut the power. The register is not
documented, but effectively what the AML code is writing to as well.
Of course it might also be that the code I was testing it was doing
things in a non conformant way and I just hit a different issue as
well, but in the end I don't think that the AML code is the root cause
of all of that.

> > > We know that skipping (1) makes things work and we kind of suspect
> > > that skipping (3) would make things work either, but what about doing
> > > (1) and (3) without (2)?
> >
> > You mean in this particular case or in general?
>
> In this case in particular to start with.  Just do an experiment to
> see what happens if we skip pci_raw_set_power_state() for the port
> instead of skipping it for the downstream device.
>
> > Because if the port has _PSx methods we need to put it into D3hot AFAIK.
>
> Yes, we need to run _PS3 then, but maybe we don't need to write to its
> PMCSR directly.
>
> > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > > access seem to touch the GPU itself.
> > > > >
> > > > > So that may be where the problem is.
> > > > >
> > > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > > the link into L1, so I'm not sure how that plays with the later
> > > > > attempt to put it into L2/L3 Ready.
> > > >
> > > > That should be fine. What I've seen the link goes into L1 when
> > > > downstream component is put to D-state (not D0) and then it is put back
> > > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> > >
> > > Well, that's the expected behavior, but the observed behavior isn't as
> > > expected. :-)
> >
> > Right :)
> >
> > > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > > be removed somehow.
> > > >
> > > > There is GPIO for both power and PERST, I think the line here:
> > > >
> > > >   \_SB.SGOV (0x01010004, Zero)
> > > >
> > > > is the one that removes power.
> > >
> > > OK
> > >
> > > > > > LKDS() for the first PEG port looks like this:
> > > > > >
> > > > > >    P0L2 = One
> > > > > >    Sleep (0x10)
> > > > > >    Local0 = Zero
> > > > > >    While (P0L2)
> > > > > >    {
> > > > > >         If ((Local0 > 0x04))
> > > > > >         {
> > > > > >             Break
> > > > > >         }
> > > > > >
> > > > > >         Sleep (0x10)
> > > > > >         Local0++
> > > > > >    }
> > > > > >
> > > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > > >
> > > > > But this actually works for the downstream component in D0, doesn't it?
> > > >
> > > > It does and that leaves the link in L0 so it could be that then the
> > > > above AML works better or something.
> > >
> > > That would be my guess.
> > >
> > > > That reminds me, ASPM may have something to do with this as well.
> > >
> > > Not really if D-states are involved.
> > >
> > > > > Also, if the downstream component is in D0, the port actually should
> > > > > stay in D0 too, so what would happen with the $subject patch applied?
> > > >
> > > > Parent port cannot be lower D-state than the child so I agree it should
> > > > stay in D0 as well. However, it seems that what happens is that the
> > > > issue goes away :)
> > >
> > > Well, at least this is kind of out of the spec.
> > >
> > > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > > downstream device is in D0, so the $subject patch will not work as
> > > expected in the suspend-to-idle case.
> > >
> > > Also we really should make up our minds on whether or not to force
> > > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > > consequent about that.  Right now we do one thing during system-wide
> > > suspend and the other one in PM-runtime, which is confusing.
> > >
> > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > putting PCIe ports holding devices in D0 into any low-power states.
> > > but that would make the approach in the $subject patch ineffective.
> > >
> > > Moreover, the fact that there are separate branches for "Windows 7"
> > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > does something else.
> >
> > My understanding (which may not be correct) is that up to Windows 7 it
> > never put the devices into D3cold runtime. Only when the system entered
> > Sx states it evaluated the _OFF methods.
>
> I see.
>
> > Starting from Windows 8 it started doing this runtime so devices can
> > enter D3cold even when system is in S0.
>
> Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?
>
> > > Now, the structure of the "Windows 8+" branch
> > > described by you suggests that, at least in the cases when it is going
> > > to remove power from the port eventually, it goes straight for the
> > > link preparation (the L2/L3 Ready transition) and power removal
> > > without bothering to program the downstream device and port into D3hot
> > > (because that's kind of redundant).
> > >
> > > That hypothetical "Windows 8+" approach may really work universally,
> > > because it doesn't seem to break any rules (going straight from D0 to
> > > D3cold is not disallowed and doing that for both a port and a
> > > downstream device at the same time is kind of OK either, as long as
> > > the link is ready for that).
> >
> > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > 5.8 we can see the supported PM state transitions and it shows that you
> > get to D3cold through D3hot. Of course the device goes into D3cold if
> > you simply remove its power so I agree with you as well. However, if
> > there is _PS3 method we can't skip the D3hot phase.
>
> That's my understanding too, but I'm wondering about direct PMCSR
> writes.  It is unclear to me if they are necessary, or more precisely,
> whether or not Windows 10, say, carries them out if ACPI PM is going
> to be applied.
>
> Maybe I'm going too far with my conclusions, but please let me know
> what you think about the approach proposed at the end of
> https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> ?
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-22 11:30                                                   ` Rafael J. Wysocki
@ 2019-11-22 11:52                                                     ` Mika Westerberg
  -1 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-22 11:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 12:30:20PM +0100, Rafael J. Wysocki wrote:
> On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > > those "Root Port"s
> > > > > > > > > >
> > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > > >
> > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > > still the same.
> > > > > > > > >
> > > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > > us.
> > > > > > > > > >
> > > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > > accessible in PCI power states below D0.
> > > > > > > > >
> > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > > accessible.
> > > > > > > >
> > > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > > a suspend ordering violation?
> > > > > > >
> > > > > > > No. We put the GPU into D3hot first,
> > > > >
> > > > > OK
> > > > >
> > > > > Does this involve any AML, like a _PS3 under the GPU object?
> > > >
> > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > > itself is not described in ACPI tables at all.
> > >
> > > OK
> > >
> > > > > > > then the root port and then turn
> > > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > > the topology entering D3cold.
> > > > > >
> > > > > > I don't see that happening in the AML though.
> > > > >
> > > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > > port's _PR3 power resource or something else?
> > > >
> > > > The root port's _OFF method for the power resource returned by its _PR3.
> > >
> > > OK, so without the $subject patch we (1) program the downstream
> > > component (GPU) into D3hot, then we (2) program the port holding it
> > > into D3hot and then we (3) let the AML (_OFF for the power resource
> > > listed by _PR3 under the port object) run.
> > >
> > > Something strange happens at this point (and I guess that _OFF doesn't
> > > even reach the point where it removes power from the port which is why
> > > we see a lock-up).
> >
> > It does not necessary lead to lock-up. Here is dmesg from Karol's
> > system:
> >
> >   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > what seems to happen is that the GPU never "comes back" from D3cold so
> > the driver starts breaking apart as the hardware is gone now.
> 
> I meant a lock-up in hardware to be precise, that causes it to stop to
> respond (which causes it to appear to be permanently in D3cold).
> 
> I wonder if the port accepts PCI config space writes then.
> 
> > > We know that skipping (1) makes things work and we kind of suspect
> > > that skipping (3) would make things work either, but what about doing
> > > (1) and (3) without (2)?
> >
> > You mean in this particular case or in general?
> 
> In this case in particular to start with.  Just do an experiment to
> see what happens if we skip pci_raw_set_power_state() for the port
> instead of skipping it for the downstream device.
> 
> > Because if the port has _PSx methods we need to put it into D3hot AFAIK.
> 
> Yes, we need to run _PS3 then, but maybe we don't need to write to its
> PMCSR directly.
> 
> > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > > access seem to touch the GPU itself.
> > > > >
> > > > > So that may be where the problem is.
> > > > >
> > > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > > the link into L1, so I'm not sure how that plays with the later
> > > > > attempt to put it into L2/L3 Ready.
> > > >
> > > > That should be fine. What I've seen the link goes into L1 when
> > > > downstream component is put to D-state (not D0) and then it is put back
> > > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> > >
> > > Well, that's the expected behavior, but the observed behavior isn't as
> > > expected. :-)
> >
> > Right :)
> >
> > > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > > be removed somehow.
> > > >
> > > > There is GPIO for both power and PERST, I think the line here:
> > > >
> > > >   \_SB.SGOV (0x01010004, Zero)
> > > >
> > > > is the one that removes power.
> > >
> > > OK
> > >
> > > > > > LKDS() for the first PEG port looks like this:
> > > > > >
> > > > > >    P0L2 = One
> > > > > >    Sleep (0x10)
> > > > > >    Local0 = Zero
> > > > > >    While (P0L2)
> > > > > >    {
> > > > > >         If ((Local0 > 0x04))
> > > > > >         {
> > > > > >             Break
> > > > > >         }
> > > > > >
> > > > > >         Sleep (0x10)
> > > > > >         Local0++
> > > > > >    }
> > > > > >
> > > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > > >
> > > > > But this actually works for the downstream component in D0, doesn't it?
> > > >
> > > > It does and that leaves the link in L0 so it could be that then the
> > > > above AML works better or something.
> > >
> > > That would be my guess.
> > >
> > > > That reminds me, ASPM may have something to do with this as well.
> > >
> > > Not really if D-states are involved.
> > >
> > > > > Also, if the downstream component is in D0, the port actually should
> > > > > stay in D0 too, so what would happen with the $subject patch applied?
> > > >
> > > > Parent port cannot be lower D-state than the child so I agree it should
> > > > stay in D0 as well. However, it seems that what happens is that the
> > > > issue goes away :)
> > >
> > > Well, at least this is kind of out of the spec.
> > >
> > > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > > downstream device is in D0, so the $subject patch will not work as
> > > expected in the suspend-to-idle case.
> > >
> > > Also we really should make up our minds on whether or not to force
> > > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > > consequent about that.  Right now we do one thing during system-wide
> > > suspend and the other one in PM-runtime, which is confusing.
> > >
> > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > putting PCIe ports holding devices in D0 into any low-power states.
> > > but that would make the approach in the $subject patch ineffective.
> > >
> > > Moreover, the fact that there are separate branches for "Windows 7"
> > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > does something else.
> >
> > My understanding (which may not be correct) is that up to Windows 7 it
> > never put the devices into D3cold runtime. Only when the system entered
> > Sx states it evaluated the _OFF methods.
> 
> I see.
> 
> > Starting from Windows 8 it started doing this runtime so devices can
> > enter D3cold even when system is in S0.
> 
> Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?

No, there are two paths in the _OFF() and them some common code such as
removing power etc.

What the _REV 5 did is that it went into path where the AML seemed to
directly disable the link.

The other path that is taken with Windows 8+ does not disable the link
but instead it puts it to low power L2 or L3 state (I suppose L3 since
it removes the power and the GPU probably does not support wake).

The ASL code is below. PGOF() gets called from the power resource
_OFF():

        Method (PGOF, 1, Serialized)
        {
            PIOF = Arg0
            If ((PIOF == Zero))
            {
                If ((SGGP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == One))
            {
                If ((P1GP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == 0x02))
            {
                If ((P2GP == Zero))
                {
                    Return (Zero)
                }
            }

            PEBA = \XBAS /* External reference */
            PDEV = GDEV (PIOF)
            PFUN = GFUN (PIOF)
            Name (SCLK, Package (0x03)
            {
                One, 
                0x80, 
                Zero
            })
            If ((CCHK (PIOF, Zero) == Zero))
            {
                Return (Zero)
            }

            \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
            If ((Arg0 == Zero))
            {
                ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
                H0VI = S0VI /* \_SB_.PCI0.S0VI */
                H0DI = S0DI /* \_SB_.PCI0.S0DI */
                ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
            }
            ElseIf ((Arg0 == One))
            {
                ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
                H1VI = S1VI /* \_SB_.PCI0.S1VI */
                H1DI = S1DI /* \_SB_.PCI0.S1DI */
                ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
            }
            ElseIf ((Arg0 == 0x02))
            {
                ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
                H2VI = S2VI /* \_SB_.PCI0.S2VI */
                H2DI = S2DI /* \_SB_.PCI0.S2DI */
                ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
            }

            If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
                0x05))))
            {
                If ((PIOF == Zero))
                {
                    P0LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P0LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P0RM = One
                    P0AP = 0x03
                }
                ElseIf ((PIOF == One))
                {
                    P1LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P1LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P1RM = One
                    P1AP = 0x03
                }
                ElseIf ((PIOF == 0x02))
                {
                    P2LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P2LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P2RM = One
                    P2AP = 0x03
                }

                If ((PBGE != Zero))
                {
                    If (SBDL (PIOF))
                    {
                        MBDL = GMXB (PIOF)
                        PDUB (PIOF, MBDL)
                    }
                }
            }
            Else
            {
                LKDS (PIOF)
            }

            If ((DerefOf (SCLK [Zero]) != Zero))
            {
                PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
                Sleep (0x10)
            }

            GPPR (PIOF, Zero)
            If ((OSYS != 0x07D9))
            {
                DIWK (PIOF)
            }

            \_SB.SGOV (0x01010004, Zero)
            Sleep (0x14)
            Return (Zero)
        }

> > > Now, the structure of the "Windows 8+" branch
> > > described by you suggests that, at least in the cases when it is going
> > > to remove power from the port eventually, it goes straight for the
> > > link preparation (the L2/L3 Ready transition) and power removal
> > > without bothering to program the downstream device and port into D3hot
> > > (because that's kind of redundant).
> > >
> > > That hypothetical "Windows 8+" approach may really work universally,
> > > because it doesn't seem to break any rules (going straight from D0 to
> > > D3cold is not disallowed and doing that for both a port and a
> > > downstream device at the same time is kind of OK either, as long as
> > > the link is ready for that).
> >
> > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > 5.8 we can see the supported PM state transitions and it shows that you
> > get to D3cold through D3hot. Of course the device goes into D3cold if
> > you simply remove its power so I agree with you as well. However, if
> > there is _PS3 method we can't skip the D3hot phase.
> 
> That's my understanding too, but I'm wondering about direct PMCSR
> writes.  It is unclear to me if they are necessary, or more precisely,
> whether or not Windows 10, say, carries them out if ACPI PM is going
> to be applied.

According to this:

https://docs.microsoft.com/en-us/windows-hardware/drivers/pci/pci-power-management-and-device-drivers#scenario-1-turning-off-a-device

it does write PMCSR.

> Maybe I'm going too far with my conclusions, but please let me know
> what you think about the approach proposed at the end of
> https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> ?

Yes, I think that is better approach.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:52                                                     ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-22 11:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 12:30:20PM +0100, Rafael J. Wysocki wrote:
> On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > >
> > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote:
> > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
> > > > > <mika.westerberg@intel.com> wrote:
> > > > > >
> > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > > > > > > <mika.westerberg@intel.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > > > > > > last week or so I found systems where the GPU was under the "PCI
> > > > > > > > > > > Express Root Port" (name from lspci) and on those systems all of that
> > > > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as devices
> > > > > > > > > > > never get populated under this particular bridge controller, but under
> > > > > > > > > > > those "Root Port"s
> > > > > > > > > >
> > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > > > > > > >
> > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called
> > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is
> > > > > > > > > still the same.
> > > > > > > > >
> > > > > > > > > > Also some custom AML-based power management is involved and that may
> > > > > > > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > > > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > > > > > > us.
> > > > > > > > > >
> > > > > > > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at
> > > > > > > > > > that point, so it looks like that AML tries to access device memory on
> > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not
> > > > > > > > > > accessible in PCI power states below D0.
> > > > > > > > >
> > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot
> > > > > > > > > (as it is the case here). Also then the GPU config space is not
> > > > > > > > > accessible.
> > > > > > > >
> > > > > > > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > > > > > > a suspend ordering violation?
> > > > > > >
> > > > > > > No. We put the GPU into D3hot first,
> > > > >
> > > > > OK
> > > > >
> > > > > Does this involve any AML, like a _PS3 under the GPU object?
> > > >
> > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU
> > > > itself is not described in ACPI tables at all.
> > >
> > > OK
> > >
> > > > > > > then the root port and then turn
> > > > > > > off the power resource (which is attached to the root port) resulting
> > > > > > > the topology entering D3cold.
> > > > > >
> > > > > > I don't see that happening in the AML though.
> > > > >
> > > > > Which AML do you mean, specifically?  The _OFF method for the root
> > > > > port's _PR3 power resource or something else?
> > > >
> > > > The root port's _OFF method for the power resource returned by its _PR3.
> > >
> > > OK, so without the $subject patch we (1) program the downstream
> > > component (GPU) into D3hot, then we (2) program the port holding it
> > > into D3hot and then we (3) let the AML (_OFF for the power resource
> > > listed by _PR3 under the port object) run.
> > >
> > > Something strange happens at this point (and I guess that _OFF doesn't
> > > even reach the point where it removes power from the port which is why
> > > we see a lock-up).
> >
> > It does not necessary lead to lock-up. Here is dmesg from Karol's
> > system:
> >
> >   https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > what seems to happen is that the GPU never "comes back" from D3cold so
> > the driver starts breaking apart as the hardware is gone now.
> 
> I meant a lock-up in hardware to be precise, that causes it to stop to
> respond (which causes it to appear to be permanently in D3cold).
> 
> I wonder if the port accepts PCI config space writes then.
> 
> > > We know that skipping (1) makes things work and we kind of suspect
> > > that skipping (3) would make things work either, but what about doing
> > > (1) and (3) without (2)?
> >
> > You mean in this particular case or in general?
> 
> In this case in particular to start with.  Just do an experiment to
> see what happens if we skip pci_raw_set_power_state() for the port
> instead of skipping it for the downstream device.
> 
> > Because if the port has _PSx methods we need to put it into D3hot AFAIK.
> 
> Yes, we need to run _PS3 then, but maybe we don't need to write to its
> PMCSR directly.
> 
> > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5
> > > > > > check) then we directly do link disable whereas in Windows 8+ we invoke
> > > > > > LKDS() method that puts the link into L2/L3. None of the fields they
> > > > > > access seem to touch the GPU itself.
> > > > >
> > > > > So that may be where the problem is.
> > > > >
> > > > > Putting the downstream component into PCI D[1-3] is expected to put
> > > > > the link into L1, so I'm not sure how that plays with the later
> > > > > attempt to put it into L2/L3 Ready.
> > > >
> > > > That should be fine. What I've seen the link goes into L1 when
> > > > downstream component is put to D-state (not D0) and then it is put back
> > > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3.
> > >
> > > Well, that's the expected behavior, but the observed behavior isn't as
> > > expected. :-)
> >
> > Right :)
> >
> > > > > Also, L2/L3 Ready is expected to be transient, so finally power should
> > > > > be removed somehow.
> > > >
> > > > There is GPIO for both power and PERST, I think the line here:
> > > >
> > > >   \_SB.SGOV (0x01010004, Zero)
> > > >
> > > > is the one that removes power.
> > >
> > > OK
> > >
> > > > > > LKDS() for the first PEG port looks like this:
> > > > > >
> > > > > >    P0L2 = One
> > > > > >    Sleep (0x10)
> > > > > >    Local0 = Zero
> > > > > >    While (P0L2)
> > > > > >    {
> > > > > >         If ((Local0 > 0x04))
> > > > > >         {
> > > > > >             Break
> > > > > >         }
> > > > > >
> > > > > >         Sleep (0x10)
> > > > > >         Local0++
> > > > > >    }
> > > > > >
> > > > > > One thing that comes to mind is that the loop can end even if P0L2 is
> > > > > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe
> > > > > > Sleep() is implemented differently in Windows? I mean Linux may be
> > > > > > "faster" here and return prematurely and if we leave the port into D0
> > > > > > this does not happen, or something. I'm just throwing out ideas :)
> > > > >
> > > > > But this actually works for the downstream component in D0, doesn't it?
> > > >
> > > > It does and that leaves the link in L0 so it could be that then the
> > > > above AML works better or something.
> > >
> > > That would be my guess.
> > >
> > > > That reminds me, ASPM may have something to do with this as well.
> > >
> > > Not really if D-states are involved.
> > >
> > > > > Also, if the downstream component is in D0, the port actually should
> > > > > stay in D0 too, so what would happen with the $subject patch applied?
> > > >
> > > > Parent port cannot be lower D-state than the child so I agree it should
> > > > stay in D0 as well. However, it seems that what happens is that the
> > > > issue goes away :)
> > >
> > > Well, at least this is kind of out of the spec.
> > >
> > > Note that pci_pm_suspend_noirq() won't let the port go into D3 if the
> > > downstream device is in D0, so the $subject patch will not work as
> > > expected in the suspend-to-idle case.
> > >
> > > Also we really should make up our minds on whether or not to force
> > > PCIe ports to stay in D0 when downstream devices are in D0 and be
> > > consequent about that.  Right now we do one thing during system-wide
> > > suspend and the other one in PM-runtime, which is confusing.
> > >
> > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > putting PCIe ports holding devices in D0 into any low-power states.
> > > but that would make the approach in the $subject patch ineffective.
> > >
> > > Moreover, the fact that there are separate branches for "Windows 7"
> > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > does something else.
> >
> > My understanding (which may not be correct) is that up to Windows 7 it
> > never put the devices into D3cold runtime. Only when the system entered
> > Sx states it evaluated the _OFF methods.
> 
> I see.
> 
> > Starting from Windows 8 it started doing this runtime so devices can
> > enter D3cold even when system is in S0.
> 
> Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?

No, there are two paths in the _OFF() and them some common code such as
removing power etc.

What the _REV 5 did is that it went into path where the AML seemed to
directly disable the link.

The other path that is taken with Windows 8+ does not disable the link
but instead it puts it to low power L2 or L3 state (I suppose L3 since
it removes the power and the GPU probably does not support wake).

The ASL code is below. PGOF() gets called from the power resource
_OFF():

        Method (PGOF, 1, Serialized)
        {
            PIOF = Arg0
            If ((PIOF == Zero))
            {
                If ((SGGP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == One))
            {
                If ((P1GP == Zero))
                {
                    Return (Zero)
                }
            }
            ElseIf ((PIOF == 0x02))
            {
                If ((P2GP == Zero))
                {
                    Return (Zero)
                }
            }

            PEBA = \XBAS /* External reference */
            PDEV = GDEV (PIOF)
            PFUN = GFUN (PIOF)
            Name (SCLK, Package (0x03)
            {
                One, 
                0x80, 
                Zero
            })
            If ((CCHK (PIOF, Zero) == Zero))
            {
                Return (Zero)
            }

            \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
            If ((Arg0 == Zero))
            {
                ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
                H0VI = S0VI /* \_SB_.PCI0.S0VI */
                H0DI = S0DI /* \_SB_.PCI0.S0DI */
                ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
            }
            ElseIf ((Arg0 == One))
            {
                ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
                H1VI = S1VI /* \_SB_.PCI0.S1VI */
                H1DI = S1DI /* \_SB_.PCI0.S1DI */
                ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
            }
            ElseIf ((Arg0 == 0x02))
            {
                ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
                H2VI = S2VI /* \_SB_.PCI0.S2VI */
                H2DI = S2DI /* \_SB_.PCI0.S2DI */
                ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
            }

            If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 
                0x05))))
            {
                If ((PIOF == Zero))
                {
                    P0LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P0LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P0RM = One
                    P0AP = 0x03
                }
                ElseIf ((PIOF == One))
                {
                    P1LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P1LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P1RM = One
                    P1AP = 0x03
                }
                ElseIf ((PIOF == 0x02))
                {
                    P2LD = One
                    TCNT = Zero
                    While ((TCNT < LDLY))
                    {
                        If ((P2LT == 0x08))
                        {
                            Break
                        }

                        Sleep (0x10)
                        TCNT += 0x10
                    }

                    P2RM = One
                    P2AP = 0x03
                }

                If ((PBGE != Zero))
                {
                    If (SBDL (PIOF))
                    {
                        MBDL = GMXB (PIOF)
                        PDUB (PIOF, MBDL)
                    }
                }
            }
            Else
            {
                LKDS (PIOF)
            }

            If ((DerefOf (SCLK [Zero]) != Zero))
            {
                PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
                Sleep (0x10)
            }

            GPPR (PIOF, Zero)
            If ((OSYS != 0x07D9))
            {
                DIWK (PIOF)
            }

            \_SB.SGOV (0x01010004, Zero)
            Sleep (0x14)
            Return (Zero)
        }

> > > Now, the structure of the "Windows 8+" branch
> > > described by you suggests that, at least in the cases when it is going
> > > to remove power from the port eventually, it goes straight for the
> > > link preparation (the L2/L3 Ready transition) and power removal
> > > without bothering to program the downstream device and port into D3hot
> > > (because that's kind of redundant).
> > >
> > > That hypothetical "Windows 8+" approach may really work universally,
> > > because it doesn't seem to break any rules (going straight from D0 to
> > > D3cold is not disallowed and doing that for both a port and a
> > > downstream device at the same time is kind of OK either, as long as
> > > the link is ready for that).
> >
> > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > 5.8 we can see the supported PM state transitions and it shows that you
> > get to D3cold through D3hot. Of course the device goes into D3cold if
> > you simply remove its power so I agree with you as well. However, if
> > there is _PS3 method we can't skip the D3hot phase.
> 
> That's my understanding too, but I'm wondering about direct PMCSR
> writes.  It is unclear to me if they are necessary, or more precisely,
> whether or not Windows 10, say, carries them out if ACPI PM is going
> to be applied.

According to this:

https://docs.microsoft.com/en-us/windows-hardware/drivers/pci/pci-power-management-and-device-drivers#scenario-1-turning-off-a-device

it does write PMCSR.

> Maybe I'm going too far with my conclusions, but please let me know
> what you think about the approach proposed at the end of
> https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> ?

Yes, I think that is better approach.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:54                                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 11:54 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Mika Westerberg, Bjorn Helgaas, LKML,
	Lyude Paul, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 12:34 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >

[cut]

> >
>
> the issue is not AML related at all as I am able to reproduce this
> issue without having to invoke any of that at all, I just need to poke
> into the PCI register directly to cut the power.

Since the register is not documented, you don't actually know what
exactly happens when it is written to.

You basically are saying something like "if I write a specific value
to an undocumented register, that makes things fail".  And yes,
writing things to undocumented registers is likely to cause failure to
happen, in general.

The point is that the kernel will never write into this register by itself.

> The register is not documented, but effectively what the AML code is writing to as well.

So that AML code is problematic.  It expects the write to do something
useful, but that's not the case.  Without the AML, the register would
not have been written to at all.

> Of course it might also be that the code I was testing it was doing
> things in a non conformant way and I just hit a different issue as
> well, but in the end I don't think that the AML code is the root cause
> of all of that.

If AML is not involved at all, things work.  You've just said so in
another message in this thread, quoting verbatim:

"yes. In my previous testing I was poking into the PCI registers of the
bridge controller and the GPU directly and that never caused any
issues as long as I limited it to putting the devices into D3hot."

You cannot claim a hardware bug just because a write to an
undocumented register from AML causes things to break.

First, that may be a bug in the AML (which is not unheard of).
Second, and that is more likely, the expectations of the AML code may
not be met at the time it is run.

Assuming the latter, the root cause is really that the kernel executes
the AML in a hardware configuration in which the expectations of that
AML are not met.

We are now trying to understand what those expectations may be and so
how to cause them to be met.

Your observation that the issue can be avoided if the GPU is not put
into D3hot by a PMCSR write is a step in that direction and it is a
good finding.  The information from Mika based on the ASL analysis is
helpful too.  Let's not jump to premature conclusions too quickly,
though.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 11:54                                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 11:54 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 12:34 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >

[cut]

> >
>
> the issue is not AML related at all as I am able to reproduce this
> issue without having to invoke any of that at all, I just need to poke
> into the PCI register directly to cut the power.

Since the register is not documented, you don't actually know what
exactly happens when it is written to.

You basically are saying something like "if I write a specific value
to an undocumented register, that makes things fail".  And yes,
writing things to undocumented registers is likely to cause failure to
happen, in general.

The point is that the kernel will never write into this register by itself.

> The register is not documented, but effectively what the AML code is writing to as well.

So that AML code is problematic.  It expects the write to do something
useful, but that's not the case.  Without the AML, the register would
not have been written to at all.

> Of course it might also be that the code I was testing it was doing
> things in a non conformant way and I just hit a different issue as
> well, but in the end I don't think that the AML code is the root cause
> of all of that.

If AML is not involved at all, things work.  You've just said so in
another message in this thread, quoting verbatim:

"yes. In my previous testing I was poking into the PCI registers of the
bridge controller and the GPU directly and that never caused any
issues as long as I limited it to putting the devices into D3hot."

You cannot claim a hardware bug just because a write to an
undocumented register from AML causes things to break.

First, that may be a bug in the AML (which is not unheard of).
Second, and that is more likely, the expectations of the AML code may
not be met at the time it is run.

Assuming the latter, the root cause is really that the kernel executes
the AML in a hardware configuration in which the expectations of that
AML are not met.

We are now trying to understand what those expectations may be and so
how to cause them to be met.

Your observation that the issue can be avoided if the GPU is not put
into D3hot by a PMCSR write is a step in that direction and it is a
good finding.  The information from Mika based on the ASL analysis is
helpful too.  Let's not jump to premature conclusions too quickly,
though.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-22 11:52                                                     ` Mika Westerberg
@ 2019-11-22 12:15                                                       ` Rafael J. Wysocki
  -1 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 12:15 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML, Lyude Paul,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Fri, Nov 22, 2019 at 12:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>

[cut]

[I'm really running out of time for today, unfortunately.]

> > > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > > putting PCIe ports holding devices in D0 into any low-power states.
> > > > but that would make the approach in the $subject patch ineffective.
> > > >
> > > > Moreover, the fact that there are separate branches for "Windows 7"
> > > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > > does something else.
> > >
> > > My understanding (which may not be correct) is that up to Windows 7 it
> > > never put the devices into D3cold runtime. Only when the system entered
> > > Sx states it evaluated the _OFF methods.
> >
> > I see.

I think I have misunderstood what you said.

I also think that Windows 7 and before didin't do RTD3, but it did PCI
PM nevertheless and platform firmware could expect it to behave in a
specific way in that respect.  That expected behavior seems to have
changed in the next generations of Windows, as reflected by the OS
version and _REV checks in ASL.

> > > Starting from Windows 8 it started doing this runtime so devices can
> > > enter D3cold even when system is in S0.
> >
> > Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?
>
> No, there are two paths in the _OFF() and them some common code such as
> removing power etc.
>
> What the _REV 5 did is that it went into path where the AML seemed to
> directly disable the link.
>
> The other path that is taken with Windows 8+ does not disable the link
> but instead it puts it to low power L2 or L3 state (I suppose L3 since
> it removes the power and the GPU probably does not support wake).

OK, so the very existence of the two paths means that the OS behavior
expected by the firmware in the two cases represented by them is
different.  Presumably, the expected hardware configuration in which
the AML runs also is different in these two cases.

> The ASL code is below. PGOF() gets called from the power resource
> _OFF():

I'll look at it in detail when I have some more time later.

>         Method (PGOF, 1, Serialized)
>         {
>             PIOF = Arg0
>             If ((PIOF == Zero))
>             {
>                 If ((SGGP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>             ElseIf ((PIOF == One))
>             {
>                 If ((P1GP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>             ElseIf ((PIOF == 0x02))
>             {
>                 If ((P2GP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>
>             PEBA = \XBAS /* External reference */
>             PDEV = GDEV (PIOF)
>             PFUN = GFUN (PIOF)
>             Name (SCLK, Package (0x03)
>             {
>                 One,
>                 0x80,
>                 Zero
>             })
>             If ((CCHK (PIOF, Zero) == Zero))
>             {
>                 Return (Zero)
>             }
>
>             \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
>             If ((Arg0 == Zero))
>             {
>                 ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
>                 H0VI = S0VI /* \_SB_.PCI0.S0VI */
>                 H0DI = S0DI /* \_SB_.PCI0.S0DI */
>                 ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
>             }
>             ElseIf ((Arg0 == One))
>             {
>                 ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
>                 H1VI = S1VI /* \_SB_.PCI0.S1VI */
>                 H1DI = S1DI /* \_SB_.PCI0.S1DI */
>                 ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
>             }
>             ElseIf ((Arg0 == 0x02))
>             {
>                 ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
>                 H2VI = S2VI /* \_SB_.PCI0.S2VI */
>                 H2DI = S2DI /* \_SB_.PCI0.S2DI */
>                 ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
>             }
>
>             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
>                 0x05))))
>             {
>                 If ((PIOF == Zero))
>                 {
>                     P0LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P0LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P0RM = One
>                     P0AP = 0x03
>                 }
>                 ElseIf ((PIOF == One))
>                 {
>                     P1LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P1LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P1RM = One
>                     P1AP = 0x03
>                 }
>                 ElseIf ((PIOF == 0x02))
>                 {
>                     P2LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P2LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P2RM = One
>                     P2AP = 0x03
>                 }
>
>                 If ((PBGE != Zero))
>                 {
>                     If (SBDL (PIOF))
>                     {
>                         MBDL = GMXB (PIOF)
>                         PDUB (PIOF, MBDL)
>                     }
>                 }
>             }
>             Else
>             {
>                 LKDS (PIOF)
>             }
>
>             If ((DerefOf (SCLK [Zero]) != Zero))
>             {
>                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
>                 Sleep (0x10)
>             }
>
>             GPPR (PIOF, Zero)
>             If ((OSYS != 0x07D9))
>             {
>                 DIWK (PIOF)
>             }
>
>             \_SB.SGOV (0x01010004, Zero)
>             Sleep (0x14)
>             Return (Zero)
>         }
>
> > > > Now, the structure of the "Windows 8+" branch
> > > > described by you suggests that, at least in the cases when it is going
> > > > to remove power from the port eventually, it goes straight for the
> > > > link preparation (the L2/L3 Ready transition) and power removal
> > > > without bothering to program the downstream device and port into D3hot
> > > > (because that's kind of redundant).
> > > >
> > > > That hypothetical "Windows 8+" approach may really work universally,
> > > > because it doesn't seem to break any rules (going straight from D0 to
> > > > D3cold is not disallowed and doing that for both a port and a
> > > > downstream device at the same time is kind of OK either, as long as
> > > > the link is ready for that).
> > >
> > > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > > 5.8 we can see the supported PM state transitions and it shows that you
> > > get to D3cold through D3hot. Of course the device goes into D3cold if
> > > you simply remove its power so I agree with you as well. However, if
> > > there is _PS3 method we can't skip the D3hot phase.
> >
> > That's my understanding too, but I'm wondering about direct PMCSR
> > writes.  It is unclear to me if they are necessary, or more precisely,
> > whether or not Windows 10, say, carries them out if ACPI PM is going
> > to be applied.
>
> According to this:
>
> https://docs.microsoft.com/en-us/windows-hardware/drivers/pci/pci-power-management-and-device-drivers#scenario-1-turning-off-a-device
>
> it does write PMCSR.

But I don't think that the case at hand is covered by this at all.

In the "generic" case covered by the doc, the world is simple: the
device has a PMCSR and some AML, possibly to support D3cold.  You use
both and are happy.  And you don't care about the upstream PCIe port
at all.

In the case at had you need a power resource at the PCIe port level to
put the downstream device into D3cold (BTW, I'm not even sure if the
port really is power managed by this at all), so they both are treated
kind of as a combo.  Quite a bit more complicated in my view.

> > Maybe I'm going too far with my conclusions, but please let me know
> > what you think about the approach proposed at the end of
> > https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> > ?
>
> Yes, I think that is better approach.

OK, thanks!

If I have the time next week, I'll try to prototype patch to implement
this idea.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-22 12:15                                                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-11-22 12:15 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Fri, Nov 22, 2019 at 12:52 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>

[cut]

[I'm really running out of time for today, unfortunately.]

> > > > The current design is mostly based on the PCI PM Spec 1.2, so it would
> > > > be consequent to follow system-wide suspend in PM-runtime and avoid
> > > > putting PCIe ports holding devices in D0 into any low-power states.
> > > > but that would make the approach in the $subject patch ineffective.
> > > >
> > > > Moreover, the fact that there are separate branches for "Windows 7"
> > > > and "Windows 8+" kind of suggest a change in the expected behavior
> > > > between Windows 7 and Windows 8, from the AML perspective.  I would
> > > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later)
> > > > does something else.
> > >
> > > My understanding (which may not be correct) is that up to Windows 7 it
> > > never put the devices into D3cold runtime. Only when the system entered
> > > Sx states it evaluated the _OFF methods.
> >
> > I see.

I think I have misunderstood what you said.

I also think that Windows 7 and before didin't do RTD3, but it did PCI
PM nevertheless and platform firmware could expect it to behave in a
specific way in that respect.  That expected behavior seems to have
changed in the next generations of Windows, as reflected by the OS
version and _REV checks in ASL.

> > > Starting from Windows 8 it started doing this runtime so devices can
> > > enter D3cold even when system is in S0.
> >
> > Hmm.  So by setting _REV to 5 we effectively change the _OFF into a NOP?
>
> No, there are two paths in the _OFF() and them some common code such as
> removing power etc.
>
> What the _REV 5 did is that it went into path where the AML seemed to
> directly disable the link.
>
> The other path that is taken with Windows 8+ does not disable the link
> but instead it puts it to low power L2 or L3 state (I suppose L3 since
> it removes the power and the GPU probably does not support wake).

OK, so the very existence of the two paths means that the OS behavior
expected by the firmware in the two cases represented by them is
different.  Presumably, the expected hardware configuration in which
the AML runs also is different in these two cases.

> The ASL code is below. PGOF() gets called from the power resource
> _OFF():

I'll look at it in detail when I have some more time later.

>         Method (PGOF, 1, Serialized)
>         {
>             PIOF = Arg0
>             If ((PIOF == Zero))
>             {
>                 If ((SGGP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>             ElseIf ((PIOF == One))
>             {
>                 If ((P1GP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>             ElseIf ((PIOF == 0x02))
>             {
>                 If ((P2GP == Zero))
>                 {
>                     Return (Zero)
>                 }
>             }
>
>             PEBA = \XBAS /* External reference */
>             PDEV = GDEV (PIOF)
>             PFUN = GFUN (PIOF)
>             Name (SCLK, Package (0x03)
>             {
>                 One,
>                 0x80,
>                 Zero
>             })
>             If ((CCHK (PIOF, Zero) == Zero))
>             {
>                 Return (Zero)
>             }
>
>             \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
>             If ((Arg0 == Zero))
>             {
>                 ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
>                 H0VI = S0VI /* \_SB_.PCI0.S0VI */
>                 H0DI = S0DI /* \_SB_.PCI0.S0DI */
>                 ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
>             }
>             ElseIf ((Arg0 == One))
>             {
>                 ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
>                 H1VI = S1VI /* \_SB_.PCI0.S1VI */
>                 H1DI = S1DI /* \_SB_.PCI0.S1DI */
>                 ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
>             }
>             ElseIf ((Arg0 == 0x02))
>             {
>                 ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
>                 H2VI = S2VI /* \_SB_.PCI0.S2VI */
>                 H2DI = S2DI /* \_SB_.PCI0.S2DI */
>                 ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
>             }
>
>             If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV ==
>                 0x05))))
>             {
>                 If ((PIOF == Zero))
>                 {
>                     P0LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P0LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P0RM = One
>                     P0AP = 0x03
>                 }
>                 ElseIf ((PIOF == One))
>                 {
>                     P1LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P1LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P1RM = One
>                     P1AP = 0x03
>                 }
>                 ElseIf ((PIOF == 0x02))
>                 {
>                     P2LD = One
>                     TCNT = Zero
>                     While ((TCNT < LDLY))
>                     {
>                         If ((P2LT == 0x08))
>                         {
>                             Break
>                         }
>
>                         Sleep (0x10)
>                         TCNT += 0x10
>                     }
>
>                     P2RM = One
>                     P2AP = 0x03
>                 }
>
>                 If ((PBGE != Zero))
>                 {
>                     If (SBDL (PIOF))
>                     {
>                         MBDL = GMXB (PIOF)
>                         PDUB (PIOF, MBDL)
>                     }
>                 }
>             }
>             Else
>             {
>                 LKDS (PIOF)
>             }
>
>             If ((DerefOf (SCLK [Zero]) != Zero))
>             {
>                 PCRO (0xDC, 0x100C, DerefOf (SCLK [One]))
>                 Sleep (0x10)
>             }
>
>             GPPR (PIOF, Zero)
>             If ((OSYS != 0x07D9))
>             {
>                 DIWK (PIOF)
>             }
>
>             \_SB.SGOV (0x01010004, Zero)
>             Sleep (0x14)
>             Return (Zero)
>         }
>
> > > > Now, the structure of the "Windows 8+" branch
> > > > described by you suggests that, at least in the cases when it is going
> > > > to remove power from the port eventually, it goes straight for the
> > > > link preparation (the L2/L3 Ready transition) and power removal
> > > > without bothering to program the downstream device and port into D3hot
> > > > (because that's kind of redundant).
> > > >
> > > > That hypothetical "Windows 8+" approach may really work universally,
> > > > because it doesn't seem to break any rules (going straight from D0 to
> > > > D3cold is not disallowed and doing that for both a port and a
> > > > downstream device at the same time is kind of OK either, as long as
> > > > the link is ready for that).
> > >
> > > I guess it depends on how you interpret the specs ;-) From PCIe 5.0 sec
> > > 5.8 we can see the supported PM state transitions and it shows that you
> > > get to D3cold through D3hot. Of course the device goes into D3cold if
> > > you simply remove its power so I agree with you as well. However, if
> > > there is _PS3 method we can't skip the D3hot phase.
> >
> > That's my understanding too, but I'm wondering about direct PMCSR
> > writes.  It is unclear to me if they are necessary, or more precisely,
> > whether or not Windows 10, say, carries them out if ACPI PM is going
> > to be applied.
>
> According to this:
>
> https://docs.microsoft.com/en-us/windows-hardware/drivers/pci/pci-power-management-and-device-drivers#scenario-1-turning-off-a-device
>
> it does write PMCSR.

But I don't think that the case at hand is covered by this at all.

In the "generic" case covered by the doc, the world is simple: the
device has a PMCSR and some AML, possibly to support D3cold.  You use
both and are happy.  And you don't care about the upstream PCIe port
at all.

In the case at had you need a power resource at the PCIe port level to
put the downstream device into D3cold (BTW, I'm not even sure if the
port really is power managed by this at all), so they both are treated
kind of as a combo.  Quite a bit more complicated in my view.

> > Maybe I'm going too far with my conclusions, but please let me know
> > what you think about the approach proposed at the end of
> > https://lore.kernel.org/linux-pm/CAJZ5v0iQttGB4m5TbzCtjp2C1j5qEkUhqhpWb++LhSk3mbW=Lw@mail.gmail.com/T/#t
> > ?
>
> Yes, I think that is better approach.

OK, thanks!

If I have the time next week, I'll try to prototype patch to implement
this idea.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-21 16:39                                               ` Rafael J. Wysocki
@ 2019-11-26 23:10                                                 ` Lyude Paul
  -1 siblings, 0 replies; 150+ messages in thread
From: Lyude Paul @ 2019-11-26 23:10 UTC (permalink / raw)
  To: Rafael J. Wysocki, Karol Herbst
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Rafael J . Wysocki,
	Linux PCI, Linux PM, dri-devel, nouveau, Dave Airlie,
	Mario Limonciello

[big snip]
> There is a sysfs attribute called "d3cold_allowed" which can be used
> for "blocking" D3cold, so can you please retest using that?
> 

Hey-this is almost certainly not the right place in this thread to respond,
but this thread has gotten so deep evolution can't push the subject further to
the right, heh. So I'll just respond here.

I've been following this and helping out Karol with testing here and there.
They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
has a turing GPU and 8086:1901 PCI bridge.

I was about to say "the patch fixed things, hooray!" but it seems that after
trying runtime suspend/resume a couple times things fall apart again:

[   27.680433] nouveau 0000:01:00.0: enabling device (0000 -> 0003)
[   27.680578] nouveau 0000:01:00.0: NVIDIA TU117 (167000a1)
[   27.763967] nouveau 0000:01:00.0: bios: version 90.17.20.00.16
[   27.764664] nouveau 0000:01:00.0: fb: 4096 MiB GDDR5
[   27.806115] vga_switcheroo: enabled
[   27.806221] [TTM] Zone  kernel: Available graphics memory: 16244510 KiB
[   27.806222] [TTM] Zone   dma32: Available graphics memory: 2097152 KiB
[   27.806222] [TTM] Initializing pool allocator
[   27.806224] [TTM] Initializing DMA pool allocator
[   27.806249] nouveau 0000:01:00.0: DRM: VRAM: 4096 MiB
[   27.806249] nouveau 0000:01:00.0: DRM: GART: 536870912 MiB
[   27.806250] nouveau 0000:01:00.0: DRM: BIT table 'A' not found
[   27.806251] nouveau 0000:01:00.0: DRM: BIT table 'L' not found
[   27.806251] nouveau 0000:01:00.0: DRM: TMDS table version 2.0
[   27.806252] nouveau 0000:01:00.0: DRM: DCB version 4.1
[   27.806253] nouveau 0000:01:00.0: DRM: DCB outp 00: 02800f66 04600020
[   27.806253] nouveau 0000:01:00.0: DRM: DCB outp 01: 02011f52 00020010
[   27.806254] nouveau 0000:01:00.0: DRM: DCB outp 02: 01022f36 04600010
[   27.806254] nouveau 0000:01:00.0: DRM: DCB outp 03: 01033f46 04600020
[   27.806255] nouveau 0000:01:00.0: DRM: DCB conn 00: 00020047
[   27.806255] nouveau 0000:01:00.0: DRM: DCB conn 01: 00010161
[   27.806256] nouveau 0000:01:00.0: DRM: DCB conn 02: 00001248
[   27.806256] nouveau 0000:01:00.0: DRM: DCB conn 03: 00002348
[   27.806257] nouveau 0000:01:00.0: DRM: Pointer to flat panel table invalid
[   27.806415] nouveau 0000:01:00.0: DRM: failed to create kernel channel, -22
[   27.806530] nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
[   28.114808] nouveau 0000:01:00.0: DRM: unknown connector type 48
[   28.114943] nouveau 0000:01:00.0: DRM: unknown connector type 48
[   28.115026] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   28.115027] [drm] Driver supports precise vblank timestamp query.
[   28.116611] [drm] Cannot find any crtc or sizes
[   28.117452] [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
[   28.118074] [drm] Cannot find any crtc or sizes
[   28.119523] [drm] Cannot find any crtc or sizes
[   34.081503] nouveau 0000:01:00.0: DRM: suspending console...
[   34.081508] nouveau 0000:01:00.0: DRM: suspending display...
[   34.081528] nouveau 0000:01:00.0: DRM: evicting buffers...
[   34.081531] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   34.081551] nouveau 0000:01:00.0: DRM: suspending fence...
[   34.091173] nouveau 0000:01:00.0: DRM: suspending object tree...
[   37.729746] nouveau 0000:01:00.0: DRM: resuming object tree...
[   38.042076] nouveau 0000:01:00.0: DRM: resuming fence...
[   38.042167] nouveau 0000:01:00.0: DRM: resuming display...
[   38.042175] nouveau 0000:01:00.0: DRM: resuming console...
[   44.309325] nouveau 0000:01:00.0: DRM: suspending console...
[   44.309331] nouveau 0000:01:00.0: DRM: suspending display...
[   44.309349] nouveau 0000:01:00.0: DRM: evicting buffers...
[   44.309352] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   44.309371] nouveau 0000:01:00.0: DRM: suspending fence...
[   44.318938] nouveau 0000:01:00.0: DRM: suspending object tree...
[   76.577644] nouveau 0000:01:00.0: DRM: resuming object tree...
[   76.890266] nouveau 0000:01:00.0: DRM: resuming fence...
[   76.890362] nouveau 0000:01:00.0: DRM: resuming display...
[   76.890379] nouveau 0000:01:00.0: DRM: resuming console...
[   82.721356] nouveau 0000:01:00.0: DRM: suspending console...
[   82.721361] nouveau 0000:01:00.0: DRM: suspending display...
[   82.721380] nouveau 0000:01:00.0: DRM: evicting buffers...
[   82.721383] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   82.721403] nouveau 0000:01:00.0: DRM: suspending fence...
[   82.730483] nouveau 0000:01:00.0: DRM: suspending object tree...
[  681.369950] nouveau 0000:01:00.0: DRM: resuming object tree...
[  681.690464] nouveau 0000:01:00.0: DRM: resuming fence...
[  681.690555] nouveau 0000:01:00.0: DRM: resuming display...
[  681.690568] nouveau 0000:01:00.0: DRM: resuming console...
[  686.873629] nouveau 0000:01:00.0: DRM: suspending console...
[  686.873634] nouveau 0000:01:00.0: DRM: suspending display...
[  686.873653] nouveau 0000:01:00.0: DRM: evicting buffers...
[  686.873656] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[  686.873676] nouveau 0000:01:00.0: DRM: suspending fence...
[  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
[  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866542] acpi device:00: Failed to change power state to D0
[  754.209030] video LNXVIDEO:00: Cannot transition to power state D0 for parent in (unknown)
[  755.848894] nouveau 0000:01:00.0: not ready 1023ms after Switch to D0; waiting
[  756.936876] nouveau 0000:01:00.0: not ready 2047ms after Switch to D0; waiting
[  759.048849] nouveau 0000:01:00.0: not ready 4095ms after Switch to D0; waiting
[  763.208807] nouveau 0000:01:00.0: not ready 8191ms after Switch to D0; waiting
[  771.912692] nouveau 0000:01:00.0: not ready 16383ms after Switch to D0; waiting
[  788.808505] nouveau 0000:01:00.0: not ready 32767ms after Switch to D0; waiting

752.866542 is where I start trying to resume the GPU. lspci -nn:

00:00.0 Host bridge [0600]: Intel Corporation Device [8086:3e20] (rev 0d)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 0d)
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b] (rev 02)
00:04.0 Signal processing controller [1180]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem [8086:1903] (rev 0d)
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 [8086:a368] (rev 10)
00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 [8086:a340] (rev f0)
00:1b.4 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 [8086:a32c] (rev f0)
00:1c.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 [8086:a338] (rev f0)
00:1d.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 [8086:a330] (rev f0)
00:1d.6 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #15 [8086:a336] (rev f0)
00:1e.0 Communication controller [0780]: Intel Corporation Device [8086:a328] (rev 10)
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a30e] (rev 10)
00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-LM [8086:15bb] (rev 10)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f91] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
04:00.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:00.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:01.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:02.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:04.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
06:00.0 System peripheral [0880]: Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 4C 2018] [8086:15eb] (rev 06)
2c:00.0 USB controller [0c03]: Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 4C 2018] [8086:15ec] (rev 06)
52:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
53:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS525A PCI Express Card Reader [10ec:525a] (rev 01)
-- 
Cheers,
	Lyude Paul


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-26 23:10                                                 ` Lyude Paul
  0 siblings, 0 replies; 150+ messages in thread
From: Lyude Paul @ 2019-11-26 23:10 UTC (permalink / raw)
  To: Rafael J. Wysocki, Karol Herbst
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

[big snip]
> There is a sysfs attribute called "d3cold_allowed" which can be used
> for "blocking" D3cold, so can you please retest using that?
> 

Hey-this is almost certainly not the right place in this thread to respond,
but this thread has gotten so deep evolution can't push the subject further to
the right, heh. So I'll just respond here.

I've been following this and helping out Karol with testing here and there.
They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
has a turing GPU and 8086:1901 PCI bridge.

I was about to say "the patch fixed things, hooray!" but it seems that after
trying runtime suspend/resume a couple times things fall apart again:

[   27.680433] nouveau 0000:01:00.0: enabling device (0000 -> 0003)
[   27.680578] nouveau 0000:01:00.0: NVIDIA TU117 (167000a1)
[   27.763967] nouveau 0000:01:00.0: bios: version 90.17.20.00.16
[   27.764664] nouveau 0000:01:00.0: fb: 4096 MiB GDDR5
[   27.806115] vga_switcheroo: enabled
[   27.806221] [TTM] Zone  kernel: Available graphics memory: 16244510 KiB
[   27.806222] [TTM] Zone   dma32: Available graphics memory: 2097152 KiB
[   27.806222] [TTM] Initializing pool allocator
[   27.806224] [TTM] Initializing DMA pool allocator
[   27.806249] nouveau 0000:01:00.0: DRM: VRAM: 4096 MiB
[   27.806249] nouveau 0000:01:00.0: DRM: GART: 536870912 MiB
[   27.806250] nouveau 0000:01:00.0: DRM: BIT table 'A' not found
[   27.806251] nouveau 0000:01:00.0: DRM: BIT table 'L' not found
[   27.806251] nouveau 0000:01:00.0: DRM: TMDS table version 2.0
[   27.806252] nouveau 0000:01:00.0: DRM: DCB version 4.1
[   27.806253] nouveau 0000:01:00.0: DRM: DCB outp 00: 02800f66 04600020
[   27.806253] nouveau 0000:01:00.0: DRM: DCB outp 01: 02011f52 00020010
[   27.806254] nouveau 0000:01:00.0: DRM: DCB outp 02: 01022f36 04600010
[   27.806254] nouveau 0000:01:00.0: DRM: DCB outp 03: 01033f46 04600020
[   27.806255] nouveau 0000:01:00.0: DRM: DCB conn 00: 00020047
[   27.806255] nouveau 0000:01:00.0: DRM: DCB conn 01: 00010161
[   27.806256] nouveau 0000:01:00.0: DRM: DCB conn 02: 00001248
[   27.806256] nouveau 0000:01:00.0: DRM: DCB conn 03: 00002348
[   27.806257] nouveau 0000:01:00.0: DRM: Pointer to flat panel table invalid
[   27.806415] nouveau 0000:01:00.0: DRM: failed to create kernel channel, -22
[   27.806530] nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
[   28.114808] nouveau 0000:01:00.0: DRM: unknown connector type 48
[   28.114943] nouveau 0000:01:00.0: DRM: unknown connector type 48
[   28.115026] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   28.115027] [drm] Driver supports precise vblank timestamp query.
[   28.116611] [drm] Cannot find any crtc or sizes
[   28.117452] [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
[   28.118074] [drm] Cannot find any crtc or sizes
[   28.119523] [drm] Cannot find any crtc or sizes
[   34.081503] nouveau 0000:01:00.0: DRM: suspending console...
[   34.081508] nouveau 0000:01:00.0: DRM: suspending display...
[   34.081528] nouveau 0000:01:00.0: DRM: evicting buffers...
[   34.081531] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   34.081551] nouveau 0000:01:00.0: DRM: suspending fence...
[   34.091173] nouveau 0000:01:00.0: DRM: suspending object tree...
[   37.729746] nouveau 0000:01:00.0: DRM: resuming object tree...
[   38.042076] nouveau 0000:01:00.0: DRM: resuming fence...
[   38.042167] nouveau 0000:01:00.0: DRM: resuming display...
[   38.042175] nouveau 0000:01:00.0: DRM: resuming console...
[   44.309325] nouveau 0000:01:00.0: DRM: suspending console...
[   44.309331] nouveau 0000:01:00.0: DRM: suspending display...
[   44.309349] nouveau 0000:01:00.0: DRM: evicting buffers...
[   44.309352] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   44.309371] nouveau 0000:01:00.0: DRM: suspending fence...
[   44.318938] nouveau 0000:01:00.0: DRM: suspending object tree...
[   76.577644] nouveau 0000:01:00.0: DRM: resuming object tree...
[   76.890266] nouveau 0000:01:00.0: DRM: resuming fence...
[   76.890362] nouveau 0000:01:00.0: DRM: resuming display...
[   76.890379] nouveau 0000:01:00.0: DRM: resuming console...
[   82.721356] nouveau 0000:01:00.0: DRM: suspending console...
[   82.721361] nouveau 0000:01:00.0: DRM: suspending display...
[   82.721380] nouveau 0000:01:00.0: DRM: evicting buffers...
[   82.721383] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[   82.721403] nouveau 0000:01:00.0: DRM: suspending fence...
[   82.730483] nouveau 0000:01:00.0: DRM: suspending object tree...
[  681.369950] nouveau 0000:01:00.0: DRM: resuming object tree...
[  681.690464] nouveau 0000:01:00.0: DRM: resuming fence...
[  681.690555] nouveau 0000:01:00.0: DRM: resuming display...
[  681.690568] nouveau 0000:01:00.0: DRM: resuming console...
[  686.873629] nouveau 0000:01:00.0: DRM: suspending console...
[  686.873634] nouveau 0000:01:00.0: DRM: suspending display...
[  686.873653] nouveau 0000:01:00.0: DRM: evicting buffers...
[  686.873656] nouveau 0000:01:00.0: DRM: waiting for kernel channels to go idle...
[  686.873676] nouveau 0000:01:00.0: DRM: suspending fence...
[  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
[  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
[  752.866542] acpi device:00: Failed to change power state to D0
[  754.209030] video LNXVIDEO:00: Cannot transition to power state D0 for parent in (unknown)
[  755.848894] nouveau 0000:01:00.0: not ready 1023ms after Switch to D0; waiting
[  756.936876] nouveau 0000:01:00.0: not ready 2047ms after Switch to D0; waiting
[  759.048849] nouveau 0000:01:00.0: not ready 4095ms after Switch to D0; waiting
[  763.208807] nouveau 0000:01:00.0: not ready 8191ms after Switch to D0; waiting
[  771.912692] nouveau 0000:01:00.0: not ready 16383ms after Switch to D0; waiting
[  788.808505] nouveau 0000:01:00.0: not ready 32767ms after Switch to D0; waiting

752.866542 is where I start trying to resume the GPU. lspci -nn:

00:00.0 Host bridge [0600]: Intel Corporation Device [8086:3e20] (rev 0d)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 0d)
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b] (rev 02)
00:04.0 Signal processing controller [1180]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem [8086:1903] (rev 0d)
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 [8086:a368] (rev 10)
00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 [8086:a340] (rev f0)
00:1b.4 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 [8086:a32c] (rev f0)
00:1c.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 [8086:a338] (rev f0)
00:1d.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 [8086:a330] (rev f0)
00:1d.6 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #15 [8086:a336] (rev f0)
00:1e.0 Communication controller [0780]: Intel Corporation Device [8086:a328] (rev 10)
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a30e] (rev 10)
00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-LM [8086:15bb] (rev 10)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f91] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
04:00.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:00.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:01.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:02.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
05:04.0 PCI bridge [0604]: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] [8086:15ea] (rev 06)
06:00.0 System peripheral [0880]: Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 4C 2018] [8086:15eb] (rev 06)
2c:00.0 USB controller [0c03]: Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 4C 2018] [8086:15ec] (rev 06)
52:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
53:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS525A PCI Express Card Reader [10ec:525a] (rev 01)
-- 
Cheers,
	Lyude Paul

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 11:48                                                   ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-27 11:48 UTC (permalink / raw)
  To: Lyude Paul
  Cc: Rafael J. Wysocki, Karol Herbst, Bjorn Helgaas, LKML,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> Hey-this is almost certainly not the right place in this thread to respond,
> but this thread has gotten so deep evolution can't push the subject further to
> the right, heh. So I'll just respond here.

:)

> I've been following this and helping out Karol with testing here and there.
> They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
> has a turing GPU and 8086:1901 PCI bridge.
> 
> I was about to say "the patch fixed things, hooray!" but it seems that after
> trying runtime suspend/resume a couple times things fall apart again:

You mean $subject patch, no?

> [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)

This is probably the culprit. The same AML code fails to properly turn
on the device.

Is acpidump from this system available somewhere?

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 11:48                                                   ` Mika Westerberg
  0 siblings, 0 replies; 150+ messages in thread
From: Mika Westerberg @ 2019-11-27 11:48 UTC (permalink / raw)
  To: Lyude Paul
  Cc: Karol Herbst, Rafael J. Wysocki, Linux PCI, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> Hey-this is almost certainly not the right place in this thread to respond,
> but this thread has gotten so deep evolution can't push the subject further to
> the right, heh. So I'll just respond here.

:)

> I've been following this and helping out Karol with testing here and there.
> They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
> has a turing GPU and 8086:1901 PCI bridge.
> 
> I was about to say "the patch fixed things, hooray!" but it seems that after
> trying runtime suspend/resume a couple times things fall apart again:

You mean $subject patch, no?

> [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)

This is probably the culprit. The same AML code fails to properly turn
on the device.

Is acpidump from this system available somewhere?
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 11:51                                                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-27 11:51 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Lyude Paul, Rafael J. Wysocki, Bjorn Helgaas, LKML,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > Hey-this is almost certainly not the right place in this thread to respond,
> > but this thread has gotten so deep evolution can't push the subject further to
> > the right, heh. So I'll just respond here.
>
> :)
>
> > I've been following this and helping out Karol with testing here and there.
> > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
> > has a turing GPU and 8086:1901 PCI bridge.
> >
> > I was about to say "the patch fixed things, hooray!" but it seems that after
> > trying runtime suspend/resume a couple times things fall apart again:
>
> You mean $subject patch, no?
>

no, I told Lyude to test the pci/pm branch as the runpm errors we saw
on that machine looked different. Some BAR error the GPU reported
after it got resumed, so I was wondering if the delays were helping
with that. But after some cycles it still caused the same issue, that
the GPU disappeared. Later testing also showed that my patch also
didn't seem to help with this error sadly :/

> > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
>
> This is probably the culprit. The same AML code fails to properly turn
> on the device.
>
> Is acpidump from this system available somewhere?
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 11:51                                                     ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-11-27 11:51 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Rafael J. Wysocki, Linux PCI, Linux PM, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
<mika.westerberg@intel.com> wrote:
>
> On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > Hey-this is almost certainly not the right place in this thread to respond,
> > but this thread has gotten so deep evolution can't push the subject further to
> > the right, heh. So I'll just respond here.
>
> :)
>
> > I've been following this and helping out Karol with testing here and there.
> > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which
> > has a turing GPU and 8086:1901 PCI bridge.
> >
> > I was about to say "the patch fixed things, hooray!" but it seems that after
> > trying runtime suspend/resume a couple times things fall apart again:
>
> You mean $subject patch, no?
>

no, I told Lyude to test the pci/pm branch as the runpm errors we saw
on that machine looked different. Some BAR error the GPU reported
after it got resumed, so I was wondering if the delays were helping
with that. But after some cycles it still caused the same issue, that
the GPU disappeared. Later testing also showed that my patch also
didn't seem to help with this error sadly :/

> > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
>
> This is probably the culprit. The same AML code fails to properly turn
> on the device.
>
> Is acpidump from this system available somewhere?
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 19:51                                                       ` Lyude Paul
  0 siblings, 0 replies; 150+ messages in thread
From: Lyude Paul @ 2019-11-27 19:51 UTC (permalink / raw)
  To: Karol Herbst, Mika Westerberg
  Cc: Rafael J. Wysocki, Bjorn Helgaas, LKML, Rafael J . Wysocki,
	Linux PCI, Linux PM, dri-devel, nouveau, Dave Airlie,
	Mario Limonciello

[-- Attachment #1: Type: text/plain, Size: 2014 bytes --]

On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > Hey-this is almost certainly not the right place in this thread to
> > > respond,
> > > but this thread has gotten so deep evolution can't push the subject
> > > further to
> > > the right, heh. So I'll just respond here.
> > 
> > :)
> > 
> > > I've been following this and helping out Karol with testing here and
> > > there.
> > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > which
> > > has a turing GPU and 8086:1901 PCI bridge.
> > > 
> > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > after
> > > trying runtime suspend/resume a couple times things fall apart again:
> > 
> > You mean $subject patch, no?
> > 
> 
> no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> on that machine looked different. Some BAR error the GPU reported
> after it got resumed, so I was wondering if the delays were helping
> with that. But after some cycles it still caused the same issue, that
> the GPU disappeared. Later testing also showed that my patch also
> didn't seem to help with this error sadly :/
> 
> > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > 
> > This is probably the culprit. The same AML code fails to properly turn
> > on the device.
> > 
> > Is acpidump from this system available somewhere?

Attached it to this email

> > 
-- 
Cheers,
	Lyude Paul

[-- Attachment #2: x1extremeg2-acpi.tar.xz --]
[-- Type: application/x-xz-compressed-tar, Size: 181604 bytes --]

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-11-27 19:51                                                       ` Lyude Paul
  0 siblings, 0 replies; 150+ messages in thread
From: Lyude Paul @ 2019-11-27 19:51 UTC (permalink / raw)
  To: Karol Herbst, Mika Westerberg
  Cc: Linux PM, Linux PCI, Rafael J. Wysocki, Rafael J . Wysocki, LKML,
	dri-devel, Mario Limonciello, Bjorn Helgaas, nouveau

[-- Attachment #1: Type: text/plain, Size: 2014 bytes --]

On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> <mika.westerberg@intel.com> wrote:
> > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > Hey-this is almost certainly not the right place in this thread to
> > > respond,
> > > but this thread has gotten so deep evolution can't push the subject
> > > further to
> > > the right, heh. So I'll just respond here.
> > 
> > :)
> > 
> > > I've been following this and helping out Karol with testing here and
> > > there.
> > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > which
> > > has a turing GPU and 8086:1901 PCI bridge.
> > > 
> > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > after
> > > trying runtime suspend/resume a couple times things fall apart again:
> > 
> > You mean $subject patch, no?
> > 
> 
> no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> on that machine looked different. Some BAR error the GPU reported
> after it got resumed, so I was wondering if the delays were helping
> with that. But after some cycles it still caused the same issue, that
> the GPU disappeared. Later testing also showed that my patch also
> didn't seem to help with this error sadly :/
> 
> > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > 
> > This is probably the culprit. The same AML code fails to properly turn
> > on the device.
> > 
> > Is acpidump from this system available somewhere?

Attached it to this email

> > 
-- 
Cheers,
	Lyude Paul

[-- Attachment #2: x1extremeg2-acpi.tar.xz --]
[-- Type: application/x-xz-compressed-tar, Size: 181604 bytes --]

[-- Attachment #3: Type: text/plain, Size: 159 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-11-27 19:51                                                       ` Lyude Paul
@ 2019-12-09 11:17                                                         ` Karol Herbst
  -1 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-09 11:17 UTC (permalink / raw)
  To: Lyude Paul
  Cc: Mika Westerberg, Rafael J. Wysocki, Bjorn Helgaas, LKML,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

anybody any other ideas? It seems that both patches don't really fix
the issue and I have no idea left on my side to try out. The only
thing left I could do to further investigate would be to reverse
engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
but I've heard users having similar issues to the one Lyude told us
about... and I couldn't verify that the patches help there either in a
reliable way.

On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
>
> On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > Hey-this is almost certainly not the right place in this thread to
> > > > respond,
> > > > but this thread has gotten so deep evolution can't push the subject
> > > > further to
> > > > the right, heh. So I'll just respond here.
> > >
> > > :)
> > >
> > > > I've been following this and helping out Karol with testing here and
> > > > there.
> > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > which
> > > > has a turing GPU and 8086:1901 PCI bridge.
> > > >
> > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > after
> > > > trying runtime suspend/resume a couple times things fall apart again:
> > >
> > > You mean $subject patch, no?
> > >
> >
> > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > on that machine looked different. Some BAR error the GPU reported
> > after it got resumed, so I was wondering if the delays were helping
> > with that. But after some cycles it still caused the same issue, that
> > the GPU disappeared. Later testing also showed that my patch also
> > didn't seem to help with this error sadly :/
> >
> > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > >
> > > This is probably the culprit. The same AML code fails to properly turn
> > > on the device.
> > >
> > > Is acpidump from this system available somewhere?
>
> Attached it to this email
>
> > >
> --
> Cheers,
>         Lyude Paul


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-09 11:17                                                         ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-09 11:17 UTC (permalink / raw)
  To: Lyude Paul
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

anybody any other ideas? It seems that both patches don't really fix
the issue and I have no idea left on my side to try out. The only
thing left I could do to further investigate would be to reverse
engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
but I've heard users having similar issues to the one Lyude told us
about... and I couldn't verify that the patches help there either in a
reliable way.

On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
>
> On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > <mika.westerberg@intel.com> wrote:
> > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > Hey-this is almost certainly not the right place in this thread to
> > > > respond,
> > > > but this thread has gotten so deep evolution can't push the subject
> > > > further to
> > > > the right, heh. So I'll just respond here.
> > >
> > > :)
> > >
> > > > I've been following this and helping out Karol with testing here and
> > > > there.
> > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > which
> > > > has a turing GPU and 8086:1901 PCI bridge.
> > > >
> > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > after
> > > > trying runtime suspend/resume a couple times things fall apart again:
> > >
> > > You mean $subject patch, no?
> > >
> >
> > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > on that machine looked different. Some BAR error the GPU reported
> > after it got resumed, so I was wondering if the delays were helping
> > with that. But after some cycles it still caused the same issue, that
> > the GPU disappeared. Later testing also showed that my patch also
> > didn't seem to help with this error sadly :/
> >
> > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > >
> > > This is probably the culprit. The same AML code fails to properly turn
> > > on the device.
> > >
> > > Is acpidump from this system available somewhere?
>
> Attached it to this email
>
> > >
> --
> Cheers,
>         Lyude Paul

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-09 11:38                                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-12-09 11:38 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Lyude Paul, Mika Westerberg, Rafael J. Wysocki, Bjorn Helgaas,
	LKML, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Dave Airlie, Mario Limonciello

On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> anybody any other ideas?

Not yet, but I'm trying to collect some more information.

> It seems that both patches don't really fix
> the issue and I have no idea left on my side to try out. The only
> thing left I could do to further investigate would be to reverse
> engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> but I've heard users having similar issues to the one Lyude told us
> about... and I couldn't verify that the patches help there either in a
> reliable way.

It looks like the newer (8+) versions of Windows expect the GPU driver
to prepare the GPU for power removal in some specific way and the
latter fails if the GPU has not been prepared as expected.

Because testing indicates that the Windows 7 path in the platform
firmware works, it may be worth trying to do what it does to the PCIe
link before invoking the _OFF method for the power resource
controlling the GPU power.

If the Mika's theory that the Win7 path simply turns the PCIe link off
is correct, then whatever the _OFF method tries to do to the link
after that should not matter.

> On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
> >
> > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > > Hey-this is almost certainly not the right place in this thread to
> > > > > respond,
> > > > > but this thread has gotten so deep evolution can't push the subject
> > > > > further to
> > > > > the right, heh. So I'll just respond here.
> > > >
> > > > :)
> > > >
> > > > > I've been following this and helping out Karol with testing here and
> > > > > there.
> > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > > which
> > > > > has a turing GPU and 8086:1901 PCI bridge.
> > > > >
> > > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > > after
> > > > > trying runtime suspend/resume a couple times things fall apart again:
> > > >
> > > > You mean $subject patch, no?
> > > >
> > >
> > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > > on that machine looked different. Some BAR error the GPU reported
> > > after it got resumed, so I was wondering if the delays were helping
> > > with that. But after some cycles it still caused the same issue, that
> > > the GPU disappeared. Later testing also showed that my patch also
> > > didn't seem to help with this error sadly :/
> > >
> > > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > >
> > > > This is probably the culprit. The same AML code fails to properly turn
> > > > on the device.
> > > >
> > > > Is acpidump from this system available somewhere?
> >
> > Attached it to this email
> >
> > > >
> > --
> > Cheers,
> >         Lyude Paul
>

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-09 11:38                                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-12-09 11:38 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> anybody any other ideas?

Not yet, but I'm trying to collect some more information.

> It seems that both patches don't really fix
> the issue and I have no idea left on my side to try out. The only
> thing left I could do to further investigate would be to reverse
> engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> but I've heard users having similar issues to the one Lyude told us
> about... and I couldn't verify that the patches help there either in a
> reliable way.

It looks like the newer (8+) versions of Windows expect the GPU driver
to prepare the GPU for power removal in some specific way and the
latter fails if the GPU has not been prepared as expected.

Because testing indicates that the Windows 7 path in the platform
firmware works, it may be worth trying to do what it does to the PCIe
link before invoking the _OFF method for the power resource
controlling the GPU power.

If the Mika's theory that the Win7 path simply turns the PCIe link off
is correct, then whatever the _OFF method tries to do to the link
after that should not matter.

> On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
> >
> > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > > Hey-this is almost certainly not the right place in this thread to
> > > > > respond,
> > > > > but this thread has gotten so deep evolution can't push the subject
> > > > > further to
> > > > > the right, heh. So I'll just respond here.
> > > >
> > > > :)
> > > >
> > > > > I've been following this and helping out Karol with testing here and
> > > > > there.
> > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > > which
> > > > > has a turing GPU and 8086:1901 PCI bridge.
> > > > >
> > > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > > after
> > > > > trying runtime suspend/resume a couple times things fall apart again:
> > > >
> > > > You mean $subject patch, no?
> > > >
> > >
> > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > > on that machine looked different. Some BAR error the GPU reported
> > > after it got resumed, so I was wondering if the delays were helping
> > > with that. But after some cycles it still caused the same issue, that
> > > the GPU disappeared. Later testing also showed that my patch also
> > > didn't seem to help with this error sadly :/
> > >
> > > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > >
> > > > This is probably the culprit. The same AML code fails to properly turn
> > > > on the device.
> > > >
> > > > Is acpidump from this system available somewhere?
> >
> > Attached it to this email
> >
> > > >
> > --
> > Cheers,
> >         Lyude Paul
>
_______________________________________________
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-09 11:38                                                           ` Rafael J. Wysocki
  0 siblings, 0 replies; 150+ messages in thread
From: Rafael J. Wysocki @ 2019-12-09 11:38 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> anybody any other ideas?

Not yet, but I'm trying to collect some more information.

> It seems that both patches don't really fix
> the issue and I have no idea left on my side to try out. The only
> thing left I could do to further investigate would be to reverse
> engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> but I've heard users having similar issues to the one Lyude told us
> about... and I couldn't verify that the patches help there either in a
> reliable way.

It looks like the newer (8+) versions of Windows expect the GPU driver
to prepare the GPU for power removal in some specific way and the
latter fails if the GPU has not been prepared as expected.

Because testing indicates that the Windows 7 path in the platform
firmware works, it may be worth trying to do what it does to the PCIe
link before invoking the _OFF method for the power resource
controlling the GPU power.

If the Mika's theory that the Win7 path simply turns the PCIe link off
is correct, then whatever the _OFF method tries to do to the link
after that should not matter.

> On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
> >
> > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > > <mika.westerberg@intel.com> wrote:
> > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > > Hey-this is almost certainly not the right place in this thread to
> > > > > respond,
> > > > > but this thread has gotten so deep evolution can't push the subject
> > > > > further to
> > > > > the right, heh. So I'll just respond here.
> > > >
> > > > :)
> > > >
> > > > > I've been following this and helping out Karol with testing here and
> > > > > there.
> > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > > which
> > > > > has a turing GPU and 8086:1901 PCI bridge.
> > > > >
> > > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > > after
> > > > > trying runtime suspend/resume a couple times things fall apart again:
> > > >
> > > > You mean $subject patch, no?
> > > >
> > >
> > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > > on that machine looked different. Some BAR error the GPU reported
> > > after it got resumed, so I was wondering if the delays were helping
> > > with that. But after some cycles it still caused the same issue, that
> > > the GPU disappeared. Later testing also showed that my patch also
> > > didn't seem to help with this error sadly :/
> > >
> > > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > >
> > > > This is probably the culprit. The same AML code fails to properly turn
> > > > on the device.
> > > >
> > > > Is acpidump from this system available somewhere?
> >
> > Attached it to this email
> >
> > > >
> > --
> > Cheers,
> >         Lyude Paul
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-12-09 11:38                                                           ` Rafael J. Wysocki
@ 2019-12-09 12:24                                                             ` Karol Herbst
  -1 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-09 12:24 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Lyude Paul, Mika Westerberg, Bjorn Helgaas, LKML,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Dave Airlie, Mario Limonciello

On Mon, Dec 9, 2019 at 12:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > anybody any other ideas?
>
> Not yet, but I'm trying to collect some more information.
>
> > It seems that both patches don't really fix
> > the issue and I have no idea left on my side to try out. The only
> > thing left I could do to further investigate would be to reverse
> > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > but I've heard users having similar issues to the one Lyude told us
> > about... and I couldn't verify that the patches help there either in a
> > reliable way.
>
> It looks like the newer (8+) versions of Windows expect the GPU driver
> to prepare the GPU for power removal in some specific way and the
> latter fails if the GPU has not been prepared as expected.
>
> Because testing indicates that the Windows 7 path in the platform
> firmware works, it may be worth trying to do what it does to the PCIe
> link before invoking the _OFF method for the power resource
> controlling the GPU power.
>

ohh, that actually makes sense. Didn't think of that yet.

> If the Mika's theory that the Win7 path simply turns the PCIe link off
> is correct, then whatever the _OFF method tries to do to the link
> after that should not matter.
>

By the way, and I was only thinking about it after sending my last
email out, do you think we should fail the runtime resume path if the
device gets stuck in a power state?

Currently pci core always calls into the driver regardless, but maybe
for D3cold it really makes sense to just bail and refuse to resume? I
think I tried that as an early "fix" and might even have a patch
around. This should at least prevent crashes inside drivers trying to
access invalid memory or getting stuck in loops.

> > On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
> > >
> > > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > > > Hey-this is almost certainly not the right place in this thread to
> > > > > > respond,
> > > > > > but this thread has gotten so deep evolution can't push the subject
> > > > > > further to
> > > > > > the right, heh. So I'll just respond here.
> > > > >
> > > > > :)
> > > > >
> > > > > > I've been following this and helping out Karol with testing here and
> > > > > > there.
> > > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > > > which
> > > > > > has a turing GPU and 8086:1901 PCI bridge.
> > > > > >
> > > > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > > > after
> > > > > > trying runtime suspend/resume a couple times things fall apart again:
> > > > >
> > > > > You mean $subject patch, no?
> > > > >
> > > >
> > > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > > > on that machine looked different. Some BAR error the GPU reported
> > > > after it got resumed, so I was wondering if the delays were helping
> > > > with that. But after some cycles it still caused the same issue, that
> > > > the GPU disappeared. Later testing also showed that my patch also
> > > > didn't seem to help with this error sadly :/
> > > >
> > > > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > >
> > > > > This is probably the culprit. The same AML code fails to properly turn
> > > > > on the device.
> > > > >
> > > > > Is acpidump from this system available somewhere?
> > >
> > > Attached it to this email
> > >
> > > > >
> > > --
> > > Cheers,
> > >         Lyude Paul
> >
>


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-09 12:24                                                             ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-09 12:24 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Linux PCI, Mika Westerberg, Mario Limonciello,
	Rafael J . Wysocki, LKML, dri-devel, Bjorn Helgaas, nouveau

On Mon, Dec 9, 2019 at 12:39 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > anybody any other ideas?
>
> Not yet, but I'm trying to collect some more information.
>
> > It seems that both patches don't really fix
> > the issue and I have no idea left on my side to try out. The only
> > thing left I could do to further investigate would be to reverse
> > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > but I've heard users having similar issues to the one Lyude told us
> > about... and I couldn't verify that the patches help there either in a
> > reliable way.
>
> It looks like the newer (8+) versions of Windows expect the GPU driver
> to prepare the GPU for power removal in some specific way and the
> latter fails if the GPU has not been prepared as expected.
>
> Because testing indicates that the Windows 7 path in the platform
> firmware works, it may be worth trying to do what it does to the PCIe
> link before invoking the _OFF method for the power resource
> controlling the GPU power.
>

ohh, that actually makes sense. Didn't think of that yet.

> If the Mika's theory that the Win7 path simply turns the PCIe link off
> is correct, then whatever the _OFF method tries to do to the link
> after that should not matter.
>

By the way, and I was only thinking about it after sending my last
email out, do you think we should fail the runtime resume path if the
device gets stuck in a power state?

Currently pci core always calls into the driver regardless, but maybe
for D3cold it really makes sense to just bail and refuse to resume? I
think I tried that as an early "fix" and might even have a patch
around. This should at least prevent crashes inside drivers trying to
access invalid memory or getting stuck in loops.

> > On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul <lyude@redhat.com> wrote:
> > >
> > > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote:
> > > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg
> > > > <mika.westerberg@intel.com> wrote:
> > > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote:
> > > > > > Hey-this is almost certainly not the right place in this thread to
> > > > > > respond,
> > > > > > but this thread has gotten so deep evolution can't push the subject
> > > > > > further to
> > > > > > the right, heh. So I'll just respond here.
> > > > >
> > > > > :)
> > > > >
> > > > > > I've been following this and helping out Karol with testing here and
> > > > > > there.
> > > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation,
> > > > > > which
> > > > > > has a turing GPU and 8086:1901 PCI bridge.
> > > > > >
> > > > > > I was about to say "the patch fixed things, hooray!" but it seems that
> > > > > > after
> > > > > > trying runtime suspend/resume a couple times things fall apart again:
> > > > >
> > > > > You mean $subject patch, no?
> > > > >
> > > >
> > > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw
> > > > on that machine looked different. Some BAR error the GPU reported
> > > > after it got resumed, so I was wondering if the delays were helping
> > > > with that. But after some cycles it still caused the same issue, that
> > > > the GPU disappeared. Later testing also showed that my patch also
> > > > didn't seem to help with this error sadly :/
> > > >
> > > > > > [  686.883247] nouveau 0000:01:00.0: DRM: suspending object tree...
> > > > > > [  752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due
> > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > > [  752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to
> > > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > > > [  752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due
> > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529)
> > > > >
> > > > > This is probably the culprit. The same AML code fails to properly turn
> > > > > on the device.
> > > > >
> > > > > Is acpidump from this system available somewhere?
> > >
> > > Attached it to this email
> > >
> > > > >
> > > --
> > > Cheers,
> > >         Lyude Paul
> >
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-12-09 11:38                                                           ` Rafael J. Wysocki
@ 2019-12-10 19:58                                                             ` Dave Airlie
  -1 siblings, 0 replies; 150+ messages in thread
From: Dave Airlie @ 2019-12-10 19:58 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Lyude Paul, Mika Westerberg, Bjorn Helgaas, LKML,
	Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel, nouveau,
	Mario Limonciello

On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > anybody any other ideas?
>
> Not yet, but I'm trying to collect some more information.
>
> > It seems that both patches don't really fix
> > the issue and I have no idea left on my side to try out. The only
> > thing left I could do to further investigate would be to reverse
> > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > but I've heard users having similar issues to the one Lyude told us
> > about... and I couldn't verify that the patches help there either in a
> > reliable way.
>
> It looks like the newer (8+) versions of Windows expect the GPU driver
> to prepare the GPU for power removal in some specific way and the
> latter fails if the GPU has not been prepared as expected.
>
> Because testing indicates that the Windows 7 path in the platform
> firmware works, it may be worth trying to do what it does to the PCIe
> link before invoking the _OFF method for the power resource
> controlling the GPU power.
>

Remember the pre Win8 path required calling a DSM method to actually
power the card down, I think by the time we reach these methods in
those cases the card is already gone.

Dave.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-10 19:58                                                             ` Dave Airlie
  0 siblings, 0 replies; 150+ messages in thread
From: Dave Airlie @ 2019-12-10 19:58 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Karol Herbst, Linux PM, Linux PCI, Mika Westerberg,
	Mario Limonciello, Rafael J . Wysocki, LKML, dri-devel,
	Bjorn Helgaas, nouveau

On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> >
> > anybody any other ideas?
>
> Not yet, but I'm trying to collect some more information.
>
> > It seems that both patches don't really fix
> > the issue and I have no idea left on my side to try out. The only
> > thing left I could do to further investigate would be to reverse
> > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > but I've heard users having similar issues to the one Lyude told us
> > about... and I couldn't verify that the patches help there either in a
> > reliable way.
>
> It looks like the newer (8+) versions of Windows expect the GPU driver
> to prepare the GPU for power removal in some specific way and the
> latter fails if the GPU has not been prepared as expected.
>
> Because testing indicates that the Windows 7 path in the platform
> firmware works, it may be worth trying to do what it does to the PCIe
> link before invoking the _OFF method for the power resource
> controlling the GPU power.
>

Remember the pre Win8 path required calling a DSM method to actually
power the card down, I think by the time we reach these methods in
those cases the card is already gone.

Dave.
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
  2019-12-10 19:58                                                             ` Dave Airlie
@ 2019-12-10 20:49                                                               ` Karol Herbst
  -1 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-10 20:49 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rafael J. Wysocki, Lyude Paul, Mika Westerberg, Bjorn Helgaas,
	LKML, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Mario Limonciello

On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > anybody any other ideas?
> >
> > Not yet, but I'm trying to collect some more information.
> >
> > > It seems that both patches don't really fix
> > > the issue and I have no idea left on my side to try out. The only
> > > thing left I could do to further investigate would be to reverse
> > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > > but I've heard users having similar issues to the one Lyude told us
> > > about... and I couldn't verify that the patches help there either in a
> > > reliable way.
> >
> > It looks like the newer (8+) versions of Windows expect the GPU driver
> > to prepare the GPU for power removal in some specific way and the
> > latter fails if the GPU has not been prepared as expected.
> >
> > Because testing indicates that the Windows 7 path in the platform
> > firmware works, it may be worth trying to do what it does to the PCIe
> > link before invoking the _OFF method for the power resource
> > controlling the GPU power.
> >
>
> Remember the pre Win8 path required calling a DSM method to actually
> power the card down, I think by the time we reach these methods in
> those cases the card is already gone.
>
> Dave.
>

The point was that the firmware seems to do more in the legacy paths
and maybe we just have to do those things inside the driver instead
when using the new method. Also the _DSM call just wraps around the
interfaces on newer firmware anyway. The OS check is usually what
makes the difference. I might be wrong about the _DSM call just
wrapping though, but I think I saw it at least in some firmware at
some point.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2019-12-10 20:49                                                               ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2019-12-10 20:49 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> > >
> > > anybody any other ideas?
> >
> > Not yet, but I'm trying to collect some more information.
> >
> > > It seems that both patches don't really fix
> > > the issue and I have no idea left on my side to try out. The only
> > > thing left I could do to further investigate would be to reverse
> > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > > but I've heard users having similar issues to the one Lyude told us
> > > about... and I couldn't verify that the patches help there either in a
> > > reliable way.
> >
> > It looks like the newer (8+) versions of Windows expect the GPU driver
> > to prepare the GPU for power removal in some specific way and the
> > latter fails if the GPU has not been prepared as expected.
> >
> > Because testing indicates that the Windows 7 path in the platform
> > firmware works, it may be worth trying to do what it does to the PCIe
> > link before invoking the _OFF method for the power resource
> > controlling the GPU power.
> >
>
> Remember the pre Win8 path required calling a DSM method to actually
> power the card down, I think by the time we reach these methods in
> those cases the card is already gone.
>
> Dave.
>

The point was that the firmware seems to do more in the legacy paths
and maybe we just have to do those things inside the driver instead
when using the new method. Also the _DSM call just wraps around the
interfaces on newer firmware anyway. The OS check is usually what
makes the difference. I might be wrong about the _DSM call just
wrapping though, but I think I saw it at least in some firmware at
some point.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2020-01-13 15:31                                                                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2020-01-13 15:31 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rafael J. Wysocki, Lyude Paul, Mika Westerberg, Bjorn Helgaas,
	LKML, Rafael J . Wysocki, Linux PCI, Linux PM, dri-devel,
	nouveau, Mario Limonciello

okay.. so checking whatever is the difference with _REV being 5
(meaning the firmware uses the legacy paths) doesn't help in any way.
It's using a different method to turn the link of and the other ACPI
variables touched either point to undocumented registers on the PCI
bridge or internal ACPI memory...

so, anybody with any other ideas? I really wished the nvidia driver
would enable runpm on pre turing GPUs, but that's sadly not the case
and on Turing things seem to be totally different, so it wouldn't help
to check there as well... *sigh*

On Tue, Dec 10, 2019 at 9:49 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie <airlied@gmail.com> wrote:
> >
> > On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> > > >
> > > > anybody any other ideas?
> > >
> > > Not yet, but I'm trying to collect some more information.
> > >
> > > > It seems that both patches don't really fix
> > > > the issue and I have no idea left on my side to try out. The only
> > > > thing left I could do to further investigate would be to reverse
> > > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > > > but I've heard users having similar issues to the one Lyude told us
> > > > about... and I couldn't verify that the patches help there either in a
> > > > reliable way.
> > >
> > > It looks like the newer (8+) versions of Windows expect the GPU driver
> > > to prepare the GPU for power removal in some specific way and the
> > > latter fails if the GPU has not been prepared as expected.
> > >
> > > Because testing indicates that the Windows 7 path in the platform
> > > firmware works, it may be worth trying to do what it does to the PCIe
> > > link before invoking the _OFF method for the power resource
> > > controlling the GPU power.
> > >
> >
> > Remember the pre Win8 path required calling a DSM method to actually
> > power the card down, I think by the time we reach these methods in
> > those cases the card is already gone.
> >
> > Dave.
> >
>
> The point was that the firmware seems to do more in the legacy paths
> and maybe we just have to do those things inside the driver instead
> when using the new method. Also the _DSM call just wraps around the
> interfaces on newer firmware anyway. The OS check is usually what
> makes the difference. I might be wrong about the _DSM call just
> wrapping though, but I think I saw it at least in some firmware at
> some point.


^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2020-01-13 15:31                                                                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2020-01-13 15:31 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

okay.. so checking whatever is the difference with _REV being 5
(meaning the firmware uses the legacy paths) doesn't help in any way.
It's using a different method to turn the link of and the other ACPI
variables touched either point to undocumented registers on the PCI
bridge or internal ACPI memory...

so, anybody with any other ideas? I really wished the nvidia driver
would enable runpm on pre turing GPUs, but that's sadly not the case
and on Turing things seem to be totally different, so it wouldn't help
to check there as well... *sigh*

On Tue, Dec 10, 2019 at 9:49 PM Karol Herbst <kherbst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>
> On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie <airlied-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> >
> > On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:
> > >
> > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > >
> > > > anybody any other ideas?
> > >
> > > Not yet, but I'm trying to collect some more information.
> > >
> > > > It seems that both patches don't really fix
> > > > the issue and I have no idea left on my side to try out. The only
> > > > thing left I could do to further investigate would be to reverse
> > > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > > > but I've heard users having similar issues to the one Lyude told us
> > > > about... and I couldn't verify that the patches help there either in a
> > > > reliable way.
> > >
> > > It looks like the newer (8+) versions of Windows expect the GPU driver
> > > to prepare the GPU for power removal in some specific way and the
> > > latter fails if the GPU has not been prepared as expected.
> > >
> > > Because testing indicates that the Windows 7 path in the platform
> > > firmware works, it may be worth trying to do what it does to the PCIe
> > > link before invoking the _OFF method for the power resource
> > > controlling the GPU power.
> > >
> >
> > Remember the pre Win8 path required calling a DSM method to actually
> > power the card down, I think by the time we reach these methods in
> > those cases the card is already gone.
> >
> > Dave.
> >
>
> The point was that the firmware seems to do more in the legacy paths
> and maybe we just have to do those things inside the driver instead
> when using the new method. Also the _DSM call just wraps around the
> interfaces on newer firmware anyway. The OS check is usually what
> makes the difference. I might be wrong about the _DSM call just
> wrapping though, but I think I saw it at least in some firmware at
> some point.

^ permalink raw reply	[flat|nested] 150+ messages in thread

* Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
@ 2020-01-13 15:31                                                                 ` Karol Herbst
  0 siblings, 0 replies; 150+ messages in thread
From: Karol Herbst @ 2020-01-13 15:31 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rafael J. Wysocki, Linux PCI, Mika Westerberg, Linux PM,
	Rafael J . Wysocki, LKML, dri-devel, Mario Limonciello,
	Bjorn Helgaas, nouveau

okay.. so checking whatever is the difference with _REV being 5
(meaning the firmware uses the legacy paths) doesn't help in any way.
It's using a different method to turn the link of and the other ACPI
variables touched either point to undocumented registers on the PCI
bridge or internal ACPI memory...

so, anybody with any other ideas? I really wished the nvidia driver
would enable runpm on pre turing GPUs, but that's sadly not the case
and on Turing things seem to be totally different, so it wouldn't help
to check there as well... *sigh*

On Tue, Dec 10, 2019 at 9:49 PM Karol Herbst <kherbst@redhat.com> wrote:
>
> On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie <airlied@gmail.com> wrote:
> >
> > On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst <kherbst@redhat.com> wrote:
> > > >
> > > > anybody any other ideas?
> > >
> > > Not yet, but I'm trying to collect some more information.
> > >
> > > > It seems that both patches don't really fix
> > > > the issue and I have no idea left on my side to try out. The only
> > > > thing left I could do to further investigate would be to reverse
> > > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now,
> > > > but I've heard users having similar issues to the one Lyude told us
> > > > about... and I couldn't verify that the patches help there either in a
> > > > reliable way.
> > >
> > > It looks like the newer (8+) versions of Windows expect the GPU driver
> > > to prepare the GPU for power removal in some specific way and the
> > > latter fails if the GPU has not been prepared as expected.
> > >
> > > Because testing indicates that the Windows 7 path in the platform
> > > firmware works, it may be worth trying to do what it does to the PCIe
> > > link before invoking the _OFF method for the power resource
> > > controlling the GPU power.
> > >
> >
> > Remember the pre Win8 path required calling a DSM method to actually
> > power the card down, I think by the time we reach these methods in
> > those cases the card is already gone.
> >
> > Dave.
> >
>
> The point was that the firmware seems to do more in the legacy paths
> and maybe we just have to do those things inside the driver instead
> when using the new method. Also the _DSM call just wraps around the
> interfaces on newer firmware anyway. The OS check is usually what
> makes the difference. I might be wrong about the _DSM call just
> wrapping though, but I think I saw it at least in some firmware at
> some point.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 150+ messages in thread

end of thread, other threads:[~2020-01-13 15:32 UTC | newest]

Thread overview: 150+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-17 12:19 [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges Karol Herbst
2019-11-14 19:17 ` Karol Herbst
2019-11-14 19:17   ` Karol Herbst
2019-11-19 20:06 ` Dave Airlie
2019-11-19 20:06   ` Dave Airlie
2019-11-19 20:06   ` Dave Airlie
2019-11-19 21:49 ` Bjorn Helgaas
2019-11-19 21:49   ` Bjorn Helgaas
2019-11-19 22:26   ` Karol Herbst
2019-11-19 22:26     ` Karol Herbst
2019-11-19 22:26     ` Karol Herbst
2019-11-19 22:57     ` Bjorn Helgaas
2019-11-19 22:57       ` Bjorn Helgaas
2019-11-20 10:18     ` Mika Westerberg
2019-11-20 10:18       ` Mika Westerberg
2019-11-20 10:52       ` Rafael J. Wysocki
2019-11-20 10:52         ` Rafael J. Wysocki
2019-11-20 10:52         ` Rafael J. Wysocki
2019-11-20 11:22         ` Mika Westerberg
2019-11-20 11:22           ` Mika Westerberg
2019-11-20 11:48           ` Rafael J. Wysocki
2019-11-20 11:48             ` Rafael J. Wysocki
2019-11-20 11:51             ` Karol Herbst
2019-11-20 11:51               ` Karol Herbst
2019-11-20 12:06               ` Rafael J. Wysocki
2019-11-20 12:06                 ` Rafael J. Wysocki
2019-11-20 12:06                 ` Rafael J. Wysocki
2019-11-20 12:09                 ` Karol Herbst
2019-11-20 12:09                   ` Karol Herbst
2019-11-20 12:09                   ` Karol Herbst
2019-11-20 12:14                   ` Rafael J. Wysocki
2019-11-20 12:14                     ` Rafael J. Wysocki
2019-11-20 12:19                     ` Karol Herbst
2019-11-20 12:19                       ` Karol Herbst
2019-11-20 12:11                 ` Rafael J. Wysocki
2019-11-20 12:11                   ` Rafael J. Wysocki
2019-11-20 11:51           ` Mika Westerberg
2019-11-20 11:51             ` Mika Westerberg
2019-11-20 11:54             ` Karol Herbst
2019-11-20 11:54               ` Karol Herbst
2019-11-20 11:54               ` Karol Herbst
2019-11-20 11:58               ` Karol Herbst
2019-11-20 11:58                 ` Karol Herbst
2019-11-20 11:58                 ` Karol Herbst
2019-11-20 12:09                 ` Mika Westerberg
2019-11-20 12:09                   ` Mika Westerberg
2019-11-20 12:11                   ` Karol Herbst
2019-11-20 12:11                     ` Karol Herbst
2019-11-20 12:11                     ` Karol Herbst
2019-11-20 15:15                     ` Mika Westerberg
2019-11-20 15:15                       ` Mika Westerberg
2019-11-20 15:37                       ` Karol Herbst
2019-11-20 15:37                         ` Karol Herbst
2019-11-20 15:53                         ` Mika Westerberg
2019-11-20 15:53                           ` Mika Westerberg
2019-11-20 16:23                           ` Mika Westerberg
2019-11-20 16:23                             ` Mika Westerberg
2019-11-20 21:36                             ` Karol Herbst
2019-11-20 21:36                               ` Karol Herbst
2019-11-20 21:36                               ` Karol Herbst
2019-11-21 10:14                               ` Mika Westerberg
2019-11-21 10:14                                 ` Mika Westerberg
2019-11-21 11:03                                 ` Rafael J. Wysocki
2019-11-21 11:03                                   ` Rafael J. Wysocki
2019-11-21 11:08                                   ` Rafael J. Wysocki
2019-11-21 11:08                                     ` Rafael J. Wysocki
2019-11-21 11:08                                     ` Rafael J. Wysocki
2019-11-21 11:15                                     ` Rafael J. Wysocki
2019-11-21 11:15                                       ` Rafael J. Wysocki
2019-11-21 11:17                                   ` Mika Westerberg
2019-11-21 11:17                                     ` Mika Westerberg
2019-11-21 11:31                                     ` Rafael J. Wysocki
2019-11-21 11:31                                       ` Rafael J. Wysocki
2019-11-20 21:37                           ` Rafael J. Wysocki
2019-11-20 21:37                             ` Rafael J. Wysocki
2019-11-20 21:40                             ` Karol Herbst
2019-11-20 21:40                               ` Karol Herbst
2019-11-20 22:29                               ` Rafael J. Wysocki
2019-11-20 22:29                                 ` Rafael J. Wysocki
2019-11-21 11:28                                 ` Mika Westerberg
2019-11-21 11:28                                   ` Mika Westerberg
2019-11-21 11:34                                   ` Rafael J. Wysocki
2019-11-21 11:34                                     ` Rafael J. Wysocki
2019-11-21 11:34                                     ` Rafael J. Wysocki
2019-11-21 11:46                                     ` Mika Westerberg
2019-11-21 11:46                                       ` Mika Westerberg
2019-11-21 12:52                                       ` Mika Westerberg
2019-11-21 12:52                                         ` Mika Westerberg
2019-11-21 12:56                                         ` Karol Herbst
2019-11-21 12:56                                           ` Karol Herbst
2019-11-21 15:43                                         ` Rafael J. Wysocki
2019-11-21 15:43                                           ` Rafael J. Wysocki
2019-11-21 15:43                                           ` Rafael J. Wysocki
2019-11-21 19:49                                           ` Mika Westerberg
2019-11-21 19:49                                             ` Mika Westerberg
2019-11-21 22:39                                             ` Rafael J. Wysocki
2019-11-21 22:39                                               ` Rafael J. Wysocki
2019-11-21 22:39                                               ` Rafael J. Wysocki
2019-11-21 22:50                                               ` Karol Herbst
2019-11-21 22:50                                                 ` Karol Herbst
2019-11-22  0:13                                                 ` Karol Herbst
2019-11-22  0:13                                                   ` Karol Herbst
2019-11-22  9:07                                                   ` Rafael J. Wysocki
2019-11-22  9:07                                                     ` Rafael J. Wysocki
2019-11-22 11:30                                                     ` Karol Herbst
2019-11-22 11:30                                                       ` Karol Herbst
2019-11-22 11:30                                                       ` Karol Herbst
2019-11-22 10:36                                               ` Mika Westerberg
2019-11-22 10:36                                                 ` Mika Westerberg
2019-11-22 11:30                                                 ` Rafael J. Wysocki
2019-11-22 11:30                                                   ` Rafael J. Wysocki
2019-11-22 11:34                                                   ` Karol Herbst
2019-11-22 11:34                                                     ` Karol Herbst
2019-11-22 11:34                                                     ` Karol Herbst
2019-11-22 11:54                                                     ` Rafael J. Wysocki
2019-11-22 11:54                                                       ` Rafael J. Wysocki
2019-11-22 11:52                                                   ` Mika Westerberg
2019-11-22 11:52                                                     ` Mika Westerberg
2019-11-22 12:15                                                     ` Rafael J. Wysocki
2019-11-22 12:15                                                       ` Rafael J. Wysocki
2019-11-21 12:52                                       ` Karol Herbst
2019-11-21 12:52                                         ` Karol Herbst
2019-11-21 15:47                                         ` Rafael J. Wysocki
2019-11-21 15:47                                           ` Rafael J. Wysocki
2019-11-21 16:06                                           ` Karol Herbst
2019-11-21 16:06                                             ` Karol Herbst
2019-11-21 16:39                                             ` Rafael J. Wysocki
2019-11-21 16:39                                               ` Rafael J. Wysocki
2019-11-26 23:10                                               ` Lyude Paul
2019-11-26 23:10                                                 ` Lyude Paul
2019-11-27 11:48                                                 ` Mika Westerberg
2019-11-27 11:48                                                   ` Mika Westerberg
2019-11-27 11:51                                                   ` Karol Herbst
2019-11-27 11:51                                                     ` Karol Herbst
2019-11-27 19:51                                                     ` Lyude Paul
2019-11-27 19:51                                                       ` Lyude Paul
2019-12-09 11:17                                                       ` Karol Herbst
2019-12-09 11:17                                                         ` Karol Herbst
2019-12-09 11:38                                                         ` Rafael J. Wysocki
2019-12-09 11:38                                                           ` Rafael J. Wysocki
2019-12-09 11:38                                                           ` Rafael J. Wysocki
2019-12-09 12:24                                                           ` Karol Herbst
2019-12-09 12:24                                                             ` Karol Herbst
2019-12-10 19:58                                                           ` Dave Airlie
2019-12-10 19:58                                                             ` Dave Airlie
2019-12-10 20:49                                                             ` Karol Herbst
2019-12-10 20:49                                                               ` Karol Herbst
2020-01-13 15:31                                                               ` Karol Herbst
2020-01-13 15:31                                                                 ` Karol Herbst
2020-01-13 15:31                                                                 ` Karol Herbst

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.