linux-pci.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
@ 2019-09-30 17:07 Karol Herbst
  0 siblings, 0 replies; 19+ messages in thread
From: Karol Herbst @ 2019-09-30 17:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Karol Herbst, Bjorn Helgaas, Lyude Paul, Rafael J . Wysocki,
	Mika Westerberg, linux-pci, linux-pm, dri-devel, nouveau

Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
states.

v2: convert to pci_dev quirk
    put a proper technical explenation of the issue as a in-code comment

RFC comment (copied from last sent):
We are quite sure that there is a higher amount of bridges affected by this,
but I was only testing it on my own machine for now.

I've stresstested runpm by doing 5000 runpm cycles with that patch applied
and never saw it fail.

I mainly wanted to get a discussion going on if that's a feasable workaround
indeed or if we need something better.

I am also sure, that the nouveau driver itself isn't at fault as I am able
to reproduce the same issue by poking into some PCI registers on the PCIe
bridge to put the GPU into D3cold as it's done in ACPI code.

I've written a little python script to reproduce this issue without the need
of loading nouveau:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py

Signed-off-by: Karol Herbst <kherbst@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Mika Westerberg <mika.westerberg@intel.com>
Cc: linux-pci@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
---
 drivers/pci/pci.c    | 11 ++++++++++
 drivers/pci/quirks.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/pci.h  |  1 +
 3 files changed, 62 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 088fcdc8d2b4..65516b024ee5 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -799,6 +799,13 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
 	return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
 }
 
+static inline bool parent_broken_child_pm(struct pci_dev *dev)
+{
+	if (!dev->bus || !dev->bus->self)
+		return false;
+	return dev->bus->self->broken_child_pm;
+}
+
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
  *			     given PCI device
@@ -844,6 +851,10 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	   || (state == PCI_D2 && !dev->d2_support))
 		return -EIO;
 
+	/* check if the bus controller causes issues */
+	if (state != PCI_D0 && parent_broken_child_pm(dev))
+		return 0;
+
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 
 	/*
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 0f16acc323c6..2be0deec2c3d 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -5198,3 +5198,53 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
 			      PCI_CLASS_DISPLAY_VGA, 8,
 			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+
+/*
+ * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
+ * those were put into D3cold state if they were put into a non D0 PCI PM
+ * device state before doing so.
+ *
+ * This leads to various issue different issues which all manifest differently,
+ * but have the same root cause:
+ *  - AIML code execution hits an infinite loop (as the coe waits on device
+ *    memory to change).
+ *  - kernel crashes, as all pci reads return -1, which most code isn't able
+ *    to handle well enough.
+ *  - sudden shutdowns, as the kernel identified an unrecoverable error after
+ *    userspace tries to access the GPU.
+ *
+ * In all cases dmesg will contain at least one line like this:
+ * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
+ * followed by a lot of nouveau timeouts.
+ *
+ * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
+ * PCIe bridge controller in order to power down the GPU.
+ * Nonetheless, there are other code paths inside the ACPI firmware which use
+ * other registers, which seem to work fine:
+ *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
+ *  - 0xb0 bit 0x10 (link disable)
+ * Changing the conditions inside the firmware by poking into the relevant
+ * addresses does resolve the issue, but it seemed to be ACPI private memory
+ * and not any device accessible memory at all, so there is no portable way of
+ * changing the conditions.
+ *
+ * The only systems where this behavior can be seen are hybrid graphics laptops
+ * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
+ * only occurs in combination with listed Intel PCIe bridge controllers and
+ * the mentioned GPUs or if it's only a hw bug in the bridge controller.
+ *
+ * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
+ * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
+ * in the bridge controller rather than in the GPU.
+ *
+ * This issue was not able to be reproduced on non laptop systems.
+ */
+
+static void quirk_broken_child_pm(struct pci_dev *dev)
+{
+	dev->broken_child_pm = 1;
+	printk("applied broken child pm quirk!\n");
+}
+/* kaby lake */
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1901,
+			quirk_broken_child_pm);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index dd436da7eccc..01eb0a9e6c13 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -413,6 +413,7 @@ struct pci_dev {
 	unsigned int	__aer_firmware_first_valid:1;
 	unsigned int	__aer_firmware_first:1;
 	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
+	unsigned int	broken_child_pm:1;	/* children put into lower PCI PM states won't recover after D3cold transition */
 	unsigned int	io_window_1k:1;		/* Intel bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
 	unsigned int	has_secondary_link:1;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01 10:00                       ` Karol Herbst
@ 2019-10-03  9:47                         ` Rafael J. Wysocki
  0 siblings, 0 replies; 19+ messages in thread
From: Rafael J. Wysocki @ 2019-10-03  9:47 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Mika Westerberg, Bjorn Helgaas, LKML, Lyude Paul, Linux PCI,
	dri-devel, nouveau, Linux PM

On Tuesday, October 1, 2019 12:00:50 PM CEST Karol Herbst wrote:
> On Tue, Oct 1, 2019 at 11:11 AM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > On Tue, Oct 01, 2019 at 10:56:39AM +0200, Karol Herbst wrote:
> > > On Tue, Oct 1, 2019 at 10:47 AM Mika Westerberg
> > > <mika.westerberg@linux.intel.com> wrote:
> > > >
> > > > On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > > > > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > > > > <mika.westerberg@linux.intel.com> wrote:
> > > > > >
> > > > > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > > > > still happens with your patch applied. The machine simply gets shut down.
> > > > > > >
> > > > > > > dmesg can be found here:
> > > > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > > > > >
> > > > > > Looking your dmesg:
> > > > > >
> > > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > > > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > > > > >
> > > > > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > > > > access from userspace:
> > > > > >
> > > > > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > > > > >
> > > > > > and for some reason it does not get resumed properly. There are also few
> > > > > > warnings from ACPI that might be relevant:
> > > > > >
> > > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > > >
> > > > >
> > > > > afaik this is the case for essentially every laptop out there.
> > > >
> > > > OK, so they are harmless?
> > > >
> > >
> > > yes
> > >
> > > > > > This seems to be Dell XPS 9560 which I think has been around some time
> > > > > > already so I wonder why we only see issues now. Has it ever worked for
> > > > > > you or maybe there is a regression that causes it to happen now?
> > > > >
> > > > > oh, it's broken since forever, we just tried to get more information
> > > > > from Nvidia if they know what this is all about, but we got nothing
> > > > > useful.
> > > > >
> > > > > We were also hoping to find a reliable fix or workaround we could have
> > > > > inside nouveau to fix that as I think nouveau is the only driver
> > > > > actually hit by this issue, but nothing turned out to be reliable
> > > > > enough.
> > > >
> > > > Can't you just block runtime PM from the nouveau driver until this is
> > > > understood better? That can be done by calling pm_runtime_forbid() (or
> > > > not calling pm_runtime_allow() in the driver). Or in case of PCI driver
> > > > you just don't decrease the reference count when probe() ends.
> > > >
> > >
> > > the thing is, it does work for a lot of laptops. We could only observe
> > > this on kaby lake and skylake ones. Even on Cannon Lakes it seems to
> > > work just fine.
> >
> > Can't you then limit it to those?
> >
> > I've experienced that Kabylake root ports can enter and exit in D3cold
> > just fine because we do that for Thunderbolt for example. But that
> > always requires help from ACPI. If the system is using non-standard ACPI
> > methods for example that may require some tricks in the driver side.
> >
> 
> yeah.. I am not quite sure what's actually the root cause. I was also
> trying to use the same PCI registers ACPI is using to trigger this
> issue on a normal desktop, no luck. Using the same registers does
> trigger the issue (hence the script).
> 
> The script is essentially just doing what ACPI does, just skipping a lot.
> 
> > > > I think that would be much better than blocking any devices behind
> > > > Kabylake PCIe root ports from entering D3 (I don't really think the
> > > > problem is in the root ports itself but there is something we are
> > > > missing when the NVIDIA GPU is put into D3cold or back from there).
> > >
> > > I highly doubt there is anything wrong with the GPU alone as we have
> > > too many indications which tell us otherwise.
> > >
> > > Anyway, at this point I don't know where to look further for what's
> > > actually wrong. And apparently it works on Windows, but I don't know
> > > why and I have no idea what Windows does on such systems to make it
> > > work reliably.
> >
> > By works you mean that Windows is able to put it into D3cold and back?
> > If that's the case it may be that there is some ACPI magic that the
> > Windows driver does and we of course are missing in Linux.
> 
> Afaik that's the case. We were talking with Nvidia about it, but they
> are not aware of any issues generally. (on Windows, nor the hardware).
> No idea if we can trust their statements though.
> 
> But yeah, it might be that on Windows they still do _DSM calls or
> something... but until today, Nvidia didn't provide any documentation
> to us for that.

So IMO in that case the right approach is to quirk the combinations of
GPU/root complex that are known problematic.

Quirking the root complex alone is likely to affect working configurations
which generally should be avoided.




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01 19:34                     ` Bjorn Helgaas
@ 2019-10-02  7:51                       ` Rafael J. Wysocki
  0 siblings, 0 replies; 19+ messages in thread
From: Rafael J. Wysocki @ 2019-10-02  7:51 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Karol Herbst, Mika Westerberg, LKML, Lyude Paul, Linux PCI,
	dri-devel, nouveau, Rafael J. Wysocki, Linux PM,
	ACPI Devel Maling List, Schmauss, Erik, Robert Moore

On Tue, Oct 1, 2019 at 9:34 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Tue, Oct 01, 2019 at 06:21:28PM +0200, Karol Herbst wrote:
> > On Tue, Oct 1, 2019 at 3:27 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > > > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > > > <mika.westerberg@linux.intel.com> wrote:
> > > > >
> > > > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > > > still happens with your patch applied. The machine simply gets shut down.
> > > > > >
> > > > > > dmesg can be found here:
> > > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > > > >
> > > > > Looking your dmesg:
> > > > >
> > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > > > >
> > > > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > > > access from userspace:
> > > > >
> > > > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > > > >
> > > > > and for some reason it does not get resumed properly. There are also few
> > > > > warnings from ACPI that might be relevant:
> > > > >
> > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > >
> > > > afaik this is the case for essentially every laptop out there.
> > >
> > > I think we should look into this a little bit.
> > > acpi_ns_check_argument_types() checks the argument type and prints
> > > this message, but AFAICT it doesn't actually fix anything or prevent
> > > execution of the method, so I have no idea what happens when we
> > > actually execute the _DSM.
> >
> > I can assure you that this warning happens on every single laptop out
> > there with dual Nvidia graphics and it's essentially just a firmware
> > bug. And it never caused any issues on any of the older laptops (or
> > newest one for that matter).
>
> Rafael, do you know anything about this?

IIRC ACPICA will simply run the method with the assumption that the
AML in there will deal with the arguments properly anyway.

> If ACPI has some sort of workaround so it can execute the method correctly
> anyway, maybe we should remove or reword the warning?

I can agree that printing these warnings on a user system by default
is not very useful, at least as long as no visible functional issues
are present, but if there are such issues, it is good to know that
something fishy is going on.  For instance, while the method may
execute successfully, the result of that may not be as expected.

So maybe they should be debug level or similar.

> Or if this does prevent execution of the method, maybe we need to add
> a workaround since the problem is so prevalent in the field?

As par the above, no workarounds should be needed, but I'll let Bob
and Erik (CCed now) confirm or deny this.

A side note: please CC all discussions regarding general ACPI issues
to linux-acpi, so they can get the attention of all of the right
people (who may not subscribe linux-pci, for example).

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01 16:21                   ` Karol Herbst
@ 2019-10-01 19:34                     ` Bjorn Helgaas
  2019-10-02  7:51                       ` Rafael J. Wysocki
  0 siblings, 1 reply; 19+ messages in thread
From: Bjorn Helgaas @ 2019-10-01 19:34 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Mika Westerberg, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Tue, Oct 01, 2019 at 06:21:28PM +0200, Karol Herbst wrote:
> On Tue, Oct 1, 2019 at 3:27 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > > <mika.westerberg@linux.intel.com> wrote:
> > > >
> > > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > > still happens with your patch applied. The machine simply gets shut down.
> > > > >
> > > > > dmesg can be found here:
> > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > > >
> > > > Looking your dmesg:
> > > >
> > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > > >
> > > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > > access from userspace:
> > > >
> > > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > > >
> > > > and for some reason it does not get resumed properly. There are also few
> > > > warnings from ACPI that might be relevant:
> > > >
> > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > >
> > > afaik this is the case for essentially every laptop out there.
> >
> > I think we should look into this a little bit.
> > acpi_ns_check_argument_types() checks the argument type and prints
> > this message, but AFAICT it doesn't actually fix anything or prevent
> > execution of the method, so I have no idea what happens when we
> > actually execute the _DSM.
> 
> I can assure you that this warning happens on every single laptop out
> there with dual Nvidia graphics and it's essentially just a firmware
> bug. And it never caused any issues on any of the older laptops (or
> newest one for that matter).

Rafael, do you know anything about this?  If ACPI has some sort of
workaround so it can execute the method correctly anyway, maybe we
should remove or reword the warning?

Or if this does prevent execution of the method, maybe we need to add
a workaround since the problem is so prevalent in the field?

> > If we execute this _DSM as part of power management, and the _DSM
> > doesn't work right, it would be no surprise that we have problems.
> >
> > Maybe we could learn something by turning on ACPI_DB_PARSE output (see
> > Documentation/firmware-guide/acpi/debug.rst).
> >
> > You must have an acpidump already from all your investigation.  Can
> > you put it somewhere, e.g., bugzilla.kernel.org, and include a URL?
> 
> Will do so later, right now I am traveling to XDC and will have more
> time for that then.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01 13:27                 ` Bjorn Helgaas
@ 2019-10-01 16:21                   ` Karol Herbst
  2019-10-01 19:34                     ` Bjorn Helgaas
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-10-01 16:21 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Mika Westerberg, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Tue, Oct 1, 2019 at 3:27 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > <mika.westerberg@linux.intel.com> wrote:
> > >
> > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > still happens with your patch applied. The machine simply gets shut down.
> > > >
> > > > dmesg can be found here:
> > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > >
> > > Looking your dmesg:
> > >
> > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > >
> > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > access from userspace:
> > >
> > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > >
> > > and for some reason it does not get resumed properly. There are also few
> > > warnings from ACPI that might be relevant:
> > >
> > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> >
> > afaik this is the case for essentially every laptop out there.
>
> I think we should look into this a little bit.
> acpi_ns_check_argument_types() checks the argument type and prints
> this message, but AFAICT it doesn't actually fix anything or prevent
> execution of the method, so I have no idea what happens when we
> actually execute the _DSM.
>

I can assure you that this warning happens on every single laptop out
there with dual Nvidia graphics and it's essentially just a firmware
bug. And it never caused any issues on any of the older laptops (or
newest one for that matter).

> If we execute this _DSM as part of power management, and the _DSM
> doesn't work right, it would be no surprise that we have problems.
>
> Maybe we could learn something by turning on ACPI_DB_PARSE output (see
> Documentation/firmware-guide/acpi/debug.rst).
>
> You must have an acpidump already from all your investigation.  Can
> you put it somewhere, e.g., bugzilla.kernel.org, and include a URL?

Will do so later, right now I am traveling to XDC and will have more
time for that then.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30 16:36               ` Karol Herbst
  2019-10-01  8:46                 ` Mika Westerberg
@ 2019-10-01 13:27                 ` Bjorn Helgaas
  2019-10-01 16:21                   ` Karol Herbst
  1 sibling, 1 reply; 19+ messages in thread
From: Bjorn Helgaas @ 2019-10-01 13:27 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Mika Westerberg, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > still happens with your patch applied. The machine simply gets shut down.
> > >
> > > dmesg can be found here:
> > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > Looking your dmesg:
> >
> > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> >
> > I would assume it runtime suspends here. Then it wakes up because of PCI
> > access from userspace:
> >
> > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> >
> > and for some reason it does not get resumed properly. There are also few
> > warnings from ACPI that might be relevant:
> >
> > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> 
> afaik this is the case for essentially every laptop out there.

I think we should look into this a little bit.
acpi_ns_check_argument_types() checks the argument type and prints
this message, but AFAICT it doesn't actually fix anything or prevent
execution of the method, so I have no idea what happens when we
actually execute the _DSM.

If we execute this _DSM as part of power management, and the _DSM
doesn't work right, it would be no surprise that we have problems.

Maybe we could learn something by turning on ACPI_DB_PARSE output (see
Documentation/firmware-guide/acpi/debug.rst).

You must have an acpidump already from all your investigation.  Can
you put it somewhere, e.g., bugzilla.kernel.org, and include a URL?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01  9:11                     ` Mika Westerberg
@ 2019-10-01 10:00                       ` Karol Herbst
  2019-10-03  9:47                         ` Rafael J. Wysocki
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-10-01 10:00 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Tue, Oct 1, 2019 at 11:11 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> On Tue, Oct 01, 2019 at 10:56:39AM +0200, Karol Herbst wrote:
> > On Tue, Oct 1, 2019 at 10:47 AM Mika Westerberg
> > <mika.westerberg@linux.intel.com> wrote:
> > >
> > > On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > > > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > > > <mika.westerberg@linux.intel.com> wrote:
> > > > >
> > > > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > > > still happens with your patch applied. The machine simply gets shut down.
> > > > > >
> > > > > > dmesg can be found here:
> > > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > > > >
> > > > > Looking your dmesg:
> > > > >
> > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > > > >
> > > > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > > > access from userspace:
> > > > >
> > > > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > > > >
> > > > > and for some reason it does not get resumed properly. There are also few
> > > > > warnings from ACPI that might be relevant:
> > > > >
> > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > >
> > > >
> > > > afaik this is the case for essentially every laptop out there.
> > >
> > > OK, so they are harmless?
> > >
> >
> > yes
> >
> > > > > This seems to be Dell XPS 9560 which I think has been around some time
> > > > > already so I wonder why we only see issues now. Has it ever worked for
> > > > > you or maybe there is a regression that causes it to happen now?
> > > >
> > > > oh, it's broken since forever, we just tried to get more information
> > > > from Nvidia if they know what this is all about, but we got nothing
> > > > useful.
> > > >
> > > > We were also hoping to find a reliable fix or workaround we could have
> > > > inside nouveau to fix that as I think nouveau is the only driver
> > > > actually hit by this issue, but nothing turned out to be reliable
> > > > enough.
> > >
> > > Can't you just block runtime PM from the nouveau driver until this is
> > > understood better? That can be done by calling pm_runtime_forbid() (or
> > > not calling pm_runtime_allow() in the driver). Or in case of PCI driver
> > > you just don't decrease the reference count when probe() ends.
> > >
> >
> > the thing is, it does work for a lot of laptops. We could only observe
> > this on kaby lake and skylake ones. Even on Cannon Lakes it seems to
> > work just fine.
>
> Can't you then limit it to those?
>
> I've experienced that Kabylake root ports can enter and exit in D3cold
> just fine because we do that for Thunderbolt for example. But that
> always requires help from ACPI. If the system is using non-standard ACPI
> methods for example that may require some tricks in the driver side.
>

yeah.. I am not quite sure what's actually the root cause. I was also
trying to use the same PCI registers ACPI is using to trigger this
issue on a normal desktop, no luck. Using the same registers does
trigger the issue (hence the script).

The script is essentially just doing what ACPI does, just skipping a lot.

> > > I think that would be much better than blocking any devices behind
> > > Kabylake PCIe root ports from entering D3 (I don't really think the
> > > problem is in the root ports itself but there is something we are
> > > missing when the NVIDIA GPU is put into D3cold or back from there).
> >
> > I highly doubt there is anything wrong with the GPU alone as we have
> > too many indications which tell us otherwise.
> >
> > Anyway, at this point I don't know where to look further for what's
> > actually wrong. And apparently it works on Windows, but I don't know
> > why and I have no idea what Windows does on such systems to make it
> > work reliably.
>
> By works you mean that Windows is able to put it into D3cold and back?
> If that's the case it may be that there is some ACPI magic that the
> Windows driver does and we of course are missing in Linux.

Afaik that's the case. We were talking with Nvidia about it, but they
are not aware of any issues generally. (on Windows, nor the hardware).
No idea if we can trust their statements though.

But yeah, it might be that on Windows they still do _DSM calls or
something... but until today, Nvidia didn't provide any documentation
to us for that.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01  8:56                   ` Karol Herbst
@ 2019-10-01  9:11                     ` Mika Westerberg
  2019-10-01 10:00                       ` Karol Herbst
  0 siblings, 1 reply; 19+ messages in thread
From: Mika Westerberg @ 2019-10-01  9:11 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Tue, Oct 01, 2019 at 10:56:39AM +0200, Karol Herbst wrote:
> On Tue, Oct 1, 2019 at 10:47 AM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > > <mika.westerberg@linux.intel.com> wrote:
> > > >
> > > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > > still happens with your patch applied. The machine simply gets shut down.
> > > > >
> > > > > dmesg can be found here:
> > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > > >
> > > > Looking your dmesg:
> > > >
> > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > > >
> > > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > > access from userspace:
> > > >
> > > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > > >
> > > > and for some reason it does not get resumed properly. There are also few
> > > > warnings from ACPI that might be relevant:
> > > >
> > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > >
> > >
> > > afaik this is the case for essentially every laptop out there.
> >
> > OK, so they are harmless?
> >
> 
> yes
> 
> > > > This seems to be Dell XPS 9560 which I think has been around some time
> > > > already so I wonder why we only see issues now. Has it ever worked for
> > > > you or maybe there is a regression that causes it to happen now?
> > >
> > > oh, it's broken since forever, we just tried to get more information
> > > from Nvidia if they know what this is all about, but we got nothing
> > > useful.
> > >
> > > We were also hoping to find a reliable fix or workaround we could have
> > > inside nouveau to fix that as I think nouveau is the only driver
> > > actually hit by this issue, but nothing turned out to be reliable
> > > enough.
> >
> > Can't you just block runtime PM from the nouveau driver until this is
> > understood better? That can be done by calling pm_runtime_forbid() (or
> > not calling pm_runtime_allow() in the driver). Or in case of PCI driver
> > you just don't decrease the reference count when probe() ends.
> >
> 
> the thing is, it does work for a lot of laptops. We could only observe
> this on kaby lake and skylake ones. Even on Cannon Lakes it seems to
> work just fine.

Can't you then limit it to those?

I've experienced that Kabylake root ports can enter and exit in D3cold
just fine because we do that for Thunderbolt for example. But that
always requires help from ACPI. If the system is using non-standard ACPI
methods for example that may require some tricks in the driver side.

> > I think that would be much better than blocking any devices behind
> > Kabylake PCIe root ports from entering D3 (I don't really think the
> > problem is in the root ports itself but there is something we are
> > missing when the NVIDIA GPU is put into D3cold or back from there).
> 
> I highly doubt there is anything wrong with the GPU alone as we have
> too many indications which tell us otherwise.
> 
> Anyway, at this point I don't know where to look further for what's
> actually wrong. And apparently it works on Windows, but I don't know
> why and I have no idea what Windows does on such systems to make it
> work reliably.

By works you mean that Windows is able to put it into D3cold and back?
If that's the case it may be that there is some ACPI magic that the
Windows driver does and we of course are missing in Linux.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-10-01  8:46                 ` Mika Westerberg
@ 2019-10-01  8:56                   ` Karol Herbst
  2019-10-01  9:11                     ` Mika Westerberg
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-10-01  8:56 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Tue, Oct 1, 2019 at 10:47 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> > On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> > <mika.westerberg@linux.intel.com> wrote:
> > >
> > > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > > still happens with your patch applied. The machine simply gets shut down.
> > > >
> > > > dmesg can be found here:
> > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> > >
> > > Looking your dmesg:
> > >
> > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> > >
> > > I would assume it runtime suspends here. Then it wakes up because of PCI
> > > access from userspace:
> > >
> > > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> > >
> > > and for some reason it does not get resumed properly. There are also few
> > > warnings from ACPI that might be relevant:
> > >
> > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > >
> >
> > afaik this is the case for essentially every laptop out there.
>
> OK, so they are harmless?
>

yes

> > > This seems to be Dell XPS 9560 which I think has been around some time
> > > already so I wonder why we only see issues now. Has it ever worked for
> > > you or maybe there is a regression that causes it to happen now?
> >
> > oh, it's broken since forever, we just tried to get more information
> > from Nvidia if they know what this is all about, but we got nothing
> > useful.
> >
> > We were also hoping to find a reliable fix or workaround we could have
> > inside nouveau to fix that as I think nouveau is the only driver
> > actually hit by this issue, but nothing turned out to be reliable
> > enough.
>
> Can't you just block runtime PM from the nouveau driver until this is
> understood better? That can be done by calling pm_runtime_forbid() (or
> not calling pm_runtime_allow() in the driver). Or in case of PCI driver
> you just don't decrease the reference count when probe() ends.
>

the thing is, it does work for a lot of laptops. We could only observe
this on kaby lake and skylake ones. Even on Cannon Lakes it seems to
work just fine.

> I think that would be much better than blocking any devices behind
> Kabylake PCIe root ports from entering D3 (I don't really think the
> problem is in the root ports itself but there is something we are
> missing when the NVIDIA GPU is put into D3cold or back from there).

I highly doubt there is anything wrong with the GPU alone as we have
too many indications which tell us otherwise.

Anyway, at this point I don't know where to look further for what's
actually wrong. And apparently it works on Windows, but I don't know
why and I have no idea what Windows does on such systems to make it
work reliably.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30 16:36               ` Karol Herbst
@ 2019-10-01  8:46                 ` Mika Westerberg
  2019-10-01  8:56                   ` Karol Herbst
  2019-10-01 13:27                 ` Bjorn Helgaas
  1 sibling, 1 reply; 19+ messages in thread
From: Mika Westerberg @ 2019-10-01  8:46 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 06:36:12PM +0200, Karol Herbst wrote:
> On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > > still happens with your patch applied. The machine simply gets shut down.
> > >
> > > dmesg can be found here:
> > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
> >
> > Looking your dmesg:
> >
> > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> > Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> > Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
> >
> > I would assume it runtime suspends here. Then it wakes up because of PCI
> > access from userspace:
> >
> > Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
> >
> > and for some reason it does not get resumed properly. There are also few
> > warnings from ACPI that might be relevant:
> >
> > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> > Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> >
> 
> afaik this is the case for essentially every laptop out there.

OK, so they are harmless?

> > This seems to be Dell XPS 9560 which I think has been around some time
> > already so I wonder why we only see issues now. Has it ever worked for
> > you or maybe there is a regression that causes it to happen now?
> 
> oh, it's broken since forever, we just tried to get more information
> from Nvidia if they know what this is all about, but we got nothing
> useful.
> 
> We were also hoping to find a reliable fix or workaround we could have
> inside nouveau to fix that as I think nouveau is the only driver
> actually hit by this issue, but nothing turned out to be reliable
> enough.

Can't you just block runtime PM from the nouveau driver until this is
understood better? That can be done by calling pm_runtime_forbid() (or
not calling pm_runtime_allow() in the driver). Or in case of PCI driver
you just don't decrease the reference count when probe() ends.

I think that would be much better than blocking any devices behind
Kabylake PCIe root ports from entering D3 (I don't really think the
problem is in the root ports itself but there is something we are
missing when the NVIDIA GPU is put into D3cold or back from there).

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30 16:30             ` Mika Westerberg
@ 2019-09-30 16:36               ` Karol Herbst
  2019-10-01  8:46                 ` Mika Westerberg
  2019-10-01 13:27                 ` Bjorn Helgaas
  0 siblings, 2 replies; 19+ messages in thread
From: Karol Herbst @ 2019-09-30 16:36 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 6:30 PM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> > still happens with your patch applied. The machine simply gets shut down.
> >
> > dmesg can be found here:
> > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt
>
> Looking your dmesg:
>
> Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
> Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
> Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1
>
> I would assume it runtime suspends here. Then it wakes up because of PCI
> access from userspace:
>
> Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
>
> and for some reason it does not get resumed properly. There are also few
> warnings from ACPI that might be relevant:
>
> Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
> Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
>

afaik this is the case for essentially every laptop out there.

> This seems to be Dell XPS 9560 which I think has been around some time
> already so I wonder why we only see issues now. Has it ever worked for
> you or maybe there is a regression that causes it to happen now?

oh, it's broken since forever, we just tried to get more information
from Nvidia if they know what this is all about, but we got nothing
useful.

We were also hoping to find a reliable fix or workaround we could have
inside nouveau to fix that as I think nouveau is the only driver
actually hit by this issue, but nothing turned out to be reliable
enough.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30 16:05           ` Karol Herbst
@ 2019-09-30 16:30             ` Mika Westerberg
  2019-09-30 16:36               ` Karol Herbst
  0 siblings, 1 reply; 19+ messages in thread
From: Mika Westerberg @ 2019-09-30 16:30 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 06:05:14PM +0200, Karol Herbst wrote:
> still happens with your patch applied. The machine simply gets shut down.
> 
> dmesg can be found here:
> https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt

Looking your dmesg:

Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: DCB version 4.1
Sep 30 17:24:27 kernel: nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
Sep 30 17:24:27 kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:01:00.0 on minor 1

I would assume it runtime suspends here. Then it wakes up because of PCI
access from userspace:

Sep 30 17:24:42 kernel: pci_raw_set_power_state: 56 callbacks suppressed
 
and for some reason it does not get resumed properly. There are also few
warnings from ACPI that might be relevant:

Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)
Sep 30 17:24:27 kernel: ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190509/nsarguments-59)

This seems to be Dell XPS 9560 which I think has been around some time
already so I wonder why we only see issues now. Has it ever worked for
you or maybe there is a regression that causes it to happen now?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30  9:29         ` Mika Westerberg
@ 2019-09-30 16:05           ` Karol Herbst
  2019-09-30 16:30             ` Mika Westerberg
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-09-30 16:05 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

still happens with your patch applied. The machine simply gets shut down.

dmesg can be found here:
https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt

If there are no other things to try out, I will post the updated patch shortly.

On Mon, Sep 30, 2019 at 11:29 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> On Mon, Sep 30, 2019 at 11:15:48AM +0200, Karol Herbst wrote:
> > On Mon, Sep 30, 2019 at 10:05 AM Mika Westerberg
> > <mika.westerberg@linux.intel.com> wrote:
> > >
> > > Hi Karol,
> > >
> > > On Fri, Sep 27, 2019 at 11:53:48PM +0200, Karol Herbst wrote:
> > > > > What exactly is the serious issue?  I guess it's that the rescan
> > > > > doesn't detect the GPU, which means it's not responding to config
> > > > > accesses?  Is there any timing component here, e.g., maybe we're
> > > > > missing some delay like the ones Mika is adding to the reset paths?
> > > >
> > > > When I was checking up on some of the PCI registers of the bridge
> > > > controller, the slot detection told me that there is no device
> > > > recognized anymore. I don't know which register it was anymore, though
> > > > I guess one could read it up in the SoC spec document by Intel.
> > > >
> > > > My guess is, that the bridge controller fails to detect the GPU being
> > > > here or actively threw it of the bus or something. But a normal system
> > > > suspend/resume cycle brings the GPU back online (doing a rescan via
> > > > sysfs gets the device detected again)
> > >
> > > Can you elaborate a bit what kind of scenario the issue happens (e.g
> > > steps how it reproduces)? It was not 100% clear from the changelog. Also
> > > what the result when the failure happens?
> > >
> >
> > yeah, I already have an updated patch in the works which also does the
> > rework Bjorn suggested. Had no time yet to test if I didn't mess it
> > up.
> >
> > I am also thinking of adding a kernel parameter to enable this
> > workaround on demand, but not quite sure on that one yet.
>
> Right, I think it would be good to figure out the root cause before
> adding any workarounds ;-) It might very well be that we are just
> missing something the PCIe spec requires but not implemented in Linux.
>
> > > I see there is a script that does something but unfortunately I'm not
> > > fluent in Python so can't extract the steps how the issue can be
> > > reproduced ;-)
> > >
> > > One thing that I'm working on is that Linux PCI subsystem misses certain
> > > delays that are needed after D3cold -> D0 transition, otherwise the
> > > device and/or link may not be ready before we access it. What you are
> > > experiencing sounds similar. I wonder if you could try the following
> > > patch and see if it makes any difference?
> > >
> > > https://patchwork.kernel.org/patch/11106611/
> >
> > I think I already tried this path. The problem isn't that the device
> > isn't accessible too late, but that it seems that the device
> > completely falls off the bus. But I can retest again just to be sure.
>
> Yes, please try it and share full dmesg if/when the failure still happens.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30  9:15       ` Karol Herbst
@ 2019-09-30  9:29         ` Mika Westerberg
  2019-09-30 16:05           ` Karol Herbst
  0 siblings, 1 reply; 19+ messages in thread
From: Mika Westerberg @ 2019-09-30  9:29 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 11:15:48AM +0200, Karol Herbst wrote:
> On Mon, Sep 30, 2019 at 10:05 AM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > Hi Karol,
> >
> > On Fri, Sep 27, 2019 at 11:53:48PM +0200, Karol Herbst wrote:
> > > > What exactly is the serious issue?  I guess it's that the rescan
> > > > doesn't detect the GPU, which means it's not responding to config
> > > > accesses?  Is there any timing component here, e.g., maybe we're
> > > > missing some delay like the ones Mika is adding to the reset paths?
> > >
> > > When I was checking up on some of the PCI registers of the bridge
> > > controller, the slot detection told me that there is no device
> > > recognized anymore. I don't know which register it was anymore, though
> > > I guess one could read it up in the SoC spec document by Intel.
> > >
> > > My guess is, that the bridge controller fails to detect the GPU being
> > > here or actively threw it of the bus or something. But a normal system
> > > suspend/resume cycle brings the GPU back online (doing a rescan via
> > > sysfs gets the device detected again)
> >
> > Can you elaborate a bit what kind of scenario the issue happens (e.g
> > steps how it reproduces)? It was not 100% clear from the changelog. Also
> > what the result when the failure happens?
> >
> 
> yeah, I already have an updated patch in the works which also does the
> rework Bjorn suggested. Had no time yet to test if I didn't mess it
> up.
> 
> I am also thinking of adding a kernel parameter to enable this
> workaround on demand, but not quite sure on that one yet.

Right, I think it would be good to figure out the root cause before
adding any workarounds ;-) It might very well be that we are just
missing something the PCIe spec requires but not implemented in Linux.

> > I see there is a script that does something but unfortunately I'm not
> > fluent in Python so can't extract the steps how the issue can be
> > reproduced ;-)
> >
> > One thing that I'm working on is that Linux PCI subsystem misses certain
> > delays that are needed after D3cold -> D0 transition, otherwise the
> > device and/or link may not be ready before we access it. What you are
> > experiencing sounds similar. I wonder if you could try the following
> > patch and see if it makes any difference?
> >
> > https://patchwork.kernel.org/patch/11106611/
> 
> I think I already tried this path. The problem isn't that the device
> isn't accessible too late, but that it seems that the device
> completely falls off the bus. But I can retest again just to be sure.

Yes, please try it and share full dmesg if/when the failure still happens.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-30  8:05     ` Mika Westerberg
@ 2019-09-30  9:15       ` Karol Herbst
  2019-09-30  9:29         ` Mika Westerberg
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-09-30  9:15 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

On Mon, Sep 30, 2019 at 10:05 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> Hi Karol,
>
> On Fri, Sep 27, 2019 at 11:53:48PM +0200, Karol Herbst wrote:
> > > What exactly is the serious issue?  I guess it's that the rescan
> > > doesn't detect the GPU, which means it's not responding to config
> > > accesses?  Is there any timing component here, e.g., maybe we're
> > > missing some delay like the ones Mika is adding to the reset paths?
> >
> > When I was checking up on some of the PCI registers of the bridge
> > controller, the slot detection told me that there is no device
> > recognized anymore. I don't know which register it was anymore, though
> > I guess one could read it up in the SoC spec document by Intel.
> >
> > My guess is, that the bridge controller fails to detect the GPU being
> > here or actively threw it of the bus or something. But a normal system
> > suspend/resume cycle brings the GPU back online (doing a rescan via
> > sysfs gets the device detected again)
>
> Can you elaborate a bit what kind of scenario the issue happens (e.g
> steps how it reproduces)? It was not 100% clear from the changelog. Also
> what the result when the failure happens?
>

yeah, I already have an updated patch in the works which also does the
rework Bjorn suggested. Had no time yet to test if I didn't mess it
up.

I am also thinking of adding a kernel parameter to enable this
workaround on demand, but not quite sure on that one yet.

> I see there is a script that does something but unfortunately I'm not
> fluent in Python so can't extract the steps how the issue can be
> reproduced ;-)
>
> One thing that I'm working on is that Linux PCI subsystem misses certain
> delays that are needed after D3cold -> D0 transition, otherwise the
> device and/or link may not be ready before we access it. What you are
> experiencing sounds similar. I wonder if you could try the following
> patch and see if it makes any difference?
>
> https://patchwork.kernel.org/patch/11106611/

I think I already tried this path. The problem isn't that the device
isn't accessible too late, but that it seems that the device
completely falls off the bus. But I can retest again just to be sure.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-27 21:53   ` Karol Herbst
@ 2019-09-30  8:05     ` Mika Westerberg
  2019-09-30  9:15       ` Karol Herbst
  0 siblings, 1 reply; 19+ messages in thread
From: Mika Westerberg @ 2019-09-30  8:05 UTC (permalink / raw)
  To: Karol Herbst
  Cc: Bjorn Helgaas, LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM

Hi Karol,

On Fri, Sep 27, 2019 at 11:53:48PM +0200, Karol Herbst wrote:
> > What exactly is the serious issue?  I guess it's that the rescan
> > doesn't detect the GPU, which means it's not responding to config
> > accesses?  Is there any timing component here, e.g., maybe we're
> > missing some delay like the ones Mika is adding to the reset paths?
> 
> When I was checking up on some of the PCI registers of the bridge
> controller, the slot detection told me that there is no device
> recognized anymore. I don't know which register it was anymore, though
> I guess one could read it up in the SoC spec document by Intel.
> 
> My guess is, that the bridge controller fails to detect the GPU being
> here or actively threw it of the bus or something. But a normal system
> suspend/resume cycle brings the GPU back online (doing a rescan via
> sysfs gets the device detected again)

Can you elaborate a bit what kind of scenario the issue happens (e.g
steps how it reproduces)? It was not 100% clear from the changelog. Also
what the result when the failure happens?

I see there is a script that does something but unfortunately I'm not
fluent in Python so can't extract the steps how the issue can be
reproduced ;-)

One thing that I'm working on is that Linux PCI subsystem misses certain
delays that are needed after D3cold -> D0 transition, otherwise the
device and/or link may not be ready before we access it. What you are
experiencing sounds similar. I wonder if you could try the following
patch and see if it makes any difference?

https://patchwork.kernel.org/patch/11106611/

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-27 21:42 ` Bjorn Helgaas
@ 2019-09-27 21:53   ` Karol Herbst
  2019-09-30  8:05     ` Mika Westerberg
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-09-27 21:53 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: LKML, Lyude Paul, Linux PCI, dri-devel, nouveau,
	Rafael J. Wysocki, Linux PM, Mika Westerberg

On Fri, Sep 27, 2019 at 11:42 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Rafael, Mika, linux-pm]
>
> On Fri, Sep 27, 2019 at 04:44:21PM +0200, Karol Herbst wrote:
> > Fixes runpm breakage mainly on Nvidia GPUs as they are not able to resume.
>
> I don't know what runpm is.  Some userspace utility?  Module
> parameter?
>

runpm aka runtime powermanagement aka runtime_resume and runtime_suspend

> > Works perfectly with this workaround applied.
> >
> > RFC comment:
> > We are quite sure that there is a higher amount of bridges affected by this,
> > but I was only testing it on my own machine for now.
> >
> > I've stresstested runpm by doing 5000 runpm cycles with that patch applied
> > and never saw it fail.
> >
> > I mainly wanted to get a discussion going on if that's a feasable workaround
> > indeed or if we need something better.
> >
> > I am also sure, that the nouveau driver itself isn't at fault as I am able
> > to reproduce the same issue by poking into some PCI registers on the PCIe
> > bridge to put the GPU into D3cold as it's done in ACPI code.
> >
> > I've written a little python script to reproduce this issue without the need
> > of loading nouveau:
> > https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py
>
> Nice script, thanks for sharing it :)  I could learn a lot of useful
> python by studying it.
>
> > Signed-off-by: Karol Herbst <kherbst@redhat.com>
> > Cc: Bjorn Helgaas <bhelgaas@google.com>
> > Cc: Lyude Paul <lyude@redhat.com>
> > Cc: linux-pci@vger.kernel.org
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: nouveau@lists.freedesktop.org
> > ---
> >  drivers/pci/pci.c | 39 +++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 39 insertions(+)
> >
> > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> > index 088fcdc8d2b4..9dbd29ced1ac 100644
> > --- a/drivers/pci/pci.c
> > +++ b/drivers/pci/pci.c
> > @@ -799,6 +799,42 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
> >       return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
> >  }
> >
> > +/*
> > + * some intel bridges cause serious issues with runpm if the client device
> > + * is put into D1/D2/D3hot before putting the client into D3cold via
> > + * platform means (generally ACPI).
>
> You mention Nvidia GPUs above, but I guess the same issue may affect
> other devices?  I would really like to chase this down to a more
> specific issue, e.g., a hardware defect with erratum, an ACPI defect,
> or a Linux defect.  Without the specifics, this is just a band-aid.
>

yep.. but we were trying to talk to Nvidia and Intel about it and had
no luck getting anything out of them so far. We gave up on Nvidia,
Intel is still pending.

> I don't see any relevant requirements in the _OFF description, but I
> don't know much about ACPI power control.
>
> Your script allows several scenarios; I *guess* the one that causes
> the problem is:
>
>   - write 3 (D3hot) to GPU PowerState (PCIE_PM_REG == 0x64, I assume
>     PM Capability Control Register)

correct

>   - write 3 (D3hot) to bridge PowerState (0x84, I assume PM Capability
>     Control Register)

correct, but this seems to be fine and doesn't fix the issue if that
part is skipped

>   - run _OFF on the power resource for the bridge
>
> From your script I assume you do:
>
>   - run _ON on the power resource for the bridge
>   - write 0 (D0) to the bridge PowerState
>
> You do *not* write the GPU PowerState (which we can't do if the GPU is
> in D3cold).  Is there some assumption that it comes out of D3cold via
> some other mechanism, e.g., is the _ON supposed to wake up the GPU?

if the "link" is powered up again (via 0x248, 0xbc or 0xb0 on the GPU
bridge or the ACPI _ON method) the GPU comes up in the D0 state and is
fully operational, well besides the little issue we've got with the D3
write. It can also be worked around by putting the PCIe link into 5.0
and 8.0 mode, but that's not reliable enough as it fails it around 10%
of all tries.

>
> What exactly is the serious issue?  I guess it's that the rescan
> doesn't detect the GPU, which means it's not responding to config
> accesses?  Is there any timing component here, e.g., maybe we're
> missing some delay like the ones Mika is adding to the reset paths?
>

When I was checking up on some of the PCI registers of the bridge
controller, the slot detection told me that there is no device
recognized anymore. I don't know which register it was anymore, though
I guess one could read it up in the SoC spec document by Intel.

My guess is, that the bridge controller fails to detect the GPU being
here or actively threw it of the bus or something. But a normal system
suspend/resume cycle brings the GPU back online (doing a rescan via
sysfs gets the device detected again)

> > + *
> > + * skipping this makes runpm work perfectly fine on such devices.
> > + *
> > + * As far as we know only skylake and kaby lake SoCs are affected.
> > + */
> > +static unsigned short intel_broken_d3_bridges[] = {
> > +     /* kbl */
> > +     0x1901,
> > +};
> > +
> > +static inline bool intel_broken_pci_pm(struct pci_bus *bus)
> > +{
> > +     struct pci_dev *bridge;
> > +     int i;
> > +
> > +     if (!bus || !bus->self)
> > +             return false;
> > +
> > +     bridge = bus->self;
> > +     if (bridge->vendor != PCI_VENDOR_ID_INTEL)
> > +             return false;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(intel_broken_d3_bridges); i++) {
> > +             if (bridge->device == intel_broken_d3_bridges[i]) {
> > +                     pci_err(bridge, "found broken intel bridge\n");
>
> If this ends up being a hardware defect, we should use a quirk to set
> a bit in the pci_dev once, as we do for broken_intx_masking and
> similar bits.

okay, if you think this is the preferred way then I can change the
patch accordingly.

>
> > +                     return true;
> > +             }
> > +     }
> > +
> > +     return false;
> > +}
> > +
> >  /**
> >   * pci_raw_set_power_state - Use PCI PM registers to set the power state of
> >   *                        given PCI device
> > @@ -827,6 +863,9 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
> >       if (state < PCI_D0 || state > PCI_D3hot)
> >               return -EINVAL;
> >
> > +     if (state != PCI_D0 && intel_broken_pci_pm(dev->bus))
> > +             return 0;
> > +
> >       /*
> >        * Validate current state:
> >        * Can enter D0 from any state, but if we can only go deeper
> > --
> > 2.21.0
> >

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
  2019-09-27 14:44 Karol Herbst
@ 2019-09-27 21:42 ` Bjorn Helgaas
  2019-09-27 21:53   ` Karol Herbst
  0 siblings, 1 reply; 19+ messages in thread
From: Bjorn Helgaas @ 2019-09-27 21:42 UTC (permalink / raw)
  To: Karol Herbst
  Cc: linux-kernel, Lyude Paul, linux-pci, dri-devel, nouveau,
	Rafael J. Wysocki, linux-pm, Mika Westerberg

[+cc Rafael, Mika, linux-pm]

On Fri, Sep 27, 2019 at 04:44:21PM +0200, Karol Herbst wrote:
> Fixes runpm breakage mainly on Nvidia GPUs as they are not able to resume.

I don't know what runpm is.  Some userspace utility?  Module
parameter?

> Works perfectly with this workaround applied.
> 
> RFC comment:
> We are quite sure that there is a higher amount of bridges affected by this,
> but I was only testing it on my own machine for now.
> 
> I've stresstested runpm by doing 5000 runpm cycles with that patch applied
> and never saw it fail.
> 
> I mainly wanted to get a discussion going on if that's a feasable workaround
> indeed or if we need something better.
> 
> I am also sure, that the nouveau driver itself isn't at fault as I am able
> to reproduce the same issue by poking into some PCI registers on the PCIe
> bridge to put the GPU into D3cold as it's done in ACPI code.
> 
> I've written a little python script to reproduce this issue without the need
> of loading nouveau:
> https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py

Nice script, thanks for sharing it :)  I could learn a lot of useful
python by studying it.

> Signed-off-by: Karol Herbst <kherbst@redhat.com>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: linux-pci@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> ---
>  drivers/pci/pci.c | 39 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 088fcdc8d2b4..9dbd29ced1ac 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -799,6 +799,42 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
>  	return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
>  }
>  
> +/*
> + * some intel bridges cause serious issues with runpm if the client device
> + * is put into D1/D2/D3hot before putting the client into D3cold via
> + * platform means (generally ACPI).

You mention Nvidia GPUs above, but I guess the same issue may affect
other devices?  I would really like to chase this down to a more
specific issue, e.g., a hardware defect with erratum, an ACPI defect,
or a Linux defect.  Without the specifics, this is just a band-aid.

I don't see any relevant requirements in the _OFF description, but I
don't know much about ACPI power control.

Your script allows several scenarios; I *guess* the one that causes
the problem is:

  - write 3 (D3hot) to GPU PowerState (PCIE_PM_REG == 0x64, I assume
    PM Capability Control Register)
  - write 3 (D3hot) to bridge PowerState (0x84, I assume PM Capability
    Control Register)
  - run _OFF on the power resource for the bridge

From your script I assume you do:

  - run _ON on the power resource for the bridge
  - write 0 (D0) to the bridge PowerState

You do *not* write the GPU PowerState (which we can't do if the GPU is
in D3cold).  Is there some assumption that it comes out of D3cold via
some other mechanism, e.g., is the _ON supposed to wake up the GPU?

What exactly is the serious issue?  I guess it's that the rescan
doesn't detect the GPU, which means it's not responding to config
accesses?  Is there any timing component here, e.g., maybe we're
missing some delay like the ones Mika is adding to the reset paths?

> + *
> + * skipping this makes runpm work perfectly fine on such devices.
> + *
> + * As far as we know only skylake and kaby lake SoCs are affected.
> + */
> +static unsigned short intel_broken_d3_bridges[] = {
> +	/* kbl */
> +	0x1901,
> +};
> +
> +static inline bool intel_broken_pci_pm(struct pci_bus *bus)
> +{
> +	struct pci_dev *bridge;
> +	int i;
> +
> +	if (!bus || !bus->self)
> +		return false;
> +
> +	bridge = bus->self;
> +	if (bridge->vendor != PCI_VENDOR_ID_INTEL)
> +		return false;
> +
> +	for (i = 0; i < ARRAY_SIZE(intel_broken_d3_bridges); i++) {
> +		if (bridge->device == intel_broken_d3_bridges[i]) {
> +			pci_err(bridge, "found broken intel bridge\n");

If this ends up being a hardware defect, we should use a quirk to set
a bit in the pci_dev once, as we do for broken_intx_masking and
similar bits.

> +			return true;
> +		}
> +	}
> +
> +	return false;
> +}
> +
>  /**
>   * pci_raw_set_power_state - Use PCI PM registers to set the power state of
>   *			     given PCI device
> @@ -827,6 +863,9 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
>  	if (state < PCI_D0 || state > PCI_D3hot)
>  		return -EINVAL;
>  
> +	if (state != PCI_D0 && intel_broken_pci_pm(dev->bus))
> +		return 0;
> +
>  	/*
>  	 * Validate current state:
>  	 * Can enter D0 from any state, but if we can only go deeper
> -- 
> 2.21.0
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
@ 2019-09-27 14:44 Karol Herbst
  2019-09-27 21:42 ` Bjorn Helgaas
  0 siblings, 1 reply; 19+ messages in thread
From: Karol Herbst @ 2019-09-27 14:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Karol Herbst, Bjorn Helgaas, Lyude Paul, linux-pci, dri-devel, nouveau

Fixes runpm breakage mainly on Nvidia GPUs as they are not able to resume.

Works perfectly with this workaround applied.

RFC comment:
We are quite sure that there is a higher amount of bridges affected by this,
but I was only testing it on my own machine for now.

I've stresstested runpm by doing 5000 runpm cycles with that patch applied
and never saw it fail.

I mainly wanted to get a discussion going on if that's a feasable workaround
indeed or if we need something better.

I am also sure, that the nouveau driver itself isn't at fault as I am able
to reproduce the same issue by poking into some PCI registers on the PCIe
bridge to put the GPU into D3cold as it's done in ACPI code.

I've written a little python script to reproduce this issue without the need
of loading nouveau:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py

Signed-off-by: Karol Herbst <kherbst@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: linux-pci@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
---
 drivers/pci/pci.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 088fcdc8d2b4..9dbd29ced1ac 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -799,6 +799,42 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
 	return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
 }
 
+/*
+ * some intel bridges cause serious issues with runpm if the client device
+ * is put into D1/D2/D3hot before putting the client into D3cold via
+ * platform means (generally ACPI).
+ *
+ * skipping this makes runpm work perfectly fine on such devices.
+ *
+ * As far as we know only skylake and kaby lake SoCs are affected.
+ */
+static unsigned short intel_broken_d3_bridges[] = {
+	/* kbl */
+	0x1901,
+};
+
+static inline bool intel_broken_pci_pm(struct pci_bus *bus)
+{
+	struct pci_dev *bridge;
+	int i;
+
+	if (!bus || !bus->self)
+		return false;
+
+	bridge = bus->self;
+	if (bridge->vendor != PCI_VENDOR_ID_INTEL)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(intel_broken_d3_bridges); i++) {
+		if (bridge->device == intel_broken_d3_bridges[i]) {
+			pci_err(bridge, "found broken intel bridge\n");
+			return true;
+		}
+	}
+
+	return false;
+}
+
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
  *			     given PCI device
@@ -827,6 +863,9 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	if (state < PCI_D0 || state > PCI_D3hot)
 		return -EINVAL;
 
+	if (state != PCI_D0 && intel_broken_pci_pm(dev->bus))
+		return 0;
+
 	/*
 	 * Validate current state:
 	 * Can enter D0 from any state, but if we can only go deeper
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2019-10-03  9:47 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-30 17:07 [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges Karol Herbst
  -- strict thread matches above, loose matches on Subject: below --
2019-09-27 14:44 Karol Herbst
2019-09-27 21:42 ` Bjorn Helgaas
2019-09-27 21:53   ` Karol Herbst
2019-09-30  8:05     ` Mika Westerberg
2019-09-30  9:15       ` Karol Herbst
2019-09-30  9:29         ` Mika Westerberg
2019-09-30 16:05           ` Karol Herbst
2019-09-30 16:30             ` Mika Westerberg
2019-09-30 16:36               ` Karol Herbst
2019-10-01  8:46                 ` Mika Westerberg
2019-10-01  8:56                   ` Karol Herbst
2019-10-01  9:11                     ` Mika Westerberg
2019-10-01 10:00                       ` Karol Herbst
2019-10-03  9:47                         ` Rafael J. Wysocki
2019-10-01 13:27                 ` Bjorn Helgaas
2019-10-01 16:21                   ` Karol Herbst
2019-10-01 19:34                     ` Bjorn Helgaas
2019-10-02  7:51                       ` Rafael J. Wysocki

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).