linux-pci.vger.kernel.org archive mirror
* [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
@ 2020-07-31 17:15 Jon Derrick
  2020-08-05  7:54 ` You-Sheng Yang
  2020-08-05 15:09 ` Bjorn Helgaas
  0 siblings, 2 replies; 6+ messages in thread
From: Jon Derrick @ 2020-07-31 17:15 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Lorenzo Pieralisi, linux-pci, Jon Derrick, Kai-Heng Feng, You-Sheng Yang

The pci_save_state call in vmd_suspend can be performed by
pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
the ASPM flow.

The pci_restore_state call in vmd_resume was restoring state after
pci_pm_resume->pci_restore_standard_config had already restored state.
It's also been suspected that the config state should be restored before
re-requesting IRQs.

Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
order to allow proper flow through PCI core power management ASPM code.

Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>
Cc: You-Sheng Yang <vicamo.yang@canonical.com>
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 76d8acbee7d5..15c1d85d8780 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -719,7 +719,6 @@ static int vmd_suspend(struct device *dev)
 	for (i = 0; i < vmd->msix_count; i++)
 		devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]);
 
-	pci_save_state(pdev);
 	return 0;
 }
 
@@ -737,7 +736,6 @@ static int vmd_resume(struct device *dev)
 			return err;
 	}
 
-	pci_restore_state(pdev);
 	return 0;
 }
 #endif
-- 
2.18.1
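
For context, the core behavior the commit log leans on looks roughly like
this (a simplified paraphrase of drivers/pci/pci-driver.c from around the
v5.8 timeframe, not the exact source; surrounding conditions such as the
legacy-PM and skip_bus_pm checks are omitted):

	/* In pci_pm_suspend_noirq(): if the driver's suspend callback did
	 * not save config space itself, the core saves it and lets
	 * pci_prepare_to_sleep() pick a low-power target state.
	 */
	if (!pci_dev->state_saved) {
		pci_save_state(pci_dev);
		if (pci_power_manageable(pci_dev))
			pci_prepare_to_sleep(pci_dev);
	}

	/* In pci_pm_resume(): config space is restored before the driver's
	 * resume callback runs, so vmd_resume() re-requests its IRQs with
	 * state already restored.
	 */
	pci_restore_standard_config(pci_dev);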



* Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
  2020-07-31 17:15 [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code Jon Derrick
@ 2020-08-05  7:54 ` You-Sheng Yang
  2020-08-05 15:30   ` Derrick, Jonathan
  2020-08-05 15:09 ` Bjorn Helgaas
  1 sibling, 1 reply; 6+ messages in thread
From: You-Sheng Yang @ 2020-08-05  7:54 UTC (permalink / raw)
  To: Jon Derrick; +Cc: Bjorn Helgaas, Lorenzo Pieralisi, linux-pci, Kai-Heng Feng



On 2020-08-01 01:15, Jon Derrick wrote:
> The pci_save_state call in vmd_suspend can be performed by
> pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
> the ASPM flow.
> 
> The pci_restore_state call in vmd_resume was restoring state after
> pci_pm_resume->pci_restore_standard_config had already restored state.
> It's also been suspected that the config state should be restored before
> re-requesting IRQs.
> 
> Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> order to allow proper flow through PCI core power management ASPM code.

I gave this patch a try, but `lspci` still shows ASPM Disabled.
Is there any prerequisite missing here?

You-Sheng Yang




* Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
  2020-07-31 17:15 [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code Jon Derrick
  2020-08-05  7:54 ` You-Sheng Yang
@ 2020-08-05 15:09 ` Bjorn Helgaas
  2020-08-05 16:09   ` Derrick, Jonathan
  1 sibling, 1 reply; 6+ messages in thread
From: Bjorn Helgaas @ 2020-08-05 15:09 UTC (permalink / raw)
  To: Jon Derrick
  Cc: Lorenzo Pieralisi, linux-pci, Kai-Heng Feng, You-Sheng Yang,
	Vaibhav Gupta, Rafael J. Wysocki

[+cc Vaibhav, Rafael for suspend/resume question]

On Fri, Jul 31, 2020 at 01:15:44PM -0400, Jon Derrick wrote:
> The pci_save_state call in vmd_suspend can be performed by
> pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
> the ASPM flow.

Add "()" after function names so they don't look like English words.

What is this "ASPM flow"?  The only ASPM-related code should be
configuration (enable/disable ASPM) (which happens at
enumeration-time, not suspend/resume time) and save/restore if we turn
the device off and we have to reconfigure it when we turn it on again.
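
(The configuration in question ends up in the PCIe Link Control register,
which is part of the config space that save/restore covers. A debug-only
sketch for reading it from a driver, with pdev standing in for the struct
pci_dev of interest, might look like this; it is for inspection only, not
something a driver should normally do:

	u16 lnkctl;

	/* ASPM Control field: 00 = disabled, 01 = L0s, 10 = L1, 11 = both */
	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
	dev_info(&pdev->dev, "ASPM control bits: %#x\n",
		 lnkctl & PCI_EXP_LNKCTL_ASPMC);

This is what lspci decodes into the "ASPM Disabled"/"L1 Enabled" strings.)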

> The pci_restore_state call in vmd_resume was restoring state after
> pci_pm_resume->pci_restore_standard_config had already restored state.
> It's also been suspected that the config state should be restored before
> re-requesting IRQs.
> 
> Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> order to allow proper flow through PCI core power management ASPM code.
> 
> Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>
> Cc: You-Sheng Yang <vicamo.yang@canonical.com>
> Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> ---
>  drivers/pci/controller/vmd.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 76d8acbee7d5..15c1d85d8780 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -719,7 +719,6 @@ static int vmd_suspend(struct device *dev)
>  	for (i = 0; i < vmd->msix_count; i++)
>  		devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]);
>  
> -	pci_save_state(pdev);

The VMD driver uses generic PM, not legacy PCI PM, so I think removing
the save/restore state from your suspend/resume functions is the right
thing to do.  You should only need to do VMD-specific things there.
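
(For reference, "generic PM" here means a dev_pm_ops wired up through
driver.pm, roughly the way the vmd driver does it; the snippet below is a
sketch following the driver's names rather than a verbatim quote:

static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume);

static struct pci_driver vmd_drv = {
	.name		= "vmd",
	.id_table	= vmd_ids,
	.probe		= vmd_probe,
	.remove		= vmd_remove,
	.driver		= {
		.pm	= &vmd_dev_pm_ops,
	},
};

Legacy PCI PM would instead fill in .suspend/.resume in struct pci_driver
itself, which routes around parts of the core's pci_pm_* handling.)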

I'm not even sure you need to free/request the IRQs in your
suspend/resume.  Maybe Rafael or Vaibhav know.

I just think the justification in the commit log is wrong.

>  	return 0;
>  }
>  
> @@ -737,7 +736,6 @@ static int vmd_resume(struct device *dev)
>  			return err;
>  	}
>  
> -	pci_restore_state(pdev);
>  	return 0;
>  }
>  #endif
> -- 
> 2.18.1
> 


* Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
  2020-08-05  7:54 ` You-Sheng Yang
@ 2020-08-05 15:30   ` Derrick, Jonathan
  2020-08-05 15:35     ` Bjorn Helgaas
  0 siblings, 1 reply; 6+ messages in thread
From: Derrick, Jonathan @ 2020-08-05 15:30 UTC (permalink / raw)
  To: vicamo.yang; +Cc: lorenzo.pieralisi, linux-pci, helgaas, kai.heng.feng

On Wed, 2020-08-05 at 15:54 +0800, You-Sheng Yang wrote:
> On 2020-08-01 01:15, Jon Derrick wrote:
> > The pci_save_state call in vmd_suspend can be performed by
> > pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
> > the ASPM flow.
> > 
> > The pci_restore_state call in vmd_resume was restoring state after
> > pci_pm_resume->pci_restore_standard_config had already restored state.
> > It's also been suspected that the config state should be restored before
> > re-requesting IRQs.
> > 
> > Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> > order to allow proper flow through PCI core power management ASPM code.
> 
> I gave this patch a try, but `lspci` still shows ASPM Disabled.
> Is there any prerequisite missing here?
> 

Is enabling L0s/L1/etc on a device something that the driver should be
doing?

Does the state change with pcie_aspm.policy=powersave ?


> You-Sheng Yang
> 


* Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
  2020-08-05 15:30   ` Derrick, Jonathan
@ 2020-08-05 15:35     ` Bjorn Helgaas
  0 siblings, 0 replies; 6+ messages in thread
From: Bjorn Helgaas @ 2020-08-05 15:35 UTC (permalink / raw)
  To: Derrick, Jonathan
  Cc: vicamo.yang, lorenzo.pieralisi, linux-pci, kai.heng.feng

On Wed, Aug 05, 2020 at 03:30:00PM +0000, Derrick, Jonathan wrote:
> On Wed, 2020-08-05 at 15:54 +0800, You-Sheng Yang wrote:
> > On 2020-08-01 01:15, Jon Derrick wrote:
> > > The pci_save_state call in vmd_suspend can be performed by
> > > pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
> > > the ASPM flow.
> > > 
> > > The pci_restore_state call in vmd_resume was restoring state after
> > > pci_pm_resume->pci_restore_standard_config had already restored state.
> > > It's also been suspected that the config state should be restored before
> > > re-requesting IRQs.
> > > 
> > > Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> > > order to allow proper flow through PCI core power management ASPM code.
> > 
> > I gave this patch a try, but `lspci` still shows ASPM Disabled.
> > Is there any prerequisite missing here?
> > 
> 
> Is enabling L0s/L1/etc on a device something that the driver should be
> doing?

No.  ASPM should be completely managed by the PCI core.  There are a
few drivers that *do* muck with ASPM, but they are broken and they
cause problems.

Drivers can use pci_disable_link_state() to completely disable ASPM
states, e.g., if they are known to be broken in hardware.  But they
should not update the Link Control register directly because there are
specific requirements that involve both ends of the link, not just the
endpoint.
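
(As a hedged example of that interface: a hypothetical driver, here called
foo, with known-broken L0s/L1 would do something like the following in its
probe and leave the Link Control registers themselves to the ASPM core:

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* Hardware erratum: keep the link out of L0s and L1. */
	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S |
				     PCIE_LINK_STATE_L1);

	return 0;
}

The constraint Bjorn describes is why this is a request to the ASPM core
rather than a direct register write.)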

Bjorn


* Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code
  2020-08-05 15:09 ` Bjorn Helgaas
@ 2020-08-05 16:09   ` Derrick, Jonathan
  0 siblings, 0 replies; 6+ messages in thread
From: Derrick, Jonathan @ 2020-08-05 16:09 UTC (permalink / raw)
  To: helgaas
  Cc: lorenzo.pieralisi, linux-pci, kai.heng.feng, vaibhavgupta40,
	vicamo.yang, rjw

On Wed, 2020-08-05 at 10:09 -0500, Bjorn Helgaas wrote:
> [+cc Vaibhav, Rafael for suspend/resume question]
> 
> On Fri, Jul 31, 2020 at 01:15:44PM -0400, Jon Derrick wrote:
> > The pci_save_state call in vmd_suspend can be performed by
> > pci_pm_suspend_noirq. This allows the call to pci_prepare_to_sleep in
> > the ASPM flow.
> 
> Add "()" after function names so they don't look like English words.
> 
> What is this "ASPM flow"? 
(Forgive my misunderstanding)

>  The only ASPM-related code should be
> configuration (enable/disable ASPM) (which happens at
> enumeration-time, not suspend/resume time) and save/restore if we turn
> the device off and we have to reconfigure it when we turn it on again.
So in the suspend path we gain pci_prepare_to_sleep() by not pre-saving 
state.
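
(Roughly what that call adds, paraphrasing the drivers/pci/pci.c
implementation of the time rather than quoting it exactly:

int pci_prepare_to_sleep(struct pci_dev *dev)
{
	bool wakeup = device_may_wakeup(&dev->dev);
	pci_power_t target_state = pci_target_state(dev, wakeup);
	int error;

	if (target_state == PCI_POWER_ERROR)
		return -EIO;

	/* Arm wakeup (PME and/or platform) for the chosen state... */
	pci_enable_wake(dev, target_state, wakeup);

	/* ...then actually move the device into that D-state. */
	error = pci_set_power_state(dev, target_state);
	if (error)
		pci_enable_wake(dev, target_state, false);

	return error;
}

So skipping the pre-save means the device can actually be placed into a
low-power state by the core on the way down.)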

The big component here is that we are a non-ACPI device/domain on an
ACPI-PM-manageable system, so I'm not certain whether we're missing some
platform-critical elements here in the pci_platform_power_* functions.

> > The pci_restore_state call in vmd_resume was restoring state after
> > pci_pm_resume->pci_restore_standard_config had already restored state.
> > It's also been suspected that the config state should be restored before
> > re-requesting IRQs.
> > 
> > Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> > order to allow proper flow through PCI core power management ASPM code.
> > 
> > Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>
> > Cc: You-Sheng Yang <vicamo.yang@canonical.com>
> > Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> > ---
> >  drivers/pci/controller/vmd.c | 2 --
> >  1 file changed, 2 deletions(-)
> > 
> > diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> > index 76d8acbee7d5..15c1d85d8780 100644
> > --- a/drivers/pci/controller/vmd.c
> > +++ b/drivers/pci/controller/vmd.c
> > @@ -719,7 +719,6 @@ static int vmd_suspend(struct device *dev)
> >  	for (i = 0; i < vmd->msix_count; i++)
> >  		devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]);
> >  
> > -	pci_save_state(pdev);
> 
> The VMD driver uses generic PM, not legacy PCI PM, so I think removing
> the save/restore state from your suspend/resume functions is the right
> thing to do.  You should only need to do VMD-specific things there.
> 
> I'm not even sure you need to free/request the IRQs in your
> suspend/resume.  Maybe Rafael or Vaibhav know.
The history on that: too many VMD instances consumed too many IRQs on
the suspend path when IRQs were being migrated off CPUs being
offlined:

commit e2b1820bd5d0962d6f271b0d47c3a0e38647df2f
Author: Scott Bauer <scott.bauer@intel.com>
Date:   Fri Aug 11 14:54:32 2017 -0600

    PCI: vmd: Free up IRQs on suspend path
    
    Free up the IRQs we request on the suspend path and reallocate them on the
    resume path.
    
    Fixes this error:
    
      CPU 111 disable failed: CPU has 9 vectors assigned and there are only 0 available.
      Error taking CPU111 down: -34
      Non-boot CPUs are not disabled
      Enabling non-boot CPUs ...

> 
> I just think the justification in the commit log is wrong.
> 
> >  	return 0;
> >  }
> >  
> > @@ -737,7 +736,6 @@ static int vmd_resume(struct device *dev)
> >  			return err;
> >  	}
> >  
> > -	pci_restore_state(pdev);
> >  	return 0;
> >  }
> >  #endif
> > -- 
> > 2.18.1
> > 

