From: "Kuppuswamy, Sathyanarayanan"  <sathyanarayanan.kuppuswamy@linux.intel.com>
To: Lukas Wunner <lukas@wunner.de>,
	Sathyanarayanan Kuppuswamy Natarajan 
	<sathyanarayanan.nkuppuswamy@gmail.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Linux PCI <linux-pci@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	Keith Busch <kbusch@kernel.org>,
	knsathya@kernel.org, Sinan Kaya <okaya@kernel.org>
Subject: Re: [PATCH v2 1/1] PCI: pciehp: Skip DLLSC handling if DPC is triggered
Date: Wed, 17 Mar 2021 13:02:07 -0700
Message-ID: <0a020128-80e8-76a7-6b94-e165d3c6f778@linux.intel.com>
In-Reply-To: <20210317190151.GA27146@wunner.de>



On 3/17/21 12:01 PM, Lukas Wunner wrote:
> On Wed, Mar 17, 2021 at 10:54:09AM -0700, Sathyanarayanan Kuppuswamy Natarajan wrote:
>> Flush of hotplug event after successful recovery, and a simulated
>> hotplug link down event after link recovery fails should solve the
>> problems raised by Lukas. I assume Lukas' proposal adds this support.
>> I will check his patch shortly.
> 
> Thank you!
> 
> I'd like to get a better understanding of the issues around hotplug/DPC,
> specifically I'm wondering:
> 
> If DPC recovery was successful, what is the desired behavior by pciehp,
> should it ignore the Link Down/Up or bring the slot down and back up
> after DPC recovery?
> 
> If the events are ignored, the driver of the device in the hotplug slot
> is not unbound and rebound.  So the driver must be able to cope with
> loss of TLPs during DPC recovery and it must be able to cope with
> whatever state the endpoint device is in after DPC recovery.
> Is this really safe?  How does the nvme driver deal with it?
During DPC recovery, the pcie_do_recovery() function uses
report_frozen_detected() to notify all devices attached to the port
about the fatal error. After this notification, all affected device
drivers are expected to halt their IO transactions.

Regarding state restoration, after successful recovery we use
report_slot_reset() to notify the drivers about the slot/link reset,
and they are expected to restore their devices to a working state after
this notification.
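
For illustration only (this is not code from the patch, and the demo_*
names are made up), the pci_error_handlers callbacks that a driver
attached to the port is expected to provide look roughly like this:

#include <linux/pci.h>

/* ->error_detected() is what report_frozen_detected() ends up calling. */
static pci_ers_result_t demo_error_detected(struct pci_dev *pdev,
					    pci_channel_state_t state)
{
	/*
	 * With pci_channel_io_frozen the driver must stop issuing
	 * MMIO/DMA to the device until the reset has completed.
	 */
	if (state == pci_channel_io_frozen)
		return PCI_ERS_RESULT_NEED_RESET;

	return PCI_ERS_RESULT_CAN_RECOVER;
}

/* ->slot_reset() is what report_slot_reset() ends up calling. */
static pci_ers_result_t demo_slot_reset(struct pci_dev *pdev)
{
	/* Re-enable the device and restore its operating state. */
	if (pci_enable_device_mem(pdev))
		return PCI_ERS_RESULT_DISCONNECT;

	pci_set_master(pdev);
	pci_restore_state(pdev);

	return PCI_ERS_RESULT_RECOVERED;
}

static void demo_resume(struct pci_dev *pdev)
{
	/* Restart normal IO once recovery has finished. */
}

static const struct pci_error_handlers demo_err_handler = {
	.error_detected	= demo_error_detected,
	.slot_reset	= demo_slot_reset,
	.resume		= demo_resume,
};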
> 
> Also, if DPC is handled by firmware, your patch does not ignore the
> Link Down/Up events, 
That applies only to the pure firmware model. For the EDR case, we still
ignore the Link Down/Up events.
> so pciehp brings down the slot when DPC is
> triggered, then brings it up after successful recovery.  In a code
> comment, you write that this behavior is okay because there's "no
> race between hotplug and DPC recovery". 
My point is that there is no race between the OS handlers (pciehp_ist()
vs. pcie_do_recovery()).
> However, Sinan wrote in
> 2018 that one of the issues with hotplug versus DPC is that pciehp
> may turn off slot power and thereby foil DPC recovery.  (Power off =
> cold reset, whereas DPC recovery = warm reset.)  This can occur
> as well if DPC is handled by firmware.
I am not sure how pure firmware DPC recovery works. Is there a platform
that uses this combination? For the firmware DPC model, the spec does not
clarify the following points.

1. Who will notify the affected device drivers to halt their IO transactions?
2. Who is responsible for restoring the state of the device after the link reset?

IMO, pure firmware DPC does not support seamless recovery. I think that
after it clears the DPC trigger status, it might expect the hotplug handler
to be responsible for device recovery.

I don't want to add a fix to a code path that I don't understand. This is
the reason for not extending this logic to the pure firmware DPC case.

> 
> So I guess pciehp should make an attempt to await DPC recovery even
> if it's handled by firmware?  Or am I missing something?  We may be
> able to achieve that by polling the DPC Trigger Status bit and
> DLLLA bit, but it won't work as perfectly as with native DPC support.
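Something along those lines might work. A rough, untested sketch of what
such polling could look like (the helper name and the 10 ms poll interval
are just placeholders, not something we have implemented):

#include <linux/delay.h>
#include <linux/pci.h>

static bool demo_wait_for_fw_dpc_recovery(struct pci_dev *port, int timeout_ms)
{
	u16 dpc = pci_find_ext_capability(port, PCI_EXT_CAP_ID_DPC);
	u16 status, lnksta;

	if (!dpc)
		return false;

	for (; timeout_ms > 0; timeout_ms -= 10) {
		pci_read_config_word(port, dpc + PCI_EXP_DPC_STATUS, &status);
		pcie_capability_read_word(port, PCI_EXP_LNKSTA, &lnksta);

		/* Firmware cleared the trigger and the link is back up. */
		if (!(status & PCI_EXP_DPC_STATUS_TRIGGER) &&
		    (lnksta & PCI_EXP_LNKSTA_DLLLA))
			return true;

		msleep(10);
	}

	return false;
}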
> 
> Finally, you write in your commit message that there are "a lot of
> stability issues" if pciehp and DPC are allowed to recover freely
> without proper serialization.  What are these issues exactly?
In most cases, I see failures of the DPC recovery handler (you can see the
example dmesg in the commit log). Other than this, we also noticed extended
delays or failures in link retraining (while waiting for the link to come up
in pcie_wait_for_link(pdev, true)). In some cases, we noticed slot power-on
failures, and the card would not be detected when running lspci.

The above-mentioned cases are just observations we have made; we did not
dig deeper into why the race between pciehp and DPC leads to such issues.
> (Beyond the slot power issue mentioned above, and that the endpoint
> device's driver should presumably not be unbound if DPC recovery
> was successful.)
> 
> Thanks!
> 
> Lukas
> 

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
