From: Christoph Hellwig <hch@infradead.org>
To: Austin.Bolen@dell.com
Cc: Alex_Gagniuc@Dellteam.com, torvalds@linux-foundation.org,
keith.busch@intel.com, sagi@grimberg.me,
linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
axboe@fb.com, mr.nuke.me@gmail.com, hch@lst.de,
jonathan.derrick@intel.com
Subject: Re: [PATCH] nvme-pci: Prevent mmio reads if pci channel offline
Date: Thu, 28 Feb 2019 06:16:55 -0800 [thread overview]
Message-ID: <20190228141655.GA18319@infradead.org> (raw)
In-Reply-To: <443262761d0e41fbb46a46dab28759c2@AUSX13MPC131.AMER.DELL.COM>
On Wed, Feb 27, 2019 at 08:04:35PM +0000, Austin.Bolen@dell.com wrote:
> Confirmed this issue does not apply to the referenced Dell servers, so I
> don't have a stake in how this should be handled for those systems.
> It may be they just don't support surprise removal. I know in our case
> all the Linux distributions we qualify (RHEL, SLES, Ubuntu Server) have
> told us they do not support surprise removal. So I'm guessing that any
> issues found with surprise removal could potentially fall under the
> category of "unsupported".
>
> Still, the larger issue of recovering from other types of PCIe errors
> that are not due to device removal is important. I would expect many
> systems from many platform makers to be unable to recover from PCIe
> errors in general, and hopefully the new DPC CER model will help
> address this and provide added protection for cases like the above as
> well.
FYI, a related issue I saw a year or two ago with Dell servers was
with a dual-ported NVMe add-in (non-U.2) card: once you did a
subsystem reset, which causes both controllers to retrain the link,
you'd run into a Firmware First error handling issue that would
instantly crash the system. I don't have the hardware anymore, but
the end result, I think, was that the affected product ended up
shipping with subsystem resets enabled only for the U.2 form factor.
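
For anyone following along, both mechanisms involved here are simple at
the register level: an NVM subsystem reset is a magic write to the NSSR
controller register, and the guard the patch subject refers to is a
check of the PCI channel state before any MMIO read of the controller
BAR.  The sketch below is only meant to illustrate that; it is not the
actual patch hunk, and the macro and helper names are made up for the
example (pci_channel_offline(), readl() and writel() are the real
kernel interfaces):

        #include <linux/io.h>
        #include <linux/pci.h>

        /*
         * Controller register offset per the NVMe specification; the
         * macro names here are made up for the example.  Writing the
         * ASCII string "NVMe" to NSSR initiates an NVM subsystem
         * reset, which affects every controller in the subsystem --
         * on a dual-ported add-in card both ports retrain their links.
         */
        #define EXAMPLE_NVME_REG_NSSR           0x20
        #define EXAMPLE_NVME_SUBSYS_MAGIC       0x4E564D65      /* "NVMe" */

        static void example_subsystem_reset(void __iomem *bar)
        {
                writel(EXAMPLE_NVME_SUBSYS_MAGIC, bar + EXAMPLE_NVME_REG_NSSR);
        }

        /*
         * Guarded MMIO read: once the PCI channel is offline (surprise
         * removal, error containment), reading the BAR returns
         * all-ones at best and may trigger Firmware First error
         * handling at worst, so skip the access and report the
         * register as all-ones instead.
         */
        static u32 example_guarded_readl(struct pci_dev *pdev,
                                         void __iomem *bar, u32 off)
        {
                if (pci_channel_offline(pdev))
                        return ~0;
                return readl(bar + off);
        }

On the dual-ported card, either port's NSSR write resets the whole
subsystem, so the partner port also sees its link go down, which is
exactly the kind of event a Firmware First platform may escalate into
a fatal, system-crashing error.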
Thread overview: 21+ messages
2019-02-22 1:05 [PATCH] nvme-pci: Prevent mmio reads if pci channel offline Jon Derrick
2019-02-22 21:28 ` Linus Torvalds
2019-02-22 21:59 ` Keith Busch
2019-02-24 20:37 ` Alex_Gagniuc
2019-02-24 22:42 ` Linus Torvalds
2019-02-24 23:27 ` Alex_Gagniuc
2019-02-25 0:43 ` Linus Torvalds
2019-02-25 15:55 ` Keith Busch
2019-02-26 22:37 ` Alex_Gagniuc
2019-02-27 1:01 ` Linus Torvalds
2019-02-27 16:42 ` Alex_Gagniuc
2019-02-27 17:51 ` Keith Busch
2019-02-27 18:07 ` Alex_Gagniuc
2019-02-27 17:55 ` Austin.Bolen
2019-02-27 20:04 ` Austin.Bolen
2019-02-28 14:16 ` Christoph Hellwig [this message]
2019-02-28 23:10 ` Austin.Bolen
2019-02-28 23:20 ` Keith Busch
2019-02-28 23:43 ` Austin.Bolen
2019-03-01 0:30 ` Keith Busch
2019-03-01 1:52 ` Austin.Bolen
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save this message as an mbox file, import it into your mail client,
and reply-to-all from there.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20190228141655.GA18319@infradead.org \
--to=hch@infradead.org \
--cc=Alex_Gagniuc@Dellteam.com \
--cc=Austin.Bolen@dell.com \
--cc=axboe@fb.com \
--cc=hch@lst.de \
--cc=jonathan.derrick@intel.com \
--cc=keith.busch@intel.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-nvme@lists.infradead.org \
--cc=mr.nuke.me@gmail.com \
--cc=sagi@grimberg.me \
--cc=torvalds@linux-foundation.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line
before the message body.