From: Dan Williams <dan.j.williams@intel.com>
To: "Derrick, Jonathan" <jonathan.derrick@intel.com>
Cc: "hch@infradead.org" <hch@infradead.org>,
"wangxiongfeng2@huawei.com" <wangxiongfeng2@huawei.com>,
"kw@linux.com" <kw@linux.com>,
"hkallweit1@gmail.com" <hkallweit1@gmail.com>,
"kai.heng.feng@canonical.com" <kai.heng.feng@canonical.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"mika.westerberg@linux.intel.com"
<mika.westerberg@linux.intel.com>,
"Mario.Limonciello@dell.com" <Mario.Limonciello@dell.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"bhelgaas@google.com" <bhelgaas@google.com>,
"Huffman, Amber" <amber.huffman@intel.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@intel.com>
Subject: Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain
Date: Thu, 27 Aug 2020 14:33:56 -0700 [thread overview]
Message-ID: <CAPcyv4ie53kswpk8E8=SCv4HBUAjCuFTNb6mLNUR+V-=cJ_XtA@mail.gmail.com> (raw)
In-Reply-To: <eb45485d9107440a667e598da99ad949320b77b1.camel@intel.com>
On Thu, Aug 27, 2020 at 9:46 AM Derrick, Jonathan
<jonathan.derrick@intel.com> wrote:
>
> On Thu, 2020-08-27 at 17:23 +0100, hch@infradead.org wrote:
> > On Thu, Aug 27, 2020 at 04:13:44PM +0000, Derrick, Jonathan wrote:
> > > On Thu, 2020-08-27 at 06:34 +0000, hch@infradead.org wrote:
> > > > On Wed, Aug 26, 2020 at 09:43:27PM +0000, Derrick, Jonathan wrote:
> > > > > Feel free to review my set to disable the MSI remapping which will
> > > > > make
> > > > > it perform as well as direct-attached:
> > > > >
> > > > > https://patchwork.kernel.org/project/linux-pci/list/?series=325681
> > > >
> > > > So that then we have to deal with your schemes to make individual
> > > > device direct assignment work in a convoluted way?
> > >
> > > That's not the intent of that patchset -at all-. It was to address the
> > > performance bottlenecks with VMD that you constantly complain about.
> >
> > I know. But once we fix that bottleneck we find the next issue,
> > then have to tackle the next. While at the same time VMD brings
> > zero actual benefits.
> >
>
> Just a few benefits and there are other users with unique use cases:
> 1. Passthrough of the endpoint to OSes which don't natively support
> hotplug can enable hotplug for that OS using the guest VMD driver
> 2. Some hypervisors have a limit on the number of devices that can be
> passed through. VMD endpoint is a single device that expands to many.
> 3. Expansion of possible bus numbers beyond 256 by using other
> segments.
> 4. Custom RAID LED patterns driven by ledctl
>
> I'm not trying to market this. Just pointing out that this isn't
> "bringing zero actual benefits" to many users.
>
The initial intent of the VMD driver was to allow Linux to find and
initialize devices behind a VMD configuration where VMD was required
for a non-Linux OS. For Linux, if full native PCIe is an available
configuration option, I think it makes sense to recommend that Linux
users flip that knob rather than continue to wrestle with the caveats
of the VMD driver. Where that knob isn't possible or available, VMD
can be a fallback, but full native PCIe is what Linux wants in the end.
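For readers trying to tell which mode a given box is actually running in: the Linux vmd driver enumerates child devices in a newly allocated PCI segment, so VMD-managed devices show up under a non-zero PCI domain in `lspci -D` output (e.g. `10000:e1:00.0`). A rough sketch of that check follows; the helper name and the "non-zero segment implies VMD" heuristic are assumptions for illustration, since other platforms can legitimately use extra segments too.

```shell
#!/bin/sh
# Sketch: guess whether a PCI address ("domain:bus:dev.fn") was
# enumerated behind a VMD endpoint. The vmd driver places child
# devices in a freshly allocated PCI segment, so on typical systems
# a non-zero domain (e.g. 10000:e1:00.0) suggests a VMD-managed
# device. This is a heuristic, not an official kernel interface.
is_vmd_domain() {
    case "$1" in
        0000:*)   return 1 ;;  # native host segment
        *:*:*.*)  return 0 ;;  # some other segment: likely behind VMD
        *)        return 1 ;;  # not a full domain:bus:dev.fn address
    esac
}

# Usage sketch against live lspci output (commented out; the awk
# pattern for NVMe controllers is an assumption about lspci wording):
# lspci -D | awk '/Non-Volatile memory/ {print $1}' | while read addr; do
#     is_vmd_domain "$addr" && echo "$addr is behind VMD"
# done
```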