From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: Kai-Heng Feng <kai.heng.feng@canonical.com>,
	Nirmal Patel <nirmal.patel@linux.intel.com>,
	Lorenzo Pieralisi <lpieralisi@kernel.org>,
	<linux-pci@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>
Subject: Re: [PATCH v2] PCI: vmd: Enable Hotplug based on BIOS setting on VMD rootports
Date: Tue, 2 Apr 2024 09:10:08 -0700	[thread overview]
Message-ID: <5cf66eb2-6820-4514-9d5f-1a7c4228cff9@intel.com> (raw)
In-Reply-To: <20240326210841.GA1497045@bhelgaas>

On 3/26/2024 2:08 PM, Bjorn Helgaas wrote:
> On Tue, Mar 26, 2024 at 08:51:45AM -0700, Paul M Stillwell Jr wrote:
>> On 3/25/2024 6:59 PM, Kai-Heng Feng wrote:
>>> On Tue, Mar 26, 2024 at 8:17 AM Nirmal Patel
>>> <nirmal.patel@linux.intel.com> wrote:
>>>>
>>>> On Fri, 22 Mar 2024 18:36:37 -0500 Bjorn Helgaas
>>>> <helgaas@kernel.org> wrote:
>>>>
>>>>> On Fri, Mar 22, 2024 at 03:43:27PM -0700, Paul M Stillwell Jr
>>>>> wrote:
>>>>>> On 3/22/2024 2:37 PM, Bjorn Helgaas wrote:
>>>>>>> On Fri, Mar 22, 2024 at 01:57:00PM -0700, Nirmal Patel
>>>>>>> wrote:
>>>>>>>> On Fri, 15 Mar 2024 09:29:32 +0800 Kai-Heng Feng
>>>>>>>> <kai.heng.feng@canonical.com> wrote: ...
>>>>>>>
>>>>>>>>> If there's an official document on intel.com, it can
>>>>>>>>> make many things clearer and easier. Stating what VMD does
>>>>>>>>> and what VMD expects the OS to do can be really helpful.
>>>>>>>>> Basically, put what you wrote in an official document.
>>>>>>>>
>>>>>>>> Thanks for the suggestion. I can certainly find the official
>>>>>>>> VMD architecture document and add the required information
>>>>>>>> to Documentation/PCI/controller/vmd.rst. Will that be
>>>>>>>> okay?
>>>>>>>
>>>>>>> I'd definitely be interested in whatever you can add to
>>>>>>> illuminate these issues.
>>>>>>>
>>>>>>>> I also need some help/suggestions from you on the following
>>>>>>>> alternate solution. We have been looking at the VMD HW
>>>>>>>> registers to find some unused registers. The Cache Line Size
>>>>>>>> register at offset 0Ch is not used by VMD. This is the
>>>>>>>> explanation in the PCIe 5.0 base spec, section 7.5.1.1.7:
>>>>>>>> "This read-write register is implemented for legacy
>>>>>>>> compatibility purposes but has no effect on any PCI Express
>>>>>>>> device behavior." Can this register be used for passing the
>>>>>>>> _OSC settings from the BIOS to the VMD OS driver?
>>>>>>>>
>>>>>>>> These 8 bits are more than enough for the UEFI VMD driver to
>>>>>>>> store all the _OSC flags, and the VMD OS driver can read them
>>>>>>>> during OS boot-up. This will solve all of our issues.
>>>>>>>
>>>>>>> Interesting idea.  I think you'd have to do some work to
>>>>>>> separate out the conventional PCI devices, where
>>>>>>> PCI_CACHE_LINE_SIZE is still relevant, to make sure nothing
>>>>>>> breaks.  But I think we overwrite it in some cases even for
>>>>>>> PCIe devices where it's pointless, and it would be nice to
>>>>>>> clean that up.
>>>>>>
>>>>>> I think the suggestion here is to use the VMD device's Cache
>>>>>> Line Size register, not the other PCI devices'. In that case we
>>>>>> don't have to worry about conventional PCI devices because we
>>>>>> aren't touching them.
>>>>>
>>>>> Yes, but there is generic code that writes PCI_CACHE_LINE_SIZE
>>>>> for every device in some cases.  If we wrote the VMD
>>>>> PCI_CACHE_LINE_SIZE, it would obliterate whatever you want to
>>>>> pass there.
>>>>>
>>>> Our initial testing showed no change in the value from an
>>>> overwrite by the PCI core. The registers didn't change in either
>>>> the host or the guest OS when data was written by the BIOS. I will
>>>> perform more testing and also make sure to write every register,
>>>> just in case.
>>>
>>> If the VMD hardware is designed this way and there's an official
>>> document that states "VMD ports should follow _OSC except for
>>> hotplugging", then IMO there's no need to find an alternative.
>>
>> There isn't any official documentation for this; it's just the way VMD
>> is designed :). VMD assumes that everything behind it is hotpluggable,
>> so the bits don't *really* matter. In other words, VMD supports hotplug
>> and you can't turn that off, so hotplug is always on regardless of what
>> the bits say.
> 
> This whole discussion is about who *owns* the feature, not whether the
> hardware supports the feature.
> 

I'm not disagreeing about who owns the feature :). I'm trying to find a
solution for when the data that the driver gets is wrong.

> The general rule is "if _OSC covers the feature, the platform owns the
> feature and the OS shouldn't touch it unless the OS requests and is
> granted control of it."
> 

The code is fine in a bare metal system, but completely broken in a 
hypervisor because all the _OSC bits are set to 0. So I am trying to 
find a solution where the hotplug bits can be set correctly in this 
situation.
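
For reference, the flag propagation I'm talking about is roughly the
following (paraphrased from memory from drivers/pci/controller/vmd.c, so
treat the exact names as approximate rather than authoritative):

  #include <linux/pci.h>

  /* Rough paraphrase of the existing flag copy in the VMD driver */
  static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
                                         struct pci_host_bridge *vmd_bridge)
  {
          /*
           * On bare metal these reflect the _OSC negotiation for the real
           * host bridge; in a guest the synthetic host bridge grants none
           * of them, so every flag below ends up 0.
           */
          vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
          vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
          vmd_bridge->native_aer = root_bridge->native_aer;
          vmd_bridge->native_pme = root_bridge->native_pme;
          vmd_bridge->native_ltr = root_bridge->native_ltr;
          vmd_bridge->native_dpc = root_bridge->native_dpc;
  }

The two hotplug assignments are, as far as I can tell, the "2 lines" I
refer to below.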

> VMD wants special treatment, and we'd like a simple description of
> what that treatment is.  Right now it sounds like you want the OS to
> own *hotplug* below VMD regardless of what _OSC says.
> 

What I want is for hotplug to be enabled in a hypervisor :). The slot
capability register has the correct bits set, but it doesn't make sense
(to me) to use a register in a root port to set the bridge settings.
That's why this patch just removed the hotplug bits from the code.
Everything is set up correctly (as far as I can tell) in both bare metal
and hypervisor environments if we omit those 2 lines.
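
To illustrate what I mean about the slot capability register, a check
like the minimal sketch below (the helper name is made up; it is not in
the patch) already tells us whether a VMD root port advertises hotplug
in hardware, independent of whatever _OSC reported:

  #include <linux/pci.h>

  /* Sketch only: does this (VMD) root port advertise a hotplug-capable slot? */
  static bool vmd_port_slot_is_hotplug_capable(struct pci_dev *port)
  {
          u32 sltcap = 0;

          /* PCI_EXP_SLTCAP_HPC is the Hot-Plug Capable bit in Slot Capabilities */
          pcie_capability_read_dword(port, PCI_EXP_SLTCAP, &sltcap);
          return sltcap & PCI_EXP_SLTCAP_HPC;
  }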

Does anyone have a better suggestion on how to handle the case where the 
_OSC bits are incorrect?
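
For completeness, the register-based alternative Nirmal floated earlier
in the thread would amount to something like this on the OS side. This
is only a sketch: the helper name and the flag layout are made up for
illustration and are not defined anywhere today.

  #include <linux/bits.h>
  #include <linux/pci.h>

  /* Hypothetical layout of the byte the UEFI driver would stash at 0Ch */
  #define VMD_BIOS_FLAG_HOTPLUG   BIT(0)
  #define VMD_BIOS_FLAG_AER       BIT(1)

  static bool vmd_bios_granted_hotplug(struct pci_dev *vmd_dev)
  {
          u8 flags = 0;

          /*
           * Offset 0Ch is PCI_CACHE_LINE_SIZE, which per PCIe r5.0 sec
           * 7.5.1.1.7 has no effect on PCIe device behavior.
           */
          pci_read_config_byte(vmd_dev, PCI_CACHE_LINE_SIZE, &flags);
          return flags & VMD_BIOS_FLAG_HOTPLUG;
  }

That only works if nothing in the PCI core rewrites PCI_CACHE_LINE_SIZE
on the VMD endpoint, which is exactly the concern Bjorn raised above.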

Paul

> But I still have no idea about other features like AER, etc., so we're
> kind of in no man's land about how to handle them.
> 
> Bjorn

