linux-cxl.vger.kernel.org archive mirror
From: Vikram Sethi <vsethi@nvidia.com>
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Chris Browy <cbrowy@avery-design.com>,
	Linux PCI <linux-pci@vger.kernel.org>,
	"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Bjorn Helgaas <bjorn@helgaas.com>
Cc: "Krzysztof Wilczyński" <kw@linux.com>,
	"linuxarm@huawei.com" <linuxarm@huawei.com>,
	Fangjian <f.fangjian@huawei.com>,
	"Natu, Mahesh" <mahesh.natu@intel.com>,
	"Varun Sampath" <varuns@nvidia.com>
Subject: RE: RFC: Plumbers microconf topic: PCI DOE and related.
Date: Tue, 27 Jul 2021 16:50:05 +0000	[thread overview]
Message-ID: <BL0PR12MB2532CC3B64CAB199051D5AA4BDE99@BL0PR12MB2532.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20210727130653.00006a0a@Huawei.com>

Hi Jonathan, 

> -----Original Message-----
> From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>


> Open Questions / Problems:
> 1. Control which software entity uses DOE.
>    It does not appear to be safe (as in not going to disrupt each other rather
>    than security) for multiple software entities (Userspace, Kernel, TEE,
>    Firmware) to access an individual DOE instance on a device without
>    mediation.  Some DOE protocols have clear reasons for Linux kernel
>    access (e.g. CDAT) others are more debatable.
>    Even running the discovery protocol could disrupt other users. Hardening
>    against such disruption is probably best effort only (no guarantees).
>    Question is: How to prevent this?
>     a) Userspace vs Kernel. Are there valid reasons for userspace to access
>        a DOE? If so do how do we enable that? Does a per protocol approach
>        make sense? Potential vendor defined protocols? Do we need to lock
>        out 'developer' tools such as setpci - or do we let developers shoot
>        themselves in the foot?
>     b) OS vs lower levels / TEE. Do we need to propose a means of telling
>        the OS to keep its hands off a DOE?  How to do it?
> 
> 2. CMA support.
>    Usecases for in kernel CMA support and whether strong enough to support
>    native access. (e.g. authentication of VF from a VM, or systems not running
>    any suitable lower level software / TEE)

Any time the device is reset, you'd want to measure it again. I'd think every
PF FLR/SBR/CXL reset initiated by the kernel needs to be followed by an
in-kernel measurement of the device. Of course this needs a bigger discussion
on the plumbing/infrastructure to report the measurement and attest that the
post-reset measurements are valid.
Instead of native access, could it be mediated via an ACPI or UEFI runtime
service? It's not clear that ACPI/UEFI would be the appropriate mediator in
all cases.

>    Key / Certificate management. This is somewhat like IMA, but we probably
>    need to manage the certificate chain separately for each CMA/SPDM
>    instance.
>    Understanding provisioning models would be useful to guide this work.
> 
> 3. IDE support
>    Is native kernel support worthwhile? Perhaps good to discuss
>    potential usecases + get some idea on priority for this feature.
> 
> 4. Potential blockers on merging emulation support in QEMU. (I'm less sure
>    on this one, but perhaps worth briefly touching on or a separate
>    session on emulation if people are interested? Ben, do you think this
>    would be worthwhile?)
> 
> There are other minor questions we might slip into the discussion, time
> allowing such as need for async support handling in the kernel DOE code.
> 
> For all these features, we have multiple layers on top of underlying PCI so
> discussion of 'how' to support this might be useful.
> 1) Service model - detected at PCI subsystem level, services to drivers.
> 2) Driver initiated mode - library code, but per driver instantiation etc.
> 
> That's what I have come up with this morning, so please poke holes in it
> and point out what I've forgotten about.
> 
> Note for an actual CFP proposal, I'll probably split this into at least two.
> Topic 1: DOE only.  Topic 2: CMA / IDE. As there is a lot here, for some topics
> we may be looking at introducing the topic + questions rather than resolving
> everything on the day.
> 
> Thanks,
> 
> Jonathan
> 
> p.s. Perhaps it is a little unusual to have this level of 'planning' discussion
> explicitly on list, but we are working under some unusual constraints and
> inclusiveness and openness are always good anyway!


Thread overview: 3+ messages
2021-07-27 12:06 RFC: Plumbers microconf topic: PCI DOE and related Jonathan Cameron
2021-07-27 16:50 ` Vikram Sethi [this message]
2021-07-28  8:56   ` Jonathan Cameron
