From: Sreenivas Bagalkote <sreenivas.bagalkote@broadcom.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Dan Williams <dan.j.williams@intel.com>
Cc: linux-cxl@vger.kernel.org,
	Brett Henning <brett.henning@broadcom.com>,
	 Harold Johnson <harold.johnson@broadcom.com>,
	 Sumanesh Samanta <sumanesh.samanta@broadcom.com>,
	linux-kernel@vger.kernel.org,
	 Davidlohr Bueso <dave@stgolabs.net>,
	Dave Jiang <dave.jiang@intel.com>,
	 Alison Schofield <alison.schofield@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	 Ira Weiny <ira.weiny@intel.com>,
	linuxarm@huawei.com, linux-api@vger.kernel.org,
	 Lorenzo Pieralisi <lpieralisi@kernel.org>,
	"Natu, Mahesh" <mahesh.natu@intel.com>
Subject: Re: RFC: Restricting userspace interfaces for CXL fabric management
Date: Tue, 23 Apr 2024 16:44:37 -0600	[thread overview]
Message-ID: <CACX_a4UM7wqb_eGSP2m2f2ytQGB3j+3Y4iP2H1UfMdVmm2a+=w@mail.gmail.com> (raw)
In-Reply-To: <CACX_a4XGLgmQC3cqCmDJnrcnfjQRW4EmV8BZTCC=MgzwYwdhXA@mail.gmail.com>


Can somebody please at least acknowledge that my emails are getting through?

Thank you
Sreeni
Sreenivas Bagalkote <Sreenivas.Bagalkote@broadcom.com>
Product Planning & Management
Broadcom Datacenter Solutions Group

On Mon, Apr 15, 2024 at 2:09 PM Sreenivas Bagalkote
<sreenivas.bagalkote@broadcom.com> wrote:

> Hello,
>
> >> We need guidance from the community.
>
> >> 1. Datacenter customers must be able to manage PCIe switches in-band.
> >> 2. Management of switches includes getting health, performance, and
> >>    error telemetry.
> >> 3. These telemetry functions are not yet part of the CXL standard.
> >> 4. We built the CCI mailboxes into our PCIe switches per the CXL spec
> >>    and developed our management scheme around them.
> >>
> >> If the Linux community does not allow a CXL spec-compliant switch to
> >> be managed via the CXL spec-defined CCI mailbox, then please guide us
> >> on the right approach. Please tell us how you propose we manage our
> >> switches in-band.
>
> I am still looking for your guidance. We need to be able to manage our
> switch via the CCI mailbox, and we need to use vendor-defined commands
> per the CXL spec.
>
> You talked about filtering commands via an allow-list, which we agreed
> to. Would you please confirm that you will accept a vendor-defined
> allow-list of commands? (The sketch below shows the sort of thing we
> mean.)
>
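> As a concrete illustration only - the opcode values and function name
> are hypothetical, though per the CXL spec the vendor-defined opcode
> range is C000h-FFFFh - the filter could be as small as this
> kernel-style sketch:
>
>     /* Allow-list of vendor-defined CCI opcodes.  Anything not
>      * explicitly listed is rejected before it reaches the switch
>      * mailbox.
>      */
>     static const u16 vendor_allow_list[] = {
>             0xC100, /* hypothetical: get switch health telemetry */
>             0xC101, /* hypothetical: get port error counters */
>     };
>
>     static bool cci_vendor_cmd_allowed(u16 opcode)
>     {
>             int i;
>
>             for (i = 0; i < ARRAY_SIZE(vendor_allow_list); i++)
>                     if (vendor_allow_list[i] == opcode)
>                             return true;
>             return false;
>     }
>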
> Thank you
> Sreeni
>
> On Wed, Apr 10, 2024 at 5:45 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
>
>> On Fri, 5 Apr 2024 17:04:34 -0700
>> Dan Williams <dan.j.williams@intel.com> wrote:
>>
>> Hi Dan,
>>
>> > Jonathan Cameron wrote:
>> > > Hi All,
>> > >
>> > > This has come up in a number of discussions, both on list and in
>> > > private, so I wanted to lay out a potential set of rules for
>> > > deciding whether or not to provide a user space interface for a
>> > > particular feature of CXL Fabric Management.  The intent is to
>> > > drive discussion, not to simply tell people a set of rules.  I've
>> > > brought this to the public lists as it's a Linux kernel policy
>> > > discussion, not a standards one.
>> > >
>> > > Whilst I'm writing the RFC, this is my attempt to summarize a
>> > > possible position rather than necessarily my personal view.
>> > >
>> > > It's a straw man - shoot at it!
>> > >
>> > > Not everyone in this discussion is familiar with the relevant
>> > > kernel or CXL concepts, so I've provided more info than I
>> > > normally would.
>> >
>> > Thanks for writing this up Jonathan!
>> >
>> > [..]
>> > > 2) Unfiltered userspace use of mailbox for Fabric Management -
>> > >    BMC kernels
>> > > ==========================================================================
>> > >
>> > > (This would just be a kernel option that we'd advise normal server
>> > > distributions not to turn on; it would be enabled by openBMC etc.)
>> > >
>> > > This is fine - there is some work to do, but the switch-cci PCI
>> > > driver will hopefully be ready for upstream merge soon. There is
>> > > no filtering of accesses. Think of this as similar to all the
>> > > damage you can do via MCTP from a BMC. Similarly, it is likely
>> > > that much of the complexity of the actual commands will be left
>> > > to user space tooling:
>> > > https://gitlab.com/jic23/cxl-fmapi-tests has some test examples.
>> > >
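>> > > As a flavour of that tooling, here is a minimal sketch of issuing
>> > > one FM-API command from user space.  It assumes the switch-cci
>> > > driver reuses the existing CXL_MEM_SEND_COMMAND ioctl from
>> > > <linux/cxl_mem.h>; the /dev/cxl/switch0 node name is hypothetical:
>> > >
>> > >     #include <fcntl.h>
>> > >     #include <stdio.h>
>> > >     #include <sys/ioctl.h>
>> > >     #include <linux/cxl_mem.h>
>> > >
>> > >     int main(void)
>> > >     {
>> > >             struct cxl_send_command cmd = {
>> > >                     .id = CXL_MEM_COMMAND_ID_RAW,
>> > >                     /* FM-API Identify Switch Device */
>> > >                     .raw.opcode = 0x5100,
>> > >             };
>> > >             int fd = open("/dev/cxl/switch0", O_RDWR);
>> > >
>> > >             if (fd < 0 || ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd) < 0) {
>> > >                     perror("CXL_MEM_SEND_COMMAND");
>> > >                     return 1;
>> > >             }
>> > >             printf("mailbox return code: %u\n", cmd.retval);
>> > >             return 0;
>> > >     }
>> > >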
>> > > Whether Kconfig help text is strong enough to ensure this only gets
>> > > enabled for BMC targeted distros is an open question we can address
>> > > alongside an updated patch set.
>> >
>> > It is not clear to me that this material makes sense to house in
>> > drivers/ vs tools/, or even out-of-tree, if only for the maintenance
>> > burden relief of keeping the universes separated. What does the Linux
>> > kernel project get out of carrying this in mainline alongside the
>> > in-band code?
>>
>> I'm not sure what you mean by in-band.  The aim here was to discuss
>> in-band drivers for the switch CCI etc. - the same reason, from a
>> kernel point of view, that we include embedded drivers.  I'll
>> interpret "in-band" as host driven, and "not in-band" as the FM-API
>> stuff.
>>
>> > I do think the mailbox refactoring to support non-CXL use cases is
>> > interesting, but only so far as that refactoring is consumed for
>> > in-band use cases like the RAS API.
>>
>> If I read this right, I disagree with the 'only so far' bit.
>>
>> In all substantial ways we should support the BMC use case of the
>> Linux kernel at a similar level to how we support other forms of Linux
>> distros.  It may not be our target market as developers for particular
>> parts of our companies, but we should not block those who want to
>> support it.
>>
>> We should support them in drivers/ - maybe with example userspace code
>> in tools/.  Linux distros on BMCs are a big market; there are a number
>> of different distros using (and in some cases contributing to) the
>> upstream kernel. Not everyone is using openBMC, so there is not one
>> common place where downstream patches could be carried.
>> From a personal point of view, I like that for the same reasons that
>> I like there being multiple Linux server-focused distros. It's a sign
>> of a healthy ecosystem to have diverse options taking the mainline
>> kernel as their starting point.
>>
>> BMCs are just another embedded market, and like other embedded markets
>> we want to encourage upstream first etc.
>> openBMC has a policy on this:
>> https://github.com/openbmc/docs/blob/master/kernel-development.md
>> "The OpenBMC project maintains a kernel tree for use by the project.
>> The tree's general development policy is that code must be upstream
>> first." There are paths to bypass that for openBMC so it's a little
>> more relaxed than some enterprise distros (today, their policies used
>> to look very similar to this) but we should not be telling
>> them they need to carry support downstream.  If we are
>> going to tell them that, we need to be able to point at a major
>> sticking point for maintenance burden.  So far I don't see the
>> additional complexity as remotely close reaching that bar.
>>
>> So I think we do want switch-cci support, and for that matter the
>> equivalent for MHDs, in the upstream kernel.
>>
>> One place I think there is some wiggle room is the taint on use of raw
>> commands.  Leaving removal of that for BMC kernels as a patch they need
>> to carry downstream doesn't seem too burdensome. I'm sure they'll push
>> back if it is a problem for them!  So I think we can kick that question
>> into the future.
>>
>> Addressing maintenance burden, there is a question of where we split
>> the stack.  Ignore MHDs for now (I won't go into why in this forum...)
>>
>> The current proposal is as follows (simplified to ignore some sharing
>> in lookup code etc. that I can rip out if we think it might be a long
>> term problem):
>>
>>      _____________          _____________________
>>     |             |        |                     |
>>     | Switch CCI  |        |  Type 3 Driver stack|
>>     |_____________|        |_____________________|
>>            |___________________________|              Whatever GPU etc
>>                   _______|_______                   _______|______
>>                  |               |                 |              |
>>                  |  CXL MBOX     |                 | RAS API etc  |
>>                  |_______________|                 |______________|
>>                              |_____________________________|
>>                                            |
>>                                   _________|______
>>                                  |                |
>>                                  |   MMPT mbox    |
>>                                  |________________|
>>
>> Switch CCI Driver: PCI driver doing everything beyond the CXL mbox
>>               specific bit.
>> Type 3 Stack: All the normal stack, just with the CXL mailbox specific
>>               stuff factored out.  Note we can move different amounts
>>               of shared logic in here, but in essence it deals with
>>               the extra layer on top of the raw MMPT mbox.
>> MMPT Mbox:    Mailbox as per the PCI spec.
>> RAS API:      Shared RAS API specific infrastructure used by other
>>               drivers.
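>>
>> To make the split concrete, the shared layer could be little more
>> than an ops structure both mailbox users sit on.  A minimal
>> kernel-style sketch (names illustrative, not from any posted patch
>> set):
>>
>>     #include <linux/io.h>
>>     #include <linux/mutex.h>
>>     #include <linux/types.h>
>>
>>     /* MMPT transport: rings the doorbell and waits for completion;
>>      * knows nothing about command semantics.
>>      */
>>     struct mmpt_mbox {
>>             void __iomem *regs;
>>             struct mutex lock;      /* serialize mailbox users */
>>             int (*send)(struct mmpt_mbox *mbox, const void *in,
>>                         size_t in_size, void *out, size_t out_size);
>>     };
>>
>>     /* CXL layer: adds opcode/return-code handling and background
>>      * command support on top of the raw transport.  The Type 3 stack
>>      * and the Switch CCI driver would both consume this; RAS API
>>      * users would sit directly on struct mmpt_mbox.
>>      */
>>     struct cxl_mbox {
>>             struct mmpt_mbox *transport;
>>             int (*run_cmd)(struct cxl_mbox *mbox, u16 opcode,
>>                            const void *in, size_t in_size,
>>                            void *out, size_t out_size);
>>     };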
>>
>> If we see a significant maintenance burden, maybe we duplicate the
>> CXL specific MBOX layer - I can see advantages in that, as there is
>> some stuff not relevant to the Switch CCI.  There will be some
>> duplication of logic however, such as background command support
>> (which is CXL only IIUC).  We can even use a different IOCTL number
>> so the two can diverge if needed in the long run.
>>
>> e.g. If it makes it easier to get upstream, we can merrily duplicate
>> code so that only the bit common with the RAS API etc. is shared
>> (assuming they actually end up with MMPT, not the CXL mailbox, which
>> is what their current publicly available spec talks about and I
>> assume is a pre-MMPT leftover?)
>>
>>      _____________          _____________________
>>     |             |        |                     |
>>     | Switch CCI  |        |  Type 3 Driver stack|
>>     |_____________|        |_____________________|
>>            |                           |              Whatever GPU etc
>>     _______|_______             _______|_______        ______|_______
>>    |               |           |               |      |              |
>>    |  CXL MBOX     |           |  CXL MBOX     |      | RAS API etc  |
>>    |_______________|           |_______________|      |______________|
>>            |_____________________________|____________________|
>>                                          |
>>                                  ________|______
>>                                 |               |
>>                                 |   MMPT mbox   |
>>                                 |_______________|
>>
>>
>> > > (On to the one that the "debate" is about)
>> > >
>> > > 3) Unfiltered user space use of mailbox for Fabric Management -
>> > >    Distro kernels
>> > > =============================================================================
>> > > (General purpose Linux Server Distro (Red Hat, SUSE etc.))
>> > >
>> > > This is the equivalent of RAW command support on CXL Type 3 memory
>> > > devices. You can enable those in a distro kernel build despite the
>> > > scary config help text, but if you use it the kernel is tainted.
>> > > The result of the taint is to add a flag to bug reports and print
>> > > a big message to say that you've used a feature that might result
>> > > in you shooting yourself in the foot.
>> > >
>> > > The taint is there because software is not at first written to
>> > > deal with everything that can happen smoothly (e.g. surprise
>> > > removal).  It's hard to survive some of these events, so such
>> > > handling is never on the initial feature list for any bus; the
>> > > flag just indicates we have entered a world where almost all bets
>> > > are off wrt stability.  We might not know what a command does, so
>> > > we can't assess the impact (and no one trusts vendor commands to
>> > > report effects correctly in the Command Effects Log - which in
>> > > theory tells you if a command can result in problems).
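>> > >
>> > > (In code terms the guard rail is tiny.  A hedged sketch follows:
>> > > add_taint(), TAINT_USER and LOCKDEP_STILL_OK are the real kernel
>> > > interfaces; the surrounding names are illustrative.)
>> > >
>> > >     static int cxl_send_raw_cmd(struct cxl_mbox *mbox, u16 opcode,
>> > >                                 const void *in, size_t in_size)
>> > >     {
>> > >             /* Taint is one-way: from here on, bug reports carry
>> > >              * a marker showing unvalidated commands were in play.
>> > >              */
>> > >             pr_warn_once("raw CXL command submitted, tainting kernel\n");
>> > >             add_taint(TAINT_USER, LOCKDEP_STILL_OK);
>> > >
>> > >             return mbox->run_cmd(mbox, opcode, in, in_size, NULL, 0);
>> > >     }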
>> >
>> > That is a secondary reason that the taint is there. Yes, it helps
>> > upstream not waste their time on bug reports from proprietary use cases,
>> > but the effect of that is to make "raw" command mode unattractive for
>> > deploying solutions at scale. It clarifies that this interface is a
>> > debug tool that enterprise environments need not worry about.
>> >
>> > The more salient reason for the taint, speaking only for myself as a
>> > Linux kernel community member not for $employer, is to encourage open
>> > collaboration. Take firmware-update, for example: a standard
>> > command with known side effects that is inaccessible via the ioctl()
>> > path. It is placed behind an ABI that is easier to maintain and reason
>> > about. Everyone has the firmware update tool if they have the 'cat'
>> > command. Distros appreciate the fact that they do not need to ship
>> > yet another vendor device-update tool, vendors get free tooling, and
>> > end users appreciate one flow for all devices.
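>> >
>> > As a flavour of how small that "tool" is, here is a hedged user
>> > space sketch against the kernel's fw_upload sysfs ABI (the cxl-mem0
>> > directory name is hypothetical; the shell equivalent is echo 1 >
>> > loading, cat the image into data, echo 0 > loading):
>> >
>> >     #include <fcntl.h>
>> >     #include <stdio.h>
>> >     #include <unistd.h>
>> >
>> >     #define FW_DIR "/sys/class/firmware/cxl-mem0/" /* hypothetical */
>> >
>> >     static int write_file(const char *path, const void *buf, size_t len)
>> >     {
>> >             int fd = open(path, O_WRONLY);
>> >             ssize_t ret = fd < 0 ? -1 : write(fd, buf, len);
>> >
>> >             if (fd >= 0)
>> >                     close(fd);
>> >             return ret < 0 ? -1 : 0;
>> >     }
>> >
>> >     int main(void)
>> >     {
>> >             static char image[1 << 20];
>> >             FILE *f = fopen("fw.bin", "rb");
>> >             size_t len = f ? fread(image, 1, sizeof(image), f) : 0;
>> >
>> >             if (!len || write_file(FW_DIR "loading", "1", 1) ||
>> >                 write_file(FW_DIR "data", image, len) ||
>> >                 write_file(FW_DIR "loading", "0", 1)) {
>> >                     perror("firmware update");
>> >                     return 1;
>> >             }
>> >             return 0;
>> >     }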
>> >
>> > As I alluded here [1], I am not against innovation outside of the
>> > specification, but it needs to be open, and it needs to plausibly become
>> > if not a de jure standard at least a de facto standard.
>> >
>> > [1]: https://lore.kernel.org/all/CAPcyv4gDShAYih5iWabKg_eTHhuHm54vEAei8ZkcmHnPp3B0cw@mail.gmail.com/
>>
>> Agree with all this.
>>
>> >
>> > > A concern was raised about GAE/FAST/LDST tables for CXL Fabrics
>> > > (an r3.1 feature) but, as I understand it, these are intended for a
>> > > host to configure and should not have side effects on other hosts?
>> > > My working assumption is that the kernel driver stack will handle
>> > > these (once we catch up with the current feature backlog!). Currently
>> > > we have no visibility of what the OS driver stack for a fabric will
>> > > actually look like - the spec is just the starting point for that.
>> > > (patches welcome ;)
>> > >
>> > > The various CXL upstream developers and maintainers may have
>> > > differing views of course, but my current understanding is we want
>> > > to support 1 and 2, but are very resistant to 3!
>> >
>> > 1, yes; 2, need to see the patches; and agree on 3.
>>
>> If we end up with the top architecture of the diagrams above, 2 will
>> look pretty similar to the last version of the switch-cci patches - so
>> raw commands only, plus the taint.  Factoring out MMPT is another
>> layer that doesn't make that much difference in practice to this
>> discussion. Good to have, but the reuse here would be one layer above
>> that.
>>
>> Or we just go for the second proposed architecture, with zero impact
>> on the CXL specific code - just reuse of the MMPT layer.  I'd imagine
>> people will get grumpy about the code duplication (and we'll spend
>> years rejecting patch sets that try to share the code), but there
>> should be no maintenance burden as a result.
>>
>> >
>> > > General Notes
>> > > =============
>> > >
>> > > One side aspect of why we really don't like unfiltered userspace
>> > > access to any of these devices is that people start building
>> > > non-standard hacks in, and we lose the ecosystem advantages.
>> > > Forcing a considered discussion + patches to let a particular
>> > > command be supported drives standardization.
>> >
>> > Like I said above, I think this is not a side aspect. It is
>> > fundamental to the viability of Linux as a project. This project
>> > only works because organizations with competing goals realize they
>> > need some common infrastructure and that there is little to be
>> > gained by competing on the commons.
>> >
>> > >
>> > > https://lore.kernel.org/linux-cxl/CAPcyv4gDShAYih5iWabKg_eTHhuHm54vEAei8ZkcmHnPp3B0cw@mail.gmail.com/
>> > > provides some history on vendor-specific extensions and why, in
>> > > general, we won't support them upstream.
>> >
>> > Oh, you linked my writeup... I will leave the commentary I added
>> > here in case restating it helps.
>> >
>> > > To address another question raised in an earlier discussion:
>> > > putting these Fabric Management interfaces behind guard rails of
>> > > some type (e.g. CONFIG_IM_A_BMC_AND_CAN_MAKE_A_MESS) does not
>> > > increase the risk of non-standard interfaces, because we will be
>> > > even less likely to accept those upstream!
>> > >
>> > > If anyone needs more details on any aspect of this, please ask.
>> > > There are a lot of things involved and I've only tried to give a
>> > > fairly minimal illustration to drive the discussion. I may well
>> > > have missed something crucial.
>> >
>> > You captured it well, and this is open source so I may have missed
>> > something crucial as well.
>> >
>>
>> Thanks for the detailed reply!
>>
>> Jonathan
>>
>>
>>

Thread overview: 23+ messages in thread
2024-03-21 17:44 RFC: Restricting userspace interfaces for CXL fabric management Jonathan Cameron
2024-03-21 21:41 ` Sreenivas Bagalkote
2024-03-22  9:32   ` Jonathan Cameron
2024-03-22 13:24     ` Sreenivas Bagalkote
2024-04-01 16:51       ` Sreenivas Bagalkote
2024-04-06  0:04 ` Dan Williams
2024-04-10 11:45   ` Jonathan Cameron
2024-04-15 20:09     ` Sreenivas Bagalkote
2024-04-23 22:44       ` Sreenivas Bagalkote [this message]
2024-04-23 23:24         ` Greg KH
2024-04-24  0:07     ` Dan Williams
2024-04-25 11:33       ` Jonathan Cameron
2024-04-25 16:18         ` Dan Williams
2024-04-25 17:26           ` Jonathan Cameron
2024-04-25 19:25             ` Dan Williams
2024-04-26  8:45               ` Jonathan Cameron
2024-04-26 16:16                 ` Dan Williams
2024-04-26 16:53                   ` Jonathan Cameron
2024-04-26 19:25                     ` Harold Johnson
2024-04-27 11:12                       ` Greg KH
2024-04-27 16:22                         ` Dan Williams
2024-04-28  4:25                           ` Sirius
2024-04-29 12:18                       ` Jonathan Cameron
