From: Greg KH <gregkh@linuxfoundation.org>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Ira Weiny <ira.weiny@intel.com>,
	linux-cxl@vger.kernel.org, Linux PCI <linux-pci@vger.kernel.org>,
	Bjorn Helgaas <helgaas@kernel.org>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Chris Browy <cbrowy@avery-design.com>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	"Schofield, Alison" <alison.schofield@intel.com>,
	Vishal L Verma <vishal.l.verma@intel.com>,
	Linuxarm <linuxarm@huawei.com>, Fangjian <f.fangjian@huawei.com>
Subject: Re: [RFC PATCH v3 2/4] PCI/doe: Add Data Object Exchange support
Date: Mon, 17 May 2021 10:51:23 +0200
Message-ID: <YKIui/XSLtmQ1azU@kroah.com>
In-Reply-To: <20210517094045.00004d58@Huawei.com>

On Mon, May 17, 2021 at 09:40:45AM +0100, Jonathan Cameron wrote:
> On Fri, 14 May 2021 11:37:12 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > On Fri, May 14, 2021 at 1:50 AM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:
> > [..]
> > > > If it simplifies the kernel implementation to assume a single
> > > > kernel-initiator, then I think that's more than enough reason to block
> > > > out userspace, and/or provide userspace a method to get into the
> > > > kernel's queue for service.  
> > >
> > > This last suggestion makes sense to me. Let's provide a 'right' way
> > > to access the DOE from user space. I like the idea of it being possible
> > > to run CXL compliance tests from userspace whilst the driver is loaded.  
> > 
> > Ah, and I like your observation that once the kernel provides a
> > "right" way to access DOE then userspace direct-access of DOE is
> > indeed a "you get to keep the pieces" event like any other unwanted
> > userspace config-write.
> > 
> > > Bjorn, given this would be a generic PCI thing, any preference for what
> > > this interface might look like?   /dev/pcidoe[xxxxxx].i with ioctls similar
> > > to those for the BAR based CXL mailboxes?  
> > 
> > (warning, anti-ioctl bias incoming...)
> 
> I feel very similar about ioctls - my immediate thought was to shove this in
> debugfs, but that feels the wrong choice if we are trying to persuade people
> to use it instead of writing code that directly accesses the config space.
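
To make the /dev/pcidoe idea above concrete, here is a minimal sketch of what
such a character-device UAPI might look like. Every name in it (struct
pci_doe_exchange, PCI_DOE_IOCTL_EXCHANGE, the device node naming) is
hypothetical, loosely modelled on the BAR-based CXL mailbox ioctls rather than
any existing interface:

/* Hypothetical UAPI sketch only -- none of these names exist today. */
#include <linux/ioctl.h>
#include <linux/types.h>

/*
 * One request/response exchange with a single DOE mailbox, exposed via a
 * per-mailbox character device such as /dev/pcidoe0.1 (device instance 0,
 * DOE capability instance 1).
 */
struct pci_doe_exchange {
	__u16 vid;		/* DOE protocol vendor ID, e.g. 0x0001 for PCI-SIG */
	__u8  type;		/* data object type within that vendor ID */
	__u8  rsvd;
	__u32 request_dw;	/* number of DWORDs in the request payload */
	__u32 response_dw;	/* in: buffer size in DWORDs, out: DWORDs returned */
	__u64 request;		/* userspace pointer to request payload */
	__u64 response;		/* userspace pointer to response buffer */
};

#define PCI_DOE_IOCTL_EXCHANGE	_IOWR(0xDE, 0x01, struct pci_doe_exchange)
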
> 
> > 
> > Hmm, DOE has an enumeration capability, could the DOE driver use a
> > scheme to have a sysfs bin_attr per discovered object type? This would
> > make it similar to the pci-vpd sysfs interface.
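
To illustrate that pci-vpd-like option: a rough sketch of how a read-only
binary attribute could serve one discovered object type from a kernel-side
cache. The doe_object structure and the attribute naming scheme are made up
for this example; only the bin_attribute/sysfs calls are real kernel APIs.

#include <linux/fs.h>
#include <linux/pci.h>
#include <linux/sysfs.h>

struct doe_object {			/* hypothetical per-object bookkeeping */
	struct bin_attribute attr;
	const char *name;		/* e.g. "doe0_0001_02" (instance, vid, type) */
	void *data;			/* cached copy of the object */
	size_t size;
};

static ssize_t doe_object_read(struct file *filp, struct kobject *kobj,
			       struct bin_attribute *attr, char *buf,
			       loff_t off, size_t count)
{
	struct doe_object *obj = container_of(attr, struct doe_object, attr);

	/* Serve the cached object; never touch the mailbox from here. */
	return memory_read_from_buffer(buf, count, &off, obj->data, obj->size);
}

static int doe_object_add_sysfs(struct pci_dev *pdev, struct doe_object *obj)
{
	sysfs_bin_attr_init(&obj->attr);
	obj->attr.attr.name = obj->name;
	obj->attr.attr.mode = 0400;
	obj->attr.size = obj->size;
	obj->attr.read = doe_object_read;

	return sysfs_create_bin_file(&pdev->dev.kobj, &obj->attr);
}
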
> 
> We can discover the protocols, but anything beyond that is protocol
> specific.  I don't think there is enough info available by any standards
> defined method. Also part of the reason to allow a safe userspace interface
> would be to provide a generic interface for vendor protocols and things like
> CXL compliance tests where we will almost certainly never provide a more
> specific kernel interface.
> 
> Whilst sysfs would work for CDAT, some protocols are challenge-response rather
> than simple read back, and that really doesn't fit well with the sysfs model.
> If we get other protocols that are simple data read back, then I would
> advocate giving them a simple sysfs interface much like that proposed for CDAT,
> as it will always be simpler to use and self-describing.
> 
> On a lesser note it might be helpful to provide sysfs attrs for
> what protocols are supported.  The alternative is to let userspace run
> the discovery protocol. Perhaps we can do this as a later phase.
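
For reference, the discovery protocol a generic interface would have to drive
is small. A rough userspace-style sketch follows, assuming a hypothetical
doe_exchange() helper that performs one mailbox transaction; the response
field layout is my reading of the DOE discovery format (vendor ID, type, next
index), so treat the exact bit positions as an assumption to check against
the spec:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical: one request/response exchange with a single DOE mailbox. */
int doe_exchange(uint16_t vid, uint8_t type,
		 const uint32_t *req, size_t req_dw,
		 uint32_t *rsp, size_t rsp_dw);

/* Walk the DOE discovery protocol (vendor ID 0x0001, data object type 0). */
static void doe_discover_all(void)
{
	uint8_t index = 0;

	do {
		uint32_t req = index;	/* byte 0 of the request: index to query */
		uint32_t rsp = 0;

		if (doe_exchange(0x0001, 0x00, &req, 1, &rsp, 1) < 0)
			break;

		/* Response DW: [15:0] vendor ID, [23:16] type, [31:24] next index. */
		printf("protocol: vid %#06x type %#04x\n",
		       rsp & 0xffff, (rsp >> 16) & 0xff);
		index = rsp >> 24;
	} while (index);
}
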
> 
> > 
> > Then the kernel could cache objects like CDAT that don't change
> > outside of some invalidation event.
> 
> It's been a while since I last saw any conversation on sysfs bin_attrs,
> but mostly I thought the feeling was pretty strongly against them for anything
> but a few niche use cases.
> 
> Feels to me like it would break most of the usual rules in a way vpd does
> not (IIRC VPD is supposed to be simple in the sense that if you write a value
> to a writable part, you will read back the same value).
> 
> +CC Greg who is a fount of knowledge in this area (and regularly + correctly
> screams at the ways I try to abuse sysfs :)  Note I don't think Dan was
> suggesting implementing response / request directly, but I think that is
> all we could do given DOE protocols can be vendor specific and the standard
> discovery protocol doesn't let us know the fine-grained support (what commands
> within a given protocol).

sysfs binary files are ONLY for pass-through things that go to/from
userspace/hardware without the kernel touching them at all.  Like raw
PCI config descriptors.

challenge/response type stuff really does still fit the ioctl model, so
that is a viable solution if needed.
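
For completeness, a userspace caller of the hypothetical exchange ioctl
sketched earlier might look roughly like this (the header, device node name
and ioctl number are all carried over from that sketch, i.e. assumptions,
not an existing interface):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include "pci_doe_uapi.h"	/* the hypothetical header sketched above */

int main(void)
{
	uint32_t req[2] = { 0 };	/* protocol-specific request payload */
	uint32_t rsp[32] = { 0 };
	struct pci_doe_exchange ex = {
		.vid = 0x1e98,		/* e.g. the CXL table access (CDAT) protocol */
		.type = 0x02,
		.request_dw = 2,
		.response_dw = 32,
		.request = (uint64_t)(uintptr_t)req,
		.response = (uint64_t)(uintptr_t)rsp,
	};
	int fd = open("/dev/pcidoe0.1", O_RDWR);

	if (fd < 0 || ioctl(fd, PCI_DOE_IOCTL_EXCHANGE, &ex) < 0) {
		perror("doe exchange");
		return 1;
	}

	printf("got %u response dwords\n", ex.response_dw);
	close(fd);
	return 0;
}
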

thanks,

greg k-h


Thread overview: 31+ messages
2021-04-19 16:54 [RFC PATCH v3 0/4] PCI Data Object Exchange support + CXL CDAT Jonathan Cameron
2021-04-19 16:54 ` [RFC PATCH v3 1/4] PCI: Add vendor ID for the PCI SIG Jonathan Cameron
2021-04-19 16:54 ` [RFC PATCH v3 2/4] PCI/doe: Add Data Object Exchange support Jonathan Cameron
2021-05-06 21:59   ` Ira Weiny
2021-05-11 16:50     ` Jonathan Cameron
2021-05-13 21:20       ` Dan Williams
2021-05-14  8:47         ` Jonathan Cameron
2021-05-14 11:15           ` Lorenzo Pieralisi
2021-05-14 12:39             ` Jonathan Cameron
2021-05-14 18:37           ` Dan Williams
2021-05-17  8:40             ` Jonathan Cameron
2021-05-17  8:51               ` Greg KH [this message]
2021-05-17 17:21               ` Dan Williams
2021-05-18 10:04                 ` Jonathan Cameron
2021-05-19 14:18                   ` Dan Williams
2021-05-19 15:11                     ` Jonathan Cameron
2021-05-19 15:29                       ` Dan Williams
2021-05-19 16:20                         ` Jonathan Cameron
2021-05-19 16:33                           ` Jonathan Cameron
2021-05-19 16:53                             ` Dan Williams
2021-05-19 17:00                               ` Jonathan Cameron
2021-05-19 19:20                                 ` Dan Williams
2021-05-19 20:18                                   ` Jonathan Cameron
2021-05-19 23:51                                     ` Dan Williams
2021-05-20  0:16                                       ` Dan Williams
2021-05-20  8:22                                       ` Jonathan Cameron
2021-05-07  9:36   ` Jonathan Cameron
2021-05-07 23:10   ` Bjorn Helgaas
2021-05-12 12:44     ` Jonathan Cameron
2021-04-19 16:54 ` [RFC PATCH v3 3/4] cxl/mem: Add CDAT table reading from DOE Jonathan Cameron
2021-04-19 16:54 ` [RFC PATCH v3 4/4] cxl/mem: Add a debug parser for CDAT commands Jonathan Cameron
