From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: linux-cxl@vger.kernel.org, Linux PCI <linux-pci@vger.kernel.org>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Bjorn Helgaas <helgaas@kernel.org>,
	Chris Browy <cbrowy@avery-design.com>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	"Schofield, Alison" <alison.schofield@intel.com>,
	Vishal L Verma <vishal.l.verma@intel.com>,
	"Weiny, Ira" <ira.weiny@intel.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Linuxarm <linuxarm@huawei.com>, Fangjian <f.fangjian@huawei.com>
Subject: Re: [RFC PATCH 2/2] cxl/mem: Add CDAT table reading from DOE
Date: Tue, 16 Mar 2021 18:42:38 -0700
Message-ID: <CAPcyv4ikiWau7J-uc7yj85VCfPrc2ivnJ81fSnkEN0SXciDUyg@mail.gmail.com>
In-Reply-To: <20210316103647.00002f4b@Huawei.com>

On Tue, Mar 16, 2021 at 3:38 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Mon, 15 Mar 2021 15:00:08 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > On Wed, Mar 10, 2021 at 10:15 AM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:
> > >
> > > On Thu, 11 Mar 2021 02:03:06 +0800
> > > Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote:
> > >
> > > > This patch simply provides some debug printouts of the entries
> > > > at probe time, plus a sysfs binary attribute to allow dumping of
> > > > the whole table.
> > > >
> > > > Binary dumping is modelled on /sys/firmware/acpi/tables/
> > > >
> > > > The ability to dump this table will be very useful for emulating
> > > > real devices once they become available, since QEMU CXL type 3
> > > > device emulation will be able to load the file back in.
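
For reference, a minimal sketch of such a binary attribute, assuming the
driver caches the table at probe time (struct cxl_mem, cdat_table and
cdat_length are illustrative names, not the posted patch):

static ssize_t cdat_read(struct file *filp, struct kobject *kobj,
			 struct bin_attribute *bin_attr, char *buf,
			 loff_t offset, size_t count)
{
	struct device *dev = kobj_to_dev(kobj);
	struct cxl_mem *cxlm = dev_get_drvdata(dev);	/* hypothetical */

	/* CDAT fetched over DOE and cached at probe time */
	return memory_read_from_buffer(buf, count, &offset,
				       cxlm->cdat_table, cxlm->cdat_length);
}

static const struct bin_attribute cdat_attr = {
	.attr = { .name = "CDAT", .mode = 0400 },	/* root-only read */
	.read = cdat_read,
};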
> > > >
> > > > Open questions:
> > > > * No support here for table updates. Worth including these from the
> > > >   start, or leave that complexity for later?
> > > > * Worth logging the reported info for debug, or is the binary attribute
> > > >   sufficient?  The larger question of whether to expose this info to
> > > >   userspace at all is left for another day!
> > > > * Where to put the CDAT file?  Is it worth a subdirectory?
> > > > * What is the maximum size of the SSLBIS entry?  I haven't quite managed
> > > >   to figure that out, and it is the record with the largest size.
> > > >   We could support dynamic allocation of the record size, but that
> > > >   would add complexity that seems unnecessary.
> > > >   It would not be compliant with the specification for a type 3 memory
> > > >   device to report this record anyway, so I'm not that worried about it
> > > >   for now.  It will become relevant once we have support for reading
> > > >   CDAT from CXL switches.
> > > > * cdat.h is formatted in a similar style to pci_regs.h, on the basis
> > > >   that it may well be helpful to share this header with userspace tools
> > > >   (see the sketch after this list).
> > > > * Move the generic parts of this out to drivers/cxl/cdat.c, or leave
> > > >   that until we have other CXL drivers wishing to use this?
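
A sketch of that pci_regs.h style, following the common header layout in
the CDAT specification (field positions here are read from the spec and
should be treated as illustrative, not as the posted header):

/* CDAT table header, 4 dwords */
#define CDAT_HEADER_LENGTH_DW		4
#define CDAT_HEADER_DW0_LENGTH		0xffffffff
#define CDAT_HEADER_DW1_REVISION	0x000000ff
#define CDAT_HEADER_DW1_CHECKSUM	0x0000ff00
#define CDAT_HEADER_DW3_SEQUENCE	0xffffffff

/* Each structure begins with a common header: type, reserved, length */
#define CDAT_STRUCTURE_DW0_TYPE		0x000000ff
#define   CDAT_STRUCTURE_DW0_TYPE_DSMAS		0
#define   CDAT_STRUCTURE_DW0_TYPE_DSLBIS	1
#define   CDAT_STRUCTURE_DW0_TYPE_DSMSCIS	2
#define   CDAT_STRUCTURE_DW0_TYPE_DSIS		3
#define   CDAT_STRUCTURE_DW0_TYPE_DSEMTS	4
#define   CDAT_STRUCTURE_DW0_TYPE_SSLBIS	5
#define CDAT_STRUCTURE_DW0_LENGTH	0xffff0000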
> > >
> > > Naturally I remembered another open question within 10 seconds of sending :(
> > >
> > >   * Do we want to add any sort of header to the raw dump of CDAT to aid
> > >     tooling?  Whilst it looks a little like an ACPI table, it doesn't
> > >     have a signature.
> > >
> > > My gut feeling is no, because the CDAT specification doesn't define one,
> > > but I can see that it might be very convenient to have something that
> > > identifies the data once it has been put in a file.
> >
> > I'm not yet convinced raw dumping is worth it, for the same reason that
> > command payload logging was eliminated from the v5.12-rc1 submission.
> > There's not much userspace can do with the information besides debug
> > the kernel's behavior. If the kernel assigns a NUMA node to target a
> > given CXL memory range via the NUMA APIs, then HMEM_REPORTING should
> > enumerate the properties. In other words, don't expand the userspace
> > ABI problem; funnel users to the canonical source for such data.
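
For reference, the HMEM_REPORTING properties in question surface under the
node "access" class; per Documentation/admin-guide/mm/numaperf.rst the
per-target-node paths look like (Y is the target node number):

	/sys/devices/system/node/nodeY/access0/initiators/read_bandwidth
	/sys/devices/system/node/nodeY/access0/initiators/read_latency
	/sys/devices/system/node/nodeY/access0/initiators/write_bandwidth
	/sys/devices/system/node/nodeY/access0/initiators/write_latency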
>
> As someone who finds raw dumping of ACPI tables extremely helpful in
> everyday use for debugging some of our 'interesting' hardware, I know I'm
> going to end up carrying that element locally anyway.  I don't have a
> particular problem doing so if we decide not to upstream it.
>
> Much like the ACPI table dumping, it's not an interface you expect
> userspace to ever use, and I fully agree that we should expose things
> properly as you describe.
>
> Short term, my interest here is to get the DOE code upstream as step 1 of
> moving to a full solution.  The printing and dumping are really just PoC
> elements to prove out the interface.  Any issue with putting the prints
> under _dbg()?
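
Taking _dbg() to mean the dev_dbg() family, a sketch of the kind of
per-record print in question (the dsmas field names follow the CDAT DSMAS
layout but are illustrative):

	dev_dbg(dev, "CDAT DSMAS: handle=%u flags=%#x dpa=%#llx len=%#llx\n",
		dsmas->handle, dsmas->flags, dsmas->dpa_base,
		dsmas->dpa_length);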

debugfs_create_blob()? Although debugfs makes it annoying to support
per-device blobs. I could get on board with a root-only sysfs
attribute, but using a static DEVICE_ATTR_ADMIN_RO()... more comments
incoming in a review of the patch.
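
A minimal sketch of that debugfs alternative, reusing the hypothetical
cached table from above:

	static struct debugfs_blob_wrapper cdat_blob;

	/* cxlm and cxlm->debugfs_dir are illustrative names */
	cdat_blob.data = cxlm->cdat_table;
	cdat_blob.size = cxlm->cdat_length;
	debugfs_create_blob("CDAT", 0400, cxlm->debugfs_dir, &cdat_blob);

The static wrapper is exactly what makes per-device blobs annoying: one
debugfs_blob_wrapper has to be allocated and tracked per device.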


Thread overview: 23+ messages
2021-03-10 18:03 [RFC PATCH 0/2] PCI Data Object Exchange support + CXL CDAT Jonathan Cameron
2021-03-10 18:03 ` [RFC PATCH 1/2] PCI/doe: Initial support PCI Data Object Exchange Jonathan Cameron
2021-03-15 19:45   ` Dan Williams
2021-03-16 16:29     ` Jonathan Cameron
2021-03-16 16:57       ` Jonathan Cameron
2021-03-16 18:14       ` Dan Williams
2021-03-16 23:26         ` Chris Browy
2021-03-18  1:30           ` Dan Williams
2021-03-18 14:25             ` Jonathan Cameron
2021-06-17 17:12               ` Jonathan Cameron
2021-06-17 19:48                 ` Chris Browy
2021-03-23 18:22         ` Jonathan Cameron
2021-03-23 18:57           ` Dan Williams
2021-03-10 18:03 ` [RFC PATCH 2/2] cxl/mem: Add CDAT table reading from DOE Jonathan Cameron
2021-03-10 18:14   ` Jonathan Cameron
2021-03-10 22:59     ` Chris Browy
2021-03-15 22:00     ` Dan Williams
2021-03-16 10:36       ` Jonathan Cameron
2021-03-17  1:42         ` Dan Williams [this message]
2021-03-17  1:55   ` Dan Williams
     [not found] <CAAJ9+9fq1=EcOaSoo3oD_5QjYNAv6PPDjKS+gC9o7XDp2p1XpQ@mail.gmail.com>
2022-01-12 22:16 ` Zayd Qumsieh
2022-01-13  9:25   ` Jonathan Cameron
2022-01-15  0:15     ` Zayd Qumsieh
