From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
To: <linux-cxl@vger.kernel.org>,
Dan Williams <dan.j.williams@intel.com>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Ben Widawsky <bwidawsk@kernel.org>,
<linux-perf-users@vger.kernel.org>, Will Deacon <will@kernel.org>,
Mark Rutland <mark.rutland@arm.com>
Cc: <linuxarm@huawei.com>
Subject: [RFC PATCH v2 0/4] CXL 3.0 Performance Monitoring Unit support
Date: Wed, 24 Aug 2022 11:36:13 +0100
Message-ID: <20220824103617.21781-1-Jonathan.Cameron@huawei.com>
v2:
- Fix up various build issues, mostly 32-bit related (e.g. BIT_ULL
  needed in various places), found by the kernel test robot <lkp@intel.com>
The CXL rev 3.0 specification introduces a CXL Performance Monitoring
Unit definition. CXL components may have any number of these blocks.
The definition is highly flexible, but that flexibility brings
complexity to the driver.
Initially posted as an RFC for a number of reasons.
1) The QEMU model against which this was developed needs tidying up and
review for correctness. I'll reply with a link to that thread once
the QEMU code has been posted for review.
2) There are quite a lot of corner cases that will need working through
with variants of the model, or I'll have to design a pathological
set of CPMUs to hit all the corner cases in one go.
3) I'm not sure it makes sense to hang this off the cxl/pci driver but
couldn't really figure out where else in the current structure we could
make it fit cleanly.
4) The interrupt initialization code is something we talked about for DOE
but in the end DOE interrupt support was dropped (for now). It requires
the cxl/pci driver to do a small amount of parsing of registers otherwise
only relevant to the CPMU driver in order to establish what interrupt
vector the CPMU is using and hence ensure the cxl/pci driver requests
   sufficient vectors. Given this affects how other interrupts will be
   handled in cxl/pci, we need to confirm the handling is general enough
   to not need a complete rewrite when we add another interrupt use case.
5) I'm not sure how to expose to user space the sets of events that may
be summed (given by a mask in the Counter Event Capabilities registers).
   For now the driver advertises the individual events. Each individual
   event may, for example, form part of multiple overlapping groups.
   It may be that the allowed combinations are only discoverable by
   requesting a combination and checking for errors on start.
6) Driver location. In the past, perf maintainers have requested that
   perf drivers for PCI devices etc. live under drivers/perf. That
   would require moving some
CXL headers to be more generally visible, but is certainly possible
if there is agreement between CXL and perf maintainers on the correct
location.
7) Documentation needs improving, but I didn't want to spend too much
time on that whilst we have so many open questions. I'll separately
raise the question about pmu->dev parenting which is mentioned in the
Docs patch introduction.
CXL rev 3.0 specification available from https://www.computeexpresslink.org
Jonathan Cameron (4):
cxl: Add function to count regblocks of a given type.
cxl/pci: Find and register CXL PMU devices
cxl: CXL Performance Monitoring Unit driver
  docs: perf: Minimal introduction to the CXL PMU device and driver.
Documentation/admin-guide/perf/cxl.rst | 60 ++
Documentation/admin-guide/perf/index.rst | 1 +
drivers/cxl/Kconfig | 12 +
drivers/cxl/Makefile | 1 +
drivers/cxl/core/Makefile | 1 +
drivers/cxl/core/core.h | 3 +
drivers/cxl/core/cpmu.c | 69 ++
drivers/cxl/core/pci.c | 2 +-
drivers/cxl/core/port.c | 4 +-
drivers/cxl/core/regs.c | 64 +-
drivers/cxl/cpmu.c | 945 +++++++++++++++++++++++
drivers/cxl/cpmu.h | 54 ++
drivers/cxl/cxl.h | 16 +
drivers/cxl/cxlpci.h | 1 +
drivers/cxl/pci.c | 78 +-
15 files changed, 1304 insertions(+), 7 deletions(-)
create mode 100644 Documentation/admin-guide/perf/cxl.rst
create mode 100644 drivers/cxl/core/cpmu.c
create mode 100644 drivers/cxl/cpmu.c
create mode 100644 drivers/cxl/cpmu.h
--
2.32.0
Thread overview: 12+ messages
2022-08-24 10:36 Jonathan Cameron [this message]
2022-08-24 10:36 ` [RFC PATCH v2 1/4] cxl: Add function to count regblocks of a given type Jonathan Cameron
2022-09-22 20:19 ` Dave Jiang
2022-08-24 10:36 ` [RFC PATCH v2 2/4] cxl/pci: Find and register CXL PMU devices Jonathan Cameron
2022-09-01 22:36 ` Dave Jiang
2022-10-18 11:19 ` Jonathan Cameron
2022-10-21 17:26 ` Dave Jiang
2022-08-24 10:36 ` [RFC PATCH v2 3/4] cxl: CXL Performance Monitoring Unit driver Jonathan Cameron
2022-09-22 20:19 ` Dave Jiang
2022-10-18 11:26 ` Jonathan Cameron
2022-08-24 10:36 ` [RFC PATCH v2 4/4] docs: perf: Minimal introduction to the CXL PMU device and driver Jonathan Cameron
2022-09-22 20:41 ` Dave Jiang