From: Jiaqi Yan <jiaqiyan@google.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	"Luck, Tony" <tony.luck@intel.com>,
	 dave.hansen@linux.intel.com, jon.grimm@amd.com,
	vilas.sridharan@amd.com
Cc: linuxarm@huawei.com, shiju.jose@huawei.com,
	 David Rientjes <rientjes@google.com>,
	linux-acpi@vger.kernel.org, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org, rafael@kernel.org,
	lenb@kernel.org,  naoya.horiguchi@nec.com, james.morse@arm.com,
	david@redhat.com,  jthoughton@google.com, somasundaram.a@hpe.com,
	erdemaktas@google.com,  pgonda@google.com, duenwen@google.com,
	mike.malvestuto@intel.com,  gthelen@google.com,
	tanxiaofei@huawei.com, prime.zeng@hisilicon.com
Subject: Re: [RFC PATCH 2/9] memory: scrub: sysfs: Add Documentation entries for set of scrub attributes
Date: Wed, 27 Sep 2023 22:25:52 -0700
Message-ID: <CACw3F539gZc0FoJLo6VvYSyZmeWZ3Pbec7AzsH+MYUJJNzQbUQ@mail.gmail.com>
In-Reply-To: <20230922111740.000046d7@huawei.com>

On Fri, Sep 22, 2023 at 3:20 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Thu, 21 Sep 2023 17:07:04 -0700
> Jiaqi Yan <jiaqiyan@google.com> wrote:
>
> > On Fri, Sep 15, 2023 at 10:29 AM <shiju.jose@huawei.com> wrote:
> > >
> > > From: Shiju Jose <shiju.jose@huawei.com>
> > >
> > > Add sysfs documentation entries for the set of attributes that are
> > > exposed in /sys/class/scrub/ by the scrub driver. These attributes
> > > support configuring parameters of a scrub device.
> > >
> > > Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
> > > ---
> > >  .../ABI/testing/sysfs-class-scrub-configure   | 82 +++++++++++++++++++
> > >  1 file changed, 82 insertions(+)
> > >  create mode 100644 Documentation/ABI/testing/sysfs-class-scrub-configure
> > >
> > > diff --git a/Documentation/ABI/testing/sysfs-class-scrub-configure b/Documentation/ABI/testing/sysfs-class-scrub-configure
> > > new file mode 100644
> > > index 000000000000..347e2167dc62
> > > --- /dev/null
> > > +++ b/Documentation/ABI/testing/sysfs-class-scrub-configure
> > > @@ -0,0 +1,82 @@
> > > +What:          /sys/class/scrub/
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               The scrub/ class subdirectory belongs to the
> > > +               scrubber subsystem.
> > > +
> > > +What:          /sys/class/scrub/scrubX/
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               The /sys/class/scrub/scrub{0,1,2,3,...} directories
> >
> > This API (sysfs interface) is very specific to the ACPI interface
> > defined for the hardware patrol scrubber. I wonder if we can have an
> > interface that is more generic, for a couple of reasons:
>
> Agreed that it makes sense to define a broad interface.  We have
> some hardware-focused drivers we can't share yet (IP rules until
> a release date in the near future) where this is a reasonable fit
> - but indeed there are others, such as mapping this to DDR ECS,
> where it isn't a great mapping.
>
> I'd love to come up with an interface that has the right blend
> of generality and flexibility.  That is easiest done before we have
> any implementation merged.
>
> >
> > 1. I am not aware of any chip/platform hardware that implemented the
> > hardware patrol scrubber part defined in the ACPI RASF/RAS2 spec. So
> > I am curious what the RAS experts from different hardware vendors
> > think about this. For example, Tony and Dave from Intel, Jon and
> > Vilas from AMD: is there any hardware platform (if allowed to
> > disclose) that implemented ACPI RASF/RAS2? If so, will vendors
> > continue to support controlling the patrol scrubber via the ACPI
> > spec? If not (as Tony said in [1]), will vendors consider starting
> > with some future platform?
> >
> > If we are unlikely to get vendor support, creating this ACPI-specific
> > sysfs API (and the driver implementations) in Linux seems to have
> > limited value.
>
> There is a bit of a chicken-and-egg problem here. Until there is
> reasonable support in the kernel (or it looks like there will be),
> BIOS teams push back on a requirement to add the tables.
> I'd encourage no one to bother with RASF - RAS2 is much less
> ambiguous.

Here I mainly want to re-ping the folks from Intel (Tony and Dave) and
AMD (Jon and Vilas) for your opinions on RAS2.

>
> >
> > > +               correspond to each scrub device.
> > > +
> > > +What:          /sys/class/scrub/scrubX/name
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (RO) name of the memory scrub device
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/
> >
> > 2. I believe the concept of "region" here probably comes from
> > PATROL_SCRUB defined in ACPI section 5.2.20.5, "Parameter Block". It
> > is indeed powerful: if a process's physical memory spans multiple
> > memory controllers, the OS can in theory scrub the chunks of memory
> > belonging to the process. However, from a previous discussion [1],
> > "From a h/w perspective it might always be complex". IIUC, the
> > address translation from physical address to channel address is hard
> > to achieve, and that's probably one of the technical reasons the
> > ACPI patrol scrub spec is not implemented in practice?
>
> Next bit is kind of an aside as I mostly agree with your conclusions ;)
>
> This obviously depends on your memory interleave. You want to present
> physical address ranges as single controllable regions - probably
> most interesting being stuff that maps to NUMA nodes.  The short
> answer is that any firmware control will end up writing to all the
> memory controllers involved in a given PA range - firmware can easily
> establish which those are.
>
> A memory controller can support multiple scrub regions
> which map from a PA range to a particular set of RAM addresses
> - that's implementation specific. The memory controller is getting
> the host PA and can carry out appropriate hashing if it wants to.
> Many scrub solutions won't do this - in which case it's max one
> region per memory controller (mapped by firmware to one region per
> interleave set - assuming interleave sets take in whole memory
> controllers - which they normally do).
>
> I would expect existing systems (not designed with this differentiated
> scrub in mind) to only support scrub control by PA range at the
> granularity of interleave sets.
>
> Note that this general question of PA based controls also
> maps to things like resource control (resctrl) where it's only interesting
> to partition memory bandwidth such that the partition applies to the
> whole interleave set - that's done for ARM systems anyway by having
> the userspace interface pretend there is only one control, but
> write the settings to all actual controllers involved. Not sure what
> x86 does.
>
> Taking a few examples that I know of, all on a 4-socket server - with
> control of these as BIOS options ;).
> Assuming individual memory controllers don't allow scrub to be
> configured by PA range.
>
> 1. Full system memory interleave (people do this form of crazy)
>    In that case, there is only one set of firmware controls
>    that write to the interfaces of every memory controller.  Depending
>    on the interleave design that may still allow multiple regions.
>
> 2. Socket-wide memory interleave.  In that case, firmware controls
>    need to affect all memory controllers in that socket if the
>    requested 'region' maps to them.  So 4 PA regions.
>
> 3. Die-wide memory interleave.  Finer granularity of control,
>    so perhaps 8 PA regions.
>
> 4. Finer granularity (there are reasons to do this for the above-mentioned
>    bandwidth resource control, which you can only do if not interleaving
>    across multiple controllers).
>
>
>
> >
> > So my take is, control at the granularity of the memory controller is
> > probably a nice compromise. Both the OS and userspace get a pretty
> > decent amount of flexibility - start/stop/adjust the speed of
> > scrubbing on a memory controller - while not imposing too much pain
> > on hardware vendors when they provide these features in hardware. I
> > imagine these controls/features could be implemented:
> > * via hardware registers that directly or indirectly control memory
> > controllers
> > * via ACPI, if the situation changes in 10 years (the RASF/RAS2/PCC
> > drivers implemented in this patchset could then be plugged in directly)
> > * via a kernel thread that uses CPU reads to detect memory errors, if
> > hardware support is unavailable or not good enough
> >
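
To make that last bullet (the software fallback) concrete, here is a
very rough sketch of the kind of kernel thread I had in mind. Every
name below is invented for illustration (it is not code from this
series); the point is just that touching one byte per cache line forces
the line to be fetched, which lets the ECC hardware check and report it:

#include <linux/delay.h>
#include <linux/highmem.h>
#include <linux/kthread.h>
#include <linux/memory_hotplug.h>
#include <linux/mm.h>

struct sw_scrub_ctx {
	unsigned long start_pfn;	/* first page frame to scrub */
	unsigned long nr_pages;		/* length of the region in pages */
	unsigned long scrubbed;		/* progress, in pages */
	unsigned int delay_ms;		/* crude "speed" knob */
};

static int sw_scrub_thread(void *data)
{
	struct sw_scrub_ctx *ctx = data;
	unsigned long pfn = ctx->start_pfn;
	unsigned long end = ctx->start_pfn + ctx->nr_pages;

	while (!kthread_should_stop() && pfn < end) {
		struct page *page = pfn_to_online_page(pfn);

		if (page) {
			void *va = kmap_local_page(page);
			volatile u8 sink;
			size_t i;

			/* One read per cache line is enough for ECC to check it. */
			for (i = 0; i < PAGE_SIZE; i += 64)
				sink = *((volatile u8 *)va + i);
			(void)sink;
			kunmap_local(va);
		}
		ctx->scrubbed++;
		pfn++;
		if (ctx->delay_ms)
			msleep(ctx->delay_ms);
	}
	return 0;
}

Obviously a real version would need to rate-limit against memory
bandwidth and deal with offline/unmapped ranges more carefully.
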
>
> I more or less agree, but would tweak a little.
>
> 1) Allow for multiple regions per memory controller - that's needed
>    for RASF etc anyway - because as far as they are concerned there
>    is only one interface presented.
> 2) Tie the interface to interleave sets, not memory controllers.
>    NUMA nodes often being a good stand in for those.

Do you mean /sys/devices/system/node/nodeX/scrub, where scrub is a
virtual scrubbing device that maps to one or several physical scrubber
backends?

For example, starting/stopping the virtual device would mean issuing
the START/STOP command to all backends.
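
A minimal fan-out sketch of what I mean (struct scrub_backend_ops,
node_scrub, and every other name below are invented, not from this
patchset):

#include <linux/types.h>

struct scrub_backend_ops {
	int (*set_enabled)(void *priv, bool enable);
	u64 (*get_bytes_scrubbed)(void *priv);	/* used further below */
	void *priv;
};

struct node_scrub {
	int nid;			/* NUMA node this virtual device covers */
	int nr_backends;
	struct scrub_backend_ops *backends;
};

static int node_scrub_set_enabled(struct node_scrub *ns, bool enable)
{
	int i, ret;

	/* START/STOP only succeeds if every physical backend agrees. */
	for (i = 0; i < ns->nr_backends; i++) {
		ret = ns->backends[i].set_enabled(ns->backends[i].priv, enable);
		if (ret) {
			/* Roll back the backends already switched over. */
			while (--i >= 0)
				ns->backends[i].set_enabled(ns->backends[i].priv,
							    !enable);
			return ret;
		}
	}
	return 0;
}

And...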

>    Controlling memory controllers separately for parts of an interleave
>    isn't something I'd think was very useful.  This will probably get
>    messy in the future though, and the complexity could be pushed to
>    a userspace tool - as long as enough info is available elsewhere
>    in sysfs.  So the memory controller directories you propose need
>    to include info on the PA ranges that they are involved in backing

Is it acceptable if we don't provide a PA range or region in the
interface *for now*, if that complicates things a lot? I could be
wrong, but users of the scrubber seem likely to be OK with not being
able to scrub an arbitrary physical address range. In contrast, not
knowing the scrub results seems more annoying to users, so simply
giving some progress indicator, like how many bytes a scrubber has
scrubbed, would already help.
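
If we went that way, the progress attribute could just aggregate
whatever each backend reports. Again only a sketch, reusing the
invented node_scrub/scrub_backend_ops from above:

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t bytes_scrubbed_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct node_scrub *ns = dev_get_drvdata(dev);
	u64 total = 0;
	int i;

	/* Sum up per-backend progress behind the one virtual device. */
	for (i = 0; i < ns->nr_backends; i++)
		total += ns->backends[i].get_bytes_scrubbed(ns->backends[i].priv);

	return sysfs_emit(buf, "%llu\n", total);
}
static DEVICE_ATTR_RO(bytes_scrubbed);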

When we really need to support a PA range or region, the
/sys/devices/system/node/nodeX/scrub interface basically uses NUMA
node X's PA range. Then, to scrub node memory in range [PA1, PA2),
some driver that understands all backends (or can talk to all
backends' drivers) needs to translate the PA into an address in each
backend's address space; for example, [PA1, PA2) is mapped to two
device ranges, [DA11, DA12) on backend_1 and [DA21, DA22) on backend_2.
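
A sketch of that translation step (names invented again, and a real
backend's PA-to-device-address mapping could be much hairier than the
linear offset assumed here):

#include <linux/minmax.h>
#include <linux/types.h>

struct scrub_backend_win {
	u64 pa_start, pa_end;	/* host PA window this backend covers */
	int (*scrub_range)(void *priv, u64 da_start, u64 da_end);
	void *priv;
};

static int node_scrub_range(struct scrub_backend_win *wins, int nr,
			    u64 pa1, u64 pa2)
{
	int i, ret;

	for (i = 0; i < nr; i++) {
		u64 lo = max(pa1, wins[i].pa_start);
		u64 hi = min(pa2, wins[i].pa_end);

		if (lo >= hi)
			continue;	/* no overlap with this backend */

		/* Device address here is simply (PA - window base). */
		ret = wins[i].scrub_range(wins[i].priv,
					  lo - wins[i].pa_start,
					  hi - wins[i].pa_start);
		if (ret)
			return ret;
	}
	return 0;
}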

>    (which to keep things horrible, can change at runtime via memory
>     hotplug and remapping of host physical address ranges on CXL etc)

CXL memory locally attached to the host is probably more or less the
same as normal physical memory. I wonder what it would look like for
CXL memory remotely attached through a memory pool. Does it make sense
for the controller/owner of the memory pool to take responsibility for
driving scrubbing on the CXL memory controller? Does the owner need to
provide/mediate scrubbing support for the other clients using the
memory pool?

Thanks,
Jiaqi

>
> > Given these possible backends of scrubbing, I think a more generic
> > sysfs API that covers and abstracts these backends would be more
> > valuable right now. I haven’t thought it through thoroughly, but how
> > about defining the top-level interface as something like
> > “/sys/devices/system/memory_controller_scrubX/” or
> > “/sys/class/memory_controllerX/scrub”?
>
> No particular harm in the rename of the directory I guess.
> Though some of those 'memory_controllers' would be virtual as they
> wouldn't correspond to actual memory controllers but rather to
> sets of them.
>
> Jonathan
>
> >
> > [1] https://lore.kernel.org/linux-mm/SJ1PR11MB6083BF93E9A88E659CED5EC4FC3F9@SJ1PR11MB6083.namprd11.prod.outlook.com/T/#m13516ee35caa05b506080ae805bee14f9f958d43
>
> >
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               The /sys/class/scrub/scrubX/region{0,1,2,3,...}
> > > +               directories correspond to each scrub region under a scrub device.
> > > +               A scrub region is a physical address range for which scrubbing
> > > +               may be separately controlled. Regions may overlap, in which case
> > > +               the scrubbing rate of the overlapping memory will be at least
> > > +               the rate expected from each of the overlapping regions.
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/addr_base
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (RW) The base of the address range of the memory region
> > > +               to be patrol scrubbed.
> > > +               On reading, returns the base of the memory region for
> > > +               the actual address range (the platform calculates
> > > +               the nearest patrol scrub boundary address from which
> > > +               it can start scrubbing).
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/addr_size
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (RW) The size of the address range to be patrol scrubbed.
> > > +               On reading, returns the size of the memory region for
> > > +               the actual address range.
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/enable
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (WO) Start/stop scrubbing the memory region.
> > > +               1 - enable the memory scrubbing.
> > > +               0 - disable the memory scrubbing.
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/speed_available
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (RO) Supported range for the patrol speed (scrub rate)
> > > +               of the scrubber for a memory region.
> > > +               The unit of the scrub rate varies depending on the scrubber.
> > > +
> > > +What:          /sys/class/scrub/scrubX/regionY/speed
> > > +Date:          September 2023
> > > +KernelVersion: 6.7
> > > +Contact:       linux-kernel@vger.kernel.org
> > > +Description:
> > > +               (RW) The patrol speed (scrub rate) for the specified memory region;
> > > +               it must be within the range supported by the scrubber.
> > > +               The unit of the scrub rate varies depending on the scrubber.
> > > --
> > > 2.34.1
> > >
> > >
> >
>
>


Thread overview: 32+ messages
2023-09-15 17:28 [RFC PATCH 0/9] ACPI:RASF: Add support for ACPI RASF, ACPI RAS2 and configure scrubbers shiju.jose
2023-09-15 17:28 ` [RFC PATCH 1/9] memory: scrub: Add scrub driver supports configuring memory scrubbers in the system shiju.jose
2023-09-15 17:28 ` [RFC PATCH 2/9] memory: scrub: sysfs: Add Documentation entries for set of scrub attributes shiju.jose
2023-09-22  0:07   ` Jiaqi Yan
2023-09-22 10:20     ` Jonathan Cameron
2023-09-28  5:25       ` Jiaqi Yan [this message]
2023-09-28 13:14         ` Jonathan Cameron
2023-10-05  3:18         ` David Rientjes
2023-10-06 13:02           ` Jonathan Cameron
2023-10-06 13:06             ` Sridharan, Vilas
2023-10-11 16:35               ` Jonathan Cameron
2023-10-12 13:41                 ` Sridharan, Vilas
2023-10-12 15:02                   ` Jonathan Cameron
2023-10-12 15:44                     ` Sridharan, Vilas
2023-10-13  9:07                       ` Jonathan Cameron
2023-09-15 17:28 ` [RFC PATCH 3/9] Documentation/scrub-configure.rst: Add documentation for scrub driver shiju.jose
2023-09-18  7:23   ` David Hildenbrand
2023-09-18 10:25     ` Shiju Jose
2023-09-18 12:15       ` David Hildenbrand
2023-09-18 12:28         ` Jonathan Cameron
2023-09-18 12:34           ` David Hildenbrand
2023-09-18 15:03             ` Shiju Jose
2023-09-15 17:28 ` [RFC PATCH 4/9] ACPI:RASF: Add extract RASF table to register RASF platform devices shiju.jose
2023-09-15 17:28 ` [RFC PATCH 5/9] ACPI:RASF: Add common library for RASF and RAS2 PCC interfaces shiju.jose
2023-09-15 17:28 ` [RFC PATCH 6/9] memory: RASF: Add memory RASF driver shiju.jose
2023-09-15 17:28 ` [RFC PATCH 7/9] ACPICA: ACPI 6.5: Add support for RAS2 table shiju.jose
2023-09-15 17:28 ` [RFC PATCH 8/9] ACPI:RAS2: Add driver for ACPI RAS2 feature table (RAS2) shiju.jose
2023-09-15 17:28 ` [RFC PATCH 9/9] memory: RAS2: Add memory RAS2 driver shiju.jose
2023-09-17 21:14 ` [RFC PATCH 0/9] ACPI:RASF: Add support for ACPI RASF, ACPI RAS2 and configure scrubbers Jiaqi Yan
2023-09-18 10:19   ` Shiju Jose
2023-09-18 17:47     ` Jiaqi Yan
2023-09-19  8:28       ` Shiju Jose
