From: Shiju Jose <shiju.jose@huawei.com>
To: fan <nifan.cxl@gmail.com>
Cc: "linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"dave@stgolabs.net" <dave@stgolabs.net>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
"dave.jiang@intel.com" <dave.jiang@intel.com>,
"alison.schofield@intel.com" <alison.schofield@intel.com>,
"vishal.l.verma@intel.com" <vishal.l.verma@intel.com>,
"ira.weiny@intel.com" <ira.weiny@intel.com>,
"dan.j.williams@intel.com" <dan.j.williams@intel.com>,
"linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"david@redhat.com" <david@redhat.com>,
"Vilas.Sridharan@amd.com" <Vilas.Sridharan@amd.com>,
"leo.duran@amd.com" <leo.duran@amd.com>,
"Yazen.Ghannam@amd.com" <Yazen.Ghannam@amd.com>,
"rientjes@google.com" <rientjes@google.com>,
"jiaqiyan@google.com" <jiaqiyan@google.com>,
"tony.luck@intel.com" <tony.luck@intel.com>,
"Jon.Grimm@amd.com" <Jon.Grimm@amd.com>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
"rafael@kernel.org" <rafael@kernel.org>,
"lenb@kernel.org" <lenb@kernel.org>,
"naoya.horiguchi@nec.com" <naoya.horiguchi@nec.com>,
"james.morse@arm.com" <james.morse@arm.com>,
"jthoughton@google.com" <jthoughton@google.com>,
"somasundaram.a@hpe.com" <somasundaram.a@hpe.com>,
"erdemaktas@google.com" <erdemaktas@google.com>,
"pgonda@google.com" <pgonda@google.com>,
"duenwen@google.com" <duenwen@google.com>,
"mike.malvestuto@intel.com" <mike.malvestuto@intel.com>,
"gthelen@google.com" <gthelen@google.com>,
"wschwartz@amperecomputing.com" <wschwartz@amperecomputing.com>,
"dferguson@amperecomputing.com" <dferguson@amperecomputing.com>,
tanxiaofei <tanxiaofei@huawei.com>,
"Zengtao (B)" <prime.zeng@hisilicon.com>,
"kangkang.shen@futurewei.com" <kangkang.shen@futurewei.com>,
wanghuiqiang <wanghuiqiang@huawei.com>,
Linuxarm <linuxarm@huawei.com>,
"fan.ni@samsung.com" <fan.ni@samsung.com>
Subject: RE: [RFC PATCH v5 04/12] cxl/memscrub: Add CXL device patrol scrub control feature
Date: Fri, 16 Feb 2024 12:22:56 +0000
Message-ID: <86ac936adec1415193ce6cd352c19d71@huawei.com>
In-Reply-To: <Zc6wr2mh7Ie1-QnC@debian>
Hi Fan,
Thanks for the feedback.
>-----Original Message-----
>From: fan <nifan.cxl@gmail.com>
>Sent: 16 February 2024 00:48
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: linux-cxl@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
>mm@kvack.org; dave@stgolabs.net; Jonathan Cameron
><jonathan.cameron@huawei.com>; dave.jiang@intel.com;
>alison.schofield@intel.com; vishal.l.verma@intel.com; ira.weiny@intel.com;
>dan.j.williams@intel.com; linux-edac@vger.kernel.org; linux-
>kernel@vger.kernel.org; david@redhat.com; Vilas.Sridharan@amd.com;
>leo.duran@amd.com; Yazen.Ghannam@amd.com; rientjes@google.com;
>jiaqiyan@google.com; tony.luck@intel.com; Jon.Grimm@amd.com;
>dave.hansen@linux.intel.com; rafael@kernel.org; lenb@kernel.org;
>naoya.horiguchi@nec.com; james.morse@arm.com; jthoughton@google.com;
>somasundaram.a@hpe.com; erdemaktas@google.com; pgonda@google.com;
>duenwen@google.com; mike.malvestuto@intel.com; gthelen@google.com;
>wschwartz@amperecomputing.com; dferguson@amperecomputing.com;
>tanxiaofei <tanxiaofei@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
>kangkang.shen@futurewei.com; wanghuiqiang <wanghuiqiang@huawei.com>;
>Linuxarm <linuxarm@huawei.com>; fan.ni@samsung.com
>Subject: Re: [RFC PATCH v5 04/12] cxl/memscrub: Add CXL device patrol scrub
>control feature
>
>On Thu, Jan 11, 2024 at 09:17:33PM +0800, shiju.jose@huawei.com wrote:
>> From: Shiju Jose <shiju.jose@huawei.com>
>>
>> CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub
>> control feature. The device patrol scrub proactively locates and makes
>> corrections to errors on a regular cycle. The patrol scrub control
>> allows the requester to configure the patrol scrub input configurations.
>>
>> The patrol scrub control allows the requester to specify the number of
>> hours within which the patrol scrub cycle must be completed, provided
>> that the requested number is not less than the minimum number of hours
>> for the patrol scrub cycle that the device is capable of. In addition,
>> the patrol scrub control allows the host to disable and enable the
>> feature, for example when the feature must be turned off for
>> performance-aware operations that require background operations to be
>> disabled.
>>
>> Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
>> ---
>> drivers/cxl/Kconfig | 17 +++
>> drivers/cxl/core/Makefile | 1 +
>> drivers/cxl/core/memscrub.c | 266 ++++++++++++++++++++++++++++++++++++
>> drivers/cxl/cxlmem.h | 8 ++
>> drivers/cxl/pci.c | 5 +
>> 5 files changed, 297 insertions(+)
>> create mode 100644 drivers/cxl/core/memscrub.c
>>
>> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
>> index 8ea1d340e438..67d88f9bf52b 100644
>> --- a/drivers/cxl/Kconfig
>> +++ b/drivers/cxl/Kconfig
>> @@ -154,4 +154,21 @@ config CXL_PMU
>> monitoring units and provide standard perf based interfaces.
>>
>> If unsure say 'm'.
>> +
>> +config CXL_SCRUB
>> + bool "CXL: Memory scrub feature"
>> + depends on CXL_PCI
>> + depends on CXL_MEM
>> + help
>> + The CXL memory scrub control is an optional feature that allows the
>> + host to control the scrub configuration of CXL Type 3 devices which
>> + support patrol scrub and/or DDR5 ECS (Error Check Scrub).
>> +
>> + Say 'y/n' to enable/disable the CXL memory scrub driver that will
>> + attach to CXL.mem devices for memory scrub control feature. See
>> + sections 8.2.9.9.11.1 and 8.2.9.9.11.2 in the CXL 3.1 specification
>> + for a detailed description of CXL memory scrub control features.
>> +
>> + If unsure say 'n'.
>> +
>> endif
>> diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
>> index 1f66b5d4d935..99e3202f868f 100644
>> --- a/drivers/cxl/core/Makefile
>> +++ b/drivers/cxl/core/Makefile
>> @@ -15,3 +15,4 @@ cxl_core-y += hdm.o
>> cxl_core-y += pmu.o
>> cxl_core-$(CONFIG_TRACING) += trace.o
>> cxl_core-$(CONFIG_CXL_REGION) += region.o
>> +cxl_core-$(CONFIG_CXL_SCRUB) += memscrub.o
>> diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c
>> new file mode 100644
>> index 000000000000..e0d482b0bf3a
>> --- /dev/null
>> +++ b/drivers/cxl/core/memscrub.c
>> @@ -0,0 +1,266 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +/*
>> + * cxl_memscrub.c - CXL memory scrub driver
>> + *
>> + * Copyright (c) 2023 HiSilicon Limited.
>> + *
>> + * - Provides functions to configure patrol scrub
>> + * feature of the CXL memory devices.
>> + */
>> +
>> +#define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt
>> +
>> +#include <cxlmem.h>
>> +
>> +/* CXL memory scrub feature common definitions */
>> +#define CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH 128
>> +
>> +static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd,
>> + const uuid_t *feat_uuid, struct cxl_mbox_supp_feat_entry *feat_entry_out)
>> +{
>> + struct cxl_mbox_get_supp_feats_out *feats_out __free(kvfree) = NULL;
>> + struct cxl_mbox_supp_feat_entry *feat_entry;
>> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
>> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> + struct cxl_mbox_get_supp_feats_in pi;
>> + int feat_index, count;
>> + int nentries;
>> + int ret;
>> +
>> + feat_index = 0;
>> + pi.count = sizeof(struct cxl_mbox_get_supp_feats_out) +
>> + sizeof(struct cxl_mbox_supp_feat_entry);
>> + feats_out = kvmalloc(pi.count, GFP_KERNEL);
>> + if (!feats_out)
>> + return -ENOMEM;
>> +
>> + do {
>> + pi.start_index = feat_index;
>> + memset(feats_out, 0, pi.count);
>> + ret = cxl_get_supported_features(mds, &pi, feats_out);
>> + if (ret)
>> + return ret;
>> +
>> + nentries = feats_out->entries;
>> + if (!nentries)
>> + break;
>> +
>> + /* Check CXL memdev supports the feature */
>> + feat_entry = (void *)feats_out->feat_entries;
>> + for (count = 0; count < nentries; count++, feat_entry++) {
>> + if (uuid_equal(&feat_entry->uuid, feat_uuid)) {
>> + memcpy(feat_entry_out, feat_entry, sizeof(*feat_entry_out));
>> + return 0;
>> + }
>> + }
>> + feat_index += nentries;
>> + } while (nentries);
>> +
>> + return -ENOTSUPP;
>> +}
>> +
>> +/* CXL memory patrol scrub control definitions */
>> +#define CXL_MEMDEV_PS_GET_FEAT_VERSION 0x01
>> +#define CXL_MEMDEV_PS_SET_FEAT_VERSION 0x01
>> +
>> +static const uuid_t cxl_patrol_scrub_uuid =
>> + UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e,
>> + 0x06, 0xdb, 0x8a);
>> +
>> +/* CXL memory patrol scrub control functions */
>> +struct cxl_patrol_scrub_context {
>> + struct device *dev;
>> + u16 get_feat_size;
>> + u16 set_feat_size;
>> + bool scrub_cycle_changeable;
>> +};
>> +
>> +/**
>> + * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure.
>> + * @enable: [IN] enable(1)/disable(0) patrol scrub.
>> + * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable.
>> + * @rate: [IN] Requested patrol scrub cycle in hours.
>> + *        [OUT] Current patrol scrub cycle in hours.
>> + * @min_rate: [OUT] minimum patrol scrub cycle, in hours, supported.
>> + * @rate_avail: [OUT] Supported patrol scrub cycle in hours.
>> + */
>> +struct cxl_memdev_ps_params {
>> + bool enable;
>> + bool scrub_cycle_changeable;
>> + u16 rate;
>> + u16 min_rate;
>> + char rate_avail[CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH];
>> +};
>> +
>> +enum {
>> + CXL_MEMDEV_PS_PARAM_ENABLE = 0,
>> + CXL_MEMDEV_PS_PARAM_RATE,
>> +};
>> +
>> +#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK BIT(0)
>> +#define CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK BIT(1)
>> +#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK GENMASK(7, 0)
>> +#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK GENMASK(15, 8)
>> +#define CXL_MEMDEV_PS_FLAG_ENABLED_MASK BIT(0)
>> +
>> +struct cxl_memdev_ps_feat_read_attrbs {
>> + u8 scrub_cycle_cap;
>> + __le16 scrub_cycle;
>> + u8 scrub_flags;
>> +} __packed;
>> +
>> +struct cxl_memdev_ps_set_feat_pi {
>> + struct cxl_mbox_set_feat_in pi;
>> + u8 scrub_cycle_hr;
>> + u8 scrub_flags;
>> +} __packed;
>> +
>> +static int cxl_mem_ps_get_attrbs(struct device *dev,
>> + struct cxl_memdev_ps_params *params)
>> +{
>> + struct cxl_memdev_ps_feat_read_attrbs *rd_attrbs __free(kvfree) = NULL;
>> + struct cxl_mbox_get_feat_in pi = {
>> + .uuid = cxl_patrol_scrub_uuid,
>> + .offset = 0,
>> + .count = sizeof(struct cxl_memdev_ps_feat_read_attrbs),
>> + .selection = CXL_GET_FEAT_SEL_CURRENT_VALUE,
>> + };
>> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
>> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
>> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> + int ret;
>> +
>> + if (!mds)
>> + return -EFAULT;
>> +
>> + rd_attrbs = kvmalloc(pi.count, GFP_KERNEL);
>> + if (!rd_attrbs)
>> + return -ENOMEM;
>> +
>> + ret = cxl_get_feature(mds, &pi, rd_attrbs);
>> + if (ret) {
>> + params->enable = 0;
>> + params->rate = 0;
>> + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH,
>> + "Unavailable");
>> + return ret;
>> + }
>> + params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK,
>> + rd_attrbs->scrub_cycle_cap);
>> + params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
>> + rd_attrbs->scrub_flags);
>> + params->rate = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
>> + rd_attrbs->scrub_cycle);
>> + params->min_rate = FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK,
>> + rd_attrbs->scrub_cycle);
>> + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH,
>> + "Minimum scrub cycle = %d hour", params->min_rate);
>> +
>> + return 0;
>> +}
>> +
>> +static int __maybe_unused
>> +cxl_mem_ps_set_attrbs(struct device *dev, struct cxl_memdev_ps_params *params,
>> + u8 param_type)
>> +{
>> + struct cxl_memdev_ps_set_feat_pi set_pi = {
>> + .pi.uuid = cxl_patrol_scrub_uuid,
>> + .pi.flags = CXL_SET_FEAT_FLAG_MOD_VALUE_SAVED_ACROSS_RESET |
>> + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER,
>> + .pi.offset = 0,
>> + .pi.version = CXL_MEMDEV_PS_SET_FEAT_VERSION,
>> + };
>> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
>> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
>> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> + struct cxl_memdev_ps_params rd_params;
>> + int ret;
>> +
>> + if (!mds)
>> + return -EFAULT;
>> +
>> + ret = cxl_mem_ps_get_attrbs(dev, &rd_params);
>> + if (ret) {
>> + dev_err(dev, "Get cxlmemdev patrol scrub params fail ret=%d\n", ret);
>> + return ret;
>> + }
>> +
>> + switch (param_type) {
>> + case CXL_MEMDEV_PS_PARAM_ENABLE:
>> + set_pi.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
>> + params->enable);
>> + set_pi.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
>> + rd_params.rate);
>> + break;
>> + case CXL_MEMDEV_PS_PARAM_RATE:
>> + if (params->rate < rd_params.min_rate) {
>> + dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n",
>> + params->rate);
>> + dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n",
>> + params->min_rate);
>> + return -EINVAL;
>> + }
>> + set_pi.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
>> + params->rate);
>> + set_pi.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
>> + rd_params.enable);
>> + break;
>> + default:
>> + dev_err(dev, "Invalid CXL patrol scrub parameter to set\n");
>> + return -EINVAL;
>> + }
>> +
>> + ret = cxl_set_feature(mds, &set_pi, sizeof(set_pi));
>> + if (ret) {
>> + dev_err(dev, "CXL patrol scrub set feature fail ret=%d\n",
>> + ret);
>> + return ret;
>> + }
>> +
>> + /* Verify attribute set successfully */
>> + if (param_type == CXL_MEMDEV_PS_PARAM_RATE) {
>> + ret = cxl_mem_ps_get_attrbs(dev, &rd_params);
>> + if (ret) {
>> + dev_err(dev, "Get cxlmemdev patrol scrub params fail ret=%d\n", ret);
>> + return ret;
>> + }
>> + if (rd_params.rate != params->rate)
>> + return -EFAULT;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd)
>> +{
>> + struct cxl_patrol_scrub_context *cxl_ps_ctx;
>> + struct cxl_mbox_supp_feat_entry feat_entry;
>> + struct cxl_memdev_ps_params params;
>> + int ret;
>> +
>> + ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_patrol_scrub_uuid,
>> + &feat_entry);
>> + if (ret < 0)
>> + return ret;
>> +
>> + if (!(feat_entry.attrb_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE))
>> + return -ENOTSUPP;
>> +
>> + cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL);
>> + if (!cxl_ps_ctx)
>> + return -ENOMEM;
>> +
>> + cxl_ps_ctx->get_feat_size = feat_entry.get_feat_size;
>> + cxl_ps_ctx->set_feat_size = feat_entry.set_feat_size;
>> + ret = cxl_mem_ps_get_attrbs(&cxlmd->dev, &params);
>> + if (ret) {
>> + dev_err(&cxlmd->dev, "Get CXL patrol scrub params fail ret=%d\n", ret);
>> + return ret;
>> + }
>> + cxl_ps_ctx->scrub_cycle_changeable = params.scrub_cycle_changeable;
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL);
>> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
>> index 46131dcd0900..25c46e72af16 100644
>> --- a/drivers/cxl/cxlmem.h
>> +++ b/drivers/cxl/cxlmem.h
>> @@ -983,6 +983,14 @@ int cxl_trigger_poison_list(struct cxl_memdev
>> *cxlmd); int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa);
>> int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa);
>>
>> +/* cxl memory scrub functions */
>> +#ifdef CONFIG_CXL_SCRUB
>> +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd);
>> +#else
>> +static inline int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd)
>> +{
>> + return -ENOTSUPP;
>> +}
>> +#endif
>> +
>> #ifdef CONFIG_CXL_SUSPEND
>> void cxl_mem_active_inc(void);
>> void cxl_mem_active_dec(void);
>> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
>> index 0155fb66b580..acc337b8c365 100644
>> --- a/drivers/cxl/pci.c
>> +++ b/drivers/cxl/pci.c
>> @@ -881,6 +881,11 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>> if (rc)
>> return rc;
>>
>> + /*
>> + * Initialize optional CXL scrub features
>> + */
>> + cxl_mem_patrol_scrub_init(cxlmd);
>
>It will return a value but never be captured. The return value may indicate an
>error other than the fact it is optional, maybe we want to capture it and handle
>it properly?
Agreed, since the feature is optional the probe should not fail here, but the return value should not be silently dropped either. I will capture it and add a warning log on failure.
>
>Fan
>
>
>> +
>> rc = devm_cxl_sanitize_setup_notifier(&pdev->dev, cxlmd);
>> if (rc)
>> return rc;
>> --
>> 2.34.1
>>
Thanks,
Shiju