From: Alistair Popple <apopple@nvidia.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev,
	linux-acpi@vger.kernel.org,
	"Aneesh Kumar K  . V" <aneesh.kumar@linux.ibm.com>,
	Wei Xu <weixugc@google.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Michal Hocko <mhocko@kernel.org>, Yang Shi <shy828301@gmail.com>,
	Rafael J Wysocki <rafael.j.wysocki@intel.com>
Subject: Re: [PATCH RESEND 4/4] dax, kmem: calculate abstract distance with general interface
Date: Fri, 25 Aug 2023 16:00:28 +1000
Message-ID: <87wmxj7j2v.fsf@nvdebian.thelocal>
In-Reply-To: <87lee2bj5g.fsf@yhuang6-desk2.ccr.corp.intel.com>


"Huang, Ying" <ying.huang@intel.com> writes:

> Alistair Popple <apopple@nvidia.com> writes:
>
>> "Huang, Ying" <ying.huang@intel.com> writes:
>>
>>> Alistair Popple <apopple@nvidia.com> writes:
>>>
>>>> "Huang, Ying" <ying.huang@intel.com> writes:
>>>>
>>>>> Alistair Popple <apopple@nvidia.com> writes:
>>>>>
>>>>>> Huang Ying <ying.huang@intel.com> writes:
>>>>>>
>>>>>>> Previously, a fixed abstract distance MEMTIER_DEFAULT_DAX_ADISTANCE is
>>>>>>> used for slow memory type in kmem driver.  This limits the usage of
>>>>>>> kmem driver, for example, it cannot be used for HBM (high bandwidth
>>>>>>> memory).
>>>>>>>
>>>>>>> So, we use the general abstract distance calculation mechanism in kmem
>>>>>>> drivers to get more accurate abstract distance on systems with proper
>>>>>>> support.  The original MEMTIER_DEFAULT_DAX_ADISTANCE is used as
>>>>>>> fallback only.
>>>>>>>
>>>>>>> Now, multiple memory types may be managed by kmem.  These memory types
>>>>>>> are put into the "kmem_memory_types" list and protected by
>>>>>>> kmem_memory_type_lock.
>>>>>>
>>>>>> See below but I wonder if kmem_memory_types could be a common helper
>>>>>> rather than kdax specific?
>>>>>>
>>>>>>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>>>>>>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>>>>>> Cc: Wei Xu <weixugc@google.com>
>>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>>>> Cc: Dan Williams <dan.j.williams@intel.com>
>>>>>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>>>>>> Cc: Davidlohr Bueso <dave@stgolabs.net>
>>>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>>>> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>>>>>> Cc: Michal Hocko <mhocko@kernel.org>
>>>>>>> Cc: Yang Shi <shy828301@gmail.com>
>>>>>>> Cc: Rafael J Wysocki <rafael.j.wysocki@intel.com>
>>>>>>> ---
>>>>>>>  drivers/dax/kmem.c           | 54 +++++++++++++++++++++++++++---------
>>>>>>>  include/linux/memory-tiers.h |  2 ++
>>>>>>>  mm/memory-tiers.c            |  2 +-
>>>>>>>  3 files changed, 44 insertions(+), 14 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
>>>>>>> index 898ca9505754..837165037231 100644
>>>>>>> --- a/drivers/dax/kmem.c
>>>>>>> +++ b/drivers/dax/kmem.c
>>>>>>> @@ -49,14 +49,40 @@ struct dax_kmem_data {
>>>>>>>  	struct resource *res[];
>>>>>>>  };
>>>>>>>  
>>>>>>> -static struct memory_dev_type *dax_slowmem_type;
>>>>>>> +static DEFINE_MUTEX(kmem_memory_type_lock);
>>>>>>> +static LIST_HEAD(kmem_memory_types);
>>>>>>> +
>>>>>>> +static struct memory_dev_type *kmem_find_alloc_memory_type(int adist)
>>>>>>> +{
>>>>>>> +	bool found = false;
>>>>>>> +	struct memory_dev_type *mtype;
>>>>>>> +
>>>>>>> +	mutex_lock(&kmem_memory_type_lock);
>>>>>>> +	list_for_each_entry(mtype, &kmem_memory_types, list) {
>>>>>>> +		if (mtype->adistance == adist) {
>>>>>>> +			found = true;
>>>>>>> +			break;
>>>>>>> +		}
>>>>>>> +	}
>>>>>>> +	if (!found) {
>>>>>>> +		mtype = alloc_memory_type(adist);
>>>>>>> +		if (!IS_ERR(mtype))
>>>>>>> +			list_add(&mtype->list, &kmem_memory_types);
>>>>>>> +	}
>>>>>>> +	mutex_unlock(&kmem_memory_type_lock);
>>>>>>> +
>>>>>>> +	return mtype;
>>>>>>> +}
>>>>>>> +
>>>>>>>  static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
>>>>>>>  {
>>>>>>>  	struct device *dev = &dev_dax->dev;
>>>>>>>  	unsigned long total_len = 0;
>>>>>>>  	struct dax_kmem_data *data;
>>>>>>> +	struct memory_dev_type *mtype;
>>>>>>>  	int i, rc, mapped = 0;
>>>>>>>  	int numa_node;
>>>>>>> +	int adist = MEMTIER_DEFAULT_DAX_ADISTANCE;
>>>>>>>  
>>>>>>>  	/*
>>>>>>>  	 * Ensure good NUMA information for the persistent memory.
>>>>>>> @@ -71,6 +97,11 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
>>>>>>>  		return -EINVAL;
>>>>>>>  	}
>>>>>>>  
>>>>>>> +	mt_calc_adistance(numa_node, &adist);
>>>>>>> +	mtype = kmem_find_alloc_memory_type(adist);
>>>>>>> +	if (IS_ERR(mtype))
>>>>>>> +		return PTR_ERR(mtype);
>>>>>>> +
>>>>>>
>>>>>> I wrote my own quick and dirty module to test this and wrote basically
>>>>>> the same code sequence.
>>>>>>
>>>>>> I notice you're using a list of memory types here though. I think it
>>>>>> would be nice to have a common helper that other users could call to
>>>>>> do the mt_calc_adistance() / kmem_find_alloc_memory_type() /
>>>>>> init_node_memory_type() sequence and cleanup, as my naive approach
>>>>>> would result in a new memory_dev_type per device even though adist
>>>>>> might be the same. A common helper would make it easy to de-dup those.
>>>>>
>>>>> If it's useful, we can move kmem_find_alloc_memory_type() to
>>>>> memory-tiers.c after some revision.  But I would prefer to move it
>>>>> after we have a second user.  What do you think about that?
>>>>
>>>> Usually I would agree, but this series already introduces a general
>>>> interface for calculating adist even though there's only one user and
>>>> implementation. So if we're going to add a general interface I think it
>>>> would be better to make it more usable now rather than after variations
>>>> of it have been cut and pasted into other drivers.
>>>
>>> In general, I would like to introduce complexity only when necessary.
>>> So, let's discuss the necessity of the general interface first.  We
>>> can do that in [1/4] of the series.
>>
>> Do we need one memory_dev_type per adistance or per adistance+device?
>>
>> If I understand correctly, I think it's the former. Logically that means
>> memory_dev_types should be managed by the memory-tiering subsystem
>> because they are system-wide rather than driver-specific resources. The
>> fact that we need to add the list field to struct memory_dev_type
>> specifically for use by dax/kmem supports that idea.
>
> In the original design (page 9/10/11 of [1]), memory_dev_type (Memory
> Type) is driver specific.

Oh fair enough. I was making these comments based on the incorrect
understanding that these were a global rather than a driver-specific
resource. Thanks for correcting that!

>> Also I'm not sure why you consider moving the
>> kmem_memory_types/kmem_find_alloc_memory_type()/etc. functions into
>> mm/memory-tiers.c to be adding complexity. Isn't it just moving code
>> around, or am I missing some other subtlety that makes this hard? I
>> really think memory-tiers.c is logically where management of the
>> various memory_dev_types belongs.
>
> IMHO, it depends on whether these functions are shared by at least 2
> drivers.  If so, we can put them in mm/memory-tiers.c.  Otherwise, we
> should keep them in the driver.

Ok. I'm not sure I entirely agree, because I suspect it would still make
the code clearer even for a single user. But generally you're correct,
and as these memory_dev_types are *supposed* to be driver-specific
(rather than one per adistance) I don't think it's such a big issue.
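For reference, the find-or-alloc pattern at the heart of this thread (look
up an existing memory_dev_type by abstract distance under a lock, and
allocate one only if none exists) can be sketched in plain userspace C.
This is purely illustrative: mt_find_alloc_memory_type, the singly linked
list, and the pthread mutex are stand-ins for the kernel's list_head and
mutex APIs, not existing kernel code.

```c
/* Userspace sketch of the find-or-alloc de-dup pattern discussed above.
 * All names here are illustrative assumptions, not existing kernel API. */
#include <pthread.h>
#include <stdlib.h>

struct memory_dev_type {
	int adistance;                /* abstract distance this type represents */
	struct memory_dev_type *next; /* stand-in for the kernel's list_head */
};

static struct memory_dev_type *mt_types;
static pthread_mutex_t mt_lock = PTHREAD_MUTEX_INITIALIZER;

/* Return the existing memory_dev_type for @adist, or allocate a new one.
 * De-duplicates types so that two devices with the same abstract distance
 * share a single memory_dev_type rather than allocating one per device. */
struct memory_dev_type *mt_find_alloc_memory_type(int adist)
{
	struct memory_dev_type *mtype;

	pthread_mutex_lock(&mt_lock);
	for (mtype = mt_types; mtype; mtype = mtype->next)
		if (mtype->adistance == adist)
			goto out;
	mtype = malloc(sizeof(*mtype));
	if (mtype) {
		mtype->adistance = adist;
		mtype->next = mt_types;
		mt_types = mtype;
	}
out:
	pthread_mutex_unlock(&mt_lock);
	return mtype;
}
```

A driver-side caller would invoke this once per probed device; repeated
calls with the same adistance return the same pointer, which is exactly
the de-dup behaviour a common helper in mm/memory-tiers.c would provide.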

Thread overview: 41+ messages
2023-07-21  1:29 [PATCH RESEND 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Huang Ying
2023-07-21  1:29 ` [PATCH RESEND 1/4] memory tiering: add abstract distance calculation algorithms management Huang Ying
2023-07-25  2:13   ` Alistair Popple
2023-07-25  3:14     ` Huang, Ying
2023-07-25  8:26       ` Alistair Popple
2023-07-26  7:33         ` Huang, Ying
2023-07-27  3:42           ` Alistair Popple
2023-07-27  4:02             ` Huang, Ying
2023-07-27  4:07               ` Alistair Popple
2023-07-27  5:41                 ` Huang, Ying
2023-07-28  1:20                   ` Alistair Popple
2023-08-11  3:51                     ` Huang, Ying
2023-08-21 11:26                       ` Alistair Popple
2023-08-21 22:50                         ` Huang, Ying
2023-08-21 23:52                           ` Alistair Popple
2023-08-22  0:58                             ` Huang, Ying
2023-08-22  7:11                               ` Alistair Popple
2023-08-23  5:56                                 ` Huang, Ying
2023-08-25  5:41                                   ` Alistair Popple
2023-07-21  1:29 ` [PATCH RESEND 2/4] acpi, hmat: refactor hmat_register_target_initiators() Huang Ying
2023-07-25  2:44   ` Alistair Popple
2023-08-07 16:55   ` Jonathan Cameron
2023-08-11  1:13     ` Huang, Ying
2023-07-21  1:29 ` [PATCH RESEND 3/4] acpi, hmat: calculate abstract distance with HMAT Huang Ying
2023-07-25  2:45   ` Alistair Popple
2023-07-25  6:47     ` Huang, Ying
2023-08-21 11:53       ` Alistair Popple
2023-08-21 23:28         ` Huang, Ying
2023-07-21  1:29 ` [PATCH RESEND 4/4] dax, kmem: calculate abstract distance with general interface Huang Ying
2023-07-25  3:11   ` Alistair Popple
2023-07-25  7:02     ` Huang, Ying
2023-08-21 12:03       ` Alistair Popple
2023-08-21 23:33         ` Huang, Ying
2023-08-22  7:36           ` Alistair Popple
2023-08-23  2:13             ` Huang, Ying
2023-08-25  6:00               ` Alistair Popple [this message]
2023-07-21  4:15 ` [PATCH RESEND 0/4] memory tiering: calculate abstract distance based on ACPI HMAT Alistair Popple
2023-07-24 17:58   ` Andrew Morton
2023-08-01  2:35     ` Bharata B Rao
2023-08-11  6:26       ` Huang, Ying
2023-08-11  7:49         ` Bharata B Rao
