Linux-NVDIMM Archive on lore.kernel.org
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-nvdimm <linux-nvdimm@lists.01.org>
Subject: Re: [RFC PATCH] libnvdimm: Update the meaning for persistence_domain values
Date: Thu, 16 Jan 2020 11:54:52 +0530
Message-ID: <3a9db225-37dc-a4f0-a160-b5fb3c63e663@linux.ibm.com> (raw)
In-Reply-To: <CAPcyv4i9rvDmFW09A6uChhsiRAgENVp6KTPUmDcUrO5haan6=g@mail.gmail.com>

On 1/16/20 1:18 AM, Dan Williams wrote:
> On Wed, Jan 15, 2020 at 9:56 AM Aneesh Kumar K.V
> <aneesh.kumar@linux.ibm.com> wrote:
>>
>> On 1/15/20 11:05 PM, Jeff Moyer wrote:
>>> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>>>
>>
>>>>>> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
>>>>>> index f2a33f2e3ba8..9126737377e1 100644
>>>>>> --- a/include/linux/libnvdimm.h
>>>>>> +++ b/include/linux/libnvdimm.h
>>>>>> @@ -52,9 +52,9 @@ enum {
>>>>>>              */
>>>>>>             ND_REGION_PERSIST_CACHE = 1,
>>>>>>             /*
>>>>>> -   * Platform provides mechanisms to automatically flush outstanding
>>>>>> -   * write data from memory controler to pmem on system power loss.
>>>>>> -   * (ADR)
>>>>>> +   * Platform provides instructions to flush data such that on completion
>>>>>> +   * of the instructions, data flushed is guaranteed to be on pmem even
>>>>>> +   * in case of a system power loss.
>>>>>
>>>>> I find the prior description easier to understand.
>>>>
>>>> I was trying to avoid the terms 'automatically', 'memory controler',
>>>> and ADR. Can I update the above as
>>>
>>> I can understand avoiding the very x86-specific "ADR," but is memory
>>> controller not accurate for your platform?
>>>
>>>> /*
>>>>    * Platform provides mechanisms to flush outstanding write data
>>>>    * to pmem on system power loss.
>>>>    */
>>>
>>> That's way too broad.  :) The comments are describing the persistence
>>> domain.  i.e. if you get data to $HERE, it is guaranteed to make it out
>>> to stable media.
>>
>> With technologies like OpenCAPI, we possibly may not want to call them
>> a "memory controller"? In a way, the platform mechanism will flush them
>> such that on power failure, data is guaranteed to be on the pmem media.
>> But should we call that boundary "memory_controller"? Maybe we should
>> consider "memory controller" an overloaded term there. Considering we
>> are exposing this as memory_controller for applications to parse via
>> sysfs, maybe the documentation can also carry the same term.
> 
> I don't see how OpenCAPI or any other transport has any bearing on the
> "memory_controller" term. It's still a controller of persistent memory
> and it needs to have the write data received at its buffers / queue to
> ensure that the data gets persisted, or, as in the cpu_cache case,
> some other agent takes responsibility for shuttling pending writes
> that have hit the cache out over the transport to be persisted.
> 

Agreed. I want to make sure we document those details correctly. It is a 
controller of persistent memory, and in some cases there is no reserve 
power available to keep things in self-refresh mode and flush things 
automatically. The platform-provided mechanism will ensure the write 
data reaches the pmem media.

Should the latter have a persistence_domain value of "pmem media"? Even 
then, I am not sure how applications are supposed to use this information.

IMHO what is important for applications is to differentiate whether a 
platform-specific flush mechanism is needed or not. Hence I was trying 
to keep this a two-value property. Is there any other detail an 
application is supposed to infer from this property?
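To illustrate the two-value distinction from an application's point of
view, here is a minimal userspace sketch. It assumes the libnvdimm sysfs
persistence_domain values "cpu_cache" and "memory_controller"; the helper
function and the sysfs path in the comment are hypothetical, shown only
to make the decision concrete:

```python
def needs_cpu_cache_flush(persistence_domain: str) -> bool:
    """Illustrative helper: decide whether an application must flush
    CPU caches (e.g. via a pmem library) before data is durable."""
    domain = persistence_domain.strip()
    if domain == "cpu_cache":
        # CPU cache is inside the persistence domain: once a store is
        # globally visible it is durable; no explicit flush needed.
        return False
    if domain == "memory_controller":
        # Persistence domain begins at the controller's buffers: the
        # application must flush cache lines out of the CPU cache.
        return True
    # Unknown domain: be conservative and flush.
    return True

# Usage (path is illustrative):
# with open("/sys/bus/nd/devices/region0/persistence_domain") as f:
#     flush_needed = needs_cpu_cache_flush(f.read())
```

Either way, the application only learns one bit from the property:
whether it must take an explicit flush step.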

-aneesh
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org
