linux-nvdimm.lists.01.org archive mirror
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: linux-nvdimm@lists.01.org
Subject: Re: [RFC PATCH] libnvdimm: Update the meaning for persistence_domain values
Date: Wed, 15 Jan 2020 23:25:57 +0530	[thread overview]
Message-ID: <3184f0f6-1dc7-28b8-0b29-b2b9afa490d3@linux.ibm.com> (raw)
In-Reply-To: <x49k15soc5v.fsf@segfault.boston.devel.redhat.com>

On 1/15/20 11:05 PM, Jeff Moyer wrote:
> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> 

>>>> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
>>>> index f2a33f2e3ba8..9126737377e1 100644
>>>> --- a/include/linux/libnvdimm.h
>>>> +++ b/include/linux/libnvdimm.h
>>>> @@ -52,9 +52,9 @@ enum {
>>>>    	 */
>>>>    	ND_REGION_PERSIST_CACHE = 1,
>>>>    	/*
>>>> -	 * Platform provides mechanisms to automatically flush outstanding
>>>> -	 * write data from memory controler to pmem on system power loss.
>>>> -	 * (ADR)
>>>> +	 * Platform provides instructions to flush data such that on completion
>>>> +	 * of the instructions, data flushed is guaranteed to be on pmem even
>>>> +	 * in case of a system power loss.
>>>
>>> I find the prior description easier to understand.
>>
>> I was trying to avoid the terms 'automatically', 'memory controler' and
>> ADR. Can I update the above as
> 
> I can understand avoiding the very x86-specific "ADR," but is memory
> controller not accurate for your platform?
> 
>> /*
>>   * Platform provides mechanisms to flush outstanding write data
>>   * to pmem on system power loss.
>>   */
> 
> That's way too broad.  :) The comments are describing the persistence
> domain.  i.e. if you get data to $HERE, it is guaranteed to make it out
> to stable media.

With technologies like OpenCAPI we possibly don't want to call that 
boundary a "memory controller". The platform mechanism still flushes 
outstanding writes such that, on power failure, the data is guaranteed 
to be on the pmem media; "memory controller" is just an overloaded term 
for where that boundary sits. That said, since we already expose this 
value as "memory_controller" for applications to parse via sysfs, maybe 
the documentation should carry the same term.

-aneesh
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org
