Linux-NVDIMM Archive on lore.kernel.org
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: linux-nvdimm@lists.01.org
Subject: Re: [RFC PATCH] libnvdimm: Update the meaning for persistence_domain values
Date: Wed, 15 Jan 2020 23:01:15 +0530
Message-ID: <0f44df90-1f75-9d0a-10af-6e7f48158bc7@linux.ibm.com> (raw)
In-Reply-To: <a87b5da8-54d1-3c1a-f068-4d2f389576c9@linux.ibm.com>

On 1/15/20 10:57 PM, Aneesh Kumar K.V wrote:
> On 1/15/20 10:25 PM, Jeff Moyer wrote:
>> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>>
>>> Currently, the kernel shows the below values:
>>>     "persistence_domain":"cpu_cache"
>>>     "persistence_domain":"memory_controller"
>>>     "persistence_domain":"unknown"
>>>
>>> This patch updates the meaning of these values such that
>>>
>>> "cpu_cache" indicates no extra instructions are needed to ensure the
>>> persistence of data in the pmem media on power failure.
>>>
>>> "memory_controller" indicates platform-provided instructions need to
>>> be issued as per the documented sequence to make sure data flushed is
>>> guaranteed to be on pmem media in case of system power loss.
>>>
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>> ---
>>>   arch/powerpc/platforms/pseries/papr_scm.c | 7 ++++++-
>>>   include/linux/libnvdimm.h                 | 6 +++---
>>>   2 files changed, 9 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
>>> index c2ef320ba1bf..26a5ef263758 100644
>>> --- a/arch/powerpc/platforms/pseries/papr_scm.c
>>> +++ b/arch/powerpc/platforms/pseries/papr_scm.c
>>> @@ -360,8 +360,13 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
>>>       if (p->is_volatile)
>>>           p->region = nvdimm_volatile_region_create(p->bus, &ndr_desc);
>>> -    else
>>> +    else {
>>> +        /*
>>> +         * We need to flush things correctly to guarantee persistence
>>> +         */
>>> +        set_bit(ND_REGION_PERSIST_MEMCTRL, &ndr_desc.flags);
>>>           p->region = nvdimm_pmem_region_create(p->bus, &ndr_desc);
>>> +    }
>>>       if (!p->region) {
>>>           dev_err(dev, "Error registering region %pR from %pOF\n",
>>>                   ndr_desc.res, p->dn);
>>
>> Would you also update of_pmem to indicate the persistence domain,
>> please?
>>
> 
> sure.
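> 
> For reference, a minimal sketch of the analogous of_pmem change might
> look like the following. The surrounding code is paraphrased from
> drivers/nvdimm/of_pmem.c, and whether ND_REGION_PERSIST_MEMCTRL is the
> right flag for every of_pmem platform is an assumption, not part of
> this patch:
> 
> ```c
> /* drivers/nvdimm/of_pmem.c, in the region probe path (sketch) */
> if (is_volatile)
> 	region = nvdimm_volatile_region_create(bus, &ndr_desc);
> else {
> 	/*
> 	 * Assumption: these platforms also require the documented
> 	 * flush sequence for persistence, so advertise the
> 	 * memory-controller persistence domain.
> 	 */
> 	set_bit(ND_REGION_PERSIST_MEMCTRL, &ndr_desc.flags);
> 	region = nvdimm_pmem_region_create(bus, &ndr_desc);
> }
> ```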
> 
> 
>>> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
>>> index f2a33f2e3ba8..9126737377e1 100644
>>> --- a/include/linux/libnvdimm.h
>>> +++ b/include/linux/libnvdimm.h
>>> @@ -52,9 +52,9 @@ enum {
>>>        */
>>>       ND_REGION_PERSIST_CACHE = 1,
>>>       /*
>>> -     * Platform provides mechanisms to automatically flush outstanding
>>> -     * write data from memory controler to pmem on system power loss.
>>> -     * (ADR)
>>> +     * Platform provides instructions to flush data such that on
>>> +     * completion of the instructions, data flushed is guaranteed to
>>> +     * be on pmem even in case of a system power loss.
>>
>> I find the prior description easier to understand.
> 
> 
> I was trying to avoid the terms 'automatically', 'memory controller', 
> and ADR. Can I update the above as:
> 
> /*
>   * Platform provides mechanisms to flush outstanding write data
>   * to pmem on system power loss.
>   */
> 

Wanted to add more details. So with the above interpretation, if the 
persistence_domain is found to be 'cpu_cache', an application can expect 
a store instruction to guarantee persistence. If it is 'none', there is 
no persistence (I am not sure how that differs from a 'volatile' pmem 
region). If it is 'memory_controller' (I am not sure whether that is the 
right term), the application needs to follow the recommended mechanism 
to flush write data to pmem.

-aneesh
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

Thread overview: 10+ messages
2020-01-08  6:49 Aneesh Kumar K.V
2020-01-15 16:55 ` Jeff Moyer
2020-01-15 17:27   ` Aneesh Kumar K.V
2020-01-15 17:31     ` Aneesh Kumar K.V [this message]
2020-01-15 17:42       ` Jeff Moyer
2020-01-15 19:44       ` Dan Williams
2020-01-15 17:35     ` Jeff Moyer
2020-01-15 17:55       ` Aneesh Kumar K.V
2020-01-15 19:48         ` Dan Williams
2020-01-16  6:24           ` Aneesh Kumar K.V
