From: Peter Zijlstra <peterz@infradead.org>
To: kajoljain <kjain@linux.ibm.com>
Cc: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
	linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
	maddy@linux.vnet.ibm.com, santosh@fossix.org,
	aneesh.kumar@linux.ibm.com, vaibhav@linux.ibm.com,
	dan.j.williams@intel.com, ira.weiny@intel.com,
	atrajeev@linux.vnet.ibm.com, tglx@linutronix.de,
	rnsastry@linux.ibm.com
Subject: Re: [RFC v2 4/4] powerpc/papr_scm: Add cpu hotplug support for nvdimm pmu device
Date: Wed, 26 May 2021 10:32:35 +0200	[thread overview]
Message-ID: <YK4Ho7e+LCqjYA2X@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <b89d1954-638b-34c0-2d79-5d1ce4e72a3a@linux.ibm.com>

On Wed, May 26, 2021 at 12:56:58PM +0530, kajoljain wrote:
> On 5/25/21 7:46 PM, Peter Zijlstra wrote:
> > On Tue, May 25, 2021 at 06:52:16PM +0530, Kajol Jain wrote:

> >> It adds a cpumask to designate a CPU that makes the HCALL to
> >> collect the counter data for the nvdimm device, and
> >> updates the ABI documentation accordingly.
> >>
> >> Result on a Power9 LPAR system:
> >> command:# cat /sys/devices/nmem0/cpumask
> >> 0
> > 
> > Is this specific to the papr thing, or should this be in generic nvdimm
> > code?
> 
> This code is not specific to the papr device, and we can move it to
> the generic nvdimm interface. But do we need to add checks for whether
> an arch/platform-specific driver wants that support, or can we assume
> that this is something all platforms will need?

I'm a complete NVDIMM n00b, but to me it would appear they would have to
conform to the normal memory hierarchy and would thus always be
per-node.

Also, if/when deviation from this rule is observed, we can always
rework/extend this. For now I think it would make sense to have the
per-node-ness of the thing expressed in the generic layer.
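
Concretely, a minimal sketch of what that generic layer could look like,
completely untested and with made-up names (struct nvdimm_pmu,
nvdimm_pmu_offline_cpu(), "nvdimm_pmu:online" are illustrations, not the
patch under review): pick an online CPU on the device's node, expose it
through the cpumask sysfs attribute, and migrate the perf context to
another CPU (same node preferred) when the designated CPU goes offline.

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/perf_event.h>

/* Hypothetical generic nvdimm PMU state; not the code under review. */
struct nvdimm_pmu {
	struct pmu pmu;
	struct device *dev;		/* assumed to have a valid NUMA node */
	unsigned int cpu;		/* CPU designated for counter access */
	enum cpuhp_state cpuhp_state;
	struct hlist_node node;		/* cpuhp multi-instance hook */
};

/* sysfs "cpumask": report the single CPU designated for counter reads. */
static ssize_t cpumask_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct nvdimm_pmu *nd_pmu = container_of(pmu, struct nvdimm_pmu, pmu);

	return cpumap_print_to_pagebuf(true, buf, cpumask_of(nd_pmu->cpu));
}
static DEVICE_ATTR_RO(cpumask);

static struct attribute *nvdimm_pmu_cpumask_attrs[] = {
	&dev_attr_cpumask.attr,
	NULL,
};

static const struct attribute_group nvdimm_pmu_cpumask_group = {
	.attrs = nvdimm_pmu_cpumask_attrs,
};

/*
 * The designated CPU is going offline: prefer another online CPU on the
 * same node, fall back to any online CPU, and migrate the perf context.
 * Note the dying CPU is still in cpu_online_mask when this runs.
 */
static int nvdimm_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
{
	struct nvdimm_pmu *nd_pmu = hlist_entry(node, struct nvdimm_pmu, node);
	unsigned int target;

	if (cpu != nd_pmu->cpu)
		return 0;

	target = cpumask_any_and(cpumask_of_node(dev_to_node(nd_pmu->dev)),
				 cpu_online_mask);
	if (target == cpu || target >= nr_cpu_ids)
		target = cpumask_any_but(cpu_online_mask, cpu);
	if (target >= nr_cpu_ids)
		return 0;		/* last CPU standing, nothing to do */

	nd_pmu->cpu = target;
	perf_pmu_migrate_context(&nd_pmu->pmu, cpu, target);

	return 0;
}

static int nvdimm_pmu_cpuhp_init(struct nvdimm_pmu *nd_pmu)
{
	int ret;

	/* Start with any online CPU on the device's node. */
	nd_pmu->cpu = cpumask_any_and(cpumask_of_node(dev_to_node(nd_pmu->dev)),
				      cpu_online_mask);
	if (nd_pmu->cpu >= nr_cpu_ids)
		nd_pmu->cpu = cpumask_any(cpu_online_mask);

	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "nvdimm_pmu:online",
				      NULL, nvdimm_pmu_offline_cpu);
	if (ret < 0)
		return ret;

	nd_pmu->cpuhp_state = ret;
	return cpuhp_state_add_instance_nocalls(nd_pmu->cpuhp_state,
						&nd_pmu->node);
}

The idea being that papr_scm would then only supply the HCALL-based
counter access, while the cpumask and hotplug plumbing stays generic.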
