From: Vaibhav Jain
To: Michael Ellerman, Vaibhav Jain, linuxppc-dev@lists.ozlabs.org, linux-nvdimm@lists.01.org
Subject: Re: [PATCH v5 1/4] powerpc/papr_scm: Fetch nvdimm health information from PHYP
In-Reply-To: <878sjetcis.fsf@mpe.ellerman.id.au>
References: <20200331143229.306718-1-vaibhav@linux.ibm.com> <20200331143229.306718-2-vaibhav@linux.ibm.com> <878sjetcis.fsf@mpe.ellerman.id.au>
Date: Fri, 03 Apr 2020 00:01:32 +0530
MIME-Version: 1.0
Message-Id: <87o8s9g2nv.fsf@vajain21.in.ibm.com>
CC: Alastair D'Silva, "Aneesh Kumar K.V", Vaibhav Jain
List-Id: "Linux-nvdimm developer list."
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Thanks for reviewing this patch Mpe,

Michael Ellerman writes:
> Vaibhav Jain writes:
>
>> Implement support for fetching nvdimm health information via
>> H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair
>> of 64-bit big-endian integers which are then stored in 'struct
>> papr_scm_priv' and subsequently partially exposed to user-space via
>> newly introduced dimm specific attribute 'papr_flags'. Also a new asm
>> header named 'papr-scm.h' is added that describes the interface
>> between PHYP and guest kernel.
>>
>> Following flags are reported via 'papr_flags' sysfs attribute contents
>> of which are space separated string flags indicating various nvdimm
>> states:
>>
>> * "not_armed"   : Indicating that nvdimm contents wont survive a power
>>   cycle.
>> * "save_fail"   : Indicating that nvdimm contents couldn't be flushed
>>   during last shutdown event.
>> * "restore_fail": Indicating that nvdimm contents couldn't be restored
>>   during dimm initialization.
>> * "encrypted"   : Dimm contents are encrypted.
>> * "smart_notify": There is health event for the nvdimm.
>> * "scrubbed"    : Indicating that contents of the nvdimm have been
>>   scrubbed.
>> * "locked"      : Indicating that nvdimm contents cant be modified
>>   until next power cycle.
>>
>> [1]: commit 58b278f568f0 ("powerpc: Provide initial documentation for
>> PAPR hcalls")
>>
>> Signed-off-by: Vaibhav Jain
>> ---
>> Changelog:
>>
>> v4..v5 : None
>>
>> v3..v4 : None
>>
>> v2..v3 : Removed PAPR_SCM_DIMM_HEALTH_NON_CRITICAL as a condition for
>>          NVDIMM unarmed [Aneesh]
>>
>> v1..v2 : New patch in the series.
>> ---
>>  arch/powerpc/include/asm/papr_scm.h       |  48 ++++++++++
>>  arch/powerpc/platforms/pseries/papr_scm.c | 105 +++++++++++++++++++++-
>>  2 files changed, 151 insertions(+), 2 deletions(-)
>>  create mode 100644 arch/powerpc/include/asm/papr_scm.h
>>
>> diff --git a/arch/powerpc/include/asm/papr_scm.h b/arch/powerpc/include/asm/papr_scm.h
>> new file mode 100644
>> index 000000000000..868d3360f56a
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/papr_scm.h
>> @@ -0,0 +1,48 @@
>> +/* SPDX-License-Identifier: GPL-2.0-or-later */
>> +/*
>> + * Structures and defines needed to manage nvdimms for spapr guests.
>> + */
>> +#ifndef _ASM_POWERPC_PAPR_SCM_H_
>> +#define _ASM_POWERPC_PAPR_SCM_H_
>> +
>> +#include
>> +#include
>> +
>> +/* DIMM health bitmap bitmap indicators */
>> +/* SCM device is unable to persist memory contents */
>> +#define PAPR_SCM_DIMM_UNARMED PPC_BIT(0)
>
> Please don't use PPC_BIT, it's just unncessary obfuscation for folks
> who are reading the code without access to the docs (ie. more or less
> everyone other than you :)

Sure, will get that replaced with int literals.
>
>> diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
>> index 0b4467e378e5..aaf2e4ab1f75 100644
>> --- a/arch/powerpc/platforms/pseries/papr_scm.c
>> +++ b/arch/powerpc/platforms/pseries/papr_scm.c
>> @@ -14,6 +14,7 @@
>>  #include
>>
>>  #include
>> +#include
>>
>>  #define BIND_ANY_ADDR (~0ul)
>>
>> @@ -39,6 +40,13 @@ struct papr_scm_priv {
>>  	struct resource res;
>>  	struct nd_region *region;
>>  	struct nd_interleave_set nd_set;
>> +
>> +	/* Protect dimm data from concurrent access */
>> +	struct mutex dimm_mutex;
>> +
>> +	/* Health information for the dimm */
>> +	__be64 health_bitmap;
>> +	__be64 health_bitmap_valid;
>
> It's much less error prone to store the data in CPU endian and do the
> endian conversion only at the point where the data either comes from or
> goes to firmware.
>
> That would also mean you can define flags above without needing PPC_BIT
> because they'll be in CPU endian too.

Fair suggestion, will update this to u64 types in next iteration.

>
>> @@ -144,6 +152,35 @@ static int drc_pmem_query_n_bind(struct papr_scm_priv *p)
>>  	return drc_pmem_bind(p);
>>  }
>>
>> +static int drc_pmem_query_health(struct papr_scm_priv *p)
>> +{
>> +	unsigned long ret[PLPAR_HCALL_BUFSIZE];
>> +	int64_t rc;
>
> Use kernel types please, ie. s64, or just long.

Agree, will get it fixed in next iteration.

>
>> +	rc = plpar_hcall(H_SCM_HEALTH, ret, p->drc_index);
>> +	if (rc != H_SUCCESS) {
>> +		dev_err(&p->pdev->dev,
>> +			"Failed to query health information, Err:%lld\n", rc);
>> +		return -ENXIO;
>> +	}
>> +
>> +	/* Protect modifications to papr_scm_priv with the mutex */
>> +	rc = mutex_lock_interruptible(&p->dimm_mutex);
>> +	if (rc)
>> +		return rc;
>> +
>> +	/* Store the retrieved health information in dimm platform data */
>> +	p->health_bitmap = ret[0];
>> +	p->health_bitmap_valid = ret[1];
>> +
>> +	dev_dbg(&p->pdev->dev,
>> +		"Queried dimm health info. Bitmap:0x%016llx Mask:0x%016llx\n",
>> +		be64_to_cpu(p->health_bitmap),
>> +		be64_to_cpu(p->health_bitmap_valid));
>> +
>> +	mutex_unlock(&p->dimm_mutex);
>> +	return 0;
>> +}
>>
>>  static int papr_scm_meta_get(struct papr_scm_priv *p,
>>  			     struct nd_cmd_get_config_data_hdr *hdr)
>> @@ -304,6 +341,67 @@ static inline int papr_scm_node(int node)
>>  	return min_node;
>>  }
>>
>> +static ssize_t papr_flags_show(struct device *dev,
>> +			       struct device_attribute *attr, char *buf)
>> +{
>> +	struct nvdimm *dimm = to_nvdimm(dev);
>> +	struct papr_scm_priv *p = nvdimm_provider_data(dimm);
>> +	__be64 health;
>
> No need for __be64 here if health_bitmap was stored in CPU endian.

Right, will change this to u64.

>
>> +	int rc;
>> +
>> +	rc = drc_pmem_query_health(p);
>> +	if (rc)
>> +		return rc;
>> +
>> +	/* Protect against modifications to papr_scm_priv with the mutex */
>> +	rc = mutex_lock_interruptible(&p->dimm_mutex);
>> +	if (rc)
>> +		return rc;
>> +
>> +	health = p->health_bitmap & p->health_bitmap_valid;
>
> This is all you ever do with the health_bitmap? In which case why not
> just do the masking before storing it into priv and save yourself 8
> bytes?

Fair suggestion. Will address this in next iteration.

>
>> +	/* Check for various masks in bitmap and set the buffer */
>> +	if (health & PAPR_SCM_DIMM_UNARMED_MASK)
>> +		rc += sprintf(buf, "not_armed ");
>
> I know buf is "big enough" but using sprintf() in 2020 is a bit ... :)
>
> seq_buf is a pretty thin wrapper over a buffer you can use to make this
> cleaner and also handles overflow for you.
>
> See eg. show_user_instructions() for an example.

Unfortunately seq_buf_printf() is still not an exported symbol, hence not usable in external modules.
>
>> +
>> +	if (health & PAPR_SCM_DIMM_BAD_SHUTDOWN_MASK)
>> +		rc += sprintf(buf + rc, "save_fail ");
>> +
>> +	if (health & PAPR_SCM_DIMM_BAD_RESTORE_MASK)
>> +		rc += sprintf(buf + rc, "restore_fail ");
>> +
>> +	if (health & PAPR_SCM_DIMM_ENCRYPTED)
>> +		rc += sprintf(buf + rc, "encrypted ");
>> +
>> +	if (health & PAPR_SCM_DIMM_SMART_EVENT_MASK)
>> +		rc += sprintf(buf + rc, "smart_notify ");
>> +
>> +	if (health & PAPR_SCM_DIMM_SCRUBBED_AND_LOCKED)
>> +		rc += sprintf(buf + rc, "scrubbed locked ");
>> +
>> +	if (rc > 0)
>> +		rc += sprintf(buf + rc, "\n");
>> +
>> +	mutex_unlock(&p->dimm_mutex);
>> +	return rc;
>> +}
>> +DEVICE_ATTR_RO(papr_flags);
>
> cheers

--
Vaibhav Jain
Linux Technology Center, IBM India Pvt. Ltd.

_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org