* [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support
@ 2019-06-19 17:36 Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 01/13] Remove superfluous casts Bart Van Assche
                   ` (12 more replies)
  0 siblings, 13 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


Hi Keith,

This patch series addresses the issues I found by analyzing the output
of 'sparse' for the nvme-cli source code. It also includes two patches
that add support for the new NVMe 1.4 Identify Namespace fields.

Thanks,

Bart.

Bart Van Assche (13):
  Remove superfluous casts
  Use NULL instead of 0 where a pointer is expected
  huawei: Declare local functions static
  seagate: Declare local functions static
  virtium: Declare local symbols static
  lightnvm: Fix an endianness issue
  virtium: Fix an endianness issue
  wdc: Fix endianness bugs
  Avoid using arrays with a variable length
  nvme-cli: Rework the code for getting and setting NVMf properties
  nvme-cli: Skip properties that are not supported
  Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  nvme-cli: Report the NVMe 1.4 NPWG, NPWA, NPDG, NPDA and NOWS fields

 fabrics.c                      |   2 +-
 linux/nvme.h                   |   7 +-
 nvme-ioctl.c                   | 126 +++++++++++++++++----------------
 nvme-ioctl.h                   |   2 +-
 nvme-lightnvm.c                |   2 +-
 nvme-models.c                  |   1 +
 nvme-print.c                   |  51 +++++++------
 plugins/huawei/huawei-nvme.c   |   3 +-
 plugins/intel/intel-nvme.c     |  16 ++---
 plugins/seagate/seagate-nvme.c |  58 +++++++--------
 plugins/virtium/virtium-nvme.c |  14 ++--
 plugins/wdc/wdc-nvme.c         |  88 ++++++++++++-----------
 12 files changed, 200 insertions(+), 170 deletions(-)

-- 
2.22.0.rc3

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-24 10:00   ` Mikhail Skorzhinskii
  2019-06-19 17:36 ` [PATCH nvme-cli 02/13] Use NULL instead of 0 where a pointer is expected Bart Van Assche
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


The le64_to_cpu() definition is as follows:

  #define le64_to_cpu(x) le64toh((__force __u64)(x))

According to the le64toh() man page, the return type of that function
is uint64_t. Hence drop the cast from (uint64_t)le64_to_cpu(x)
expressions. This patch has been generated as follows:

  git ls-tree --name-only -r HEAD |
    while read f; do
      [ -f "$f" ] && sed -i 's/(uint64_t)le64_to_cpu(/le64_to_cpu(/g' "$f"
    done

Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 fabrics.c                      |  2 +-
 nvme-print.c                   | 40 +++++++++---------
 plugins/intel/intel-nvme.c     | 16 +++----
 plugins/seagate/seagate-nvme.c | 26 ++++++------
 plugins/wdc/wdc-nvme.c         | 76 +++++++++++++++++-----------------
 5 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index 9ed4a5684f6c..b17f4061e0b8 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -420,7 +420,7 @@ static void print_discovery_log(struct nvmf_disc_rsp_page_hdr *log, int numrec)
 
 	printf("\nDiscovery Log Number of Records %d, "
 	       "Generation counter %"PRIu64"\n",
-		numrec, (uint64_t)le64_to_cpu(log->genctr));
+		numrec, le64_to_cpu(log->genctr));
 
 	for (i = 0; i < numrec; i++) {
 		struct nvmf_disc_rsp_page_entry *e = &log->entries[i];
diff --git a/nvme-print.c b/nvme-print.c
index b058d73f7b57..ea8f720748ef 100644
--- a/nvme-print.c
+++ b/nvme-print.c
@@ -680,9 +680,9 @@ void show_nvme_id_ns(struct nvme_id_ns *ns, unsigned int mode)
 	int human = mode & HUMAN,
 		vs = mode & VS;
 
-	printf("nsze    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->nsze));
-	printf("ncap    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->ncap));
-	printf("nuse    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->nuse));
+	printf("nsze    : %#"PRIx64"\n", le64_to_cpu(ns->nsze));
+	printf("ncap    : %#"PRIx64"\n", le64_to_cpu(ns->ncap));
+	printf("nuse    : %#"PRIx64"\n", le64_to_cpu(ns->nuse));
 	printf("nsfeat  : %#x\n", ns->nsfeat);
 	if (human)
 		show_nvme_id_ns_nsfeat(ns->nsfeat);
@@ -1221,13 +1221,13 @@ void show_error_log(struct nvme_error_log_page *err_log, int entries, const char
 	for (i = 0; i < entries; i++) {
 		printf(" Entry[%2d]   \n", i);
 		printf(".................\n");
-		printf("error_count  : %"PRIu64"\n", (uint64_t)le64_to_cpu(err_log[i].error_count));
+		printf("error_count  : %"PRIu64"\n", le64_to_cpu(err_log[i].error_count));
 		printf("sqid         : %d\n", err_log[i].sqid);
 		printf("cmdid        : %#x\n", err_log[i].cmdid);
 		printf("status_field : %#x(%s)\n", err_log[i].status_field,
 			nvme_status_to_string(err_log[i].status_field >> 1));
 		printf("parm_err_loc : %#x\n", err_log[i].parm_error_location);
-		printf("lba          : %#"PRIx64"\n",(uint64_t)le64_to_cpu(err_log[i].lba));
+		printf("lba          : %#"PRIx64"\n",le64_to_cpu(err_log[i].lba));
 		printf("nsid         : %#x\n", err_log[i].nsid);
 		printf("vs           : %d\n", err_log[i].vs);
 		printf("cs           : %#"PRIx64"\n", (uint64_t) err_log[i].cs);
@@ -1258,8 +1258,8 @@ void show_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 			printf("regctl[%d] :\n", i);
 			printf("  cntlid  : %x\n", le16_to_cpu(status->regctl_ds[i].cntlid));
 			printf("  rcsts   : %x\n", status->regctl_ds[i].rcsts);
-			printf("  hostid  : %"PRIx64"\n", (uint64_t)le64_to_cpu(status->regctl_ds[i].hostid));
-			printf("  rkey    : %"PRIx64"\n", (uint64_t)le64_to_cpu(status->regctl_ds[i].rkey));
+			printf("  hostid  : %"PRIx64"\n", le64_to_cpu(status->regctl_ds[i].hostid));
+			printf("  rkey    : %"PRIx64"\n", le64_to_cpu(status->regctl_ds[i].rkey));
 		}
 	} else {
 		struct nvme_reservation_status_ext *ext_status = (struct nvme_reservation_status_ext *)status;
@@ -1272,7 +1272,7 @@ void show_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 			printf("regctlext[%d] :\n", i);
 			printf("  cntlid     : %x\n", le16_to_cpu(ext_status->regctl_eds[i].cntlid));
 			printf("  rcsts      : %x\n", ext_status->regctl_eds[i].rcsts);
-			printf("  rkey       : %"PRIx64"\n", (uint64_t)le64_to_cpu(ext_status->regctl_eds[i].rkey));
+			printf("  rkey       : %"PRIx64"\n", le64_to_cpu(ext_status->regctl_eds[i].rkey));
 			printf("  hostid     : ");
 			for (j = 0; j < 16; j++)
 				printf("%x", ext_status->regctl_eds[i].hostid[j]);
@@ -1518,7 +1518,7 @@ void show_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 			devname);
 	printf("ANA LOG HEADER :-\n");
 	printf("chgcnt	:	%"PRIu64"\n",
-			(uint64_t)le64_to_cpu(hdr->chgcnt));
+			le64_to_cpu(hdr->chgcnt));
 	printf("ngrps	:	%u\n", le16_to_cpu(hdr->ngrps));
 	printf("ANA Log Desc :-\n");
 
@@ -1531,7 +1531,7 @@ void show_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 		printf("grpid	:	%u\n", le32_to_cpu(desc->grpid));
 		printf("nnsids	:	%u\n", le32_to_cpu(desc->nnsids));
 		printf("chgcnt	:	%"PRIu64"\n",
-		       (uint64_t)le64_to_cpu(desc->chgcnt));
+		       le64_to_cpu(desc->chgcnt));
 		printf("state	:	%s\n",
 				nvme_ana_state_to_string(desc->state));
 		for (j = 0; j < le32_to_cpu(desc->nnsids); j++)
@@ -1598,14 +1598,14 @@ void show_self_test_log(struct nvme_self_test_log *self_test, const char *devnam
 		temp = self_test->result[i].valid_diagnostic_info;
 		printf("  Valid Diagnostic Information : %#x\n", temp);
 		printf("  Power on hours (POH)         : %#"PRIx64"\n",
-			(uint64_t)le64_to_cpu(self_test->result[i].power_on_hours));
+			le64_to_cpu(self_test->result[i].power_on_hours));
 
 		if (temp & NVME_SELF_TEST_VALID_NSID)
 			printf("  Namespace Identifier         : %#x\n",
 				le32_to_cpu(self_test->result[i].nsid));
 		if (temp & NVME_SELF_TEST_VALID_FLBA)
 			printf("  Failing LBA                  : %#"PRIx64"\n",
-				(uint64_t)le64_to_cpu(self_test->result[i].failing_lba));
+				le64_to_cpu(self_test->result[i].failing_lba));
 		if (temp & NVME_SELF_TEST_VALID_SCT)
 			printf("  Status Code Type             : %#x\n",
 				self_test->result[i].status_code_type);
@@ -2012,9 +2012,9 @@ static const char *nvme_plm_window(__u32 plm)
 static void show_plm_config(struct nvme_plm_config *plmcfg)
 {
 	printf("\tEnable Event          :%04x\n", le16_to_cpu(plmcfg->enable_event));
-	printf("\tDTWIN Reads Threshold :%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_reads_thresh));
-	printf("\tDTWIN Writes Threshold:%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_writes_thresh));
-	printf("\tDTWIN Time Threshold  :%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_time_thresh));
+	printf("\tDTWIN Reads Threshold :%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_reads_thresh));
+	printf("\tDTWIN Writes Threshold:%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_writes_thresh));
+	printf("\tDTWIN Time Threshold  :%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_time_thresh));
 }
 
 void nvme_feature_show_fields(__u32 fid, unsigned int result, unsigned char *buf)
@@ -2509,8 +2509,8 @@ void json_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 
 			json_object_add_value_int(rc, "cntlid", le16_to_cpu(status->regctl_ds[i].cntlid));
 			json_object_add_value_int(rc, "rcsts", status->regctl_ds[i].rcsts);
-			json_object_add_value_uint(rc, "hostid", (uint64_t)le64_to_cpu(status->regctl_ds[i].hostid));
-			json_object_add_value_uint(rc, "rkey", (uint64_t)le64_to_cpu(status->regctl_ds[i].rkey));
+			json_object_add_value_uint(rc, "hostid", le64_to_cpu(status->regctl_ds[i].hostid));
+			json_object_add_value_uint(rc, "rkey", le64_to_cpu(status->regctl_ds[i].rkey));
 
 			json_array_add_value_object(rcs, rc);
 		}
@@ -2529,7 +2529,7 @@ void json_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 
 			json_object_add_value_int(rc, "cntlid", le16_to_cpu(ext_status->regctl_eds[i].cntlid));
 			json_object_add_value_int(rc, "rcsts", ext_status->regctl_eds[i].rcsts);
-			json_object_add_value_uint(rc, "rkey", (uint64_t)le64_to_cpu(ext_status->regctl_eds[i].rkey));
+			json_object_add_value_uint(rc, "rkey", le64_to_cpu(ext_status->regctl_eds[i].rkey));
 			for (j = 0; j < 16; j++)
 				sprintf(hostid + j * 2, "%02x", ext_status->regctl_eds[i].hostid[j]);
 
@@ -2717,7 +2717,7 @@ void json_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 			"Asynchronous Namespace Access Log for NVMe device:",
 			devname);
 	json_object_add_value_uint(root, "chgcnt",
-			(uint64_t)le64_to_cpu(hdr->chgcnt));
+			le64_to_cpu(hdr->chgcnt));
 	json_object_add_value_uint(root, "ngrps", le16_to_cpu(hdr->ngrps));
 
 	desc_list = json_create_array();
@@ -2779,7 +2779,7 @@ void json_self_test_log(struct nvme_self_test_log *self_test, const char *devnam
 		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_NSID)
 			json_object_add_value_int(valid_attrs, "Namespace Identifier (NSID)", le32_to_cpu(self_test->result[i].nsid));
 		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_FLBA)
-			json_object_add_value_uint(valid_attrs, "Failing LBA",(uint64_t)le64_to_cpu(self_test->result[i].failing_lba));
+			json_object_add_value_uint(valid_attrs, "Failing LBA",le64_to_cpu(self_test->result[i].failing_lba));
 		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_SCT)
 			json_object_add_value_int(valid_attrs, "Status Code Type",self_test->result[i].status_code_type);
 		if(self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_SC)
diff --git a/plugins/intel/intel-nvme.c b/plugins/intel/intel-nvme.c
index 9aaf36768731..37f2c705c90f 100644
--- a/plugins/intel/intel-nvme.c
+++ b/plugins/intel/intel-nvme.c
@@ -322,14 +322,14 @@ static void show_temp_stats(struct intel_temp_stats *stats)
 {
 	printf("  Intel Temperature Statistics\n");
 	printf("--------------------------------\n");
-	printf("Current temperature         : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->curr));
-	printf("Last critical overtemp flag : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->last_overtemp));
-	printf("Life critical overtemp flag : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->life_overtemp));
-	printf("Highest temperature         : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->highest_temp));
-	printf("Lowest temperature          : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->lowest_temp));
-	printf("Max operating temperature   : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->max_operating_temp));
-	printf("Min operating temperature   : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->min_operating_temp));
-	printf("Estimated offset            : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->est_offset));
+	printf("Current temperature         : %"PRIu64"\n", le64_to_cpu(stats->curr));
+	printf("Last critical overtemp flag : %"PRIu64"\n", le64_to_cpu(stats->last_overtemp));
+	printf("Life critical overtemp flag : %"PRIu64"\n", le64_to_cpu(stats->life_overtemp));
+	printf("Highest temperature         : %"PRIu64"\n", le64_to_cpu(stats->highest_temp));
+	printf("Lowest temperature          : %"PRIu64"\n", le64_to_cpu(stats->lowest_temp));
+	printf("Max operating temperature   : %"PRIu64"\n", le64_to_cpu(stats->max_operating_temp));
+	printf("Min operating temperature   : %"PRIu64"\n", le64_to_cpu(stats->min_operating_temp));
+	printf("Estimated offset            : %"PRIu64"\n", le64_to_cpu(stats->est_offset));
 }
 
 static int get_temp_stats_log(int argc, char **argv, struct command *cmd, struct plugin *plugin)
diff --git a/plugins/seagate/seagate-nvme.c b/plugins/seagate/seagate-nvme.c
index 4fa29d950d9c..4b5b0acb9244 100644
--- a/plugins/seagate/seagate-nvme.c
+++ b/plugins/seagate/seagate-nvme.c
@@ -615,35 +615,35 @@ void print_smart_log_CF(vendor_log_page_CF *pLogPageCF)
 	printf("%-40s", "Super-cap current temperature");
 	currentTemp = pLogPageCF->AttrCF.SuperCapCurrentTemperature;
 	/*currentTemp = currentTemp ? currentTemp - 273 : 0;*/
-	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(currentTemp));
+	printf(" 0x%016"PRIx64"", le64_to_cpu(currentTemp));
 	printf("\n");
 
 	maxTemp = pLogPageCF->AttrCF.SuperCapMaximumTemperature;
 	/*maxTemp = maxTemp ? maxTemp - 273 : 0;*/
 	printf("%-40s", "Super-cap maximum temperature");
-	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(maxTemp));
+	printf(" 0x%016"PRIx64"", le64_to_cpu(maxTemp));
 	printf("\n");
 
 	printf("%-40s", "Super-cap status");
-	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.SuperCapStatus));
+	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.SuperCapStatus));
 	printf("\n");
 
 	printf("%-40s", "Data units read to DRAM namespace");
-	printf(" 0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
-	       (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
+	printf(" 0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
+	       le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 	printf("\n");
 
 	printf("%-40s", "Data units written to DRAM namespace");
-	printf(" 0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
-	       (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
+	printf(" 0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
+	       le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 	printf("\n");
 
 	printf("%-40s", "DRAM correctable error count");
-	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DramCorrectableErrorCount));
+	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DramCorrectableErrorCount));
 	printf("\n");
 
 	printf("%-40s", "DRAM uncorrectable error count");
-	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DramUncorrectableErrorCount));
+	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DramUncorrectableErrorCount));
 	printf("\n");
 
 }
@@ -682,16 +682,16 @@ void json_print_smart_log_CF(struct json_object *root, vendor_log_page_CF *pLogP
 	lbaf = json_create_object();
 	json_object_add_value_string(lbaf, "attribute_name", "Data units read to DRAM namespace");
 	memset(buf, 0, sizeof(buf));
-	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
-		(uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
+	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
+		le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 	json_object_add_value_string(lbaf, "attribute_value", buf);
 	json_array_add_value_object(logPages, lbaf);
 
 	lbaf = json_create_object();
 	json_object_add_value_string(lbaf, "attribute_name", "Data units written to DRAM namespace");
 	memset(buf, 0, sizeof(buf));
-	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
-		(uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
+	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
+		le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 	json_object_add_value_string(lbaf, "attribute_value", buf);
 	json_array_add_value_object(logPages, lbaf);
 
diff --git a/plugins/wdc/wdc-nvme.c b/plugins/wdc/wdc-nvme.c
index a1cb1ebf766f..a9c86b6eced2 100644
--- a/plugins/wdc/wdc-nvme.c
+++ b/plugins/wdc/wdc-nvme.c
@@ -2125,55 +2125,55 @@ static void wdc_print_log_normal(struct wdc_ssd_perf_stats *perf)
 {
 	printf("  C1 Log Page Performance Statistics :- \n");
 	printf("  Host Read Commands                             %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hr_cmds));
+			le64_to_cpu(perf->hr_cmds));
 	printf("  Host Read Blocks                               %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hr_blks));
+			le64_to_cpu(perf->hr_blks));
 	printf("  Average Read Size                              %20lf\n",
 			safe_div_fp((le64_to_cpu(perf->hr_blks)), (le64_to_cpu(perf->hr_cmds))));
 	printf("  Host Read Cache Hit Commands                   %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hr_ch_cmds));
+			le64_to_cpu(perf->hr_ch_cmds));
 	printf("  Host Read Cache Hit_Percentage                 %20"PRIu64"%%\n",
 			(uint64_t) calc_percent(le64_to_cpu(perf->hr_ch_cmds), le64_to_cpu(perf->hr_cmds)));
 	printf("  Host Read Cache Hit Blocks                     %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hr_ch_blks));
+			le64_to_cpu(perf->hr_ch_blks));
 	printf("  Average Read Cache Hit Size                    %20f\n",
 			safe_div_fp((le64_to_cpu(perf->hr_ch_blks)), (le64_to_cpu(perf->hr_ch_cmds))));
 	printf("  Host Read Commands Stalled                     %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hr_st_cmds));
+			le64_to_cpu(perf->hr_st_cmds));
 	printf("  Host Read Commands Stalled Percentage          %20"PRIu64"%%\n",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hr_st_cmds)), le64_to_cpu(perf->hr_cmds)));
 	printf("  Host Write Commands                            %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hw_cmds));
+			le64_to_cpu(perf->hw_cmds));
 	printf("  Host Write Blocks                              %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hw_blks));
+			le64_to_cpu(perf->hw_blks));
 	printf("  Average Write Size                             %20f\n",
 			safe_div_fp((le64_to_cpu(perf->hw_blks)), (le64_to_cpu(perf->hw_cmds))));
 	printf("  Host Write Odd Start Commands                  %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hw_os_cmds));
+			le64_to_cpu(perf->hw_os_cmds));
 	printf("  Host Write Odd Start Commands Percentage       %20"PRIu64"%%\n",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hw_os_cmds)), (le64_to_cpu(perf->hw_cmds))));
 	printf("  Host Write Odd End Commands                    %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->hw_oe_cmds));
+			le64_to_cpu(perf->hw_oe_cmds));
 	printf("  Host Write Odd End Commands Percentage         %20"PRIu64"%%\n",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hw_oe_cmds)), (le64_to_cpu((perf->hw_cmds)))));
 	printf("  Host Write Commands Stalled                    %20"PRIu64"\n",
-		(uint64_t)le64_to_cpu(perf->hw_st_cmds));
+		le64_to_cpu(perf->hw_st_cmds));
 	printf("  Host Write Commands Stalled Percentage         %20"PRIu64"%%\n",
 		(uint64_t)calc_percent((le64_to_cpu(perf->hw_st_cmds)), (le64_to_cpu(perf->hw_cmds))));
 	printf("  NAND Read Commands                             %20"PRIu64"\n",
-		(uint64_t)le64_to_cpu(perf->nr_cmds));
+		le64_to_cpu(perf->nr_cmds));
 	printf("  NAND Read Blocks Commands                      %20"PRIu64"\n",
-		(uint64_t)le64_to_cpu(perf->nr_blks));
+		le64_to_cpu(perf->nr_blks));
 	printf("  Average NAND Read Size                         %20f\n",
 		safe_div_fp((le64_to_cpu(perf->nr_blks)), (le64_to_cpu((perf->nr_cmds)))));
 	printf("  Nand Write Commands                            %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->nw_cmds));
+			le64_to_cpu(perf->nw_cmds));
 	printf("  NAND Write Blocks                              %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->nw_blks));
+			le64_to_cpu(perf->nw_blks));
 	printf("  Average NAND Write Size                        %20f\n",
 			safe_div_fp((le64_to_cpu(perf->nw_blks)), (le64_to_cpu(perf->nw_cmds))));
 	printf("  NAND Read Before Write                         %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->nrbw));
+			le64_to_cpu(perf->nrbw));
 }
 
 static void wdc_print_log_json(struct wdc_ssd_perf_stats *perf)
@@ -2186,49 +2186,49 @@ static void wdc_print_log_json(struct wdc_ssd_perf_stats *perf)
 	json_object_add_value_int(root, "Average Read Size",
 			safe_div_fp((le64_to_cpu(perf->hr_blks)), (le64_to_cpu(perf->hr_cmds))));
 	json_object_add_value_int(root, "Host Read Cache Hit Commands",
-			(uint64_t)le64_to_cpu(perf->hr_ch_cmds));
+			le64_to_cpu(perf->hr_ch_cmds));
 	json_object_add_value_int(root, "Host Read Cache Hit Percentage",
 			(uint64_t) calc_percent(le64_to_cpu(perf->hr_ch_cmds), le64_to_cpu(perf->hr_cmds)));
 	json_object_add_value_int(root, "Host Read Cache Hit Blocks",
-			(uint64_t)le64_to_cpu(perf->hr_ch_blks));
+			le64_to_cpu(perf->hr_ch_blks));
 	json_object_add_value_int(root, "Average Read Cache Hit Size",
 			safe_div_fp((le64_to_cpu(perf->hr_ch_blks)), (le64_to_cpu(perf->hr_ch_cmds))));
 	json_object_add_value_int(root, "Host Read Commands Stalled",
-			(uint64_t)le64_to_cpu(perf->hr_st_cmds));
+			le64_to_cpu(perf->hr_st_cmds));
 	json_object_add_value_int(root, "Host Read Commands Stalled Percentage",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hr_st_cmds)), le64_to_cpu(perf->hr_cmds)));
 	json_object_add_value_int(root, "Host Write Commands",
-			(uint64_t)le64_to_cpu(perf->hw_cmds));
+			le64_to_cpu(perf->hw_cmds));
 	json_object_add_value_int(root, "Host Write Blocks",
-			(uint64_t)le64_to_cpu(perf->hw_blks));
+			le64_to_cpu(perf->hw_blks));
 	json_object_add_value_int(root, "Average Write Size",
 			safe_div_fp((le64_to_cpu(perf->hw_blks)), (le64_to_cpu(perf->hw_cmds))));
 	json_object_add_value_int(root, "Host Write Odd Start Commands",
-			(uint64_t)le64_to_cpu(perf->hw_os_cmds));
+			le64_to_cpu(perf->hw_os_cmds));
 	json_object_add_value_int(root, "Host Write Odd Start Commands Percentage",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hw_os_cmds)), (le64_to_cpu(perf->hw_cmds))));
 	json_object_add_value_int(root, "Host Write Odd End Commands",
-			(uint64_t)le64_to_cpu(perf->hw_oe_cmds));
+			le64_to_cpu(perf->hw_oe_cmds));
 	json_object_add_value_int(root, "Host Write Odd End Commands Percentage",
 			(uint64_t)calc_percent((le64_to_cpu(perf->hw_oe_cmds)), (le64_to_cpu((perf->hw_cmds)))));
 	json_object_add_value_int(root, "Host Write Commands Stalled",
-		(uint64_t)le64_to_cpu(perf->hw_st_cmds));
+		le64_to_cpu(perf->hw_st_cmds));
 	json_object_add_value_int(root, "Host Write Commands Stalled Percentage",
 		(uint64_t)calc_percent((le64_to_cpu(perf->hw_st_cmds)), (le64_to_cpu(perf->hw_cmds))));
 	json_object_add_value_int(root, "NAND Read Commands",
-		(uint64_t)le64_to_cpu(perf->nr_cmds));
+		le64_to_cpu(perf->nr_cmds));
 	json_object_add_value_int(root, "NAND Read Blocks Commands",
-		(uint64_t)le64_to_cpu(perf->nr_blks));
+		le64_to_cpu(perf->nr_blks));
 	json_object_add_value_int(root, "Average NAND Read Size",
 		safe_div_fp((le64_to_cpu(perf->nr_blks)), (le64_to_cpu((perf->nr_cmds)))));
 	json_object_add_value_int(root, "Nand Write Commands",
-			(uint64_t)le64_to_cpu(perf->nw_cmds));
+			le64_to_cpu(perf->nw_cmds));
 	json_object_add_value_int(root, "NAND Write Blocks",
-			(uint64_t)le64_to_cpu(perf->nw_blks));
+			le64_to_cpu(perf->nw_blks));
 	json_object_add_value_int(root, "Average NAND Write Size",
 			safe_div_fp((le64_to_cpu(perf->nw_blks)), (le64_to_cpu(perf->nw_cmds))));
 	json_object_add_value_int(root, "NAND Read Before Written",
-			(uint64_t)le64_to_cpu(perf->nrbw));
+			le64_to_cpu(perf->nrbw));
 	json_print_object(root, NULL);
 	printf("\n");
 	json_free_object(root);
@@ -2257,9 +2257,9 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 
 	printf("  CA Log Page Performance Statistics :- \n");
 	printf("  NAND Bytes Written                             %20"PRIu64 "%20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->nand_bytes_wr_hi), (uint64_t)le64_to_cpu(perf->nand_bytes_wr_lo));
+			le64_to_cpu(perf->nand_bytes_wr_hi), le64_to_cpu(perf->nand_bytes_wr_lo));
 	printf("  NAND Bytes Read                                %20"PRIu64 "%20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->nand_bytes_rd_hi), (uint64_t)le64_to_cpu(perf->nand_bytes_rd_lo));
+			le64_to_cpu(perf->nand_bytes_rd_hi), le64_to_cpu(perf->nand_bytes_rd_lo));
 
 	converted = le64_to_cpu(perf->nand_bad_block);
 	printf("  NAND Bad Block Count (Normalized)              %20"PRIu64"\n",
@@ -2268,9 +2268,9 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 			converted >> 16);
 
 	printf("  Uncorrectable Read Count                       %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->uncorr_read_count));
+			le64_to_cpu(perf->uncorr_read_count));
 	printf("  Soft ECC Error Count                           %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->ecc_error_count));
+			le64_to_cpu(perf->ecc_error_count));
 	printf("  SSD End to End Detected Correction Count       %20"PRIu32"\n",
 			(uint32_t)le32_to_cpu(perf->ssd_detect_count));
 	printf("  SSD End to End Corrected Correction Count      %20"PRIu32"\n",
@@ -2282,7 +2282,7 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 	printf("  User Data Erase Counts Min                     %20"PRIu32"\n",
 			(uint32_t)le32_to_cpu(perf->data_erase_min));
 	printf("  Refresh Count                                  %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->refresh_count));
+			le64_to_cpu(perf->refresh_count));
 
 	converted = le64_to_cpu(perf->program_fail);
 	printf("  Program Fail Count (Normalized)                %20"PRIu64"\n",
@@ -2307,7 +2307,7 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 	printf("  Thermal Throttling Count                       %20"PRIu8"\n",
 			perf->thermal_throttle_count);
 	printf("  PCIe Correctable Error Count                   %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->pcie_corr_error));
+			le64_to_cpu(perf->pcie_corr_error));
 	printf("  Incomplete Shutdown Count                      %20"PRIu32"\n",
 			(uint32_t)le32_to_cpu(perf->incomplete_shutdown_count));
 	printf("  Percent Free Blocks                            %20"PRIu32"%%\n",
@@ -2411,13 +2411,13 @@ static void wdc_print_d0_log_normal(struct wdc_ssd_d0_smart_log *perf)
 	printf("  Lifetime Read Disturb Reallocation Events	 %20"PRIu32"\n",
 			(uint32_t)le32_to_cpu(perf->lifetime_read_disturb_realloc_events));
 	printf("  Lifetime NAND Writes	                         %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->lifetime_nand_writes));
+			le64_to_cpu(perf->lifetime_nand_writes));
 	printf("  Capacitor Health			 	 %20"PRIu32"%%\n",
 			(uint32_t)le32_to_cpu(perf->capacitor_health));
 	printf("  Lifetime User Writes	                         %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->lifetime_user_writes));
+			le64_to_cpu(perf->lifetime_user_writes));
 	printf("  Lifetime User Reads	                         %20"PRIu64"\n",
-			(uint64_t)le64_to_cpu(perf->lifetime_user_reads));
+			le64_to_cpu(perf->lifetime_user_reads));
 	printf("  Lifetime Thermal Throttle Activations	         %20"PRIu32"\n",
 			(uint32_t)le32_to_cpu(perf->lifetime_thermal_throttle_act));
 	printf("  Percentage of P/E Cycles Remaining             %20"PRIu32"%%\n",
@@ -3726,7 +3726,7 @@ static void wdc_print_nand_stats_normal(struct wdc_nand_stats *data)
 	printf("  Bad Block Count			         %"PRIu32"\n",
 			(uint32_t)le32_to_cpu(data->bad_block_count));
 	printf("  NAND XOR/RAID Recovery Trigger Events		 %"PRIu64"\n",
-			(uint64_t)le64_to_cpu(data->nand_rec_trigger_event));
+			le64_to_cpu(data->nand_rec_trigger_event));
 }
 
 static void wdc_print_nand_stats_json(struct wdc_nand_stats *data)
-- 
2.22.0.rc3


* [PATCH nvme-cli 02/13] Use NULL instead of 0 where a pointer is expected
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 01/13] Remove superfluous casts Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 03/13] huawei: Declare local functions static Bart Van Assche
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


This patch suppresses the following sparse warning:

warning: Using plain integer as NULL pointer

Cc: Muhammad Ahmad <muhammad.ahmad at seagate.com>
Cc: Quyen Truong <quyen.truong at virtium.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 plugins/seagate/seagate-nvme.c | 8 ++++----
 plugins/virtium/virtium-nvme.c | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/plugins/seagate/seagate-nvme.c b/plugins/seagate/seagate-nvme.c
index 4b5b0acb9244..69f3667d5b16 100644
--- a/plugins/seagate/seagate-nvme.c
+++ b/plugins/seagate/seagate-nvme.c
@@ -173,7 +173,7 @@ static int log_pages_supp(int argc, char **argv, struct command *cmd,
 	const struct argconfig_commandline_options command_line_options[] = {
 		{"output-format", 'o', "FMT", CFG_STRING, &cfg.output_format,
 		 required_argument, output_format },
-		{0}
+		{ }
 	};
 
 	fd = parse_and_open(argc, argv, desc, command_line_options,
@@ -735,7 +735,7 @@ static int vs_smart_log(int argc, char **argv, struct command *cmd, struct plugi
 
 	const struct argconfig_commandline_options command_line_options[] = {
 		{"output-format", 'o', "FMT", CFG_STRING, &cfg.output_format, required_argument, output_format },
-		{0}
+		{ }
 	};
 
 	fd = parse_and_open(argc, argv, desc, command_line_options, &cfg, sizeof(cfg));
@@ -832,7 +832,7 @@ static int temp_stats(int argc, char **argv, struct command *cmd, struct plugin
 
 	const struct argconfig_commandline_options command_line_options[] = {
 		{"output-format", 'o', "FMT", CFG_STRING, &cfg.output_format, required_argument, output_format },
-		{0}
+		{ }
 	};
 
 	fd = parse_and_open(argc, argv, desc, command_line_options, &cfg, sizeof(cfg));
@@ -1004,7 +1004,7 @@ static int vs_pcie_error_log(int argc, char **argv, struct command *cmd, struct
 
 	const struct argconfig_commandline_options command_line_options[] = {
 		{"output-format", 'o', "FMT", CFG_STRING, &cfg.output_format, required_argument, output_format },
-		{0}
+		{ }
 	};
 
 	fd = parse_and_open(argc, argv, desc, command_line_options, &cfg, sizeof(cfg));
diff --git a/plugins/virtium/virtium-nvme.c b/plugins/virtium/virtium-nvme.c
index b5b30af5dacd..ca5f5774af8c 100644
--- a/plugins/virtium/virtium-nvme.c
+++ b/plugins/virtium/virtium-nvme.c
@@ -287,7 +287,7 @@ static int vt_add_entry_to_log(const int fd, const char *path, const struct vtvi
         strcpy(filename, cfg->output_file);
     }
 	
-    smart.time_stamp = time(0);
+    smart.time_stamp = time(NULL);
     nsid = nvme_get_nsid(fd);
 	
     if(nsid <= 0) 
@@ -353,7 +353,7 @@ static int vt_update_vtview_log_header(const int fd, const char *path, const str
     }
 
     printf("Log file: %s\n", filename);
-    header.time_stamp = time(0);
+    header.time_stamp = time(NULL);
 
     ret = nvme_identify_ctrl(fd, &header.raw_ctrl);
     if(ret) 
@@ -881,14 +881,14 @@ Just logging :\n\
         freq_time = 1;
     }
 	
-    start_time = time(0);
+    start_time = time(NULL);
     end_time = start_time + total_time;
 
     fflush(stdout);
 	
     while(1)
     {
-        cur_time = time(0);
+        cur_time = time(NULL);
         if(cur_time >= end_time)
         {
             break;
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 03/13] huawei: Declare local functions static
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 01/13] Remove superfluous casts Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 02/13] Use NULL instead of 0 where a pointer is expected Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 04/13] seagate: " Bart Van Assche
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


This patch silences sparse complaints about missing declarations.

Cc: Zou Ming <zouming.zouming at huawei.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 plugins/huawei/huawei-nvme.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/plugins/huawei/huawei-nvme.c b/plugins/huawei/huawei-nvme.c
index b68e05c55aee..3130607a6443 100644
--- a/plugins/huawei/huawei-nvme.c
+++ b/plugins/huawei/huawei-nvme.c
@@ -163,7 +163,8 @@ static void format(char *formatter, size_t fmt_sz, char *tofmt, size_t tofmtsz)
 	}
 }
 
-void huawei_json_print_list_items(struct huawei_list_item *list_items, unsigned len)
+static void huawei_json_print_list_items(struct huawei_list_item *list_items,
+					 unsigned len)
 {
 	struct json_object *root;
 	struct json_array *devices;
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 04/13] seagate: Declare local functions static
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (2 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 03/13] huawei: Declare local functions static Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 05/13] virtium: Declare local symbols static Bart Van Assche
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


This patch silences sparse complaints about missing declarations.

Cc: Muhammad Ahmad <muhammad.ahmad at seagate.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-models.c                  |  1 +
 plugins/seagate/seagate-nvme.c | 24 +++++++++++++-----------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/nvme-models.c b/nvme-models.c
index f90ab9574d04..9b5077d0cdbb 100644
--- a/nvme-models.c
+++ b/nvme-models.c
@@ -7,6 +7,7 @@
 #include <fcntl.h>
 #include <unistd.h>
 #include <errno.h>
+#include "nvme-models.h"
 
 static char *_fmt1 = "/sys/class/nvme/nvme%d/device/subsystem_vendor";
 static char *_fmt2 = "/sys/class/nvme/nvme%d/device/subsystem_device";
diff --git a/plugins/seagate/seagate-nvme.c b/plugins/seagate/seagate-nvme.c
index 69f3667d5b16..a93bf8537315 100644
--- a/plugins/seagate/seagate-nvme.c
+++ b/plugins/seagate/seagate-nvme.c
@@ -46,7 +46,7 @@
 /***************************************
 *Command for "log-pages-supp"
 ***************************************/
-char* log_pages_supp_print(__u32 pageID)
+static char *log_pages_supp_print(__u32 pageID)
 {
 	switch(pageID) {
 	case 0x01:
@@ -128,7 +128,7 @@ char* log_pages_supp_print(__u32 pageID)
 }
 
 
-void json_log_pages_supp(log_page_map *logPageMap)
+static void json_log_pages_supp(log_page_map *logPageMap)
 {
 	struct json_object *root;
 	struct json_array *logPages;
@@ -214,7 +214,7 @@ static int log_pages_supp(int argc, char **argv, struct command *cmd,
 /***************************************
 * Extended-SMART Information
 ***************************************/
-char* print_ext_smart_id(__u8 attrId)
+static char *print_ext_smart_id(__u8 attrId)
 {
 	switch(attrId) {
 	case VS_ATTR_ID_SOFT_READ_ERROR_RATE:
@@ -360,7 +360,7 @@ char* print_ext_smart_id(__u8 attrId)
 	}
 }
 
-__u64 smart_attribute_vs(__u16 verNo, SmartVendorSpecific attr)
+static __u64 smart_attribute_vs(__u16 verNo, SmartVendorSpecific attr)
 {
 	__u64 val = 0;
 	vendor_smart_attribute_data *attrVendor;
@@ -376,7 +376,7 @@ __u64 smart_attribute_vs(__u16 verNo, SmartVendorSpecific attr)
 		return le32_to_cpu(attr.Raw0_3);
 }
 
-void print_smart_log(__u16 verNo, SmartVendorSpecific attr, int lastAttr)
+static void print_smart_log(__u16 verNo, SmartVendorSpecific attr, int lastAttr)
 {
 	static __u64 lsbGbErased = 0, msbGbErased = 0, lsbLifWrtToFlash = 0, msbLifWrtToFlash = 0,
 		lsbLifWrtFrmHost = 0, msbLifWrtFrmHost = 0, lsbLifRdToHost = 0, msbLifRdToHost = 0, lsbTrimCnt = 0, msbTrimCnt = 0;
@@ -491,7 +491,8 @@ void print_smart_log(__u16 verNo, SmartVendorSpecific attr, int lastAttr)
 	}
 }
 
-void json_print_smart_log(struct json_object *root, EXTENDED_SMART_INFO_T* ExtdSMARTInfo )
+static void json_print_smart_log(struct json_object *root,
+				 EXTENDED_SMART_INFO_T *ExtdSMARTInfo )
 {
 	/*struct json_object *root; */
 	struct json_array *lbafs;
@@ -606,7 +607,7 @@ void json_print_smart_log(struct json_object *root, EXTENDED_SMART_INFO_T* ExtdS
 	*/
 }
 
-void print_smart_log_CF(vendor_log_page_CF *pLogPageCF)
+static void print_smart_log_CF(vendor_log_page_CF *pLogPageCF)
 {
 	__u64 currentTemp, maxTemp;
 	printf("\n\nSeagate DRAM Supercap SMART Attributes :\n");
@@ -648,7 +649,8 @@ void print_smart_log_CF(vendor_log_page_CF *pLogPageCF)
 
 }
 
-void json_print_smart_log_CF(struct json_object *root, vendor_log_page_CF *pLogPageCF)
+static void json_print_smart_log_CF(struct json_object *root,
+				    vendor_log_page_CF *pLogPageCF)
 {
 	/*struct json_object *root;*/
 	struct json_array *logPages;
@@ -904,7 +906,7 @@ static int temp_stats(int argc, char **argv, struct command *cmd, struct plugin
 /***************************************
  * PCIe error-log information
  ***************************************/
-void print_vs_pcie_error_log(pcie_error_log_page  pcieErrorLog)
+static void print_vs_pcie_error_log(pcie_error_log_page  pcieErrorLog)
 {
 	__u32 correctPcieEc = 0;
 	__u32 uncorrectPcieEc = 0;
@@ -944,7 +946,7 @@ void print_vs_pcie_error_log(pcie_error_log_page  pcieErrorLog)
 	printf("%-45s : %s\n", "Completer Abort Status (CAS)", "Not Supported");
 }
 
-void json_vs_pcie_error_log(pcie_error_log_page  pcieErrorLog)
+static void json_vs_pcie_error_log(pcie_error_log_page pcieErrorLog)
 {
 	struct json_object *root;
 	root = json_create_object();
@@ -1392,7 +1394,7 @@ static int vs_internal_log(int argc, char **argv, struct command *cmd, struct pl
 }
 
 //SEAGATE-PLUGIN Version
-int seagate_plugin_version(int argc, char **argv, struct command *cmd,
+static int seagate_plugin_version(int argc, char **argv, struct command *cmd,
 			   struct plugin *plugin)
 {
 	printf("Seagate-Plugin version : %d.%d \n", SEAGATE_PLUGIN_VERSION_MAJOR, SEAGATE_PLUGIN_VERSION_MINOR);
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 05/13] virtium: Declare local symbols static
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (3 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 04/13] seagate: " Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 06/13] lightnvm: Fix an endianness issue Bart Van Assche
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


This patch silences sparse complaints about missing declarations.

Cc: Quyen Truong <quyen.truong at virtium.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 plugins/virtium/virtium-nvme.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/plugins/virtium/virtium-nvme.c b/plugins/virtium/virtium-nvme.c
index ca5f5774af8c..7d91ef8d29dd 100644
--- a/plugins/virtium/virtium-nvme.c
+++ b/plugins/virtium/virtium-nvme.c
@@ -377,7 +377,9 @@ static int vt_update_vtview_log_header(const int fd, const char *path, const str
     return (ret);
 }
 
-void vt_build_identify_lv2(unsigned int data, unsigned int start, unsigned int count, const char **table, bool isEnd)
+static void vt_build_identify_lv2(unsigned int data, unsigned int start,
+				  unsigned int count, const char **table,
+				  bool isEnd)
 {
     unsigned int i, end, pos, sh = 1;
     unsigned int temp;
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 06/13] lightnvm: Fix an endianness issue
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (4 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 05/13] virtium: Declare local symbols static Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 07/13] virtium: " Bart Van Assche
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


This patch silences the following sparse warning:

nvme-lightnvm.c:497:35: warning: incorrect type in initializer (different base types)
nvme-lightnvm.c:497:35:    expected unsigned int [usertype] data_len
nvme-lightnvm.c:497:35:    got restricted __le32 [usertype]
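The mechanism behind the warning, sketched with stand-alone versions of the kernel-style annotations (these expand to nothing for a regular compiler and only take effect under sparse's __CHECKER__):

```c
#include <assert.h>

#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#define __force   __attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef unsigned int u32;
typedef u32 __bitwise le32;	/* a "restricted" type under sparse */

/* Type-only sketch: the real cpu_to_le32() also byte-swaps on
 * big-endian hosts; here only the sparse-visible type change matters. */
static le32 sketch_cpu_to_le32(u32 v)
{
	return (__force le32)v;
}
```

Assigning such a restricted le32 result to a plain host-endian field like data_len is exactly what sparse reported above; dropping the conversion is correct because the passthru command fields are byte-swapped by the kernel itself.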

Cc: Matias Bjorling <m at bjorling.me>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-lightnvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/nvme-lightnvm.c b/nvme-lightnvm.c
index ee21d6b28e08..0b99786fbab2 100644
--- a/nvme-lightnvm.c
+++ b/nvme-lightnvm.c
@@ -494,7 +494,7 @@ static int __lnvm_do_get_bbtbl(int fd, struct nvme_nvm_id12 *id,
 		.opcode		= nvme_nvm_admin_get_bb_tbl,
 		.nsid		= cpu_to_le32(1),
 		.addr		= (__u64)(uintptr_t)bbtbl,
-		.data_len	= cpu_to_le32(bufsz),
+		.data_len	= bufsz,
 		.ppa		= cpu_to_le64(ppa.ppa),
 	};
 	void *tmp = &cmd;
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 07/13] virtium: Fix an endianness issue
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (5 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 06/13] lightnvm: Fix an endianness issue Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 08/13] wdc: Fix endianness bugs Bart Van Assche
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


Convert nsze from little endian to CPU endian before using it.
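As a portable sketch of what the conversion does (the real le64_to_cpu() comes from nvme-cli's headers; this stand-in only illustrates the byte-order interpretation):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Interpret the in-memory bytes of a field the device filled in
 * little-endian order. On a little-endian host this is the identity;
 * on a big-endian host it byte-swaps, which is why raw uses of nsze
 * only happened to work on x86. */
static uint64_t sketch_le64_to_cpu(uint64_t raw)
{
	uint8_t b[8];
	uint64_t v = 0;
	int i;

	memcpy(b, &raw, sizeof(b));	/* bytes exactly as stored in the struct */
	for (i = 7; i >= 0; i--)
		v = (v << 8) | b[i];	/* b[0] is the least significant byte */
	return v;
}
```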

Cc: Quyen Truong <quyen.truong at virtium.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 plugins/virtium/virtium-nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/plugins/virtium/virtium-nvme.c b/plugins/virtium/virtium-nvme.c
index 7d91ef8d29dd..5b295e2a47eb 100644
--- a/plugins/virtium/virtium-nvme.c
+++ b/plugins/virtium/virtium-nvme.c
@@ -132,7 +132,7 @@ static void vt_convert_smart_data_to_human_readable_format(struct vtview_smart_l
     snprintf(tempbuff, sizeof(tempbuff), "log;%s;%lu;%s;%s;%-.*s;", smart->raw_ctrl.sn, smart->time_stamp, smart->path, \
             smart->raw_ctrl.mn, (int)sizeof(smart->raw_ctrl.fr), smart->raw_ctrl.fr);
     strcpy(text, tempbuff);
-    snprintf(tempbuff, sizeof(tempbuff), "Capacity;%f;", (double)smart->raw_ns.nsze / 1000000000);
+    snprintf(tempbuff, sizeof(tempbuff), "Capacity;%f;", (double)le64_to_cpu(smart->raw_ns.nsze) / 1000000000);
     strcat(text, tempbuff);
     snprintf(tempbuff, sizeof(tempbuff), "Critical_Warning;%u;", smart->raw_smart.critical_warning);
     strcat(text, tempbuff);
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 08/13] wdc: Fix endianness bugs
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (6 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 07/13] virtium: " Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 09/13] Avoid using arrays with a variable length Bart Van Assche
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


Insert le16_to_cpu() / le32_to_cpu() where required.
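A sketch of the 16-bit case the DUI loop needs (the struct and helper here are illustrative, not the real wdc definitions):

```c
#include <assert.h>
#include <stdint.h>

/* The DUI log header stores each section size as two little-endian
 * bytes; summing them without conversion is only correct on
 * little-endian hosts. */
struct sketch_log_section {
	uint8_t size_le[2];	/* low byte first */
};

static uint16_t sketch_le16_to_cpu(struct sketch_log_section s)
{
	return (uint16_t)(s.size_le[0] | (s.size_le[1] << 8));
}
```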

Cc: Dong Ho <Dong.Ho at wdc.com>
Fixes: 6bd8ab436693 ("wdc: Add data area extraction for DUI command")
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 plugins/wdc/wdc-nvme.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/plugins/wdc/wdc-nvme.c b/plugins/wdc/wdc-nvme.c
index a9c86b6eced2..ba90fc09e0fe 100644
--- a/plugins/wdc/wdc-nvme.c
+++ b/plugins/wdc/wdc-nvme.c
@@ -914,7 +914,8 @@ static bool get_dev_mgment_cbs_data(int fd, __u8 log_id, void **cbs_data)
 	if (le32_to_cpu(hdr_ptr->length) > WDC_C2_LOG_BUF_LEN) {
 		/* Log Page buffer too small, free and reallocate the necessary size */
 		free(data);
-		if ((data = (__u8*) calloc(hdr_ptr->length, sizeof (__u8))) == NULL) {
+		data = calloc(le32_to_cpu(hdr_ptr->length), sizeof(__u8));
+		if (data == NULL) {
 			fprintf(stderr, "ERROR : WDC : malloc : %s\n", strerror(errno));
 			return false;
 		}
@@ -1290,9 +1291,12 @@ static int wdc_do_cap_dui(int fd, char *file, __u32 xfer_size, int data_area)
 		/* parse log header for all sections up to specified data area inclusively */
 		if (data_area != WDC_NVME_DUI_MAX_DATA_AREA) {
 			for(int i = 0; i < WDC_NVME_DUI_MAX_SECTION; i++) {
-				if (log_hdr->log_section[i].data_area_id <= data_area &&
-				    log_hdr->log_section[i].data_area_id != 0)
-					log_size += log_hdr->log_section[i].section_size;
+				__u16 data_area_id = le16_to_cpu(log_hdr->log_section[i].data_area_id);
+				__u16 section_size = le16_to_cpu(log_hdr->log_section[i].section_size);
+
+				if (data_area_id <= data_area &&
+				    data_area_id != 0)
+					log_size += section_size;
 				else
 					break;
 			}
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 09/13] Avoid using arrays with a variable length
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (7 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 08/13] wdc: Fix endianness bugs Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 10/13] nvme-cli: Rework the code for getting and setting NVMf properties Bart Van Assche
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


Since variable-length arrays result in suboptimal code, avoid using
them.
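The replacement pattern, reduced to a stand-alone sketch (the function name and copy logic are ours; the patch applies the same fixed-bound-plus-assert idea to the ascii[] buffer in d()):

```c
#include <assert.h>
#include <string.h>

/* A fixed 32-byte bound plus a runtime assert replaces the VLA
 * "char ascii[width + 1]"; the 32 + 1 size mirrors the patch. */
static size_t format_ascii(const unsigned char *buf, size_t width)
{
	char ascii[32 + 1];

	assert(width < sizeof(ascii));	/* reject widths the buffer cannot hold */
	memcpy(ascii, buf, width);
	ascii[width] = '\0';
	return strlen(ascii);
}
```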

Cc: Patrick McCormick <patrick.m.mccormick at intel.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-print.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/nvme-print.c b/nvme-print.c
index ea8f720748ef..94222a3b1c15 100644
--- a/nvme-print.c
+++ b/nvme-print.c
@@ -1,3 +1,4 @@
+#include <assert.h>
 #include <stdio.h>
 #include <string.h>
 #include <stdlib.h>
@@ -41,8 +42,9 @@ static long double int128_to_double(__u8 *data)
 void d(unsigned char *buf, int len, int width, int group)
 {
 	int i, offset = 0, line_done = 0;
-	char ascii[width + 1];
+	char ascii[32 + 1];
 
+	assert(width < sizeof(ascii));
 	printf("     ");
 	for (i = 0; i <= 15; i++)
 		printf("%3x", i);
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 10/13] nvme-cli: Rework the code for getting and setting NVMf properties
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (8 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 09/13] Avoid using arrays with a variable length Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:36 ` [PATCH nvme-cli 11/13] nvme-cli: Skip properties that are not supported Bart Van Assche
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


The current nvme_property() implementation is hard to read because it
uses an nvme_admin_cmd structure to represent the
nvmf_property_get_command and nvmf_property_set_command data structures.
Verifying the implementation of nvme_property() requires comparing the
offsets of members of nvme_admin_cmd with those of members of
nvmf_property_[gs]et_command. Make the code easier to read by using the
nvmf_property_get_command and nvmf_property_set_command data structures
directly.

This patch fixes the sparse complaints about endianness mismatches in the
nvme_property(), nvme_get_property() and nvme_set_property() functions.
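A toy illustration of the readability problem (these are deliberately not the real NVMe structures; the real layouts live in linux/nvme.h):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Two hypothetical command layouts that happen to overlap: filling in
 * the generic one and relying on its members landing on top of the
 * specific one is exactly the implicit contract the rework removes. */
struct toy_generic_cmd {
	uint8_t  opcode;
	uint8_t  flags;
	uint16_t rsvd;
	uint32_t cdw10;
	uint32_t cdw11;
};

struct toy_prop_get_cmd {
	uint8_t  opcode;
	uint8_t  fctype;
	uint16_t rsvd;
	uint32_t attrib;
	uint32_t offset;
};

/* Using the specific structure directly turns this hidden dependency
 * into something the compiler can state and check. */
_Static_assert(offsetof(struct toy_generic_cmd, cdw10) ==
	       offsetof(struct toy_prop_get_cmd, attrib),
	       "cdw10 must overlay attrib");
```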

Cc: Eyal Ben David <eyalbe at il.ibm.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-ioctl.c | 121 ++++++++++++++++++++++++++-------------------------
 nvme-ioctl.h |   2 +-
 2 files changed, 63 insertions(+), 60 deletions(-)

diff --git a/nvme-ioctl.c b/nvme-ioctl.c
index f3a56dc51086..0c8de0748c18 100644
--- a/nvme-ioctl.c
+++ b/nvme-ioctl.c
@@ -1,3 +1,4 @@
+#include <assert.h>
 #include <sys/ioctl.h>
 #include <sys/stat.h>
 #include <string.h>
@@ -547,60 +548,60 @@ int nvme_set_feature(int fd, __u32 nsid, __u8 fid, __u32 value, __u32 cdw12,
 			    cdw12, data_len, data, result);
 }
 
-static int nvme_property(int fd, __u8 fctype, __le32 off, __le64 *value, __u8 attrib)
-{
-	int err;
-	struct nvme_admin_cmd cmd = {
-		.opcode		= nvme_fabrics_command,
-		.cdw10		= attrib,
-		.cdw11		= off,
-	};
-
-	if (!value) {
-		errno = EINVAL;
-		return -errno;
-	}
 
-	if (fctype == nvme_fabrics_type_property_get){
-		cmd.nsid = nvme_fabrics_type_property_get;
-	} else if (fctype == nvme_fabrics_type_property_set) {
-		cmd.nsid = nvme_fabrics_type_property_set;
-		cmd.cdw12 = *value;
-	} else {
-		errno = EINVAL;
-		return -errno;
-	}
-
-	err = nvme_submit_admin_passthru(fd, &cmd);
-	if (!err && fctype == nvme_fabrics_type_property_get)
-		*value = cpu_to_le64(cmd.result);
-	return err;
+/*
+ * Perform the opposite operation of the byte-swapping code at the start of the
+ * kernel function nvme_user_cmd().
+ */
+static void nvme_to_passthru_cmd(struct nvme_passthru_cmd *pcmd,
+				 const struct nvme_command *ncmd)
+{
+	assert(sizeof(*ncmd) < sizeof(*pcmd));
+	memset(pcmd, 0, sizeof(*pcmd));
+	pcmd->opcode = ncmd->common.opcode;
+	pcmd->flags = ncmd->common.flags;
+	pcmd->rsvd1 = ncmd->common.command_id;
+	pcmd->nsid = le32_to_cpu(ncmd->common.nsid);
+	pcmd->cdw2 = le32_to_cpu(ncmd->common.cdw2[0]);
+	pcmd->cdw3 = le32_to_cpu(ncmd->common.cdw2[1]);
+	/* Skip metadata and addr */
+	pcmd->cdw10 = le32_to_cpu(ncmd->common.cdw10[0]);
+	pcmd->cdw11 = le32_to_cpu(ncmd->common.cdw10[1]);
+	pcmd->cdw12 = le32_to_cpu(ncmd->common.cdw10[2]);
+	pcmd->cdw13 = le32_to_cpu(ncmd->common.cdw10[3]);
+	pcmd->cdw14 = le32_to_cpu(ncmd->common.cdw10[4]);
+	pcmd->cdw15 = le32_to_cpu(ncmd->common.cdw10[5]);
 }
 
 int nvme_get_property(int fd, int offset, uint64_t *value)
 {
-	__le64 value64;
-	int err = -EINVAL;
-
-	if (!value)
-		return err;
-
-	err = nvme_property(fd, nvme_fabrics_type_property_get,
-			cpu_to_le32(offset), &value64, is_64bit_reg(offset));
+	struct nvme_passthru_cmd pcmd;
+	struct nvme_command gcmd = {
+		.prop_get = {
+			.opcode	= nvme_fabrics_command,
+			.fctype	= nvme_fabrics_type_property_get,
+			.offset	= cpu_to_le32(offset),
+			.attrib = is_64bit_reg(offset),
+		}
+	};
+	int err;
 
+	nvme_to_passthru_cmd(&pcmd, &gcmd);
+	err = nvme_submit_admin_passthru(fd, &pcmd);
 	if (!err) {
-		if (is_64bit_reg(offset))
-			*((uint64_t *)value) = le64_to_cpu(value64);
-		else
-			*((uint32_t *)value) = le32_to_cpu(value64);
+		/*
+		 * nvme_submit_admin_passthru() stores the lower 32 bits
+		 * of the property value in pcmd.result using CPU endianness.
+		 */
+		*value = pcmd.result;
 	}
-
 	return err;
 }
 
 int nvme_get_properties(int fd, void **pbar)
 {
 	int offset;
+	uint64_t value;
 	int err;
 	int size = getpagesize();
 
@@ -612,36 +613,38 @@ int nvme_get_properties(int fd, void **pbar)
 
 	memset(*pbar, 0xff, size);
 	for (offset = NVME_REG_CAP; offset <= NVME_REG_CMBSZ;) {
-		err = nvme_get_property(fd, offset, *pbar + offset);
+		err = nvme_get_property(fd, offset, &value);
 		if (err) {
 			free(*pbar);
 			break;
 		}
-
-		offset += is_64bit_reg(offset) ? 8 : 4;
+		if (is_64bit_reg(offset)) {
+			*(uint64_t *)(*pbar + offset) = value;
+			offset += 8;
+		} else {
+			*(uint32_t *)(*pbar + offset) = value;
+			offset += 4;
+		}
 	}
 
 	return err;
 }
 
-int nvme_set_property(int fd, int offset, int value)
+int nvme_set_property(int fd, int offset, uint64_t value)
 {
-	__le64 val = cpu_to_le64(value);
-	__le32 off = cpu_to_le32(offset);
-	bool is64bit;
-
-	switch (off) {
-	case NVME_REG_CAP:
-	case NVME_REG_ASQ:
-	case NVME_REG_ACQ:
-		is64bit = true;
-		break;
-	default:
-		is64bit = false;
-	}
+	struct nvme_command scmd = {
+		.prop_set = {
+			.opcode	= nvme_fabrics_command,
+			.fctype	= nvme_fabrics_type_property_set,
+			.offset	= cpu_to_le32(offset),
+			.value = cpu_to_le64(value),
+			.attrib = is_64bit_reg(offset),
+		}
+	};
+	struct nvme_passthru_cmd pcmd;
 
-	return nvme_property(fd, nvme_fabrics_type_property_set,
-			off, &val, is64bit ? 1: 0);
+	nvme_to_passthru_cmd(&pcmd, &scmd);
+	return nvme_submit_admin_passthru(fd, &pcmd);
 }
 
 int nvme_get_feature(int fd, __u32 nsid, __u8 fid, __u8 sel, __u32 cdw11,
diff --git a/nvme-ioctl.h b/nvme-ioctl.h
index f4553e0bd264..68f0892570b4 100644
--- a/nvme-ioctl.h
+++ b/nvme-ioctl.h
@@ -133,7 +133,7 @@ int nvme_dir_send(int fd, __u32 nsid, __u16 dspec, __u8 dtype, __u8 doper,
 int nvme_dir_recv(int fd, __u32 nsid, __u16 dspec, __u8 dtype, __u8 doper,
 		  __u32 data_len, __u32 dw12, void *data, __u32 *result);
 int nvme_get_properties(int fd, void **pbar);
-int nvme_set_property(int fd, int offset, int value);
+int nvme_set_property(int fd, int offset, uint64_t value);
 int nvme_get_property(int fd, int offset, uint64_t *value);
 int nvme_sanitize(int fd, __u8 sanact, __u8 ause, __u8 owpass, __u8 oipbp,
 		  __u8 no_dealloc, __u32 ovrpat);
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 11/13] nvme-cli: Skip properties that are not supported
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (9 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 10/13] nvme-cli: Rework the code for getting and setting NVMf properties Bart Van Assche
@ 2019-06-19 17:36 ` Bart Van Assche
  2019-06-19 17:37 ` [PATCH nvme-cli 12/13] Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
  2019-06-19 17:37 ` [PATCH nvme-cli 13/13] nvme-cli: Report the NVMe 1.4 NPWG, NPWA, NPDG, NPDA and NOWS fields Bart Van Assche
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:36 UTC (permalink / raw)


Instead of making the show-regs command fail when it encounters a
property that the target side does not support, skip displaying that
property. With this patch applied, the following output is displayed
for the Linux NVMf target:

$ nvme show-regs /dev/nvme0
cap     : f0003ff
version : 10300
cc      : 460001
csts    : 1
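A sketch of the check this patch adds (0x2 is NVMe's Invalid Field in Command status code; the helper name and local constant are ours):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_NVME_SC_INVALID_FIELD 0x2

/* A positive return from the admin passthru ioctl carries the NVMe
 * status; its low byte holds the status-code field. Invalid Field is
 * downgraded from a hard error to "property not implemented". */
static bool property_unsupported(int err)
{
	return err > 0 && (err & 0xff) == SKETCH_NVME_SC_INVALID_FIELD;
}
```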

Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-ioctl.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/nvme-ioctl.c b/nvme-ioctl.c
index 0c8de0748c18..57fae22e1e37 100644
--- a/nvme-ioctl.c
+++ b/nvme-ioctl.c
@@ -614,7 +614,10 @@ int nvme_get_properties(int fd, void **pbar)
 	memset(*pbar, 0xff, size);
 	for (offset = NVME_REG_CAP; offset <= NVME_REG_CMBSZ;) {
 		err = nvme_get_property(fd, offset, &value);
-		if (err) {
+		if (err > 0 && (err & 0xff) == NVME_SC_INVALID_FIELD) {
+			err = 0;
+			value = -1;
+		} else if (err) {
 			free(*pbar);
 			break;
 		}
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 12/13] Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (10 preceding siblings ...)
  2019-06-19 17:36 ` [PATCH nvme-cli 11/13] nvme-cli: Skip properties that are not supported Bart Van Assche
@ 2019-06-19 17:37 ` Bart Van Assche
  2019-06-19 17:37 ` [PATCH nvme-cli 13/13] nvme-cli: Report the NVMe 1.4 NPWG, NPWA, NPDG, NPDA and NOWS fields Bart Van Assche
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:37 UTC (permalink / raw)


Several new fields have been introduced in version 1.4 of the NVMe spec
at offsets that were defined as reserved in version 1.3d of the NVMe
spec. Update the definition of the nvme_id_ns data structure such that
it is in sync with version 1.4 of the NVMe spec. This change preserves
backwards compatibility.
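The layout constraint can be checked mechanically; a trimmed sketch (the struct name is ours, the field names follow the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The five new 16-bit fields plus the shrunk pad must occupy exactly
 * the 28 bytes of the old rsvd64[28], so every later member of
 * nvme_id_ns keeps its offset and existing users stay compatible. */
struct sketch_id_ns_new_fields {
	uint16_t npwg;
	uint16_t npwa;
	uint16_t npdg;
	uint16_t npda;
	uint16_t nows;
	uint8_t  rsvd74[18];
};

_Static_assert(sizeof(struct sketch_id_ns_new_fields) == 28,
	       "new fields must exactly replace rsvd64[28]");
```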

Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 linux/nvme.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/linux/nvme.h b/linux/nvme.h
index 69f287e682f2..af14339bdca6 100644
--- a/linux/nvme.h
+++ b/linux/nvme.h
@@ -332,7 +332,12 @@ struct nvme_id_ns {
 	__le16			nabspf;
 	__le16			noiob;
 	__u8			nvmcap[16];
-	__u8			rsvd64[28];
+	__le16			npwg;
+	__le16			npwa;
+	__le16			npdg;
+	__le16			npda;
+	__le16			nows;
+	__u8			rsvd74[18];
 	__le32			anagrpid;
 	__u8			rsvd96[3];
 	__u8			nsattr;
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 13/13] nvme-cli: Report the NVMe 1.4 NPWG, NPWA, NPDG, NPDA and NOWS fields
  2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
                   ` (11 preceding siblings ...)
  2019-06-19 17:37 ` [PATCH nvme-cli 12/13] Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
@ 2019-06-19 17:37 ` Bart Van Assche
  12 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-19 17:37 UTC (permalink / raw)


From the NVMe 1.4 paragraph about NSFEAT:

"Bit 4 if set to 1: indicates that the fields NPWG, NPWA, NPDG, NPDA, and
NOWS are defined for this namespace and should be used by the host for I/O
optimization; and NOWS defined for this namespace shall adhere to Optimal
Write Size field setting defined in NVM Sets Attributes Entry (refer to
Figure 251) for the NVM Set with which this namespace is associated."

Hence only report these fields if bit 4 in NSFEAT has been set.
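The resulting gate, as a sketch (the helper name is ours; the patch open-codes the test as `ns->nsfeat & 0x10`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit 4 of NSFEAT advertises that NPWG, NPWA, NPDG, NPDA and NOWS are
 * defined for the namespace; print them only when it is set. */
static bool nsfeat_has_npwg(uint8_t nsfeat)
{
	return nsfeat & 0x10;
}
```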

Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 nvme-print.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/nvme-print.c b/nvme-print.c
index 94222a3b1c15..17478f9856e2 100644
--- a/nvme-print.c
+++ b/nvme-print.c
@@ -721,6 +721,13 @@ void show_nvme_id_ns(struct nvme_id_ns *ns, unsigned int mode)
 	printf("nabspf  : %d\n", le16_to_cpu(ns->nabspf));
 	printf("noiob   : %d\n", le16_to_cpu(ns->noiob));
 	printf("nvmcap  : %.0Lf\n", int128_to_double(ns->nvmcap));
+	if (ns->nsfeat & 0x10) {
+		printf("npwg    : %u\n", le16_to_cpu(ns->npwg));
+		printf("npwa    : %u\n", le16_to_cpu(ns->npwa));
+		printf("npdg    : %u\n", le16_to_cpu(ns->npdg));
+		printf("npda    : %u\n", le16_to_cpu(ns->npda));
+		printf("nows    : %u\n", le16_to_cpu(ns->nows));
+	}
 	printf("nsattr	: %u\n", ns->nsattr);
 	printf("nvmsetid: %d\n", le16_to_cpu(ns->nvmsetid));
 	printf("anagrpid: %u\n", le32_to_cpu(ns->anagrpid));
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-19 17:36 ` [PATCH nvme-cli 01/13] Remove superfluous casts Bart Van Assche
@ 2019-06-24 10:00   ` Mikhail Skorzhinskii
  2019-06-24 10:49     ` Minwoo Im
  2019-06-24 13:55     ` Bart Van Assche
  0 siblings, 2 replies; 20+ messages in thread
From: Mikhail Skorzhinskii @ 2019-06-24 10:00 UTC (permalink / raw)


Hi Bart,

I'm not completely sure that anyone is interested in fixing this, but
this change breaks compilation on anything with glibc v2.24 or lower.
This is due to long-standing bug #16458[1], which was fixed two years
ago and landed in glibc v2.25.

I noticed it while compiling on RHEL 7/CentOS 7, which uses glibc
v2.17.

Maybe it is possible to revert this change for a while?

Mikhail

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=16458

Bart Van Assche <bvanassche at acm.org> writes:

 > The le64_to_cpu() definition is as follows:
 >
 >   #define le64_to_cpu(x) le64toh((__force __u64)(x))
 >
 > According to the le64toh() man page, the return type of that function
 > is uint64_t. Hence drop the cast from (uint64_t)le64_to_cpu(x)
 > expressions. This patch has been generated as follows:
 >
 >   git ls-tree --name-only -r HEAD |
 >     while read f; do
 >       [ -f "$f" ] && sed -i 's/(uint64_t)le64_to_cpu(/le64_to_cpu(/g' "$f"
 >     done
 >
 > Signed-off-by: Bart Van Assche <bvanassche at acm.org>
 > ---
 >  fabrics.c                      |  2 +-
 >  nvme-print.c                   | 40 +++++++++---------
 >  plugins/intel/intel-nvme.c     | 16 +++----
 >  plugins/seagate/seagate-nvme.c | 26 ++++++------
 >  plugins/wdc/wdc-nvme.c         | 76 +++++++++++++++++-----------------
 >  5 files changed, 80 insertions(+), 80 deletions(-)
 >
 > diff --git a/fabrics.c b/fabrics.c
 > index 9ed4a5684f6c..b17f4061e0b8 100644
 > --- a/fabrics.c
 > +++ b/fabrics.c
 > @@ -420,7 +420,7 @@ static void print_discovery_log(struct nvmf_disc_rsp_page_hdr *log, int numrec)
 >
 >  	printf("\nDiscovery Log Number of Records %d, "
 >  	       "Generation counter %"PRIu64"\n",
 > -		numrec, (uint64_t)le64_to_cpu(log->genctr));
 > +		numrec, le64_to_cpu(log->genctr));
 >
 >  	for (i = 0; i < numrec; i++) {
 >  		struct nvmf_disc_rsp_page_entry *e = &log->entries[i];
 > diff --git a/nvme-print.c b/nvme-print.c
 > index b058d73f7b57..ea8f720748ef 100644
 > --- a/nvme-print.c
 > +++ b/nvme-print.c
 > @@ -680,9 +680,9 @@ void show_nvme_id_ns(struct nvme_id_ns *ns, unsigned int mode)
 >  	int human = mode & HUMAN,
 >  		vs = mode & VS;
 >
 > -	printf("nsze    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->nsze));
 > -	printf("ncap    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->ncap));
 > -	printf("nuse    : %#"PRIx64"\n", (uint64_t)le64_to_cpu(ns->nuse));
 > +	printf("nsze    : %#"PRIx64"\n", le64_to_cpu(ns->nsze));
 > +	printf("ncap    : %#"PRIx64"\n", le64_to_cpu(ns->ncap));
 > +	printf("nuse    : %#"PRIx64"\n", le64_to_cpu(ns->nuse));
 >  	printf("nsfeat  : %#x\n", ns->nsfeat);
 >  	if (human)
 >  		show_nvme_id_ns_nsfeat(ns->nsfeat);
 > @@ -1221,13 +1221,13 @@ void show_error_log(struct nvme_error_log_page *err_log, int entries, const char
 >  	for (i = 0; i < entries; i++) {
 >  		printf(" Entry[%2d]   \n", i);
 >  		printf(".................\n");
 > -		printf("error_count  : %"PRIu64"\n", (uint64_t)le64_to_cpu(err_log[i].error_count));
 > +		printf("error_count  : %"PRIu64"\n", le64_to_cpu(err_log[i].error_count));
 >  		printf("sqid         : %d\n", err_log[i].sqid);
 >  		printf("cmdid        : %#x\n", err_log[i].cmdid);
 >  		printf("status_field : %#x(%s)\n", err_log[i].status_field,
 >  			nvme_status_to_string(err_log[i].status_field >> 1));
 >  		printf("parm_err_loc : %#x\n", err_log[i].parm_error_location);
 > -		printf("lba          : %#"PRIx64"\n",(uint64_t)le64_to_cpu(err_log[i].lba));
 > +		printf("lba          : %#"PRIx64"\n",le64_to_cpu(err_log[i].lba));
 >  		printf("nsid         : %#x\n", err_log[i].nsid);
 >  		printf("vs           : %d\n", err_log[i].vs);
 >  		printf("cs           : %#"PRIx64"\n", (uint64_t) err_log[i].cs);
 > @@ -1258,8 +1258,8 @@ void show_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 >  			printf("regctl[%d] :\n", i);
 >  			printf("  cntlid  : %x\n", le16_to_cpu(status->regctl_ds[i].cntlid));
 >  			printf("  rcsts   : %x\n", status->regctl_ds[i].rcsts);
 > -			printf("  hostid  : %"PRIx64"\n", (uint64_t)le64_to_cpu(status->regctl_ds[i].hostid));
 > -			printf("  rkey    : %"PRIx64"\n", (uint64_t)le64_to_cpu(status->regctl_ds[i].rkey));
 > +			printf("  hostid  : %"PRIx64"\n", le64_to_cpu(status->regctl_ds[i].hostid));
 > +			printf("  rkey    : %"PRIx64"\n", le64_to_cpu(status->regctl_ds[i].rkey));
 >  		}
 >  	} else {
 >  		struct nvme_reservation_status_ext *ext_status = (struct nvme_reservation_status_ext *)status;
 > @@ -1272,7 +1272,7 @@ void show_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 >  			printf("regctlext[%d] :\n", i);
 >  			printf("  cntlid     : %x\n", le16_to_cpu(ext_status->regctl_eds[i].cntlid));
 >  			printf("  rcsts      : %x\n", ext_status->regctl_eds[i].rcsts);
 > -			printf("  rkey       : %"PRIx64"\n", (uint64_t)le64_to_cpu(ext_status->regctl_eds[i].rkey));
 > +			printf("  rkey       : %"PRIx64"\n", le64_to_cpu(ext_status->regctl_eds[i].rkey));
 >  			printf("  hostid     : ");
 >  			for (j = 0; j < 16; j++)
 >  				printf("%x", ext_status->regctl_eds[i].hostid[j]);
 > @@ -1518,7 +1518,7 @@ void show_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 >  			devname);
 >  	printf("ANA LOG HEADER :-\n");
 >  	printf("chgcnt	:	%"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(hdr->chgcnt));
 > +			le64_to_cpu(hdr->chgcnt));
 >  	printf("ngrps	:	%u\n", le16_to_cpu(hdr->ngrps));
 >  	printf("ANA Log Desc :-\n");
 >
 > @@ -1531,7 +1531,7 @@ void show_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 >  		printf("grpid	:	%u\n", le32_to_cpu(desc->grpid));
 >  		printf("nnsids	:	%u\n", le32_to_cpu(desc->nnsids));
 >  		printf("chgcnt	:	%"PRIu64"\n",
 > -		       (uint64_t)le64_to_cpu(desc->chgcnt));
 > +		       le64_to_cpu(desc->chgcnt));
 >  		printf("state	:	%s\n",
 >  				nvme_ana_state_to_string(desc->state));
 >  		for (j = 0; j < le32_to_cpu(desc->nnsids); j++)
 > @@ -1598,14 +1598,14 @@ void show_self_test_log(struct nvme_self_test_log *self_test, const char *devnam
 >  		temp = self_test->result[i].valid_diagnostic_info;
 >  		printf("  Valid Diagnostic Information : %#x\n", temp);
 >  		printf("  Power on hours (POH)         : %#"PRIx64"\n",
 > -			(uint64_t)le64_to_cpu(self_test->result[i].power_on_hours));
 > +			le64_to_cpu(self_test->result[i].power_on_hours));
 >
 >  		if (temp & NVME_SELF_TEST_VALID_NSID)
 >  			printf("  Namespace Identifier         : %#x\n",
 >  				le32_to_cpu(self_test->result[i].nsid));
 >  		if (temp & NVME_SELF_TEST_VALID_FLBA)
 >  			printf("  Failing LBA                  : %#"PRIx64"\n",
 > -				(uint64_t)le64_to_cpu(self_test->result[i].failing_lba));
 > +				le64_to_cpu(self_test->result[i].failing_lba));
 >  		if (temp & NVME_SELF_TEST_VALID_SCT)
 >  			printf("  Status Code Type             : %#x\n",
 >  				self_test->result[i].status_code_type);
 > @@ -2012,9 +2012,9 @@ static const char *nvme_plm_window(__u32 plm)
 >  static void show_plm_config(struct nvme_plm_config *plmcfg)
 >  {
 >  	printf("\tEnable Event          :%04x\n", le16_to_cpu(plmcfg->enable_event));
 > -	printf("\tDTWIN Reads Threshold :%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_reads_thresh));
 > -	printf("\tDTWIN Writes Threshold:%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_writes_thresh));
 > -	printf("\tDTWIN Time Threshold  :%"PRIu64"\n", (uint64_t)le64_to_cpu(plmcfg->dtwin_time_thresh));
 > +	printf("\tDTWIN Reads Threshold :%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_reads_thresh));
 > +	printf("\tDTWIN Writes Threshold:%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_writes_thresh));
 > +	printf("\tDTWIN Time Threshold  :%"PRIu64"\n", le64_to_cpu(plmcfg->dtwin_time_thresh));
 >  }
 >
 >  void nvme_feature_show_fields(__u32 fid, unsigned int result, unsigned char *buf)
 > @@ -2509,8 +2509,8 @@ void json_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 >
 >  			json_object_add_value_int(rc, "cntlid", le16_to_cpu(status->regctl_ds[i].cntlid));
 >  			json_object_add_value_int(rc, "rcsts", status->regctl_ds[i].rcsts);
 > -			json_object_add_value_uint(rc, "hostid", (uint64_t)le64_to_cpu(status->regctl_ds[i].hostid));
 > -			json_object_add_value_uint(rc, "rkey", (uint64_t)le64_to_cpu(status->regctl_ds[i].rkey));
 > +			json_object_add_value_uint(rc, "hostid", le64_to_cpu(status->regctl_ds[i].hostid));
 > +			json_object_add_value_uint(rc, "rkey", le64_to_cpu(status->regctl_ds[i].rkey));
 >
 >  			json_array_add_value_object(rcs, rc);
 >  		}
 > @@ -2529,7 +2529,7 @@ void json_nvme_resv_report(struct nvme_reservation_status *status, int bytes, __
 >
 >  			json_object_add_value_int(rc, "cntlid", le16_to_cpu(ext_status->regctl_eds[i].cntlid));
 >  			json_object_add_value_int(rc, "rcsts", ext_status->regctl_eds[i].rcsts);
 > -			json_object_add_value_uint(rc, "rkey", (uint64_t)le64_to_cpu(ext_status->regctl_eds[i].rkey));
 > +			json_object_add_value_uint(rc, "rkey", le64_to_cpu(ext_status->regctl_eds[i].rkey));
 >  			for (j = 0; j < 16; j++)
 >  				sprintf(hostid + j * 2, "%02x", ext_status->regctl_eds[i].hostid[j]);
 >
 > @@ -2717,7 +2717,7 @@ void json_ana_log(struct nvme_ana_rsp_hdr *ana_log, const char *devname)
 >  			"Asynchronous Namespace Access Log for NVMe device:",
 >  			devname);
 >  	json_object_add_value_uint(root, "chgcnt",
 > -			(uint64_t)le64_to_cpu(hdr->chgcnt));
 > +			le64_to_cpu(hdr->chgcnt));
 >  	json_object_add_value_uint(root, "ngrps", le16_to_cpu(hdr->ngrps));
 >
 >  	desc_list = json_create_array();
 > @@ -2779,7 +2779,7 @@ void json_self_test_log(struct nvme_self_test_log *self_test, const char *devnam
 >  		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_NSID)
 >  			json_object_add_value_int(valid_attrs, "Namespace Identifier (NSID)", le32_to_cpu(self_test->result[i].nsid));
 >  		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_FLBA)
 > -			json_object_add_value_uint(valid_attrs, "Failing LBA",(uint64_t)le64_to_cpu(self_test->result[i].failing_lba));
 > +			json_object_add_value_uint(valid_attrs, "Failing LBA",le64_to_cpu(self_test->result[i].failing_lba));
 >  		if (self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_SCT)
 >  			json_object_add_value_int(valid_attrs, "Status Code Type",self_test->result[i].status_code_type);
 >  		if(self_test->result[i].valid_diagnostic_info & NVME_SELF_TEST_VALID_SC)
 > diff --git a/plugins/intel/intel-nvme.c b/plugins/intel/intel-nvme.c
 > index 9aaf36768731..37f2c705c90f 100644
 > --- a/plugins/intel/intel-nvme.c
 > +++ b/plugins/intel/intel-nvme.c
 > @@ -322,14 +322,14 @@ static void show_temp_stats(struct intel_temp_stats *stats)
 >  {
 >  	printf("  Intel Temperature Statistics\n");
 >  	printf("--------------------------------\n");
 > -	printf("Current temperature         : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->curr));
 > -	printf("Last critical overtemp flag : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->last_overtemp));
 > -	printf("Life critical overtemp flag : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->life_overtemp));
 > -	printf("Highest temperature         : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->highest_temp));
 > -	printf("Lowest temperature          : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->lowest_temp));
 > -	printf("Max operating temperature   : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->max_operating_temp));
 > -	printf("Min operating temperature   : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->min_operating_temp));
 > -	printf("Estimated offset            : %"PRIu64"\n", (uint64_t)le64_to_cpu(stats->est_offset));
 > +	printf("Current temperature         : %"PRIu64"\n", le64_to_cpu(stats->curr));
 > +	printf("Last critical overtemp flag : %"PRIu64"\n", le64_to_cpu(stats->last_overtemp));
 > +	printf("Life critical overtemp flag : %"PRIu64"\n", le64_to_cpu(stats->life_overtemp));
 > +	printf("Highest temperature         : %"PRIu64"\n", le64_to_cpu(stats->highest_temp));
 > +	printf("Lowest temperature          : %"PRIu64"\n", le64_to_cpu(stats->lowest_temp));
 > +	printf("Max operating temperature   : %"PRIu64"\n", le64_to_cpu(stats->max_operating_temp));
 > +	printf("Min operating temperature   : %"PRIu64"\n", le64_to_cpu(stats->min_operating_temp));
 > +	printf("Estimated offset            : %"PRIu64"\n", le64_to_cpu(stats->est_offset));
 >  }
 >
 >  static int get_temp_stats_log(int argc, char **argv, struct command *cmd, struct plugin *plugin)
 > diff --git a/plugins/seagate/seagate-nvme.c b/plugins/seagate/seagate-nvme.c
 > index 4fa29d950d9c..4b5b0acb9244 100644
 > --- a/plugins/seagate/seagate-nvme.c
 > +++ b/plugins/seagate/seagate-nvme.c
 > @@ -615,35 +615,35 @@ void print_smart_log_CF(vendor_log_page_CF *pLogPageCF)
 >  	printf("%-40s", "Super-cap current temperature");
 >  	currentTemp = pLogPageCF->AttrCF.SuperCapCurrentTemperature;
 >  	/*currentTemp = currentTemp ? currentTemp - 273 : 0;*/
 > -	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(currentTemp));
 > +	printf(" 0x%016"PRIx64"", le64_to_cpu(currentTemp));
 >  	printf("\n");
 >
 >  	maxTemp = pLogPageCF->AttrCF.SuperCapMaximumTemperature;
 >  	/*maxTemp = maxTemp ? maxTemp - 273 : 0;*/
 >  	printf("%-40s", "Super-cap maximum temperature");
 > -	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(maxTemp));
 > +	printf(" 0x%016"PRIx64"", le64_to_cpu(maxTemp));
 >  	printf("\n");
 >
 >  	printf("%-40s", "Super-cap status");
 > -	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.SuperCapStatus));
 > +	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.SuperCapStatus));
 >  	printf("\n");
 >
 >  	printf("%-40s", "Data units read to DRAM namespace");
 > -	printf(" 0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
 > -	       (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 > +	printf(" 0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
 > +	       le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 >  	printf("\n");
 >
 >  	printf("%-40s", "Data units written to DRAM namespace");
 > -	printf(" 0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
 > -	       (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 > +	printf(" 0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
 > +	       le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 >  	printf("\n");
 >
 >  	printf("%-40s", "DRAM correctable error count");
 > -	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DramCorrectableErrorCount));
 > +	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DramCorrectableErrorCount));
 >  	printf("\n");
 >
 >  	printf("%-40s", "DRAM uncorrectable error count");
 > -	printf(" 0x%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DramUncorrectableErrorCount));
 > +	printf(" 0x%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DramUncorrectableErrorCount));
 >  	printf("\n");
 >
 >  }
 > @@ -682,16 +682,16 @@ void json_print_smart_log_CF(struct json_object *root, vendor_log_page_CF *pLogP
 >  	lbaf = json_create_object();
 >  	json_object_add_value_string(lbaf, "attribute_name", "Data units read to DRAM namespace");
 >  	memset(buf, 0, sizeof(buf));
 > -	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
 > -		(uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 > +	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.MS__u64),
 > +		le64_to_cpu(pLogPageCF->AttrCF.DataUnitsReadToDramNamespace.LS__u64));
 >  	json_object_add_value_string(lbaf, "attribute_value", buf);
 >  	json_array_add_value_object(logPages, lbaf);
 >
 >  	lbaf = json_create_object();
 >  	json_object_add_value_string(lbaf, "attribute_name", "Data units written to DRAM namespace");
 >  	memset(buf, 0, sizeof(buf));
 > -	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", (uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
 > -		(uint64_t)le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 > +	sprintf(buf, "0x%016"PRIx64"%016"PRIx64"", le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.MS__u64),
 > +		le64_to_cpu(pLogPageCF->AttrCF.DataUnitsWrittenToDramNamespace.LS__u64));
 >  	json_object_add_value_string(lbaf, "attribute_value", buf);
 >  	json_array_add_value_object(logPages, lbaf);
 >
 > diff --git a/plugins/wdc/wdc-nvme.c b/plugins/wdc/wdc-nvme.c
 > index a1cb1ebf766f..a9c86b6eced2 100644
 > --- a/plugins/wdc/wdc-nvme.c
 > +++ b/plugins/wdc/wdc-nvme.c
 > @@ -2125,55 +2125,55 @@ static void wdc_print_log_normal(struct wdc_ssd_perf_stats *perf)
 >  {
 >  	printf("  C1 Log Page Performance Statistics :- \n");
 >  	printf("  Host Read Commands                             %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hr_cmds));
 > +			le64_to_cpu(perf->hr_cmds));
 >  	printf("  Host Read Blocks                               %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hr_blks));
 > +			le64_to_cpu(perf->hr_blks));
 >  	printf("  Average Read Size                              %20lf\n",
 >  			safe_div_fp((le64_to_cpu(perf->hr_blks)), (le64_to_cpu(perf->hr_cmds))));
 >  	printf("  Host Read Cache Hit Commands                   %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hr_ch_cmds));
 > +			le64_to_cpu(perf->hr_ch_cmds));
 >  	printf("  Host Read Cache Hit_Percentage                 %20"PRIu64"%%\n",
 >  			(uint64_t) calc_percent(le64_to_cpu(perf->hr_ch_cmds), le64_to_cpu(perf->hr_cmds)));
 >  	printf("  Host Read Cache Hit Blocks                     %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hr_ch_blks));
 > +			le64_to_cpu(perf->hr_ch_blks));
 >  	printf("  Average Read Cache Hit Size                    %20f\n",
 >  			safe_div_fp((le64_to_cpu(perf->hr_ch_blks)), (le64_to_cpu(perf->hr_ch_cmds))));
 >  	printf("  Host Read Commands Stalled                     %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hr_st_cmds));
 > +			le64_to_cpu(perf->hr_st_cmds));
 >  	printf("  Host Read Commands Stalled Percentage          %20"PRIu64"%%\n",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hr_st_cmds)), le64_to_cpu(perf->hr_cmds)));
 >  	printf("  Host Write Commands                            %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hw_cmds));
 > +			le64_to_cpu(perf->hw_cmds));
 >  	printf("  Host Write Blocks                              %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hw_blks));
 > +			le64_to_cpu(perf->hw_blks));
 >  	printf("  Average Write Size                             %20f\n",
 >  			safe_div_fp((le64_to_cpu(perf->hw_blks)), (le64_to_cpu(perf->hw_cmds))));
 >  	printf("  Host Write Odd Start Commands                  %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hw_os_cmds));
 > +			le64_to_cpu(perf->hw_os_cmds));
 >  	printf("  Host Write Odd Start Commands Percentage       %20"PRIu64"%%\n",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hw_os_cmds)), (le64_to_cpu(perf->hw_cmds))));
 >  	printf("  Host Write Odd End Commands                    %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->hw_oe_cmds));
 > +			le64_to_cpu(perf->hw_oe_cmds));
 >  	printf("  Host Write Odd End Commands Percentage         %20"PRIu64"%%\n",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hw_oe_cmds)), (le64_to_cpu((perf->hw_cmds)))));
 >  	printf("  Host Write Commands Stalled                    %20"PRIu64"\n",
 > -		(uint64_t)le64_to_cpu(perf->hw_st_cmds));
 > +		le64_to_cpu(perf->hw_st_cmds));
 >  	printf("  Host Write Commands Stalled Percentage         %20"PRIu64"%%\n",
 >  		(uint64_t)calc_percent((le64_to_cpu(perf->hw_st_cmds)), (le64_to_cpu(perf->hw_cmds))));
 >  	printf("  NAND Read Commands                             %20"PRIu64"\n",
 > -		(uint64_t)le64_to_cpu(perf->nr_cmds));
 > +		le64_to_cpu(perf->nr_cmds));
 >  	printf("  NAND Read Blocks Commands                      %20"PRIu64"\n",
 > -		(uint64_t)le64_to_cpu(perf->nr_blks));
 > +		le64_to_cpu(perf->nr_blks));
 >  	printf("  Average NAND Read Size                         %20f\n",
 >  		safe_div_fp((le64_to_cpu(perf->nr_blks)), (le64_to_cpu((perf->nr_cmds)))));
 >  	printf("  Nand Write Commands                            %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->nw_cmds));
 > +			le64_to_cpu(perf->nw_cmds));
 >  	printf("  NAND Write Blocks                              %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->nw_blks));
 > +			le64_to_cpu(perf->nw_blks));
 >  	printf("  Average NAND Write Size                        %20f\n",
 >  			safe_div_fp((le64_to_cpu(perf->nw_blks)), (le64_to_cpu(perf->nw_cmds))));
 >  	printf("  NAND Read Before Write                         %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->nrbw));
 > +			le64_to_cpu(perf->nrbw));
 >  }
 >
 >  static void wdc_print_log_json(struct wdc_ssd_perf_stats *perf)
 > @@ -2186,49 +2186,49 @@ static void wdc_print_log_json(struct wdc_ssd_perf_stats *perf)
 >  	json_object_add_value_int(root, "Average Read Size",
 >  			safe_div_fp((le64_to_cpu(perf->hr_blks)), (le64_to_cpu(perf->hr_cmds))));
 >  	json_object_add_value_int(root, "Host Read Cache Hit Commands",
 > -			(uint64_t)le64_to_cpu(perf->hr_ch_cmds));
 > +			le64_to_cpu(perf->hr_ch_cmds));
 >  	json_object_add_value_int(root, "Host Read Cache Hit Percentage",
 >  			(uint64_t) calc_percent(le64_to_cpu(perf->hr_ch_cmds), le64_to_cpu(perf->hr_cmds)));
 >  	json_object_add_value_int(root, "Host Read Cache Hit Blocks",
 > -			(uint64_t)le64_to_cpu(perf->hr_ch_blks));
 > +			le64_to_cpu(perf->hr_ch_blks));
 >  	json_object_add_value_int(root, "Average Read Cache Hit Size",
 >  			safe_div_fp((le64_to_cpu(perf->hr_ch_blks)), (le64_to_cpu(perf->hr_ch_cmds))));
 >  	json_object_add_value_int(root, "Host Read Commands Stalled",
 > -			(uint64_t)le64_to_cpu(perf->hr_st_cmds));
 > +			le64_to_cpu(perf->hr_st_cmds));
 >  	json_object_add_value_int(root, "Host Read Commands Stalled Percentage",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hr_st_cmds)), le64_to_cpu(perf->hr_cmds)));
 >  	json_object_add_value_int(root, "Host Write Commands",
 > -			(uint64_t)le64_to_cpu(perf->hw_cmds));
 > +			le64_to_cpu(perf->hw_cmds));
 >  	json_object_add_value_int(root, "Host Write Blocks",
 > -			(uint64_t)le64_to_cpu(perf->hw_blks));
 > +			le64_to_cpu(perf->hw_blks));
 >  	json_object_add_value_int(root, "Average Write Size",
 >  			safe_div_fp((le64_to_cpu(perf->hw_blks)), (le64_to_cpu(perf->hw_cmds))));
 >  	json_object_add_value_int(root, "Host Write Odd Start Commands",
 > -			(uint64_t)le64_to_cpu(perf->hw_os_cmds));
 > +			le64_to_cpu(perf->hw_os_cmds));
 >  	json_object_add_value_int(root, "Host Write Odd Start Commands Percentage",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hw_os_cmds)), (le64_to_cpu(perf->hw_cmds))));
 >  	json_object_add_value_int(root, "Host Write Odd End Commands",
 > -			(uint64_t)le64_to_cpu(perf->hw_oe_cmds));
 > +			le64_to_cpu(perf->hw_oe_cmds));
 >  	json_object_add_value_int(root, "Host Write Odd End Commands Percentage",
 >  			(uint64_t)calc_percent((le64_to_cpu(perf->hw_oe_cmds)), (le64_to_cpu((perf->hw_cmds)))));
 >  	json_object_add_value_int(root, "Host Write Commands Stalled",
 > -		(uint64_t)le64_to_cpu(perf->hw_st_cmds));
 > +		le64_to_cpu(perf->hw_st_cmds));
 >  	json_object_add_value_int(root, "Host Write Commands Stalled Percentage",
 >  		(uint64_t)calc_percent((le64_to_cpu(perf->hw_st_cmds)), (le64_to_cpu(perf->hw_cmds))));
 >  	json_object_add_value_int(root, "NAND Read Commands",
 > -		(uint64_t)le64_to_cpu(perf->nr_cmds));
 > +		le64_to_cpu(perf->nr_cmds));
 >  	json_object_add_value_int(root, "NAND Read Blocks Commands",
 > -		(uint64_t)le64_to_cpu(perf->nr_blks));
 > +		le64_to_cpu(perf->nr_blks));
 >  	json_object_add_value_int(root, "Average NAND Read Size",
 >  		safe_div_fp((le64_to_cpu(perf->nr_blks)), (le64_to_cpu((perf->nr_cmds)))));
 >  	json_object_add_value_int(root, "Nand Write Commands",
 > -			(uint64_t)le64_to_cpu(perf->nw_cmds));
 > +			le64_to_cpu(perf->nw_cmds));
 >  	json_object_add_value_int(root, "NAND Write Blocks",
 > -			(uint64_t)le64_to_cpu(perf->nw_blks));
 > +			le64_to_cpu(perf->nw_blks));
 >  	json_object_add_value_int(root, "Average NAND Write Size",
 >  			safe_div_fp((le64_to_cpu(perf->nw_blks)), (le64_to_cpu(perf->nw_cmds))));
 >  	json_object_add_value_int(root, "NAND Read Before Written",
 > -			(uint64_t)le64_to_cpu(perf->nrbw));
 > +			le64_to_cpu(perf->nrbw));
 >  	json_print_object(root, NULL);
 >  	printf("\n");
 >  	json_free_object(root);
 > @@ -2257,9 +2257,9 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 >
 >  	printf("  CA Log Page Performance Statistics :- \n");
 >  	printf("  NAND Bytes Written                             %20"PRIu64 "%20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->nand_bytes_wr_hi), (uint64_t)le64_to_cpu(perf->nand_bytes_wr_lo));
 > +			le64_to_cpu(perf->nand_bytes_wr_hi), le64_to_cpu(perf->nand_bytes_wr_lo));
 >  	printf("  NAND Bytes Read                                %20"PRIu64 "%20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->nand_bytes_rd_hi), (uint64_t)le64_to_cpu(perf->nand_bytes_rd_lo));
 > +			le64_to_cpu(perf->nand_bytes_rd_hi), le64_to_cpu(perf->nand_bytes_rd_lo));
 >
 >  	converted = le64_to_cpu(perf->nand_bad_block);
 >  	printf("  NAND Bad Block Count (Normalized)              %20"PRIu64"\n",
 > @@ -2268,9 +2268,9 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 >  			converted >> 16);
 >
 >  	printf("  Uncorrectable Read Count                       %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->uncorr_read_count));
 > +			le64_to_cpu(perf->uncorr_read_count));
 >  	printf("  Soft ECC Error Count                           %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->ecc_error_count));
 > +			le64_to_cpu(perf->ecc_error_count));
 >  	printf("  SSD End to End Detected Correction Count       %20"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(perf->ssd_detect_count));
 >  	printf("  SSD End to End Corrected Correction Count      %20"PRIu32"\n",
 > @@ -2282,7 +2282,7 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 >  	printf("  User Data Erase Counts Min                     %20"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(perf->data_erase_min));
 >  	printf("  Refresh Count                                  %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->refresh_count));
 > +			le64_to_cpu(perf->refresh_count));
 >
 >  	converted = le64_to_cpu(perf->program_fail);
 >  	printf("  Program Fail Count (Normalized)                %20"PRIu64"\n",
 > @@ -2307,7 +2307,7 @@ static void wdc_print_ca_log_normal(struct wdc_ssd_ca_perf_stats *perf)
 >  	printf("  Thermal Throttling Count                       %20"PRIu8"\n",
 >  			perf->thermal_throttle_count);
 >  	printf("  PCIe Correctable Error Count                   %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->pcie_corr_error));
 > +			le64_to_cpu(perf->pcie_corr_error));
 >  	printf("  Incomplete Shutdown Count                      %20"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(perf->incomplete_shutdown_count));
 >  	printf("  Percent Free Blocks                            %20"PRIu32"%%\n",
 > @@ -2411,13 +2411,13 @@ static void wdc_print_d0_log_normal(struct wdc_ssd_d0_smart_log *perf)
 >  	printf("  Lifetime Read Disturb Reallocation Events	 %20"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(perf->lifetime_read_disturb_realloc_events));
 >  	printf("  Lifetime NAND Writes	                         %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->lifetime_nand_writes));
 > +			le64_to_cpu(perf->lifetime_nand_writes));
 >  	printf("  Capacitor Health			 	 %20"PRIu32"%%\n",
 >  			(uint32_t)le32_to_cpu(perf->capacitor_health));
 >  	printf("  Lifetime User Writes	                         %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->lifetime_user_writes));
 > +			le64_to_cpu(perf->lifetime_user_writes));
 >  	printf("  Lifetime User Reads	                         %20"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(perf->lifetime_user_reads));
 > +			le64_to_cpu(perf->lifetime_user_reads));
 >  	printf("  Lifetime Thermal Throttle Activations	         %20"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(perf->lifetime_thermal_throttle_act));
 >  	printf("  Percentage of P/E Cycles Remaining             %20"PRIu32"%%\n",
 > @@ -3726,7 +3726,7 @@ static void wdc_print_nand_stats_normal(struct wdc_nand_stats *data)
 >  	printf("  Bad Block Count			         %"PRIu32"\n",
 >  			(uint32_t)le32_to_cpu(data->bad_block_count));
 >  	printf("  NAND XOR/RAID Recovery Trigger Events		 %"PRIu64"\n",
 > -			(uint64_t)le64_to_cpu(data->nand_rec_trigger_event));
 > +			le64_to_cpu(data->nand_rec_trigger_event));
 >  }
 >
 >  static void wdc_print_nand_stats_json(struct wdc_nand_stats *data)
 > --
 > 2.22.0.rc3
 >
 >
 > _______________________________________________
 > Linux-nvme mailing list
 > Linux-nvme at lists.infradead.org
 > http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-24 10:00   ` Mikhail Skorzhinskii
@ 2019-06-24 10:49     ` Minwoo Im
  2019-06-24 13:55     ` Bart Van Assche
  1 sibling, 0 replies; 20+ messages in thread
From: Minwoo Im @ 2019-06-24 10:49 UTC (permalink / raw)


I have the same issue on my VM, whose glibc version is lower, as Mikhail
said. Is this really superfluous now? :)

Thanks,

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-24 10:00   ` Mikhail Skorzhinskii
  2019-06-24 10:49     ` Minwoo Im
@ 2019-06-24 13:55     ` Bart Van Assche
  2019-06-24 18:46       ` Minwoo Im
  2019-06-24 20:51       ` Mikhail Skorzhinskii
  1 sibling, 2 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-24 13:55 UTC (permalink / raw)


On 6/24/19 3:00 AM, Mikhail Skorzhinskii wrote:
> I'm not completely sure that anyone is interested in fixing this, but this
> change breaks compilation on anything with glibc v2.24 or lower. This is
> due to the long-lasting bug #16458[1], which was fixed 2 years ago and
> landed in glibc v2.25.
> 
> I noticed it while compiling on rhel7/centos7, which uses glibc v2.17.

How about restoring RHEL 7 compatibility with the (untested) patch below?

Thanks,

Bart.

diff --git a/nvme.h b/nvme.h
index a149005a0425..ecac52d4d172 100644
--- a/nvme.h
+++ b/nvme.h
@@ -119,19 +119,31 @@ struct nvme_bar_cap {
  #define __force
  #endif

-#define cpu_to_le16(x) \
-	((__force __le16)htole16(x))
-#define cpu_to_le32(x) \
-	((__force __le32)htole32(x))
-#define cpu_to_le64(x) \
-	((__force __le64)htole64(x))
-
-#define le16_to_cpu(x) \
-	le16toh((__force __u16)(x))
-#define le32_to_cpu(x) \
-	le32toh((__force __u32)(x))
-#define le64_to_cpu(x) \
-	le64toh((__force __u64)(x))
+static inline __le16 cpu_to_le16(uint16_t x)
+{
+	return ((__force __le16)htole16(x));
+}
+static inline __le32 cpu_to_le32(uint32_t x)
+{
+	return ((__force __le32)htole32(x));
+}
+static inline __le64 cpu_to_le64(uint64_t x)
+{
+	return ((__force __le64)htole64(x));
+}
+
+static inline uint16_t le16_to_cpu(__le16 x)
+{
+	return le16toh((__force __u16)x);
+}
+static inline uint32_t le32_to_cpu(__le32 x)
+{
+	return le32toh((__force __u32)x);
+}
+static inline uint64_t le64_to_cpu(__le64 x)
+{
+	return le64toh((__force __u64)x);
+}

  #define MAX_LIST_ITEMS 256
  struct list_item {
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-24 13:55     ` Bart Van Assche
@ 2019-06-24 18:46       ` Minwoo Im
  2019-06-24 19:49         ` Bart Van Assche
  2019-06-24 20:51       ` Mikhail Skorzhinskii
  1 sibling, 1 reply; 20+ messages in thread
From: Minwoo Im @ 2019-06-24 18:46 UTC (permalink / raw)


On 19-06-24 06:55:21, Bart Van Assche wrote:
> On 6/24/19 3:00 AM, Mikhail Skorzhinskii wrote:
> > I'm not completely sure that anyone is interested in fixing this, but this
> > change breaks compilation on anything with glibc v2.24 or lower. This is
> > due to the long-lasting bug #16458[1], which was fixed 2 years ago and
> > landed in glibc v2.25.
> > 
> > I noticed it while compiling on rhel7/centos7, which uses glibc v2.17.
> 
> How about restoring RHEL 7 compatibility with the (untested) patch below?

Bart,

It works for me.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-24 18:46       ` Minwoo Im
@ 2019-06-24 19:49         ` Bart Van Assche
  0 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-06-24 19:49 UTC (permalink / raw)


On 6/24/19 11:46 AM, Minwoo Im wrote:
> On 19-06-24 06:55:21, Bart Van Assche wrote:
>> How about restoring RHEL 7 compatibility with the (untested) patch below?
> 
> Bart,
> 
> It works for me.

Thanks Minwoo for the testing. I will post a patch series soon.

Bart.


* [PATCH nvme-cli 01/13] Remove superfluous casts
  2019-06-24 13:55     ` Bart Van Assche
  2019-06-24 18:46       ` Minwoo Im
@ 2019-06-24 20:51       ` Mikhail Skorzhinskii
  1 sibling, 0 replies; 20+ messages in thread
From: Mikhail Skorzhinskii @ 2019-06-24 20:51 UTC (permalink / raw)


Bart Van Assche <bvanassche at acm.org> writes:
 > How about restoring RHEL 7 compatibility with the (untested) patch
 > below?

It is now tested and works fine: compilation on both rhel7 and
non-rhel systems, plus some basic functionality testing.

Thank you.

Mikhail Skorzhinskii


end of thread, other threads:[~2019-06-24 20:51 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-19 17:36 [PATCH nvme-cli 00/13] Static checker fixes and NVMe 1.4 support Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 01/13] Remove superfluous casts Bart Van Assche
2019-06-24 10:00   ` Mikhail Skorzhinskii
2019-06-24 10:49     ` Minwoo Im
2019-06-24 13:55     ` Bart Van Assche
2019-06-24 18:46       ` Minwoo Im
2019-06-24 19:49         ` Bart Van Assche
2019-06-24 20:51       ` Mikhail Skorzhinskii
2019-06-19 17:36 ` [PATCH nvme-cli 02/13] Use NULL instead of 0 where a pointer is expected Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 03/13] huawei: Declare local functions static Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 04/13] seagate: " Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 05/13] virtium: Declare local symbols static Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 06/13] lightnvm: Fix an endianness issue Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 07/13] virtium: " Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 08/13] wdc: Fix endianness bugs Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 09/13] Avoid using arrays with a variable length Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 10/13] nvme-cli: Rework the code for getting and setting NVMf properties Bart Van Assche
2019-06-19 17:36 ` [PATCH nvme-cli 11/13] nvme-cli: Skip properties that are not supported Bart Van Assche
2019-06-19 17:37 ` [PATCH nvme-cli 12/13] Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
2019-06-19 17:37 ` [PATCH nvme-cli 13/13] nvme-cli: Report the NVMe 1.4 NPWG, NPWA, NPDG, NPDA and NOWS fields Bart Van Assche
