From: Vishal Verma <vishal.l.verma@intel.com>
To: <linux-nvdimm@lists.01.org>
Cc: Ben Olson <ben.olson@intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>
Subject: [ndctl PATCH v2 05/10] libdaxctl: allow memblock_in_dev() to return an error
Date: Sat, 19 Oct 2019 21:23:27 -0600
Message-ID: <20191020032332.16776-6-vishal.l.verma@intel.com>
In-Reply-To: <20191020032332.16776-1-vishal.l.verma@intel.com>
With the MEM_FIND_ZONE operation, and the expectation that it will be
called from 'daxctl list' listings, it is possible that memblock_in_dev()
gets called without sufficient privileges. When this happens, it currently
just returns 'false'. That was acceptable when the only operations were
onlining/offlining (an actual failure would follow later). However, it is
not acceptable in the MEM_FIND_ZONE case, as it could yield a different
answer based on the privilege level.

Change memblock_in_dev() to return an 'int' instead of a 'bool' so that
error cases can be distinguished from the result of the actual address
range test.
Cc: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
---
daxctl/lib/libdaxctl.c | 32 +++++++++++++++++++-------------
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/daxctl/lib/libdaxctl.c b/daxctl/lib/libdaxctl.c
index 03f38f2..65a09c8 100644
--- a/daxctl/lib/libdaxctl.c
+++ b/daxctl/lib/libdaxctl.c
@@ -1211,40 +1211,42 @@ static int memblock_find_zone(struct daxctl_memory *mem, char *memblock,
return 0;
}
-static bool memblock_in_dev(struct daxctl_memory *mem, const char *memblock)
+static int memblock_in_dev(struct daxctl_memory *mem, const char *memblock)
{
const char *mem_base = "/sys/devices/system/memory/";
struct daxctl_dev *dev = daxctl_memory_get_dev(mem);
unsigned long long memblock_res, dev_start, dev_end;
const char *devname = daxctl_dev_get_devname(dev);
struct daxctl_ctx *ctx = daxctl_dev_get_ctx(dev);
+ int rc, path_len = mem->buf_len;
unsigned long memblock_size;
- int path_len = mem->buf_len;
char buf[SYSFS_ATTR_SIZE];
unsigned long phys_index;
char *path = mem->mem_buf;
if (snprintf(path, path_len, "%s/%s/phys_index",
mem_base, memblock) < 0)
- return false;
+ return -ENXIO;
- if (sysfs_read_attr(ctx, path, buf) == 0) {
+ rc = sysfs_read_attr(ctx, path, buf);
+ if (rc == 0) {
phys_index = strtoul(buf, NULL, 16);
if (phys_index == 0 || phys_index == ULONG_MAX) {
+ rc = -errno;
err(ctx, "%s: %s: Unable to determine phys_index: %s\n",
- devname, memblock, strerror(errno));
- return false;
+ devname, memblock, strerror(-rc));
+ return rc;
}
} else {
err(ctx, "%s: %s: Unable to determine phys_index: %s\n",
- devname, memblock, strerror(errno));
- return false;
+ devname, memblock, strerror(-rc));
+ return rc;
}
dev_start = daxctl_dev_get_resource(dev);
if (!dev_start) {
err(ctx, "%s: Unable to determine resource\n", devname);
- return false;
+ return -EACCES;
}
dev_end = dev_start + daxctl_dev_get_size(dev);
@@ -1252,14 +1254,14 @@ static bool memblock_in_dev(struct daxctl_memory *mem, const char *memblock)
if (!memblock_size) {
err(ctx, "%s: Unable to determine memory block size\n",
devname);
- return false;
+ return -ENXIO;
}
memblock_res = phys_index * memblock_size;
if (memblock_res >= dev_start && memblock_res <= dev_end)
- return true;
+ return 1;
- return false;
+ return 0;
}
static int op_for_one_memblock(struct daxctl_memory *mem, char *memblock,
@@ -1317,8 +1319,12 @@ static int daxctl_memory_op(struct daxctl_memory *mem, enum memory_op op)
errno = 0;
while ((de = readdir(node_dir)) != NULL) {
if (strncmp(de->d_name, "memory", 6) == 0) {
- if (!memblock_in_dev(mem, de->d_name))
+ rc = memblock_in_dev(mem, de->d_name);
+ if (rc < 0)
+ goto out_dir;
+ if (rc == 0) /* memblock not in dev */
continue;
+ /* memblock is in dev, perform op */
rc = op_for_one_memblock(mem, de->d_name, op,
&status_flags);
if (rc < 0)
--
2.20.1