From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
Alexey Dobriyan <adobriyan@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Oscar Salvador <osalvador@suse.de>,
Michal Hocko <mhocko@kernel.org>,
Stephen Rothwell <sfr@canb.auug.org.au>,
linux-fsdevel@vger.kernel.org
Subject: [PATCH v1 2/3] fs/proc/page.c: allow inspection of last section and fix end detection
Date: Mon, 9 Dec 2019 18:48:35 +0100
Message-ID: <20191209174836.11063-3-david@redhat.com>
In-Reply-To: <20191209174836.11063-1-david@redhat.com>

If max_pfn does not fall onto a section boundary, it is possible to
inspect PFNs below max_pfn, and even PFNs above max_pfn, but max_pfn
itself cannot be inspected. We can have a valid (and online) memmap at
and above max_pfn if max_pfn is not aligned to a section boundary: the
whole early section has a memmap and is marked online. Being able to
inspect the state of these PFNs is valuable for debugging, especially
because max_pfn can change on memory hotplug and expose these memmaps.
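
To illustrate (assuming the x86-64 defaults of 4 KiB pages and 128 MiB
sections, i.e. PAGES_PER_SECTION = 32768 = 0x8000): with memory ending
at PFN 0x144000, max_pfn sits mid-section, because 0x144000 is not a
multiple of 0x8000. The whole section [0x140000, 0x148000) still has a
memmap, which is exactly what rounding up covers:

	/* round_up(0x144000, 0x8000) == 0x148000 in this example */
	const unsigned long max_dump_pfn = round_up(max_pfn, PAGES_PER_SECTION);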

Also, querying page flags via "./page-types -r -a 0x144001,"
(tools/vm/page-types.c) inside an x86-64 guest with 4160 MB of memory
under QEMU results in an (almost) endless loop in user space, because
the end is not detected properly when starting after max_pfn.
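
The culprit is an unsigned underflow in the old end check: once src is
past max_pfn * KPMSIZE, the subtraction wraps around and count is
effectively never clamped. A minimal sketch of the failure, with
hypothetical values and assuming KPMSIZE == 8:

	size_t count = 8;				/* one 8-byte entry */
	unsigned long src = (max_pfn + 1) * KPMSIZE;	/* starts after max_pfn */

	/* (max_pfn * KPMSIZE) - src underflows to a huge unsigned value ... */
	count = min_t(size_t, count, (max_pfn * KPMSIZE) - src);
	/* ... so count stays 8 and the read succeeds instead of returning 0 */

so user space keeps seeing successful reads instead of the 0 (EOF)
return it relies on to stop.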

Instead, let's allow inspecting all pages in the highest section and
return 0 directly if we try to access pages above that section. While
at it, check the count before adjusting it, to avoid masking user
errors.
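
For completeness, a minimal user-space check of the intended behavior
(a sketch, not part of the patch; max_pfn and the section bound are the
hypothetical values from the example above, KPMSIZE == 8):

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const unsigned long long max_pfn = 0x144000;	  /* hypothetical */
		const unsigned long long max_dump_pfn = 0x148000; /* section end */
		uint64_t cnt;
		int fd = open("/proc/kpagecount", O_RDONLY);

		if (fd < 0)
			return 1;
		/* Patched: 8 bytes, since max_pfn lies in the last section ... */
		printf("at max_pfn:   %zd\n", pread(fd, &cnt, 8, max_pfn * 8));
		/* ... and 0 (EOF) for anything at or above max_dump_pfn. */
		printf("past section: %zd\n", pread(fd, &cnt, 8, max_dump_pfn * 8));
		close(fd);
		return 0;
	}
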
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: David Hildenbrand <david@redhat.com>
---
fs/proc/page.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/fs/proc/page.c b/fs/proc/page.c
index e40dbfe1168e..da01d3d9999a 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -29,6 +29,7 @@
 static ssize_t kpagecount_read(struct file *file, char __user *buf,
 			       size_t count, loff_t *ppos)
 {
+	const unsigned long max_dump_pfn = round_up(max_pfn, PAGES_PER_SECTION);
 	u64 __user *out = (u64 __user *)buf;
 	struct page *ppage;
 	unsigned long src = *ppos;
@@ -37,9 +38,11 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
 	u64 pcount;
 
 	pfn = src / KPMSIZE;
-	count = min_t(size_t, count, (max_pfn * KPMSIZE) - src);
 	if (src & KPMMASK || count & KPMMASK)
 		return -EINVAL;
+	if (src >= max_dump_pfn * KPMSIZE)
+		return 0;
+	count = min_t(unsigned long, count, (max_dump_pfn * KPMSIZE) - src);
 
 	while (count > 0) {
 		/*
@@ -208,6 +211,7 @@ u64 stable_page_flags(struct page *page)
 static ssize_t kpageflags_read(struct file *file, char __user *buf,
 			       size_t count, loff_t *ppos)
 {
+	const unsigned long max_dump_pfn = round_up(max_pfn, PAGES_PER_SECTION);
 	u64 __user *out = (u64 __user *)buf;
 	struct page *ppage;
 	unsigned long src = *ppos;
@@ -215,9 +219,11 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
 	ssize_t ret = 0;
 
 	pfn = src / KPMSIZE;
-	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
 	if (src & KPMMASK || count & KPMMASK)
 		return -EINVAL;
+	if (src >= max_dump_pfn * KPMSIZE)
+		return 0;
+	count = min_t(unsigned long, count, (max_dump_pfn * KPMSIZE) - src);
 
 	while (count > 0) {
 		/*
@@ -253,6 +259,7 @@ static const struct file_operations proc_kpageflags_operations = {
 static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
 				size_t count, loff_t *ppos)
 {
+	const unsigned long max_dump_pfn = round_up(max_pfn, PAGES_PER_SECTION);
 	u64 __user *out = (u64 __user *)buf;
 	struct page *ppage;
 	unsigned long src = *ppos;
@@ -261,9 +268,11 @@ static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
 	u64 ino;
 
 	pfn = src / KPMSIZE;
-	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
 	if (src & KPMMASK || count & KPMMASK)
 		return -EINVAL;
+	if (src >= max_dump_pfn * KPMSIZE)
+		return 0;
+	count = min_t(unsigned long, count, (max_dump_pfn * KPMSIZE) - src);
 
 	while (count > 0) {
 		/*
--
2.21.0
Thread overview: 10+ messages
2019-12-09 17:48 [PATCH v1 0/3] mm: fix max_pfn not falling on section boundary David Hildenbrand
2019-12-09 17:48 ` [PATCH v1 1/3] mm: fix uninitialized memmaps on a partially populated last section David Hildenbrand
2019-12-09 21:15 ` Daniel Jordan
2019-12-10 10:11 ` David Hildenbrand
2019-12-10 22:18 ` Daniel Jordan
2019-12-09 17:48 ` David Hildenbrand [this message]
2019-12-10 0:46 ` [PATCH v1 2/3] fs/proc/page.c: allow inspection of last section and fix end detection kbuild test robot
2019-12-10 1:04 ` kbuild test robot
2019-12-10 10:53 ` David Hildenbrand
2019-12-09 17:48 ` [PATCH v1 3/3] mm: initialize memmap of unavailable memory directly David Hildenbrand