From: Sourav Panda <souravpanda@google.com>
To: corbet@lwn.net, gregkh@linuxfoundation.org, rafael@kernel.org,
akpm@linux-foundation.org, mike.kravetz@oracle.com,
muchun.song@linux.dev, rppt@kernel.org, david@redhat.com,
rdunlap@infradead.org, chenlinxuan@uniontech.com,
yang.yang29@zte.com.cn, souravpanda@google.com,
tomas.mudrunka@gmail.com, bhelgaas@google.com,
ivan@cloudflare.com, pasha.tatashin@soleen.com,
yosryahmed@google.com, hannes@cmpxchg.org, shakeelb@google.com,
kirill.shutemov@linux.intel.com, wangkefeng.wang@huawei.com,
adobriyan@gmail.com, vbabka@suse.cz, Liam.Howlett@Oracle.com,
surenb@google.com, linux-kernel@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org,
linux-mm@kvack.org
Subject: [PATCH v1 0/1] Report per-page metadata information.
Date: Wed, 13 Sep 2023 10:29:59 -0700
Message-ID: <20230913173000.4016218-1-souravpanda@google.com>
Hi!
This patch adds a new per-node PageMetadata field to
/sys/devices/system/node/nodeN/meminfo and a global PageMetadata field
to /proc/meminfo. This information can be used by users to see how much
memory is being used by per-page metadata, which can vary depending on
build configuration, machine architecture, and system use.
Per-page metadata is the amount of memory that Linux needs in order to
manage memory at the page granularity. The majority of such memory is
used by "struct page" and "page_ext" data structures.
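As a rough illustration (not part of this patch), the dominant struct page cost can be estimated from /proc/meminfo, assuming 4 KiB pages and a 64-byte struct page as on x86-64. MemTotal excludes memory that is already reserved, so this is only a lower bound:

```shell
# Lower-bound estimate of struct page overhead: one 64-byte struct page
# per 4 KiB page (x86-64 sizes assumed; MemTotal is reported in kB).
awk '/^MemTotal:/ {
    printf "~%d kB of struct page metadata\n", ($2 / 4) * 64 / 1024
}' /proc/meminfo
```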
Background
----------
Kernel overhead observability is missing some of the largest
allocations during runtime, including vmemmap (struct pages) and
page_ext. This patch aims to address this problem by exporting a
new metric PageMetadata.
In contrast, the kernel does provide observability for boot-time memory
allocations. For example, the metric reserved_pages reflects the pages
set aside by the boot memory allocator and can be computed simply as
present_pages - managed_pages, both of which are exported in /proc/zoneinfo.
reserved_pages is primarily composed of struct page and page_ext
allocations.
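The boot-time figure above can be reproduced with a quick sketch over /proc/zoneinfo (4 KiB pages and the current field layout assumed):

```shell
# Sum present - managed over each node's zones to get the pages the
# boot memory allocator reserved (reported in kB, assuming 4 KiB pages).
awk '/^Node/         { gsub(",", "", $2); node = $2 }
     $1 == "present" { present[node] += $2 }
     $1 == "managed" { managed[node] += $2 }
     END { for (n in present)
             printf "node %s reserved_pages: %d kB\n",
                    n, (present[n] - managed[n]) * 4 }' /proc/zoneinfo
```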
What about the struct pages (allocated by the boot memory allocator) that
are freed during hugetlbfs allocations and then allocated again from the
buddy allocator once the hugetlbfs pages are freed?
How /proc/meminfo MemTotal changes: MemTotal does not include memblock
allocations but does include buddy allocations. During runtime, however,
memblock allocations can be shifted into buddy allocations, and they
thereby become part of MemTotal.
Once struct pages are allocated by the buddy allocator, the accounting of
their overhead is lost. We therefore export a new metric, which we shall
refer to as PageMetadata (exported per node). It also covers the struct
page and page_ext allocations made during runtime.
Results and analysis
--------------------
Memory model: Sparsemem-vmemmap
$ echo 1 > /proc/sys/vm/hugetlb_optimize_vmemmap
$ cat /proc/meminfo | grep MemTotal
MemTotal: 32918196 kB
$ cat /proc/meminfo | grep Meta
PageMetadata: 589824 kB
$ cat /sys/devices/system/node/node0/meminfo | grep Meta
Node 0 PageMetadata: 294912 kB
$ cat /sys/devices/system/node/node1/meminfo | grep Meta
Node 1 PageMetadata: 294912 kB
AFTER HUGETLBFS RESERVATION
$ echo 512 > /proc/sys/vm/nr_hugepages
$ cat /proc/meminfo | grep MemTotal
MemTotal: 32934580 kB
$ cat /proc/meminfo | grep Meta
PageMetadata: 575488 kB
$ cat /sys/devices/system/node/node0/meminfo | grep Meta
Node 0 PageMetadata: 287744 kB
$ cat /sys/devices/system/node/node1/meminfo | grep Meta
Node 1 PageMetadata: 287744 kB
AFTER FREEING HUGETLBFS RESERVATION
$ echo 0 > /proc/sys/vm/nr_hugepages
$ cat /proc/meminfo | grep MemTotal
MemTotal: 32934580 kB
$ cat /proc/meminfo | grep Meta
PageMetadata: 589824 kB
$ cat /sys/devices/system/node/node0/meminfo | grep Meta
Node 0 PageMetadata: 294912 kB
$ cat /sys/devices/system/node/node1/meminfo | grep Meta
Node 1 PageMetadata: 294912 kB
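On a kernel with this patch applied, the per-node values should sum to the global one. A hypothetical consistency check (the PageMetadata field is the one proposed here; on an unpatched kernel the script reports that it is absent):

```shell
# Compare global PageMetadata with the sum of the per-node values.
total=$(awk '/^PageMetadata:/ { print $2 }' /proc/meminfo)
if [ -z "$total" ]; then
    echo "PageMetadata not exported by this kernel"
else
    sum=0
    for f in /sys/devices/system/node/node*/meminfo; do
        # Node meminfo lines end in "... PageMetadata: <kB> kB".
        node_kb=$(awk '/PageMetadata:/ { print $(NF - 1) }' "$f")
        sum=$((sum + node_kb))
    done
    echo "global: ${total} kB, per-node sum: ${sum} kB"
fi
```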
Sourav Panda (1):
mm: report per-page metadata information
Documentation/filesystems/proc.rst | 3 +++
drivers/base/node.c | 2 ++
fs/proc/meminfo.c | 7 +++++++
include/linux/mmzone.h | 3 +++
include/linux/vmstat.h | 4 ++++
mm/hugetlb.c | 8 +++++++-
mm/hugetlb_vmemmap.c | 9 ++++++++-
mm/mm_init.c | 3 +++
mm/page_alloc.c | 1 +
mm/page_ext.c | 17 +++++++++++++----
mm/sparse-vmemmap.c | 3 +++
mm/sparse.c | 7 ++++++-
mm/vmstat.c | 21 +++++++++++++++++++++
13 files changed, 81 insertions(+), 7 deletions(-)
--
2.42.0.283.g2d96d420d3-goog