linux-kernel.vger.kernel.org archive mirror
From: Rafael Aquini <aquini@redhat.com>
To: Waiman Long <longman@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Petr Mladek <pmladek@suse.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>,
	Mike Rapoport <rppt@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Roman Gushchin <guro@fb.com>
Subject: Re: [PATCH v4 0/4] mm/page_owner: Extend page_owner to show memcg information
Date: Wed, 2 Feb 2022 18:06:51 -0500	[thread overview]
Message-ID: <YfsOi38nXkyCrYam@optiplex-fbsd> (raw)
In-Reply-To: <20220202203036.744010-1-longman@redhat.com>

On Wed, Feb 02, 2022 at 03:30:32PM -0500, Waiman Long wrote:
>  v4:
>   - Take rcu_read_lock() when the memcg is being accessed, as suggested
>     by Michal (a simplified sketch of the access pattern follows this
>     changelog block).
>   - Make print_page_owner_memcg() return the new offset into the buffer
>     and put the CONFIG_MEMCG block inside it, as suggested by Mike.
>   - Directly use TASK_COMM_LEN as the length of the name buffer, as
>     suggested by Roman.
> 
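> For reference, the memcg access added in v4 follows roughly this pattern
> (a simplified sketch with a made-up helper name, not the exact patch
> code):
>
> 	static void report_offline_memcg(struct page *page)
> 	{
> 		struct mem_cgroup *memcg;
>
> 		/* the memcg can go away once its last reference is dropped,
> 		 * so only dereference it inside an RCU read-side section */
> 		rcu_read_lock();
> 		memcg = page_memcg_check(page);	/* may be NULL */
> 		if (memcg && !(memcg->css.flags & CSS_ONLINE))
> 			pr_info("page is charged to an offline memcg\n");
> 		rcu_read_unlock();
> 	}
>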
>  v3:
>   - Add unlikely() to patch 1 and clarify that -1 will not be returned.
>   - Use a helper function to print out memcg information in patch 3.
>   - Add a new patch 4 to store the task command name in the page_owner
>     structure.
> 
>  v2:
>   - Remove the SNPRINTF() macro as suggested by Ira and use scnprintf()
>     instead, removing some buffer overrun checks (see the example below
>     this changelog).
>   - Add a patch to optimize vscnprintf() with a size parameter of 0.
> 
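> The reason the overrun checks can go away: scnprintf() returns the number
> of characters actually written (never more than size - 1, excluding the
> terminating NUL), so repeated appends cannot step past the end of the
> buffer. A minimal illustration with made-up values:
>
> 	char buf[128];
> 	int len = 0;
> 	int pid = 1234;
> 	const char *comm = "podman";
>
> 	/* each call gets the remaining space; a size of 0 is harmless and
> 	 * simply returns 0, unlike snprintf() whose return value can
> 	 * exceed the remaining space and needs an explicit clamp */
> 	len += scnprintf(buf + len, sizeof(buf) - len, "pid %d", pid);
> 	len += scnprintf(buf + len, sizeof(buf) - len, " (%s)", comm);
>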
> While debugging the constant increase in percpu memory consumption on
> a system that spawned a large number of containers, it was found that a
> lot of offline mem_cgroup structures remained in place without being
> freed. Further investigation indicated that those mem_cgroup structures
> were pinned by some pages.
> 
> In order to find out what those pages are, the existing page_owner
> debugging tool is extended to show memory cgroup information and whether
> those memcgs are offline or not. With the enhanced page_owner tool,
> the following is a typical page that pinned the mem_cgroup structure
> in my test case:
> 
> Page allocated via order 0, mask 0x1100cca(GFP_HIGHUSER_MOVABLE), pid 162970 (podman), ts 1097761405537 ns, free_ts 1097760838089 ns
> PFN 1925700 type Movable Block 3761 type Movable Flags 0x17ffffc00c001c(uptodate|dirty|lru|reclaim|swapbacked|node=0|zone=2|lastcpupid=0x1fffff)
>  prep_new_page+0xac/0xe0
>  get_page_from_freelist+0x1327/0x14d0
>  __alloc_pages+0x191/0x340
>  alloc_pages_vma+0x84/0x250
>  shmem_alloc_page+0x3f/0x90
>  shmem_alloc_and_acct_page+0x76/0x1c0
>  shmem_getpage_gfp+0x281/0x940
>  shmem_write_begin+0x36/0xe0
>  generic_perform_write+0xed/0x1d0
>  __generic_file_write_iter+0xdc/0x1b0
>  generic_file_write_iter+0x5d/0xb0
>  new_sync_write+0x11f/0x1b0
>  vfs_write+0x1ba/0x2a0
>  ksys_write+0x59/0xd0
>  do_syscall_64+0x37/0x80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> Charged to offline memcg libpod-conmon-15e4f9c758422306b73b2dd99f9d50a5ea53cbb16b4a13a2c2308a4253cc0ec8.
> 
> So the page was not freed because it was part of a shmem segment. That
> is useful information that can help users diagnose similar problems.
> 
> With cgroup v1, /proc/cgroups can be read to find out the total number
> of memory cgroups (online + offline). With cgroup v2, the cgroup.stat
> file of the root cgroup can be read to find the number of dying cgroups
> (most likely pinned by dying memcgs); an illustrative excerpt follows.
> 
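> On a cgroup v2 system the root cgroup.stat (typically
> /sys/fs/cgroup/cgroup.stat) reports the counts directly; illustrative
> contents with made-up numbers:
>
> 	nr_descendants 412
> 	nr_dying_descendants 3759
>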
> The page_owner feature is not supposed to be enabled on production
> systems due to its memory overhead. However, if it is suspected that
> dying memcgs are increasing over time, a test environment with page_owner
> enabled can then be set up with an appropriate workload for further
> analysis of what may be causing the increasing number of dying memcgs.
> 
> Waiman Long (4):
>   lib/vsprintf: Avoid redundant work with 0 size
>   mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check
>   mm/page_owner: Print memcg information
>   mm/page_owner: Record task command name
> 
>  lib/vsprintf.c  |  8 +++---
>  mm/page_owner.c | 70 ++++++++++++++++++++++++++++++++++++++-----------
>  2 files changed, 60 insertions(+), 18 deletions(-)
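>
> Patch 1 boils down to an early exit in vscnprintf(): with a size of 0
> nothing can be written, so the full vsnprintf() formatting pass can be
> skipped entirely. Roughly (a sketch of the idea, not the exact diff):
>
> 	int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
> 	{
> 		int i;
>
> 		if (unlikely(!size))	/* nothing fits, skip formatting */
> 			return 0;
>
> 		i = vsnprintf(buf, size, fmt, args);
> 		if (likely(i < size))
> 			return i;
> 		return size - 1;	/* output was truncated */
> 	}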
> 
> -- 
> 2.27.0
>

Thank you, Waiman.

Acked-by: Rafael Aquini <aquini@redhat.com>



Thread overview: 39+ messages
2022-01-31 19:23 [PATCH v3 0/4] mm/page_owner: Extend page_owner to show memcg information Waiman Long
2022-01-31 19:23 ` [PATCH v3 1/4] lib/vsprintf: Avoid redundant work with 0 size Waiman Long
2022-01-31 20:42   ` Mike Rapoport
2022-01-31 19:23 ` [PATCH v3 2/4] mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check Waiman Long
2022-01-31 20:38   ` Roman Gushchin
2022-01-31 20:43   ` Mike Rapoport
2022-01-31 19:23 ` [PATCH v3 3/4] mm/page_owner: Print memcg information Waiman Long
2022-01-31 20:51   ` Mike Rapoport
2022-01-31 21:43     ` Waiman Long
2022-02-01  6:23       ` Mike Rapoport
2022-01-31 20:51   ` Roman Gushchin
2022-02-01 10:54   ` Michal Hocko
2022-02-01 17:04     ` Waiman Long
2022-02-02  8:49       ` Michal Hocko
2022-02-02 16:12         ` Waiman Long
2022-01-31 19:23 ` [PATCH v3 4/4] mm/page_owner: Record task command name Waiman Long
2022-01-31 20:54   ` Roman Gushchin
2022-01-31 21:46     ` Waiman Long
2022-01-31 22:03   ` [PATCH v4 " Waiman Long
2022-02-01 15:28     ` Michal Hocko
2022-02-02 16:53       ` Waiman Long
2022-02-03 12:10         ` Vlastimil Babka
2022-02-03 18:53           ` Waiman Long
2022-02-02 20:30   ` [PATCH v4 0/4] mm/page_owner: Extend page_owner to show memcg information Waiman Long
2022-02-02 23:06     ` Rafael Aquini [this message]
2022-02-02 20:30   ` [PATCH v4 1/4] lib/vsprintf: Avoid redundant work with 0 size Waiman Long
2022-02-08 10:08     ` Petr Mladek
2022-02-02 20:30   ` [PATCH v4 2/4] mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check Waiman Long
2022-02-03 15:46     ` Vlastimil Babka
2022-02-03 18:49       ` Waiman Long
2022-02-08 10:51         ` Petr Mladek
2022-02-02 20:30   ` [PATCH v4 3/4] mm/page_owner: Print memcg information Waiman Long
2022-02-03  6:53     ` Mike Rapoport
2022-02-03 12:46     ` Michal Hocko
2022-02-03 19:03       ` Waiman Long
2022-02-07 17:20         ` Michal Hocko
2022-02-07 19:09           ` Andrew Morton
2022-02-07 19:33             ` Waiman Long
2022-02-02 20:30   ` [PATCH v4 4/4] mm/page_owner: Record task command name Waiman Long
