From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Michal Hocko <mhocko@kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v2 2/4] mm, page_owner: record page owner for each subpage
Date: Tue, 24 Sep 2019 18:16:52 +0300
Message-ID: <20190924151652.bjtjgj7brflrgcuv@box>
In-Reply-To: <ee93f01a-9f34-d237-9a34-33314f4298bc@suse.cz>

On Tue, Sep 24, 2019 at 05:10:59PM +0200, Vlastimil Babka wrote:
> On 9/24/19 1:31 PM, Kirill A. Shutemov wrote:
> > On Tue, Aug 20, 2019 at 03:18:26PM +0200, Vlastimil Babka wrote:
> >> Currently, page owner info is only recorded for the first page of a high-order
> >> allocation, and copied to tail pages in the event of a split page. With the
> >> plan to keep previous owner info after freeing the page, it would be beneficial
> >> to record page owner for each subpage upon allocation. This increases the
> >> overhead for high orders, but that should be acceptable for a debugging option.
> >>
> >> The order stored for each subpage is the order of the whole allocation. This
> >> makes it possible to calculate the "head" pfn and to recognize "tail" pages
> >> (quoted because not all high-order allocations are compound pages with true
> >> head and tail pages). When reading the page_owner debugfs file, keep skipping
> >> the "tail" pages so that stats gathered by existing scripts don't get inflated.
> >>
> >> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> >> ---
> >>  mm/page_owner.c | 40 ++++++++++++++++++++++++++++------------
> >>  1 file changed, 28 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/mm/page_owner.c b/mm/page_owner.c
> >> index addcbb2ae4e4..813fcb70547b 100644
> >> --- a/mm/page_owner.c
> >> +++ b/mm/page_owner.c
> >> @@ -154,18 +154,23 @@ static noinline depot_stack_handle_t save_stack(gfp_t flags)
> >>  	return handle;
> >>  }
> >>  
> >> -static inline void __set_page_owner_handle(struct page_ext *page_ext,
> >> -	depot_stack_handle_t handle, unsigned int order, gfp_t gfp_mask)
> >> +static inline void __set_page_owner_handle(struct page *page,
> >> +	struct page_ext *page_ext, depot_stack_handle_t handle,
> >> +	unsigned int order, gfp_t gfp_mask)
> >>  {
> >>  	struct page_owner *page_owner;
> >> +	int i;
> >>  
> >> -	page_owner = get_page_owner(page_ext);
> >> -	page_owner->handle = handle;
> >> -	page_owner->order = order;
> >> -	page_owner->gfp_mask = gfp_mask;
> >> -	page_owner->last_migrate_reason = -1;
> >> +	for (i = 0; i < (1 << order); i++) {
> >> +		page_owner = get_page_owner(page_ext);
> >> +		page_owner->handle = handle;
> >> +		page_owner->order = order;
> >> +		page_owner->gfp_mask = gfp_mask;
> >> +		page_owner->last_migrate_reason = -1;
> >> +		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
> >>  
> >> -	__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
> >> +		page_ext = lookup_page_ext(page + i);
> > 
> > Isn't it off-by-one? You are calculating page_ext for the next page,
> > right?
> 
> You're right, thanks!
> 
> > And can't we just do page_ext++ here instead?
> 
> Unfortunately no, as that implies a stride of sizeof(struct page_ext),
> which only declares 'unsigned long flags'; the rest of each entry's size
> is determined at runtime.
> Perhaps I could add a wrapper named e.g. page_ext_next() that would use
> get_entry_size() internally and hide the necessary casts to void * and back?

Yeah, that looks less costly than calling lookup_page_ext() on each
iteration. And it looks like it could be inlined if we make 'extra_mem'
visible (under a different name) outside page_ext.c.
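
For illustration, a minimal sketch of what that wrapper could look like,
assuming page_ext.c exports its per-entry size as, say, 'page_ext_size'
(the variable name and this exact shape are assumptions on my side, not
something already in the patch):

	/*
	 * Hypothetical export from mm/page_ext.c: the size of one entry,
	 * i.e. sizeof(struct page_ext) plus the runtime-determined extra
	 * space (what page_ext.c currently tracks as 'extra_mem').
	 */
	extern unsigned long page_ext_size;

	/*
	 * Step to the page_ext entry of the next page. A plain page_ext++
	 * would advance by sizeof(struct page_ext) only, which is too short
	 * because each entry carries extra runtime-sized data after it.
	 */
	static inline struct page_ext *page_ext_next(struct page_ext *curr)
	{
		void *next = curr;

		next += page_ext_size;
		return next;
	}

The loop in __set_page_owner_handle() could then advance the entry once
per iteration, which also gets rid of the off-by-one noted above:

	for (i = 0; i < (1 << order); i++) {
		page_owner = get_page_owner(page_ext);
		page_owner->handle = handle;
		page_owner->order = order;
		page_owner->gfp_mask = gfp_mask;
		page_owner->last_migrate_reason = -1;
		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);

		page_ext = page_ext_next(page_ext);
	}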

-- 
 Kirill A. Shutemov

Thread overview: 14+ messages
2019-08-20 13:18 [PATCH v2 0/4] debug_pagealloc improvements through page_owner Vlastimil Babka
2019-08-20 13:18 ` [PATCH v2 1/4] mm, page_owner: handle THP splits correctly Vlastimil Babka
2019-08-20 13:18 ` [PATCH v2 2/4] mm, page_owner: record page owner for each subpage Vlastimil Babka
2019-09-24 11:31   ` Kirill A. Shutemov
2019-09-24 15:10     ` Vlastimil Babka
2019-09-24 15:16       ` Kirill A. Shutemov [this message]
2019-08-20 13:18 ` [PATCH v2 3/4] mm, page_owner: keep owner info when freeing the page Vlastimil Babka
2019-09-24 11:35   ` Kirill A. Shutemov
2019-08-20 13:18 ` [PATCH v2 4/4] mm, page_owner, debug_pagealloc: save and dump freeing stack trace Vlastimil Babka
2019-09-24 11:42   ` Kirill A. Shutemov
2019-09-24 15:15     ` Vlastimil Babka
2019-09-24 15:24       ` Kirill A. Shutemov
2019-08-22 23:03 ` [PATCH v2 0/4] debug_pagealloc improvements through page_owner Andrew Morton
2019-09-20 23:34   ` Andrew Morton
