From: Miles Chen <miles.chen@mediatek.com>
To: Qian Cai <cai@lca.pw>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>, <linux-kernel@vger.kernel.org>,
	<linux-mm@kvack.org>, <linux-mediatek@lists.infradead.org>,
	<wsd_upstream@mediatek.com>, Miles Chen <miles.chen@mediatek.com>
Subject: Re: [PATCH] mm/page_owner: print largest memory consumer when OOM panic occurs
Date: Thu, 26 Dec 2019 12:01:14 +0800	[thread overview]
Message-ID: <20191226040114.8123-1-miles.chen@mediatek.com> (raw)
In-Reply-To: <1806FE86-9508-43BC-8E2F-3620CD243B14@lca.pw>

> Not sure if you have code you can share, but I can't imagine there are many places where a single call site in a driver does alloc_pages() over and over again. For example, there are only two alloc_pages() calls in intel-iommu.c, and one of them is only in the cold path. So even if the alloc_pgtable_page() one does leak, it is still up in the air whether your patch will catch it, because the leak may not come from a single call site, and it would need to leak a significant amount of memory to become the greatest consumer, which is just not realistic.

That is exactly what the patch targets -- memory leaks severe enough to cause an OOM kernel panic. In that case the greatest-consumer information does help, because the amount leaked is, by definition, large enough to trigger the panic.

I've already posted the real-world problems I have solved with this approach since May 2019.

  Miles


