linux-kernel.vger.kernel.org archive mirror
From: "AL13N" <alien@rmail.be>
To: linux-kernel@vger.kernel.org
Cc: "Vlastimil Babka" <vbabka@suse.cz>
Subject: Re: Memory leaks on atom-based boards?
Date: Mon, 10 Nov 2014 00:19:56 -0000	[thread overview]
Message-ID: <bd46f2de6255dce560c42bac059d78a3.squirrel@mail.rmail.be> (raw)
In-Reply-To: <545FE59B.3020702@suse.cz>

> On 11/09/2014 05:38 PM, AL13N wrote:
>>> On 10/27/2014 07:44 PM, AL13N wrote:
>>>
>>> Hi, this does look like a kernel memory leak. There was recently a
>>> known
>>> one fixed by patch from https://lkml.org/lkml/2014/10/15/447 which made
>>> it to 3.18-rc3 and should be backported to stable kernels 3.8+ soon.
>>> You would recognize if this is the fix for you by checking the
>>> thp_zero_page_alloc value in /proc/vmstat. Value X > 1 basically means
>>> that X*2 MB memory is leaked.
>>> You say in the serverfault post that 3.17.2 helped, but the fix is not
>>> in 3.17.2... but it could be just that the circumstances changed and
>>> THP zero pages are no longer freed and reallocated.
>>> So if you want to be sure, I would suggest trying again a version where
>>> the problem appeared on your system, and checking the
>>> thp_zero_page_alloc. Perhaps you'll see a >1 value even on 3.17.2,
>>> which means some leak did occur there as well, but maybe not so severe.
>>
>>
>> I was going to tell you, but I was waiting until I was sure: 3.17.2
>> did indeed fix it. Where I used to hit OOM after 3, maybe 4 days (and
>> had for at least 2 months), I'm now up more than 4 days and
>> MemAvailable is still high enough, at about 3.5GB, whereas before it
>> would dwindle to 0 (at about 1GB/day).
>>
>> Well, it reads 0 on 3.17.2... so... I guess not? I'll keep this
>> value under observation...
>
> Hm, 0 sounds like nobody was allocating transparent huge pages at all.
> What about the other thp_* stats?

thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
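
For anyone following along, these counters can be read directly from
/proc/vmstat. Below is a minimal sketch in Python; the parsing is
factored out so it also works on a saved snapshot, and the X*2 MB rule
is just the heuristic Vlastimil described above (the helper names are
my own, not from any kernel tool):

```python
# Sketch: parse thp_* counters from /proc/vmstat-style text.

def parse_thp_counters(vmstat_text):
    """Return a dict of the thp_* counters found in vmstat-style text."""
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name.startswith("thp_"):
            counters[name] = int(value)
    return counters

def zero_page_leak_mb(counters):
    """Per the thread: thp_zero_page_alloc == X with X > 1 suggests
    roughly X * 2 MB leaked (one huge zero page should have sufficed)."""
    x = counters.get("thp_zero_page_alloc", 0)
    return x * 2 if x > 1 else 0

if __name__ == "__main__":
    try:
        with open("/proc/vmstat") as f:
            thp = parse_thp_counters(f.read())
    except OSError:
        thp = {}  # no procfs (not Linux); skip quietly
    for name, value in sorted(thp.items()):
        print(name, value)
    print("suspected zero-page leak:", zero_page_leak_mb(thp), "MB")
```

A value of 1 is the normal steady state (the zero huge page allocated
once); only repeated allocations hint at the leak fixed in 3.18-rc3.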


I guess on 3.17.2 something is preventing THP allocation entirely?
Either that, or it was a different issue after all...

>>>>  - How can I find out what is allocating all this memory?
>>>
>>> There's no simple way, unfortunately. Checking the kpageflags /proc
>>> file might help. IIRC there used to be a patch in the -mm tree to
>>> store who allocated what page, but it might be bitrotten.
>>
>>
>> I checked what was in kpageflags (and kpagecount) but it's all some
>> kind of binary data...
>>
>> Do I need some tool to interpret these values?
>
> There's tools/vm/page-types.c in the kernel sources which can read
> kpageflags, but not kpagecount...

good to know...
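
For the archives: both files are documented in
Documentation/vm/pagemap.txt as one 64-bit word per physical page
frame (PFN): a reference count in kpagecount, a flag bitmask in
kpageflags. A rough sketch of decoding them by hand (the flag bit
numbers are taken from that document; the function names are my own,
and reading the files needs root):

```python
# Sketch: decode /proc/kpagecount and /proc/kpageflags, which hold one
# native-endian 64-bit word per PFN (see Documentation/vm/pagemap.txt).
import struct

# A few of the documented kpageflags bit numbers.
KPF = {"SLAB": 7, "BUDDY": 10, "ANON": 12, "THP": 22}

def decode_u64_records(buf):
    """Split a raw kpagecount/kpageflags buffer into one int per PFN."""
    n = len(buf) // 8
    return list(struct.unpack("=%dQ" % n, buf[:n * 8]))

def flags_set(word):
    """Return the names of the known flag bits set in a kpageflags word."""
    return sorted(name for name, bit in KPF.items() if word >> bit & 1)

if __name__ == "__main__":
    try:
        # First 1024 PFNs; needs root, skip quietly elsewhere.
        with open("/proc/kpageflags", "rb") as f:
            words = decode_u64_records(f.read(8 * 1024))
        slab = sum(1 for w in words if w >> KPF["SLAB"] & 1)
        print("SLAB pages in first 1024 PFNs:", slab)
    except OSError:
        pass
```

This is essentially what tools/vm/page-types.c does, with far fewer
flags; for real diagnosis the in-tree tool is the better choice.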


Thread overview: 7+ messages
2014-10-27 18:44 Memory leaks on atom-based boards? AL13N
2014-11-09 11:56 ` Vlastimil Babka
2014-11-09 16:38   ` AL13N
2014-11-09 22:07     ` Vlastimil Babka
2014-11-10  0:19       ` AL13N [this message]
2014-11-21 20:06 ` Pavel Machek
2014-11-21 21:08   ` AL13N
