From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@kernel.org>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
l.roehrs@profihost.ag, cgroups@vger.kernel.org,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: lot of MemAvailable but falling cache and raising PSI
Date: Mon, 9 Sep 2019 14:31:04 +0200 [thread overview]
Message-ID: <79e3af38-0b85-51aa-1737-078fab076a87@profihost.ag> (raw)
In-Reply-To: <ec45c3f9-5649-fb39-ecf8-6ca7620a6e2a@suse.cz>
Am 09.09.19 um 14:21 schrieb Vlastimil Babka:
> On 9/9/19 2:09 PM, Stefan Priebe - Profihost AG wrote:
>>
>> Am 09.09.19 um 13:49 schrieb Vlastimil Babka:
>>> On 9/9/19 10:54 AM, Stefan Priebe - Profihost AG wrote:
>>>>> Do you have more snapshots of /proc/vmstat as suggested by
>>>>> Vlastimil and
>>>>> me earlier in this thread? Seeing the overall progress would tell us
>>>>> much more than before and after. Or have I missed this data?
>>>>
>>>> I needed to wait until today to catch such a situation again, but
>>>> from what I can tell it is very clear that MemFree is low and then
>>>> the kernel starts to drop the caches.
>>>>
>>>> Attached you'll find two log files.
>>>
>>> Thanks, what about my other requests/suggestions from earlier?
>>
>> Sorry, I missed your email.
>>
>>> 1. What does /proc/pagetypeinfo look like?
>>
>> # cat /proc/pagetypeinfo
>> Page block order: 9
>> Pages per block: 512
>
> Looks like it might be fragmented, but was that snapshot taken in the
> situation where there's free memory and the system still drops cache?
No, this one is from "now", where no pressure is recorded, MemFree is
at 3G, and cache is also at 3G.
>>> 2. Could you also try if the bad trend stops after you execute:
>>> echo never > /sys/kernel/mm/transparent_hugepage/defrag
>>> and report the result?
>>
>> It's pretty difficult to catch those moments. Is it OK to set the
>> value now and monitor whether it happens again?
>
> Well if it doesn't happen again after changing that setting, it would
> definitely point at THP interactions.
OK, I have set it to never.
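For reference, a minimal shell sketch of applying and verifying that suggestion (the sysfs path is the one from the suggestion above; `active_choice` is just a local helper name for this sketch, and the write itself needs root):

```shell
# Apply the suggested THP defrag setting and read back the active
# choice, which the kernel marks with brackets in the sysfs file.
THP=/sys/kernel/mm/transparent_hugepage

active_choice() {
  # e.g. "always defer defer+madvise madvise [never]" -> "never"
  grep -o '\[[a-z+]*\]' "$1" | tr -d '[]'
}

# Requires root; skipped silently when the file is not writable.
[ -w "$THP/defrag" ] && echo never > "$THP/defrag"
```

After the write, `active_choice "$THP/defrag"` should report `never`.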
>> Just to let you know:
>> I now also have some more servers where MemFree shows 10-20GB but the
>> cache drops suddenly and memory PSI rises.
>
> You mean those are in that state right now? So how does
> /proc/pagetypeinfo look there, and would changing the defrag setting help?
Yes, I have a system which constantly triggers PSI (just 1-3%) while
MemFree is at 29GB.
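(As an aside, the 1-3% figure can be pulled straight from the PSI interface; `psi_avg10` is just a made-up helper name, and the path assumes a kernel built with CONFIG_PSI:)

```shell
# Read the 10-second "some" average from the memory PSI file; values
# around 1-3 correspond to the 1-3% pressure mentioned above.
psi_avg10() {
  awk '$1 == "some" { sub(/^avg10=/, "", $2); print $2 }' \
      "${1:-/proc/pressure/memory}"
}

# psi_avg10   # prints the current avg10 value on the affected system
```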
1402:
# cat /proc/pagetypeinfo
Page block order: 9
Pages per block: 512
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
Node    0, zone      DMA, type    Unmovable      0      0      0      1      2      1      1      0      1      0      0
Node    0, zone      DMA, type      Movable      0      0      0      0      0      0      0      0      0      1      3
Node    0, zone      DMA, type  Reclaimable      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone      DMA, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone      DMA, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone    DMA32, type    Unmovable      0      1      0      1      0      1      0      1      1      0      3
Node    0, zone    DMA32, type      Movable     42     29     60     52     56     52     47     46     24      3     48
Node    0, zone    DMA32, type  Reclaimable      0      0      3      1      0      1      1      1      1      0      0
Node    0, zone    DMA32, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone    DMA32, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone   Normal, type    Unmovable    189   7690  24737  14314   7620   5362   3458   1607    165      0      0
Node    0, zone   Normal, type      Movable  29269  31003  70251  73957  54776  37134  21084  10547   2307     35      4
Node    0, zone   Normal, type  Reclaimable   1431   3837   1821   2137   2475    978    386    112      2      0      0
Node    0, zone   Normal, type   HighAtomic      0      0      1      3      3      3      1      0      1      0      0
Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0

Number of blocks type     Unmovable      Movable  Reclaimable   HighAtomic      Isolate
Node 0, zone      DMA             1            7            0            0            0
Node 0, zone    DMA32            10         1005            1            0            0
Node 0, zone   Normal          3407        27184         1152            1            0
Stefan
Thread overview: 61+ messages
2019-09-05 11:27 lot of MemAvailable but falling cache and raising PSI Stefan Priebe - Profihost AG
2019-09-05 11:40 ` Michal Hocko
2019-09-05 11:56 ` Stefan Priebe - Profihost AG
2019-09-05 16:28 ` Yang Shi
2019-09-05 17:26 ` Stefan Priebe - Profihost AG
2019-09-05 18:46 ` Yang Shi
2019-09-05 19:31 ` Stefan Priebe - Profihost AG
2019-09-06 10:08 ` Stefan Priebe - Profihost AG
2019-09-06 10:25 ` Vlastimil Babka
2019-09-06 18:52 ` Yang Shi
2019-09-07 7:32 ` Stefan Priebe - Profihost AG
2019-09-09 8:27 ` Michal Hocko
2019-09-09 8:54 ` Stefan Priebe - Profihost AG
2019-09-09 11:01 ` Michal Hocko
2019-09-09 12:08 ` Michal Hocko
2019-09-09 12:10 ` Stefan Priebe - Profihost AG
2019-09-09 12:28 ` Michal Hocko
2019-09-09 12:37 ` Stefan Priebe - Profihost AG
2019-09-09 12:49 ` Michal Hocko
2019-09-09 12:56 ` Stefan Priebe - Profihost AG
[not found] ` <52235eda-ffe2-721c-7ad7-575048e2d29d@profihost.ag>
2019-09-10 5:58 ` Stefan Priebe - Profihost AG
2019-09-10 8:29 ` Michal Hocko
2019-09-10 8:38 ` Stefan Priebe - Profihost AG
2019-09-10 9:02 ` Michal Hocko
2019-09-10 9:37 ` Stefan Priebe - Profihost AG
2019-09-10 11:07 ` Michal Hocko
2019-09-10 12:45 ` Stefan Priebe - Profihost AG
2019-09-10 12:57 ` Michal Hocko
2019-09-10 13:05 ` Stefan Priebe - Profihost AG
2019-09-10 13:14 ` Stefan Priebe - Profihost AG
2019-09-10 13:24 ` Michal Hocko
2019-09-11 6:12 ` Stefan Priebe - Profihost AG
2019-09-11 6:24 ` Stefan Priebe - Profihost AG
2019-09-11 13:59 ` Stefan Priebe - Profihost AG
2019-09-12 10:53 ` Stefan Priebe - Profihost AG
2019-09-12 11:06 ` Stefan Priebe - Profihost AG
2019-09-11 7:09 ` 5.3-rc-8 hung task in IO (was: Re: lot of MemAvailable but falling cache and raising PSI) Michal Hocko
2019-09-11 14:09 ` Stefan Priebe - Profihost AG
2019-09-11 14:56 ` Filipe Manana
2019-09-11 15:39 ` Stefan Priebe - Profihost AG
2019-09-11 15:56 ` Filipe Manana
2019-09-11 16:15 ` Stefan Priebe - Profihost AG
2019-09-11 16:19 ` Filipe Manana
2019-09-19 10:21 ` lot of MemAvailable but falling cache and raising PSI Stefan Priebe - Profihost AG
2019-09-23 12:08 ` Michal Hocko
2019-09-27 12:45 ` Vlastimil Babka
2019-09-30 6:56 ` Stefan Priebe - Profihost AG
2019-09-30 7:21 ` Vlastimil Babka
2019-10-22 7:41 ` Stefan Priebe - Profihost AG
2019-10-22 7:48 ` Vlastimil Babka
2019-10-22 10:02 ` Stefan Priebe - Profihost AG
2019-10-22 10:20 ` Oscar Salvador
2019-10-22 10:21 ` Vlastimil Babka
2019-10-22 11:08 ` Stefan Priebe - Profihost AG
2019-09-10 5:41 ` Stefan Priebe - Profihost AG
2019-09-09 11:49 ` Vlastimil Babka
2019-09-09 12:09 ` Stefan Priebe - Profihost AG
2019-09-09 12:21 ` Vlastimil Babka
2019-09-09 12:31 ` Stefan Priebe - Profihost AG [this message]
2019-09-05 12:15 ` Vlastimil Babka
2019-09-05 12:27 ` Stefan Priebe - Profihost AG