Linux-mm Archive on lore.kernel.org
From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: Yang Shi <shy828301@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	l.roehrs@profihost.ag, cgroups@vger.kernel.org,
	Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: lot of MemAvailable but falling cache and raising PSI
Date: Sat, 7 Sep 2019 09:32:14 +0200
Message-ID: <4cf93888-2f12-a749-cc5d-7e05782d0422@profihost.ag> (raw)
In-Reply-To: <CAHbLzkrfzWS+epQif4ck9dv8f9sEQyJq7vcW55Zav7m_vYY96w@mail.gmail.com>


On 06.09.19 at 20:52, Yang Shi wrote:
> On Fri, Sep 6, 2019 at 3:08 AM Stefan Priebe - Profihost AG
> <s.priebe@profihost.ag> wrote:
>>
>> These are the biggest differences in meminfo before and after Cached
>> starts to drop. I didn't expect Cached to end up in MemFree.
>>
>> Before:
>> MemTotal:       16423116 kB
>> MemFree:          374572 kB
> 
> Here MemFree is only ~370MB? That is quite low compared with the total
> amount of memory. It may drop below the low watermark and wake kswapd.

Mhm, yes, that might be possible, but I don't see kswapd running in the
process list, at least not using any CPU.

Also, does it really free all of the cache? I thought it would only free up
to vm.min_free_kbytes, but that is 160MB on this machine.
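
(For my own understanding: if I read mm/page_alloc.c correctly, kswapd
reclaims up to the per-zone *high* watermark, not just vm.min_free_kbytes;
min_free_kbytes only seeds the min watermark, and low/high are scaled up
from it. A quick sketch to sum the per-zone watermarks from /proc/zoneinfo,
assuming 4 KiB pages:)

```python
import os

def zone_watermarks_kb(zoneinfo_text, page_kb=4):
    """Sum the per-zone min/low/high watermarks (in kB) from /proc/zoneinfo.

    Watermark lines look like '        min      67' (exactly two tokens),
    so the per-cpu pageset lines ('high:  378', with a colon) don't match.
    """
    totals = {"min": 0, "low": 0, "high": 0}
    for line in zoneinfo_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in totals:
            totals[parts[0]] += int(parts[1]) * page_kb
    return totals

if __name__ == "__main__":
    if os.path.exists("/proc/zoneinfo"):
        with open("/proc/zoneinfo") as f:
            print(zone_watermarks_kb(f.read()))
```

Comparing the summed "high" value against the 160MB min_free_kbytes would
show how far above the minimum kswapd actually reclaims on this box.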

>> MemAvailable:    5633816 kB
>> Cached:          5550972 kB
>> Inactive:        4696580 kB
>> Inactive(file):  3624776 kB
>>
>>
>> After:
>> MemTotal:       16423116 kB
>> MemFree:         3477168 kB
> 
> Here MemFree is ~3GB, and the file cache was shrunk from ~5GB down to ~2GB.

Yes, but I thought that if the file cache gets shrunk, another process must
have requested this memory and would use it, so it would not end up in
MemFree.
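
My best guess at the explanation, assuming 4.19's si_mem_available() in
mm/page_alloc.c: MemAvailable counts MemFree *plus* most of the file LRU
and reclaimable slab, so file cache that reclaim moves into MemFree barely
changes the sum. A simplified sketch (the real code subtracts
totalreserve_pages rather than the plain low watermark, and works in pages,
not kB):

```python
def estimate_mem_available_kb(mi, wmark_low_kb):
    """Rough sketch of the kernel's MemAvailable heuristic
    (si_mem_available(), mm/page_alloc.c, v4.19); all values in kB.

    `mi` is a dict of /proc/meminfo fields. Simplification: the kernel
    subtracts totalreserve_pages, which is close to, but not exactly,
    the low watermark used here.
    """
    # Free pages minus the reserve the allocator keeps back.
    avail = mi["MemFree"] - wmark_low_kb
    # Most of the file LRU is considered reclaimable without swapping.
    pagecache = mi["Active(file)"] + mi["Inactive(file)"]
    avail += pagecache - min(pagecache // 2, wmark_low_kb)
    # Likewise for reclaimable slab (dentries, inodes).
    slab = mi["SReclaimable"]
    avail += slab - min(slab // 2, wmark_low_kb)
    return avail
```

If that reading is right, reclaim moving pages from Cached into MemFree
leaves the estimate almost unchanged, which would explain MemAvailable
sitting at ~5G while Cached falls.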

I'm sure all of this is explainable, but I would really like to know how.
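
For sampling /proc/vmstat every second, as Michal asked earlier, a minimal
sketch (the sample_file helper is hypothetical, just a timestamped dump
loop):

```python
import sys
import time

def sample_file(path="/proc/vmstat", interval=1.0, count=10, out=sys.stdout):
    """Dump `path` to `out` every `interval` seconds, `count` times,
    each snapshot preceded by a '--- HH:MM:SS' marker line."""
    for i in range(count):
        with open(path) as f:
            out.write(f"--- {time.strftime('%H:%M:%S')}\n")
            out.write(f.read())
        if i + 1 < count:
            time.sleep(interval)
```

Running e.g. `sample_file(count=60)` while reproducing the cache drop and
posting the output should cover what was requested.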

Greets,
Stefan

>> MemAvailable:    6066916 kB
>> Cached:          2724504 kB
>> Inactive:        1854740 kB
>> Inactive(file):   950680 kB
>>
>> Any explanation?
>>
>> Greets,
>> Stefan
>> On 05.09.19 at 13:56, Stefan Priebe - Profihost AG wrote:
>>>
>>> On 05.09.19 at 13:40, Michal Hocko wrote:
>>>> On Thu 05-09-19 13:27:10, Stefan Priebe - Profihost AG wrote:
>>>>> Hello all,
>>>>>
>>>>> I hope you can help me again to understand the current MemAvailable
>>>>> value in the Linux kernel. I'm running a 4.19.52 kernel + PSI patches
>>>>> in this case.
>>>>>
>>>>> I'm seeing the following behaviour, which I don't understand and am
>>>>> asking for help with.
>>>>>
>>>>> While MemAvailable shows 5G, the kernel starts to drop cache from 4G
>>>>> down to 1G while Apache spawns some PHP processes. After that, the PSI
>>>>> memory "some" value rises and the kernel tries to reclaim memory, but
>>>>> MemAvailable stays at 5G.
>>>>>
>>>>> Any ideas?
>>>>
>>>> Can you collect /proc/vmstat (every second or so) and post it while this
>>>> is the case please?
>>>
>>> Yes sure.
>>>
>>> But I don't know which event you mean exactly. The current situation is
>>> that PSI memory pressure is > 20.
>>>
>>> This is the current status, where MemAvailable shows 5G but Cached has
>>> already dropped to 1G, coming from 4G:
>>>
>>>
>>> meminfo:
>>> MemTotal:       16423116 kB
>>> MemFree:         5280736 kB
>>> MemAvailable:    5332752 kB
>>> Buffers:            2572 kB
>>> Cached:          1225112 kB
>>> SwapCached:            0 kB
>>> Active:          8934976 kB
>>> Inactive:        1026900 kB
>>> Active(anon):    8740396 kB
>>> Inactive(anon):   873448 kB
>>> Active(file):     194580 kB
>>> Inactive(file):   153452 kB
>>> Unevictable:       19900 kB
>>> Mlocked:           19900 kB
>>> SwapTotal:             0 kB
>>> SwapFree:              0 kB
>>> Dirty:              1980 kB
>>> Writeback:             0 kB
>>> AnonPages:       8423480 kB
>>> Mapped:           978212 kB
>>> Shmem:            875680 kB
>>> Slab:             839868 kB
>>> SReclaimable:     383396 kB
>>> SUnreclaim:       456472 kB
>>> KernelStack:       22576 kB
>>> PageTables:        49824 kB
>>> NFS_Unstable:          0 kB
>>> Bounce:                0 kB
>>> WritebackTmp:          0 kB
>>> CommitLimit:     8211556 kB
>>> Committed_AS:   32060624 kB
>>> VmallocTotal:   34359738367 kB
>>> VmallocUsed:           0 kB
>>> VmallocChunk:          0 kB
>>> Percpu:           118048 kB
>>> HardwareCorrupted:     0 kB
>>> AnonHugePages:   6406144 kB
>>> ShmemHugePages:        0 kB
>>> ShmemPmdMapped:        0 kB
>>> HugePages_Total:       0
>>> HugePages_Free:        0
>>> HugePages_Rsvd:        0
>>> HugePages_Surp:        0
>>> Hugepagesize:       2048 kB
>>> Hugetlb:               0 kB
>>> DirectMap4k:     2580336 kB
>>> DirectMap2M:    14196736 kB
>>> DirectMap1G:     2097152 kB
>>>
>>>
>>> vmstat shows:
>>> nr_free_pages 1320053
>>> nr_zone_inactive_anon 218362
>>> nr_zone_active_anon 2185108
>>> nr_zone_inactive_file 38363
>>> nr_zone_active_file 48645
>>> nr_zone_unevictable 4975
>>> nr_zone_write_pending 495
>>> nr_mlock 4975
>>> nr_page_table_pages 12553
>>> nr_kernel_stack 22576
>>> nr_bounce 0
>>> nr_zspages 0
>>> nr_free_cma 0
>>> numa_hit 13916119899
>>> numa_miss 0
>>> numa_foreign 0
>>> numa_interleave 15629
>>> numa_local 13916119899
>>> numa_other 0
>>> nr_inactive_anon 218362
>>> nr_active_anon 2185164
>>> nr_inactive_file 38363
>>> nr_active_file 48645
>>> nr_unevictable 4975
>>> nr_slab_reclaimable 95849
>>> nr_slab_unreclaimable 114118
>>> nr_isolated_anon 0
>>> nr_isolated_file 0
>>> workingset_refault 71365357
>>> workingset_activate 20281670
>>> workingset_restore 8995665
>>> workingset_nodereclaim 326085
>>> nr_anon_pages 2105903
>>> nr_mapped 244553
>>> nr_file_pages 306921
>>> nr_dirty 495
>>> nr_writeback 0
>>> nr_writeback_temp 0
>>> nr_shmem 218920
>>> nr_shmem_hugepages 0
>>> nr_shmem_pmdmapped 0
>>> nr_anon_transparent_hugepages 3128
>>> nr_unstable 0
>>> nr_vmscan_write 0
>>> nr_vmscan_immediate_reclaim 1833104
>>> nr_dirtied 386544087
>>> nr_written 259220036
>>> nr_dirty_threshold 265636
>>> nr_dirty_background_threshold 132656
>>> pgpgin 1817628997
>>> pgpgout 3730818029
>>> pswpin 0
>>> pswpout 0
>>> pgalloc_dma 0
>>> pgalloc_dma32 5790777997
>>> pgalloc_normal 20003662520
>>> pgalloc_movable 0
>>> allocstall_dma 0
>>> allocstall_dma32 0
>>> allocstall_normal 39
>>> allocstall_movable 1980089
>>> pgskip_dma 0
>>> pgskip_dma32 0
>>> pgskip_normal 0
>>> pgskip_movable 0
>>> pgfree 26637215947
>>> pgactivate 316722654
>>> pgdeactivate 261039211
>>> pglazyfree 0
>>> pgfault 17719356599
>>> pgmajfault 30985544
>>> pglazyfreed 0
>>> pgrefill 286826568
>>> pgsteal_kswapd 36740923
>>> pgsteal_direct 349291470
>>> pgscan_kswapd 36878966
>>> pgscan_direct 395327492
>>> pgscan_direct_throttle 0
>>> zone_reclaim_failed 0
>>> pginodesteal 49817087
>>> slabs_scanned 597956834
>>> kswapd_inodesteal 1412447
>>> kswapd_low_wmark_hit_quickly 39
>>> kswapd_high_wmark_hit_quickly 319
>>> pageoutrun 3585
>>> pgrotated 2873743
>>> drop_pagecache 0
>>> drop_slab 0
>>> oom_kill 0
>>> pgmigrate_success 839062285
>>> pgmigrate_fail 507313
>>> compact_migrate_scanned 9619077010
>>> compact_free_scanned 67985619651
>>> compact_isolated 1684537704
>>> compact_stall 205761
>>> compact_fail 182420
>>> compact_success 23341
>>> compact_daemon_wake 2
>>> compact_daemon_migrate_scanned 811
>>> compact_daemon_free_scanned 490241
>>> htlb_buddy_alloc_success 0
>>> htlb_buddy_alloc_fail 0
>>> unevictable_pgs_culled 1006521
>>> unevictable_pgs_scanned 0
>>> unevictable_pgs_rescued 997077
>>> unevictable_pgs_mlocked 1319203
>>> unevictable_pgs_munlocked 842471
>>> unevictable_pgs_cleared 470531
>>> unevictable_pgs_stranded 459613
>>> thp_fault_alloc 20263113
>>> thp_fault_fallback 3368635
>>> thp_collapse_alloc 226476
>>> thp_collapse_alloc_failed 17594
>>> thp_file_alloc 0
>>> thp_file_mapped 0
>>> thp_split_page 1159
>>> thp_split_page_failed 3927
>>> thp_deferred_split_page 20348941
>>> thp_split_pmd 53361
>>> thp_split_pud 0
>>> thp_zero_page_alloc 1
>>> thp_zero_page_alloc_failed 0
>>> thp_swpout 0
>>> thp_swpout_fallback 0
>>> balloon_inflate 0
>>> balloon_deflate 0
>>> balloon_migrate 0
>>> swap_ra 0
>>> swap_ra_hit 0
>>>
>>> Greets,
>>> Stefan
>>>
>>

