* [linux-next-20190214] Free pages statistics is broken.
@ 2019-02-15  2:27 Tetsuo Handa
  2019-02-15 13:01 ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2019-02-15  2:27 UTC (permalink / raw)
  To: linux-mm; +Cc: Andrew Morton

I noticed that the amount of free memory reported by the DMA: / DMA32: /
Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
change in linux-next must be causing this problem.

----------
[   92.010105][T14750] Mem-Info:
[   92.012409][T14750] active_anon:623678 inactive_anon:2182 isolated_anon:0
[   92.012409][T14750]  active_file:7 inactive_file:99 isolated_file:0
[   92.012409][T14750]  unevictable:0 dirty:0 writeback:0 unstable:0
[   92.012409][T14750]  slab_reclaimable:16216 slab_unreclaimable:48544
[   92.012409][T14750]  mapped:623 shmem:2334 pagetables:9774 bounce:0
[   92.012409][T14750]  free:21145 free_pcp:332 free_cma:0
[   92.034020][T14750] Node 0 active_anon:2494712kB inactive_anon:8728kB active_file:80kB inactive_file:320kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:2592kB dirty:0kB writeback:0kB shmem:9336kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2144256kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[   92.052787][T14750] DMA free:12096kB min:352kB low:440kB high:528kB active_anon:3696kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15960kB managed:15876kB mlocked:0kB kernel_stack:0kB pagetables:36kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   92.063686][T14750] lowmem_reserve[]: 0 2647 2941 2941
[   92.066370][T14750] DMA32 free:61212kB min:60508kB low:75632kB high:90756kB active_anon:2411444kB inactive_anon:460kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:3129152kB managed:2711060kB mlocked:0kB kernel_stack:36544kB pagetables:36212kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[   92.084432][T14750] lowmem_reserve[]: 0 0 294 294
[   92.088254][T14750] Normal free:11120kB min:6716kB low:8392kB high:10068kB active_anon:79572kB inactive_anon:8268kB active_file:360kB inactive_file:540kB unevictable:0kB writepending:0kB present:1048576kB managed:301068kB mlocked:0kB kernel_stack:7776kB pagetables:2848kB bounce:0kB free_pcp:140kB local_pcp:88kB free_cma:0kB
[   92.102106][T14750] lowmem_reserve[]: 0 0 0 0
[   92.105259][T14750] DMA: 210*4kB () 105*8kB () 56*16kB (UM) 29*32kB (U) 13*64kB () 9*128kB (UM) 6*256kB (UM) 2*512kB () 2*1024kB (U) 2*2048kB (M) 2*4096kB (M) = 22384kB
[   92.113929][T14750] DMA32: 85952*4kB (UM) 36165*8kB (UM) 17368*16kB (UME) 11953*32kB (UME) 5598*64kB (UME) 2641*128kB (UM) 1252*256kB (ME) 604*512kB (UM) 303*1024kB (UM) 680*2048kB (U) 1*4096kB (M) = 4326600kB
[   92.124563][T14750] Normal: 41430*4kB (UE) 14837*8kB (UME) 10319*16kB (UE) 6379*32kB (UE) 2677*64kB () 1230*128kB () 557*256kB () 239*512kB () 83*1024kB () 42*2048kB () 0*4096kB = 1418384kB
[   92.132526][T14750] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[   92.136838][T14750] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   92.141838][T14750] 2683 total pagecache pages
[   92.144736][T14750] 0 pages in swap cache
[   92.147422][T14750] Swap cache stats: add 0, delete 0, find 0/0
[   92.150690][T14750] Free swap  = 0kB
[   92.153216][T14750] Total swap = 0kB
[   92.156285][T14750] 1048422 pages RAM
[   92.159794][T14750] 0 pages HighMem/MovableOnly
[   92.162966][T14750] 291421 pages reserved
[   92.165806][T14750] 0 pages cma reserved
----------

----------
[ 3204.099198][T42110] Mem-Info:
[ 3204.101094][T42110] active_anon:645144 inactive_anon:14056 isolated_anon:0
[ 3204.101094][T42110]  active_file:0 inactive_file:0 isolated_file:0
[ 3204.101094][T42110]  unevictable:0 dirty:0 writeback:0 unstable:0
[ 3204.101094][T42110]  slab_reclaimable:8328 slab_unreclaimable:47169
[ 3204.101094][T42110]  mapped:990 shmem:22735 pagetables:1462 bounce:0
[ 3204.101094][T42110]  free:22187 free_pcp:181 free_cma:0
[ 3204.116827][T42110] Node 0 active_anon:2580576kB inactive_anon:56224kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:3960kB dirty:0kB writeback:0kB shmem:90940kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 1159168kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 3204.127991][T42110] DMA free:12116kB min:352kB low:440kB high:528kB active_anon:3724kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15960kB managed:15876kB mlocked:0kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 3204.137735][T42110] lowmem_reserve[]: 0 2647 2941 2941
[ 3204.140385][T42110] DMA32 free:61592kB min:60508kB low:75632kB high:90756kB active_anon:2508676kB inactive_anon:9448kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:3129152kB managed:2711060kB mlocked:0kB kernel_stack:2304kB pagetables:4908kB bounce:0kB free_pcp:540kB local_pcp:0kB free_cma:0kB
[ 3204.151829][T42110] lowmem_reserve[]: 0 0 294 294
[ 3204.154387][T42110] Normal free:15040kB min:21052kB low:22728kB high:24404kB active_anon:68176kB inactive_anon:46776kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1048576kB managed:301068kB mlocked:0kB kernel_stack:6848kB pagetables:936kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
[ 3204.166972][T42110] lowmem_reserve[]: 0 0 0 0
[ 3204.169666][T42110] DMA: 3842*4kB (M) 1924*8kB () 970*16kB (M) 494*32kB (UM) 254*64kB (UM) 128*128kB (U) 74*256kB (UM) 36*512kB () 19*1024kB (U) 18*2048kB (M) 2*4096kB (M) = 196616kB
[ 3204.177392][T42110] DMA32: 3222548*4kB (UM) 1353981*8kB (U) 579496*16kB (UM) 267125*32kB (UME) 111607*64kB (UME) 46106*128kB (UME) 18144*256kB (UM) 6284*512kB () 1521*1024kB () 7061*2048kB () 0*4096kB = 78467096kB
[ 3204.185907][T42110] Normal: 637045*4kB () 202228*8kB (U) 64530*16kB (U) 19303*32kB (UE) 3969*64kB () 1321*128kB () 562*256kB () 239*512kB () 83*1024kB () 42*2048kB () 0*4096kB = 6676532kB
[ 3204.193851][T42110] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 3204.198253][T42110] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 3204.202507][T42110] 22747 total pagecache pages
[ 3204.205522][T42110] 0 pages in swap cache
[ 3204.208297][T42110] Swap cache stats: add 0, delete 0, find 0/0
[ 3204.211716][T42110] Free swap  = 0kB
[ 3204.214380][T42110] Total swap = 0kB
[ 3204.217017][T42110] 1048422 pages RAM
[ 3204.219747][T42110] 0 pages HighMem/MovableOnly
[ 3204.222754][T42110] 291421 pages reserved
[ 3204.225527][T42110] 0 pages cma reserved
----------


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-next-20190214] Free pages statistics is broken.
  2019-02-15  2:27 [linux-next-20190214] Free pages statistics is broken Tetsuo Handa
@ 2019-02-15 13:01 ` Michal Hocko
  2019-02-15 14:27   ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2019-02-15 13:01 UTC (permalink / raw)
  To: Tetsuo Handa; +Cc: linux-mm, Andrew Morton

On Fri 15-02-19 11:27:10, Tetsuo Handa wrote:
> I noticed that the amount of free memory reported by the DMA: / DMA32: /
> Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
> change in linux-next must be causing this problem.

Just a shot in the dark: could you try disabling the page allocator
randomization (the page_alloc.shuffle kernel command line parameter)? Not
that I see any bug there, but it is a recent page allocator change I am
aware of, and it might have some unanticipated side effects.
-- 
Michal Hocko
SUSE Labs
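The two knobs discussed in this thread for turning the randomization off can
be sketched as follows (the boolean `=0` syntax for the boot parameter is an
assumption; check the linux-next documentation for the exact form):

```
# Build-time: compile the shuffling out entirely (.config)
CONFIG_SHUFFLE_PAGE_ALLOCATOR=n

# Boot-time: leave it compiled in but disabled (kernel command line;
# assumes the module parameter accepts a boolean value)
page_alloc.shuffle=0
```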



* Re: [linux-next-20190214] Free pages statistics is broken.
  2019-02-15 13:01 ` Michal Hocko
@ 2019-02-15 14:27   ` Tetsuo Handa
  2019-02-15 17:44     ` Vlastimil Babka
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2019-02-15 14:27 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, Andrew Morton

On 2019/02/15 22:01, Michal Hocko wrote:
> On Fri 15-02-19 11:27:10, Tetsuo Handa wrote:
>> I noticed that the amount of free memory reported by the DMA: / DMA32: /
>> Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
>> change in linux-next must be causing this problem.
> 
> Just a shot in the dark: could you try disabling the page allocator
> randomization (the page_alloc.shuffle kernel command line parameter)? Not
> that I see any bug there, but it is a recent page allocator change I am
> aware of, and it might have some unanticipated side effects.
> 

I tried CONFIG_SHUFFLE_PAGE_ALLOCATOR=n, but the problem still exists.

[   45.788185][    C3] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15872kB
[   45.793869][    C3] Node 0 DMA32: 2017*4kB (M) 1007*8kB () 511*16kB (UM) 257*32kB (UM) 129*64kB (UM) 62*128kB () 33*256kB (UM) 15*512kB (U) 9*1024kB (UM) 186*2048kB (M) 481*4096kB (M) = 2425164kB
[   45.800355][    C3] Node 0 Normal: 71712*4kB () 32360*8kB () 16640*16kB (UME) 10536*32kB (UE) 5199*64kB (UME) 2551*128kB (M) 1232*256kB (UM) 604*512kB () 300*1024kB (UM) 233*2048kB (E) 4*4096kB (M) = 3233792kB

[  212.578797][ T9783] Node 0 DMA: 298*4kB (UM) 151*8kB (UM) 76*16kB (U) 41*32kB (UM) 21*64kB (UM) 11*128kB (U) 8*256kB (UM) 5*512kB (M) 3*1024kB (U) 0*2048kB 3*4096kB (M) = 27648kB
[  212.585534][ T9783] Node 0 DMA32: 18100*4kB () 8704*8kB (UM) 3984*16kB (M) 1704*32kB (M) 673*64kB (M) 261*128kB (M) 139*256kB (M) 48*512kB (UM) 23*1024kB (UM) 1308*2048kB (M) 10*4096kB (UM) = 3140240kB
[  212.593410][ T9783] Node 0 Normal: 285472*4kB (H) 105638*8kB (UEH) 43419*16kB (UEH) 19474*32kB (UEH) 7986*64kB (H) 3628*128kB () 1661*256kB () 753*512kB () 349*1024kB () 316*2048kB () 0*4096kB = 6095648kB

[  230.654713][ T9550] Node 0 DMA: 298*4kB (UM) 151*8kB (UM) 76*16kB (U) 41*32kB (UM) 21*64kB (UM) 11*128kB (U) 8*256kB (UM) 5*512kB (M) 3*1024kB (U) 0*2048kB 3*4096kB (M) = 27648kB
[  230.661248][ T9550] Node 0 DMA32: 29452*4kB () 14391*8kB () 6814*16kB () 3109*32kB (M) 1365*64kB (M) 491*128kB (M) 263*256kB (M) 150*512kB (M) 125*1024kB (M) 1309*2048kB (UM) 10*4096kB (UM) = 3585576kB
[  230.669879][ T9550] Node 0 Normal: 367054*4kB (UMEH) 123969*8kB (UMEH) 48498*16kB (UMEH) 20325*32kB (UMEH) 8069*64kB (UMH) 3640*128kB (H) 1662*256kB (H) 753*512kB () 350*1024kB (H) 316*2048kB () 0*4096kB = 6685248kB



* Re: [linux-next-20190214] Free pages statistics is broken.
  2019-02-15 14:27   ` Tetsuo Handa
@ 2019-02-15 17:44     ` Vlastimil Babka
  2019-02-15 18:13       ` Dan Williams
  0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2019-02-15 17:44 UTC (permalink / raw)
  To: Tetsuo Handa, Michal Hocko, Dan Williams; +Cc: linux-mm, Andrew Morton

On 2/15/19 3:27 PM, Tetsuo Handa wrote:
> On 2019/02/15 22:01, Michal Hocko wrote:
>> On Fri 15-02-19 11:27:10, Tetsuo Handa wrote:
>>> I noticed that the amount of free memory reported by the DMA: / DMA32: /
>>> Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
>>> change in linux-next must be causing this problem.
>> 
>> Just a shot in the dark: could you try disabling the page allocator
>> randomization (the page_alloc.shuffle kernel command line parameter)? Not
>> that I see any bug there, but it is a recent page allocator change I am
>> aware of, and it might have some unanticipated side effects.
>> 
> 
> I tried CONFIG_SHUFFLE_PAGE_ALLOCATOR=n, but the problem still exists.

I think it's the preparation patch [1], even with randomization off:

@@ -1910,7 +1900,7 @@ static inline void expand(struct zone *zone, struct page *page,
                if (set_page_guard(zone, &page[size], high, migratetype))
                        continue;
 
-               list_add(&page[size].lru, &area->free_list[migratetype]);
+               add_to_free_area(&page[size], area, migratetype);
                area->nr_free++;
                set_page_order(&page[size], high);
        }

This should have removed the 'area->nr_free++;' line, as add_to_free_area()
includes the increment.

[1] https://www.ozlabs.org/~akpm/mmotm/broken-out/mm-move-buddy-list-manipulations-into-helpers.patch



* Re: [linux-next-20190214] Free pages statistics is broken.
  2019-02-15 17:44     ` Vlastimil Babka
@ 2019-02-15 18:13       ` Dan Williams
  2019-02-16  2:18         ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Dan Williams @ 2019-02-15 18:13 UTC (permalink / raw)
  To: Vlastimil Babka; +Cc: Tetsuo Handa, Michal Hocko, Linux MM, Andrew Morton

On Fri, Feb 15, 2019 at 9:44 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 2/15/19 3:27 PM, Tetsuo Handa wrote:
> > On 2019/02/15 22:01, Michal Hocko wrote:
> >> On Fri 15-02-19 11:27:10, Tetsuo Handa wrote:
> >>> I noticed that the amount of free memory reported by the DMA: / DMA32: /
> >>> Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
> >>> change in linux-next must be causing this problem.
> >>
> >> Just a shot in the dark: could you try disabling the page allocator
> >> randomization (the page_alloc.shuffle kernel command line parameter)? Not
> >> that I see any bug there, but it is a recent page allocator change I am
> >> aware of, and it might have some unanticipated side effects.
> >>
> >
> > I tried CONFIG_SHUFFLE_PAGE_ALLOCATOR=n, but the problem still exists.
>
> I think it's the preparation patch [1], even with randomization off:
>
> @@ -1910,7 +1900,7 @@ static inline void expand(struct zone *zone, struct page *page,
>                 if (set_page_guard(zone, &page[size], high, migratetype))
>                         continue;
>
> -               list_add(&page[size].lru, &area->free_list[migratetype]);
> +               add_to_free_area(&page[size], area, migratetype);
>                 area->nr_free++;
>                 set_page_order(&page[size], high);
>         }
>
> This should have removed the 'area->nr_free++;' line, as add_to_free_area()
> includes the increment.

Yes, good find! I'll send an incremental fixup patch in a moment
unless someone beats me to it.



* Re: [linux-next-20190214] Free pages statistics is broken.
  2019-02-15 18:13       ` Dan Williams
@ 2019-02-16  2:18         ` Tetsuo Handa
  0 siblings, 0 replies; 6+ messages in thread
From: Tetsuo Handa @ 2019-02-16  2:18 UTC (permalink / raw)
  To: Dan Williams, Vlastimil Babka; +Cc: Michal Hocko, Linux MM, Andrew Morton

On 2019/02/16 3:13, Dan Williams wrote:
> On Fri, Feb 15, 2019 at 9:44 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 2/15/19 3:27 PM, Tetsuo Handa wrote:
>>> On 2019/02/15 22:01, Michal Hocko wrote:
>>>> On Fri 15-02-19 11:27:10, Tetsuo Handa wrote:
>>>>> I noticed that the amount of free memory reported by the DMA: / DMA32: /
>>>>> Normal: fields is increasing over time. Since 5.0-rc6 works correctly, some
>>>>> change in linux-next must be causing this problem.
>>>>
>>>> Just a shot in the dark: could you try disabling the page allocator
>>>> randomization (the page_alloc.shuffle kernel command line parameter)? Not
>>>> that I see any bug there, but it is a recent page allocator change I am
>>>> aware of, and it might have some unanticipated side effects.
>>>>
>>>
>>> I tried CONFIG_SHUFFLE_PAGE_ALLOCATOR=n, but the problem still exists.
>>
>> I think it's the preparation patch [1], even with randomization off:
>>
>> @@ -1910,7 +1900,7 @@ static inline void expand(struct zone *zone, struct page *page,
>>                 if (set_page_guard(zone, &page[size], high, migratetype))
>>                         continue;
>>
>> -               list_add(&page[size].lru, &area->free_list[migratetype]);
>> +               add_to_free_area(&page[size], area, migratetype);
>>                 area->nr_free++;
>>                 set_page_order(&page[size], high);
>>         }
>>
>> This should have removed the 'area->nr_free++;' line, as add_to_free_area()
>> includes the increment.
> 
> Yes, good find! I'll send an incremental fixup patch in a moment
> unless someone beats me to it.
> 

Removing the 'area->nr_free++;' line solved the problem. Thank you.
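For reference, the incremental fixup amounts to deleting that one leftover
line from the converted hunk quoted earlier. Sketched as a diff (line numbers
omitted since they depend on the mmotm base):

```diff
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ static inline void expand(struct zone *zone, struct page *page,
 		if (set_page_guard(zone, &page[size], high, migratetype))
 			continue;

 		add_to_free_area(&page[size], area, migratetype);
-		area->nr_free++;
 		set_page_order(&page[size], high);
 	}
```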



end of thread, other threads:[~2019-02-16  2:18 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
2019-02-15  2:27 [linux-next-20190214] Free pages statistics is broken Tetsuo Handa
2019-02-15 13:01 ` Michal Hocko
2019-02-15 14:27   ` Tetsuo Handa
2019-02-15 17:44     ` Vlastimil Babka
2019-02-15 18:13       ` Dan Williams
2019-02-16  2:18         ` Tetsuo Handa
