* Re: [PATCH] mm: Add nr_free_highatomic to fix incorrect watermark routine
[not found] <1567157153-22024-1-git-send-email-sangwoo2.park@lge.com>
@ 2019-08-30 11:09 ` Michal Hocko
[not found] ` <OF7501D4D5.8C005EEB-ON49258469.00192B40-49258469.00192B40@lge.com>
2019-09-05 13:59 ` Vlastimil Babka
1 sibling, 1 reply; 3+ messages in thread
From: Michal Hocko @ 2019-08-30 11:09 UTC (permalink / raw)
To: Sangwoo
Cc: hannes, arunks, guro, richard.weiyang, glider, jannh,
dan.j.williams, akpm, alexander.h.duyck, rppt, gregkh,
janne.huttunen, pasha.tatashin, vbabka, osalvador, mgorman,
khlebnikov, linux-mm, linux-kernel
On Fri 30-08-19 18:25:53, Sangwoo wrote:
> The highatomic migrate block can grow to up to 1% of total memory,
> and it is reserved only for high-order (order > 0) allocations. Its
> whole size is therefore subtracted during the watermark check when
> the allocation type isn't alloc_harder.
>
> This has a problem: pages already allocated from the highatomic
> reserve are no longer counted in NR_FREE_PAGES, so subtracting the
> total reserve size deducts the allocated highatomic pages a second
> time. This can make allocations fail even though there are enough
> free pages.
>
> We checked this by random test on my target(8GB RAM).
>
> Binder:6218_2: page allocation failure: order:0, mode:0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null)
> Binder:6218_2 cpuset=background mems_allowed=0
How come this order-0 sleepable allocation fails? The upstream kernel
doesn't fail those allocations unless the process context is killed by
the oom killer.
Also please note that atomic reserves are released when the memory
pressure is high and we cannot reclaim any other memory. Have a look at
unreserve_highatomic_pageblock called from should_reclaim_retry.
--
Michal Hocko
SUSE Labs
* Re: [PATCH] mm: Add nr_free_highatomic to fix incorrect watermark routine
[not found] <1567157153-22024-1-git-send-email-sangwoo2.park@lge.com>
2019-08-30 11:09 ` [PATCH] mm: Add nr_free_highatomic to fix incorrect watermark routine Michal Hocko
@ 2019-09-05 13:59 ` Vlastimil Babka
1 sibling, 0 replies; 3+ messages in thread
From: Vlastimil Babka @ 2019-09-05 13:59 UTC (permalink / raw)
To: Sangwoo, hannes, arunks, guro, richard.weiyang, glider, jannh,
dan.j.williams, akpm, alexander.h.duyck, rppt, gregkh,
janne.huttunen, pasha.tatashin, Michal Hocko, osalvador, mgorman,
khlebnikov
Cc: linux-mm, linux-kernel
On 8/30/19 11:25 AM, Sangwoo wrote:
> The highatomic migrate block can grow to up to 1% of total memory,
> and it is reserved only for high-order (order > 0) allocations. Its
> whole size is therefore subtracted during the watermark check when
> the allocation type isn't alloc_harder.
>
> This has a problem: pages already allocated from the highatomic
> reserve are no longer counted in NR_FREE_PAGES, so subtracting the
> total reserve size deducts the allocated highatomic pages a second
> time. This can make allocations fail even though there are enough
> free pages.
This is known; the comment in __zone_watermark_ok says "This will
over-estimate the size of the atomic reserve but it avoids a search."
It was discussed during review and wasn't considered a large issue,
thanks to unreserving on demand before OOM happens.
> @@ -919,6 +923,9 @@ static inline void __free_one_page(struct page *page,
> VM_BUG_ON(migratetype == -1);
> if (likely(!is_migrate_isolate(migratetype)))
> __mod_zone_freepage_state(zone, 1 << order, migratetype);
> + if (is_migrate_highatomic(migratetype) ||
> + is_migrate_highatomic_page(page))
> + __mod_zone_page_state(zone, NR_FREE_HIGHATOMIC_PAGES, 1 << order);
I suspect the counter will eventually become imbalanced, at least due
to merging of a highatomic pageblock with a non-highatomic pageblock.
To get it right, it would have to be complicated in a way similar to
how we handle MIGRATE_ISOLATE and MIGRATE_CMA. It wasn't considered
serious enough to warrant these complications.