To: zhou, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, mhocko@kernel.org, mgorman@suse.de, willy@linux.intel.com, rostedt@goodmis.org, mingo@redhat.com, vbabka@suse.cz, rientjes@google.com, pankaj.gupta.linux@gmail.com,
 bhe@redhat.com, ying.huang@intel.com, iamjoonsoo.kim@lge.com, minchan@kernel.org, ruxian.feng@transsion.com, kai.cheng@transsion.com, zhao.xu@transsion.com, zhouxianrong@tom.com, zhou xianrong
References: <20210313083109.5410-1-xianrong_zhou@163.com>
In-Reply-To: <20210313083109.5410-1-xianrong_zhou@163.com>
From: David Hildenbrand
Organization: Red Hat GmbH
Subject: Re: [PATCH] kswapd: no need reclaim cma pages triggered by unmovable allocation
Message-ID: <64f8c03f-7fd9-2e03-6b90-67e2a5a45b9d@redhat.com>
Date: Mon, 15 Mar 2021 16:46:33 +0100

On 13.03.21 09:31, zhou wrote:
> From: zhou xianrong
> 
> For purpose of better migration cma pages are allocated after
> failure movalbe allocations and are used normally for file pages
> or anonymous pages.

I failed to parse that sentence.

"For better migration, CMA pages are allocated after failing allocation 
of movable allocations and are used for backing files or anonymous memory."

Still doesn't make any sense to me. Can you clarify?

> 
> In reclaim path many cma pages if configurated are reclaimed

s/configurated/configured/

> from lru lists in kswapd mainly or direct reclaim triggered by
> unmovable or reclaimable allocations. But these reclaimed cma
> pages can not be used by original unmovable or reclaimable
> allocations. So the reclaim are unnecessary.

Might be a dumb question, but why can't reclaimable allocations end up 
on CMA? (for unmovable allocations, this is clear) Or did I 
misunderstand what that paragraph was trying to tell me?

> 
> So the unmovable or reclaimable allocations should not trigger
> reclaiming cma pages. The patch adds third factor of migratetype
> which is just like factors of zone index or order kswapd need
> consider. The modification follows codes of zone index
> consideration. And it is straightforward that skips reclaiming
> cma pages in reclaim procedure which is triggered only by
> unmovable or reclaimable allocations.
> 
> This optimization can avoid ~3% unnecessary isolations from cma
> (cma isolated / total isolated) with configuration of total 100Mb
> cma pages.

Can you say a few words about the interaction with ZONE_MOVABLE, which 
behaves similarly to CMA? I.e., does the same apply to ZONE_MOVABLE? Is 
it already handled?
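
Also, to make sure we are talking about the same mechanism: if I read 
the diff right, the core decision boils down to the sketch below. 
may_reclaim_cma() is my name purely for illustration (the patch calls 
it movable_reclaim() in mm/vmscan.c); gfp_migratetype() and 
is_migrate_movable() are the existing helpers it builds on:

#include <linux/gfp.h>
#include <linux/mmzone.h>

/*
 * Reclaim is only allowed to touch CMA pageblocks when the allocation
 * that triggered it could itself be placed on a CMA pageblock.
 */
static bool may_reclaim_cma(gfp_t gfp_mask)
{
        /* Which free list would this allocation be served from? */
        int mt = gfp_migratetype(gfp_mask);

        /*
         * MIGRATE_MOVABLE (e.g., GFP_HIGHUSER_MOVABLE) can be backed
         * by CMA pageblocks; MIGRATE_UNMOVABLE (e.g., GFP_KERNEL) and
         * MIGRATE_RECLAIMABLE cannot, so reclaiming CMA pages on their
         * behalf would be wasted work.
         */
        return is_migrate_movable(mt);
}

If that reading is correct, my question above is essentially whether 
MIGRATE_RECLAIMABLE really belongs on the "cannot" side of that test.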
> 
> Signed-off-by: zhou xianrong
> Signed-off-by: feng ruxian
> ---
>  include/linux/mmzone.h        |  6 ++--
>  include/trace/events/vmscan.h | 20 +++++++----
>  mm/page_alloc.c               |  5 +--
>  mm/vmscan.c                   | 63 +++++++++++++++++++++++++++++------
>  4 files changed, 73 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index b593316bff3d..7dd38d7372b9 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -301,6 +301,8 @@ struct lruvec {
>  #define ISOLATE_ASYNC_MIGRATE ((__force isolate_mode_t)0x4)
>  /* Isolate unevictable pages */
>  #define ISOLATE_UNEVICTABLE ((__force isolate_mode_t)0x8)
> +/* Isolate none cma pages */
> +#define ISOLATE_NONCMA ((__force isolate_mode_t)0x10)
>  
>  /* LRU Isolation modes. */
>  typedef unsigned __bitwise isolate_mode_t;
> @@ -756,7 +758,7 @@ typedef struct pglist_data {
>          wait_queue_head_t pfmemalloc_wait;
>          struct task_struct *kswapd;     /* Protected by
>                                             mem_hotplug_begin/end() */
> -        int kswapd_order;
> +        int kswapd_order, kswapd_migratetype;
>          enum zone_type kswapd_highest_zoneidx;
>  
>          int kswapd_failures;            /* Number of 'reclaimed == 0' runs */
> @@ -840,7 +842,7 @@ static inline bool pgdat_is_empty(pg_data_t *pgdat)
>  
>  void build_all_zonelists(pg_data_t *pgdat);
>  void wakeup_kswapd(struct zone *zone, gfp_t gfp_mask, int order,
> -                   enum zone_type highest_zoneidx);
> +                   int migratetype, enum zone_type highest_zoneidx);
>  bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>                           int highest_zoneidx, unsigned int alloc_flags,
>                           long free_pages);
> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
> index 2070df64958e..41bbafdfde84 100644
> --- a/include/trace/events/vmscan.h
> +++ b/include/trace/events/vmscan.h
> @@ -51,37 +51,41 @@ TRACE_EVENT(mm_vmscan_kswapd_sleep,
>  
>  TRACE_EVENT(mm_vmscan_kswapd_wake,
>  
> -        TP_PROTO(int nid, int zid, int order),
> +        TP_PROTO(int nid, int zid, int order, int mt),
>  
> -        TP_ARGS(nid, zid, order),
> +        TP_ARGS(nid, zid, order, mt),
>  
>          TP_STRUCT__entry(
>                  __field( int, nid )
>                  __field( int, zid )
>                  __field( int, order )
> +                __field( int, mt )
>          ),
>  
>          TP_fast_assign(
>                  __entry->nid = nid;
>                  __entry->zid = zid;
>                  __entry->order = order;
> +                __entry->mt = mt;
>          ),
>  
> -        TP_printk("nid=%d order=%d",
> +        TP_printk("nid=%d order=%d migratetype=%d",
>                  __entry->nid,
> -                __entry->order)
> +                __entry->order,
> +                __entry->mt)
> );
>  
> TRACE_EVENT(mm_vmscan_wakeup_kswapd,
>  
> -        TP_PROTO(int nid, int zid, int order, gfp_t gfp_flags),
> +        TP_PROTO(int nid, int zid, int order, int mt, gfp_t gfp_flags),
>  
> -        TP_ARGS(nid, zid, order, gfp_flags),
> +        TP_ARGS(nid, zid, order, mt, gfp_flags),
>  
>          TP_STRUCT__entry(
>                  __field( int, nid )
>                  __field( int, zid )
>                  __field( int, order )
> +                __field( int, mt )
>                  __field( gfp_t, gfp_flags )
>          ),
>  
> @@ -89,12 +93,14 @@ TRACE_EVENT(mm_vmscan_wakeup_kswapd,
>                  __entry->nid = nid;
>                  __entry->zid = zid;
>                  __entry->order = order;
> +                __entry->mt = mt;
>                  __entry->gfp_flags = gfp_flags;
>          ),
>  
> -        TP_printk("nid=%d order=%d gfp_flags=%s",
> +        TP_printk("nid=%d order=%d migratetype=%d gfp_flags=%s",
>                  __entry->nid,
>                  __entry->order,
> +                __entry->mt,
>                  show_gfp_flags(__entry->gfp_flags))
> );
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 519a60d5b6f7..45ceb15721b8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3517,7 +3517,7 @@ struct page *rmqueue(struct zone *preferred_zone,
>          /* Separate test+clear to avoid unnecessary atomics */
>          if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
>                  clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
> -                wakeup_kswapd(zone, 0, 0, zone_idx(zone));
> +                wakeup_kswapd(zone, 0, 0, migratetype, zone_idx(zone));
>          }
>  
>          VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
> @@ -4426,11 +4426,12 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>          struct zone *zone;
>          pg_data_t *last_pgdat = NULL;
>          enum zone_type highest_zoneidx = ac->highest_zoneidx;
> +        int migratetype = ac->migratetype;
>  
>          for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
>                                          ac->nodemask) {
>                  if (last_pgdat != zone->zone_pgdat)
> -                        wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
> +                        wakeup_kswapd(zone, gfp_mask, order, migratetype, highest_zoneidx);
>                  last_pgdat = zone->zone_pgdat;
>          }
> }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b1b574ad199d..184f0c4c7151 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -99,6 +99,9 @@ struct scan_control {
>          /* Can pages be swapped as part of reclaim? */
>          unsigned int may_swap:1;
>  
> +        /* Can cma pages be reclaimed? */
> +        unsigned int may_cma:1;
> +
>          /*
>           * Cgroups are not reclaimed below their configured memory.low,
>           * unless we threaten to OOM. If any cgroups are skipped due to
> @@ -286,6 +289,11 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> }
> #endif
>  
> +static bool movable_reclaim(gfp_t gfp_mask)
> +{
> +        return is_migrate_movable(gfp_migratetype(gfp_mask));
> +}
> +
>  /*
>   * This misses isolated pages which are not accounted for to save counters.
>   * As the data only determines if reclaim or compaction continues, it is
> @@ -1499,6 +1507,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>                  .gfp_mask = GFP_KERNEL,
>                  .priority = DEF_PRIORITY,
>                  .may_unmap = 1,
> +                .may_cma = 1,
>          };
>          struct reclaim_stat stat;
>          unsigned int nr_reclaimed;
> @@ -1593,6 +1602,9 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>          if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
>                  return ret;
>  
> +        if ((mode & ISOLATE_NONCMA) && is_migrate_cma(get_pageblock_migratetype(page)))
> +                return ret;
> +
>          return 0;
> }
>  
> @@ -1647,7 +1659,10 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>          unsigned long skipped = 0;
>          unsigned long scan, total_scan, nr_pages;
>          LIST_HEAD(pages_skipped);
> -        isolate_mode_t mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);
> +        isolate_mode_t mode;
> +
> +        mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);
> +        mode |= (sc->may_cma ? 0 : ISOLATE_NONCMA);
>  
>          total_scan = 0;
>          scan = 0;
> @@ -2125,6 +2140,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
>                  .may_writepage = 1,
>                  .may_unmap = 1,
>                  .may_swap = 1,
> +                .may_cma = 1,
>          };
>  
>          while (!list_empty(page_list)) {
> @@ -3253,6 +3269,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>                  .may_writepage = !laptop_mode,
>                  .may_unmap = 1,
>                  .may_swap = 1,
> +                .may_cma = movable_reclaim(gfp_mask),
>          };
>  
>          /*
> @@ -3298,6 +3315,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
>                  .may_unmap = 1,
>                  .reclaim_idx = MAX_NR_ZONES - 1,
>                  .may_swap = !noswap,
> +                .may_cma = 1,
>          };
>  
>          WARN_ON_ONCE(!current->reclaim_state);
> @@ -3341,6 +3359,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>                  .may_writepage = !laptop_mode,
>                  .may_unmap = 1,
>                  .may_swap = may_swap,
> +                .may_cma = 1,
>          };
>          /*
>           * Traverse the ZONELIST_FALLBACK zonelist of the current node to put
> @@ -3548,7 +3567,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
>   * or lower is eligible for reclaim until at least one usable zone is
>   * balanced.
>   */
> -static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> +static int balance_pgdat(pg_data_t *pgdat, int order, int migratetype, int highest_zoneidx)
> {
>          int i;
>          unsigned long nr_soft_reclaimed;
> @@ -3650,6 +3669,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>           */
>          sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
>          sc.may_swap = !nr_boost_reclaim;
> +        sc.may_cma = is_migrate_movable(migratetype);
>  
>          /*
>           * Do some background aging of the anon list, to give
> @@ -3771,8 +3791,15 @@ static enum zone_type kswapd_highest_zoneidx(pg_data_t *pgdat,
>          return curr_idx == MAX_NR_ZONES ? prev_highest_zoneidx : curr_idx;
> }
>  
> +static int kswapd_migratetype(pg_data_t *pgdat, int prev_migratetype)
> +{
> +        int curr_migratetype = READ_ONCE(pgdat->kswapd_migratetype);
> +
> +        return curr_migratetype == MIGRATE_TYPES ? prev_migratetype : curr_migratetype;
> +}
> +
>  static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
> -                                unsigned int highest_zoneidx)
> +                                int migratetype, unsigned int highest_zoneidx)
> {
>          long remaining = 0;
>          DEFINE_WAIT(wait);
> @@ -3807,8 +3834,8 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
>                  remaining = schedule_timeout(HZ/10);
>  
>                  /*
> -                 * If woken prematurely then reset kswapd_highest_zoneidx and
> -                 * order. The values will either be from a wakeup request or
> +                 * If woken prematurely then reset kswapd_highest_zoneidx, order
> +                 * and migratetype. The values will either be from a wakeup request or
>                   * the previous request that slept prematurely.
>                   */
>                  if (remaining) {
> @@ -3818,6 +3845,10 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
>  
>                          if (READ_ONCE(pgdat->kswapd_order) < reclaim_order)
>                                  WRITE_ONCE(pgdat->kswapd_order, reclaim_order);
> +
> +                        if (!is_migrate_movable(READ_ONCE(pgdat->kswapd_migratetype)))
> +                                WRITE_ONCE(pgdat->kswapd_migratetype,
> +                                           kswapd_migratetype(pgdat, migratetype));
>                  }
>  
>                  finish_wait(&pgdat->kswapd_wait, &wait);
> @@ -3870,6 +3901,7 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
>   */
> static int kswapd(void *p)
> {
> +        int migratetype = 0;
>          unsigned int alloc_order, reclaim_order;
>          unsigned int highest_zoneidx = MAX_NR_ZONES - 1;
>          pg_data_t *pgdat = (pg_data_t*)p;
> @@ -3895,23 +3927,27 @@ static int kswapd(void *p)
>          set_freezable();
>  
>          WRITE_ONCE(pgdat->kswapd_order, 0);
> +        WRITE_ONCE(pgdat->kswapd_migratetype, MIGRATE_TYPES);
>          WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
>          for ( ; ; ) {
>                  bool ret;
>  
>                  alloc_order = reclaim_order = READ_ONCE(pgdat->kswapd_order);
> +                migratetype = kswapd_migratetype(pgdat, migratetype);
>                  highest_zoneidx = kswapd_highest_zoneidx(pgdat,
>                                                           highest_zoneidx);
>  
> kswapd_try_sleep:
>                  kswapd_try_to_sleep(pgdat, alloc_order, reclaim_order,
> -                                    highest_zoneidx);
> +                                    migratetype, highest_zoneidx);
>  
>                  /* Read the new order and highest_zoneidx */
>                  alloc_order = READ_ONCE(pgdat->kswapd_order);
> +                migratetype = kswapd_migratetype(pgdat, migratetype);
>                  highest_zoneidx = kswapd_highest_zoneidx(pgdat,
>                                                           highest_zoneidx);
>                  WRITE_ONCE(pgdat->kswapd_order, 0);
> +                WRITE_ONCE(pgdat->kswapd_migratetype, MIGRATE_TYPES);
>                  WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
>  
>                  ret = try_to_freeze();
> @@ -3934,8 +3970,8 @@ static int kswapd(void *p)
>                   * request (alloc_order).
>                   */
>                  trace_mm_vmscan_kswapd_wake(pgdat->node_id, highest_zoneidx,
> -                                            alloc_order);
> -                reclaim_order = balance_pgdat(pgdat, alloc_order,
> +                                            alloc_order, migratetype);
> +                reclaim_order = balance_pgdat(pgdat, alloc_order, migratetype,
>                                                highest_zoneidx);
>                  if (reclaim_order < alloc_order)
>                          goto kswapd_try_sleep;
> @@ -3953,11 +3989,12 @@ static int kswapd(void *p)
>   * has failed or is not needed, still wake up kcompactd if only compaction is
>   * needed.
>   */
> -void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
> +void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order, int migratetype,
>                     enum zone_type highest_zoneidx)
> {
>          pg_data_t *pgdat;
>          enum zone_type curr_idx;
> +        int curr_migratetype;
>  
>          if (!managed_zone(zone))
>                  return;
> @@ -3967,6 +4004,7 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
>  
>          pgdat = zone->zone_pgdat;
>          curr_idx = READ_ONCE(pgdat->kswapd_highest_zoneidx);
> +        curr_migratetype = READ_ONCE(pgdat->kswapd_migratetype);
>  
>          if (curr_idx == MAX_NR_ZONES || curr_idx < highest_zoneidx)
>                  WRITE_ONCE(pgdat->kswapd_highest_zoneidx, highest_zoneidx);
> @@ -3974,6 +4012,9 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
>          if (READ_ONCE(pgdat->kswapd_order) < order)
>                  WRITE_ONCE(pgdat->kswapd_order, order);
>  
> +        if (curr_migratetype == MIGRATE_TYPES || is_migrate_movable(migratetype))
> +                WRITE_ONCE(pgdat->kswapd_migratetype, migratetype);
> +
>          if (!waitqueue_active(&pgdat->kswapd_wait))
>                  return;
>  
> @@ -3994,7 +4035,7 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
>          }
>  
>          trace_mm_vmscan_wakeup_kswapd(pgdat->node_id, highest_zoneidx, order,
> -                                      gfp_flags);
> +                                      migratetype, gfp_flags);
>          wake_up_interruptible(&pgdat->kswapd_wait);
> }
>  
> @@ -4017,6 +4058,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
>                  .may_writepage = 1,
>                  .may_unmap = 1,
>                  .may_swap = 1,
> +                .may_cma = 1,
>                  .hibernation_mode = 1,
>          };
>          struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> @@ -4176,6 +4218,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
>                  .may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
>                  .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
>                  .may_swap = 1,
> +                .may_cma = movable_reclaim(gfp_mask),
>                  .reclaim_idx = gfp_zone(gfp_mask),
>          };
>  
> 

-- 
Thanks,

David / dhildenb