Date: Tue, 9 Jun 2020 15:53:25 +0200
From: Michal Hocko
To: js1304@gmail.com
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig,
	Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Joonsoo Kim
Subject: Re: [PATCH v2 06/12] mm/hugetlb: make hugetlb migration target allocation APIs CMA aware
Message-ID: <20200609135325.GH22623@dhcp22.suse.cz>
References: <1590561903-13186-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1590561903-13186-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1590561903-13186-7-git-send-email-iamjoonsoo.kim@lge.com>

On Wed 27-05-20 15:44:57, Joonsoo Kim wrote:
> From: Joonsoo Kim
>
> There is a user who does not want to use CMA memory for migration. Until
> now this has been implemented on the caller side, but that is not optimal
> because the caller has only limited information. This patch implements it
> on the callee side to get a better result.

I do not follow this changelog and honestly do not see an improvement.
skip_cma in the alloc_control sounds like a hack to me. I can now see
why your earlier patch started to OR in the given gfp_mask. If anything
that change should be folded here. But even then I do not like a partial
gfp_mask (__GFP_NOWARN on its own really has GFP_NOWAIT-like semantics).
> Acked-by: Mike Kravetz
> Signed-off-by: Joonsoo Kim
> ---
>  include/linux/hugetlb.h |  2 --
>  mm/gup.c                |  9 +++------
>  mm/hugetlb.c            | 21 +++++++++++++++++----
>  mm/internal.h           |  1 +
>  4 files changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index f482563..3d05f7d 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -503,8 +503,6 @@ struct huge_bootmem_page {
>  	struct hstate *hstate;
>  };
>
> -struct page *alloc_migrate_huge_page(struct hstate *h,
> -				struct alloc_control *ac);
>  struct page *alloc_huge_page_nodemask(struct hstate *h,
>  				struct alloc_control *ac);
>  struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
> diff --git a/mm/gup.c b/mm/gup.c
> index 6b78f11..87eca79 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1617,14 +1617,11 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
>  		struct alloc_control ac = {
>  			.nid = nid,
>  			.nmask = NULL,
> -			.gfp_mask = gfp_mask,
> +			.gfp_mask = __GFP_NOWARN,
> +			.skip_cma = true,
>  		};
>
> -		/*
> -		 * We don't want to dequeue from the pool because pool pages will
> -		 * mostly be from the CMA region.
> -		 */
> -		return alloc_migrate_huge_page(h, &ac);
> +		return alloc_huge_page_nodemask(h, &ac);
>  	}
>
>  	if (PageTransHuge(page)) {
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8132985..e465582 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1033,13 +1033,19 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>  	h->free_huge_pages_node[nid]++;
>  }
>
> -static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
> +static struct page *dequeue_huge_page_node_exact(struct hstate *h,
> +						int nid, bool skip_cma)
>  {
>  	struct page *page;
>
> -	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
> +	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> +		if (skip_cma && is_migrate_cma_page(page))
> +			continue;
> +
>  		if (!PageHWPoison(page))
>  			break;
> +	}
> +
>  	/*
>  	 * if 'non-isolated free hugepage' not found on the list,
>  	 * the allocation fails.
> @@ -1080,7 +1086,7 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h,
>  			continue;
>  		node = zone_to_nid(zone);
>
> -		page = dequeue_huge_page_node_exact(h, node);
> +		page = dequeue_huge_page_node_exact(h, node, ac->skip_cma);
>  		if (page)
>  			return page;
>  	}
> @@ -1937,7 +1943,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
>  	return page;
>  }
>
> -struct page *alloc_migrate_huge_page(struct hstate *h,
> +static struct page *alloc_migrate_huge_page(struct hstate *h,
>  				struct alloc_control *ac)
>  {
>  	struct page *page;
> @@ -1999,6 +2005,13 @@ struct page *alloc_huge_page_nodemask(struct hstate *h,
>  	}
>  	spin_unlock(&hugetlb_lock);
>
> +	/*
> +	 * clearing __GFP_MOVABLE flag ensure that allocated page
> +	 * will not come from CMA area
> +	 */
> +	if (ac->skip_cma)
> +		ac->gfp_mask &= ~__GFP_MOVABLE;
> +
>  	return alloc_migrate_huge_page(h, ac);
>  }
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e613ce..159cfd6 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -618,6 +618,7 @@ struct alloc_control {
>  	int nid;		/* preferred node id */
>  	nodemask_t *nmask;
>  	gfp_t gfp_mask;
> +	bool skip_cma;
>  };
>
>  #endif /* __MM_INTERNAL_H */
> --
> 2.7.4

-- 
Michal Hocko
SUSE Labs