From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v2 06/12] mm/hugetlb: make hugetlb migration target allocation APIs CMA aware
Date: Wed, 27 May 2020 15:44:57 +0900
Message-Id: <1590561903-13186-7-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1590561903-13186-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1590561903-13186-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There are users who do not want to use CMA memory for migration. Until
now, this was handled on the caller side, but that is suboptimal because
the caller has only limited information. This patch implements the policy
on the callee side to get a better result.
Acked-by: Mike Kravetz
Signed-off-by: Joonsoo Kim
---
 include/linux/hugetlb.h |  2 --
 mm/gup.c                |  9 +++------
 mm/hugetlb.c            | 21 +++++++++++++++++----
 mm/internal.h           |  1 +
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f482563..3d05f7d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -503,8 +503,6 @@ struct huge_bootmem_page {
 	struct hstate *hstate;
 };
 
-struct page *alloc_migrate_huge_page(struct hstate *h,
-				struct alloc_control *ac);
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
diff --git a/mm/gup.c b/mm/gup.c
index 6b78f11..87eca79 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1617,14 +1617,11 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		struct alloc_control ac = {
 			.nid = nid,
 			.nmask = NULL,
-			.gfp_mask = gfp_mask,
+			.gfp_mask = __GFP_NOWARN,
+			.skip_cma = true,
 		};
 
-		/*
-		 * We don't want to dequeue from the pool because pool pages will
-		 * mostly be from the CMA region.
-		 */
-		return alloc_migrate_huge_page(h, &ac);
+		return alloc_huge_page_nodemask(h, &ac);
 	}
 
 	if (PageTransHuge(page)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8132985..e465582 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1033,13 +1033,19 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	h->free_huge_pages_node[nid]++;
 }
 
-static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+static struct page *dequeue_huge_page_node_exact(struct hstate *h,
+						int nid, bool skip_cma)
 {
 	struct page *page;
 
-	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
+	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+		if (skip_cma && is_migrate_cma_page(page))
+			continue;
+
 		if (!PageHWPoison(page))
 			break;
+	}
+
 	/*
 	 * if 'non-isolated free hugepage' not found on the list,
 	 * the allocation fails.
@@ -1080,7 +1086,7 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h,
 			continue;
 		node = zone_to_nid(zone);
 
-		page = dequeue_huge_page_node_exact(h, node);
+		page = dequeue_huge_page_node_exact(h, node, ac->skip_cma);
 		if (page)
 			return page;
 	}
@@ -1937,7 +1943,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	return page;
 }
 
-struct page *alloc_migrate_huge_page(struct hstate *h,
+static struct page *alloc_migrate_huge_page(struct hstate *h,
 				struct alloc_control *ac)
 {
 	struct page *page;
@@ -1999,6 +2005,13 @@ struct page *alloc_huge_page_nodemask(struct hstate *h,
 	}
 	spin_unlock(&hugetlb_lock);
 
+	/*
+	 * Clearing the __GFP_MOVABLE flag ensures that the allocated page
+	 * will not come from a CMA area.
+	 */
+	if (ac->skip_cma)
+		ac->gfp_mask &= ~__GFP_MOVABLE;
+
 	return alloc_migrate_huge_page(h, ac);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 6e613ce..159cfd6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -618,6 +618,7 @@ struct alloc_control {
 	int nid;		/* preferred node id */
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
+	bool skip_cma;
 };
 
 #endif /* __MM_INTERNAL_H */
-- 
2.7.4