From: Pavel Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz,
	mhocko@suse.com, david@redhat.com, osalvador@suse.de,
	dan.j.williams@intel.com, sashal@kernel.org,
	tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com,
	mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com,
	jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de,
	willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com,
	linux-doc@vger.kernel.org
Subject: [PATCH v3 5/6] mm/gup: migrate pinned pages out of movable zone
Date: Fri, 11 Dec 2020 15:21:39 -0500
Message-Id: <20201211202140.396852-6-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>
References: <20201211202140.396852-1-pasha.tatashin@soleen.com>

We should not pin pages in ZONE_MOVABLE. Currently, the only movable
pages we refuse to pin are CMA pages. Generalize the function that
migrates CMA pages to migrate all movable pages. Use is_pinnable_page()
to check which pages need to be migrated.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         | 11 ++++--
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 66 ++++++++++++++--------------------
 4 files changed, 38 insertions(+), 43 deletions(-)
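For context: is_pinnable_page() is introduced by an earlier patch in
this series, so its definition does not appear in this diff. A minimal
sketch of its intent, written against helpers that already exist in this
kernel (the exact in-series definition may differ):

  /*
   * Sketch only; the authoritative definition is added earlier in this
   * series. The idea: a page may be long-term pinned only if it sits
   * neither in ZONE_MOVABLE nor on a CMA pageblock, since pages in both
   * must remain migratable.
   */
  static inline bool is_pinnable_page(struct page *page)
  {
          return zone_idx(page_zone(page)) != ZONE_MOVABLE &&
                 !is_migrate_cma_page(page);
  }
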
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 4594838a0f7c..aae5ef0b3ba1 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..25c0c13ba4b1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -386,9 +386,14 @@ enum zone_type {
	 * likely to succeed, and to locally limit unmovable allocations - e.g.,
	 * to increase the number of THP/huge pages. Notable special cases are:
	 *
-	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 * 1. Pinned pages: (long-term) pinning of movable pages is avoided
+	 *    when pages are pinned and faulted, but it is still possible that
+	 *    the address space already has pages in ZONE_MOVABLE at the time
+	 *    the pages are pinned (i.e. the user touched that memory before
+	 *    pinning). In such a case we try to migrate them to a different
+	 *    zone, but if migration fails the pages can still end up pinned in
+	 *    ZONE_MOVABLE. In such a case, memory offlining might retry a long
+	 *    time and will only succeed once the user application unpins pages.
	 * 2. memblock allocations: kernelcore/movablecore setups might create
	 *    situations where ZONE_MOVABLE contains unmovable allocations
	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace
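A note on the EMe()-to-EM() flip above: EMe() marks the final entry of
the macro list (no trailing line-continuation backslash, and no trailing
comma in its expansion), so appending MR_LONGTERM_PIN demotes
MR_CONTIG_RANGE to an ordinary EM() entry. A simplified sketch of the
two-pass pattern these trace headers use (not the verbatim in-tree code):

  /* Pass 1: make each enum value visible to user-space tooling. */
  #undef EM
  #undef EMe
  #define EM(a, b)	TRACE_DEFINE_ENUM(a);
  #define EMe(a, b)	TRACE_DEFINE_ENUM(a);

  MIGRATE_REASON

  /*
   * Pass 2: build the table consumed by __print_symbolic(); every entry
   * needs a trailing comma except the last, hence the separate EMe().
   */
  #undef EM
  #undef EMe
  #define EM(a, b)	{a, b},
  #define EMe(a, b)	{a, b}
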
diff --git a/mm/gup.c b/mm/gup.c
index 007060e66a48..d5e9c459952e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -89,11 +89,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 	int orig_refs = refs;
 
 	/*
-	 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-	 * path, so fail and let the caller fall back to the slow path.
+	 * Can't do FOLL_LONGTERM + FOLL_PIN via the gup fast path if
+	 * the page is not pinnable, so fail and let the caller fall
+	 * back to the slow path.
 	 */
-	if (unlikely(flags & FOLL_LONGTERM) &&
-	    is_migrate_cma_page(page))
+	if (unlikely((flags & FOLL_LONGTERM) &&
+		     !is_pinnable_page(page)))
 		return NULL;
 
 	/*
@@ -1549,19 +1550,18 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i;
 	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
@@ -1579,13 +1579,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 */
 		step = compound_nr(head) - (pages[i] - head);
 		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
 		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
+				isolate_huge_page(head, &movable_page_list);
 			else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
@@ -1593,7 +1592,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 				}
 
 				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
+					list_add_tail(&head->lru, &movable_page_list);
 					mod_node_page_state(page_pgdat(head),
 							    NR_ISOLATED_ANON +
 							    page_is_file_lru(head),
@@ -1605,7 +1604,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		i += step;
 	}
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1615,7 +1614,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+		if (migrate_pages(&movable_page_list, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
@@ -1623,17 +1622,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			 */
 			migrate_allow = false;
 
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 		}
 		/*
 		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
+		 * again migrating any pages which we failed to isolate earlier.
 		 */
 		ret = __get_user_pages_locked(mm, start, nr_pages,
-					      pages, vmas, NULL,
-					      gup_flags);
+						pages, vmas, NULL,
+						gup_flags);
 
 		if ((ret > 0) && migrate_allow) {
 			nr_pages = ret;
@@ -1644,17 +1642,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 
 	return ret;
 }
-#else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
-{
-	return nr_pages;
-}
-#endif /* CONFIG_CMA */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1678,8 +1665,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 
 	if (gup_flags & FOLL_LONGTERM) {
 		if (rc > 0)
-			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-							 vmas, gup_flags);
+			rc = check_and_migrate_movable_pages(mm, start, rc,
+							     pages, vmas,
+							     gup_flags);
 		memalloc_pin_restore(flags);
 	}
 	return rc;
-- 
2.25.1
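As an illustration of the path this patch changes (a hypothetical
caller, not part of this series): a driver taking a long-term pin on
user memory now gets any ZONE_MOVABLE pages, not just CMA pages,
migrated out by check_and_migrate_movable_pages() before the pin is
established.

  #include <linux/mm.h>

  /* Hypothetical example, for illustration only. */
  static int example_longterm_pin(unsigned long uaddr, int nr_pages,
                                  struct page **pages)
  {
          int pinned;

          /*
           * Under FOLL_LONGTERM, the fast path refuses non-pinnable
           * pages (see the try_grab_compound_head() hunk above) and
           * falls back to the slow path, where such pages are migrated
           * out of ZONE_MOVABLE before being pinned.
           */
          pinned = pin_user_pages_fast(uaddr, nr_pages,
                                       FOLL_WRITE | FOLL_LONGTERM, pages);
          if (pinned < 0)
                  return pinned;

          /* ... pages are now safe to use for long-lived DMA ... */

          unpin_user_pages(pages, pinned);
          return 0;
  }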