Date: Tue, 04 May 2021 18:38:46 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, dan.j.williams@intel.com, david@redhat.com,
 iamjoonsoo.kim@lge.com, ira.weiny@intel.com, jgg@nvidia.com, jgg@ziepe.ca,
 jhubbard@nvidia.com, jmorris@namei.org, linux-mm@kvack.org, mgorman@suse.de,
 mhocko@kernel.org, mhocko@suse.com, mike.kravetz@oracle.com,
 mingo@redhat.com, mm-commits@vger.kernel.org, osalvador@suse.de,
 pasha.tatashin@soleen.com, peterz@infradead.org, rientjes@google.com,
 rostedt@goodmis.org, sashal@kernel.org, torvalds@linux-foundation.org,
 tyhicks@linux.microsoft.com, vbabka@suse.cz, willy@infradead.org
Subject: [patch 112/143] mm/gup: return an error on migration failure
Message-ID: <20210505013846.4yrwf55ju%akpm@linux-foundation.org>
In-Reply-To: <20210504183219.a3cc46aee4013d77402276c5@linux-foundation.org>

From: Pavel Tatashin
Subject: mm/gup: return an error on migration failure

When migration failure occurs, we still pin pages, which means that we
may pin CMA movable pages, which should never be the case.

Instead, return an error without pinning pages when migration failure
happens.

There is no need to retry migrating, because migrate_pages() already
retries 10 times.
Link: https://lkml.kernel.org/r/20210215161349.246722-4-pasha.tatashin@soleen.com
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
Cc: Dan Williams
Cc: David Hildenbrand
Cc: David Rientjes
Cc: Ingo Molnar
Cc: Ira Weiny
Cc: James Morris
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Joonsoo Kim
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Michal Hocko
Cc: Mike Kravetz
Cc: Oscar Salvador
Cc: Peter Zijlstra
Cc: Sasha Levin
Cc: Steven Rostedt (VMware)
Cc: Tyler Hicks
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

 mm/gup.c |   17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

--- a/mm/gup.c~mm-gup-return-an-error-on-migration-failure
+++ a/mm/gup.c
@@ -1610,7 +1610,6 @@ static long check_and_migrate_cma_pages(
 {
 	unsigned long i;
 	bool drain_allow = true;
-	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1661,17 +1660,15 @@ check_again:
 			for (i = 0; i < nr_pages; i++)
 				put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
-			/*
-			 * some of the pages failed migration. Do get_user_pages
-			 * without migration.
-			 */
-			migrate_allow = false;
-
+		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
+				    MR_CONTIG_RANGE);
+		if (ret) {
 			if (!list_empty(&cma_page_list))
 				putback_movable_pages(&cma_page_list);
+			return ret > 0 ? -ENOMEM : ret;
 		}
+
 		/*
 		 * We did migrate all the pages, Try to get the page references
 		 * again migrating any new CMA pages which we failed to isolate
@@ -1681,7 +1678,7 @@ check_again:
 					   pages, vmas, NULL,
 					   gup_flags);
 
-		if ((ret > 0) && migrate_allow) {
+		if (ret > 0) {
 			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
_
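
For reference, the caller-visible contract after this change is: a positive
migrate_pages() return (the number of pages left unmigrated) is folded into
-ENOMEM, a negative errno is passed through unchanged, and 0 means all pages
migrated so the pin is retried.  The small userspace C sketch below only
illustrates that mapping; mock_migrate_pages() and check_and_migrate() are
hypothetical stand-ins, not the kernel functions in this patch.

/*
 * Userspace sketch of the error mapping above: a positive "pages not
 * migrated" count becomes -ENOMEM, a negative errno is returned as-is,
 * and 0 means migration succeeded so the caller may retry pinning.
 * mock_migrate_pages() is a hypothetical stand-in for migrate_pages();
 * this is illustrative code, not kernel code.
 */
#include <errno.h>
#include <stdio.h>

static long mock_migrate_pages(long pages_left, long hard_error)
{
	/* migrate_pages()-like result: <0 error, >0 pages not migrated, 0 ok */
	return hard_error ? hard_error : pages_left;
}

static long check_and_migrate(long pages_left, long hard_error)
{
	long ret = mock_migrate_pages(pages_left, hard_error);

	if (ret)
		return ret > 0 ? -ENOMEM : ret;	/* same mapping as the patch */
	return 0;				/* all migrated: retry the pin */
}

int main(void)
{
	printf("all migrated  -> %ld\n", check_and_migrate(0, 0));
	printf("3 pages stuck -> %ld (-ENOMEM)\n", check_and_migrate(3, 0));
	printf("hard error    -> %ld (-EAGAIN)\n", check_and_migrate(0, -EAGAIN));
	return 0;
}

With the old migrate_allow path the function fell through to
__get_user_pages_locked() anyway and could leave CMA pages pinned; with this
change the caller simply sees the error and no pages remain pinned.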