Subject: Re: [PATCHv8 2/2] mm/gup: fix omission of check on FOLL_LONGTERM in gup fast path
To: Pingfan Liu
CC: Ira Weiny, Andrew Morton, Mike Rapoport, Dan Williams, Matthew Wilcox, Aneesh Kumar K.V, Christoph Hellwig, Shuah Khan, Jason Gunthorpe
References: <1584876733-17405-1-git-send-email-kernelfans@gmail.com> <1584876733-17405-3-git-send-email-kernelfans@gmail.com>
From: John Hubbard
Message-ID: <235271bd-cd64-f6d2-bbe1-43de47196ec6@nvidia.com>
Date: Mon, 23 Mar 2020 16:11:54 -0700
In-Reply-To: <1584876733-17405-3-git-send-email-kernelfans@gmail.com>
On 3/22/20 4:32 AM, Pingfan Liu wrote:
> FOLL_LONGTERM is a special case of FOLL_PIN. It marks a pin whose pages
> are going to be handed to hardware and therefore cannot move. Such a pin
> would carve pages out of CMA permanently, so CMA pages should be
> excluded from it.
>
> In the gup slow path, __gup_longterm_locked->check_and_migrate_cma_pages()
> handles FOLL_LONGTERM, but the fast path lacks the corresponding check,
> so CMA pages can leak into long-term pins.
>
> Place a check in try_grab_compound_head() to fix the leak in the fast
> path: if FOLL_LONGTERM is requested on a CMA page, gup falls back to the
> slow path, which migrates the page before pinning it.
>
> A note about the check: all subpages of a huge page share the same
> migrate type, because they are allocated together either from a
> free_list[] or by alloc_contig_range() with param MIGRATE_MOVABLE.
> So it is enough to check a single subpage with
> is_migrate_cma_page(subpage).
>
> Signed-off-by: Pingfan Liu
> Reviewed-by: Christoph Hellwig
> Reviewed-by: Jason Gunthorpe
> Cc: Ira Weiny
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Dan Williams
> Cc: Matthew Wilcox
> Cc: John Hubbard
> Cc: "Aneesh Kumar K.V"
> Cc: Christoph Hellwig
> Cc: Shuah Khan
> Cc: Jason Gunthorpe
> To: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  mm/gup.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 02a95b1..3fe75c4 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -89,6 +89,14 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
> 	int orig_refs = refs;
>
> 	/*
> +	 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
> +	 * path, so fail and let the caller fall back to the slow path.
> +	 */
> +	if (unlikely(flags & FOLL_LONGTERM) &&
> +	    is_migrate_cma_page(page))
> +		return NULL;
> +
> +	/*

Reviewed-by: John Hubbard

thanks,
-- 
John Hubbard
NVIDIA

> 	 * When pinning a compound page of order > 1 (which is what
> 	 * hpage_pincount_available() checks for), use an exact count to
> 	 * track it, via hpage_pincount_add/_sub().
> -- 
> 2.7.5
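[Not part of the thread above: a minimal userspace sketch of the fallback pattern the commit message describes. The names try_grab_page_fast(), grab_page_slow(), and gup_one_page(), and the simplified struct page, are illustrative stand-ins, not the real kernel APIs; the actual fast path is lockless and the slow path migration is done by check_and_migrate_cma_pages().]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustration only: simplified stand-ins for kernel definitions. */
#define FOLL_LONGTERM 0x1u

struct page {
	bool is_cma;    /* stand-in for is_migrate_cma_page() */
	int refcount;
};

/*
 * Models the patched try_grab_compound_head(): refuse the grab in the
 * fast path when a long-term pin would land on a CMA page, returning
 * NULL so the caller falls back to the slow path.
 */
static struct page *try_grab_page_fast(struct page *page, unsigned int flags)
{
	if ((flags & FOLL_LONGTERM) && page->is_cma)
		return NULL;
	page->refcount++;
	return page;
}

/*
 * Models the slow path, which can migrate the page out of CMA before
 * pinning (check_and_migrate_cma_pages() in the real code).
 */
static struct page *grab_page_slow(struct page *page, unsigned int flags)
{
	if ((flags & FOLL_LONGTERM) && page->is_cma)
		page->is_cma = false;   /* "migrated" off the CMA region */
	page->refcount++;
	return page;
}

/* Caller pattern: try the fast path first, fall back on NULL. */
static struct page *gup_one_page(struct page *page, unsigned int flags)
{
	struct page *got = try_grab_page_fast(page, flags);

	return got ? got : grab_page_slow(page, flags);
}
```

The point of returning NULL rather than migrating in place is that the fast path runs with interrupts disabled and cannot sleep, while migration can; deferring to the slow path keeps the fast path lockless.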