From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <5DA92026.7060402@huawei.com>
Date: Fri, 18 Oct 2019 10:15:02 +0800
From: zhong jiang
To: John Hubbard
Subject: Re: [PATCH] mm: Unsigned 'nr_pages' always larger than zero
References: <1571282386-65076-1-git-send-email-zhongjiang@huawei.com>
 <60492b3f-c933-b54c-8551-18ebb813d7d1@nvidia.com>
In-Reply-To: <60492b3f-c933-b54c-8551-18ebb813d7d1@nvidia.com>
Content-Type: text/plain; charset="UTF-8"

On 2019/10/18 4:42, John Hubbard wrote:
> On 10/16/19 8:19 PM, zhong jiang wrote:
>> With the help of unsigned_lesser_than_zero.cocci: the unsigned 'nr_pages'
>> is compared with zero, while __get_user_pages_locked() returns a signed
>> long value.
>>
>> The patch uses a new local variable to store the return value of
>> __get_user_pages_locked(), and then uses that variable in the comparison
>> with zero.
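
(As an aside, here is a minimal, self-contained sketch of the bug class
that unsigned_lesser_than_zero.cocci flags. The names are illustrative
stand-ins, not the real gup code:

#include <stdio.h>

/* Hypothetical stand-in for __get_user_pages_locked(): fails. */
static long fake_gup(void)
{
	return -14;	/* -EFAULT, a negative errno */
}

int main(void)
{
	unsigned long nr_pages;

	nr_pages = fake_gup();	/* -14 wraps to a huge positive value */
	if (nr_pages > 0)	/* always true: the error is hidden */
		printf("looks like success: %lu pages\n", nr_pages);
	return 0;
}

Storing the signed return value in an unsigned variable makes the error
path unreachable, which is exactly what this patch fixes.)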
> Hi Zhong,
>
> The above are actually more like notes to yourself, and while those are
> good to have, it's not yet a perfect commit description: it talks
> about how you got here, which is only part of the story. A higher level
> summary is better; let the code itself cover the details.
>
> Like this:
>
> First line (subject):
>
> mm/gup: allow CMA migration to propagate errors back to caller
>
> Commit description:
>
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change the check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
>> Suggested-by: Andrew Morton
>> Signed-off-by: zhong jiang
>> ---
>>  mm/gup.c | 8 +++++---
>>  1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 8f236a3..1fe0ceb 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>  	bool drain_allow = true;
>>  	bool migrate_allow = true;
>>  	LIST_HEAD(cma_page_list);
>> +	long ret;
>
> Ira pointed out that this needs initialization, see below.

I missed that. Thanks for pointing it out.

>>
>>  check_again:
>>  	for (i = 0; i < nr_pages;) {
>> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>  		 * again migrating any new CMA pages which we failed to isolate
>>  		 * earlier.
>>  		 */
>> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
>> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>>  						   pages, vmas, NULL,
>>  						   gup_flags);
>>
>> -		if ((nr_pages > 0) && migrate_allow) {
>> +		nr_pages = ret;
>
> Although technically correct, it makes me feel odd to see the assignment
> done from signed to unsigned, *before* checking for > 0. And Ira is hinting
> at the same thing, when he asks if we can return early here. See below...

At first I realized the same thing, but assigning the value from signed to
unsigned before checking for > 0 does not matter when we can return early
here, because we only ever return ret. Of course, the assignment is
meaningless when we return early, so it is indeed better to do as you
suggested. :-(
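
(To double-check my understanding of the pattern you are recommending,
here is a minimal standalone sketch -- fake_gup() is a hypothetical
stand-in for __get_user_pages_locked(), not the real code:

#include <stdio.h>

/* Hypothetical stand-in: fails with a negative errno. */
static long fake_gup(void)
{
	return -12;	/* -ENOMEM */
}

static long pin_pages(unsigned long nr_pages, int need_migrate)
{
	long ret = nr_pages;	/* initialized, as Ira pointed out */

	if (need_migrate)
		ret = fake_gup();	/* may fail with a negative errno */

	if (ret > 0) {
		nr_pages = ret;	/* copy back only a known-positive count */
		/* ... the retry/migration logic goes here in the real code ... */
	}

	return ret;	/* a negative errno propagates to the caller */
}

int main(void)
{
	printf("%ld\n", pin_pages(4, 1));	/* prints -12 */
	printf("%ld\n", pin_pages(4, 0));	/* prints 4 */
	return 0;
}

So the unsigned count is only updated from a value already checked to be
positive, and errors flow back to the caller unchanged.)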
>> +		if ((ret > 0) && migrate_allow) {
>>  			drain_allow = true;
>>  			goto check_again;
>>  		}
>>  	}
>>
>> -	return nr_pages;
>> +	return ret;
>>  }
>>  #else
>>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>
> So, overall, I'd recommend this, instead:
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 23a9f9c9d377..72bc027037fa 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
> +	long ret = nr_pages;
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		 * again migrating any new CMA pages which we failed to isolate
>  		 * earlier.
>  		 */
> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>  					   pages, vmas, NULL,
>  					   gup_flags);
>
> -		if ((nr_pages > 0) && migrate_allow) {
> +		if ((ret > 0) && migrate_allow) {
> +			nr_pages = ret;
>  			drain_allow = true;
>  			goto check_again;
>  		}
>  	}
>
> -	return nr_pages;
> +	return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
> thanks,
>
> John Hubbard
> NVIDIA

Thanks for your suggestion. I will repost it as you have proposed.

Sincerely,
zhong jiang