From: John Hubbard <jhubbard@nvidia.com>
To: zhong jiang <zhongjiang@huawei.com>, <akpm@linux-foundation.org>
Cc: <vbabka@suse.cz>, <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: Unsigned 'nr_pages' always larger than zero
Date: Thu, 17 Oct 2019 13:42:20 -0700
Message-ID: <60492b3f-c933-b54c-8551-18ebb813d7d1@nvidia.com>
In-Reply-To: <1571282386-65076-1-git-send-email-zhongjiang@huawei.com>
On 10/16/19 8:19 PM, zhong jiang wrote:
> With the help of unsigned_lesser_than_zero.cocci, the unsigned 'nr_pages'
> is compared with zero. And __get_user_pages_locked() will return a long value.
>
> The patch use a new local variable to store the return value of
> __get_user_pages_locked(). Then use it to compare with zero.
Hi Zhong,
The above reads more like notes to yourself. Those are good to have, but
it's not yet a complete commit description: it talks about how you got
here, which is only part of the story. A higher-level summary is better;
let the code itself cover the details.
Like this:
First line (subject):
mm/gup: allow CMA migration to propagate errors back to caller
Commit description:
check_and_migrate_cma_pages() was recording the result of
__get_user_pages_locked() in an unsigned "nr_pages" variable. Because
__get_user_pages_locked() returns a signed value that can include
negative errno values, this had the effect of hiding errors.
Change check_and_migrate_cma_pages() implementation so that it
uses a signed variable instead, and propagates the results back
to the caller just as other gup internal functions do.
This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---
> mm/gup.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..1fe0ceb 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> bool drain_allow = true;
> bool migrate_allow = true;
> LIST_HEAD(cma_page_list);
> + long ret;
Ira pointed out that this needs initialization, see below.
>
> check_again:
> for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> * again migrating any new CMA pages which we failed to isolate
> * earlier.
> */
> - nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> + ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
> pages, vmas, NULL,
> gup_flags);
>
> - if ((nr_pages > 0) && migrate_allow) {
> + nr_pages = ret;
Although technically correct, it feels odd to see the assignment from
signed to unsigned done *before* checking for > 0. And Ira is hinting at
the same thing when he asks if we can return early here. See below...
> + if ((ret > 0) && migrate_allow) {
> drain_allow = true;
> goto check_again;
> }
> }
>
> - return nr_pages;
> + return ret;
> }
> #else
> static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
So, overall, I'd recommend this, instead:
diff --git a/mm/gup.c b/mm/gup.c
index 23a9f9c9d377..72bc027037fa 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
bool drain_allow = true;
bool migrate_allow = true;
LIST_HEAD(cma_page_list);
+ long ret = nr_pages;
check_again:
for (i = 0; i < nr_pages;) {
@@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
* again migrating any new CMA pages which we failed to isolate
* earlier.
*/
- nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
+ ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
pages, vmas, NULL,
gup_flags);
- if ((nr_pages > 0) && migrate_allow) {
+ if ((ret > 0) && migrate_allow) {
+ nr_pages = ret;
drain_allow = true;
goto check_again;
}
}
- return nr_pages;
+ return ret;
}
#else
static long check_and_migrate_cma_pages(struct task_struct *tsk,
thanks,
John Hubbard
NVIDIA