From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2019 11:25:17 -0700
From: Ira Weiny
To: zhong jiang
Cc: akpm@linux-foundation.org, jhubbard@nvidia.com, vbabka@suse.cz,
	linux-mm@kvack.org
Subject: Re: [PATCH] mm/gup: allow CMA migration to propagate errors back to caller
Message-ID: <20191021182517.GB23024@iweiny-DESK2.sc.intel.com>
References: <1571671030-58029-1-git-send-email-zhongjiang@huawei.com>
In-Reply-To: <1571671030-58029-1-git-send-email-zhongjiang@huawei.com>

On Mon, Oct 21, 2019 at 11:17:10PM +0800, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
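
For anyone tracing this class of bug: the assignment silently converts
the signed return value to unsigned, so a negative errno can never be
observed as negative afterwards. A minimal userspace sketch, with a
hypothetical fake_gup() standing in for __get_user_pages_locked() (not
the kernel code itself):

	#include <stdio.h>

	static long fake_gup(void)
	{
		return -12;	/* stand-in for -ENOMEM */
	}

	int main(void)
	{
		unsigned long nr_pages = fake_gup();

		/* -12 has wrapped around to a huge positive value */
		if (nr_pages > 0)
			printf("error hidden, nr_pages=%lu\n", nr_pages);
		return 0;
	}

The "nr_pages > 0" test passes and execution proceeds as if pages had
been pinned, which is exactly the failure mode described above.
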
>
> Change the check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: John Hubbard
> Signed-off-by: zhong jiang

Reviewed-by: Ira Weiny

> ---
>  mm/gup.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
> +	long ret = nr_pages;
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		 * again migrating any new CMA pages which we failed to isolate
>  		 * earlier.
>  		 */
> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>  						   pages, vmas, NULL,
>  						   gup_flags);
>
> -		if ((nr_pages > 0) && migrate_allow) {
> +		if ((ret > 0) && migrate_allow) {
> +			nr_pages = ret;
>  			drain_allow = true;
>  			goto check_again;
>  		}
>  	}
>
> -	return nr_pages;
> +	return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
> --
> 1.7.12.4
>
>
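The fixed shape, reduced to a self-contained sketch for comparison
(same hypothetical fake_gup() as above; not the kernel code itself):

	#include <stdio.h>

	static long fake_gup(void)
	{
		return -12;	/* stand-in for -ENOMEM */
	}

	int main(void)
	{
		long ret = fake_gup();

		if (ret > 0) {
			/* safe: ret is known positive before conversion */
			unsigned long nr_pages = ret;
			printf("pinned %lu pages\n", nr_pages);
		} else {
			/* the errno is still visible here */
			printf("propagating error %ld\n", ret);
		}
		return 0;
	}

With ret declared as long, the error survives to the return statement,
so callers of check_and_migrate_cma_pages() can act on it.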