From: Dan Williams
Date: Mon, 12 Nov 2018 08:14:46 -0800
Subject: Re: [PATCH v2 1/6] mm/gup: finish consolidating error handling
To: Keith Busch
Cc: John Hubbard, Linux MM, Andrew Morton, Linux Kernel Mailing List,
    linux-rdma, linux-fsdevel, John Hubbard, "Kirill A. Shutemov",
    Dave Hansen

On Mon, Nov 12, 2018 at 7:45 AM Keith Busch wrote:
>
> On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubbard@gmail.com wrote:
> > From: John Hubbard
> >
> > An upcoming patch wants to be able to operate on each page that
> > get_user_pages has retrieved. In order to do that, it's best to
> > have a common exit point from the routine. Most of this has been
> > taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while
> > pinning pages"), but there was one case remaining.
> >
> > Also, there was still an unnecessary shadow declaration (with a
> > different type) of the "ret" variable, which this commit removes.
> >
> > Cc: Keith Busch
> > Cc: Dan Williams
> > Cc: Kirill A. Shutemov
> > Cc: Dave Hansen
> > Signed-off-by: John Hubbard
> > ---
> >  mm/gup.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index f76e77a2d34b..55a41dee0340 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
> >  		if (!vma || start >= vma->vm_end) {
> >  			vma = find_extend_vma(mm, start);
> >  			if (!vma && in_gate_area(mm, start)) {
> > -				int ret;
> >  				ret = get_gate_page(mm, start & PAGE_MASK,
> >  						gup_flags, &vma,
> >  						pages ? &pages[i] : NULL);
> >  				if (ret)
> > -					return i ? : ret;
> > +					goto out;
> >  				ctx.page_mask = 0;
> >  				goto next_page;
> >  			}
>
> This also fixes a potentially leaked dev_pagemap reference count if a
> failure occurs when an iteration crosses a vma boundary. I don't think
> it's normal to have different vmas on a user's mapped zone device memory,
> but good to fix anyway.

Does not sound abnormal to me; we should promote this as a fix for the
current cycle with an updated changelog.
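For reference, here is a minimal, self-contained userspace sketch of the
single-exit-point pattern the patch completes in __get_user_pages(). The
names (process_pages, cached_resource, npages) are illustrative stand-ins,
not kernel APIs, and free() plays the role of the put_dev_pagemap() cleanup
done at the real "out:" label; it only shows why "goto out" cannot skip the
shared cleanup the way an early "return i ? : ret" could.

/*
 * Sketch of the "single exit point" cleanup pattern; names are
 * illustrative, not kernel APIs.
 */
#include <stdio.h>
#include <stdlib.h>

static long process_pages(long npages)
{
	long i, ret = 0;
	void *cached_resource = malloc(64);	/* analogue of ctx.pgmap */

	if (!cached_resource)
		return -12;			/* like -ENOMEM */

	for (i = 0; i < npages; i++) {
		if (i == 3 && npages > 8) {	/* simulated mid-loop failure */
			ret = -14;		/* like -EFAULT */
			goto out;		/* an early return here would
						 * skip the shared cleanup */
		}
	}
out:
	free(cached_resource);			/* shared cleanup always runs */
	return i ? i : ret;			/* partial progress wins over error */
}

int main(void)
{
	printf("%ld\n", process_pages(5));	/* 5: all iterations completed */
	printf("%ld\n", process_pages(10));	/* 3: partial progress reported */
	return 0;
}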