From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 08/19] mm/process_vm_access: set FOLL_PIN via
 pin_user_pages_remote()
To: Ira Weiny
CC: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
 Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
 David Airlie, "David S . Miller", Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jonathan Corbet, Jérôme Glisse, Magnus Karlsson, Mauro Carvalho Chehab,
 Michael Ellerman, Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
 Vlastimil Babka, LKML
References: <20191030224930.3990755-1-jhubbard@nvidia.com>
 <20191030224930.3990755-9-jhubbard@nvidia.com>
 <20191031233519.GH14771@iweiny-DESK2.sc.intel.com>
From: John Hubbard
Message-ID: <7e79d9b5-772e-3628-4a60-65efc2f490c5@nvidia.com>
Date: Thu, 31 Oct 2019 16:46:45 -0700
In-Reply-To: <20191031233519.GH14771@iweiny-DESK2.sc.intel.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

On 10/31/19 4:35 PM, Ira Weiny wrote:
> On Wed, Oct 30, 2019 at 03:49:19PM -0700,
> John Hubbard wrote:
>> Convert process_vm_access to use the new pin_user_pages_remote()
>> call, which sets FOLL_PIN. Setting FOLL_PIN is now required for
>> code that requires tracking of pinned pages.
>>
>> Also, release the pages via put_user_page*().
>>
>> Also, rename "pages" to "pinned_pages", as this makes for
>> easier reading of process_vm_rw_single_vec().
>
> Ok... but it made review a bit harder...
>

Yes, sorry about that. After dealing with "pages means struct page *[]"
for all this time, having an "int pages" just was a step too far for me
here. :) Thanks for working through it.

thanks,
John Hubbard
NVIDIA

> Reviewed-by: Ira Weiny
>
>>
>> Signed-off-by: John Hubbard
>> ---
>>  mm/process_vm_access.c | 28 +++++++++++++++-------------
>>  1 file changed, 15 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
>> index 357aa7bef6c0..fd20ab675b85 100644
>> --- a/mm/process_vm_access.c
>> +++ b/mm/process_vm_access.c
>> @@ -42,12 +42,11 @@ static int process_vm_rw_pages(struct page **pages,
>>  		if (copy > len)
>>  			copy = len;
>>
>> -		if (vm_write) {
>> +		if (vm_write)
>>  			copied = copy_page_from_iter(page, offset, copy, iter);
>> -			set_page_dirty_lock(page);
>> -		} else {
>> +		else
>>  			copied = copy_page_to_iter(page, offset, copy, iter);
>> -		}
>> +
>>  		len -= copied;
>>  		if (copied < copy && iov_iter_count(iter))
>>  			return -EFAULT;
>> @@ -96,7 +95,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  		flags |= FOLL_WRITE;
>>
>>  	while (!rc && nr_pages && iov_iter_count(iter)) {
>> -		int pages = min(nr_pages, max_pages_per_loop);
>> +		int pinned_pages = min(nr_pages, max_pages_per_loop);
>>  		int locked = 1;
>>  		size_t bytes;
>>
>> @@ -106,14 +105,15 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  		 * current/current->mm
>>  		 */
>>  		down_read(&mm->mmap_sem);
>> -		pages = get_user_pages_remote(task, mm, pa, pages, flags,
>> -					      process_pages, NULL, &locked);
>> +		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
>> +						     flags, process_pages,
>> +						     NULL, &locked);
>>  		if (locked)
>>  			up_read(&mm->mmap_sem);
>> -		if (pages <= 0)
>> +		if (pinned_pages <= 0)
>>  			return -EFAULT;
>>
>> -		bytes = pages * PAGE_SIZE - start_offset;
>> +		bytes = pinned_pages * PAGE_SIZE - start_offset;
>>  		if (bytes > len)
>>  			bytes = len;
>>
>> @@ -122,10 +122,12 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  						 vm_write);
>>  		len -= bytes;
>>  		start_offset = 0;
>> -		nr_pages -= pages;
>> -		pa += pages * PAGE_SIZE;
>> -		while (pages)
>> -			put_page(process_pages[--pages]);
>> +		nr_pages -= pinned_pages;
>> +		pa += pinned_pages * PAGE_SIZE;
>> +
>> +		/* If vm_write is set, the pages need to be made dirty: */
>> +		put_user_pages_dirty_lock(process_pages, pinned_pages,
>> +					  vm_write);
>>  	}
>>
>>  	return rc;
>> --
>> 2.23.0
>>