Subject: Re: [PATCH v6 1/3] mm/gup: add make_dirty arg to put_user_pages_dirty_lock()
From: John Hubbard
To: Ira Weiny
CC: Andrew Morton, Alexander Viro, Björn Töpel, Boaz Harrosh,
    Christoph Hellwig, Daniel Vetter, Dan Williams, Dave Chinner,
    David Airlie, David S. Miller, Ilya Dryomov, Jan Kara,
    Jason Gunthorpe, Jens Axboe, Jérôme Glisse, Johannes Thumshirn,
    Magnus Karlsson, Matthew Wilcox, Miklos Szeredi, Ming Lei,
    Sage Weil, Santosh Shilimkar, Yan Zheng, LKML
Date: Tue, 6 Aug 2019 13:43:42 -0700
Message-ID: <662e3f1e-b63e-ce80-274b-cb407bce6f78@nvidia.com>
In-Reply-To: <20190806174017.GB4748@iweiny-DESK2.sc.intel.com>
References: <20190804214042.4564-1-jhubbard@nvidia.com>
 <20190804214042.4564-2-jhubbard@nvidia.com>
 <20190806174017.GB4748@iweiny-DESK2.sc.intel.com>

On 8/6/19 10:40 AM, Ira Weiny wrote:
> On Sun, Aug 04, 2019 at 02:40:40PM -0700, john.hubbard@gmail.com wrote:
>> From: John Hubbard
>>
>> Provide a more capable variation of put_user_pages_dirty_lock(),
>> and delete put_user_pages_dirty(). This is based on the
>> following:
>>
>> 1. Lots of call sites become simpler if a bool is passed
>>    into put_user_page*(), instead of making the call site
>>    choose which put_user_page*() variant to call.
>>
>> 2. Christoph Hellwig's observation that set_page_dirty_lock()
>>    is usually correct, and set_page_dirty() is usually a
>>    bug, or at least questionable, within a put_user_page*()
>>    calling chain.
>>
>> This leads to the following API choices:
>>
>>     * put_user_pages_dirty_lock(page, npages, make_dirty)
>>
>>     * There is no put_user_pages_dirty(). You have to
>>       hand code that, in the rare case that it's
>>       required.
>>
>> Reviewed-by: Christoph Hellwig
>> Cc: Matthew Wilcox
>> Cc: Jan Kara
>> Cc: Ira Weiny
>> Cc: Jason Gunthorpe
>> Signed-off-by: John Hubbard
>
> I assume this is superseded by the patch in the large series?
>

Actually, it's the other way around (there is a note to that effect in
the admittedly wall-of-text cover letter [1] for the 34-patch series).
However, I'm trying hard to ensure that it doesn't actually matter:

* Patch 1 is identical in the latest version of each patch series.

* I'm reposting the two series together.

...and yes, it might have been better to merge the two patchsets, but
the smaller one is more reviewable. And as a result, Andrew has already
merged it into the akpm tree.
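To illustrate point 1 in the quoted description, here is roughly what
the call-site simplification looks like. This is only a sketch, not
code from the patch itself: the surrounding helper is hypothetical,
and only the put_user_page*() calls follow the API described above.

/*
 * Illustrative sketch (hypothetical helper): releasing a batch of
 * pinned pages, before and after the API change.
 */

/* Before: each call site chooses a put_user_page*() variant. */
static void release_pages_old(struct page **pages,
			      unsigned long npages, bool dirty)
{
	if (dirty)
		put_user_pages_dirty_lock(pages, npages);
	else
		put_user_pages(pages, npages);
}

/* After: pass the decision in as a bool instead. */
static void release_pages_new(struct page **pages,
			      unsigned long npages, bool dirty)
{
	put_user_pages_dirty_lock(pages, npages, dirty);
}

And per the second API choice, the rare caller that really wants plain
set_page_dirty() now has to open-code its own loop, which keeps that
questionable pattern visible at the call site.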
[1] https://lore.kernel.org/r/20190804224915.28669-1-jhubbard@nvidia.com

thanks,
-- 
John Hubbard
NVIDIA