Subject: Re: [PATCH 06/34] drm/i915: convert put_page() to put_user_page*()
From: John Hubbard
To: Joonas Lahtinen, Andrew Morton
Cc: Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen, Ira Weiny,
 Jan Kara, Jason Gunthorpe, Jérôme Glisse, LKML, Jani Nikula, Rodrigo Vivi,
 David Airlie
Date: Fri, 2 Aug 2019 11:48:08 -0700
Message-ID: <7d9a9c57-4322-270b-b636-7214019f87e9@nvidia.com>
In-Reply-To: <156473756254.19842.12384378926183716632@jlahtine-desk.ger.corp.intel.com>
References: <20190802022005.5117-1-jhubbard@nvidia.com>
 <20190802022005.5117-7-jhubbard@nvidia.com>
 <156473756254.19842.12384378926183716632@jlahtine-desk.ger.corp.intel.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

On 8/2/19 2:19 AM, Joonas Lahtinen wrote:
> Quoting john.hubbard@gmail.com (2019-08-02 05:19:37)
>> From: John Hubbard
>>
>> For pages that were retained via get_user_pages*(), release those pages
>> via the new put_user_page*() routines, instead of via put_page() or
>> release_pages().
>>
>> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
>> ("mm: introduce put_user_page*(), placeholder versions").
>>
>> Note that this effectively changes the code's behavior in
>> i915_gem_userptr_put_pages(): it now calls set_page_dirty_lock(),
>> instead of set_page_dirty(). This is probably more accurate.
>
> We've already fixed this in drm-tip, where the current code uses
> set_page_dirty_lock().
>
> This would conflict with our tree. Rodrigo is handling
> drm-intel-next for 5.4, so you guys will want to coordinate how
> to merge.

Hi Joonas, Rodrigo,

First of all, I apologize for the API breakage: put_user_pages_dirty_lock()
now has an additional "dirty" parameter.

In order to deal with the merge problem, I'll drop this patch from my
series, and I'd recommend that the drm-intel-next tree take the following
approach:

1) For now, s/put_page/put_user_page/ in i915_gem_userptr_put_pages(), and
fix up the set_page_dirty() --> set_page_dirty_lock() issue, like this
(based against linux.git):

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 528b61678334..94721cc0093b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -664,10 +664,10 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 
 	for_each_sgt_page(page, sgt_iter, pages) {
 		if (obj->mm.dirty)
-			set_page_dirty(page);
+			set_page_dirty_lock(page);
 
 		mark_page_accessed(page);
-		put_page(page);
+		put_user_page(page);
 	}
 	obj->mm.dirty = false;

That will leave you with your original set_page_dirty_lock() calls, and
everything will work properly.

2) Next cycle, move to the new put_user_pages_dirty_lock(). (A rough
sketch of that conversion is at the bottom of this mail.)

thanks,
-- 
John Hubbard
NVIDIA

> Regards, Joonas
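P.S. For reference, here is a rough, untested sketch of what step 2) could
look like inside i915_gem_userptr_put_pages(). It assumes the
put_user_pages_dirty_lock(pages, npages, make_dirty) signature from this
series, where the "make_dirty" argument folds the set_page_dirty_lock()
call into the page release:

	for_each_sgt_page(page, sgt_iter, pages) {
		mark_page_accessed(page);

		/*
		 * When obj->mm.dirty is true, this marks the page dirty via
		 * set_page_dirty_lock() before releasing the gup pin, so the
		 * separate set_page_dirty_lock() + put_user_page() pair
		 * collapses into a single call.
		 */
		put_user_pages_dirty_lock(&page, 1, obj->mm.dirty);
	}
	obj->mm.dirty = false;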