Subject: Re: [PATCH hmm v2 2/5] mm/hmm: make hmm_range_fault return 0 or -1
From: John Hubbard
To: Jason Gunthorpe, Ralph Campbell
Cc: Alex Deucher, Ben Skeggs, Christian König, "David (ChunMing) Zhou",
    Felix Kuehling, Christoph Hellwig, Jérôme Glisse,
    Niranjana Vishwanathapura, "Yang, Philip"
Date: Mon, 4 May 2020 17:20:58 -0700
Message-ID: <9cf9f4f0-58b7-1992-6c6e-eed226ba42c0@nvidia.com>
In-Reply-To: <2-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com>
References: <2-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com>
On 2020-05-01 11:20, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> hmm_vma_walk->last is supposed to be updated after every write to the
> pfns, so that it can be returned by hmm_range_fault(). However, this is
> not done consistently. Fortunately nothing checks the return code of
> hmm_range_fault() for anything other than error.
>
> More importantly, last must be set before returning -EBUSY as it is used
> to prevent reading an output pfn as input flags when the loop restarts.
>
> For clarity and simplicity make hmm_range_fault() return 0 or -ERRNO. Only
> set last when returning -EBUSY.

Yes, this is also a nice simplification.

> ...
> @@ -590,10 +580,13 @@ long hmm_range_fault(struct hmm_range *range)
>  			return -EBUSY;
>  		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
>  				      &hmm_walk_ops, &hmm_vma_walk);
> +		/*
> +		 * When -EBUSY is returned the loop restarts with
> +		 * hmm_vma_walk.last set to an address that has not been stored
> +		 * in pfns. All entries < last in the pfn array are set to their
> +		 * output, and all >= are still at their input values.
> +		 */

I'm glad you added that comment. This is much easier to figure out with
that in place. After poking around this patch and eventually understanding
the .last handling, I wondered if you might like this slightly tweaked
wording instead:

		/*
		 * Each of the hmm_walk_ops routines returns -EBUSY if and only
		 * if hmm_vma_walk.last has been set to an address that has not
		 * yet been stored in pfns. All entries < last in the pfn array
		 * are set to their output, and all >= are still at their input
		 * values.
		 */

Either way,

    Reviewed-by: John Hubbard

thanks,
--
John Hubbard
NVIDIA

>  	} while (ret == -EBUSY);
> -
> -	if (ret)
> -		return ret;
> -	return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
> +	return ret;
>  }
>  EXPORT_SYMBOL(hmm_range_fault);
>
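PS: for anyone trying to picture the caller side after this change:
success is now simply ret == 0, and the number of filled entries is
implied by the range itself, (end - start) >> PAGE_SHIFT, rather than
derived from the return value. A quick sketch of a caller, following the
retry pattern from Documentation/vm/hmm.rst; this is illustrative only,
not part of the patch: example_fault() and the driver lock placeholders
are made up, and the hmm_range flag/values setup is omitted.

static long example_fault(struct mmu_interval_notifier *notifier,
			  unsigned long start, unsigned long end,
			  uint64_t *pfns)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.pfns = pfns,
		/* .flags / .values / .default_flags setup omitted */
	};
	struct mm_struct *mm = notifier->mm;
	long ret;

again:
	range.notifier_seq = mmu_interval_read_begin(notifier);
	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(&range);
	up_read(&mm->mmap_sem);
	if (ret == -EBUSY)
		goto again;	/* an invalidation collided; just retry */
	if (ret)
		return ret;

	take_driver_page_table_lock();	/* placeholder for a real lock */
	if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
		release_driver_page_table_lock();
		goto again;
	}
	/* ... program device page tables from pfns[] ... */
	release_driver_page_table_lock();
	return 0;
}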