From: Jason Gunthorpe
To: linux-mm@kvack.org, Ralph Campbell
Cc: Alex Deucher, amd-gfx@lists.freedesktop.org, Ben Skeggs,
	Christian König, "David (ChunMing) Zhou",
	dri-devel@lists.freedesktop.org, "Kuehling, Felix",
	Christoph Hellwig, intel-gfx@lists.freedesktop.org,
	Jérôme Glisse, John Hubbard, linux-kernel@vger.kernel.org,
	Niranjana Vishwanathapura, nouveau@lists.freedesktop.org
Subject: [PATCH hmm 2/5] mm/hmm: make hmm_range_fault return 0 or -1
Date: Tue, 21 Apr 2020 21:21:43 -0300
Message-Id: <2-v1-4eb72686de3c+5062-hmm_no_flags_jgg@mellanox.com>
In-Reply-To: <0-v1-4eb72686de3c+5062-hmm_no_flags_jgg@mellanox.com>

From: Jason Gunthorpe

hmm_vma_walk->last is supposed to be updated after every write to the
pfns, so that it can be returned by hmm_range_fault(). However, this is
not done consistently. Fortunately nothing checks the return code of
hmm_range_fault() for anything other than error.

More importantly, last must be set before returning -EBUSY, as it is used
to prevent reading an output pfn as input flags when the loop restarts.

For clarity and simplicity, make hmm_range_fault() return 0 or -ERRNO.
Only set last when returning -EBUSY.
Signed-off-by: Jason Gunthorpe
---
 Documentation/vm/hmm.rst                |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  4 ++--
 drivers/gpu/drm/nouveau/nouveau_svm.c   |  6 +++---
 include/linux/hmm.h                     |  2 +-
 mm/hmm.c                                | 25 +++++++++----------------
 5 files changed, 16 insertions(+), 23 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 4e3e9362afeb10..9924f2caa0184c 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -161,7 +161,7 @@ device must complete the update before the driver callback returns.
 When the device driver wants to populate a range of virtual addresses, it
 can use::
 
-      long hmm_range_fault(struct hmm_range *range);
+      int hmm_range_fault(struct hmm_range *range);
 
 It will trigger a page fault on missing or read-only entries if write access
 is requested (see below). Page faults use the generic mm page fault code path just
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 6309ff72bd7876..efc1329a019127 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -852,12 +852,12 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	down_read(&mm->mmap_sem);
 	r = hmm_range_fault(range);
 	up_read(&mm->mmap_sem);
-	if (unlikely(r <= 0)) {
+	if (unlikely(r)) {
 		/*
 		 * FIXME: This timeout should encompass the retry from
 		 * mmu_interval_read_retry() as well.
 		 */
-		if ((r == 0 || r == -EBUSY) && !time_after(jiffies, timeout))
+		if ((r == -EBUSY) && !time_after(jiffies, timeout))
 			goto retry;
 		goto out_free_pfns;
 	}
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 645fedd77e21b4..c68e9317cf0740 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -536,7 +536,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
-	long ret;
+	int ret;
 
 	while (true) {
 		if (time_after(jiffies, timeout))
@@ -548,8 +548,8 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		down_read(&mm->mmap_sem);
 		ret = hmm_range_fault(&range);
 		up_read(&mm->mmap_sem);
-		if (ret <= 0) {
-			if (ret == 0 || ret == -EBUSY)
+		if (ret) {
+			if (ret == -EBUSY)
 				continue;
 			return ret;
 		}
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 7475051100c782..0df27dd03d53d7 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -120,7 +120,7 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *range,
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
-long hmm_range_fault(struct hmm_range *range);
+int hmm_range_fault(struct hmm_range *range);
 
 /*
  * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
diff --git a/mm/hmm.c b/mm/hmm.c
index 280585833adfc1..4c7c396655b528 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -174,7 +174,6 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
 	}
 	if (required_fault)
 		return hmm_vma_fault(addr, end, required_fault, walk);
-	hmm_vma_walk->last = addr;
 	return hmm_pfns_fill(addr, end, range, HMM_PFN_NONE);
 }
 
@@ -207,7 +206,6 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-	hmm_vma_walk->last = end;
 	return 0;
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -386,13 +384,10 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, pfns);
 		if (r) {
 			/* hmm_vma_handle_pte() did pte_unmap() */
-			hmm_vma_walk->last = addr;
 			return r;
 		}
 	}
 	pte_unmap(ptep - 1);
-
-	hmm_vma_walk->last = addr;
 	return 0;
 }
 
@@ -455,7 +450,6 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		for (i = 0; i < npages; ++i, ++pfn)
 			pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-		hmm_vma_walk->last = end;
 		goto out_unlock;
 	}
 
@@ -500,7 +494,6 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
 		range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-	hmm_vma_walk->last = end;
 	spin_unlock(ptl);
 	return 0;
 }
@@ -537,7 +530,6 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 		return -EFAULT;
 
 	hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
-	hmm_vma_walk->last = end;
 
 	/* Skip this vma and continue processing the next vma. */
 	return 1;
@@ -555,9 +547,7 @@ static const struct mm_walk_ops hmm_walk_ops = {
  * hmm_range_fault - try to fault some address in a virtual address range
  * @range: argument structure
  *
- * Return: the number of valid pages in range->pfns[] (from range start
- * address), which may be zero. On error one of the following status codes
- * can be returned:
+ * Return: 0 or -ERRNO with one of the following status codes:
  *
  * -EINVAL: Invalid arguments or mm or virtual address is in an invalid vma
  *          (e.g., device file vma).
@@ -572,7 +562,7 @@ static const struct mm_walk_ops hmm_walk_ops = {
  * This is similar to get_user_pages(), except that it can read the page tables
  * without mutating them (ie causing faults).
  */
-long hmm_range_fault(struct hmm_range *range)
+int hmm_range_fault(struct hmm_range *range)
 {
 	struct hmm_vma_walk hmm_vma_walk = {
 		.range = range,
@@ -590,10 +580,13 @@ long hmm_range_fault(struct hmm_range *range)
 			return -EBUSY;
 		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
 				      &hmm_walk_ops, &hmm_vma_walk);
+		/*
+		 * When -EBUSY is returned the loop restarts with
+		 * hmm_vma_walk.last set to an address that has not been stored
+		 * in pfns. All entries < last in the pfn array are set to their
+		 * output, and all >= are still at their input values.
+		 */
 	} while (ret == -EBUSY);
-
-	if (ret)
-		return ret;
-	return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
+	return ret;
 }
 EXPORT_SYMBOL(hmm_range_fault);
-- 
2.26.0