From: Jason Gunthorpe <jgg@ziepe.ca>
To: linux-mm@kvack.org, Jerome Glisse <jglisse@redhat.com>, Ralph Campbell <rcampbell@nvidia.com>, John Hubbard <jhubbard@nvidia.com>, Felix.Kuehling@amd.com
Cc: "Juergen Gross" <jgross@suse.com>, "David Zhou" <David1.Zhou@amd.com>, "Mike Marciniszyn" <mike.marciniszyn@intel.com>, "Stefano Stabellini" <sstabellini@kernel.org>, "Philip Yang" <Philip.Yang@amd.com>, "Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>, linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, "Dennis Dalessandro" <dennis.dalessandro@intel.com>, amd-gfx@lists.freedesktop.org, "Christoph Hellwig" <hch@infradead.org>, "Jason Gunthorpe" <jgg@mellanox.com>, dri-devel@lists.freedesktop.org, "Alex Deucher" <alexander.deucher@amd.com>, xen-devel@lists.xenproject.org, "Boris Ostrovsky" <boris.ostrovsky@oracle.com>, "Petr Cvek" <petrcvekcz@gmail.com>, "Christian König" <christian.koenig@amd.com>, "Ben Skeggs" <bskeggs@redhat.com>
Subject: [PATCH v3 10/14] drm/amdgpu: Call find_vma under mmap_sem
Date: Tue, 12 Nov 2019 16:22:27 -0400
Message-ID: <20191112202231.3856-11-jgg@ziepe.ca>
In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca>

From: Jason Gunthorpe <jgg@mellanox.com>

find_vma() must be called under the mmap_sem, so reorganize this code
to do the vma check after taking the lock. Further, fix the unlocked
use of struct task_struct's mm: use the mm from the hmm_mirror instead,
which holds an active mmgrab(). That mmgrab() reference must also be
converted to an mmget() before acquiring mmap_sem or calling
find_vma().

Fixes: 66c45500bfdc ("drm/amdgpu: use new HMM APIs and helpers")
Fixes: 0919195f2b0d ("drm/amdgpu: Enable amdgpu_ttm_tt_get_user_pages in worker threads")
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Philip Yang <Philip.Yang@amd.com>
Tested-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 37 ++++++++++++++-----------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index dff41d0a85fe96..c0e41f1f0c2365 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -35,6 +35,7 @@
 #include <linux/hmm.h>
 #include <linux/pagemap.h>
 #include <linux/sched/task.h>
+#include <linux/sched/mm.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/swap.h>
@@ -788,7 +789,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	struct hmm_mirror *mirror = bo->mn ? &bo->mn->mirror : NULL;
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	struct amdgpu_ttm_tt *gtt = (void *)ttm;
-	struct mm_struct *mm = gtt->usertask->mm;
+	struct mm_struct *mm;
 	unsigned long start = gtt->userptr;
 	struct vm_area_struct *vma;
 	struct hmm_range *range;
@@ -796,25 +797,14 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	uint64_t *pfns;
 	int r = 0;
 
-	if (!mm) /* Happens during process shutdown */
-		return -ESRCH;
-
 	if (unlikely(!mirror)) {
 		DRM_DEBUG_DRIVER("Failed to get hmm_mirror\n");
-		r = -EFAULT;
-		goto out;
+		return -EFAULT;
 	}
 
-	vma = find_vma(mm, start);
-	if (unlikely(!vma || start < vma->vm_start)) {
-		r = -EFAULT;
-		goto out;
-	}
-	if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) &&
-		vma->vm_file)) {
-		r = -EPERM;
-		goto out;
-	}
+	mm = mirror->hmm->mmu_notifier.mm;
+	if (!mmget_not_zero(mm)) /* Happens during process shutdown */
+		return -ESRCH;
 
 	range = kzalloc(sizeof(*range), GFP_KERNEL);
 	if (unlikely(!range)) {
@@ -847,6 +837,17 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 		hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT);
 
 	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, start);
+	if (unlikely(!vma || start < vma->vm_start)) {
+		r = -EFAULT;
+		goto out_unlock;
+	}
+	if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) &&
+		vma->vm_file)) {
+		r = -EPERM;
+		goto out_unlock;
+	}
+
 	r = hmm_range_fault(range, 0);
 	up_read(&mm->mmap_sem);
 
@@ -865,15 +866,19 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	}
 
 	gtt->range = range;
+	mmput(mm);
 
 	return 0;
 
+out_unlock:
+	up_read(&mm->mmap_sem);
 out_free_pfns:
 	hmm_range_unregister(range);
 	kvfree(pfns);
 out_free_ranges:
 	kfree(range);
 out:
+	mmput(mm);
 	return r;
 }
 
-- 
2.24.0
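
For readers less familiar with the mm refcounting involved, the sketch below illustrates the ordering the patch enforces. It is not part of the patch and lookup_user_vma() is a hypothetical helper name: the mmgrab()-style reference held by the notifier only keeps the mm_struct allocated, so it must be promoted with mmget_not_zero() before the address space can be walked, and find_vma() is only valid while mmap_sem is held for read.

/*
 * Illustrative sketch only -- lookup_user_vma() is a made-up helper,
 * not an amdgpu function.  It shows the ordering the patch establishes:
 * mmget before mmap_sem, find_vma under mmap_sem, mmput on every exit.
 */
#include <linux/mm.h>
#include <linux/sched/mm.h>	/* mmget_not_zero(), mmput() */

static int lookup_user_vma(struct mm_struct *mm, unsigned long start)
{
	struct vm_area_struct *vma;
	int r = 0;

	/* Promote the notifier's mmgrab() reference to a full mmget(). */
	if (!mmget_not_zero(mm))	/* the process is exiting */
		return -ESRCH;

	down_read(&mm->mmap_sem);	/* renamed mmap_lock in v5.8+ */

	/* The returned vma is only stable while mmap_sem is held. */
	vma = find_vma(mm, start);
	if (!vma || start < vma->vm_start)
		r = -EFAULT;

	up_read(&mm->mmap_sem);
	mmput(mm);			/* balances mmget_not_zero() */
	return r;
}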