From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman
Subject: [PATCH v5 25/25] mm/gup: Remove task_struct pointer for all gup code
Date: Tue, 7 Jul 2020 18:50:21 -0400
Message-Id: <20200707225021.200906-26-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

After the cleanup of page fault accounting, gup does not need to pass
task_struct around any more.  Remove that parameter in the whole gup
stack.
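Every conversion is mechanical: the leading task_struct argument simply
drops out of the call.  As an illustrative sketch (generic variable
names, not lifted verbatim from any one hunk below):

	/* before: tsk was carried along only for page fault accounting */
	ret = get_user_pages_remote(current, mm, addr, 1, FOLL_WRITE,
				    &page, NULL, NULL);

	/* after: the task_struct parameter is gone; all other arguments
	 * are unchanged */
	ret = get_user_pages_remote(mm, addr, 1, FOLL_WRITE,
				    &page, NULL, NULL);

The same drop applies to fixup_user_fault(), pin_user_pages_remote()
and the internal gup helpers, as the diff below shows.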
Reviewed-by: John Hubbard
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arc/kernel/process.c                   |   2 +-
 arch/s390/kvm/interrupt.c                   |   2 +-
 arch/s390/kvm/kvm-s390.c                    |   2 +-
 arch/s390/kvm/priv.c                        |   8 +-
 arch/s390/mm/gmap.c                         |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |   2 +-
 drivers/infiniband/core/umem_odp.c          |   2 +-
 drivers/vfio/vfio_iommu_type1.c             |   4 +-
 fs/exec.c                                   |   2 +-
 include/linux/mm.h                          |   9 +-
 kernel/events/uprobes.c                     |   6 +-
 kernel/futex.c                              |   2 +-
 mm/gup.c                                    | 101 ++++++++------------
 mm/memory.c                                 |   2 +-
 mm/process_vm_access.c                      |   2 +-
 security/tomoyo/domain.c                    |   2 +-
 virt/kvm/async_pf.c                         |   2 +-
 virt/kvm/kvm_main.c                         |   2 +-
 18 files changed, 69 insertions(+), 87 deletions(-)

diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
index 105420c23c8b..a1d2eea66bba 100644
--- a/arch/arc/kernel/process.c
+++ b/arch/arc/kernel/process.c
@@ -91,7 +91,7 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
 		goto fail;
 
 	mmap_read_lock(current->mm);
-	ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr,
+	ret = fixup_user_fault(current->mm, (unsigned long) uaddr,
 			       FAULT_FLAG_WRITE, NULL);
 	mmap_read_unlock(current->mm);
 
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 1608fd99bbee..2f177298c663 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2768,7 +2768,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
 	struct page *page = NULL;
 
 	mmap_read_lock(kvm->mm);
-	get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE,
+	get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
 			      &page, NULL, NULL);
 	mmap_read_unlock(kvm->mm);
 	return page;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 08e6cf6cb454..f78921bc11b3 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1892,7 +1892,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
 
 		r = set_guest_storage_key(current->mm, hva, keys[i], 0);
 		if (r) {
-			r = fixup_user_fault(current, current->mm, hva,
+			r = fixup_user_fault(current->mm, hva,
 					     FAULT_FLAG_WRITE, &unlocked);
 			if (r)
 				break;
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 2f721a923b54..cd74989ce0b0 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -273,7 +273,7 @@ static int handle_iske(struct kvm_vcpu *vcpu)
 	rc = get_guest_storage_key(current->mm, vmaddr, &key);
 
 	if (rc) {
-		rc = fixup_user_fault(current, current->mm, vmaddr,
+		rc = fixup_user_fault(current->mm, vmaddr,
 				      FAULT_FLAG_WRITE, &unlocked);
 		if (!rc) {
 			mmap_read_unlock(current->mm);
@@ -319,7 +319,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu)
 	mmap_read_lock(current->mm);
 	rc = reset_guest_reference_bit(current->mm, vmaddr);
 	if (rc < 0) {
-		rc = fixup_user_fault(current, current->mm, vmaddr,
+		rc = fixup_user_fault(current->mm, vmaddr,
 				      FAULT_FLAG_WRITE, &unlocked);
 		if (!rc) {
 			mmap_read_unlock(current->mm);
@@ -390,7 +390,7 @@ static int handle_sske(struct kvm_vcpu *vcpu)
 						m3 & SSKE_MC);
 
 		if (rc < 0) {
-			rc = fixup_user_fault(current, current->mm, vmaddr,
+			rc = fixup_user_fault(current->mm, vmaddr,
 					      FAULT_FLAG_WRITE, &unlocked);
 			rc = !rc ? -EAGAIN : rc;
 		}
@@ -1094,7 +1094,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 			rc = cond_set_guest_storage_key(current->mm, vmaddr,
 							key, NULL, nq, mr, mc);
 			if (rc < 0) {
-				rc = fixup_user_fault(current, current->mm, vmaddr,
+				rc = fixup_user_fault(current->mm, vmaddr,
 						      FAULT_FLAG_WRITE, &unlocked);
 				rc = !rc ? -EAGAIN : rc;
 			}
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 190357ff86b3..8747487c50a8 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -649,7 +649,7 @@ int gmap_fault(struct gmap *gmap, unsigned long gaddr,
 		rc = vmaddr;
 		goto out_up;
 	}
-	if (fixup_user_fault(current, gmap->mm, vmaddr, fault_flags,
+	if (fixup_user_fault(gmap->mm, vmaddr, fault_flags,
 			     &unlocked)) {
 		rc = -EFAULT;
 		goto out_up;
@@ -879,7 +879,7 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr,
 
 	BUG_ON(gmap_is_shadow(gmap));
 	fault_flags = (prot == PROT_WRITE) ? FAULT_FLAG_WRITE : 0;
-	if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked))
+	if (fixup_user_fault(mm, vmaddr, fault_flags, &unlocked))
 		return -EFAULT;
 	if (unlocked)
 		/* lost mmap_lock, caller has to retry __gmap_translate */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index e946032b13e4..2c2bf24140c9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -469,7 +469,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 				locked = 1;
 			}
 			ret = pin_user_pages_remote
-				(work->task, mm,
+				(mm,
 				 obj->userptr.ptr + pinned * PAGE_SIZE,
 				 npages - pinned,
 				 flags,
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 5e32f61a2fe4..cc6b4befde7c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -439,7 +439,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		 * complex (and doesn't gain us much performance in most use
 		 * cases).
 		 */
-		npages = get_user_pages_remote(owning_process, owning_mm,
+		npages = get_user_pages_remote(owning_mm,
 				user_virt, gup_num_pages,
 				flags, local_page_list, NULL, NULL);
 		mmap_read_unlock(owning_mm);
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 5e556ac9102a..9d41105bfd01 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -425,7 +425,7 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
 	if (ret) {
 		bool unlocked = false;
 
-		ret = fixup_user_fault(NULL, mm, vaddr,
+		ret = fixup_user_fault(mm, vaddr,
 				       FAULT_FLAG_REMOTE |
 				       (write_fault ?  FAULT_FLAG_WRITE : 0),
 				       &unlocked);
@@ -453,7 +453,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		flags |= FOLL_WRITE;
 
 	mmap_read_lock(mm);
-	ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+	ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM,
				    page, NULL, NULL);
 	if (ret == 1) {
 		*pfn = page_to_pfn(page[0]);
diff --git a/fs/exec.c b/fs/exec.c
index 7b7cbb180785..3cf806de5710 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -217,7 +217,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 	 * We are doing an exec().  'current' is the process
 	 * doing the exec and bprm->mm is the new process's mm.
 	 */
-	ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags,
+	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
			&page, NULL, NULL);
 	if (ret <= 0)
 		return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 33f8236a68a2..678ea25625d7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1662,7 +1662,7 @@ int invalidate_inode_page(struct page *page);
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 				  unsigned long address, unsigned int flags,
 				  struct pt_regs *regs);
-extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+extern int fixup_user_fault(struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
 void unmap_mapping_pages(struct address_space *mapping,
@@ -1678,8 +1678,7 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 	BUG();
 	return VM_FAULT_SIGBUS;
 }
-static inline int fixup_user_fault(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long address,
+static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address,
 		unsigned int fault_flags, bool *unlocked)
 {
 	/* should never happen if there's no MMU */
@@ -1705,11 +1704,11 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
 extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 			      unsigned long addr, void *buf, int len,
 			      unsigned int gup_flags);
 
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 			    unsigned long start, unsigned long nr_pages,
 			    unsigned int gup_flags, struct page **pages,
 			    struct vm_area_struct **vmas, int *locked);
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e84eb52b646b..f500204eb70d 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -376,7 +376,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 	if (!vaddr || !d)
 		return -EINVAL;
 
-	ret = get_user_pages_remote(NULL, mm, vaddr, 1,
+	ret = get_user_pages_remote(mm, vaddr, 1,
 				    FOLL_WRITE, &page, &vma, NULL);
 	if (unlikely(ret <= 0)) {
 		/*
@@ -477,7 +477,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(NULL, mm, vaddr, 1, gup_flags,
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags,
 				    &old_page, &vma, NULL);
 	if (ret <= 0)
 		return ret;
@@ -2029,7 +2029,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	 * but we treat this as a 'remote' access since it is
 	 * essentially a kernel access to the memory.
 	 */
-	result = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE, &page,
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
 				       NULL, NULL);
 	if (result < 0)
 		return result;
diff --git a/kernel/futex.c b/kernel/futex.c
index 05e88562de68..d024fcef62e8 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -699,7 +699,7 @@ static int fault_in_user_writeable(u32 __user *uaddr)
 	int ret;
 
 	mmap_read_lock(mm);
-	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
+	ret = fixup_user_fault(mm, (unsigned long)uaddr,
 			       FAULT_FLAG_WRITE, NULL);
 	mmap_read_unlock(mm);
 
diff --git a/mm/gup.c b/mm/gup.c
index 71e1d501a1d3..c4ec86ff67e4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -859,7 +859,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * does not include FOLL_NOWAIT, the mmap_lock may be released.  If it
  * is, *@locked will be set to 0 and -EBUSY returned.
  */
-static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
+static int faultin_page(struct vm_area_struct *vma,
 		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
@@ -962,7 +962,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 
 /**
  * __get_user_pages() - pin user pages in memory
- * @tsk:	task_struct of target task
 * @mm:		mm_struct of target mm
 * @start:	starting user address
 * @nr_pages:	number of pages from start to pin
@@ -1021,7 +1020,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 * instead of __get_user_pages. __get_user_pages should be used only if
 * you need some special @gup_flags.
 */
-static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+static long __get_user_pages(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1103,8 +1102,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
-			ret = faultin_page(tsk, vma, start, &foll_flags,
-					   locked);
+			ret = faultin_page(vma, start, &foll_flags, locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1178,8 +1176,6 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
 
 /**
  * fixup_user_fault() - manually resolve a user page fault
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
 * @mm:		mm_struct of target mm
 * @address:	user address
 * @fault_flags:flags to pass down to handle_mm_fault()
@@ -1207,7 +1203,7 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
 * This function will not return with an unlocked mmap_lock. So it has not the
 * same semantics wrt the @mm->mmap_lock as does filemap_fault().
 */
-int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+int fixup_user_fault(struct mm_struct *mm,
 		     unsigned long address, unsigned int fault_flags,
 		     bool *unlocked)
 {
@@ -1256,8 +1252,7 @@ EXPORT_SYMBOL_GPL(fixup_user_fault);
 * Please note that this function, unlike __get_user_pages will not
 * return 0 for nr_pages > 0 without FOLL_NOWAIT
 */
-static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
-						struct mm_struct *mm,
+static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned long start,
 						unsigned long nr_pages,
 						struct page **pages,
@@ -1290,7 +1285,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 	pages_done = 0;
 	lock_dropped = false;
 	for (;;) {
-		ret = __get_user_pages(tsk, mm, start, nr_pages, flags, pages,
+		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
 				       vmas, locked);
 		if (!locked)
 			/* VM_FAULT_RETRY couldn't trigger, bypass */
@@ -1350,7 +1345,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		}
 
 		*locked = 1;
-		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
+		ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
 				       pages, NULL, locked);
 		if (!*locked) {
 			/* Continue to retry until we succeeded */
@@ -1436,7 +1431,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
 	 * not result in a stack expansion that recurses back here.
 	 */
-	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
+	return __get_user_pages(mm, start, nr_pages, gup_flags,
 				NULL, NULL, locked);
 }
 
@@ -1520,7 +1515,7 @@ struct page *get_dump_page(unsigned long addr)
 	struct vm_area_struct *vma;
 	struct page *page;
 
-	if (__get_user_pages(current, current->mm, addr, 1,
+	if (__get_user_pages(current->mm, addr, 1,
 			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
 			     NULL) < 1)
 		return NULL;
@@ -1529,8 +1524,7 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
-static long __get_user_pages_locked(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long start,
+static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned long nr_pages, struct page **pages,
 		struct vm_area_struct **vmas, int *locked,
 		unsigned int foll_flags)
@@ -1606,8 +1600,7 @@ static struct page *alloc_migration_target_non_cma(struct page *page, unsigned l
 	return alloc_migration_target(page, (unsigned long)&mtc);
 }
 
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1681,7 +1674,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
 
@@ -1695,8 +1688,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	return ret;
 }
 #else
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1711,8 +1703,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
 * allows us to process the FOLL_LONGTERM flag.
 */
-static long __gup_longterm_locked(struct task_struct *tsk,
-				  struct mm_struct *mm,
+static long __gup_longterm_locked(struct mm_struct *mm,
 				  unsigned long start,
 				  unsigned long nr_pages,
 				  struct page **pages,
@@ -1737,7 +1728,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 		flags = memalloc_nocma_save();
 	}
 
-	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
+	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
@@ -1752,7 +1743,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 			goto out;
 		}
 
-		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
+		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
 	}
 
@@ -1762,22 +1753,20 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 	return rc;
 }
 #else /* !CONFIG_FS_DAX && !CONFIG_CMA */
-static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
-						  struct mm_struct *mm,
+static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
 						  unsigned long start,
 						  unsigned long nr_pages,
 						  struct page **pages,
 						  struct vm_area_struct **vmas,
 						  unsigned int flags)
 {
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       NULL, flags);
 }
 #endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
 #ifdef CONFIG_MMU
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1796,20 +1785,18 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 		 * This will check the vmas (even if our vmas arg is NULL)
 		 * and return -ENOTSUPP if DAX isn't allowed in this case:
 		 */
-		return __gup_longterm_locked(tsk, mm, start, nr_pages, pages,
+		return __gup_longterm_locked(mm, start, nr_pages, pages,
 					     vmas, gup_flags | FOLL_TOUCH |
 					     FOLL_REMOTE);
 	}
 
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       locked,
 				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
 }
 
 /**
 * get_user_pages_remote() - pin user pages in memory
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
 * @mm:		mm_struct of target mm
 * @start:	starting user address
 * @nr_pages:	number of pages from start to pin
@@ -1868,7 +1855,7 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 * should use get_user_pages_remote because it cannot pass
 * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
 */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1880,13 +1867,13 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
 #else /* CONFIG_MMU */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -1894,8 +1881,7 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	return 0;
 }
 
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1915,11 +1901,10 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 * @vmas:	array of pointers to vmas corresponding to each page.
 *		Or NULL if the caller does not require them.
 *
- * This is the same as get_user_pages_remote(), just with a
- * less-flexible calling convention where we assume that the task
- * and mm being operated on are the current task's and don't allow
- * passing of a locked parameter.  We also obviously don't pass
- * FOLL_REMOTE in here.
+ * This is the same as get_user_pages_remote(), just with a less-flexible
+ * calling convention where we assume that the mm being operated on belongs to
+ * the current task, and doesn't allow passing of a locked parameter.  We also
+ * obviously don't pass FOLL_REMOTE in here.
 */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
@@ -1932,7 +1917,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);
@@ -1942,7 +1927,7 @@ EXPORT_SYMBOL(get_user_pages);
 *
 *      mmap_read_lock(mm);
 *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      mmap_read_unlock(mm);
 *
 *  to:
@@ -1950,7 +1935,7 @@ EXPORT_SYMBOL(get_user_pages);
 *      int locked = 1;
 *      mmap_read_lock(mm);
 *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      get_user_pages_locked(mm, ..., pages, &locked);
 *      if (locked)
 *          mmap_read_unlock(mm);
 *
@@ -1988,7 +1973,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
@@ -1998,12 +1983,12 @@ EXPORT_SYMBOL(get_user_pages_locked);
 * get_user_pages_unlocked() is suitable to replace the form:
 *
 *      mmap_read_lock(mm);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      mmap_read_unlock(mm);
 *
 *  with:
 *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *      get_user_pages_unlocked(mm, ..., pages);
 *
 * It is functionally equivalent to get_user_pages_fast so
 * get_user_pages_fast should be used instead if specific gup_flags
@@ -2026,7 +2011,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	mmap_read_lock(mm);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		mmap_read_unlock(mm);
@@ -2671,7 +2656,7 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	 */
 	if (gup_flags & FOLL_LONGTERM) {
 		mmap_read_lock(current->mm);
-		ret = __gup_longterm_locked(current, current->mm,
+		ret = __gup_longterm_locked(current->mm,
 					    start, nr_pages,
 					    pages, NULL, gup_flags);
 		mmap_read_unlock(current->mm);
@@ -2914,10 +2899,8 @@ int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
 
 /**
- * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ * pin_user_pages_remote() - pin pages of a remote process
 *
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
 * @mm:		mm_struct of target mm
 * @start:	starting user address
 * @nr_pages:	number of pages from start to pin
@@ -2938,7 +2921,7 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
 * FOLL_PIN means that the pages must be released via unpin_user_page().  Please
 * see Documentation/core-api/pin_user_pages.rst for details.
 */
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -2948,7 +2931,7 @@ long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
@@ -2980,7 +2963,7 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
@@ -3025,7 +3008,7 @@ long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
diff --git a/mm/memory.c b/mm/memory.c
index ad5eca9dd1ed..c8cfc19d17f1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4746,7 +4746,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages_remote(tsk, mm, addr, 1,
+		ret = get_user_pages_remote(mm, addr, 1,
 				gup_flags, &page, &vma, NULL);
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index cc85ce81914a..29c052099aff 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -105,7 +105,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		mmap_read_lock(mm);
-		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
+		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
 						     NULL, &locked);
 		if (locked)
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 7869d6a9980b..afe5e68ede77 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -914,7 +914,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 * (represented by bprm).  'current' is the process doing
 	 * the execve().
 	 */
-	if (get_user_pages_remote(current, bprm->mm, pos, 1,
+	if (get_user_pages_remote(bprm->mm, pos, 1,
 				  FOLL_FORCE, &page, NULL, NULL) <= 0)
 		return false;
 #else
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 45799606bb3e..0939ed377688 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,7 +61,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	mmap_read_lock(mm);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
 			      &locked);
 	if (locked)
 		mmap_read_unlock(mm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0a68c9d3d3ab..e684b9b74483 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1830,7 +1830,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		 * not call the fault handler, so do it here.
 		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current, current->mm, addr,
+		r = fixup_user_fault(current->mm, addr,
 				     (write_fault ? FAULT_FLAG_WRITE : 0),
 				     &unlocked);
 		if (unlocked)
-- 
2.26.2