From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753118AbbKQDpu (ORCPT ); Mon, 16 Nov 2015 22:45:50 -0500
Received: from mga11.intel.com ([192.55.52.93]:58193 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752184AbbKQDfP (ORCPT ); Mon, 16 Nov 2015 22:35:15 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,305,1444719600"; d="scan'208";a="687166050"
Subject: [PATCH 03/37] mm: kill get_user_pages_locked()
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	aarcange@redhat.com, n-horiguchi@ah.jp.nec.com
From: Dave Hansen <dave.hansen@linux.intel.com>
Date: Mon, 16 Nov 2015 19:35:15 -0800
References: <20151117033511.BFFA1440@viggo.jf.intel.com>
In-Reply-To: <20151117033511.BFFA1440@viggo.jf.intel.com>
Message-Id: <20151117033515.4AA18669@viggo.jf.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen <dave.hansen@linux.intel.com>

We have no remaining users of get_user_pages_locked().  Kill it.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---

 b/include/linux/mm.h |    4 ----
 b/mm/gup.c           |   31 -------------------------------
 2 files changed, 35 deletions(-)

diff -puN include/linux/mm.h~kill-get_user_pages_locked include/linux/mm.h
--- a/include/linux/mm.h~kill-get_user_pages_locked	2015-11-16 12:35:35.684187540 -0800
+++ b/include/linux/mm.h	2015-11-16 12:35:35.689187767 -0800
@@ -1195,10 +1195,6 @@ long get_user_pages(struct task_struct *
 		    unsigned long start, unsigned long nr_pages,
 		    int write, int force, struct page **pages,
 		    struct vm_area_struct **vmas);
-long get_user_pages_locked(struct task_struct *tsk, struct mm_struct *mm,
-		    unsigned long start, unsigned long nr_pages,
-		    int write, int force, struct page **pages,
-		    int *locked);
 long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 			       unsigned long start, unsigned long nr_pages,
 			       int write, int force, struct page **pages,
diff -puN mm/gup.c~kill-get_user_pages_locked mm/gup.c
--- a/mm/gup.c~kill-get_user_pages_locked	2015-11-16 12:35:35.686187631 -0800
+++ b/mm/gup.c	2015-11-16 12:35:35.690187812 -0800
@@ -715,37 +715,6 @@ static __always_inline long __get_user_p
 }
 
 /*
- * We can leverage the VM_FAULT_RETRY functionality in the page fault
- * paths better by using either get_user_pages_locked() or
- * get_user_pages_unlocked().
- *
- * get_user_pages_locked() is suitable to replace the form:
- *
- *      down_read(&mm->mmap_sem);
- *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
- *      up_read(&mm->mmap_sem);
- *
- *  to:
- *
- *      int locked = 1;
- *      down_read(&mm->mmap_sem);
- *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
- *      if (locked)
- *          up_read(&mm->mmap_sem);
- */
-long get_user_pages_locked(struct task_struct *tsk, struct mm_struct *mm,
-			   unsigned long start, unsigned long nr_pages,
-			   int write, int force, struct page **pages,
-			   int *locked)
-{
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, write, force,
-				       pages, NULL, locked, true, FOLL_TOUCH);
-}
-EXPORT_SYMBOL(get_user_pages_locked);
-
-/*
  * Same as get_user_pages_unlocked(...., FOLL_TOUCH) but it allows to
  * pass additional gup_flags as last parameter (like FOLL_HWPOISON).
 *
_