* [to-be-updated] mm-remove-unused-token-argument-from-apply_to_page_range-callback.patch removed from -mm tree
@ 2011-05-18 21:37 akpm
  0 siblings, 0 replies; only message in thread
From: akpm @ 2011-05-18 21:37 UTC (permalink / raw)
  To: jeremy.fitzhardinge, hugh.dickins, npiggin, mm-commits


The patch titled
     mm: remove unused "token" argument from apply_to_page_range callback.
has been removed from the -mm tree.  Its filename was
     mm-remove-unused-token-argument-from-apply_to_page_range-callback.patch

This patch was dropped because an updated version will be merged

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: mm: remove unused "token" argument from apply_to_page_range callback.
From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

We've had apply_to_page_range() for a while, which is a general way to
apply a function to ptes across a range of addresses - including
allocating any missing parts of the pagetable as needed.  This logic is
open-coded in a number of places throughout the kernel, and those copies
haven't been widely replaced by this function, partly because of concerns
about the overhead of calling the callback once per pte.

This series adds apply_to_page_range_batch() (and reimplements
apply_to_page_range() in terms of it), which calls the pte operation
function once per pte page, moving the inner loop into the callback
function.

apply_to_page_range(_batch) also calls its callback with lazy mmu updates
enabled, which allows batching of the operations in environments where
this is beneficial (ie, virtualization).  The only caveat this introduces
is that callbacks can't expect to immediately see the effects of the pte
updates in memory.

Since this is effectively identical to the code in lib/ioremap.c and
mm/vmalloc.c (twice!), I replace their open-coded variants.  I'm sure
there are other places in the kernel which could do with this (I only
stumbled over ioremap by accident).

I also add a minor optimisation to vunmap_page_range() to use a plain
pte_clear() rather than the more expensive and unnecessary
ptep_get_and_clear().


This patch:

The `token' argument is basically the struct page of the pte_t * passed
into the callback.  But there's no need to pass that, since it can be
fairly easily derived from the pte_t * itself if needed (and no current
users need to do that anyway).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/x86/xen/grant-table.c |    6 ++----
 arch/x86/xen/mmu.c         |    3 +--
 include/linux/mm.h         |    3 +--
 mm/memory.c                |    2 +-
 mm/vmalloc.c               |    2 +-
 5 files changed, 6 insertions(+), 10 deletions(-)

diff -puN arch/x86/xen/grant-table.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback arch/x86/xen/grant-table.c
--- a/arch/x86/xen/grant-table.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback
+++ a/arch/x86/xen/grant-table.c
@@ -44,8 +44,7 @@
 
 #include <asm/pgtable.h>
 
-static int map_pte_fn(pte_t *pte, struct page *pmd_page,
-		      unsigned long addr, void *data)
+static int map_pte_fn(pte_t *pte, unsigned long addr, void *data)
 {
 	unsigned long **frames = (unsigned long **)data;
 
@@ -54,8 +53,7 @@ static int map_pte_fn(pte_t *pte, struct
 	return 0;
 }
 
-static int unmap_pte_fn(pte_t *pte, struct page *pmd_page,
-			unsigned long addr, void *data)
+static int unmap_pte_fn(pte_t *pte, unsigned long addr, void *data)
 {
 
 	set_pte_at(&init_mm, addr, pte, __pte(0));
diff -puN arch/x86/xen/mmu.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback arch/x86/xen/mmu.c
--- a/arch/x86/xen/mmu.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback
+++ a/arch/x86/xen/mmu.c
@@ -2365,8 +2365,7 @@ struct remap_data {
 	struct mmu_update *mmu_update;
 };
 
-static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
-				 unsigned long addr, void *data)
+static int remap_area_mfn_pte_fn(pte_t *ptep, unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
 	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
diff -puN include/linux/mm.h~mm-remove-unused-token-argument-from-apply_to_page_range-callback include/linux/mm.h
--- a/include/linux/mm.h~mm-remove-unused-token-argument-from-apply_to_page_range-callback
+++ a/include/linux/mm.h
@@ -1563,8 +1563,7 @@ struct page *follow_page(struct vm_area_
 #define FOLL_SPLIT	0x80	/* don't return transhuge pages, split them */
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
 
-typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
-			void *data);
+typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
diff -puN mm/memory.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback mm/memory.c
--- a/mm/memory.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback
+++ a/mm/memory.c
@@ -2267,7 +2267,7 @@ static int apply_to_pte_range(struct mm_
 	token = pmd_pgtable(*pmd);
 
 	do {
-		err = fn(pte++, token, addr, data);
+		err = fn(pte++, addr, data);
 		if (err)
 			break;
 	} while (addr += PAGE_SIZE, addr != end);
diff -puN mm/vmalloc.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback mm/vmalloc.c
--- a/mm/vmalloc.c~mm-remove-unused-token-argument-from-apply_to_page_range-callback
+++ a/mm/vmalloc.c
@@ -2116,7 +2116,7 @@ void  __attribute__((weak)) vmalloc_sync
 }
 
 
-static int f(pte_t *pte, pgtable_t table, unsigned long addr, void *data)
+static int f(pte_t *pte, unsigned long addr, void *data)
 {
 	/* apply_to_page_range() does all the hard work. */
 	return 0;
_

Patches currently in -mm which might be from jeremy.fitzhardinge@citrix.com are

linux-next.patch
mm-add-apply_to_page_range_batch.patch
ioremap-use-apply_to_page_range_batch-for-ioremap_page_range.patch
vmalloc-use-plain-pte_clear-for-unmaps.patch
vmalloc-use-apply_to_page_range_batch-for-vunmap_page_range.patch
vmalloc-use-apply_to_page_range_batch-for-vmap_page_range_noflush.patch
vmalloc-use-apply_to_page_range_batch-in-alloc_vm_area.patch
xen-mmu-use-apply_to_page_range_batch-in-xen_remap_domain_mfn_range.patch
xen-grant-table-use-apply_to_page_range_batch.patch

