* [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
From: Nick Piggin @ 2006-10-09 16:12 UTC
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

OK, I've cleaned up and further improved this patchset, removed duplication
while retaining legacy nopage handling, restored page_mkwrite to the ->fault
path (due to lack of users upstream to attempt a conversion), converted the
rest of the filesystems to use ->fault, restored MAP_POPULATE and population
of remap_file_pages pages, replaced nopfn completely, and removed
NOPAGE_REFAULT because that can be done easily with ->fault.

In the process:
- GFS2, OCFS2 theoretically get nonlinear mapping support
- Nonlinear mappings gain page_mkwrite and dirty page throttling support
- Nonlinear mappings gain the fault vs truncate race fix introduced for linear mappings

All pretty much for free.

This is lightly compile tested only, unlike the last set, mainly
because it is presently just an RFC regarding the direction I'm going
(and it's bedtime).

Nick



* [patch 1/5] mm: fault vs invalidate/truncate check
From: Nick Piggin @ 2006-10-09 16:12 UTC
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Add a bugcheck for Andrea's pagefault vs invalidate race. This is triggerable
for both linear and nonlinear pages with a userspace test harness (using
direct IO and truncate, respectively).
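
For reference, something like the following can open the window (a
hypothetical sketch, not the actual harness; O_DIRECT writes invalidate
the cached pages under the mapping for the linear case, truncate does
the job for the nonlinear one):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <pthread.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define SZ	(1UL << 20)

	static char *map;

	static void *fault_loop(void *arg)
	{
		volatile char c;

		for (;;)	/* keep faulting pages back in */
			c = map[(random() % SZ) & ~4095UL];
		return NULL;
	}

	int main(void)
	{
		int fd = open("testfile", O_RDWR | O_DIRECT); /* pre-sized file */
		void *buf;
		pthread_t t;

		posix_memalign(&buf, 4096, SZ);
		map = mmap(NULL, SZ, PROT_READ, MAP_SHARED, fd, 0);
		pthread_create(&t, NULL, fault_loop, NULL);
		for (;;)	/* each direct write invalidates the mapped range */
			pwrite(fd, buf, SZ, 0);
		return 0;
	}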

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -120,6 +120,8 @@ void __remove_from_page_cache(struct pag
 	page->mapping = NULL;
 	mapping->nrpages--;
 	__dec_zone_page_state(page, NR_FILE_PAGES);
+
+	BUG_ON(page_mapped(page));
 }
 
 void remove_from_page_cache(struct page *page)


* [patch 2/5] mm: fault vs invalidate/truncate race fix
From: Nick Piggin @ 2006-10-09 16:12 UTC
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Fix the race between invalidate_inode_pages and do_no_page.

Andrea Arcangeli identified a subtle race between invalidation of
pages from pagecache with userspace mappings, and do_no_page.

The issue is that invalidation has to shoot down all mappings to the
page, before it can be discarded from the pagecache. Between shooting
down ptes to a particular page, and actually dropping the struct page
from the pagecache, do_no_page from any process might fault on that
page and establish a new mapping to the page just before it gets
discarded from the pagecache.

The most common case where such invalidation is used is in file
truncation. This case was catered for by doing a sort of open-coded
seqlock between the file's i_size and its truncate_count.

Truncation will decrease i_size, then increment truncate_count before
unmapping userspace pages; do_no_page will read truncate_count, then
find the page if it is within i_size, and then check truncate_count
under the page table lock and back out and retry if it had
subsequently been changed (ptl will serialise against unmapping, and
ensure a potentially updated truncate_count is actually visible).
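
Schematically (a toy userspace analogy of the protocol, hypothetical
code rather than the kernel paths themselves):

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_uint truncate_count;
	static atomic_long i_size = 4096;

	static void truncate_side(long new_size)
	{
		atomic_store(&i_size, new_size);	/* i_size first... */
		atomic_fetch_add(&truncate_count, 1);	/* ...then the counter */
		/* ...unmap ptes, then drop the pages from the cache... */
	}

	static int fault_side(long offset)
	{
		unsigned int seq;

	retry:
		seq = atomic_load(&truncate_count);
		if (offset >= atomic_load(&i_size))
			return -1;		/* SIGBUS */
		/* ...look up the page, take the pte lock... */
		if (seq != atomic_load(&truncate_count))
			goto retry;		/* raced with truncate, back out */
		/* ...install the pte... */
		return 0;
	}

	int main(void)
	{
		truncate_side(0);
		printf("fault: %d\n", fault_side(0));
		return 0;
	}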

Complexity and documentation issues aside, the locking protocol fails
in the case where we would like to invalidate pagecache inside i_size.
do_no_page can come in anytime and filemap_nopage is not aware of the
invalidation in progress (as it is when it is outside i_size). The
end result is that dangling (->mapping == NULL) pages that appear to
be from a particular file may be mapped into userspace with nonsense
data. Valid mappings to the same place will see a different page.

Andrea implemented two working fixes, one using a real seqlock,
another using a page->flags bit. He also proposed using the page lock
in do_no_page, but that was initially considered too heavyweight.
However, it is not a global or per-file lock, and the page cacheline
is modified in do_no_page to increment _count and _mapcount anyway, so
a further modification should not be a large performance hit.
Scalability is not an issue.

This patch implements the latter approach. ->nopage implementations
return with the page locked if it is possible for their underlying
file to be invalidated (in that case, they must set a special vm_flags
bit to indicate so). do_no_page only unlocks the page after setting
up the mapping completely. Invalidation is excluded because it holds
the page lock during invalidation of each page (and ensures that the
page is not mapped while holding the lock).

This allows significant simplifications in do_no_page.
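
Condensed, both sides now synchronise on the page lock alone (a sketch
of the flow in the patch below, not verbatim code):

	/* do_no_page(): ->nopage returns the page locked when the vma
	 * has VM_CAN_INVALIDATE set */
	page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
	if (likely(pte_none(*page_table)))
		set_pte_at(mm, address, page_table,
			   mk_pte(page, vma->vm_page_prot));
	pte_unmap_unlock(page_table, ptl);
	unlock_page(page);	/* only now may invalidation discard it */

	/* truncate_inode_pages_range(): the invalidation side */
	lock_page(page);
	while (page_mapped(page))
		unmap_mapping_range(mapping,
				(loff_t)page->index << PAGE_CACHE_SHIFT,
				PAGE_CACHE_SIZE, 0);
	truncate_complete_page(mapping, page);
	unlock_page(page);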

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -166,6 +166,11 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
+#define VM_CAN_INVALIDATE	0x04000000	/* The mapping may be invalidated,
+					 * eg. truncate or invalidate_inode_*.
+					 * In this case, do_no_page must
+					 * return with the page locked.
+					 */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1363,9 +1363,10 @@ struct page *filemap_nopage(struct vm_ar
 	unsigned long size, pgoff;
 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
 
+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+
 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
 
-retry_all:
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (pgoff >= size)
 		goto outside_data_content;
@@ -1387,7 +1388,7 @@ retry_all:
 	 * Do we have something in the page cache already?
 	 */
 retry_find:
-	page = find_get_page(mapping, pgoff);
+	page = find_lock_page(mapping, pgoff);
 	if (!page) {
 		unsigned long ra_pages;
 
@@ -1421,7 +1422,7 @@ retry_find:
 				start = pgoff - ra_pages / 2;
 			do_page_cache_readahead(mapping, file, start, ra_pages);
 		}
-		page = find_get_page(mapping, pgoff);
+		page = find_lock_page(mapping, pgoff);
 		if (!page)
 			goto no_cached_page;
 	}
@@ -1430,13 +1431,25 @@ retry_find:
 		ra->mmap_hit++;
 
 	/*
-	 * Ok, found a page in the page cache, now we need to check
-	 * that it's up-to-date.
+	 * We have a locked page in the page cache, now we need to check
+	 * that it's up-to-date. If not, it is going to be due to an error.
 	 */
-	if (!PageUptodate(page))
+	if (unlikely(!PageUptodate(page)))
 		goto page_not_uptodate;
 
-success:
+#if 0
+/*
+ * XXX: no we don't have to, because we check and unmap in
+ * truncate, when the page is locked. Verify and delete me.
+ */
+	/* Must recheck i_size under page lock */
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	if (unlikely(pgoff >= size)) {
+		unlock_page(page);
+		goto outside_data_content;
+	}
+#endif
+
 	/*
 	 * Found the page and have a reference on it.
 	 */
@@ -1479,34 +1492,11 @@ no_cached_page:
 	return NOPAGE_SIGBUS;
 
 page_not_uptodate:
+	/* IO error path */
 	if (!did_readaround) {
 		majmin = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 	}
-	lock_page(page);
-
-	/* Did it get unhashed while we waited for it? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Did somebody else get it up-to-date? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
 
 	/*
 	 * Umm, take care of errors if the page isn't up-to-date.
@@ -1514,37 +1504,15 @@ page_not_uptodate:
 	 * because there really aren't any performance issues here
 	 * and we need to check for errors.
 	 */
-	lock_page(page);
-
-	/* Somebody truncated the page on us? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Somebody else successfully read it in? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
 	ClearPageError(page);
 	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
+	page_cache_release(page);
+
+	if (!error || error == AOP_TRUNCATED_PAGE)
 		goto retry_find;
-	}
 
-	/*
-	 * Things didn't work out. Return zero to tell the
-	 * mm layer so, possibly freeing the page cache page first.
-	 */
+	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
-	page_cache_release(page);
 	return NOPAGE_SIGBUS;
 }
 EXPORT_SYMBOL(filemap_nopage);
@@ -1737,6 +1705,7 @@ int generic_file_mmap(struct file * file
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &generic_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1675,6 +1675,13 @@ static int unmap_mapping_range_vma(struc
 	unsigned long restart_addr;
 	int need_break;
 
+	/*
+	 * files that support invalidating or truncating portions of the
+	 * file from under mmaped areas must set the VM_CAN_INVALIDATE flag, and
+	 * have their .nopage function return the page locked.
+	 */
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 again:
 	restart_addr = vma->vm_truncate_count;
 	if (is_restart_addr(restart_addr) && start_addr < restart_addr) {
@@ -1805,17 +1812,8 @@ void unmap_mapping_range(struct address_
 
 	spin_lock(&mapping->i_mmap_lock);
 
-	/* serialize i_size write against truncate_count write */
-	smp_wmb();
-	/* Protect against page faults, and endless unmapping loops */
+	/* Protect against endless unmapping loops */
 	mapping->truncate_count++;
-	/*
-	 * For archs where spin_lock has inclusive semantics like ia64
-	 * this smp_mb() will prevent to read pagetable contents
-	 * before the truncate_count increment is visible to
-	 * other cpus.
-	 */
-	smp_mb();
 	if (unlikely(is_restart_addr(mapping->truncate_count))) {
 		if (mapping->truncate_count == 0)
 			reset_vma_truncate_counts(mapping);
@@ -1854,7 +1852,6 @@ int vmtruncate(struct inode * inode, lof
 	if (IS_SWAPFILE(inode))
 		goto out_busy;
 	i_size_write(inode, offset);
-	unmap_mapping_range(mapping, offset + PAGE_SIZE - 1, 0, 1);
 	truncate_inode_pages(mapping, offset);
 	goto out_truncate;
 
@@ -1893,7 +1890,6 @@ int vmtruncate_range(struct inode *inode
 
 	mutex_lock(&inode->i_mutex);
 	down_write(&inode->i_alloc_sem);
-	unmap_mapping_range(mapping, offset, (end - offset), 1);
 	truncate_inode_pages_range(mapping, offset, end);
 	inode->i_op->truncate_range(inode, offset, end);
 	up_write(&inode->i_alloc_sem);
@@ -2144,10 +2140,8 @@ static int do_no_page(struct mm_struct *
 		int write_access)
 {
 	spinlock_t *ptl;
-	struct page *new_page;
-	struct address_space *mapping = NULL;
+	struct page *page, *nopage_page;
 	pte_t entry;
-	unsigned int sequence = 0;
 	int ret = VM_FAULT_MINOR;
 	int anon = 0;
 	struct page *dirty_page = NULL;
@@ -2155,73 +2149,54 @@ static int do_no_page(struct mm_struct *
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
 
-	if (vma->vm_file) {
-		mapping = vma->vm_file->f_mapping;
-		sequence = mapping->truncate_count;
-		smp_rmb(); /* serializes i_size against truncate_count */
-	}
-retry:
-	new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
-	/*
-	 * No smp_rmb is needed here as long as there's a full
-	 * spin_lock/unlock sequence inside the ->nopage callback
-	 * (for the pagecache lookup) that acts as an implicit
-	 * smp_mb() and prevents the i_size read to happen
-	 * after the next truncate_count read.
-	 */
-
+	nopage_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
 	/* no page was available -- either SIGBUS, OOM or REFAULT */
-	if (unlikely(new_page == NOPAGE_SIGBUS))
+	if (unlikely(nopage_page == NOPAGE_SIGBUS))
 		return VM_FAULT_SIGBUS;
-	else if (unlikely(new_page == NOPAGE_OOM))
+	else if (unlikely(nopage_page == NOPAGE_OOM))
 		return VM_FAULT_OOM;
-	else if (unlikely(new_page == NOPAGE_REFAULT))
+	else if (unlikely(nopage_page == NOPAGE_REFAULT))
 		return VM_FAULT_MINOR;
 
+	BUG_ON(vma->vm_flags & VM_CAN_INVALIDATE && !PageLocked(nopage_page));
+	/*
+	 * For consistency in subsequent calls, make the nopage_page always
+	 * locked.  These should be in the minority but if they turn out to be
+	 * critical then this can always be revisited
+	 */
+	if (unlikely(!(vma->vm_flags & VM_CAN_INVALIDATE)))
+		lock_page(nopage_page);
+
 	/*
 	 * Should we do an early C-O-W break?
 	 */
+	page = nopage_page;
 	if (write_access) {
 		if (!(vma->vm_flags & VM_SHARED)) {
-			struct page *page;
-
-			if (unlikely(anon_vma_prepare(vma)))
-				goto oom;
+			if (unlikely(anon_vma_prepare(vma))) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
 			page = alloc_page_vma(GFP_HIGHUSER, vma, address);
-			if (!page)
-				goto oom;
-			copy_user_highpage(page, new_page, address);
-			page_cache_release(new_page);
-			new_page = page;
+			if (!page) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
+			copy_user_highpage(page, nopage_page, address);
 			anon = 1;
-
 		} else {
 			/* if the page will be shareable, see if the backing
 			 * address space wants to know that the page is about
 			 * to become writable */
 			if (vma->vm_ops->page_mkwrite &&
-			    vma->vm_ops->page_mkwrite(vma, new_page) < 0
-			    ) {
-				page_cache_release(new_page);
-				return VM_FAULT_SIGBUS;
+			    vma->vm_ops->page_mkwrite(vma, page) < 0) {
+				ret = VM_FAULT_SIGBUS;
+				goto out_error;
 			}
 		}
 	}
 
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-	/*
-	 * For a file-backed vma, someone could have truncated or otherwise
-	 * invalidated this page.  If unmap_mapping_range got called,
-	 * retry getting the page.
-	 */
-	if (mapping && unlikely(sequence != mapping->truncate_count)) {
-		pte_unmap_unlock(page_table, ptl);
-		page_cache_release(new_page);
-		cond_resched();
-		sequence = mapping->truncate_count;
-		smp_rmb();
-		goto retry;
-	}
 
 	/*
 	 * This silly early PAGE_DIRTY setting removes a race
@@ -2234,43 +2209,51 @@ retry:
 	 * handle that later.
 	 */
 	/* Only go through if we didn't race with anybody else... */
-	if (pte_none(*page_table)) {
-		flush_icache_page(vma, new_page);
-		entry = mk_pte(new_page, vma->vm_page_prot);
+	if (likely(pte_none(*page_table))) {
+		flush_icache_page(vma, page);
+		entry = mk_pte(page, vma->vm_page_prot);
 		if (write_access)
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte_at(mm, address, page_table, entry);
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
-			lru_cache_add_active(new_page);
-			page_add_new_anon_rmap(new_page, vma, address);
+			lru_cache_add_active(page);
+			page_add_new_anon_rmap(page, vma, address);
 		} else {
 			inc_mm_counter(mm, file_rss);
-			page_add_file_rmap(new_page);
+			page_add_file_rmap(page);
 			if (write_access) {
-				dirty_page = new_page;
+				dirty_page = page;
 				get_page(dirty_page);
 			}
 		}
+
+		/* no need to invalidate: a not-present page won't be cached */
+		update_mmu_cache(vma, address, entry);
+		lazy_mmu_prot_update(entry);
 	} else {
-		/* One of our sibling threads was faster, back out. */
-		page_cache_release(new_page);
-		goto unlock;
+		if (anon)
+			page_cache_release(page);
+		else
+			anon = 1; /* not anon, but release nopage_page */
 	}
 
-	/* no need to invalidate: a not-present page shouldn't be cached */
-	update_mmu_cache(vma, address, entry);
-	lazy_mmu_prot_update(entry);
-unlock:
 	pte_unmap_unlock(page_table, ptl);
-	if (dirty_page) {
+
+out:
+	unlock_page(nopage_page);
+	if (anon)
+		page_cache_release(nopage_page);
+	else if (dirty_page) {
 		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
+
 	return ret;
-oom:
-	page_cache_release(new_page);
-	return VM_FAULT_OOM;
+
+out_error:
+	anon = 1; /* release nopage_page */
+	goto out;
 }
 
 /*
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -81,6 +81,7 @@ enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate page */
+	SGP_NOPAGE,	/* same as SGP_CACHE, return with page locked */
 };
 
 static int shmem_getpage(struct inode *inode, unsigned long idx,
@@ -1209,8 +1210,10 @@ repeat:
 	}
 done:
 	if (*pagep != filepage) {
-		unlock_page(filepage);
 		*pagep = filepage;
+		if (sgp != SGP_NOPAGE)
+			unlock_page(filepage);
+
 	}
 	return 0;
 
@@ -1229,13 +1232,15 @@ struct page *shmem_nopage(struct vm_area
 	unsigned long idx;
 	int error;
 
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 	idx = (address - vma->vm_start) >> PAGE_SHIFT;
 	idx += vma->vm_pgoff;
 	idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
 	if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
 		return NOPAGE_SIGBUS;
 
-	error = shmem_getpage(inode, idx, &page, SGP_CACHE, type);
+	error = shmem_getpage(inode, idx, &page, SGP_NOPAGE, type);
 	if (error)
 		return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
 
@@ -1333,6 +1338,7 @@ int shmem_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
@@ -2445,5 +2451,6 @@ int shmem_zero_setup(struct vm_area_stru
 		fput(vma->vm_file);
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
Index: linux-2.6/fs/ncpfs/mmap.c
===================================================================
--- linux-2.6.orig/fs/ncpfs/mmap.c
+++ linux-2.6/fs/ncpfs/mmap.c
@@ -123,6 +123,7 @@ int ncp_mmap(struct file *file, struct v
 		return -EFBIG;
 
 	vma->vm_ops = &ncp_file_mmap;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	file_accessed(file);
 	return 0;
 }
Index: linux-2.6/fs/ocfs2/mmap.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/mmap.c
+++ linux-2.6/fs/ocfs2/mmap.c
@@ -93,6 +93,7 @@ int ocfs2_mmap(struct file *file, struct
 
 	file_accessed(file);
 	vma->vm_ops = &ocfs2_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c
@@ -343,6 +343,7 @@ xfs_file_mmap(
 	struct vm_area_struct *vma)
 {
 	vma->vm_ops = &xfs_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 
 #ifdef CONFIG_XFS_DMAPI
 	if (vn_from_inode(filp->f_dentry->d_inode)->v_vfsp->vfs_flag & VFS_DMI)
Index: linux-2.6/ipc/shm.c
===================================================================
--- linux-2.6.orig/ipc/shm.c
+++ linux-2.6/ipc/shm.c
@@ -230,6 +230,7 @@ static int shm_mmap(struct file * file, 
 	ret = shmem_mmap(file, vma);
 	if (ret == 0) {
 		vma->vm_ops = &shm_vm_ops;
+		vma->vm_flags |= VM_CAN_INVALIDATE;
 		if (!(vma->vm_flags & VM_WRITE))
 			vma->vm_flags &= ~VM_MAYWRITE;
 		shm_inc(shm_file_ns(file), file->f_dentry->d_inode->i_ino);
Index: linux-2.6/fs/ocfs2/dlmglue.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/dlmglue.c
+++ linux-2.6/fs/ocfs2/dlmglue.c
@@ -2656,7 +2656,6 @@ static int ocfs2_data_convert_worker(str
 	sync_mapping_buffers(mapping);
 	if (blocking == LKM_EXMODE) {
 		truncate_inode_pages(mapping, 0);
-		unmap_mapping_range(mapping, 0, 0, 0);
 	} else {
 		/* We only need to wait on the I/O if we're not also
 		 * truncating pages because truncate_inode_pages waits
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -163,6 +163,11 @@ void truncate_inode_pages_range(struct a
 				unlock_page(page);
 				continue;
 			}
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			truncate_complete_page(mapping, page);
 			unlock_page(page);
 		}
@@ -200,6 +205,11 @@ void truncate_inode_pages_range(struct a
 				break;
 			lock_page(page);
 			wait_on_page_writeback(page);
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			if (page->index > next)
 				next = page->index;
 			next++;
Index: linux-2.6/fs/gfs2/ops_file.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_file.c
+++ linux-2.6/fs/gfs2/ops_file.c
@@ -396,6 +396,8 @@ static int gfs2_mmap(struct file *file, 
 	else
 		vma->vm_ops = &gfs2_vm_ops_private;
 
+	vma->vm_flags |= VM_CAN_INVALIDATE;
+
 	gfs2_glock_dq_uninit(&i_gh);
 
 	return error;


* [patch 3/5] mm: fault handler to replace nopage and populate
From: Nick Piggin @ 2006-10-09 16:12 UTC
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Nonlinear mappings are (AFAIKS) simply a virtual memory concept that
encodes the virtual address -> file offset differently from linear
mappings.

I can't see why the filesystem/pagecache code should need to know anything
about it, except for the fact that the ->nopage handler didn't quite pass
down enough information (ie. pgoff). But it is more logical to pass pgoff
rather than have the ->nopage function calculate it itself anyway. And
having the nopage handler install the pte itself is sort of nasty.

This patch introduces a new fault handler that replaces ->nopage and
->populate and (later) ->nopfn. Most of the old mechanism is still in place
so there is a lot of duplication and nice cleanups that can be removed if
everyone switches over.

The rationale for doing this in the first place is that nonlinear mappings
are subject to the pagefault vs invalidate/truncate race too, and it seemed
stupid to duplicate the synchronisation logic rather than just consolidate
the two.

After this patch, MAP_NONBLOCK no longer sets up ptes for pages present in
pagecache. Seems like a fringe functionality anyway.
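
To illustrate the new interface, a minimal handler for a hypothetical
driver could look like this (the mydev_* names are made up; struct
fault_data, fdata->pgoff, fdata->type and VM_CAN_NONLINEAR are the
pieces introduced by this patch):

	static struct page *mydev_fault(struct fault_data *fdata)
	{
		struct mydev *dev = fdata->vma->vm_file->private_data;

		if (fdata->pgoff >= dev->nr_pages) {
			fdata->type = VM_FAULT_SIGBUS;
			return NULL;
		}

		/* pgoff arrives ready-made, linear or nonlinear alike */
		get_page(dev->pages[fdata->pgoff]);
		fdata->type = VM_FAULT_MINOR;
		return dev->pages[fdata->pgoff];
	}

	static struct vm_operations_struct mydev_vm_ops = {
		.fault	= mydev_fault,
	};

	/* and mydev_mmap() would set:
	 *	vma->vm_ops = &mydev_vm_ops;
	 *	vma->vm_flags |= VM_CAN_NONLINEAR;
	 */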

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -166,11 +166,12 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
-#define VM_CAN_INVALIDATE	0x04000000	/* The mapping may be invalidated,
+#define VM_CAN_INVALIDATE 0x04000000	/* The mapping may be invalidated,
 					 * eg. truncate or invalidate_inode_*.
 					 * In this case, do_no_page must
 					 * return with the page locked.
 					 */
+#define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
@@ -194,6 +195,23 @@ extern unsigned int kobjsize(const void 
  */
 extern pgprot_t protection_map[16];
 
+#define FAULT_FLAG_WRITE	0x01
+#define FAULT_FLAG_NONLINEAR	0x02
+
+/*
+ * fault_data is filled in by the pagefault handler and passed to the
+ * vma's ->fault function. That function is responsible for filling in
+ * 'type', which is the type of fault if a page is returned, or the type
+ * of error if NULL is returned.
+ */
+struct fault_data {
+	struct vm_area_struct *vma;
+	unsigned long address;
+	pgoff_t pgoff;
+	unsigned int flags;
+
+	int type;
+};
 
 /*
  * These are the virtual MM functions - opening of an area, closing and
@@ -203,6 +221,7 @@ extern pgprot_t protection_map[16];
 struct vm_operations_struct {
 	void (*open)(struct vm_area_struct * area);
 	void (*close)(struct vm_area_struct * area);
+	struct page * (*fault)(struct fault_data * data);
 	struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
 	unsigned long (*nopfn)(struct vm_area_struct * area, unsigned long address);
 	int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
@@ -598,7 +617,6 @@ static inline int page_mapped(struct pag
  */
 #define NOPAGE_SIGBUS	(NULL)
 #define NOPAGE_OOM	((struct page *) (-1))
-#define NOPAGE_REFAULT	((struct page *) (-2))	/* Return to userspace, rerun */
 
 /*
  * Error return values for the *_nopfn functions
@@ -627,14 +645,13 @@ static inline int page_mapped(struct pag
 extern void show_free_areas(void);
 
 #ifdef CONFIG_SHMEM
-struct page *shmem_nopage(struct vm_area_struct *vma,
-			unsigned long address, int *type);
+struct page *shmem_fault(struct fault_data *fdata);
 int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *new);
 struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
 					unsigned long addr);
 int shmem_lock(struct file *file, int lock, struct user_struct *user);
 #else
-#define shmem_nopage filemap_nopage
+#define shmem_fault filemap_fault
 
 static inline int shmem_lock(struct file *file, int lock,
 			     struct user_struct *user)
@@ -1029,9 +1046,7 @@ extern void truncate_inode_pages_range(s
 				       loff_t lstart, loff_t lend);
 
 /* generic vm_area_ops exported for stackable file systems */
-extern struct page *filemap_nopage(struct vm_area_struct *, unsigned long, int *);
-extern int filemap_populate(struct vm_area_struct *, unsigned long,
-		unsigned long, pgprot_t, unsigned long, int);
+extern struct page *filemap_fault(struct fault_data *data);
 
 /* mm/page-writeback.c */
 int write_one_page(struct page *page, int wait);
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -2123,10 +2123,10 @@ oom:
 }
 
 /*
- * do_no_page() tries to create a new page mapping. It aggressively
+ * __do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
- * the "write_access" parameter is true in order to avoid the next
- * page fault.
+ * the FAULT_FLAG_WRITE is set in the flags parameter in order to avoid
+ * the next page fault.
  *
  * As this is called only for pages that do not currently exist, we
  * do not need to flush old virtual caches or the TLB.
@@ -2135,65 +2135,82 @@ oom:
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
  */
-static int do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
+static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access)
+		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
 {
 	spinlock_t *ptl;
-	struct page *page, *nopage_page;
+	struct page *page, *faulted_page;
 	pte_t entry;
-	int ret = VM_FAULT_MINOR;
 	int anon = 0;
 	struct page *dirty_page = NULL;
 
+	struct fault_data fdata = {
+		.vma = vma,
+		.address = address & PAGE_MASK,
+		.pgoff = pgoff,
+		.flags = flags,
+	};
+
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
 
-	nopage_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
-	/* no page was available -- either SIGBUS, OOM or REFAULT */
-	if (unlikely(nopage_page == NOPAGE_SIGBUS))
-		return VM_FAULT_SIGBUS;
-	else if (unlikely(nopage_page == NOPAGE_OOM))
-		return VM_FAULT_OOM;
-	else if (unlikely(nopage_page == NOPAGE_REFAULT))
-		return VM_FAULT_MINOR;
+	if (likely(vma->vm_ops->fault)) {
+		faulted_page = vma->vm_ops->fault(&fdata);
+		if (unlikely(!faulted_page))
+			return fdata.type;
+	} else {
+		/* Legacy ->nopage path */
+		faulted_page = vma->vm_ops->nopage(vma, address & PAGE_MASK,
+								&fdata.type);
+		/* no page was available -- either SIGBUS or OOM */
+		if (unlikely(faulted_page == NOPAGE_SIGBUS))
+			return VM_FAULT_SIGBUS;
+		else if (unlikely(faulted_page == NOPAGE_OOM))
+			return VM_FAULT_OOM;
+	}
 
-	BUG_ON(vma->vm_flags & VM_CAN_INVALIDATE && !PageLocked(nopage_page));
 	/*
-	 * For consistency in subsequent calls, make the nopage_page always
+	 * For consistency in subsequent calls, make the faulted_page always
 	 * locked.  These should be in the minority but if they turn out to be
 	 * critical then this can always be revisited
 	 */
 	if (unlikely(!(vma->vm_flags & VM_CAN_INVALIDATE)))
-		lock_page(nopage_page);
+		lock_page(faulted_page);
+	else
+		BUG_ON(!PageLocked(faulted_page));
 
 	/*
 	 * Should we do an early C-O-W break?
 	 */
-	page = nopage_page;
-	if (write_access) {
+	page = faulted_page;
+	if (flags & FAULT_FLAG_WRITE) {
 		if (!(vma->vm_flags & VM_SHARED)) {
+			anon = 1;
 			if (unlikely(anon_vma_prepare(vma))) {
-				ret = VM_FAULT_OOM;
-				goto out_error;
+				fdata.type = VM_FAULT_OOM;
+				goto out;
 			}
 			page = alloc_page_vma(GFP_HIGHUSER, vma, address);
 			if (!page) {
-				ret = VM_FAULT_OOM;
-				goto out_error;
+				fdata.type = VM_FAULT_OOM;
+				goto out;
 			}
-			copy_user_highpage(page, nopage_page, address);
-			anon = 1;
+			copy_user_highpage(page, faulted_page, address);
 		} else {
-			/* if the page will be shareable, see if the backing
+			/*
+			 * If the page will be shareable, see if the backing
 			 * address space wants to know that the page is about
-			 * to become writable */
+			 * to become writable
+			 */
 			if (vma->vm_ops->page_mkwrite &&
 			    vma->vm_ops->page_mkwrite(vma, page) < 0) {
-				ret = VM_FAULT_SIGBUS;
-				goto out_error;
+				fdata.type = VM_FAULT_SIGBUS;
+				anon = 1; /* no anon but release faulted_page */
+				goto out;
 			}
 		}
+
 	}
 
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
@@ -2209,10 +2226,10 @@ static int do_no_page(struct mm_struct *
 	 * handle that later.
 	 */
 	/* Only go through if we didn't race with anybody else... */
-	if (likely(pte_none(*page_table))) {
+	if (likely(pte_same(*page_table, orig_pte))) {
 		flush_icache_page(vma, page);
 		entry = mk_pte(page, vma->vm_page_prot);
-		if (write_access)
+		if (flags & FAULT_FLAG_WRITE)
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte_at(mm, address, page_table, entry);
 		if (anon) {
@@ -2222,7 +2239,7 @@ static int do_no_page(struct mm_struct *
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(page);
-			if (write_access) {
+			if (flags & FAULT_FLAG_WRITE) {
 				dirty_page = page;
 				get_page(dirty_page);
 			}
@@ -2235,25 +2252,42 @@ static int do_no_page(struct mm_struct *
 		if (anon)
 			page_cache_release(page);
 		else
-			anon = 1; /* not anon, but release nopage_page */
+			anon = 1; /* no anon but release faulted_page */
 	}
 
 	pte_unmap_unlock(page_table, ptl);
 
 out:
-	unlock_page(nopage_page);
+	unlock_page(faulted_page);
 	if (anon)
-		page_cache_release(nopage_page);
+		page_cache_release(faulted_page);
 	else if (dirty_page) {
 		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
 
-	return ret;
+	return fdata.type;
+}
+
+static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pte_t *page_table, pmd_t *pmd,
+		int write_access, pte_t orig_pte)
+{
+	pgoff_t pgoff = (((address & PAGE_MASK)
+			- vma->vm_start) >> PAGE_CACHE_SHIFT) + vma->vm_pgoff;
+	unsigned int flags = (write_access ? FAULT_FLAG_WRITE : 0);
+
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
+}
 
-out_error:
-	anon = 1; /* release nopage_page */
-	goto out;
+static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pte_t *page_table, pmd_t *pmd,
+		int write_access, pgoff_t pgoff, pte_t orig_pte)
+{
+	unsigned int flags = FAULT_FLAG_NONLINEAR |
+				(write_access ? FAULT_FLAG_WRITE : 0);
+
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
 }
 
 /*
@@ -2330,9 +2364,14 @@ static int do_file_page(struct mm_struct
 		print_bad_pte(vma, orig_pte, address);
 		return VM_FAULT_OOM;
 	}
-	/* We can then assume vm->vm_ops && vma->vm_ops->populate */
 
 	pgoff = pte_to_pgoff(orig_pte);
+
+	if (vma->vm_ops && vma->vm_ops->fault)
+		return do_nonlinear_fault(mm, vma, address, page_table, pmd,
+					write_access, pgoff, orig_pte);
+
+	/* We can then assume vm->vm_ops && vma->vm_ops->populate */
 	err = vma->vm_ops->populate(vma, address & PAGE_MASK, PAGE_SIZE,
 					vma->vm_page_prot, pgoff, 0);
 	if (err == -ENOMEM)
@@ -2367,10 +2406,9 @@ static inline int handle_pte_fault(struc
 	if (!pte_present(entry)) {
 		if (pte_none(entry)) {
 			if (vma->vm_ops) {
-				if (vma->vm_ops->nopage)
-					return do_no_page(mm, vma, address,
-							  pte, pmd,
-							  write_access);
+				if (vma->vm_ops->fault || vma->vm_ops->nopage)
+					return do_linear_fault(mm, vma, address,
+						pte, pmd, write_access, entry);
 				if (unlikely(vma->vm_ops->nopfn))
 					return do_no_pfn(mm, vma, address, pte,
 							 pmd, write_access);
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1339,40 +1339,37 @@ static int fastcall page_cache_read(stru
 #define MMAP_LOTSAMISS  (100)
 
 /**
- * filemap_nopage - read in file data for page fault handling
- * @area:	the applicable vm_area
- * @address:	target address to read in
- * @type:	returned with VM_FAULT_{MINOR,MAJOR} if not %NULL
+ * filemap_fault - read in file data for page fault handling
+ * @data:	the applicable fault_data
  *
- * filemap_nopage() is invoked via the vma operations vector for a
+ * filemap_fault() is invoked via the vma operations vector for a
  * mapped memory region to read in file data during a page fault.
  *
  * The goto's are kind of ugly, but this streamlines the normal case of having
  * it in the page cache, and handles the special cases reasonably without
  * having a lot of duplicated code.
  */
-struct page *filemap_nopage(struct vm_area_struct *area,
-				unsigned long address, int *type)
+struct page *filemap_fault(struct fault_data *fdata)
 {
 	int error;
-	struct file *file = area->vm_file;
+	struct file *file = fdata->vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
 	struct page *page;
-	unsigned long size, pgoff;
-	int did_readaround = 0, majmin = VM_FAULT_MINOR;
+	unsigned long size;
+	int did_readaround = 0;
 
-	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+	fdata->type = VM_FAULT_MINOR;
 
-	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
+	BUG_ON(!(fdata->vma->vm_flags & VM_CAN_INVALIDATE));
 
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff >= size)
+	if (fdata->pgoff >= size)
 		goto outside_data_content;
 
 	/* If we don't want any read-ahead, don't bother */
-	if (VM_RandomReadHint(area))
+	if (VM_RandomReadHint(fdata->vma))
 		goto no_cached_page;
 
 	/*
@@ -1381,19 +1378,19 @@ struct page *filemap_nopage(struct vm_ar
 	 *
 	 * For sequential accesses, we use the generic readahead logic.
 	 */
-	if (VM_SequentialReadHint(area))
-		page_cache_readahead(mapping, ra, file, pgoff, 1);
+	if (VM_SequentialReadHint(fdata->vma))
+		page_cache_readahead(mapping, ra, file, fdata->pgoff, 1);
 
 	/*
 	 * Do we have something in the page cache already?
 	 */
 retry_find:
-	page = find_lock_page(mapping, pgoff);
+	page = find_lock_page(mapping, fdata->pgoff);
 	if (!page) {
 		unsigned long ra_pages;
 
-		if (VM_SequentialReadHint(area)) {
-			handle_ra_miss(mapping, ra, pgoff);
+		if (VM_SequentialReadHint(fdata->vma)) {
+			handle_ra_miss(mapping, ra, fdata->pgoff);
 			goto no_cached_page;
 		}
 		ra->mmap_miss++;
@@ -1410,7 +1407,7 @@ retry_find:
 		 * check did_readaround, as this is an inner loop.
 		 */
 		if (!did_readaround) {
-			majmin = VM_FAULT_MAJOR;
+			fdata->type = VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 		}
 		did_readaround = 1;
@@ -1418,11 +1415,11 @@ retry_find:
 		if (ra_pages) {
 			pgoff_t start = 0;
 
-			if (pgoff > ra_pages / 2)
-				start = pgoff - ra_pages / 2;
+			if (fdata->pgoff > ra_pages / 2)
+				start = fdata->pgoff - ra_pages / 2;
 			do_page_cache_readahead(mapping, file, start, ra_pages);
 		}
-		page = find_lock_page(mapping, pgoff);
+		page = find_lock_page(mapping, fdata->pgoff);
 		if (!page)
 			goto no_cached_page;
 	}
@@ -1444,7 +1441,7 @@ retry_find:
  */
 	/* Must recheck i_size under page lock */
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (unlikely(pgoff >= size)) {
+	if (unlikely(fdata->pgoff >= size)) {
 		unlock_page(page);
 		goto outside_data_content;
 	}
@@ -1454,8 +1451,6 @@ retry_find:
 	 * Found the page and have a reference on it.
 	 */
 	mark_page_accessed(page);
-	if (type)
-		*type = majmin;
 	return page;
 
 outside_data_content:
@@ -1463,15 +1458,17 @@ outside_data_content:
 	 * An external ptracer can access pages that normally aren't
 	 * accessible..
 	 */
-	if (area->vm_mm == current->mm)
-		return NOPAGE_SIGBUS;
+	if (fdata->vma->vm_mm == current->mm) {
+		fdata->type = VM_FAULT_SIGBUS;
+		return NULL;
+	}
 	/* Fall through to the non-read-ahead case */
 no_cached_page:
 	/*
 	 * We're only likely to ever get here if MADV_RANDOM is in
 	 * effect.
 	 */
-	error = page_cache_read(file, pgoff);
+	error = page_cache_read(file, fdata->pgoff);
 	grab_swap_token();
 
 	/*
@@ -1488,13 +1485,15 @@ no_cached_page:
 	 * to schedule I/O.
 	 */
 	if (error == -ENOMEM)
-		return NOPAGE_OOM;
-	return NOPAGE_SIGBUS;
+		fdata->type = VM_FAULT_OOM;
+	else
+		fdata->type = VM_FAULT_SIGBUS;
+	return NULL;
 
 page_not_uptodate:
 	/* IO error path */
 	if (!did_readaround) {
-		majmin = VM_FAULT_MAJOR;
+		fdata->type = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 	}
 
@@ -1513,186 +1512,13 @@ page_not_uptodate:
 
 	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
-	return NOPAGE_SIGBUS;
-}
-EXPORT_SYMBOL(filemap_nopage);
-
-static struct page * filemap_getpage(struct file *file, unsigned long pgoff,
-					int nonblock)
-{
-	struct address_space *mapping = file->f_mapping;
-	struct page *page;
-	int error;
-
-	/*
-	 * Do we have something in the page cache already?
-	 */
-retry_find:
-	page = find_get_page(mapping, pgoff);
-	if (!page) {
-		if (nonblock)
-			return NULL;
-		goto no_cached_page;
-	}
-
-	/*
-	 * Ok, found a page in the page cache, now we need to check
-	 * that it's up-to-date.
-	 */
-	if (!PageUptodate(page)) {
-		if (nonblock) {
-			page_cache_release(page);
-			return NULL;
-		}
-		goto page_not_uptodate;
-	}
-
-success:
-	/*
-	 * Found the page and have a reference on it.
-	 */
-	mark_page_accessed(page);
-	return page;
-
-no_cached_page:
-	error = page_cache_read(file, pgoff);
-
-	/*
-	 * The page we want has now been added to the page cache.
-	 * In the unlikely event that someone removed it in the
-	 * meantime, we'll just come back here and read it again.
-	 */
-	if (error >= 0)
-		goto retry_find;
-
-	/*
-	 * An error return from page_cache_read can result if the
-	 * system is low on memory, or a problem occurs while trying
-	 * to schedule I/O.
-	 */
-	return NULL;
-
-page_not_uptodate:
-	lock_page(page);
-
-	/* Did it get truncated while we waited for it? */
-	if (!page->mapping) {
-		unlock_page(page);
-		goto err;
-	}
-
-	/* Did somebody else get it up-to-date? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
-
-	/*
-	 * Umm, take care of errors if the page isn't up-to-date.
-	 * Try to re-read it _once_. We do this synchronously,
-	 * because there really aren't any performance issues here
-	 * and we need to check for errors.
-	 */
-	lock_page(page);
-
-	/* Somebody truncated the page on us? */
-	if (!page->mapping) {
-		unlock_page(page);
-		goto err;
-	}
-	/* Somebody else successfully read it in? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	ClearPageError(page);
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
-
-	/*
-	 * Things didn't work out. Return zero to tell the
-	 * mm layer so, possibly freeing the page cache page first.
-	 */
-err:
-	page_cache_release(page);
-
+	fdata->type = VM_FAULT_SIGBUS;
 	return NULL;
 }
-
-int filemap_populate(struct vm_area_struct *vma, unsigned long addr,
-		unsigned long len, pgprot_t prot, unsigned long pgoff,
-		int nonblock)
-{
-	struct file *file = vma->vm_file;
-	struct address_space *mapping = file->f_mapping;
-	struct inode *inode = mapping->host;
-	unsigned long size;
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int err;
-
-	if (!nonblock)
-		force_page_cache_readahead(mapping, vma->vm_file,
-					pgoff, len >> PAGE_CACHE_SHIFT);
-
-repeat:
-	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff + (len >> PAGE_CACHE_SHIFT) > size)
-		return -EINVAL;
-
-	page = filemap_getpage(file, pgoff, nonblock);
-
-	/* XXX: This is wrong, a filesystem I/O error may have happened. Fix that as
-	 * done in shmem_populate calling shmem_getpage */
-	if (!page && !nonblock)
-		return -ENOMEM;
-
-	if (page) {
-		err = install_page(mm, vma, addr, page, prot);
-		if (err) {
-			page_cache_release(page);
-			return err;
-		}
-	} else if (vma->vm_flags & VM_NONLINEAR) {
-		/* No page was found just because we can't read it in now (being
-		 * here implies nonblock != 0), but the page may exist, so set
-		 * the PTE to fault it in later. */
-		err = install_file_pte(mm, vma, addr, pgoff, prot);
-		if (err)
-			return err;
-	}
-
-	len -= PAGE_SIZE;
-	addr += PAGE_SIZE;
-	pgoff++;
-	if (len)
-		goto repeat;
-
-	return 0;
-}
-EXPORT_SYMBOL(filemap_populate);
+EXPORT_SYMBOL(filemap_fault);
 
 struct vm_operations_struct generic_file_vm_ops = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 };
 
 /* This is used for a general mmap of a disk file */
@@ -1705,7 +1531,7 @@ int generic_file_mmap(struct file * file
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &generic_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
Index: linux-2.6/mm/fremap.c
===================================================================
--- linux-2.6.orig/mm/fremap.c
+++ linux-2.6/mm/fremap.c
@@ -115,6 +115,7 @@ int install_file_pte(struct mm_struct *m
 
 	set_pte_at(mm, addr, pte, pgoff_to_pte(pgoff));
 	pte_val = *pte;
+
 	/*
 	 * We don't need to run update_mmu_cache() here because the "file pte"
 	 * being installed by install_file_pte() is not a real pte - it's a
@@ -128,6 +129,25 @@ out:
 	return err;
 }
 
+static int populate_range(struct mm_struct *mm, struct vm_area_struct *vma,
+			unsigned long addr, unsigned long size, pgoff_t pgoff)
+{
+	int err;
+
+	do {
+		err = install_file_pte(mm, vma, addr, pgoff, vma->vm_page_prot);
+		if (err)
+			return err;
+
+		size -= PAGE_SIZE;
+		addr += PAGE_SIZE;
+		pgoff++;
+	} while (size);
+
+	return 0;
+}
+
 /***
  * sys_remap_file_pages - remap arbitrary pages of a shared backing store
  *                        file within an existing vma.
@@ -185,41 +205,63 @@ asmlinkage long sys_remap_file_pages(uns
 	 * the single existing vma.  vm_private_data is used as a
 	 * swapout cursor in a VM_NONLINEAR vma.
 	 */
-	if (vma && (vma->vm_flags & VM_SHARED) &&
-		(!vma->vm_private_data || (vma->vm_flags & VM_NONLINEAR)) &&
-		vma->vm_ops && vma->vm_ops->populate &&
-			end > start && start >= vma->vm_start &&
-				end <= vma->vm_end) {
-
-		/* Must set VM_NONLINEAR before any pages are populated. */
-		if (pgoff != linear_page_index(vma, start) &&
-		    !(vma->vm_flags & VM_NONLINEAR)) {
-			if (!has_write_lock) {
-				up_read(&mm->mmap_sem);
-				down_write(&mm->mmap_sem);
-				has_write_lock = 1;
-				goto retry;
-			}
-			mapping = vma->vm_file->f_mapping;
-			spin_lock(&mapping->i_mmap_lock);
-			flush_dcache_mmap_lock(mapping);
-			vma->vm_flags |= VM_NONLINEAR;
-			vma_prio_tree_remove(vma, &mapping->i_mmap);
-			vma_nonlinear_insert(vma, &mapping->i_mmap_nonlinear);
-			flush_dcache_mmap_unlock(mapping);
-			spin_unlock(&mapping->i_mmap_lock);
+	if (!vma || !(vma->vm_flags & VM_SHARED))
+		goto out;
+
+	if (vma->vm_private_data && !(vma->vm_flags & VM_NONLINEAR))
+		goto out;
+
+	if ((!vma->vm_ops || !vma->vm_ops->populate) &&
+					!(vma->vm_flags & VM_CAN_NONLINEAR))
+		goto out;
+
+	if (end <= start || start < vma->vm_start || end > vma->vm_end)
+		goto out;
+
+	/* Must set VM_NONLINEAR before any pages are populated. */
+	if (!(vma->vm_flags & VM_NONLINEAR)) {
+		/* Don't need a nonlinear mapping, exit success */
+		if (pgoff == linear_page_index(vma, start)) {
+			err = 0;
+			goto out;
 		}
 
-		err = vma->vm_ops->populate(vma, start, size,
-					    vma->vm_page_prot,
-					    pgoff, flags & MAP_NONBLOCK);
-
-		/*
-		 * We can't clear VM_NONLINEAR because we'd have to do
-		 * it after ->populate completes, and that would prevent
-		 * downgrading the lock.  (Locks can't be upgraded).
-		 */
+		if (!has_write_lock) {
+			up_read(&mm->mmap_sem);
+			down_write(&mm->mmap_sem);
+			has_write_lock = 1;
+			goto retry;
+		}
+		mapping = vma->vm_file->f_mapping;
+		spin_lock(&mapping->i_mmap_lock);
+		flush_dcache_mmap_lock(mapping);
+		vma->vm_flags |= VM_NONLINEAR;
+		vma_prio_tree_remove(vma, &mapping->i_mmap);
+		vma_nonlinear_insert(vma, &mapping->i_mmap_nonlinear);
+		flush_dcache_mmap_unlock(mapping);
+		spin_unlock(&mapping->i_mmap_lock);
 	}
+
+	if (vma->vm_flags & VM_CAN_NONLINEAR) {
+		err = populate_range(mm, vma, start, size, pgoff);
+		if (!err && !(flags & MAP_NONBLOCK)) {
+			if (unlikely(has_write_lock)) {
+				downgrade_write(&mm->mmap_sem);
+				has_write_lock = 0;
+			}
+			make_pages_present(start, start+size);
+		}
+	} else
+		err = vma->vm_ops->populate(vma, start, size, vma->vm_page_prot,
+					    	pgoff, flags & MAP_NONBLOCK);
+
+	/*
+	 * We can't clear VM_NONLINEAR because we'd have to do
+	 * it after ->populate completes, and that would prevent
+	 * downgrading the lock.  (Locks can't be upgraded).
+	 */
+
+out:
 	if (likely(!has_write_lock))
 		up_read(&mm->mmap_sem);
 	else
Index: linux-2.6/fs/gfs2/ops_file.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_file.c
+++ linux-2.6/fs/gfs2/ops_file.c
@@ -396,7 +396,7 @@ static int gfs2_mmap(struct file *file, 
 	else
 		vma->vm_ops = &gfs2_vm_ops_private;
 
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE|VM_CAN_NONLINEAR;
 
 	gfs2_glock_dq_uninit(&i_gh);
 
Index: linux-2.6/fs/gfs2/ops_vm.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_vm.c
+++ linux-2.6/fs/gfs2/ops_vm.c
@@ -42,17 +42,16 @@ static void pfault_be_greedy(struct gfs2
 		iput(&ip->i_inode);
 }
 
-static struct page *gfs2_private_nopage(struct vm_area_struct *area,
-					unsigned long address, int *type)
+static struct page *gfs2_private_fault(struct fault_data *fdata)
 {
-	struct gfs2_inode *ip = GFS2_I(area->vm_file->f_mapping->host);
+	struct gfs2_inode *ip = GFS2_I(fdata->vma->vm_file->f_mapping->host);
 	struct page *result;
 
 	set_bit(GIF_PAGED, &ip->i_flags);
 
-	result = filemap_nopage(area, address, type);
+	result = filemap_fault(fdata);
 
-	if (result && result != NOPAGE_OOM)
+	if (result)
 		pfault_be_greedy(ip);
 
 	return result;
@@ -126,16 +125,13 @@ out:
 	return error;
 }
 
-static struct page *gfs2_sharewrite_nopage(struct vm_area_struct *area,
-					   unsigned long address, int *type)
+static struct page *gfs2_sharewrite_fault(struct fault_data *fdata)
 {
-	struct file *file = area->vm_file;
+	struct file *file = fdata->vma->vm_file;
 	struct gfs2_file *gf = file->private_data;
 	struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
 	struct gfs2_holder i_gh;
 	struct page *result = NULL;
-	unsigned long index = ((address - area->vm_start) >> PAGE_CACHE_SHIFT) +
-			      area->vm_pgoff;
 	int alloc_required;
 	int error;
 
@@ -146,21 +142,25 @@ static struct page *gfs2_sharewrite_nopa
 	set_bit(GIF_PAGED, &ip->i_flags);
 	set_bit(GIF_SW_PAGED, &ip->i_flags);
 
-	error = gfs2_write_alloc_required(ip, (u64)index << PAGE_CACHE_SHIFT,
-					  PAGE_CACHE_SIZE, &alloc_required);
-	if (error)
+	error = gfs2_write_alloc_required(ip,
+					(u64)fdata->pgoff << PAGE_CACHE_SHIFT,
+					PAGE_CACHE_SIZE, &alloc_required);
+	if (error) {
+		fdata->type = VM_FAULT_OOM; /* XXX: are these right? */
 		goto out;
+	}
 
 	set_bit(GFF_EXLOCK, &gf->f_flags);
-	result = filemap_nopage(area, address, type);
+	result = filemap_fault(fdata);
 	clear_bit(GFF_EXLOCK, &gf->f_flags);
-	if (!result || result == NOPAGE_OOM)
+	if (!result)
 		goto out;
 
 	if (alloc_required) {
 		error = alloc_page_backing(ip, result);
 		if (error) {
 			page_cache_release(result);
+			fdata->type = VM_FAULT_OOM;
 			result = NULL;
 			goto out;
 		}
@@ -175,10 +175,10 @@ out:
 }
 
 struct vm_operations_struct gfs2_vm_ops_private = {
-	.nopage = gfs2_private_nopage,
+	.fault = gfs2_private_fault,
 };
 
 struct vm_operations_struct gfs2_vm_ops_sharewrite = {
-	.nopage = gfs2_sharewrite_nopage,
+	.fault = gfs2_sharewrite_fault,
 };
 
Index: linux-2.6/fs/ocfs2/mmap.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/mmap.c
+++ linux-2.6/fs/ocfs2/mmap.c
@@ -42,16 +42,13 @@
 #include "inode.h"
 #include "mmap.h"
 
-static struct page *ocfs2_nopage(struct vm_area_struct * area,
-				 unsigned long address,
-				 int *type)
+static struct page *ocfs2_fault(struct fault_data *fdata)
 {
-	struct page *page = NOPAGE_SIGBUS;
+	struct page *page = NULL;
 	sigset_t blocked, oldset;
 	int ret;
 
-	mlog_entry("(area=%p, address=%lu, type=%p)\n", area, address,
-		   type);
+	mlog_entry("(area=%p, address=%lu)\n", fdata->vma, fdata->address);
 
 	/* The best way to deal with signals in this path is
 	 * to block them upfront, rather than allowing the
@@ -63,10 +60,11 @@ static struct page *ocfs2_nopage(struct 
 	ret = sigprocmask(SIG_BLOCK, &blocked, &oldset);
 	if (ret < 0) {
 		mlog_errno(ret);
+		fdata->type = VM_FAULT_SIGBUS;
 		goto out;
 	}
 
-	page = filemap_nopage(area, address, type);
+	page = filemap_fault(fdata);
 
 	ret = sigprocmask(SIG_SETMASK, &oldset, NULL);
 	if (ret < 0)
@@ -77,7 +75,7 @@ out:
 }
 
 static struct vm_operations_struct ocfs2_file_vm_ops = {
-	.nopage = ocfs2_nopage,
+	.fault = ocfs2_fault,
 };
 
 int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)
@@ -93,7 +91,7 @@ int ocfs2_mmap(struct file *file, struct
 
 	file_accessed(file);
 	vma->vm_ops = &ocfs2_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c
@@ -246,18 +246,19 @@ xfs_file_fsync(
 
 #ifdef CONFIG_XFS_DMAPI
 STATIC struct page *
-xfs_vm_nopage(
-	struct vm_area_struct	*area,
-	unsigned long		address,
-	int			*type)
+xfs_vm_fault(
+	struct fault_data *fdata)
 {
+	struct vm_area_struct *area = fdata->vma;
 	struct inode	*inode = area->vm_file->f_dentry->d_inode;
 	bhv_vnode_t	*vp = vn_from_inode(inode);
 
 	ASSERT_ALWAYS(vp->v_vfsp->vfs_flag & VFS_DMI);
-	if (XFS_SEND_MMAP(XFS_VFSTOM(vp->v_vfsp), area, 0))
+	if (XFS_SEND_MMAP(XFS_VFSTOM(vp->v_vfsp), area, 0)) {
+		fdata->type = VM_FAULT_SIGBUS;
 		return NULL;
-	return filemap_nopage(area, address, type);
+	}
+	return filemap_fault(fdata);
 }
 #endif /* CONFIG_XFS_DMAPI */
 
@@ -343,7 +344,7 @@ xfs_file_mmap(
 	struct vm_area_struct *vma)
 {
 	vma->vm_ops = &xfs_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 
 #ifdef CONFIG_XFS_DMAPI
 	if (vn_from_inode(filp->f_dentry->d_inode)->v_vfsp->vfs_flag & VFS_DMI)
@@ -502,14 +503,12 @@ const struct file_operations xfs_dir_fil
 };
 
 static struct vm_operations_struct xfs_file_vm_ops = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 };
 
 #ifdef CONFIG_XFS_DMAPI
 static struct vm_operations_struct xfs_dmapi_file_vm_ops = {
-	.nopage		= xfs_vm_nopage,
-	.populate	= filemap_populate,
+	.fault		= xfs_vm_fault,
 #ifdef HAVE_VMOP_MPROTECT
 	.mprotect	= xfs_vm_mprotect,
 #endif
Index: linux-2.6/mm/mmap.c
===================================================================
--- linux-2.6.orig/mm/mmap.c
+++ linux-2.6/mm/mmap.c
@@ -1148,12 +1148,8 @@ out:	
 		mm->locked_vm += len >> PAGE_SHIFT;
 		make_pages_present(addr, addr + len);
 	}
-	if (flags & MAP_POPULATE) {
-		up_write(&mm->mmap_sem);
-		sys_remap_file_pages(addr, len, 0,
-					pgoff, flags & MAP_NONBLOCK);
-		down_write(&mm->mmap_sem);
-	}
+	if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK))
+		make_pages_present(addr, addr + len);
 	return addr;
 
 unmap_and_free_vma:
Index: linux-2.6/ipc/shm.c
===================================================================
--- linux-2.6.orig/ipc/shm.c
+++ linux-2.6/ipc/shm.c
@@ -260,7 +260,7 @@ static struct file_operations shm_file_o
 static struct vm_operations_struct shm_vm_ops = {
 	.open	= shm_open,	/* callback for a new vm-area open */
 	.close	= shm_close,	/* callback for when the vm-area is released */
-	.nopage	= shmem_nopage,
+	.fault	= shmem_fault,
 #if defined(CONFIG_NUMA) && defined(CONFIG_SHMEM)
 	.set_policy = shmem_set_policy,
 	.get_policy = shmem_get_policy,
Index: linux-2.6/mm/filemap_xip.c
===================================================================
--- linux-2.6.orig/mm/filemap_xip.c
+++ linux-2.6/mm/filemap_xip.c
@@ -200,62 +200,63 @@ __xip_unmap (struct address_space * mapp
 }
 
 /*
- * xip_nopage() is invoked via the vma operations vector for a
+ * xip_fault() is invoked via the vma operations vector for a
  * mapped memory region to read in file data during a page fault.
  *
- * This function is derived from filemap_nopage, but used for execute in place
+ * This function is derived from filemap_fault, but used for execute in place
  */
-static struct page *
-xip_file_nopage(struct vm_area_struct * area,
-		   unsigned long address,
-		   int *type)
+static struct page *xip_file_fault(struct fault_data *fdata)
 {
+	struct vm_area_struct *area = fdata->vma;
 	struct file *file = area->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = mapping->host;
 	struct page *page;
-	unsigned long size, pgoff, endoff;
+	pgoff_t size;
 
-	pgoff = ((address - area->vm_start) >> PAGE_CACHE_SHIFT)
-		+ area->vm_pgoff;
-	endoff = ((area->vm_end - area->vm_start) >> PAGE_CACHE_SHIFT)
-		+ area->vm_pgoff;
+	/* XXX: are VM_FAULT_ codes OK? */
 
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff >= size) {
+	if (fdata->pgoff >= size) {
+		fdata->type = VM_FAULT_SIGBUS;
 		return NULL;
 	}
 
-	page = mapping->a_ops->get_xip_page(mapping, pgoff*(PAGE_SIZE/512), 0);
-	if (!IS_ERR(page)) {
+	page = mapping->a_ops->get_xip_page(mapping,
+					fdata->pgoff*(PAGE_SIZE/512), 0);
+	if (!IS_ERR(page))
 		goto out;
-	}
-	if (PTR_ERR(page) != -ENODATA)
+	if (PTR_ERR(page) != -ENODATA) {
+		fdata->type = VM_FAULT_OOM;
 		return NULL;
+	}
 
 	/* sparse block */
 	if ((area->vm_flags & (VM_WRITE | VM_MAYWRITE)) &&
 	    (area->vm_flags & (VM_SHARED| VM_MAYSHARE)) &&
 	    (!(mapping->host->i_sb->s_flags & MS_RDONLY))) {
 		/* maybe shared writable, allocate new block */
-		page = mapping->a_ops->get_xip_page (mapping,
-			pgoff*(PAGE_SIZE/512), 1);
-		if (IS_ERR(page))
+		page = mapping->a_ops->get_xip_page(mapping,
+					fdata->pgoff*(PAGE_SIZE/512), 1);
+		if (IS_ERR(page)) {
+			fdata->type = VM_FAULT_SIGBUS;
 			return NULL;
+		}
 		/* unmap page at pgoff from all other vmas */
-		__xip_unmap(mapping, pgoff);
+		__xip_unmap(mapping, fdata->pgoff);
 	} else {
 		/* not shared and writable, use ZERO_PAGE() */
-		page = ZERO_PAGE(address);
+		page = ZERO_PAGE(fdata->address);
 	}
 
 out:
+	fdata->type = VM_FAULT_MINOR;
 	page_cache_get(page);
 	return page;
 }
 
 static struct vm_operations_struct xip_file_vm_ops = {
-	.nopage         = xip_file_nopage,
+	.fault	= xip_file_fault,
 };
 
 int xip_file_mmap(struct file * file, struct vm_area_struct * vma)
@@ -264,6 +265,7 @@ int xip_file_mmap(struct file * file, st
 
 	file_accessed(file);
 	vma->vm_ops = &xip_file_vm_ops;
+	vma->vm_flags |= VM_CAN_NONLINEAR;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xip_file_mmap);
Index: linux-2.6/mm/nommu.c
===================================================================
--- linux-2.6.orig/mm/nommu.c
+++ linux-2.6/mm/nommu.c
@@ -1299,8 +1299,7 @@ int in_gate_area_no_task(unsigned long a
 	return 0;
 }
 
-struct page *filemap_nopage(struct vm_area_struct *area,
-			unsigned long address, int *type)
+struct page *filemap_fault(struct fault_data *fdata)
 {
 	BUG();
 	return NULL;
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -81,7 +81,7 @@ enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate page */
-	SGP_NOPAGE,	/* same as SGP_CACHE, return with page locked */
+	SGP_FAULT,	/* same as SGP_CACHE, return with page locked */
 };
 
 static int shmem_getpage(struct inode *inode, unsigned long idx,
@@ -1211,7 +1211,7 @@ repeat:
 done:
 	if (*pagep != filepage) {
 		*pagep = filepage;
-		if (sgp != SGP_NOPAGE)
+		if (sgp != SGP_FAULT)
 			unlock_page(filepage);
 
 	}
@@ -1225,75 +1225,31 @@ failed:
 	return error;
 }
 
-struct page *shmem_nopage(struct vm_area_struct *vma, unsigned long address, int *type)
+struct page *shmem_fault(struct fault_data *fdata)
 {
+	struct vm_area_struct *vma = fdata->vma;
 	struct inode *inode = vma->vm_file->f_dentry->d_inode;
 	struct page *page = NULL;
-	unsigned long idx;
 	int error;
 
 	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
 
-	idx = (address - vma->vm_start) >> PAGE_SHIFT;
-	idx += vma->vm_pgoff;
-	idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
-	if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
-		return NOPAGE_SIGBUS;
+	if (((loff_t)fdata->pgoff << PAGE_CACHE_SHIFT) >= i_size_read(inode)) {
+		fdata->type = VM_FAULT_SIGBUS;
+		return NULL;
+	}
 
-	error = shmem_getpage(inode, idx, &page, SGP_NOPAGE, type);
-	if (error)
-		return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
+	error = shmem_getpage(inode, fdata->pgoff, &page,
+						SGP_FAULT, &fdata->type);
+	if (error) {
+		fdata->type = ((error == -ENOMEM)?VM_FAULT_OOM:VM_FAULT_SIGBUS);
+		return NULL;
+	}
 
 	mark_page_accessed(page);
 	return page;
 }
 
-static int shmem_populate(struct vm_area_struct *vma,
-	unsigned long addr, unsigned long len,
-	pgprot_t prot, unsigned long pgoff, int nonblock)
-{
-	struct inode *inode = vma->vm_file->f_dentry->d_inode;
-	struct mm_struct *mm = vma->vm_mm;
-	enum sgp_type sgp = nonblock? SGP_QUICK: SGP_CACHE;
-	unsigned long size;
-
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (pgoff >= size || pgoff + (len >> PAGE_SHIFT) > size)
-		return -EINVAL;
-
-	while ((long) len > 0) {
-		struct page *page = NULL;
-		int err;
-		/*
-		 * Will need changing if PAGE_CACHE_SIZE != PAGE_SIZE
-		 */
-		err = shmem_getpage(inode, pgoff, &page, sgp, NULL);
-		if (err)
-			return err;
-		/* Page may still be null, but only if nonblock was set. */
-		if (page) {
-			mark_page_accessed(page);
-			err = install_page(mm, vma, addr, page, prot);
-			if (err) {
-				page_cache_release(page);
-				return err;
-			}
-		} else if (vma->vm_flags & VM_NONLINEAR) {
-			/* No page was found just because we can't read it in
-			 * now (being here implies nonblock != 0), but the page
-			 * may exist, so set the PTE to fault it in later. */
-    			err = install_file_pte(mm, vma, addr, pgoff, prot);
-			if (err)
-	    			return err;
-		}
-
-		len -= PAGE_SIZE;
-		addr += PAGE_SIZE;
-		pgoff++;
-	}
-	return 0;
-}
-
 #ifdef CONFIG_NUMA
 int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
 {
@@ -1338,7 +1294,7 @@ int shmem_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
@@ -2314,8 +2270,7 @@ static struct super_operations shmem_ops
 };
 
 static struct vm_operations_struct shmem_vm_ops = {
-	.nopage		= shmem_nopage,
-	.populate	= shmem_populate,
+	.fault		= shmem_fault,
 #ifdef CONFIG_NUMA
 	.set_policy     = shmem_set_policy,
 	.get_policy     = shmem_get_policy,
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -53,7 +53,7 @@ static inline void truncate_partial_page
 /*
  * If truncate cannot remove the fs-private metadata from the page, the page
  * becomes anonymous.  It will be left on the LRU and may even be mapped into
- * user pagetables if we're racing with filemap_nopage().
+ * user pagetables if we're racing with filemap_fault().
  *
  * We need to bale out if page->mapping is no longer equal to the original
  * mapping.  This happens a) when the VM reclaimed the page while we waited on

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [patch 4/5] mm: add vm_insert_pfn helper
  2006-10-09 16:12 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (2 preceding siblings ...)
  2006-10-09 16:12 ` [patch 3/5] mm: fault handler to replace nopage and populate Nick Piggin
@ 2006-10-09 16:12 ` Nick Piggin
  2006-10-09 21:03   ` Benjamin Herrenschmidt
  2006-10-09 16:13 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
  2006-10-09 20:57 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Benjamin Herrenschmidt
  5 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-09 16:12 UTC (permalink / raw)
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Add a vm_insert_pfn helper, so that ->fault handlers can have nopfn
functionality by installing their own pte and returning NULL.

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -1104,6 +1104,7 @@ unsigned long vmalloc_to_pfn(void *addr)
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
+int vm_insert_pfn(struct vm_area_struct *, unsigned long addr, unsigned long pfn);
 
 struct page *follow_page(struct vm_area_struct *, unsigned long address,
 			unsigned int foll_flags);
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1267,6 +1267,50 @@ int vm_insert_page(struct vm_area_struct
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+/**
+ * vm_insert_pfn - insert single pfn into user vma
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @pfn: source kernel pfn
+ *
+ * Similar to vm_insert_page, this allows drivers to insert individual pages
+ * they've allocated into a user vma. Same comments apply.
+ *
+ * This function should only be called from a vm_ops->fault handler, and
+ * in that case the handler should return NULL.
+ */
+int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+
+	BUG_ON(is_cow_mapping(vma->vm_flags));
+
+	retval = -ENOMEM;
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	/* Ok, finally just insert the thing.. */
+	entry = pfn_pte(pfn, vma->vm_page_prot);
+	set_pte_at(mm, addr, pte, entry);
+	update_mmu_cache(vma, addr, entry);
+
+	vma->vm_flags |= VM_PFNMAP;
+	retval = 0;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+
+out:
+	return retval;
+}
+EXPORT_SYMBOL(vm_insert_pfn);
+
 /*
  * maps a range of physical memory into the requested pages. the old
  * mappings are removed. any references to nonexistent pages results

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [patch 5/5] mm: merge nopfn with fault handler
  2006-10-09 16:12 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (3 preceding siblings ...)
  2006-10-09 16:12 ` [patch 4/5] mm: add vm_insert_pfn helper Nick Piggin
@ 2006-10-09 16:13 ` Nick Piggin
  2006-10-09 20:57 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Benjamin Herrenschmidt
  5 siblings, 0 replies; 34+ messages in thread
From: Nick Piggin @ 2006-10-09 16:13 UTC (permalink / raw)
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Remove ->nopfn and reimplement the only existing handler using ->fault

Index: linux-2.6/drivers/char/mspec.c
===================================================================
--- linux-2.6.orig/drivers/char/mspec.c
+++ linux-2.6/drivers/char/mspec.c
@@ -178,24 +178,25 @@ mspec_close(struct vm_area_struct *vma)
 
 
 /*
- * mspec_nopfn
+ * mspec_fault
  *
  * Creates a mspec page and maps it to user space.
  */
-static unsigned long
-mspec_nopfn(struct vm_area_struct *vma, unsigned long address)
+static struct page *
+mspec_fault(struct fault_data *fdata)
 {
 	unsigned long paddr, maddr;
 	unsigned long pfn;
-	int index;
-	struct vma_data *vdata = vma->vm_private_data;
+	int index = fdata->pgoff;
+	struct vma_data *vdata = fdata->vma->vm_private_data;
 
-	index = (address - vma->vm_start) >> PAGE_SHIFT;
 	maddr = (volatile unsigned long) vdata->maddr[index];
 	if (maddr == 0) {
 		maddr = uncached_alloc_page(numa_node_id());
-		if (maddr == 0)
-			return NOPFN_OOM;
+		if (maddr == 0) {
+			fdata->type = VM_FAULT_OOM;
+			return NULL;
+		}
 
 		spin_lock(&vdata->lock);
 		if (vdata->maddr[index] == 0) {
@@ -215,13 +216,21 @@ mspec_nopfn(struct vm_area_struct *vma, 
 
 	pfn = paddr >> PAGE_SHIFT;
 
-	return pfn;
+	fdata->type = VM_FAULT_MINOR;
+	/*
+	 * vm_insert_pfn can fail with -EBUSY, but in that case it will
+	 * be because another thread has installed the pte first, so it
+	 * is no problem.
+	 */
+	vm_insert_pfn(fdata->vma, fdata->address, pfn);
+
+	return NULL;
 }
 
 static struct vm_operations_struct mspec_vm_ops = {
 	.open = mspec_open,
 	.close = mspec_close,
-	.nopfn = mspec_nopfn
+	.fault = mspec_fault,
 };
 
 /*
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -223,7 +223,6 @@ struct vm_operations_struct {
 	void (*close)(struct vm_area_struct * area);
 	struct page * (*fault)(struct fault_data * data);
 	struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
-	unsigned long (*nopfn)(struct vm_area_struct * area, unsigned long address);
 	int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
 
 	/* notification that a previously read-only page is about to become
@@ -619,12 +618,6 @@ static inline int page_mapped(struct pag
 #define NOPAGE_OOM	((struct page *) (-1))
 
 /*
- * Error return values for the *_nopfn functions
- */
-#define NOPFN_SIGBUS	((unsigned long) -1)
-#define NOPFN_OOM	((unsigned long) -2)
-
-/*
  * Different kinds of faults, as returned by handle_mm_fault().
  * Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1278,6 +1278,11 @@ EXPORT_SYMBOL(vm_insert_page);
  *
  * This function should only be called from a vm_ops->fault handler, and
  * in that case the handler should return NULL.
+ *
+ * vma cannot be a COW mapping.
+ *
+ * As this is called only for pages that do not currently exist, we
+ * do not need to flush old virtual caches or the TLB.
  */
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 {
@@ -2335,54 +2340,6 @@ static int do_nonlinear_fault(struct mm_
 }
 
 /*
- * do_no_pfn() tries to create a new page mapping for a page without
- * a struct_page backing it
- *
- * As this is called only for pages that do not currently exist, we
- * do not need to flush old virtual caches or the TLB.
- *
- * We enter with non-exclusive mmap_sem (to exclude vma changes,
- * but allow concurrent faults), and pte mapped but not yet locked.
- * We return with mmap_sem still held, but pte unmapped and unlocked.
- *
- * It is expected that the ->nopfn handler always returns the same pfn
- * for a given virtual mapping.
- *
- * Mark this `noinline' to prevent it from bloating the main pagefault code.
- */
-static noinline int do_no_pfn(struct mm_struct *mm, struct vm_area_struct *vma,
-		     unsigned long address, pte_t *page_table, pmd_t *pmd,
-		     int write_access)
-{
-	spinlock_t *ptl;
-	pte_t entry;
-	unsigned long pfn;
-	int ret = VM_FAULT_MINOR;
-
-	pte_unmap(page_table);
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
-
-	pfn = vma->vm_ops->nopfn(vma, address & PAGE_MASK);
-	if (pfn == NOPFN_OOM)
-		return VM_FAULT_OOM;
-	if (pfn == NOPFN_SIGBUS)
-		return VM_FAULT_SIGBUS;
-
-	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-
-	/* Only go through if we didn't race with anybody else... */
-	if (pte_none(*page_table)) {
-		entry = pfn_pte(pfn, vma->vm_page_prot);
-		if (write_access)
-			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-		set_pte_at(mm, address, page_table, entry);
-	}
-	pte_unmap_unlock(page_table, ptl);
-	return ret;
-}
-
-/*
  * Fault of a previously existing named mapping. Repopulate the pte
  * from the encoded file_pte if possible. This enables swappable
  * nonlinear vmas.
@@ -2453,9 +2410,6 @@ static inline int handle_pte_fault(struc
 				if (vma->vm_ops->fault || vma->vm_ops->nopage)
 					return do_linear_fault(mm, vma, address,
 						pte, pmd, write_access, entry);
-				if (unlikely(vma->vm_ops->nopfn))
-					return do_no_pfn(mm, vma, address, pte,
-							 pmd, write_access);
 			}
 			return do_anonymous_page(mm, vma, address,
 						 pte, pmd, write_access);

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-09 16:12 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (4 preceding siblings ...)
  2006-10-09 16:13 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
@ 2006-10-09 20:57 ` Benjamin Herrenschmidt
  2006-10-09 21:00   ` Benjamin Herrenschmidt
  5 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-09 20:57 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Hugh Dickins, Linux Memory Management, Andrew Morton,
	Jes Sorensen, Linux Kernel, Ingo Molnar

On Mon, 2006-10-09 at 18:12 +0200, Nick Piggin wrote:
> OK, I've cleaned up and further improved this patchset, removed duplication
> while retaining legacy nopage handling, restored page_mkwrite to the ->fault
> path (due to lack of users upstream to attempt a conversion), converted the
> rest of the filesystems to use ->fault, restored MAP_POPULATE and population
> of remap_file_pages pages, replaced nopfn completely, and removed
> NOPAGE_REFAULT because that can be done easily with ->fault.

What is the replacement ?

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-09 20:57 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Benjamin Herrenschmidt
@ 2006-10-09 21:00   ` Benjamin Herrenschmidt
  2006-10-10  0:53     ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-09 21:00 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Hugh Dickins, Linux Memory Management, Andrew Morton,
	Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, 2006-10-10 at 06:57 +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2006-10-09 at 18:12 +0200, Nick Piggin wrote:
> > OK, I've cleaned up and further improved this patchset, removed duplication
> > while retaining legacy nopage handling, restored page_mkwrite to the ->fault
> > path (due to lack of users upstream to attempt a conversion), converted the
> > rest of the filesystems to use ->fault, restored MAP_POPULATE and population
> > of remap_file_pages pages, replaced nopfn completely, and removed
> > NOPAGE_REFAULT because that can be done easily with ->fault.
> 
> What is the replacement ?

I see ... so we now use PTR_ERR to return errors and NULL for refault...
good for me but Andrew may want more...

Ben



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 4/5] mm: add vm_insert_pfn helper
  2006-10-09 16:12 ` [patch 4/5] mm: add vm_insert_pfn helper Nick Piggin
@ 2006-10-09 21:03   ` Benjamin Herrenschmidt
  2006-10-10  0:42     ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-09 21:03 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Hugh Dickins, Linux Memory Management, Andrew Morton,
	Jes Sorensen, Linux Kernel, Ingo Molnar


> +	vma->vm_flags |= VM_PFNMAP;

I wouldn't do that here. I would leave that to the caller (and set it
before setting the PTE, along with a wmb maybe, to make sure it's visible
before the PTE, no?)

Cheers,
Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-09 16:12 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
@ 2006-10-09 21:10   ` Mark Fasheh
  2006-10-10  1:10     ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Mark Fasheh @ 2006-10-09 21:10 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Hugh Dickins, Linux Memory Management, Andrew Morton,
	Jes Sorensen, Benjamin Herrenschmidt, Linux Kernel, Ingo Molnar

Hi Nick,

On Mon, Oct 09, 2006 at 06:12:26PM +0200, Nick Piggin wrote:
> Complexity and documentation issues aside, the locking protocol fails
> in the case where we would like to invalidate pagecache inside i_size.
That pretty much describes part of what ocfs2_data_convert_worker() does.
It's called when another node wants to take a lock at an incompatible level
on an inode's data.

This involves up to two steps, depending on the level of the lock requested.

1) It always syncs dirty data.

2) If it's dropping due to writes on another node, then pages will be
   invalidated and mappings torn down.


There's actually an ocfs2 patch to support shared writeable mappings via
the ->page_mkwrite() callback, but I haven't pushed it upstream due to a bug
I found during some later testing. I believe the bug is a VM issue, and your
description of the race Andrea identified leads me to wonder if you all
might have just found it and fixed it for me :)


In short, I have an MPI test program which rotates through a set of
processes which have mmaped a pre-formatted file. One process writes some
data, the rest verify that they see the new data. When I run multiple
processes on multiple nodes, I will sometimes find that one of the processes
fails because it sees stale data.


FWIW, the overall approach taken in the patch below seems fine to me, though
I'm no VM expert :)

Not having ocfs2_data_convert_worker() call unmap_mapping_range() directly
is OK as long as the intent of the function is preserved. You seem to be
doing this by having truncate_inode_pages() unmap instead.

Thanks,
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 4/5] mm: add vm_insert_pfn helper
  2006-10-09 21:03   ` Benjamin Herrenschmidt
@ 2006-10-10  0:42     ` Nick Piggin
  2006-10-10  1:11       ` faults and signals Benjamin Herrenschmidt
  2006-10-10  1:16       ` ptrace and pfn mappings Benjamin Herrenschmidt
  0 siblings, 2 replies; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  0:42 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

Benjamin Herrenschmidt wrote:
>>+	vma->vm_flags |= VM_PFNMAP;
> 
> 
> I wouldn't do that here. I would keep that to the caller (and set it
> before setting the PTE along with a wmb maybe to make sure it's visible
> before the PTE no ?)

Oops, good catch. You're right.

We probably don't need a barrier because we take the ptl lock
around setting the pte, and the only other readers who care should
be ones that also take the same ptl lock.

-- 
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-09 21:00   ` Benjamin Herrenschmidt
@ 2006-10-10  0:53     ` Nick Piggin
  0 siblings, 0 replies; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  0:53 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

Benjamin Herrenschmidt wrote:
> On Tue, 2006-10-10 at 06:57 +1000, Benjamin Herrenschmidt wrote:
> 
>>On Mon, 2006-10-09 at 18:12 +0200, Nick Piggin wrote:
>>
>>>OK, I've cleaned up and further improved this patchset, removed duplication
>>>while retaining legacy nopage handling, restored page_mkwrite to the ->fault
>>>path (due to lack of users upstream to attempt a conversion), converted the
>>>rest of the filesystems to use ->fault, restored MAP_POPULATE and population
>>>of remap_file_pages pages, replaced nopfn completely, and removed
>>>NOPAGE_REFAULT because that can be done easily with ->fault.
>>
>>What is the replacement ?
> 
> 
> I see ... so we now use PTR_ERR to return errors and NULL for refault...
> good for me but Andrew may want more...

The fault handler puts its desired return type into fault_data.type, and
returns NULL if there is no page to install, otherwise the pointer to
the struct page.

So you'd just set VM_FAULT_MINOR and return NULL, after doing the
vm_insert_pfn thing.
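
For example, a pfn-backed handler boils down to something like this
(sketch only; the mydrv names are made up):

	static struct page *mydrv_fault(struct fault_data *fdata)
	{
		struct mydrv_data *d = fdata->vma->vm_private_data;
		unsigned long pfn = d->base_pfn + fdata->pgoff;

		fdata->type = VM_FAULT_MINOR;
		/* -EBUSY here only means another thread raced us in */
		vm_insert_pfn(fdata->vma, fdata->address, pfn);
		return NULL;	/* no struct page for the core to install */
	}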

-- 
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-09 21:10   ` Mark Fasheh
@ 2006-10-10  1:10     ` Nick Piggin
  2006-10-11 18:34       ` Mark Fasheh
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  1:10 UTC (permalink / raw)
  To: Mark Fasheh
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

Mark Fasheh wrote:
> Hi Nick,
> 
> On Mon, Oct 09, 2006 at 06:12:26PM +0200, Nick Piggin wrote:
> 
>>Complexity and documentation issues aside, the locking protocol fails
>>in the case where we would like to invalidate pagecache inside i_size.
> 
> That pretty much describes part of what ocfs2_data_convert_worker() does.
> It's called when another node wants to take a lock at an incompatible level
> on an inodes data.
> 
> This involves up to two steps, depending on the level of the lock requested.
> 
> 1) It always syncs dirty data.
> 
> 2) If it's dropping due to writes on another node, then pages will be
>    invalidated and mappings torn down.

Yep, your unmap_mapping_range and invalidate_inode_pages2 calls in there
are both subject to this bug (provided the pages being invalidated are visible
and able to be mmap()ed).

> There's actually an ocfs2 patch to support shared writeable mappings in via
> the ->page_mkwrite() callback, but I haven't pushed it upstream due to a bug
> I found during some later testing. I believe the bug is a VM issue, and your
> description of the race Andrea identified leads me to wonder if you all
> might have just found it and fixed it for me :)
> 
> 
> In short, I have an MPI test program which rotates through a set of
> processes which have mmaped a pre-formatted file. One process writes some
> data, the rest verify that they see the new data. When I run multiple
> processes on multiple nodes, I will sometimes find that one of the processes
> fails because it sees stale data.

This is roughly similar to what the test program I wrote to reproduce
the bug does, so it wouldn't surprise me.

> FWIW, the overall approach taken in the patch below seems fine to me, though
> I'm no VM expert :)
> 
> Not having ocfs2_data_convert_worker() call unmap_mapping_range() directly,
> is ok as long as the intent of the function is preserved. You seem to be
> doing this by having truncate_inode_pages() unmap instead.

truncate_inode_pages now unmaps the pages internally, so you should
be OK there. If you're expecting this to happen frequently with mapped
pages, it is probably more efficient to call the full unmap_mapping_range
before you call truncate_inode_pages...
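
Roughly, for the invalidate-everything case:

	/* zap all user ptes up front, then drop the pagecache */
	unmap_mapping_range(mapping, 0, 0, 1);
	truncate_inode_pages(mapping, 0);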

[ Somewhere on my todo list is a cleanup of mm/truncate.c ;) ]

If you want a stable patchset for testing, the previous one to linux-mm
starting with "[patch 1/3] mm: fault vs invalidate/truncate check" went
through some stress testing here...

-- 
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* faults and signals
  2006-10-10  0:42     ` Nick Piggin
@ 2006-10-10  1:11       ` Benjamin Herrenschmidt
  2006-10-10  1:20         ` Nick Piggin
  2006-10-10  1:16       ` ptrace and pfn mappings Benjamin Herrenschmidt
  1 sibling, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  1:11 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

OK, so we made some good progress... now remains my pet issue... faults
and signals :)

So in SPUfs, I have cases where apps trying to access user-space
registers of an SPE that is scheduled out might end up blocking a long
time in the page fault handler. I'm not 100% sure about DRM at this
point but I suppose they might have good use of a similar ability I'm
trying to provide which is for a page fault to be interruptible. That
would help with various cases of processes stuck in the kernel for a long time
(or forever if something goes wrong).

I think your new fault() thingy is the perfect way to get there. In the
normal page fault case, a signal is easy to handle: just refault
(NOPAGE_REFAULT without your patch, or return NULL with your patch,
though we might want to define a -EINTR result explicitly) and the
signals will be handled on the return to userland path. However, we
can't really handle them in get_user_pages() nor on the kernel's own faults
(__get_user etc...), at least not until we define versions of these that
can return -EINTR (we might want to do that for get_user_pages, but
let's assume not for now).

Thus what is needed is a way to inform the page fault handler whether it
can be interruptible or not. This could be done using the flags you have
in there, or some other bits in the argument structure.

That way, faults could basically check if coming from userland (testing
the ptregs) and set interruptible in that case (and possibly a flag to
get_user_pages() saying it can be interruptible, for use by drivers that
can deal with it).

At the vm_ops level, existing things are fine, they are not
interruptible, and I can modify spufs to check that new flag and return
-EINTR on signals when waiting.

In fact, it might be that filemap and some filesystems might even want
to handle interruptible page faults :) but that's a different matter.
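
To illustrate what I'm after on the spufs side (very much a sketch:
the flag name and the spu_* helpers are invented):

	static struct page *spufs_ps_fault(struct fault_data *fdata)
	{
		struct spu_context *ctx = fdata->vma->vm_file->private_data;

		/* wait for the SPE to be scheduled back in, bailing out
		 * on a signal only if the caller said we may */
		if (spu_wait_for_spe(ctx,
				fdata->flags & FAULT_FLAG_INTERRUPTIBLE)) {
			/* just refault; the signal gets handled on the
			 * way back to userland */
			fdata->type = VM_FAULT_MINOR;
			return NULL;
		}
		fdata->type = VM_FAULT_MINOR;
		vm_insert_pfn(fdata->vma, fdata->address,
			      spu_ps_pfn(ctx, fdata->pgoff));
		return NULL;
	}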

Ben.




^ permalink raw reply	[flat|nested] 34+ messages in thread

* ptrace and pfn mappings
  2006-10-10  0:42     ` Nick Piggin
  2006-10-10  1:11       ` faults and signals Benjamin Herrenschmidt
@ 2006-10-10  1:16       ` Benjamin Herrenschmidt
  2006-10-10  2:23         ` Nick Piggin
  2006-10-10 12:31         ` Christoph Hellwig
  1 sibling, 2 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  1:16 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

And the last of my "issues" here:

get_user_pages() can't handle pfn mappings, thus access_process_vm()
can't, and thus ptrace can't. When they were limited to dodgy /dev/mem
things, it was probably ok. But with more drivers needing that, like the
DRM, sound drivers, and now with SPU problem state registers and local
store mapped that way, it's becoming a real issue to be unable to
access any of those mappings from gdb.

The "easy" way out I can see, but it may have all sort of bad side
effects I haven't thought about at this point, is to switch the mm in
access_process_vm (at least if it's hitting such a VMA).

That means that the ptracing process will temporarily be running in the
kernel using a task->active_mm different from task->mm which might have
funny side effects due to assumptions that this won't happen here or
there, though I don't see any fundamental reasons why it couldn't be
made to work.

What do you guys think? Any better ideas? The problem with mappings
like what SPUfs or the DRM want is that they can change (be remapped
between HW and backup memory, as described in previous emails), thus we
don't want to get struct pages even if available and peek at them as
they might not be valid anymore, same with PFNs (we could imagine
ioremap'ing those PFN's but that would be racy too). The only way that
is guaranteed not to be racy is to do exactly what a user does, that is,
do user accesses via the target process's vm itself....

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: faults and signals
  2006-10-10  1:11       ` faults and signals Benjamin Herrenschmidt
@ 2006-10-10  1:20         ` Nick Piggin
  2006-10-10  1:58           ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  1:20 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

Benjamin Herrenschmidt wrote:
> OK, so we made some good progress... now remains my pet issue... faults
> and signals :)
> 
> So in SPUfs, I have cases where apps trying to access user-space
> registers of an SPE that is scheduled out might end up blocking a long
> time in the page fault handler. I'm not 100% sure about DRM at this
> point but I suppose they might have good use of a similar ability I'm
> trying to provide which is for a page fault to be interruptible. That
> would help with various cases of processes stuck in the kernel for a long time
> (or forever if something goes wrong).
> 
> I think your new fault() thingy is the perfect way to get there. In the
> normal page fault case, a signal is easy to handle: just refault
> (NOPAGE_REFAULT without your patch, or return NULL with your patch,
> though we might want to define a -EINTR result explicitly) and the
> signals will be handled on the return to userland path. However, we
> can't really handle them in get_user_pages() nor on the kernel's own faults
> (__get_user etc...), at least not until we define versions of these that
> can return -EINTR (we might want to do that for get_user_pages, but
> let's assume not for now).
> 
> Thus what is needed is a way to inform the page fault handler whether it
> can be interruptible or not. This could be done using the flags you have
> in there, or some other bits in the argument structure.

Yep, the flags field should be able to do that for you. Since we have
the handle_mm_fault wrapper for machine faults, it isn't too hard to
change the arguments: we should probably turn `write_access` into a
flag so we don't have to push too many arguments onto the stack.

This way we can distinguish get_user_pages faults. And your
architecture will have to switch over to using __handle_mm_fault, and
distinguish kernel faults. Something like that?
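
Sketching it (the flag names are invented, nothing is set in stone):

	#define FAULT_FLAG_WRITE		0x01
	#define FAULT_FLAG_INTERRUPTIBLE	0x02

	/* the old wrapper keeps existing machine fault callers working */
	static inline int handle_mm_fault(struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long address,
			int write_access)
	{
		return __handle_mm_fault(mm, vma, address,
				write_access ? FAULT_FLAG_WRITE : 0);
	}

An arch that wants interruptible faults would then call
__handle_mm_fault() directly, OR-ing in FAULT_FLAG_INTERRUPTIBLE when
user_mode(regs) says the fault came from userland.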

> That way, faults could basically check if coming from userland (testing
> the ptregs) and set interruptible in that case (and possibly a flag to
> get_user_pages() saying it can be interruptible, for use by drivers that
> can deal with it).
> 
> At the vm_ops level, existing things are fine, they are not
> interruptible, and I can modify spufs to check that new flag and return
> -EINTR on signals when waiting.
> 
> In fact, it might be that filemap and some filesystems might even want
> to handle interruptible page faults :) but that's a different matter.

-- 
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: faults and signals
  2006-10-10  1:20         ` Nick Piggin
@ 2006-10-10  1:58           ` Benjamin Herrenschmidt
  2006-10-10  2:00             ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  1:58 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> Yep, the flags field should be able to do that for you. Since we have
> the handle_mm_fault wrapper for machine faults, it isn't too hard to
> change the arguments: we should probably turn `write_access` into a
> flag so we don't have to push too many arguments onto the stack.
> 
> This way we can distinguish get_user_pages faults. And your
> architecture will have to switch over to using __handle_mm_fault, and
> distinguish kernel faults. Something like that?

Yes. Tho it's also fairly easy to just add an argument to the wrapper
and fix all archs... but yeah, I will play around.

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: faults and signals
  2006-10-10  1:58           ` Benjamin Herrenschmidt
@ 2006-10-10  2:00             ` Benjamin Herrenschmidt
  2006-10-10  2:04               ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  2:00 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> Yes. Tho it's also fairly easy to just add an argument to the wrapper
> and fix all archs... but yeah, I will play around.

Actually, user_mode(ptregs) is standard, we could add a ptregs arg to
the wrapper... or just get rid of it and fix archs, it's not like it was
that hard. There aren't that many callers :)
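
To be concrete, each arch fault site would end up doing something like
(reusing the invented flag names from earlier, is_write being whatever
the arch already computed for the access type):

	fault = handle_mm_fault(mm, vma, address,
			(is_write ? FAULT_FLAG_WRITE : 0) |
			(user_mode(regs) ? FAULT_FLAG_INTERRUPTIBLE : 0));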

Is there any reason why we actually need that wrapper ?

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: faults and signals
  2006-10-10  2:00             ` Benjamin Herrenschmidt
@ 2006-10-10  2:04               ` Nick Piggin
  2006-10-10  2:07                 ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  2:04 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 12:00:01PM +1000, Benjamin Herrenschmidt wrote:
> 
> > Yes. Tho it's also fairly easy to just add an argument to the wrapper
> > and fix all archs... but yeah, I will play around.
> 
> Actually, user_mode(ptregs) is standard, we could add a ptregs arg to
> the wrapper... or just get rid of it and fix archs, it's not like it was
> that hard. There aren't that many callers :)
> 
> Is there any reason why we actually need that wrapper ?

Not much reason. If you go through and fix up all callers then
that should be fine.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: faults and signals
  2006-10-10  2:04               ` Nick Piggin
@ 2006-10-10  2:07                 ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  2:07 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, 2006-10-10 at 04:04 +0200, Nick Piggin wrote:
> On Tue, Oct 10, 2006 at 12:00:01PM +1000, Benjamin Herrenschmidt wrote:
> > 
> > > Yes. Tho it's also fairly easy to just add an argument to the wrapper
> > > and fix all archs... but yeah, I will play around.
> > 
> > Actually, user_mode(ptregs) is standard, we could add a ptregs arg to
> > the wrapper... or just get rid of it and fix archs, it's not like it was
> > that hard. There aren't that many callers :)
> > 
> > Is there any reason why we actually need that wrapper ?
> 
> Not much reason. If you go through and fix up all callers then
> that should be fine.

I suppose I can do that... I'll give it a go once all your new stuff is
in -mm and I've started adapting SPUfs to it :)

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  1:16       ` ptrace and pfn mappings Benjamin Herrenschmidt
@ 2006-10-10  2:23         ` Nick Piggin
  2006-10-10  2:47           ` Benjamin Herrenschmidt
  2006-10-10 12:31         ` Christoph Hellwig
  1 sibling, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  2:23 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 11:16:27AM +1000, Benjamin Herrenschmidt wrote:
> And the last of my "issues" here:
> 
> get_user_pages() can't handle pfn mappings, thus access_process_vm()
> can't, and thus ptrace can't. When they were limited to dodgy /dev/mem
> things, it was probably ok. But with more drivers needing that, like the
> DRM, sound drivers, and now with SPU problem state registers and local
> store mapped that way, it's becoming a real issue to be unable to
> access any of those mappings from gdb.
> 
> The "easy" way out I can see, but it may have all sort of bad side
> effects I haven't thought about at this point, is to switch the mm in
> access_process_vm (at least if it's hitting such a VMA).

Switch the mm and do a copy_from_user? (rather than the GUP).
Sounds pretty ugly :P

Can you do a get_user_pfns, and do a copy_from_user on the pfn
addresses? In other words, is the memory / mmio at the end of a
given address the same from the perspective of any process? It
is for physical memory of course, which is why get_user_pages
works...

> That means that the ptracing process will temporarily be running in the
> kernel using a task->active_mm different from task->mm which might have
> funny side effects due to assumptions that this won't happen here or
> there, though I don't see any fundamental reasons why it couldn't be
> made to work.
> 
> What do you guys think? Any better ideas? The problem with mappings
> like what SPUfs or the DRM want is that they can change (be remapped
> between HW and backup memory, as described in previous emails), thus we
> don't want to get struct pages even if available and peek at them as
> they might not be valid anymore, same with PFNs (we could imagine
> ioremap'ing those PFN's but that would be racy too). The only way that
> is guaranteed not to be racy is to do exactly what a user does, that is,
> do user accesses via the target process's vm itself....

What if you hold your per-object lock over the operation? (I guess
it would have to nest *inside* mmap_sem, but that should be OK).

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  2:23         ` Nick Piggin
@ 2006-10-10  2:47           ` Benjamin Herrenschmidt
  2006-10-10  2:56             ` Benjamin Herrenschmidt
  2006-10-10  2:58             ` Nick Piggin
  0 siblings, 2 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  2:47 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> Switch the mm and do a copy_from_user? (rather than the GUP).
> Sounds pretty ugly :P
> 
> Can you do a get_user_pfns, and do a copy_from_user on the pfn
> addresses? In other words, is the memory / mmio at the end of a
> given address the same from the perspective of any process? It
> is for physical memory of course, which is why get_user_pages
> works...

Doesn't help with the raciness.

> > That means that the ptracing process will temporarily be running in the
> > kernel using a task->active_mm different from task->mm which might have
> > funny side effects due to assumptions that this won't happen here or
> > there, though I don't see any fundamental reasons why it couldn't be
> > made to work.
> > 
> > That do you guys think ? Any better idea ? The problem with mappings
> > like what SPUfs or the DRM want is that they can change (be remapped
> > between HW and backup memory, as described in previous emails), thus we
> > don't want to get struct pages even if available and peek at them as
> > they might not be valid anymore, same with PFNs (we could imagine
> > ioremap'ing those PFN's but that would be racy too). The only way that
> > is guaranteed not to be racy is to do exactly what a user do, that is do
> > user accesses via the target process vm itself....
> 
> What if you hold your per-object lock over the operation? (I guess
> it would have to nest *inside* mmap_sem, but that should be OK).

Over the ptrace operation ? how so ?

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  2:47           ` Benjamin Herrenschmidt
@ 2006-10-10  2:56             ` Benjamin Herrenschmidt
  2006-10-10  3:03               ` Nick Piggin
  2006-10-10  2:58             ` Nick Piggin
  1 sibling, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  2:56 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> > What if you hold your per-object lock over the operation? (I guess
> > it would have to nest *inside* mmap_sem, but that should be OK).
> 
> Over the ptrace operation ? how so ?
> 

Or do you mean the migration ? Well, we have so far managed to avoid
walking the VMAs and thus avoid the mmap_sem during that migration, so
yes, we do take the object lock but not the mmap_sem.

The problem is that a get_user_pfn() (or get_user_pages if we are on the
memory backstore; besides, how do you decide from access_process_vm
which one to call?) will peek at the PTEs and just use them if they are
populated. Thus, if the migration races with it, we are stuffed.

Even if we took the mmap_sem for writing during the migration on all
affected VMAs (which I'm trying very hard to avoid, it's a very risky
thing to do taking it on multiple VMAs, think about lock ordering
issues, and it's just plain horrid), we would still at one point return
an array of struct pages or pfn's that may be out of date unless we
-also- do all the copies / accesses with that semaphore held. Now if
that is the case, you gotta hope that the ptracing process doesn't also
have one of those things mmap'ed (and in the case of SPUfs/gdb, it will,
to get to the spu program text afaik), or the copy_to_user that returns the
data read will be deadly.

So all I see is more cans of worms... the only thing that would "just
work" would be to switch the mm and just do the accesses, letting normal
faults do their job. This needs a temporary page in kernel memory to
copy to/from, but that's fine. The SPU might get context switched in the
meantime, but that's not a problem, the data will be right.
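
(Roughly, assuming something like aio's use_mm()/unuse_mm() were
usable here, with tsk being the target task and kbuf the temporary
kernel page:

	use_mm(tsk->mm);		/* run with the target's mm */
	left = copy_from_user(kbuf, (void __user *)addr, bytes);
	unuse_mm(tsk->mm);

and the write direction doing copy_to_user the same way.)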

So yes, there might be other issues with switching the active_mm like
that, and I have yet to find them (if any come to mind, please
share), but it doesn't at this point seem worse than the
get_user_page/pfn situation.

(We could also make sure the whole switch/copy/switchback is done while
holding the mmap sem of both current and target mm's for writing to
avoid more complications I suppose, if we always take the ptracer first,
the target being sigstopped, we should avoid AB/BA type deadlock
scenarios unless I've missed something subtle).

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  2:47           ` Benjamin Herrenschmidt
  2006-10-10  2:56             ` Benjamin Herrenschmidt
@ 2006-10-10  2:58             ` Nick Piggin
  2006-10-10  3:40               ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  2:58 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 12:47:46PM +1000, Benjamin Herrenschmidt wrote:
> 
> > Switch the mm and do a copy_from_user? (rather than the GUP).
> > Sounds pretty ugly :P
> > 
> > Can you do a get_user_pfns, and do a copy_from_user on the pfn
> > addresses? In other words, is the memory / mmio at the end of a
> > given address the same from the perspective of any process? It
> > is for physical memory of course, which is why get_user_pages
> > works...
> 
> Doesn't help with the raciness.

I don't understand what the raciness is that you can solve by accessing
it from the target process's mm?

> > What if you hold your per-object lock over the operation? (I guess
> > it would have to nest *inside* mmap_sem, but that should be OK).
> 
> Over the ptrace operation ? how so ?

You just have to hold it over access_process_vm, AFAIKS. Once it
is copied into the kernel buffer that's done. Maybe I misunderstood
what the race is?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  2:56             ` Benjamin Herrenschmidt
@ 2006-10-10  3:03               ` Nick Piggin
  2006-10-10  3:42                 ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  3:03 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 12:56:08PM +1000, Benjamin Herrenschmidt wrote:
> 
> > > What if you hold your per-object lock over the operation? (I guess
> > > it would have to nest *inside* mmap_sem, but that should be OK).
> > 
> > Over the ptrace operation ? how so ?
> > 
> 
> Or do you mean the migration ? Well, we have so far managed to avoid
> walking the VMAs and thus avoid the mmap_sem during that migration, so
> yes, we do take the object lock but not the mmap_sem.
> 
> The problem is that a get_user_pfn() (or get_user_pages if we are on the
> memory backstore; besides, how do you decide from access_process_vm
> which one to call?) will peek at the PTEs and just use them if they are
> populated. Thus, if the migration races with it, we are stuffed.

Hold your per-object lock? I'm not talking about using mmap_sem for
migration, but the per-object lock in access_process_vm. I thought
this prevented migration?

> 
> Even if we took the mmap_sem for writing during the migration on all
> affected VMAs (which I'm trying very hard to avoid, it's a very risky
> thing to do taking it on multiple VMAs, think about lock ordering
> issues, and it's just plain horrid), we would still at one point return
> an array of struct pages or pfn's that may be out of date unless we
> -also- do all the copies / accesses with that semaphore held. Now if
> that is the case, you gotta hope that the ptracing process doesn't also
> have one of those things mmap'ed (and in the case of SPUfs/gdb, it will,
> to get to the spu program text afaik), or the copy_to_user that returns the
> data read will be deadly.

OK, just do one pfn at a time. For ptrace that is fine: access_process_vm
already copies from the source into a kernel buffer, then from the
kernel buffer into the target.
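
The loop shape would be something like this (a sketch; ->access is an
invented per-vma hook that would do the lookup and single-page copy
under whatever lock keeps the mapping stable):

	while (len) {
		int bytes = min_t(int, len,
				PAGE_SIZE - offset_in_page(addr));

		if (vma->vm_ops->access(vma, addr, buf, bytes, write) < 0)
			break;
		len -= bytes;
		addr += bytes;
		buf += bytes;
	}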

> So all I see is more cans of worms... the only thing that would "just
> work" would be to switch the mm and just do the accesses, letting normal
> faults do their job. This needs a temporary page in kernel memory to
> copy to/from, but that's fine. The SPU might get context switched in the
> meantime, but that's not a problem, the data will be right.
> 
> So yes, there might be other issues with switching the active_mm like
> that, and I have yet to find them (if any come to mind, please
> share), but it doesn't at this point seem worse than the
> get_user_page/pfn situation.
> 
> (We could also make sure the whole switch/copy/switchback is done while
> holding the mmap sem of both current and target mm's for writing to
> avoid more complications I suppose, if we always take the ptracer first,
> the target being sigstopped, we should avoid AB/BA type deadlock
> scenarios unless I've missed something subtle).

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  2:58             ` Nick Piggin
@ 2006-10-10  3:40               ` Benjamin Herrenschmidt
  2006-10-10  3:46                 ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  3:40 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, 2006-10-10 at 04:58 +0200, Nick Piggin wrote:
> On Tue, Oct 10, 2006 at 12:47:46PM +1000, Benjamin Herrenschmidt wrote:
> > 
> > > Switch the mm and do a copy_from_user? (rather than the GUP).
> > > Sounds pretty ugly :P
> > > 
> > > Can you do a get_user_pfns, and do a copy_from_user on the pfn
> > > addresses? In other words, is the memory / mmio at the end of a
> > > given address the same from the perspective of any process? It
> > > is for physical memory of course, which is why get_user_pages
> > > works...
> > 
> > Doesn't help with the raciness.
> 
> > I don't understand what the raciness is that you can solve by accessing
> it from the target process's mm?

You get a struct page or a pfn, you race with the migration, and access
something that isn't the "current" one. Doing an actual access goes
through the normal mmu path, which guarantees that after the migration
has finished its unmap_mapping_range(), no access via those old PTEs is
possible (TLBs have been flushed etc.). We don't get such a guarantee if
we get a struct page or a pfn and go peek at it.

> > > What if you hold your per-object lock over the operation? (I guess
> > > it would have to nest *inside* mmap_sem, but that should be OK).
> > 
> > Over the ptrace operation ? how so ?
> 
> You just have to hold it over access_process_vm, AFAIKS. Once it
> is copied into the kernel buffer that's done. Maybe I misunderstood
> what the race is?

But since when does ptrace know about the various private locks of the
objects backing vmas?

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  3:03               ` Nick Piggin
@ 2006-10-10  3:42                 ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  3:42 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> Hold your per-object lock? I'm not talking about using mmap_sem for
> migration, but the per-object lock in access_process_vm. I thought
> this prevented migration?

As I said in my previous mail, access_process_vm() is a generic function
called by ptrace; it has zero knowledge of the internal locking scheme of
a driver providing a nopage/nopfn for a vma.

> OK, just do one pfn at a time. For ptrace that is fine. access_process_vm
> already copies from source into kernel buffer, then kernel buffer into
> target.

Even one pfn at a time ... the only way would be if we also took the PTE
lock during the copy, in fact. That's the only lock that would provide
the same guarantees as an access, I think.

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  3:40               ` Benjamin Herrenschmidt
@ 2006-10-10  3:46                 ` Nick Piggin
  2006-10-10  4:58                   ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 34+ messages in thread
From: Nick Piggin @ 2006-10-10  3:46 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 01:40:56PM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2006-10-10 at 04:58 +0200, Nick Piggin wrote:
> > On Tue, Oct 10, 2006 at 12:47:46PM +1000, Benjamin Herrenschmidt wrote:
> > > 
> > > > Switch the mm and do a copy_from_user? (rather than the GUP).
> > > > Sounds pretty ugly :P
> > > > 
> > > > Can you do a get_user_pfns, and do a copy_from_user on the pfn
> > > > addresses? In other words, is the memory / mmio at the end of a
> > > > given address the same from the perspective of any process? It
> > > > is for physical memory of course, which is why get_user_pages
> > > > works...
> > > 
> > > Doesn't help with the raciness.
> > 
> > I don't understand what the raciness is that you can solve by accessing
> > it from the target process's mm?
> 
> You get a struct page or a pfn, you race with the migration, and access
> something that isn't the "current" one. Doing an actual access goes
> through the normal mmu path, which guarantees that after the migration
> has finished its unmap_mapping_range(), no access via those old PTEs is
> possible (TLBs have been flushed etc.). We don't get such a guarantee if
> we get a struct page or a pfn and go peek at it.

OK, so it is a matter of preventing the migration while this is going on.
BTW, I think you need to disallow get_user_pages on such a region entirely,
regardless of whether it is backed by a page or not: there is no guarantee
of when the caller will release the page.
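
Concretely, that would mean a check of roughly this shape in
get_user_pages(); a sketch only: the helper name is made up, and treating
VM_IO|VM_PFNMAP as the discriminator is an assumption.

	#include <linux/mm.h>

	static inline int vma_gup_allowed(struct vm_area_struct *vma)
	{
		/*
		 * No struct pages that can safely be pinned here, and the
		 * backing may be migrated away while the caller holds on.
		 */
		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
			return 0;
		return 1;
	}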

> > > > What if you hold your per-object lock over the operation? (I guess
> > > > it would have to nest *inside* mmap_sem, but that should be OK).
> > > 
> > > Over the ptrace operation? How so?
> > 
> > You just have to hold it over access_process_vm, AFAIKS. Once it
> > is copied into the kernel buffer that's done. Maybe I misunderstood
> > what the race is?
> 
> But since when does ptrace know about the various private locks of the
> objects backing vma's?

Since we decided it would be better to make a new function or some arch
specific hooks rather than switch mm's in the kernel? ;)

No, I don't know. Your idea might be reasonable, but I really haven't
thought about it much.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  3:46                 ` Nick Piggin
@ 2006-10-10  4:58                   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10  4:58 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> Since we decided it would be better to make a new function or some arch
> specific hooks rather than switch mm's in the kernel? ;)
> 
> No, I don't know. Your idea might be reasonable, but I really haven't
> thought about it much.

Another option is to take the PTE lock while doing the accesses for that
PFN... that might work. We would still need a temp kernel buffer, but that
would sort of do the trick.

Ben.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10  1:16       ` ptrace and pfn mappings Benjamin Herrenschmidt
  2006-10-10  2:23         ` Nick Piggin
@ 2006-10-10 12:31         ` Christoph Hellwig
  2006-10-10 12:42           ` Benjamin Herrenschmidt
  2006-10-10 18:06           ` Hugh Dickins
  1 sibling, 2 replies; 34+ messages in thread
From: Christoph Hellwig @ 2006-10-10 12:31 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Nick Piggin, Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 11:16:27AM +1000, Benjamin Herrenschmidt wrote:
> And the last of my "issues" here:
> 
> get_user_pages() can't handle pfn mappings, thus access_process_vm()
> can't, and thus ptrace can't. When they were limited to dodgy /dev/mem
> things, it was probably ok. But with more drivers needing that, like the
> DRM, sound drivers, and now with SPU problem state registers and local
> store mapped that way, it's becoming a real issue to be unable to
> access any of those mappings from gdb.
> 
> The "easy" way out I can see, but it may have all sort of bad side
> effects I haven't thought about at this point, is to switch the mm in
> access_process_vm (at least if it's hitting such a VMA).

Switching the mm is definitely not acceptable.  Too many things could
break when violating the existing assumptions.

> What do you guys think? Any better ideas? The problem with mappings
> like what SPUfs or the DRM want is that they can change (be remapped
> between HW and backup memory, as described in previous emails), thus we
> don't want to get struct pages even if available and peek at them, as
> they might not be valid anymore; same with PFNs (we could imagine
> ioremap'ing those PFNs, but that would be racy too). The only way that
> is guaranteed not to be racy is to do exactly what a user does, that is,
> do user accesses via the target process's vm itself....

I think the best idea is to add a new ->access method to the vm_operations
that's called by access_process_vm() when it exists and VM_IO or VM_PFNMAP
are set.   ->access would take the required object locks and copy out the
data manually.  This should work both for spufs and drm.
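
To make that concrete, a rough sketch of a driver-side ->access
implementation follows. The hook itself does not exist yet, so its
signature and the object/lock/field names below are illustrative
assumptions only:

	#include <linux/mm.h>
	#include <linux/mutex.h>
	#include <asm/io.h>

	/*
	 * Hypothetical addition to vm_operations_struct:
	 *
	 *	int (*access)(struct vm_area_struct *vma, unsigned long addr,
	 *		      void *buf, int len, int write);
	 */

	struct spu_like_object {	/* stand-in for the real object */
		struct mutex lock;	/* serializes HW <-> backup migration */
		void __iomem *base;	/* current backing; stable while locked */
	};

	static int example_vm_access(struct vm_area_struct *vma,
				     unsigned long addr, void *buf,
				     int len, int write)
	{
		struct spu_like_object *obj = vma->vm_private_data;
		unsigned long off = (addr - vma->vm_start) +
				    (vma->vm_pgoff << PAGE_SHIFT);

		mutex_lock(&obj->lock);
		if (write)
			memcpy_toio(obj->base + off, buf, len);
		else
			memcpy_fromio(buf, obj->base + off, len);
		mutex_unlock(&obj->lock);
		return len;
	}

The generic side would then simply prefer this method over
get_user_pages() for such vmas.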

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10 12:31         ` Christoph Hellwig
@ 2006-10-10 12:42           ` Benjamin Herrenschmidt
  2006-10-10 18:06           ` Hugh Dickins
  1 sibling, 0 replies; 34+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-10 12:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Nick Piggin, Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Linux Kernel, Ingo Molnar


> I think the best idea is to add a new ->access method to the vm_operations
> that's called by access_process_vm() when it exists and VM_IO or VM_PFNMAP
> are set.   ->access would take the required object locks and copy out the
> data manually.  This should work both for spufs and drm.

Another option is to have access_process_vm() look up the PTE and lock it
while copying the data from the page.

Something like:

	- lookup pte & lock
	- check if pte still present
	- copy data to temp kernel buffer
	- unlock pte
	- copy data to user buffer
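
In code, those steps might look something like the sketch below, assuming
for simplicity that the pfn is backed by a struct page so kmap_atomic()
works under the pte lock (raw io memory would need a mapping established
beforehand), and that the caller keeps [addr, addr+len) within one page:

	#include <linux/mm.h>
	#include <linux/highmem.h>

	static int copy_pfn_locked(struct mm_struct *mm, unsigned long addr,
				   void *kbuf, int len)
	{
		pgd_t *pgd;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *ptep, pte;
		spinlock_t *ptl;
		void *vaddr;

		pgd = pgd_offset(mm, addr);
		if (pgd_none(*pgd) || pgd_bad(*pgd))
			return -EFAULT;
		pud = pud_offset(pgd, addr);
		if (pud_none(*pud) || pud_bad(*pud))
			return -EFAULT;
		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd) || pmd_bad(*pmd))
			return -EFAULT;

		/* lookup pte & lock */
		ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
		pte = *ptep;

		/* check if pte still present */
		if (!pte_present(pte)) {
			pte_unmap_unlock(ptep, ptl);
			return -EFAULT;
		}

		/*
		 * Copy data to the temp kernel buffer; the pte (and hence
		 * the backing pfn) cannot change while ptl is held.
		 */
		vaddr = kmap_atomic(pfn_to_page(pte_pfn(pte)), KM_USER0);
		memcpy(kbuf, vaddr + (addr & ~PAGE_MASK), len);
		kunmap_atomic(vaddr, KM_USER0);

		/* unlock pte; the copy to the user buffer happens outside */
		pte_unmap_unlock(ptep, ptl);
		return 0;
	}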

Ben.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: ptrace and pfn mappings
  2006-10-10 12:31         ` Christoph Hellwig
  2006-10-10 12:42           ` Benjamin Herrenschmidt
@ 2006-10-10 18:06           ` Hugh Dickins
  1 sibling, 0 replies; 34+ messages in thread
From: Hugh Dickins @ 2006-10-10 18:06 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Benjamin Herrenschmidt, Nick Piggin, Nick Piggin,
	Linux Memory Management, Andrew Morton, Jes Sorensen,
	Linux Kernel, Ingo Molnar

On Tue, 10 Oct 2006, Christoph Hellwig wrote:
> On Tue, Oct 10, 2006 at 11:16:27AM +1000, Benjamin Herrenschmidt wrote:
> > 
> > The "easy" way out I can see, but it may have all sort of bad side
> > effects I haven't thought about at this point, is to switch the mm in
> > access_process_vm (at least if it's hitting such a VMA).
> 
> Switching the mm is definitely not acceptable.  Too many things could
> break when violating the existing assumptions.

I disagree.  Ben's switch-mm approach deserves deeper examination than
that.  It's both simple and powerful.  And it's already done by AIO's
use_mm - the big differences being, of course, that the kthread has
no original mm of its own, and it's limited in what it gets up to.
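
For illustration, adopting the target mm could look something like the
sketch below, modeled loosely on AIO's use_mm() plus the restore step a
task with an mm of its own would need. This is an assumption, not existing
code, and it glosses over preemption and active_mm refcounting subtleties:

	#include <linux/sched.h>
	#include <linux/mm.h>
	#include <asm/mmu_context.h>
	#include <asm/uaccess.h>

	static int access_via_mm_switch(struct mm_struct *mm,
					unsigned long addr,
					void *kbuf, int len)
	{
		struct task_struct *tsk = current;
		struct mm_struct *orig_mm = tsk->mm;
		int ret;

		task_lock(tsk);
		atomic_inc(&mm->mm_count);	/* hold the mm while borrowed */
		tsk->mm = mm;
		tsk->active_mm = mm;
		switch_mm(orig_mm, mm, tsk);	/* switch the hardware context */
		task_unlock(tsk);

		/* a plain user access now goes via the target's page tables */
		ret = copy_from_user(kbuf, (void __user *)addr, len) ? -EFAULT : 0;

		task_lock(tsk);			/* and switch back */
		tsk->mm = orig_mm;
		tsk->active_mm = orig_mm;
		switch_mm(mm, orig_mm, tsk);
		task_unlock(tsk);
		mmdrop(mm);

		return ret;
	}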

What would be the actual problems with ptrace temporarily adopting
another's mm?  What are our existing assumptions?

We do already have the minor issue that expand_stack uses the wrong
task's rlimits (there was a patch for that, perhaps Nick's fault
struct would help make it less intrusive to fix - I was put off
it by having to pass an additional arg down so many levels).

> I think the best idea is to add a new ->access method to the vm_operations
> that's called by access_process_vm() when it exists and VM_IO or VM_PFNMAP
> are set.   ->access would take the required object locks and copy out the
> data manually.  This should work both for spufs and drm.

I find Ben's idea more appealing, but agree it _may_ prove unworkable.

Hugh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-10  1:10     ` Nick Piggin
@ 2006-10-11 18:34       ` Mark Fasheh
  2006-10-12  3:28         ` Nick Piggin
  0 siblings, 1 reply; 34+ messages in thread
From: Mark Fasheh @ 2006-10-11 18:34 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 11:10:42AM +1000, Nick Piggin wrote:
> If you want a stable patchset for testing, the previous one to linux-mm
> starting with "[patch 1/3] mm: fault vs invalidate/truncate check" went
> through some stress testing here...
Hmm, unfortunately my testing so far hasn't been particularly encouraging...

Shortly after my test starts, one of the "ocfs2-vote" processes on one of my
nodes will begin consuming CPU at a rate which indicates it might be in an
infinite loop. The soft lockup detection code seems to agree:

BUG: soft lockup detected on CPU#0!
Call Trace:
[C00000003795F220] [C000000000011310] .show_stack+0x50/0x1cc (unreliable)
[C00000003795F2D0] [C000000000086100] .softlockup_tick+0xf8/0x120
[C00000003795F380] [C000000000060DA8] .run_local_timers+0x1c/0x30
[C00000003795F400] [C000000000023B28] .timer_interrupt+0x110/0x500
[C00000003795F520] [C0000000000034EC] decrementer_common+0xec/0x100
--- Exception: 901 at ._raw_spin_lock+0x84/0x1a0
    LR = ._spin_lock+0x10/0x24
[C00000003795F810] [C000000000788FC8] init_thread_union+0xfc8/0x4000 (unreliable)
[C00000003795F8B0] [C0000000004A66B8] ._spin_lock+0x10/0x24
[C00000003795F930] [C00000000009EDBC] .unmap_mapping_range+0x88/0x2d4
[C00000003795FA90] [C0000000000967E4] .truncate_inode_pages_range+0x2b8/0x490
[C00000003795FBE0] [D0000000005FA8C0] .ocfs2_data_convert_worker+0x124/0x14c [ocfs2]
[C00000003795FC70] [D0000000005FB0BC] .ocfs2_process_blocked_lock+0x184/0xca4 [ocfs2]
[C00000003795FD50] [D000000000629DE8] .ocfs2_vote_thread+0x1a8/0xc18 [ocfs2]
[C00000003795FEE0] [C00000000007000C] .kthread+0x154/0x1a4
[C00000003795FF90] [C000000000027124] .kernel_thread+0x4c/0x68


A sysrq-t doesn't show anything interesting from any of the other OCFS2
processes. This is your patchset from the 10th, running against Linus' git
tree from that day, with my mmap patch merged in.

The stack seems to indicate that we're stuck in one of these
truncate_inode_pages_range() loops:

+                       while (page_mapped(page)) {
+                               unmap_mapping_range(mapping,
+                                 (loff_t)page_index<<PAGE_CACHE_SHIFT,
+                                 PAGE_CACHE_SIZE, 0);
+                       }


The test I run is over here btw:

http://oss.oracle.com/projects/ocfs2-test/src/trunk/programs/multi_node_mmap/multi_mmap.c

I ran it with the following parameters:

mpirun -np 6 n1-3 ./multi_mmap -w mmap -r mmap -i 1000 -b 1024 /ocfs2/mmap/test4.txt
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 18:34       ` Mark Fasheh
@ 2006-10-12  3:28         ` Nick Piggin
  0 siblings, 0 replies; 34+ messages in thread
From: Nick Piggin @ 2006-10-12  3:28 UTC (permalink / raw)
  To: Mark Fasheh
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

On Wed, Oct 11, 2006 at 11:34:04AM -0700, Mark Fasheh wrote:
> On Tue, Oct 10, 2006 at 11:10:42AM +1000, Nick Piggin wrote:
> 
> The test I run is over here btw:
> 
> http://oss.oracle.com/projects/ocfs2-test/src/trunk/programs/multi_node_mmap/multi_mmap.c
> 
> I ran it with the following parameters:
> 
> mpirun -np 6 n1-3 ./multi_mmap -w mmap -r mmap -i 1000 -b 1024 /ocfs2/mmap/test4.txt

Thanks, I'll see if I can reproduce.

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread

Thread overview: 34+ messages
2006-10-09 16:12 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
2006-10-09 16:12 ` [patch 1/5] mm: fault vs invalidate/truncate check Nick Piggin
2006-10-09 16:12 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
2006-10-09 21:10   ` Mark Fasheh
2006-10-10  1:10     ` Nick Piggin
2006-10-11 18:34       ` Mark Fasheh
2006-10-12  3:28         ` Nick Piggin
2006-10-09 16:12 ` [patch 3/5] mm: fault handler to replace nopage and populate Nick Piggin
2006-10-09 16:12 ` [patch 4/5] mm: add vm_insert_pfn helper Nick Piggin
2006-10-09 21:03   ` Benjamin Herrenschmidt
2006-10-10  0:42     ` Nick Piggin
2006-10-10  1:11       ` faults and signals Benjamin Herrenschmidt
2006-10-10  1:20         ` Nick Piggin
2006-10-10  1:58           ` Benjamin Herrenschmidt
2006-10-10  2:00             ` Benjamin Herrenschmidt
2006-10-10  2:04               ` Nick Piggin
2006-10-10  2:07                 ` Benjamin Herrenschmidt
2006-10-10  1:16       ` ptrace and pfn mappings Benjamin Herrenschmidt
2006-10-10  2:23         ` Nick Piggin
2006-10-10  2:47           ` Benjamin Herrenschmidt
2006-10-10  2:56             ` Benjamin Herrenschmidt
2006-10-10  3:03               ` Nick Piggin
2006-10-10  3:42                 ` Benjamin Herrenschmidt
2006-10-10  2:58             ` Nick Piggin
2006-10-10  3:40               ` Benjamin Herrenschmidt
2006-10-10  3:46                 ` Nick Piggin
2006-10-10  4:58                   ` Benjamin Herrenschmidt
2006-10-10 12:31         ` Christoph Hellwig
2006-10-10 12:42           ` Benjamin Herrenschmidt
2006-10-10 18:06           ` Hugh Dickins
2006-10-09 16:13 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
2006-10-09 20:57 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Benjamin Herrenschmidt
2006-10-09 21:00   ` Benjamin Herrenschmidt
2006-10-10  0:53     ` Nick Piggin
