linux-kernel.vger.kernel.org archive mirror
* [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
@ 2006-10-10 14:21 Nick Piggin
  2006-10-10 14:21 ` [patch 1/5] mm: fault vs invalidate/truncate check Nick Piggin
                   ` (6 more replies)
  0 siblings, 7 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:21 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

This patchset is against 2.6.19-rc1-mm1 up to
numa-add-zone_to_nid-function-swap_prefetch.patch (ie. no readahead stuff,
which causes big rejects and would be much easier to fix in readahead
patches than here). Other than this feature, the -mm specific stuff is
pretty simple (mainly straightforward filesystem conversions).

Changes since last round:
- trimmed the cc list, no big changes since last time.
- fix the few buglets preventing it from actually booting
- reinstate filemap_nopage and filemap_populate, because they're exported
  symbols even though no longer used in the tree. Scheduled for removal.
- initialise fault_data one field at a time (akpm)
- change prototype of ->fault so it takes a vma (hch)

Has passed an allyesconfig on G5, and booted and stress tested (on ext3
and tmpfs) on 2x P4 Xeon with my userspace tester (which I will send in
a reply to this email). (stress tests run with the set_page_dirty_buffers
fix that is upstream).

Please apply.

Nick

--
SuSE Labs

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [patch 1/5] mm: fault vs invalidate/truncate check
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
@ 2006-10-10 14:21 ` Nick Piggin
  2006-10-10 14:21 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:21 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

Add a bugcheck for Andrea's pagefault vs invalidate race. This is triggerable
for both linear and nonlinear pages with a userspace test harness (using
direct IO and truncate, respectively).

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -120,6 +120,8 @@ void __remove_from_page_cache(struct pag
 	page->mapping = NULL;
 	mapping->nrpages--;
 	__dec_zone_page_state(page, NR_FILE_PAGES);
+
+	BUG_ON(page_mapped(page));
 }
 
 void remove_from_page_cache(struct page *page)


* [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
  2006-10-10 14:21 ` [patch 1/5] mm: fault vs invalidate/truncate check Nick Piggin
@ 2006-10-10 14:21 ` Nick Piggin
  2006-10-11  4:38   ` Andrew Morton
  2006-10-11  5:13   ` Andrew Morton
  2006-10-10 14:22 ` [patch 3/5] mm: fault handler to replace nopage and populate Nick Piggin
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:21 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

Fix the race between invalidate_inode_pages and do_no_page.

Andrea Arcangeli identified a subtle race between invalidation of
pages from pagecache with userspace mappings, and do_no_page.

The issue is that invalidation has to shoot down all mappings to the
page, before it can be discarded from the pagecache. Between shooting
down ptes to a particular page, and actually dropping the struct page
from the pagecache, do_no_page from any process might fault on that
page and establish a new mapping to the page just before it gets
discarded from the pagecache.

The most common case where such invalidation is used is in file
truncation. This case was catered for by doing a sort of open-coded
seqlock between the file's i_size and its truncate_count.

Truncation will decrease i_size, then increment truncate_count before
unmapping userspace pages; do_no_page will read truncate_count, then
find the page if it is within i_size, and then check truncate_count
under the page table lock and back out and retry if it had
subsequently been changed (ptl will serialise against unmapping, and
ensure a potentially updated truncate_count is actually visible).

Complexity and documentation issues aside, the locking protocol fails
in the case where we would like to invalidate pagecache inside i_size.
do_no_page can come in anytime and filemap_nopage is not aware of the
invalidation in progress (as it is when it is outside i_size). The
end result is that dangling (->mapping == NULL) pages that appear to
be from a particular file may be mapped into userspace with nonsense
data. Valid mappings to the same place will see a different page.

Andrea implemented two working fixes, one using a real seqlock,
another using a page->flags bit. He also proposed using the page lock
in do_no_page, but that was initially considered too heavyweight.
However, it is not a global or per-file lock, and the page cacheline
is modified in do_no_page to increment _count and _mapcount anyway, so
a further modification should not be a large performance hit.
Scalability is not an issue.

This patch implements the latter approach. ->nopage implementations
return with the page locked if it is possible for their underlying
file to be invalidated (in that case, they must set a special vm_flags
bit to indicate so). do_no_page only unlocks the page after setting
up the mapping completely. Invalidation is excluded because it holds
the page lock during invalidation of each page (and ensures that the
page is not mapped while holding the lock).

This allows significant simplifications in do_no_page.

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -166,6 +166,11 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
+#define VM_CAN_INVALIDATE	0x04000000	/* The mapping may be invalidated,
+					 * eg. truncate or invalidate_inode_*.
+					 * In this case, do_no_page must
+					 * return with the page locked.
+					 */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
 	unsigned long size, pgoff;
 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
 
+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+
 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
 
-retry_all:
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (pgoff >= size)
 		goto outside_data_content;
@@ -1416,7 +1417,7 @@ retry_all:
 	 * Do we have something in the page cache already?
 	 */
 retry_find:
-	page = find_get_page(mapping, pgoff);
+	page = find_lock_page(mapping, pgoff);
 	if (!page) {
 		unsigned long ra_pages;
 
@@ -1450,7 +1451,7 @@ retry_find:
 				start = pgoff - ra_pages / 2;
 			do_page_cache_readahead(mapping, file, start, ra_pages);
 		}
-		page = find_get_page(mapping, pgoff);
+		page = find_lock_page(mapping, pgoff);
 		if (!page)
 			goto no_cached_page;
 	}
@@ -1459,13 +1460,19 @@ retry_find:
 		ra->mmap_hit++;
 
 	/*
-	 * Ok, found a page in the page cache, now we need to check
-	 * that it's up-to-date.
+	 * We have a locked page in the page cache, now we need to check
+	 * that it's up-to-date. If not, it is going to be due to an error.
 	 */
-	if (!PageUptodate(page))
+	if (unlikely(!PageUptodate(page)))
 		goto page_not_uptodate;
 
-success:
+	/* Must recheck i_size under page lock */
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	if (unlikely(pgoff >= size)) {
+		unlock_page(page);
+		goto outside_data_content;
+	}
+
 	/*
 	 * Found the page and have a reference on it.
 	 */
@@ -1507,34 +1514,11 @@ no_cached_page:
 	return NOPAGE_SIGBUS;
 
 page_not_uptodate:
+	/* IO error path */
 	if (!did_readaround) {
 		majmin = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 	}
-	lock_page(page);
-
-	/* Did it get unhashed while we waited for it? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Did somebody else get it up-to-date? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
 
 	/*
 	 * Umm, take care of errors if the page isn't up-to-date.
@@ -1542,37 +1526,15 @@ page_not_uptodate:
 	 * because there really aren't any performance issues here
 	 * and we need to check for errors.
 	 */
-	lock_page(page);
-
-	/* Somebody truncated the page on us? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Somebody else successfully read it in? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
 	ClearPageError(page);
 	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
+	page_cache_release(page);
+
+	if (!error || error == AOP_TRUNCATED_PAGE)
 		goto retry_find;
-	}
 
-	/*
-	 * Things didn't work out. Return zero to tell the
-	 * mm layer so, possibly freeing the page cache page first.
-	 */
+	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
-	page_cache_release(page);
 	return NOPAGE_SIGBUS;
 }
 EXPORT_SYMBOL(filemap_nopage);
@@ -1765,6 +1727,7 @@ int generic_file_mmap(struct file * file
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &generic_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1675,6 +1675,13 @@ static int unmap_mapping_range_vma(struc
 	unsigned long restart_addr;
 	int need_break;
 
+	/*
+	 * files that support invalidating or truncating portions of the
+	 * file from under mmaped areas must set the VM_CAN_INVALIDATE flag, and
+	 * have their .nopage function return the page locked.
+	 */
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 again:
 	restart_addr = vma->vm_truncate_count;
 	if (is_restart_addr(restart_addr) && start_addr < restart_addr) {
@@ -1805,17 +1812,8 @@ void unmap_mapping_range(struct address_
 
 	spin_lock(&mapping->i_mmap_lock);
 
-	/* serialize i_size write against truncate_count write */
-	smp_wmb();
-	/* Protect against page faults, and endless unmapping loops */
+	/* Protect against endless unmapping loops */
 	mapping->truncate_count++;
-	/*
-	 * For archs where spin_lock has inclusive semantics like ia64
-	 * this smp_mb() will prevent to read pagetable contents
-	 * before the truncate_count increment is visible to
-	 * other cpus.
-	 */
-	smp_mb();
 	if (unlikely(is_restart_addr(mapping->truncate_count))) {
 		if (mapping->truncate_count == 0)
 			reset_vma_truncate_counts(mapping);
@@ -1854,7 +1852,6 @@ int vmtruncate(struct inode * inode, lof
 	if (IS_SWAPFILE(inode))
 		goto out_busy;
 	i_size_write(inode, offset);
-	unmap_mapping_range(mapping, offset + PAGE_SIZE - 1, 0, 1);
 	truncate_inode_pages(mapping, offset);
 	goto out_truncate;
 
@@ -1893,7 +1890,6 @@ int vmtruncate_range(struct inode *inode
 
 	mutex_lock(&inode->i_mutex);
 	down_write(&inode->i_alloc_sem);
-	unmap_mapping_range(mapping, offset, (end - offset), 1);
 	truncate_inode_pages_range(mapping, offset, end);
 	inode->i_op->truncate_range(inode, offset, end);
 	up_write(&inode->i_alloc_sem);
@@ -2144,10 +2140,8 @@ static int do_no_page(struct mm_struct *
 		int write_access)
 {
 	spinlock_t *ptl;
-	struct page *new_page;
-	struct address_space *mapping = NULL;
+	struct page *page, *nopage_page;
 	pte_t entry;
-	unsigned int sequence = 0;
 	int ret = VM_FAULT_MINOR;
 	int anon = 0;
 	struct page *dirty_page = NULL;
@@ -2155,73 +2149,54 @@ static int do_no_page(struct mm_struct *
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
 
-	if (vma->vm_file) {
-		mapping = vma->vm_file->f_mapping;
-		sequence = mapping->truncate_count;
-		smp_rmb(); /* serializes i_size against truncate_count */
-	}
-retry:
-	new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
-	/*
-	 * No smp_rmb is needed here as long as there's a full
-	 * spin_lock/unlock sequence inside the ->nopage callback
-	 * (for the pagecache lookup) that acts as an implicit
-	 * smp_mb() and prevents the i_size read to happen
-	 * after the next truncate_count read.
-	 */
-
+	nopage_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
 	/* no page was available -- either SIGBUS, OOM or REFAULT */
-	if (unlikely(new_page == NOPAGE_SIGBUS))
+	if (unlikely(nopage_page == NOPAGE_SIGBUS))
 		return VM_FAULT_SIGBUS;
-	else if (unlikely(new_page == NOPAGE_OOM))
+	else if (unlikely(nopage_page == NOPAGE_OOM))
 		return VM_FAULT_OOM;
-	else if (unlikely(new_page == NOPAGE_REFAULT))
+	else if (unlikely(nopage_page == NOPAGE_REFAULT))
 		return VM_FAULT_MINOR;
 
+	BUG_ON(vma->vm_flags & VM_CAN_INVALIDATE && !PageLocked(nopage_page));
+	/*
+	 * For consistency in subsequent calls, make the nopage_page always
+	 * locked.  These should be in the minority but if they turn out to be
+	 * critical then this can always be revisited
+	 */
+	if (unlikely(!(vma->vm_flags & VM_CAN_INVALIDATE)))
+		lock_page(nopage_page);
+
 	/*
 	 * Should we do an early C-O-W break?
 	 */
+	page = nopage_page;
 	if (write_access) {
 		if (!(vma->vm_flags & VM_SHARED)) {
-			struct page *page;
-
-			if (unlikely(anon_vma_prepare(vma)))
-				goto oom;
+			if (unlikely(anon_vma_prepare(vma))) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
 			page = alloc_page_vma(GFP_HIGHUSER, vma, address);
-			if (!page)
-				goto oom;
-			copy_user_highpage(page, new_page, address);
-			page_cache_release(new_page);
-			new_page = page;
+			if (!page) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
+			copy_user_highpage(page, nopage_page, address);
 			anon = 1;
-
 		} else {
 			/* if the page will be shareable, see if the backing
 			 * address space wants to know that the page is about
 			 * to become writable */
 			if (vma->vm_ops->page_mkwrite &&
-			    vma->vm_ops->page_mkwrite(vma, new_page) < 0
-			    ) {
-				page_cache_release(new_page);
-				return VM_FAULT_SIGBUS;
+			    vma->vm_ops->page_mkwrite(vma, page) < 0) {
+				ret = VM_FAULT_SIGBUS;
+				goto out_error;
 			}
 		}
 	}
 
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-	/*
-	 * For a file-backed vma, someone could have truncated or otherwise
-	 * invalidated this page.  If unmap_mapping_range got called,
-	 * retry getting the page.
-	 */
-	if (mapping && unlikely(sequence != mapping->truncate_count)) {
-		pte_unmap_unlock(page_table, ptl);
-		page_cache_release(new_page);
-		cond_resched();
-		sequence = mapping->truncate_count;
-		smp_rmb();
-		goto retry;
-	}
 
 	/*
 	 * This silly early PAGE_DIRTY setting removes a race
@@ -2234,43 +2209,51 @@ retry:
 	 * handle that later.
 	 */
 	/* Only go through if we didn't race with anybody else... */
-	if (pte_none(*page_table)) {
-		flush_icache_page(vma, new_page);
-		entry = mk_pte(new_page, vma->vm_page_prot);
+	if (likely(pte_none(*page_table))) {
+		flush_icache_page(vma, page);
+		entry = mk_pte(page, vma->vm_page_prot);
 		if (write_access)
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte_at(mm, address, page_table, entry);
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
-			lru_cache_add_active(new_page);
-			page_add_new_anon_rmap(new_page, vma, address);
+			lru_cache_add_active(page);
+			page_add_new_anon_rmap(page, vma, address);
 		} else {
 			inc_mm_counter(mm, file_rss);
-			page_add_file_rmap(new_page);
+			page_add_file_rmap(page);
 			if (write_access) {
-				dirty_page = new_page;
+				dirty_page = page;
 				get_page(dirty_page);
 			}
 		}
+
+		/* no need to invalidate: a not-present page won't be cached */
+		update_mmu_cache(vma, address, entry);
+		lazy_mmu_prot_update(entry);
 	} else {
-		/* One of our sibling threads was faster, back out. */
-		page_cache_release(new_page);
-		goto unlock;
+		if (anon)
+			page_cache_release(page);
+		else
+			anon = 1; /* not anon, but release nopage_page */
 	}
 
-	/* no need to invalidate: a not-present page shouldn't be cached */
-	update_mmu_cache(vma, address, entry);
-	lazy_mmu_prot_update(entry);
-unlock:
 	pte_unmap_unlock(page_table, ptl);
-	if (dirty_page) {
+
+out:
+	unlock_page(nopage_page);
+	if (anon)
+		page_cache_release(nopage_page);
+	else if (dirty_page) {
 		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
+
 	return ret;
-oom:
-	page_cache_release(new_page);
-	return VM_FAULT_OOM;
+
+out_error:
+	anon = 1; /* release nopage_page */
+	goto out;
 }
 
 /*
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -81,6 +81,7 @@ enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate page */
+	SGP_NOPAGE,	/* same as SGP_CACHE, return with page locked */
 };
 
 static int shmem_getpage(struct inode *inode, unsigned long idx,
@@ -1209,8 +1210,10 @@ repeat:
 	}
 done:
 	if (*pagep != filepage) {
-		unlock_page(filepage);
 		*pagep = filepage;
+		if (sgp != SGP_NOPAGE)
+			unlock_page(filepage);
+
 	}
 	return 0;
 
@@ -1229,13 +1232,15 @@ struct page *shmem_nopage(struct vm_area
 	unsigned long idx;
 	int error;
 
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 	idx = (address - vma->vm_start) >> PAGE_SHIFT;
 	idx += vma->vm_pgoff;
 	idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
 	if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
 		return NOPAGE_SIGBUS;
 
-	error = shmem_getpage(inode, idx, &page, SGP_CACHE, type);
+	error = shmem_getpage(inode, idx, &page, SGP_NOPAGE, type);
 	if (error)
 		return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
 
@@ -1333,6 +1338,7 @@ int shmem_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
@@ -2523,5 +2529,6 @@ int shmem_zero_setup(struct vm_area_stru
 		fput(vma->vm_file);
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
Index: linux-2.6/fs/ncpfs/mmap.c
===================================================================
--- linux-2.6.orig/fs/ncpfs/mmap.c
+++ linux-2.6/fs/ncpfs/mmap.c
@@ -123,6 +123,7 @@ int ncp_mmap(struct file *file, struct v
 		return -EFBIG;
 
 	vma->vm_ops = &ncp_file_mmap;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	file_accessed(file);
 	return 0;
 }
Index: linux-2.6/fs/ocfs2/mmap.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/mmap.c
+++ linux-2.6/fs/ocfs2/mmap.c
@@ -155,6 +155,7 @@ int ocfs2_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &ocfs2_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c
@@ -343,6 +343,7 @@ xfs_file_mmap(
 	struct vm_area_struct *vma)
 {
 	vma->vm_ops = &xfs_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 
 #ifdef CONFIG_XFS_DMAPI
 	if (vn_from_inode(filp->f_dentry->d_inode)->v_vfsp->vfs_flag & VFS_DMI)
Index: linux-2.6/ipc/shm.c
===================================================================
--- linux-2.6.orig/ipc/shm.c
+++ linux-2.6/ipc/shm.c
@@ -230,6 +230,7 @@ static int shm_mmap(struct file * file, 
 	ret = shmem_mmap(file, vma);
 	if (ret == 0) {
 		vma->vm_ops = &shm_vm_ops;
+		vma->vm_flags |= VM_CAN_INVALIDATE;
 		if (!(vma->vm_flags & VM_WRITE))
 			vma->vm_flags &= ~VM_MAYWRITE;
 		shm_inc(shm_file_ns(file), file->f_dentry->d_inode->i_ino);
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -163,6 +163,11 @@ void truncate_inode_pages_range(struct a
 				unlock_page(page);
 				continue;
 			}
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			truncate_complete_page(mapping, page);
 			unlock_page(page);
 		}
@@ -200,6 +205,11 @@ void truncate_inode_pages_range(struct a
 				break;
 			lock_page(page);
 			wait_on_page_writeback(page);
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			if (page->index > next)
 				next = page->index;
 			next++;
Index: linux-2.6/fs/gfs2/ops_file.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_file.c
+++ linux-2.6/fs/gfs2/ops_file.c
@@ -396,6 +396,8 @@ static int gfs2_mmap(struct file *file, 
 	else
 		vma->vm_ops = &gfs2_vm_ops_private;
 
+	vma->vm_flags |= VM_CAN_INVALIDATE;
+
 	gfs2_glock_dq_uninit(&i_gh);
 
 	return error;
Index: linux-2.6/fs/nfs/fscache.h
===================================================================
--- linux-2.6.orig/fs/nfs/fscache.h
+++ linux-2.6/fs/nfs/fscache.h
@@ -198,8 +198,10 @@ static inline void nfs_fscache_disable_f
 static inline void nfs_fscache_install_vm_ops(struct inode *inode,
 					      struct vm_area_struct *vma)
 {
-	if (NFS_I(inode)->fscache)
+	if (NFS_I(inode)->fscache) {
 		vma->vm_ops = &nfs_fs_vm_operations;
+		vma->vm_flags |= VM_CAN_INVALIDATE;
+	}
 }
 
 /*
Index: linux-2.6/fs/afs/file.c
===================================================================
--- linux-2.6.orig/fs/afs/file.c
+++ linux-2.6/fs/afs/file.c
@@ -82,6 +82,7 @@ static int afs_file_mmap(struct file *fi
 
 	file_accessed(file);
 	vma->vm_ops = &afs_fs_vm_operations;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 


* [patch 3/5] mm: fault handler to replace nopage and populate
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
  2006-10-10 14:21 ` [patch 1/5] mm: fault vs invalidate/truncate check Nick Piggin
  2006-10-10 14:21 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
@ 2006-10-10 14:22 ` Nick Piggin
  2006-10-10 14:22 ` [patch 4/5] mm: add vm_insert_pfn helpler Nick Piggin
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:22 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

Nonlinear mappings are (AFAIKS) simply a virtual memory concept that
encodes the virtual address -> file offset translation differently from
linear mappings.

I can't see why the filesystem/pagecache code should need to know anything
about it, except for the fact that the ->nopage handler didn't quite pass
down enough information (ie. pgoff). But it is more logical to pass pgoff
rather than have the ->nopage function calculate it itself anyway. And
having the nopage handler install the pte itself is sort of nasty.

This patch introduces a new fault handler that replaces ->nopage and
->populate and (later) ->nopfn. Most of the old mechanism is still in place
so there is a lot of duplication and nice cleanups that can be removed if
everyone switches over.

The rationale for doing this in the first place is that nonlinear mappings
are subject to the pagefault vs invalidate/truncate race too, and it seemed
stupid to duplicate the synchronisation logic rather than just consolidate
the two.

After this patch, MAP_NONBLOCK no longer sets up ptes for pages present in
pagecache. Seems like a fringe functionality anyway.

NOPAGE_REFAULT is removed. This should be implemented with ->fault, and
no users have hit mainline yet.

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -166,11 +166,12 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
-#define VM_CAN_INVALIDATE	0x04000000	/* The mapping may be invalidated,
+#define VM_CAN_INVALIDATE 0x04000000	/* The mapping may be invalidated,
 					 * eg. truncate or invalidate_inode_*.
 					 * In this case, do_no_page must
 					 * return with the page locked.
 					 */
+#define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
@@ -194,6 +195,23 @@ extern unsigned int kobjsize(const void 
  */
 extern pgprot_t protection_map[16];
 
+#define FAULT_FLAG_WRITE	0x01
+#define FAULT_FLAG_NONLINEAR	0x02
+
+/*
+ * fault_data is filled in by the pagefault handler and passed to the
+ * vma's ->fault function. That function is responsible for filling in
+ * 'type', which is the type of fault if a page is returned, or the type
+ * of error if NULL is returned.
+ */
+struct fault_data {
+	struct vm_area_struct *vma;
+	unsigned long address;
+	pgoff_t pgoff;
+	unsigned int flags;
+
+	int type;
+};
 
 /*
  * These are the virtual MM functions - opening of an area, closing and
@@ -203,6 +221,7 @@ extern pgprot_t protection_map[16];
 struct vm_operations_struct {
 	void (*open)(struct vm_area_struct * area);
 	void (*close)(struct vm_area_struct * area);
+	struct page * (*fault)(struct vm_area_struct *vma, struct fault_data * fdata);
 	struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
 	unsigned long (*nopfn)(struct vm_area_struct * area, unsigned long address);
 	int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
@@ -614,7 +633,6 @@ static inline int page_mapped(struct pag
  */
 #define NOPAGE_SIGBUS	(NULL)
 #define NOPAGE_OOM	((struct page *) (-1))
-#define NOPAGE_REFAULT	((struct page *) (-2))	/* Return to userspace, rerun */
 
 /*
  * Error return values for the *_nopfn functions
@@ -643,14 +661,13 @@ static inline int page_mapped(struct pag
 extern void show_free_areas(void);
 
 #ifdef CONFIG_SHMEM
-struct page *shmem_nopage(struct vm_area_struct *vma,
-			unsigned long address, int *type);
+struct page *shmem_fault(struct vm_area_struct *vma, struct fault_data *fdata);
 int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *new);
 struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
 					unsigned long addr);
 int shmem_lock(struct file *file, int lock, struct user_struct *user);
 #else
-#define shmem_nopage filemap_nopage
+#define shmem_fault filemap_fault
 
 static inline int shmem_lock(struct file *file, int lock,
 			     struct user_struct *user)
@@ -1043,7 +1060,8 @@ extern void truncate_inode_pages_range(s
 				       loff_t lstart, loff_t lend);
 
 /* generic vm_area_ops exported for stackable file systems */
-extern struct page *filemap_nopage(struct vm_area_struct *, unsigned long, int *);
+extern struct page * __deprecated filemap_fault(struct vm_area_struct *, struct fault_data *);
+extern struct page * __deprecated filemap_nopage(struct vm_area_struct *, unsigned long, int *);
 extern int filemap_populate(struct vm_area_struct *, unsigned long,
 		unsigned long, pgprot_t, unsigned long, int);
 
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -2123,10 +2123,10 @@ oom:
 }
 
 /*
- * do_no_page() tries to create a new page mapping. It aggressively
+ * __do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
- * the "write_access" parameter is true in order to avoid the next
- * page fault.
+ * the FAULT_FLAG_WRITE is set in the flags parameter in order to avoid
+ * the next page fault.
  *
  * As this is called only for pages that do not currently exist, we
  * do not need to flush old virtual caches or the TLB.
@@ -2135,65 +2135,83 @@ oom:
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
  */
-static int do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
+static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access)
+		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
 {
 	spinlock_t *ptl;
-	struct page *page, *nopage_page;
+	struct page *page, *faulted_page;
 	pte_t entry;
-	int ret = VM_FAULT_MINOR;
 	int anon = 0;
 	struct page *dirty_page = NULL;
+	struct fault_data fdata;
+
+	fdata.address = address & PAGE_MASK;
+	fdata.pgoff = pgoff;
+	fdata.flags = flags;
 
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
 
-	nopage_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
-	/* no page was available -- either SIGBUS, OOM or REFAULT */
-	if (unlikely(nopage_page == NOPAGE_SIGBUS))
-		return VM_FAULT_SIGBUS;
-	else if (unlikely(nopage_page == NOPAGE_OOM))
-		return VM_FAULT_OOM;
-	else if (unlikely(nopage_page == NOPAGE_REFAULT))
-		return VM_FAULT_MINOR;
+	if (likely(vma->vm_ops->fault)) {
+		fdata.type = -1;
+		faulted_page = vma->vm_ops->fault(vma, &fdata);
+		WARN_ON(fdata.type == -1);
+		if (unlikely(!faulted_page))
+			return fdata.type;
+	} else {
+		/* Legacy ->nopage path */
+		fdata.type = VM_FAULT_MINOR;
+		faulted_page = vma->vm_ops->nopage(vma, address & PAGE_MASK,
+								&fdata.type);
+		/* no page was available -- either SIGBUS or OOM */
+		if (unlikely(faulted_page == NOPAGE_SIGBUS))
+			return VM_FAULT_SIGBUS;
+		else if (unlikely(faulted_page == NOPAGE_OOM))
+			return VM_FAULT_OOM;
+	}
 
-	BUG_ON(vma->vm_flags & VM_CAN_INVALIDATE && !PageLocked(nopage_page));
 	/*
-	 * For consistency in subsequent calls, make the nopage_page always
+	 * For consistency in subsequent calls, make the faulted_page always
 	 * locked.  These should be in the minority but if they turn out to be
 	 * critical then this can always be revisited
 	 */
 	if (unlikely(!(vma->vm_flags & VM_CAN_INVALIDATE)))
-		lock_page(nopage_page);
+		lock_page(faulted_page);
+	else
+		BUG_ON(!PageLocked(faulted_page));
 
 	/*
 	 * Should we do an early C-O-W break?
 	 */
-	page = nopage_page;
-	if (write_access) {
+	page = faulted_page;
+	if (flags & FAULT_FLAG_WRITE) {
 		if (!(vma->vm_flags & VM_SHARED)) {
+			anon = 1;
 			if (unlikely(anon_vma_prepare(vma))) {
-				ret = VM_FAULT_OOM;
-				goto out_error;
+				fdata.type = VM_FAULT_OOM;
+				goto out;
 			}
 			page = alloc_page_vma(GFP_HIGHUSER, vma, address);
 			if (!page) {
-				ret = VM_FAULT_OOM;
-				goto out_error;
+				fdata.type = VM_FAULT_OOM;
+				goto out;
 			}
-			copy_user_highpage(page, nopage_page, address);
-			anon = 1;
+			copy_user_highpage(page, faulted_page, address);
 		} else {
-			/* if the page will be shareable, see if the backing
+			/*
+			 * If the page will be shareable, see if the backing
 			 * address space wants to know that the page is about
-			 * to become writable */
+			 * to become writable
+			 */
 			if (vma->vm_ops->page_mkwrite &&
 			    vma->vm_ops->page_mkwrite(vma, page) < 0) {
-				ret = VM_FAULT_SIGBUS;
-				goto out_error;
+				fdata.type = VM_FAULT_SIGBUS;
+				anon = 1; /* not anon, but release faulted_page */
+				goto out;
 			}
 		}
+
 	}
 
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
@@ -2209,10 +2227,10 @@ static int do_no_page(struct mm_struct *
 	 * handle that later.
 	 */
 	/* Only go through if we didn't race with anybody else... */
-	if (likely(pte_none(*page_table))) {
+	if (likely(pte_same(*page_table, orig_pte))) {
 		flush_icache_page(vma, page);
 		entry = mk_pte(page, vma->vm_page_prot);
-		if (write_access)
+		if (flags & FAULT_FLAG_WRITE)
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte_at(mm, address, page_table, entry);
 		if (anon) {
@@ -2222,7 +2240,7 @@ static int do_no_page(struct mm_struct *
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(page);
-			if (write_access) {
+			if (flags & FAULT_FLAG_WRITE) {
 				dirty_page = page;
 				get_page(dirty_page);
 			}
@@ -2235,25 +2253,42 @@ static int do_no_page(struct mm_struct *
 		if (anon)
 			page_cache_release(page);
 		else
-			anon = 1; /* not anon, but release nopage_page */
+			anon = 1; /* not anon, but release faulted_page */
 	}
 
 	pte_unmap_unlock(page_table, ptl);
 
 out:
-	unlock_page(nopage_page);
+	unlock_page(faulted_page);
 	if (anon)
-		page_cache_release(nopage_page);
+		page_cache_release(faulted_page);
 	else if (dirty_page) {
 		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
 
-	return ret;
+	return fdata.type;
+}
+
+static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pte_t *page_table, pmd_t *pmd,
+		int write_access, pte_t orig_pte)
+{
+	pgoff_t pgoff = (((address & PAGE_MASK)
+			- vma->vm_start) >> PAGE_CACHE_SHIFT) + vma->vm_pgoff;
+	unsigned int flags = (write_access ? FAULT_FLAG_WRITE : 0);
+
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
+}
 
-out_error:
-	anon = 1; /* relase nopage_page */
-	goto out;
+static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pte_t *page_table, pmd_t *pmd,
+		int write_access, pgoff_t pgoff, pte_t orig_pte)
+{
+	unsigned int flags = FAULT_FLAG_NONLINEAR |
+				(write_access ? FAULT_FLAG_WRITE : 0);
+
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
 }
 
 /*
@@ -2330,9 +2365,14 @@ static int do_file_page(struct mm_struct
 		print_bad_pte(vma, orig_pte, address);
 		return VM_FAULT_OOM;
 	}
-	/* We can then assume vm->vm_ops && vma->vm_ops->populate */
 
 	pgoff = pte_to_pgoff(orig_pte);
+
+	if (vma->vm_ops && vma->vm_ops->fault)
+		return do_nonlinear_fault(mm, vma, address, page_table, pmd,
+					write_access, pgoff, orig_pte);
+
+	/* We can then assume vma->vm_ops && vma->vm_ops->populate */
 	err = vma->vm_ops->populate(vma, address & PAGE_MASK, PAGE_SIZE,
 					vma->vm_page_prot, pgoff, 0);
 	if (err == -ENOMEM)
@@ -2367,10 +2407,9 @@ static inline int handle_pte_fault(struc
 	if (!pte_present(entry)) {
 		if (pte_none(entry)) {
 			if (vma->vm_ops) {
-				if (vma->vm_ops->nopage)
-					return do_no_page(mm, vma, address,
-							  pte, pmd,
-							  write_access);
+				if (vma->vm_ops->fault || vma->vm_ops->nopage)
+					return do_linear_fault(mm, vma, address,
+						pte, pmd, write_access, entry);
 				if (unlikely(vma->vm_ops->nopfn))
 					return do_no_pfn(mm, vma, address, pte,
 							 pmd, write_access);
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1368,40 +1368,37 @@ static int fastcall page_cache_read(stru
 #define MMAP_LOTSAMISS  (100)
 
 /**
- * filemap_nopage - read in file data for page fault handling
- * @area:	the applicable vm_area
- * @address:	target address to read in
- * @type:	returned with VM_FAULT_{MINOR,MAJOR} if not %NULL
+ * filemap_fault - read in file data for page fault handling
+ * @fdata:	the applicable fault_data
  *
- * filemap_nopage() is invoked via the vma operations vector for a
+ * filemap_fault() is invoked via the vma operations vector for a
  * mapped memory region to read in file data during a page fault.
  *
  * The goto's are kind of ugly, but this streamlines the normal case of having
  * it in the page cache, and handles the special cases reasonably without
  * having a lot of duplicated code.
  */
-struct page *filemap_nopage(struct vm_area_struct *area,
-				unsigned long address, int *type)
+struct page *filemap_fault(struct vm_area_struct *vma, struct fault_data *fdata)
 {
 	int error;
-	struct file *file = area->vm_file;
+	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
 	struct page *page;
-	unsigned long size, pgoff;
-	int did_readaround = 0, majmin = VM_FAULT_MINOR;
+	unsigned long size;
+	int did_readaround = 0;
 
-	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+	fdata->type = VM_FAULT_MINOR;
 
-	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
 
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff >= size)
+	if (fdata->pgoff >= size)
 		goto outside_data_content;
 
 	/* If we don't want any read-ahead, don't bother */
-	if (VM_RandomReadHint(area))
+	if (VM_RandomReadHint(vma))
 		goto no_cached_page;
 
 	/*
@@ -1410,19 +1407,19 @@ struct page *filemap_nopage(struct vm_ar
 	 *
 	 * For sequential accesses, we use the generic readahead logic.
 	 */
-	if (VM_SequentialReadHint(area))
-		page_cache_readahead(mapping, ra, file, pgoff, 1);
+	if (VM_SequentialReadHint(vma))
+		page_cache_readahead(mapping, ra, file, fdata->pgoff, 1);
 
 	/*
 	 * Do we have something in the page cache already?
 	 */
 retry_find:
-	page = find_lock_page(mapping, pgoff);
+	page = find_lock_page(mapping, fdata->pgoff);
 	if (!page) {
 		unsigned long ra_pages;
 
-		if (VM_SequentialReadHint(area)) {
-			handle_ra_miss(mapping, ra, pgoff);
+		if (VM_SequentialReadHint(vma)) {
+			handle_ra_miss(mapping, ra, fdata->pgoff);
 			goto no_cached_page;
 		}
 		ra->mmap_miss++;
@@ -1439,7 +1436,7 @@ retry_find:
 		 * check did_readaround, as this is an inner loop.
 		 */
 		if (!did_readaround) {
-			majmin = VM_FAULT_MAJOR;
+			fdata->type = VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 		}
 		did_readaround = 1;
@@ -1447,11 +1444,11 @@ retry_find:
 		if (ra_pages) {
 			pgoff_t start = 0;
 
-			if (pgoff > ra_pages / 2)
-				start = pgoff - ra_pages / 2;
+			if (fdata->pgoff > ra_pages / 2)
+				start = fdata->pgoff - ra_pages / 2;
 			do_page_cache_readahead(mapping, file, start, ra_pages);
 		}
-		page = find_lock_page(mapping, pgoff);
+		page = find_lock_page(mapping, fdata->pgoff);
 		if (!page)
 			goto no_cached_page;
 	}
@@ -1468,7 +1465,7 @@ retry_find:
 
 	/* Must recheck i_size under page lock */
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (unlikely(pgoff >= size)) {
+	if (unlikely(fdata->pgoff >= size)) {
 		unlock_page(page);
 		goto outside_data_content;
 	}
@@ -1477,8 +1474,6 @@ retry_find:
 	 * Found the page and have a reference on it.
 	 */
 	mark_page_accessed(page);
-	if (type)
-		*type = majmin;
 	return page;
 
 outside_data_content:
@@ -1486,15 +1481,17 @@ outside_data_content:
 	 * An external ptracer can access pages that normally aren't
 	 * accessible..
 	 */
-	if (area->vm_mm == current->mm)
-		return NOPAGE_SIGBUS;
+	if (vma->vm_mm == current->mm) {
+		fdata->type = VM_FAULT_SIGBUS;
+		return NULL;
+	}
 	/* Fall through to the non-read-ahead case */
 no_cached_page:
 	/*
 	 * We're only likely to ever get here if MADV_RANDOM is in
 	 * effect.
 	 */
-	error = page_cache_read(file, pgoff);
+	error = page_cache_read(file, fdata->pgoff);
 
 	/*
 	 * The page we want has now been added to the page cache.
@@ -1510,13 +1507,15 @@ no_cached_page:
 	 * to schedule I/O.
 	 */
 	if (error == -ENOMEM)
-		return NOPAGE_OOM;
-	return NOPAGE_SIGBUS;
+		fdata->type = VM_FAULT_OOM;
+	else
+		fdata->type = VM_FAULT_SIGBUS;
+	return NULL;
 
 page_not_uptodate:
 	/* IO error path */
 	if (!did_readaround) {
-		majmin = VM_FAULT_MAJOR;
+		fdata->type = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 	}
 
@@ -1535,7 +1534,30 @@ page_not_uptodate:
 
 	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
-	return NOPAGE_SIGBUS;
+	fdata->type = VM_FAULT_SIGBUS;
+	return NULL;
+}
+EXPORT_SYMBOL(filemap_fault);
+
+/*
+ * filemap_nopage and filemap_populate are legacy exports that are not used
+ * in tree. Scheduled for removal.
+ */
+struct page *filemap_nopage(struct vm_area_struct *area,
+				unsigned long address, int *type)
+{
+	struct page *page;
+	struct fault_data fdata;
+	fdata.address = address;
+	fdata.pgoff = ((address - area->vm_start) >> PAGE_CACHE_SHIFT)
+			+ area->vm_pgoff;
+	fdata.flags = 0;
+
+	page = filemap_fault(area, &fdata);
+	if (type)
+		*type = fdata.type;
+
+	return page;
 }
 EXPORT_SYMBOL(filemap_nopage);
 
@@ -1713,8 +1735,7 @@ repeat:
 EXPORT_SYMBOL(filemap_populate);
 
 struct vm_operations_struct generic_file_vm_ops = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 };
 
 /* This is used for a general mmap of a disk file */
@@ -1727,7 +1748,7 @@ int generic_file_mmap(struct file * file
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &generic_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
Index: linux-2.6/mm/fremap.c
===================================================================
--- linux-2.6.orig/mm/fremap.c
+++ linux-2.6/mm/fremap.c
@@ -128,6 +128,25 @@ out:
 	return err;
 }
 
+static int populate_range(struct mm_struct *mm, struct vm_area_struct *vma,
+			unsigned long addr, unsigned long size, pgoff_t pgoff)
+{
+	int err;
+
+	do {
+		err = install_file_pte(mm, vma, addr, pgoff, vma->vm_page_prot);
+		if (err)
+			return err;
+
+		size -= PAGE_SIZE;
+		addr += PAGE_SIZE;
+		pgoff++;
+	} while (size);
+
+
+	return 0;
+}
+
 /***
  * sys_remap_file_pages - remap arbitrary pages of a shared backing store
  *                        file within an existing vma.
@@ -185,41 +204,63 @@ asmlinkage long sys_remap_file_pages(uns
 	 * the single existing vma.  vm_private_data is used as a
 	 * swapout cursor in a VM_NONLINEAR vma.
 	 */
-	if (vma && (vma->vm_flags & VM_SHARED) &&
-		(!vma->vm_private_data || (vma->vm_flags & VM_NONLINEAR)) &&
-		vma->vm_ops && vma->vm_ops->populate &&
-			end > start && start >= vma->vm_start &&
-				end <= vma->vm_end) {
-
-		/* Must set VM_NONLINEAR before any pages are populated. */
-		if (pgoff != linear_page_index(vma, start) &&
-		    !(vma->vm_flags & VM_NONLINEAR)) {
-			if (!has_write_lock) {
-				up_read(&mm->mmap_sem);
-				down_write(&mm->mmap_sem);
-				has_write_lock = 1;
-				goto retry;
-			}
-			mapping = vma->vm_file->f_mapping;
-			spin_lock(&mapping->i_mmap_lock);
-			flush_dcache_mmap_lock(mapping);
-			vma->vm_flags |= VM_NONLINEAR;
-			vma_prio_tree_remove(vma, &mapping->i_mmap);
-			vma_nonlinear_insert(vma, &mapping->i_mmap_nonlinear);
-			flush_dcache_mmap_unlock(mapping);
-			spin_unlock(&mapping->i_mmap_lock);
+	if (!vma || !(vma->vm_flags & VM_SHARED))
+		goto out;
+
+	if (vma->vm_private_data && !(vma->vm_flags & VM_NONLINEAR))
+		goto out;
+
+	if ((!vma->vm_ops || !vma->vm_ops->populate) &&
+					!(vma->vm_flags & VM_CAN_NONLINEAR))
+		goto out;
+
+	if (end <= start || start < vma->vm_start || end > vma->vm_end)
+		goto out;
+
+	/* Must set VM_NONLINEAR before any pages are populated. */
+	if (!(vma->vm_flags & VM_NONLINEAR)) {
+		/* Don't need a nonlinear mapping, exit success */
+		if (pgoff == linear_page_index(vma, start)) {
+			err = 0;
+			goto out;
 		}
 
-		err = vma->vm_ops->populate(vma, start, size,
-					    vma->vm_page_prot,
-					    pgoff, flags & MAP_NONBLOCK);
-
-		/*
-		 * We can't clear VM_NONLINEAR because we'd have to do
-		 * it after ->populate completes, and that would prevent
-		 * downgrading the lock.  (Locks can't be upgraded).
-		 */
+		if (!has_write_lock) {
+			up_read(&mm->mmap_sem);
+			down_write(&mm->mmap_sem);
+			has_write_lock = 1;
+			goto retry;
+		}
+		mapping = vma->vm_file->f_mapping;
+		spin_lock(&mapping->i_mmap_lock);
+		flush_dcache_mmap_lock(mapping);
+		vma->vm_flags |= VM_NONLINEAR;
+		vma_prio_tree_remove(vma, &mapping->i_mmap);
+		vma_nonlinear_insert(vma, &mapping->i_mmap_nonlinear);
+		flush_dcache_mmap_unlock(mapping);
+		spin_unlock(&mapping->i_mmap_lock);
 	}
+
+	if (vma->vm_flags & VM_CAN_NONLINEAR) {
+		err = populate_range(mm, vma, start, size, pgoff);
+		if (!err && !(flags & MAP_NONBLOCK)) {
+			if (unlikely(has_write_lock)) {
+				downgrade_write(&mm->mmap_sem);
+				has_write_lock = 0;
+			}
+			make_pages_present(start, start+size);
+		}
+	} else
+		err = vma->vm_ops->populate(vma, start, size, vma->vm_page_prot,
+						pgoff, flags & MAP_NONBLOCK);
+
+	/*
+	 * We can't clear VM_NONLINEAR because we'd have to do
+	 * it after ->populate completes, and that would prevent
+	 * downgrading the lock.  (Locks can't be upgraded).
+	 */
+
+out:
 	if (likely(!has_write_lock))
 		up_read(&mm->mmap_sem);
 	else
Index: linux-2.6/fs/gfs2/ops_file.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_file.c
+++ linux-2.6/fs/gfs2/ops_file.c
@@ -396,7 +396,7 @@ static int gfs2_mmap(struct file *file, 
 	else
 		vma->vm_ops = &gfs2_vm_ops_private;
 
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE|VM_CAN_NONLINEAR;
 
 	gfs2_glock_dq_uninit(&i_gh);
 
Index: linux-2.6/fs/gfs2/ops_vm.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_vm.c
+++ linux-2.6/fs/gfs2/ops_vm.c
@@ -42,17 +42,17 @@ static void pfault_be_greedy(struct gfs2
 		iput(&ip->i_inode);
 }
 
-static struct page *gfs2_private_nopage(struct vm_area_struct *area,
-					unsigned long address, int *type)
+static struct page *gfs2_private_fault(struct vm_area_struct *vma,
+						struct fault_data *fdata)
 {
-	struct gfs2_inode *ip = GFS2_I(area->vm_file->f_mapping->host);
+	struct gfs2_inode *ip = GFS2_I(vma->vm_file->f_mapping->host);
 	struct page *result;
 
 	set_bit(GIF_PAGED, &ip->i_flags);
 
-	result = filemap_nopage(area, address, type);
+	result = filemap_fault(vma, fdata);
 
-	if (result && result != NOPAGE_OOM)
+	if (result)
 		pfault_be_greedy(ip);
 
 	return result;
@@ -126,16 +126,14 @@ out:
 	return error;
 }
 
-static struct page *gfs2_sharewrite_nopage(struct vm_area_struct *area,
-					   unsigned long address, int *type)
+static struct page *gfs2_sharewrite_fault(struct vm_area_struct *vma,
+						struct fault_data *fdata)
 {
-	struct file *file = area->vm_file;
+	struct file *file = vma->vm_file;
 	struct gfs2_file *gf = file->private_data;
 	struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
 	struct gfs2_holder i_gh;
 	struct page *result = NULL;
-	unsigned long index = ((address - area->vm_start) >> PAGE_CACHE_SHIFT) +
-			      area->vm_pgoff;
 	int alloc_required;
 	int error;
 
@@ -146,21 +144,25 @@ static struct page *gfs2_sharewrite_nopa
 	set_bit(GIF_PAGED, &ip->i_flags);
 	set_bit(GIF_SW_PAGED, &ip->i_flags);
 
-	error = gfs2_write_alloc_required(ip, (u64)index << PAGE_CACHE_SHIFT,
-					  PAGE_CACHE_SIZE, &alloc_required);
-	if (error)
+	error = gfs2_write_alloc_required(ip,
+					(u64)fdata->pgoff << PAGE_CACHE_SHIFT,
+					PAGE_CACHE_SIZE, &alloc_required);
+	if (error) {
+		fdata->type = VM_FAULT_OOM; /* XXX: are these right? */
 		goto out;
+	}
 
 	set_bit(GFF_EXLOCK, &gf->f_flags);
-	result = filemap_nopage(area, address, type);
+	result = filemap_fault(vma, fdata);
 	clear_bit(GFF_EXLOCK, &gf->f_flags);
-	if (!result || result == NOPAGE_OOM)
+	if (!result)
 		goto out;
 
 	if (alloc_required) {
 		error = alloc_page_backing(ip, result);
 		if (error) {
 			page_cache_release(result);
+			fdata->type = VM_FAULT_OOM;
 			result = NULL;
 			goto out;
 		}
@@ -175,10 +177,10 @@ out:
 }
 
 struct vm_operations_struct gfs2_vm_ops_private = {
-	.nopage = gfs2_private_nopage,
+	.fault = gfs2_private_fault,
 };
 
 struct vm_operations_struct gfs2_vm_ops_sharewrite = {
-	.nopage = gfs2_sharewrite_nopage,
+	.fault = gfs2_sharewrite_fault,
 };
 
Index: linux-2.6/fs/ocfs2/mmap.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/mmap.c
+++ linux-2.6/fs/ocfs2/mmap.c
@@ -59,24 +59,23 @@ static inline int ocfs2_vm_op_unblock_si
 	return sigprocmask(SIG_SETMASK, oldset, NULL);
 }
 
-static struct page *ocfs2_nopage(struct vm_area_struct * area,
-				 unsigned long address,
-				 int *type)
+static struct page *ocfs2_fault(struct vm_area_struct *vma,
+					struct fault_data *fdata)
 {
-	struct page *page = NOPAGE_SIGBUS;
+	struct page *page = NULL;
 	sigset_t blocked, oldset;
 	int ret;
 
-	mlog_entry("(area=%p, address=%lu, type=%p)\n", area, address,
-		   type);
+	mlog_entry("(vma=%p, address=%lu)\n", vma, fdata->address);
 
 	ret = ocfs2_vm_op_block_sigs(&blocked, &oldset);
 	if (ret < 0) {
 		mlog_errno(ret);
+		fdata->type = VM_FAULT_SIGBUS;
 		goto out;
 	}
 
-	page = filemap_nopage(area, address, type);
+	page = filemap_fault(vma, fdata);
 
 	ret = ocfs2_vm_op_unblock_sigs(&oldset);
 	if (ret < 0)
@@ -147,7 +146,7 @@ out:
 }
 
 static struct vm_operations_struct ocfs2_file_vm_ops = {
-	.nopage		= ocfs2_nopage,
+	.fault		= ocfs2_fault,
 	.page_mkwrite	= ocfs2_page_mkwrite,
 };
 
@@ -155,7 +154,7 @@ int ocfs2_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &ocfs2_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c
@@ -246,18 +246,19 @@ xfs_file_fsync(
 
 #ifdef CONFIG_XFS_DMAPI
 STATIC struct page *
-xfs_vm_nopage(
-	struct vm_area_struct	*area,
-	unsigned long		address,
-	int			*type)
+xfs_vm_fault(
+	struct vm_area_struct *vma,
+	struct fault_data *fdata)
 {
-	struct inode	*inode = area->vm_file->f_dentry->d_inode;
+	struct inode	*inode = vma->vm_file->f_dentry->d_inode;
 	bhv_vnode_t	*vp = vn_from_inode(inode);
 
 	ASSERT_ALWAYS(vp->v_vfsp->vfs_flag & VFS_DMI);
-	if (XFS_SEND_MMAP(XFS_VFSTOM(vp->v_vfsp), area, 0))
+	if (XFS_SEND_MMAP(XFS_VFSTOM(vp->v_vfsp), vma, 0)) {
+		fdata->type = VM_FAULT_SIGBUS;
 		return NULL;
-	return filemap_nopage(area, address, type);
+	}
+	return filemap_fault(vma, fdata);
 }
 #endif /* CONFIG_XFS_DMAPI */
 
@@ -343,7 +344,7 @@ xfs_file_mmap(
 	struct vm_area_struct *vma)
 {
 	vma->vm_ops = &xfs_file_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 
 #ifdef CONFIG_XFS_DMAPI
 	if (vn_from_inode(filp->f_dentry->d_inode)->v_vfsp->vfs_flag & VFS_DMI)
@@ -502,14 +503,12 @@ const struct file_operations xfs_dir_fil
 };
 
 static struct vm_operations_struct xfs_file_vm_ops = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 };
 
 #ifdef CONFIG_XFS_DMAPI
 static struct vm_operations_struct xfs_dmapi_file_vm_ops = {
-	.nopage		= xfs_vm_nopage,
-	.populate	= filemap_populate,
+	.fault		= xfs_vm_fault,
 #ifdef HAVE_VMOP_MPROTECT
 	.mprotect	= xfs_vm_mprotect,
 #endif
Index: linux-2.6/mm/mmap.c
===================================================================
--- linux-2.6.orig/mm/mmap.c
+++ linux-2.6/mm/mmap.c
@@ -1148,12 +1148,8 @@ out:	
 		mm->locked_vm += len >> PAGE_SHIFT;
 		make_pages_present(addr, addr + len);
 	}
-	if (flags & MAP_POPULATE) {
-		up_write(&mm->mmap_sem);
-		sys_remap_file_pages(addr, len, 0,
-					pgoff, flags & MAP_NONBLOCK);
-		down_write(&mm->mmap_sem);
-	}
+	if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK))
+		make_pages_present(addr, addr + len);
 	return addr;
 
 unmap_and_free_vma:
Index: linux-2.6/ipc/shm.c
===================================================================
--- linux-2.6.orig/ipc/shm.c
+++ linux-2.6/ipc/shm.c
@@ -260,7 +260,7 @@ static struct file_operations shm_file_o
 static struct vm_operations_struct shm_vm_ops = {
 	.open	= shm_open,	/* callback for a new vm-area open */
 	.close	= shm_close,	/* callback for when the vm-area is released */
-	.nopage	= shmem_nopage,
+	.fault	= shmem_fault,
 #if defined(CONFIG_NUMA) && defined(CONFIG_SHMEM)
 	.set_policy = shmem_set_policy,
 	.get_policy = shmem_get_policy,
Index: linux-2.6/mm/filemap_xip.c
===================================================================
--- linux-2.6.orig/mm/filemap_xip.c
+++ linux-2.6/mm/filemap_xip.c
@@ -200,62 +200,63 @@ __xip_unmap (struct address_space * mapp
 }
 
 /*
- * xip_nopage() is invoked via the vma operations vector for a
+ * xip_fault() is invoked via the vma operations vector for a
  * mapped memory region to read in file data during a page fault.
  *
- * This function is derived from filemap_nopage, but used for execute in place
+ * This function is derived from filemap_fault, but used for execute in place
  */
-static struct page *
-xip_file_nopage(struct vm_area_struct * area,
-		   unsigned long address,
-		   int *type)
+static struct page *xip_file_fault(struct vm_area_struct *area,
+					struct fault_data *fdata)
 {
 	struct file *file = area->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = mapping->host;
 	struct page *page;
-	unsigned long size, pgoff, endoff;
+	pgoff_t size;
 
-	pgoff = ((address - area->vm_start) >> PAGE_CACHE_SHIFT)
-		+ area->vm_pgoff;
-	endoff = ((area->vm_end - area->vm_start) >> PAGE_CACHE_SHIFT)
-		+ area->vm_pgoff;
+	/* XXX: are VM_FAULT_ codes OK? */
 
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff >= size) {
+	if (fdata->pgoff >= size) {
+		fdata->type = VM_FAULT_SIGBUS;
 		return NULL;
 	}
 
-	page = mapping->a_ops->get_xip_page(mapping, pgoff*(PAGE_SIZE/512), 0);
-	if (!IS_ERR(page)) {
+	page = mapping->a_ops->get_xip_page(mapping,
+					fdata->pgoff*(PAGE_SIZE/512), 0);
+	if (!IS_ERR(page))
 		goto out;
-	}
-	if (PTR_ERR(page) != -ENODATA)
+	if (PTR_ERR(page) != -ENODATA) {
+		fdata->type = VM_FAULT_OOM;
 		return NULL;
+	}
 
 	/* sparse block */
 	if ((area->vm_flags & (VM_WRITE | VM_MAYWRITE)) &&
 	    (area->vm_flags & (VM_SHARED| VM_MAYSHARE)) &&
 	    (!(mapping->host->i_sb->s_flags & MS_RDONLY))) {
 		/* maybe shared writable, allocate new block */
-		page = mapping->a_ops->get_xip_page (mapping,
-			pgoff*(PAGE_SIZE/512), 1);
-		if (IS_ERR(page))
+		page = mapping->a_ops->get_xip_page(mapping,
+					fdata->pgoff*(PAGE_SIZE/512), 1);
+		if (IS_ERR(page)) {
+			fdata->type = VM_FAULT_SIGBUS;
 			return NULL;
+		}
 		/* unmap page at pgoff from all other vmas */
-		__xip_unmap(mapping, pgoff);
+		__xip_unmap(mapping, fdata->pgoff);
 	} else {
 		/* not shared and writable, use ZERO_PAGE() */
-		page = ZERO_PAGE(address);
+		page = ZERO_PAGE(fdata->address);
 	}
 
 out:
+	fdata->type = VM_FAULT_MINOR;
 	page_cache_get(page);
 	return page;
 }
 
 static struct vm_operations_struct xip_file_vm_ops = {
-	.nopage         = xip_file_nopage,
+	.fault	= xip_file_fault,
 };
 
 int xip_file_mmap(struct file * file, struct vm_area_struct * vma)
@@ -264,6 +265,7 @@ int xip_file_mmap(struct file * file, st
 
 	file_accessed(file);
 	vma->vm_ops = &xip_file_vm_ops;
+	vma->vm_flags |= VM_CAN_NONLINEAR;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xip_file_mmap);
Index: linux-2.6/mm/nommu.c
===================================================================
--- linux-2.6.orig/mm/nommu.c
+++ linux-2.6/mm/nommu.c
@@ -1299,8 +1299,7 @@ int in_gate_area_no_task(unsigned long a
 	return 0;
 }
 
-struct page *filemap_nopage(struct vm_area_struct *area,
-			unsigned long address, int *type)
+struct page *filemap_fault(struct vm_area_struct *vma, struct fault_data *fdata)
 {
 	BUG();
 	return NULL;
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -81,7 +81,7 @@ enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate page */
-	SGP_NOPAGE,	/* same as SGP_CACHE, return with page locked */
+	SGP_FAULT,	/* same as SGP_CACHE, return with page locked */
 };
 
 static int shmem_getpage(struct inode *inode, unsigned long idx,
@@ -1021,6 +1021,10 @@ static int shmem_getpage(struct inode *i
 
 	if (idx >= SHMEM_MAX_INDEX)
 		return -EFBIG;
+
+	if (type)
+		*type = VM_FAULT_MINOR;
+
 	/*
 	 * Normally, filepage is NULL on entry, and either found
 	 * uptodate immediately, or allocated and zeroed, or read
@@ -1211,7 +1215,7 @@ repeat:
 done:
 	if (*pagep != filepage) {
 		*pagep = filepage;
-		if (sgp != SGP_NOPAGE)
+		if (sgp != SGP_FAULT)
 			unlock_page(filepage);
 
 	}
@@ -1225,75 +1229,30 @@ failed:
 	return error;
 }
 
-struct page *shmem_nopage(struct vm_area_struct *vma, unsigned long address, int *type)
+struct page *shmem_fault(struct vm_area_struct *vma, struct fault_data *fdata)
 {
 	struct inode *inode = vma->vm_file->f_dentry->d_inode;
 	struct page *page = NULL;
-	unsigned long idx;
 	int error;
 
 	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
 
-	idx = (address - vma->vm_start) >> PAGE_SHIFT;
-	idx += vma->vm_pgoff;
-	idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
-	if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
-		return NOPAGE_SIGBUS;
+	if (((loff_t)fdata->pgoff << PAGE_CACHE_SHIFT) >= i_size_read(inode)) {
+		fdata->type = VM_FAULT_SIGBUS;
+		return NULL;
+	}
 
-	error = shmem_getpage(inode, idx, &page, SGP_NOPAGE, type);
-	if (error)
-		return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
+	error = shmem_getpage(inode, fdata->pgoff, &page,
+						SGP_FAULT, &fdata->type);
+	if (error) {
+		fdata->type = error == -ENOMEM ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
+		return NULL;
+	}
 
 	mark_page_accessed(page);
 	return page;
 }
 
-static int shmem_populate(struct vm_area_struct *vma,
-	unsigned long addr, unsigned long len,
-	pgprot_t prot, unsigned long pgoff, int nonblock)
-{
-	struct inode *inode = vma->vm_file->f_dentry->d_inode;
-	struct mm_struct *mm = vma->vm_mm;
-	enum sgp_type sgp = nonblock? SGP_QUICK: SGP_CACHE;
-	unsigned long size;
-
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (pgoff >= size || pgoff + (len >> PAGE_SHIFT) > size)
-		return -EINVAL;
-
-	while ((long) len > 0) {
-		struct page *page = NULL;
-		int err;
-		/*
-		 * Will need changing if PAGE_CACHE_SIZE != PAGE_SIZE
-		 */
-		err = shmem_getpage(inode, pgoff, &page, sgp, NULL);
-		if (err)
-			return err;
-		/* Page may still be null, but only if nonblock was set. */
-		if (page) {
-			mark_page_accessed(page);
-			err = install_page(mm, vma, addr, page, prot);
-			if (err) {
-				page_cache_release(page);
-				return err;
-			}
-		} else if (vma->vm_flags & VM_NONLINEAR) {
-			/* No page was found just because we can't read it in
-			 * now (being here implies nonblock != 0), but the page
-			 * may exist, so set the PTE to fault it in later. */
-    			err = install_file_pte(mm, vma, addr, pgoff, prot);
-			if (err)
-	    			return err;
-		}
-
-		len -= PAGE_SIZE;
-		addr += PAGE_SIZE;
-		pgoff++;
-	}
-	return 0;
-}
-
 #ifdef CONFIG_NUMA
 int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
 {
@@ -1338,7 +1297,7 @@ int shmem_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
@@ -2392,8 +2351,7 @@ static struct super_operations shmem_ops
 };
 
 static struct vm_operations_struct shmem_vm_ops = {
-	.nopage		= shmem_nopage,
-	.populate	= shmem_populate,
+	.fault		= shmem_fault,
 #ifdef CONFIG_NUMA
 	.set_policy     = shmem_set_policy,
 	.get_policy     = shmem_get_policy,
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -53,7 +53,7 @@ static inline void truncate_partial_page
 /*
  * If truncate cannot remove the fs-private metadata from the page, the page
  * becomes anonymous.  It will be left on the LRU and may even be mapped into
- * user pagetables if we're racing with filemap_nopage().
+ * user pagetables if we're racing with filemap_fault().
  *
  * We need to bale out if page->mapping is no longer equal to the original
  * mapping.  This happens a) when the VM reclaimed the page while we waited on
Index: linux-2.6/fs/afs/file.c
===================================================================
--- linux-2.6.orig/fs/afs/file.c
+++ linux-2.6/fs/afs/file.c
@@ -63,8 +63,7 @@ const struct address_space_operations af
 };
 
 static struct vm_operations_struct afs_fs_vm_operations = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 #ifdef CONFIG_AFS_FSCACHE
 	.page_mkwrite	= afs_file_page_mkwrite,
 #endif
@@ -82,7 +81,7 @@ static int afs_file_mmap(struct file *fi
 
 	file_accessed(file);
 	vma->vm_ops = &afs_fs_vm_operations;
-	vma->vm_flags |= VM_CAN_INVALIDATE;
+	vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	return 0;
 }
 
Index: linux-2.6/fs/gfs2/ops_address.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_address.c
+++ linux-2.6/fs/gfs2/ops_address.c
@@ -227,7 +227,7 @@ static int gfs2_readpage(struct file *fi
 		if (file) {
 			gf = file->private_data;
 			if (test_bit(GFF_EXLOCK, &gf->f_flags))
-				/* gfs2_sharewrite_nopage has grabbed the ip->i_gl already */
+				/* gfs2_sharewrite_fault has grabbed the ip->i_gl already */
 				goto skip_lock;
 		}
 		gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME|GL_AOP, &gh);
Index: linux-2.6/fs/ncpfs/mmap.c
===================================================================
--- linux-2.6.orig/fs/ncpfs/mmap.c
+++ linux-2.6/fs/ncpfs/mmap.c
@@ -25,8 +25,8 @@
 /*
  * Fill in the supplied page for mmap
  */
-static struct page* ncp_file_mmap_nopage(struct vm_area_struct *area,
-				     unsigned long address, int *type)
+static struct page* ncp_file_mmap_fault(struct vm_area_struct *area,
+						struct fault_data *fdata)
 {
 	struct file *file = area->vm_file;
 	struct dentry *dentry = file->f_dentry;
@@ -40,15 +40,17 @@ static struct page* ncp_file_mmap_nopage
 
 	page = alloc_page(GFP_HIGHUSER); /* ncpfs has nothing against high pages
 	           as long as recvmsg and memset works on it */
-	if (!page)
-		return page;
+	if (!page) {
+		fdata->type = VM_FAULT_OOM;
+		return NULL;
+	}
 	pg_addr = kmap(page);
-	address &= PAGE_MASK;
-	pos = address - area->vm_start + (area->vm_pgoff << PAGE_SHIFT);
+	pos = fdata->pgoff << PAGE_SHIFT;
 
 	count = PAGE_SIZE;
-	if (address + PAGE_SIZE > area->vm_end) {
-		count = area->vm_end - address;
+	if (fdata->address + PAGE_SIZE > area->vm_end) {
+		WARN_ON(1); /* shouldn't happen? */
+		count = area->vm_end - fdata->address;
 	}
 	/* what we can read in one go */
 	bufsize = NCP_SERVER(inode)->buffer_size;
@@ -91,15 +93,14 @@ static struct page* ncp_file_mmap_nopage
 	 * fetches from the network, here the analogue of disk.
 	 * -- wli
 	 */
-	if (type)
-		*type = VM_FAULT_MAJOR;
+	fdata->type = VM_FAULT_MAJOR;
 	count_vm_event(PGMAJFAULT);
 	return page;
 }
 
 static struct vm_operations_struct ncp_file_mmap =
 {
-	.nopage	= ncp_file_mmap_nopage,
+	.fault = ncp_file_mmap_fault,
 };
 
 
Index: linux-2.6/fs/nfs/fscache.c
===================================================================
--- linux-2.6.orig/fs/nfs/fscache.c
+++ linux-2.6/fs/nfs/fscache.c
@@ -293,8 +293,7 @@ static int nfs_file_page_mkwrite(struct 
 }
 
 struct vm_operations_struct nfs_fs_vm_operations = {
-	.nopage		= filemap_nopage,
-	.populate	= filemap_populate,
+	.fault		= filemap_fault,
 	.page_mkwrite	= nfs_file_page_mkwrite,
 };
 
Index: linux-2.6/fs/nfs/fscache.h
===================================================================
--- linux-2.6.orig/fs/nfs/fscache.h
+++ linux-2.6/fs/nfs/fscache.h
@@ -200,7 +200,7 @@ static inline void nfs_fscache_install_v
 {
 	if (NFS_I(inode)->fscache) {
 		vma->vm_ops = &nfs_fs_vm_operations;
-		vma->vm_flags |= VM_CAN_INVALIDATE;
+		vma->vm_flags |= VM_CAN_INVALIDATE | VM_CAN_NONLINEAR;
 	}
 }
 
Index: linux-2.6/fs/ocfs2/aops.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/aops.c
+++ linux-2.6/fs/ocfs2/aops.c
@@ -215,7 +215,7 @@ static int ocfs2_readpage(struct file *f
 	 * might now be discovering a truncate that hit on another node.
 	 * block_read_full_page->get_block freaks out if it is asked to read
 	 * beyond the end of a file, so we check here.  Callers
-	 * (generic_file_read, fault->nopage) are clever enough to check i_size
+	 * (generic_file_read, vm_ops->fault) are clever enough to check i_size
 	 * and notice that the page they just read isn't needed.
 	 *
 	 * XXX sys_readahead() seems to get that wrong?
Index: linux-2.6/Documentation/feature-removal-schedule.txt
===================================================================
--- linux-2.6.orig/Documentation/feature-removal-schedule.txt
+++ linux-2.6/Documentation/feature-removal-schedule.txt
@@ -203,6 +203,33 @@ Who:	Nick Piggin <npiggin@suse.de>
 
 ---------------------------
 
+What:	filemap_nopage, filemap_populate
+When:	February 2007
+Why:	These legacy interfaces no longer have any callers in the kernel and
+	any functionality provided can be provided with filemap_fault. The
+	removal schedule is short because they are a big maintenance burden
+	and have some bugs.
+Who:	Nick Piggin <npiggin@suse.de>
+
+---------------------------
+
+What:	vm_ops.populate, install_page
+When:	February 2007
+Why:	These legacy interfaces no longer have any callers in the kernel and
+	any functionality provided can be provided with vm_ops.fault.
+Who:	Nick Piggin <npiggin@suse.de>
+
+---------------------------
+
+What:	vm_ops.nopage
+When:	October 2008, provided in-kernel callers have been converted
+Why:	This interface is replaced by vm_ops.fault, but it has been around
+	forever, is used by a lot of drivers, and doesn't cost much to
+	maintain.
+Who:	Nick Piggin <npiggin@suse.de>
+
+---------------------------
+
 What:	Interrupt only SA_* flags
 When:	Januar 2007
 Why:	The interrupt related SA_* flags are replaced by IRQF_* to move them

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [patch 4/5] mm: add vm_insert_pfn helper
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (2 preceding siblings ...)
  2006-10-10 14:22 ` [patch 3/5] mm: fault handler to replace nopage and populate Nick Piggin
@ 2006-10-10 14:22 ` Nick Piggin
  2006-10-11 10:12   ` Thomas Hellstrom
  2006-10-10 14:22 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:22 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

Add a vm_insert_pfn helper, so that ->fault handlers can have nopfn
functionality by installing their own pte and returning NULL.

Signed-off-by: Nick Piggin <npiggin@suse.de>
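
For driver writers, usage of the new helper from a ->fault handler would look
roughly like the sketch below. This is illustrative only and not part of the
patch: mydrv_lookup_pfn() is a made-up device-specific lookup, and the real
in-tree example is the mspec conversion in patch 5/5.

```c
static struct page *mydrv_fault(struct vm_area_struct *vma,
				struct fault_data *fdata)
{
	/* mydrv_lookup_pfn() is hypothetical: translate the file offset
	 * of the faulting page into a device pfn. */
	unsigned long pfn = mydrv_lookup_pfn(vma, fdata->pgoff);

	fdata->type = VM_FAULT_MINOR;
	/*
	 * vm_insert_pfn may return -EBUSY if another thread installed
	 * the pte first; that race is benign, so the result is ignored.
	 */
	vm_insert_pfn(vma, fdata->address, pfn);

	/* No struct page backs VM_PFNMAP memory, so return NULL. */
	return NULL;
}
```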

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -1121,6 +1121,7 @@ unsigned long vmalloc_to_pfn(void *addr)
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
+int vm_insert_pfn(struct vm_area_struct *, unsigned long addr, unsigned long pfn);
 
 struct page *follow_page(struct vm_area_struct *, unsigned long address,
 			unsigned int foll_flags);
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1267,6 +1267,50 @@ int vm_insert_page(struct vm_area_struct
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+/**
+ * vm_insert_pfn - insert single pfn into user vma
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @pfn: source kernel pfn
+ *
+ * Similar to vm_insert_page, this allows drivers to insert individual pages
+ * they've allocated into a user vma. Same comments apply.
+ *
+ * This function should only be called from a vm_ops->fault handler, and
+ * in that case the handler should return NULL.
+ */
+int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+
+	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
+	BUG_ON(is_cow_mapping(vma->vm_flags));
+
+	retval = -ENOMEM;
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	/* Ok, finally just insert the thing.. */
+	entry = pfn_pte(pfn, vma->vm_page_prot);
+	set_pte_at(mm, addr, pte, entry);
+	update_mmu_cache(vma, addr, entry);
+
+	retval = 0;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+
+out:
+	return retval;
+}
+EXPORT_SYMBOL(vm_insert_pfn);
+
 /*
  * maps a range of physical memory into the requested pages. the old
  * mappings are removed. any references to nonexistent pages results


* [patch 5/5] mm: merge nopfn with fault handler
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (3 preceding siblings ...)
  2006-10-10 14:22 ` [patch 4/5] mm: add vm_insert_pfn helper Nick Piggin
@ 2006-10-10 14:22 ` Nick Piggin
  2006-10-10 14:26 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
  2006-10-10 14:33 ` Christoph Hellwig
  6 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:22 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel, Nick Piggin

Remove ->nopfn and reimplement the only existing handler using ->fault

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/drivers/char/mspec.c
===================================================================
--- linux-2.6.orig/drivers/char/mspec.c
+++ linux-2.6/drivers/char/mspec.c
@@ -178,24 +178,25 @@ mspec_close(struct vm_area_struct *vma)
 
 
 /*
- * mspec_nopfn
+ * mspec_fault
  *
  * Creates a mspec page and maps it to user space.
  */
-static unsigned long
-mspec_nopfn(struct vm_area_struct *vma, unsigned long address)
+static struct page *
+mspec_fault(struct vm_area_struct *vma, struct fault_data *fdata)
 {
 	unsigned long paddr, maddr;
 	unsigned long pfn;
-	int index;
-	struct vma_data *vdata = vma->vm_private_data;
+	int index = fdata->pgoff;
+	struct vma_data *vdata = fdata->vma->vm_private_data;
 
-	index = (address - vma->vm_start) >> PAGE_SHIFT;
 	maddr = (volatile unsigned long) vdata->maddr[index];
 	if (maddr == 0) {
 		maddr = uncached_alloc_page(numa_node_id());
-		if (maddr == 0)
-			return NOPFN_OOM;
+		if (maddr == 0) {
+			fdata->type = VM_FAULT_OOM;
+			return NULL;
+		}
 
 		spin_lock(&vdata->lock);
 		if (vdata->maddr[index] == 0) {
@@ -215,13 +216,21 @@ mspec_nopfn(struct vm_area_struct *vma, 
 
 	pfn = paddr >> PAGE_SHIFT;
 
-	return pfn;
+	fdata->type = VM_FAULT_MINOR;
+	/*
+	 * vm_insert_pfn can fail with -EBUSY, but in that case it will
+	 * be because another thread has installed the pte first, so it
+	 * is no problem.
+	 */
+	vm_insert_pfn(fdata->vma, fdata->address, pfn);
+
+	return NULL;
 }
 
 static struct vm_operations_struct mspec_vm_ops = {
 	.open = mspec_open,
 	.close = mspec_close,
-	.nopfn = mspec_nopfn
+	.fault = mspec_fault,
 };
 
 /*
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -223,7 +223,6 @@ struct vm_operations_struct {
 	void (*close)(struct vm_area_struct * area);
 	struct page * (*fault)(struct vm_area_struct *vma, struct fault_data * fdata);
 	struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
-	unsigned long (*nopfn)(struct vm_area_struct * area, unsigned long address);
 	int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
 
 	/* notification that a previously read-only page is about to become
@@ -635,12 +634,6 @@ static inline int page_mapped(struct pag
 #define NOPAGE_OOM	((struct page *) (-1))
 
 /*
- * Error return values for the *_nopfn functions
- */
-#define NOPFN_SIGBUS	((unsigned long) -1)
-#define NOPFN_OOM	((unsigned long) -2)
-
-/*
  * Different kinds of faults, as returned by handle_mm_fault().
  * Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1278,6 +1278,11 @@ EXPORT_SYMBOL(vm_insert_page);
  *
  * This function should only be called from a vm_ops->fault handler, and
  * in that case the handler should return NULL.
+ *
+ * vma cannot be a COW mapping.
+ *
+ * As this is called only for pages that do not currently exist, we
+ * do not need to flush old virtual caches or the TLB.
  */
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 {
@@ -2336,54 +2341,6 @@ static int do_nonlinear_fault(struct mm_
 }
 
 /*
- * do_no_pfn() tries to create a new page mapping for a page without
- * a struct_page backing it
- *
- * As this is called only for pages that do not currently exist, we
- * do not need to flush old virtual caches or the TLB.
- *
- * We enter with non-exclusive mmap_sem (to exclude vma changes,
- * but allow concurrent faults), and pte mapped but not yet locked.
- * We return with mmap_sem still held, but pte unmapped and unlocked.
- *
- * It is expected that the ->nopfn handler always returns the same pfn
- * for a given virtual mapping.
- *
- * Mark this `noinline' to prevent it from bloating the main pagefault code.
- */
-static noinline int do_no_pfn(struct mm_struct *mm, struct vm_area_struct *vma,
-		     unsigned long address, pte_t *page_table, pmd_t *pmd,
-		     int write_access)
-{
-	spinlock_t *ptl;
-	pte_t entry;
-	unsigned long pfn;
-	int ret = VM_FAULT_MINOR;
-
-	pte_unmap(page_table);
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
-
-	pfn = vma->vm_ops->nopfn(vma, address & PAGE_MASK);
-	if (pfn == NOPFN_OOM)
-		return VM_FAULT_OOM;
-	if (pfn == NOPFN_SIGBUS)
-		return VM_FAULT_SIGBUS;
-
-	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-
-	/* Only go through if we didn't race with anybody else... */
-	if (pte_none(*page_table)) {
-		entry = pfn_pte(pfn, vma->vm_page_prot);
-		if (write_access)
-			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-		set_pte_at(mm, address, page_table, entry);
-	}
-	pte_unmap_unlock(page_table, ptl);
-	return ret;
-}
-
-/*
  * Fault of a previously existing named mapping. Repopulate the pte
  * from the encoded file_pte if possible. This enables swappable
  * nonlinear vmas.
@@ -2454,9 +2411,6 @@ static inline int handle_pte_fault(struc
 				if (vma->vm_ops->fault || vma->vm_ops->nopage)
 					return do_linear_fault(mm, vma, address,
 						pte, pmd, write_access, entry);
-				if (unlikely(vma->vm_ops->nopfn))
-					return do_no_pfn(mm, vma, address, pte,
-							 pmd, write_access);
 			}
 			return do_anonymous_page(mm, vma, address,
 						 pte, pmd, write_access);


* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (4 preceding siblings ...)
  2006-10-10 14:22 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
@ 2006-10-10 14:26 ` Nick Piggin
  2006-10-10 14:33 ` Christoph Hellwig
  6 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 14:26 UTC (permalink / raw)
  To: Linux Memory Management, Andrew Morton; +Cc: Linux Kernel

Arh, bad subject: should be 2.6.19-rc1-mm1

On Tue, Oct 10, 2006 at 04:21:32PM +0200, Nick Piggin wrote:
> Has passed an allyesconfig on G5, and booted and stress tested (on ext3
> and tmpfs) on 2x P4 Xeon with my userspace tester (which I will send in
> a reply to this email). (stress tests run with the set_page_dirty_buffers
> fix that is upstream).

Anyway, here is the pagefault vs invalidate/truncate race finder. It
can trigger the bug introduced in patch 1/5 for both linear and nonlinear
mappings. After the patchset, I cannot reproduce.
--

#define _XOPEN_SOURCE 600
#include <sys/time.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#include <stdlib.h>
#include <signal.h>
#include <setjmp.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

#define max(x, y) ((x) > (y) ? (x) : (y))

/* following 4 parameters: (1, 1, 7, 1) gives nonlinear bug */
/* (1, 0, 7, 1) for linear bug */
static int invalidate = 0;
static int nonlinear = 0;
static int faulters = 7;
static int samepage = 1;

#define O_DIRECT	00040000

#define FNAME		"dnp-inv.dat"
static int PAGE_SIZE;

static void error(const char *str)
{
	perror(str);
	exit(EXIT_FAILURE);
}

static void oom(void)
{
	fprintf(stderr, "Out of memory\n");
	exit(EXIT_FAILURE);
}

static sigjmp_buf SIGBUS_env;
static void SIGBUS_handler(int sig)
{
	siglongjmp(SIGBUS_env, 1);
}

static void dnp_child(int nr, int fd)
{
	time_t start;
	int i;
	struct sigaction sa = { .sa_handler = &SIGBUS_handler };
	if (sigemptyset(&sa.sa_mask) == -1)
		error("sigemptyset");
	if (sigaction(SIGBUS, &sa, NULL) == -1)
		error("sigaction");
 
	if (ftruncate(fd, PAGE_SIZE*faulters) == -1)
		error("ftruncate");

	i = 0;
	for (;;) {
		volatile long *lock;
		char *mem;

		if (i > 100) {
			i = 0;
			printf(".");
			fflush(stdout);
		}

		start = time(NULL);

		mem = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
						MAP_SHARED, fd, (samepage && !nonlinear) ? 0 : PAGE_SIZE*nr);
		if (mem == MAP_FAILED)
			error("mmap");

		if (nonlinear) {
			if (remap_file_pages(mem, PAGE_SIZE, 0, samepage ? 0 : faulters-nr, 0) == -1)
				goto next;
		}
		lock = mem;
		lock += nr;

		if (!sigsetjmp(SIGBUS_env, 1)) {
			static int stuck = 0;
			int l;
			*lock = 1;

			for (l = 0; *lock; l++) {
				if (!stuck) {
					time_t delta;
					delta = time(NULL) - start;
					if (delta > 10) {
						stuck = 1;
						fprintf(stderr, "page stuck\n");
						exit(EXIT_FAILURE);
					}
				}
				if (l > 1000)	
					usleep(1);
			}
			if (stuck)
				fprintf(stderr, "page unstuck\n");

			i++;
		}

next:
		if (munmap(mem, PAGE_SIZE) == -1)
			error("munmap");
	}
}

static void trunc_child(int fd)
{
	int k;
	char buf = 0;

	for (k = 0;; k++) {
		int i;
		int err;

		if (k > 1000) {
			k = 0;
			printf("+");
			usleep(1*1000*1000);
			fflush(stdout);
		}

		if (ftruncate(fd, 0) == -1)
			error("ftruncate");
		if (ftruncate(fd, PAGE_SIZE*faulters) == -1)
			error("ftruncate");

		for (i = 0; i < (samepage?1:faulters); i++) {
			time_t start = time(NULL);
			do {
				time_t delta;
				err = pwrite(fd, &buf, 1, PAGE_SIZE*i);

				delta = time(NULL) - start;
				if (delta > 10) {
					fprintf(stderr, "write stuck\n");
					break;
				}
			} while (err == -1 /* && errno == EINTR */);
			if (err == -1)
				error("write");
			if (err != 1)
				fprintf(stderr, "Partial write? %d\n", err), exit(EXIT_FAILURE);
		}

	}
}

static void inv_child(int fd)
{
	int k;
	char *buf;
	int flags;

	if (posix_memalign((void **)&buf, PAGE_SIZE, PAGE_SIZE) != 0)
		oom();

	if (ftruncate(fd, PAGE_SIZE*faulters) == -1)
		error("ftruncate");

	flags = fcntl(fd, F_GETFL);
	if (flags == -1)
		error("fcntl");
	if (fcntl(fd, F_SETFL, flags|O_DIRECT) == -1)
		error("fcntl");

	memset(buf, 0, PAGE_SIZE);

	for (k = 0;; k++) {
		int i;
		int err;

		if (k > 100) {
			k = 0;
			printf("+");
			usleep(1*1000*1000);
			fflush(stdout);
		}

		for (i = 0; i < (samepage?1:faulters); i++) {
			time_t start = time(NULL);
			do {
				time_t delta;

				err = pwrite(fd, buf, PAGE_SIZE, PAGE_SIZE*i);

				delta = time(NULL) - start;

				if (delta > 10) {
					fprintf(stderr, "write stuck\n");
					break;
				}
			} while (err == -1 /* && errno == EINTR */);
			if (err == -1)
				error("write");
			if (err != PAGE_SIZE)
				fprintf(stderr, "Partial write? %d\n", err), exit(EXIT_FAILURE);
		}
	}
}

int main(void)
{
	int i;
	int fd, err;

	PAGE_SIZE = getpagesize();

	fd = open(FNAME, O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR);
	if (fd == -1)
		error("open");

	for (i = 0; i < faulters; i++) {
		err = fork();
		if (err == -1)
			error("fork");
		if (!err) {
			//nice(20);
			dnp_child(i, fd);
			exit(EXIT_SUCCESS);
		}
	}

#if 1
	if (invalidate)
		inv_child(fd);
	else
		trunc_child(fd);
#endif
	sleep(10);

	if (close(fd) == -1)
		error("close");

	exit(EXIT_SUCCESS);
}
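
To build and run the tester, something like the following should work
(assuming the source above is saved as dnp-inv.c; the four mode parameters
are compile-time statics, so pick a combination such as (1, 1, 7, 1) by
editing the source before building):

```shell
# build the tester
cc -Wall -O2 -o dnp-inv dnp-inv.c

# run it in a directory on the filesystem under test (ext3, tmpfs, ...);
# the parent runs the truncate/invalidate loop while the forked children
# fault pages, printing '.' and '+' as progress markers
./dnp-inv
```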



* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
                   ` (5 preceding siblings ...)
  2006-10-10 14:26 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
@ 2006-10-10 14:33 ` Christoph Hellwig
  2006-10-10 15:01   ` Nick Piggin
  2006-10-10 15:07   ` Arjan van de Ven
  6 siblings, 2 replies; 42+ messages in thread
From: Christoph Hellwig @ 2006-10-10 14:33 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Memory Management, Andrew Morton, Linux Kernel

On Tue, Oct 10, 2006 at 04:21:32PM +0200, Nick Piggin wrote:
> This patchset is against 2.6.19-rc1-mm1 up to
> numa-add-zone_to_nid-function-swap_prefetch.patch (ie. no readahead stuff,
> which causes big rejects and would be much easier to fix in readahead
> patches than here). Other than this feature, the -mm specific stuff is
> pretty simple (mainly straightforward filesystem conversions).
> 
> Changes since last round:
> - trimmed the cc list, no big changes since last time.
> - fix the few buglets preventing it from actually booting
> - reinstate filemap_nopage and filemap_populate, because they're exported
>   symbols even though no longer used in the tree. Schedule for removal.

Just kill them and the whole ->populate methods.  We have a better API that
replaces them 100% with your patch, and they've never been a widespread
API.



* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 14:33 ` Christoph Hellwig
@ 2006-10-10 15:01   ` Nick Piggin
  2006-10-10 16:09     ` Arjan van de Ven
  2006-10-10 15:07   ` Arjan van de Ven
  1 sibling, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-10 15:01 UTC (permalink / raw)
  To: Christoph Hellwig, Linux Memory Management, Andrew Morton, Linux Kernel

On Tue, Oct 10, 2006 at 03:33:42PM +0100, Christoph Hellwig wrote:
> On Tue, Oct 10, 2006 at 04:21:32PM +0200, Nick Piggin wrote:
> > This patchset is against 2.6.19-rc1-mm1 up to
> > numa-add-zone_to_nid-function-swap_prefetch.patch (ie. no readahead stuff,
> > which causes big rejects and would be much easier to fix in readahead
> > patches than here). Other than this feature, the -mm specific stuff is
> > pretty simple (mainly straightforward filesystem conversions).
> > 
> > Changes since last round:
> > - trimmed the cc list, no big changes since last time.
> > - fix the few buglets preventing it from actually booting
> > - reinstate filemap_nopage and filemap_populate, because they're exported
> >   symbols even though no longer used in the tree. Schedule for removal.
> 
> Just kill them and the whole ->populate methods.  We have a better API that
> replaces them 100% with your patch, and they've never been a widespread
> API.

OK, this is what the patch looks like. I'm happy for .nopage to stay
around for a while longer, because it costs us exactly 12 lines of code
in mm/memory.c

Andrew?

--
Remove legacy filemap_nopage and all of the .populate API cruft.

 include/linux/mm.h |    9 --
 mm/filemap.c       |  195 -----------------------------------------------------
 mm/fremap.c        |   71 ++-----------------
 mm/memory.c        |   37 ++--------
 4 files changed, 21 insertions(+), 291 deletions(-)

Signed-off-by: Nick Piggin <npiggin@suse.de>

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -223,8 +223,6 @@ struct vm_operations_struct {
 	void (*close)(struct vm_area_struct * area);
 	struct page * (*fault)(struct vm_area_struct *vma, struct fault_data * fdata);
 	struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
-	int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
-
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
 	int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);
@@ -742,8 +740,6 @@ static inline void unmap_shared_mapping_
 
 extern int vmtruncate(struct inode * inode, loff_t offset);
 extern int vmtruncate_range(struct inode * inode, loff_t offset, loff_t end);
-extern int install_page(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, struct page *page, pgprot_t prot);
-extern int install_file_pte(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, unsigned long pgoff, pgprot_t prot);
 
 #ifdef CONFIG_MMU
 extern int __handle_mm_fault(struct mm_struct *mm,struct vm_area_struct *vma,
@@ -1053,10 +1049,7 @@ extern void truncate_inode_pages_range(s
 				       loff_t lstart, loff_t lend);
 
 /* generic vm_area_ops exported for stackable file systems */
-extern struct page * __deprecated filemap_fault(struct vm_area_struct *, struct fault_data *);
-extern struct page * __deprecated filemap_nopage(struct vm_area_struct *, unsigned long, int *);
-extern int filemap_populate(struct vm_area_struct *, unsigned long,
-		unsigned long, pgprot_t, unsigned long, int);
+extern struct page *filemap_fault(struct vm_area_struct *, struct fault_data *);
 
 /* mm/page-writeback.c */
 int write_one_page(struct page *page, int wait);
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1539,201 +1539,6 @@ page_not_uptodate:
 }
 EXPORT_SYMBOL(filemap_fault);
 
-/*
- * filemap_nopage and filemap_populate are legacy exports that are not used
- * in tree. Scheduled for removal.
- */
-struct page *filemap_nopage(struct vm_area_struct *area,
-				unsigned long address, int *type)
-{
-	struct page *page;
-	struct fault_data fdata;
-	fdata.address = address;
-	fdata.pgoff = ((address - area->vm_start) >> PAGE_CACHE_SHIFT)
-			+ area->vm_pgoff;
-	fdata.flags = 0;
-
-	page = filemap_fault(area, &fdata);
-	if (type)
-		*type = fdata.type;
-
-	return page;
-}
-EXPORT_SYMBOL(filemap_nopage);
-
-static struct page * filemap_getpage(struct file *file, unsigned long pgoff,
-					int nonblock)
-{
-	struct address_space *mapping = file->f_mapping;
-	struct page *page;
-	int error;
-
-	/*
-	 * Do we have something in the page cache already?
-	 */
-retry_find:
-	page = find_get_page(mapping, pgoff);
-	if (!page) {
-		if (nonblock)
-			return NULL;
-		goto no_cached_page;
-	}
-
-	/*
-	 * Ok, found a page in the page cache, now we need to check
-	 * that it's up-to-date.
-	 */
-	if (!PageUptodate(page)) {
-		if (nonblock) {
-			page_cache_release(page);
-			return NULL;
-		}
-		goto page_not_uptodate;
-	}
-
-success:
-	/*
-	 * Found the page and have a reference on it.
-	 */
-	mark_page_accessed(page);
-	return page;
-
-no_cached_page:
-	error = page_cache_read(file, pgoff);
-
-	/*
-	 * The page we want has now been added to the page cache.
-	 * In the unlikely event that someone removed it in the
-	 * meantime, we'll just come back here and read it again.
-	 */
-	if (error >= 0)
-		goto retry_find;
-
-	/*
-	 * An error return from page_cache_read can result if the
-	 * system is low on memory, or a problem occurs while trying
-	 * to schedule I/O.
-	 */
-	return NULL;
-
-page_not_uptodate:
-	lock_page(page);
-
-	/* Did it get truncated while we waited for it? */
-	if (!page->mapping) {
-		unlock_page(page);
-		goto err;
-	}
-
-	/* Did somebody else get it up-to-date? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
-
-	/*
-	 * Umm, take care of errors if the page isn't up-to-date.
-	 * Try to re-read it _once_. We do this synchronously,
-	 * because there really aren't any performance issues here
-	 * and we need to check for errors.
-	 */
-	lock_page(page);
-
-	/* Somebody truncated the page on us? */
-	if (!page->mapping) {
-		unlock_page(page);
-		goto err;
-	}
-	/* Somebody else successfully read it in? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	ClearPageError(page);
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
-
-	/*
-	 * Things didn't work out. Return zero to tell the
-	 * mm layer so, possibly freeing the page cache page first.
-	 */
-err:
-	page_cache_release(page);
-
-	return NULL;
-}
-
-int filemap_populate(struct vm_area_struct *vma, unsigned long addr,
-		unsigned long len, pgprot_t prot, unsigned long pgoff,
-		int nonblock)
-{
-	struct file *file = vma->vm_file;
-	struct address_space *mapping = file->f_mapping;
-	struct inode *inode = mapping->host;
-	unsigned long size;
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int err;
-
-	if (!nonblock)
-		force_page_cache_readahead(mapping, vma->vm_file,
-					pgoff, len >> PAGE_CACHE_SHIFT);
-
-repeat:
-	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (pgoff + (len >> PAGE_CACHE_SHIFT) > size)
-		return -EINVAL;
-
-	page = filemap_getpage(file, pgoff, nonblock);
-
-	/* XXX: This is wrong, a filesystem I/O error may have happened. Fix that as
-	 * done in shmem_populate calling shmem_getpage */
-	if (!page && !nonblock)
-		return -ENOMEM;
-
-	if (page) {
-		err = install_page(mm, vma, addr, page, prot);
-		if (err) {
-			page_cache_release(page);
-			return err;
-		}
-	} else if (vma->vm_flags & VM_NONLINEAR) {
-		/* No page was found just because we can't read it in now (being
-		 * here implies nonblock != 0), but the page may exist, so set
-		 * the PTE to fault it in later. */
-		err = install_file_pte(mm, vma, addr, pgoff, prot);
-		if (err)
-			return err;
-	}
-
-	len -= PAGE_SIZE;
-	addr += PAGE_SIZE;
-	pgoff++;
-	if (len)
-		goto repeat;
-
-	return 0;
-}
-EXPORT_SYMBOL(filemap_populate);
-
 struct vm_operations_struct generic_file_vm_ops = {
 	.fault		= filemap_fault,
 };
Index: linux-2.6/mm/fremap.c
===================================================================
--- linux-2.6.orig/mm/fremap.c
+++ linux-2.6/mm/fremap.c
@@ -45,58 +45,10 @@ static int zap_pte(struct mm_struct *mm,
 }
 
 /*
- * Install a file page to a given virtual memory address, release any
- * previously existing mapping.
- */
-int install_page(struct mm_struct *mm, struct vm_area_struct *vma,
-		unsigned long addr, struct page *page, pgprot_t prot)
-{
-	struct inode *inode;
-	pgoff_t size;
-	int err = -ENOMEM;
-	pte_t *pte;
-	pte_t pte_val;
-	spinlock_t *ptl;
-
-	pte = get_locked_pte(mm, addr, &ptl);
-	if (!pte)
-		goto out;
-
-	/*
-	 * This page may have been truncated. Tell the
-	 * caller about it.
-	 */
-	err = -EINVAL;
-	inode = vma->vm_file->f_mapping->host;
-	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	if (!page->mapping || page->index >= size)
-		goto unlock;
-	err = -ENOMEM;
-	if (page_mapcount(page) > INT_MAX/2)
-		goto unlock;
-
-	if (pte_none(*pte) || !zap_pte(mm, vma, addr, pte))
-		inc_mm_counter(mm, file_rss);
-
-	flush_icache_page(vma, page);
-	pte_val = mk_pte(page, prot);
-	set_pte_at(mm, addr, pte, pte_val);
-	page_add_file_rmap(page);
-	update_mmu_cache(vma, addr, pte_val);
-	lazy_mmu_prot_update(pte_val);
-	err = 0;
-unlock:
-	pte_unmap_unlock(pte, ptl);
-out:
-	return err;
-}
-EXPORT_SYMBOL(install_page);
-
-/*
  * Install a file pte to a given virtual memory address, release any
  * previously existing mapping.
  */
-int install_file_pte(struct mm_struct *mm, struct vm_area_struct *vma,
+static int install_file_pte(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long addr, unsigned long pgoff, pgprot_t prot)
 {
 	int err = -ENOMEM;
@@ -210,8 +162,7 @@ asmlinkage long sys_remap_file_pages(uns
 	if (vma->vm_private_data && !(vma->vm_flags & VM_NONLINEAR))
 		goto out;
 
-	if ((!vma->vm_ops || !vma->vm_ops->populate) &&
-					!(vma->vm_flags & VM_CAN_NONLINEAR))
+	if (!(vma->vm_flags & VM_CAN_NONLINEAR))
 		goto out;
 
 	if (end <= start || start < vma->vm_start || end > vma->vm_end)
@@ -241,18 +192,14 @@ asmlinkage long sys_remap_file_pages(uns
 		spin_unlock(&mapping->i_mmap_lock);
 	}
 
-	if (vma->vm_flags & VM_CAN_NONLINEAR) {
-		err = populate_range(mm, vma, start, size, pgoff);
-		if (!err && !(flags & MAP_NONBLOCK)) {
-			if (unlikely(has_write_lock)) {
-				downgrade_write(&mm->mmap_sem);
-				has_write_lock = 0;
-			}
-			make_pages_present(start, start+size);
+	err = populate_range(mm, vma, start, size, pgoff);
+	if (!err && !(flags & MAP_NONBLOCK)) {
+		if (unlikely(has_write_lock)) {
+			downgrade_write(&mm->mmap_sem);
+			has_write_lock = 0;
 		}
-	} else
-		err = vma->vm_ops->populate(vma, start, size, vma->vm_page_prot,
-					    	pgoff, flags & MAP_NONBLOCK);
+		make_pages_present(start, start+size);
+	}
 
 	/*
 	 * We can't clear VM_NONLINEAR because we'd have to do
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -2327,18 +2327,10 @@ static int do_linear_fault(struct mm_str
 			- vma->vm_start) >> PAGE_CACHE_SHIFT) + vma->vm_pgoff;
 	unsigned int flags = (write_access ? FAULT_FLAG_WRITE : 0);
 
-	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff,
+							flags, orig_pte);
 }
 
-static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
-		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access, pgoff_t pgoff, pte_t orig_pte)
-{
-	unsigned int flags = FAULT_FLAG_NONLINEAR |
-				(write_access ? FAULT_FLAG_WRITE : 0);
-
-	return __do_fault(mm, vma, address, page_table, pmd, pgoff, flags, orig_pte);
-}
 
 /*
  * Fault of a previously existing named mapping. Repopulate the pte
@@ -2349,17 +2341,19 @@ static int do_nonlinear_fault(struct mm_
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
  */
-static int do_file_page(struct mm_struct *mm, struct vm_area_struct *vma,
+static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
 		int write_access, pte_t orig_pte)
 {
+	unsigned int flags = FAULT_FLAG_NONLINEAR |
+				(write_access ? FAULT_FLAG_WRITE : 0);
 	pgoff_t pgoff;
-	int err;
 
 	if (!pte_unmap_same(mm, pmd, page_table, orig_pte))
 		return VM_FAULT_MINOR;
 
-	if (unlikely(!(vma->vm_flags & VM_NONLINEAR))) {
+	if (unlikely(!(vma->vm_flags & VM_NONLINEAR) ||
+			!(vma->vm_flags & VM_CAN_NONLINEAR))) {
 		/*
 		 * Page table corrupted: show pte and kill process.
 		 */
@@ -2369,18 +2363,8 @@ static int do_file_page(struct mm_struct
 
 	pgoff = pte_to_pgoff(orig_pte);
 
-	if (vma->vm_ops && vma->vm_ops->fault)
-		return do_nonlinear_fault(mm, vma, address, page_table, pmd,
-					write_access, pgoff, orig_pte);
-
-	/* We can then assume vm->vm_ops && vma->vm_ops->populate */
-	err = vma->vm_ops->populate(vma, address & PAGE_MASK, PAGE_SIZE,
-					vma->vm_page_prot, pgoff, 0);
-	if (err == -ENOMEM)
-		return VM_FAULT_OOM;
-	if (err)
-		return VM_FAULT_SIGBUS;
-	return VM_FAULT_MAJOR;
+	return __do_fault(mm, vma, address, page_table, pmd, pgoff,
+							flags, orig_pte);
 }
 
 /*
@@ -2416,7 +2400,7 @@ static inline int handle_pte_fault(struc
 						 pte, pmd, write_access);
 		}
 		if (pte_file(entry))
-			return do_file_page(mm, vma, address,
+			return do_nonlinear_fault(mm, vma, address,
 					pte, pmd, write_access, entry);
 		return do_swap_page(mm, vma, address,
 					pte, pmd, write_access, entry);
Index: linux-2.6/Documentation/feature-removal-schedule.txt
===================================================================
--- linux-2.6.orig/Documentation/feature-removal-schedule.txt
+++ linux-2.6/Documentation/feature-removal-schedule.txt
@@ -203,26 +203,8 @@ Who:	Nick Piggin <npiggin@suse.de>
 
 ---------------------------
 
-What:	filemap_nopage, filemap_populate
-When:	February 2007
-Why:	These legacy interfaces no longer have any callers in the kernel and
-	any functionality provided can be provided with filemap_fault. The
-	removal schedule is short because they are a big maintainence burden
-	and have some bugs.
-Who:	Nick Piggin <npiggin@suse.de>
-
----------------------------
-
-What:	vm_ops.populate, install_page
-When:	February 2007
-Why:	These legacy interfaces no longer have any callers in the kernel and
-	any functionality provided can be provided with vm_ops.fault.
-Who:	Nick Piggin <npiggin@suse.de>
-
----------------------------
-
 What:	vm_ops.nopage
-When:	October 2008, provided in-kernel callers have been converted
+When:	October 2007, provided in-kernel callers have been converted
 Why:	This interface is replaced by vm_ops.fault, but it has been around
 	forever, is used by a lot of drivers, and doesn't cost much to
 	maintain.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 14:33 ` Christoph Hellwig
  2006-10-10 15:01   ` Nick Piggin
@ 2006-10-10 15:07   ` Arjan van de Ven
  1 sibling, 0 replies; 42+ messages in thread
From: Arjan van de Ven @ 2006-10-10 15:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Nick Piggin, Linux Memory Management, Andrew Morton, Linux Kernel

On Tue, 2006-10-10 at 15:33 +0100, Christoph Hellwig wrote:
> On Tue, Oct 10, 2006 at 04:21:32PM +0200, Nick Piggin wrote:
> > This patchset is against 2.6.19-rc1-mm1 up to
> > numa-add-zone_to_nid-function-swap_prefetch.patch (ie. no readahead stuff,
> > which causes big rejects and would be much easier to fix in readahead
> > patches than here). Other than this feature, the -mm specific stuff is
> > pretty simple (mainly straightforward filesystem conversions).
> > 
> > Changes since last round:
> > - trimmed the cc list, no big changes since last time.
> > - fix the few buglets preventing it from actually booting
> > - reinstate filemap_nopage and filemap_populate, because they're exported
> >   symbols even though no longer used in the tree. Schedule for removal.
> 
> Just kill them and the whole ->populate methods.  We have a better API that
> replaces them 100% with your patch, and they've never been a widespread
> API.

concur; just nuke the parts that have become unused right away. They're
not "removed" as such, but "replaced by better"




* Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 15:01   ` Nick Piggin
@ 2006-10-10 16:09     ` Arjan van de Ven
  2006-10-11  0:46       ` SPAM: " Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Arjan van de Ven @ 2006-10-10 16:09 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Christoph Hellwig, Linux Memory Management, Andrew Morton, Linux Kernel

>  What:	vm_ops.nopage
> -When:	October 2008, provided in-kernel callers have been converted
> +When:	October 2007, provided in-kernel callers have been converted
>  Why:	This interface is replaced by vm_ops.fault, but it has been around
>  	forever, is used by a lot of drivers, and doesn't cost much to
>  	maintain.

but a year is a really long time; 6 months would be a lot more
reasonable..
(it's not as if most external modules will switch until it's really
gone.. more notice isn't really going to help that at all; at least make
the kernel printk once on the first use of this so that they notice!)




* Re: SPAM: Re: [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers
  2006-10-10 16:09     ` Arjan van de Ven
@ 2006-10-11  0:46       ` Nick Piggin
  0 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-11  0:46 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Christoph Hellwig, Linux Memory Management, Andrew Morton, Linux Kernel

On Tue, Oct 10, 2006 at 06:09:06PM +0200, Arjan van de Ven wrote:
> >  What:	vm_ops.nopage
> > -When:	October 2008, provided in-kernel callers have been converted
> > +When:	October 2007, provided in-kernel callers have been converted
> >  Why:	This interface is replaced by vm_ops.fault, but it has been around
> >  	forever, is used by a lot of drivers, and doesn't cost much to
> >  	maintain.
> 
> but a year is a really long time; 6 months would be a lot more
> reasonable..
> (it's not as if most external modules will switch until it's really
> gone.. more notice isn't really going to help that at all; at least make
> the kernel printk once on the first use of this so that they notice!)

I agree with that. But the printk can't go in until all the in-tree
users are converted. I will get around to doing that once the
interface is firmer.

As for timeframe, I don't have any strong feelings, but 6 months might
only be 1 kernel release, and we may not have got around to putting the
printk in yet ;)



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-10 14:21 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
@ 2006-10-11  4:38   ` Andrew Morton
  2006-10-11  5:39     ` Nick Piggin
  2006-10-11  5:13   ` Andrew Morton
  1 sibling, 1 reply; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  4:38 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Memory Management, Linux Kernel

On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
Nick Piggin <npiggin@suse.de> wrote:

> Fix the race between invalidate_inode_pages and do_no_page.
> 
> Andrea Arcangeli identified a subtle race between invalidation of
> pages from pagecache with userspace mappings, and do_no_page.
> 
> The issue is that invalidation has to shoot down all mappings to the
> page, before it can be discarded from the pagecache. Between shooting
> down ptes to a particular page, and actually dropping the struct page
> from the pagecache, do_no_page from any process might fault on that
> page and establish a new mapping to the page just before it gets
> discarded from the pagecache.
> 
> The most common case where such invalidation is used is in file
> truncation. This case was catered for by doing a sort of open-coded
> seqlock between the file's i_size, and its truncate_count.
> 
> Truncation will decrease i_size, then increment truncate_count before
> unmapping userspace pages; do_no_page will read truncate_count, then
> find the page if it is within i_size, and then check truncate_count
> under the page table lock and back out and retry if it had
> subsequently been changed (ptl will serialise against unmapping, and
> ensure a potentially updated truncate_count is actually visible).
> 
> Complexity and documentation issues aside, the locking protocol fails
> in the case where we would like to invalidate pagecache inside i_size.
> do_no_page can come in anytime and filemap_nopage is not aware of the
> invalidation in progress (as it is when it is outside i_size). The
> end result is that dangling (->mapping == NULL) pages that appear to
> be from a particular file may be mapped into userspace with nonsense
> data. Valid mappings to the same place will see a different page.
> 
> Andrea implemented two working fixes, one using a real seqlock,
> another using a page->flags bit. He also proposed using the page lock
> in do_no_page, but that was initially considered too heavyweight.
> However, it is not a global or per-file lock, and the page cacheline
> is modified in do_no_page to increment _count and _mapcount anyway, so
> a further modification should not be a large performance hit.
> Scalability is not an issue.
> 
> This patch implements this latter approach. ->nopage implementations
> return with the page locked if it is possible for their underlying
> file to be invalidated (in that case, they must set a special vm_flags
> bit to indicate so). do_no_page only unlocks the page after setting
> up the mapping completely. invalidation is excluded because it holds
> the page lock during invalidation of each page (and ensures that the
> page is not mapped while holding the lock).
> 
> This allows significant simplifications in do_no_page.
> 

The (unchangelogged) changes to filemap_nopage() appear to have switched
the try-the-read-twice logic into try-it-forever logic.  I think it'll hang
if there's a bad sector?


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-10 14:21 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
  2006-10-11  4:38   ` Andrew Morton
@ 2006-10-11  5:13   ` Andrew Morton
  2006-10-11  5:50     ` Nick Piggin
  1 sibling, 1 reply; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  5:13 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Memory Management, Linux Kernel

On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
Nick Piggin <npiggin@suse.de> wrote:

> --- linux-2.6.orig/mm/filemap.c
> +++ linux-2.6/mm/filemap.c
> @@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
>  	unsigned long size, pgoff;
>  	int did_readaround = 0, majmin = VM_FAULT_MINOR;
>  
> +	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
> +
>  	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
>  
> -retry_all:
>  	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
>  	if (pgoff >= size)
>  		goto outside_data_content;
> @@ -1416,7 +1417,7 @@ retry_all:
>  	 * Do we have something in the page cache already?
>  	 */
>  retry_find:
> -	page = find_get_page(mapping, pgoff);
> +	page = find_lock_page(mapping, pgoff);

Here's a little problem.  Locking the page in the pagefault handler takes
our deadlock while writing from a mmapped copy of the page into the same
page from "extremely hard to hit" to "super-easy to hit".  Try running
write-deadlock-demo.c from
http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz

It conveniently deadlocks while holding mmap_sem, so `ps' gets stuck too.

So this whole idea of locking the page in the fault handler is off the
table until we fix that deadlock for real.  Coincidentally I started coding
a fix for that a couple of weeks ago, but spent too much time with my nose
in other people's crap to get around to writing my own crap.

The basic idea is

- revert the recent changes to the core write() code (the ones which
  killed writev() performance, especially on NFS overwrites).

- clean some stuff up

- modify the core of write() so that instead of doing copy_from_user(),
  we do inc_preempt_count();copy_from_user_inatomic().  So we never enter
  the pagefault handler while holding the lock on the pagecache page.

  If the fault happens, we run commit_write() on however much stuff we
  managed to copy and then go back and try to fault the target page back in
  again.  Repeat for ten times then give up.

  It gets tricky because it means that we'll need to go back to zeroing
  out the uncopied part of the pagecache page before
  commit_write+unlock_page().  This will resurrect the recently-fixed
  problem where userspace can fleetingly see a bunch of zeroes in pagecache
  where it expected to see either the old data or the new data.

  But I don't think that problem was terribly serious, and we can improve
  the situation quite a lot by not doing that zeroing if the page is
  already up-to-date.

Anyway, if you're feeling up to it I'll document the patches I have and hand
them over - they're not making much progress here.



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  4:38   ` Andrew Morton
@ 2006-10-11  5:39     ` Nick Piggin
  2006-10-11  6:00       ` Andrew Morton
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-11  5:39 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Nick Piggin, Linux Memory Management, Linux Kernel

Andrew Morton wrote:

>On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
>Nick Piggin <npiggin@suse.de> wrote:
>
>
>>Fix the race between invalidate_inode_pages and do_no_page.
>>
>>Andrea Arcangeli identified a subtle race between invalidation of
>>pages from pagecache with userspace mappings, and do_no_page.
>>
>>The issue is that invalidation has to shoot down all mappings to the
>>page, before it can be discarded from the pagecache. Between shooting
>>down ptes to a particular page, and actually dropping the struct page
>>from the pagecache, do_no_page from any process might fault on that
>>page and establish a new mapping to the page just before it gets
>>discarded from the pagecache.
>>
>>The most common case where such invalidation is used is in file
>>truncation. This case was catered for by doing a sort of open-coded
>>seqlock between the file's i_size, and its truncate_count.
>>
>>Truncation will decrease i_size, then increment truncate_count before
>>unmapping userspace pages; do_no_page will read truncate_count, then
>>find the page if it is within i_size, and then check truncate_count
>>under the page table lock and back out and retry if it had
>>subsequently been changed (ptl will serialise against unmapping, and
>>ensure a potentially updated truncate_count is actually visible).
>>
>>Complexity and documentation issues aside, the locking protocol fails
>>in the case where we would like to invalidate pagecache inside i_size.
>>do_no_page can come in anytime and filemap_nopage is not aware of the
>>invalidation in progress (as it is when it is outside i_size). The
>>end result is that dangling (->mapping == NULL) pages that appear to
>>be from a particular file may be mapped into userspace with nonsense
>>data. Valid mappings to the same place will see a different page.
>>
>>Andrea implemented two working fixes, one using a real seqlock,
>>another using a page->flags bit. He also proposed using the page lock
>>in do_no_page, but that was initially considered too heavyweight.
>>However, it is not a global or per-file lock, and the page cacheline
>>is modified in do_no_page to increment _count and _mapcount anyway, so
>>a further modification should not be a large performance hit.
>>Scalability is not an issue.
>>
>>This patch implements this latter approach. ->nopage implementations
>>return with the page locked if it is possible for their underlying
>>file to be invalidated (in that case, they must set a special vm_flags
>>bit to indicate so). do_no_page only unlocks the page after setting
>>up the mapping completely. invalidation is excluded because it holds
>>the page lock during invalidation of each page (and ensures that the
>>page is not mapped while holding the lock).
>>
>>This allows significant simplifications in do_no_page.
>>
>>
>
>The (unchangelogged) changes to filemap_nopage() appear to have switched
>the try-the-read-twice logic into try-it-forever logic.  I think it'll hang
>if there's a bad sector?
>

It should fall out if there is an error. I think. I'll go through it again.

Some of the changes simply come about due to find_lock_page meaning that we
never see an !uptodate page unless it is an error, so some of that stuff can
come out.

But I see that it does read twice. Do you want that behaviour retained? It
seems like at this level it would be logical to read it once and let lower
layers take care of any retries?



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  5:13   ` Andrew Morton
@ 2006-10-11  5:50     ` Nick Piggin
  2006-10-11  6:10       ` Andrew Morton
                         ` (3 more replies)
  0 siblings, 4 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-11  5:50 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Nick Piggin, Linux Memory Management, Linux Kernel

Andrew Morton wrote:

>On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
>Nick Piggin <npiggin@suse.de> wrote:
>
>
>>--- linux-2.6.orig/mm/filemap.c
>>+++ linux-2.6/mm/filemap.c
>>@@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
>> 	unsigned long size, pgoff;
>> 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
>> 
>>+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
>>+
>> 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
>> 
>>-retry_all:
>> 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
>> 	if (pgoff >= size)
>> 		goto outside_data_content;
>>@@ -1416,7 +1417,7 @@ retry_all:
>> 	 * Do we have something in the page cache already?
>> 	 */
>> retry_find:
>>-	page = find_get_page(mapping, pgoff);
>>+	page = find_lock_page(mapping, pgoff);
>>
>
>Here's a little problem.  Locking the page in the pagefault handler takes
>our deadlock while writing from a mmapped copy of the page into the same
>page from "extremely hard to hit" to "super-easy to hit".  Try running
>write-deadlock-demo.c from
>http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz
>
>It conveniently deadlocks while holding mmap_sem, so `ps' gets stuck too.
>
>So this whole idea of locking the page in the fault handler is off the
>table until we fix that deadlock for real.
>

OK. Can it sit in -mm for now, though? Or is this deadlock less theoretical
than it sounds? At any rate, thanks for catching this.

>  Coincidentally I started coding
>a fix for that a couple of weeks ago, but spent too much time with my nose
>in other people's crap to get around to writing my own crap.
>
>The basic idea is
>
>- revert the recent changes to the core write() code (the ones which
>  killed writev() performance, especially on NFS overwrites).
>
>- clean some stuff up
>
>- modify the core of write() so that instead of doing copy_from_user(),
>  we do inc_preempt_count();copy_from_user_inatomic().  So we never enter
>  the pagefault handler while holding the lock on the pagecache page.
>
>  If the fault happens, we run commit_write() on however much stuff we
>  managed to copy and then go back and try to fault the target page back in
>  again.  Repeat for ten times then give up.
>

Without looking at any code, perhaps we could instead run get_user_pages
and copy the memory that way.

We'd still want to try the initial copy_from_user, because the TLB is
quite likely to exist or at least the pte will exist so the low level TLB
refill can reach it - so we don't want to walk the pagetables manually if
we can help it.

At that point, if we end up doing the get_user_pages thing, do we even need
to do the intermediate commit_write()? Or just do the whole copy (the 
partial
copied data is going to be in cache on physically indexed caches anyway, so
it will be very low cost to copy again). And it should be a reasonably
unlikely path... but I'll instrument it.

>  It gets tricky because it means that we'll need to go back to zeroing
>  out the uncopied part of the pagecache page before
>  commit_write+unlock_page().  This will resurrect the recently-fixed
>  problem where userspace can fleetingly see a bunch of zeroes in pagecache
>  where it expected to see either the old data or the new data.
>
>  But I don't think that problem was terribly serious, and we can improve
>  the situation quite a lot by not doing that zeroing if the page is
>  already up-to-date.
>
>Anyway, if you're feeling up to it I'll document the patches I have and hand
>them over - they're not making much progress here.
>

Yeah I'll have a go.



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  5:39     ` Nick Piggin
@ 2006-10-11  6:00       ` Andrew Morton
  2006-10-11  9:21         ` Nick Piggin
  2006-10-11 16:21         ` Linus Torvalds
  0 siblings, 2 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:00 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Linux Memory Management, Linux Kernel, Linus Torvalds

On Wed, 11 Oct 2006 15:39:22 +1000
Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> But I see that it does read twice. Do you want that behaviour retained? It
> seems like at this level it would be logical to read it once and let lower
> layers take care of any retries?

argh.  Linus has good-sounding reasons for retrying the pagefault-path's
read a single time, but I forget what they are.  Something to do with
networked filesystems?  (adds cc)


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  5:50     ` Nick Piggin
@ 2006-10-11  6:10       ` Andrew Morton
  2006-10-11  6:17       ` [patch 1/6] revert "generic_file_buffered_write(): handle zero length iovec segments" Andrew Morton
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:10 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Nick Piggin, Linux Memory Management, Linux Kernel

On Wed, 11 Oct 2006 15:50:11 +1000
Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> Andrew Morton wrote:
> 
> >On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
> >Nick Piggin <npiggin@suse.de> wrote:
> >
> >
> >>--- linux-2.6.orig/mm/filemap.c
> >>+++ linux-2.6/mm/filemap.c
> >>@@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
> >> 	unsigned long size, pgoff;
> >> 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
> >> 
> >>+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
> >>+
> >> 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
> >> 
> >>-retry_all:
> >> 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
> >> 	if (pgoff >= size)
> >> 		goto outside_data_content;
> >>@@ -1416,7 +1417,7 @@ retry_all:
> >> 	 * Do we have something in the page cache already?
> >> 	 */
> >> retry_find:
> >>-	page = find_get_page(mapping, pgoff);
> >>+	page = find_lock_page(mapping, pgoff);
> >>
> >
> >Here's a little problem.  Locking the page in the pagefault handler takes
> >our deadlock while writing from a mmapped copy of the page into the same
> >page from "extremely hard to hit" to "super-easy to hit".  Try running
> >write-deadlock-demo.c from
> >http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz
> >
> >It conveniently deadlocks while holding mmap_sem, so `ps' gets stuck too.
> >
> >So this whole idea of locking the page in the fault handler is off the
> >table until we fix that deadlock for real.
> >
> 
> OK. Can it sit in -mm for now, though?

argh.  It took me two goes to unpickle all the bits and pieces (please
patch things like cachefiles separately, unless you want your stuff to be
merged after that stuff) and now I've gone and deleted it all.

Maybe later?  We do have that infinite-loop-on-EIO to look at as well.

> Or is this deadlock less theoretical
> than it sounds?

I _think_ people have hit it in the wild, due to memory pressure.

But no, it's a silly thing which will only hit when people are running
silly tests under silly amounts of load.

Or if they're trying to kill your computer...

> At any rate, thanks for catching this.
> 
> >  Coincidentally I started coding
> >a fix for that a couple of weeks ago, but spent too much time with my nose
> >in other people's crap to get around to writing my own crap.
> >
> >The basic idea is
> >
> >- revert the recent changes to the core write() code (the ones which
> >  killed writev() performance, especially on NFS overwrites).
> >
> >- clean some stuff up
> >
> >- modify the core of write() so that instead of doing copy_from_user(),
> >  we do inc_preempt_count();copy_from_user_inatomic().  So we never enter
> >  the pagefault handler while holding the lock on the pagecache page.
> >
> >  If the fault happens, we run commit_write() on however much stuff we
> >  managed to copy and then go back and try to fault the target page back in
> >  again.  Repeat for ten times then give up.
> >
> 
> Without looking at any code, perhaps we could instead run get_user_pages
> and copy the memory that way.

That would certainly work, but we've always shied away from doing that
because of the performance implications.

> We'd still want to try the initial copy_from_user, because the TLB is
> quite likely to exist or at least the pte will exist so the low level TLB
> refill can reach it - so we don't want to walk the pagetables manually if
> we can help it.

Yeah, that's an alternative to the fault-it-in-ten-times-then-give-up
approach.

> At that point, if we end up doing the get_user_pages thing, do we even need
> to do the intermediate commit_write()?

Yes, we will.  get_user_pages() will run the pagefault handler, which will
lock the page, so we're back to square one.

> Or just do the whole copy (the partial copied data is going to be in
> cache on physically indexed caches anyway, so it will be very low cost
> to copy again). And it should be a reasonably unlikely path... but I'll
> instrument it.

I'm not sure what you're suggesting here.

> >  It gets tricky because it means that we'll need to go back to zeroing
> >  out the uncopied part of the pagecache page before
> >  commit_write+unlock_page().  This will resurrect the recently-fixed
> >  problem where userspace can fleetingly see a bunch of zeroes in pagecache
> >  where it expected to see either the old data or the new data.
> >
> >  But I don't think that problem was terribly serious, and we can improve
> >  the situation quite a lot by not doing that zeroing if the page is
> >  already up-to-date.
> >
> >Anyway, if you're feeling up to it I'll document the patches I have and hand
> >them over - they're not making much progress here.
> >
> 
> Yeah I'll have a go.

Thanks.


* [patch 1/6] revert "generic_file_buffered_write(): handle zero length iovec segments"
  2006-10-11  5:50     ` Nick Piggin
  2006-10-11  6:10       ` Andrew Morton
@ 2006-10-11  6:17       ` Andrew Morton
       [not found]       ` <20061010231150.fb9e30f5.akpm@osdl.org>
  2006-10-21  1:53       ` [patch 2/5] mm: fault vs invalidate/truncate race fix Benjamin Herrenschmidt
  3 siblings, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:17 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

Revert 81b0c8713385ce1b1b9058e916edcf9561ad76d6.

This was a bugfix against 6527c2bdf1f833cc18e8f42bd97973d583e4aa83, which we
also revert.


Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |    9 +--------
 mm/filemap.h |    4 ++--
 2 files changed, 3 insertions(+), 10 deletions(-)

diff -puN mm/filemap.c~revert-generic_file_buffered_write-handle-zero-length-iovec-segments mm/filemap.c
--- a/mm/filemap.c~revert-generic_file_buffered_write-handle-zero-length-iovec-segments
+++ a/mm/filemap.c
@@ -2121,12 +2121,6 @@ generic_file_buffered_write(struct kiocb
 			break;
 		}
 
-		if (unlikely(bytes == 0)) {
-			status = 0;
-			copied = 0;
-			goto zero_length_segment;
-		}
-
 		status = a_ops->prepare_write(file, page, offset, offset+bytes);
 		if (unlikely(status)) {
 			loff_t isize = i_size_read(inode);
@@ -2156,8 +2150,7 @@ generic_file_buffered_write(struct kiocb
 			page_cache_release(page);
 			continue;
 		}
-zero_length_segment:
-		if (likely(copied >= 0)) {
+		if (likely(copied > 0)) {
 			if (!status)
 				status = copied;
 
diff -puN mm/filemap.h~revert-generic_file_buffered_write-handle-zero-length-iovec-segments mm/filemap.h
--- a/mm/filemap.h~revert-generic_file_buffered_write-handle-zero-length-iovec-segments
+++ a/mm/filemap.h
@@ -87,7 +87,7 @@ filemap_set_next_iovec(const struct iove
 	const struct iovec *iov = *iovp;
 	size_t base = *basep;
 
-	do {
+	while (bytes) {
 		int copy = min(bytes, iov->iov_len - base);
 
 		bytes -= copy;
@@ -96,7 +96,7 @@ filemap_set_next_iovec(const struct iove
 			iov++;
 			base = 0;
 		}
-	} while (bytes);
+	}
 	*iovp = iov;
 	*basep = base;
 }
_



* [patch 2/6] revert "generic_file_buffered_write(): deadlock on vectored write"
       [not found]       ` <20061010231150.fb9e30f5.akpm@osdl.org>
@ 2006-10-11  6:17         ` Andrew Morton
       [not found]         ` <20061010231243.bc8b834c.akpm@osdl.org>
  1 sibling, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:17 UTC (permalink / raw)
  To: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

Revert 6527c2bdf1f833cc18e8f42bd97973d583e4aa83

This patch fixed the following bug:

  When prefaulting in the pages in generic_file_buffered_write(), we only
  faulted in the pages for the first segment of the iovec.  If the second of
  two successive segments described an mmapping of the page into which we're
  write()ing, and that page is not up-to-date, the fault handler tries to lock
  the already-locked page (to bring it up to date) and deadlocks.

  An exploit for this bug is in writev-deadlock-demo.c, in
  http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz.

  (These demos assume blocksize < PAGE_CACHE_SIZE).

The problem with this fix is that it takes the kernel back to doing a single
prepare_write()/commit_write() per iovec segment.  So in the worst case we'll
run prepare_write+commit_write 1024 times where we previously would have run
it once.

<insert numbers obtained via ext3-tools's writev-speed.c here>

And apparently this change killed NFS overwrite performance, because, I
suppose, it talks to the server for each prepare_write+commit_write.

So just back that patch out - we'll be fixing the deadlock by other means.

Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |   18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff -puN mm/filemap.c~revert-generic_file_buffered_write-deadlock-on-vectored-write mm/filemap.c
--- a/mm/filemap.c~revert-generic_file_buffered_write-deadlock-on-vectored-write
+++ a/mm/filemap.c
@@ -2091,21 +2091,14 @@ generic_file_buffered_write(struct kiocb
 	do {
 		unsigned long index;
 		unsigned long offset;
+		unsigned long maxlen;
 		size_t copied;
 
 		offset = (pos & (PAGE_CACHE_SIZE -1)); /* Within page */
 		index = pos >> PAGE_CACHE_SHIFT;
 		bytes = PAGE_CACHE_SIZE - offset;
-
-		/* Limit the size of the copy to the caller's write size */
-		bytes = min(bytes, count);
-
-		/*
-		 * Limit the size of the copy to that of the current segment,
-		 * because fault_in_pages_readable() doesn't know how to walk
-		 * segments.
-		 */
-		bytes = min(bytes, cur_iov->iov_len - iov_base);
+		if (bytes > count)
+			bytes = count;
 
 		/*
 		 * Bring in the user page that we will copy from _first_.
@@ -2113,7 +2106,10 @@ generic_file_buffered_write(struct kiocb
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
 		 */
-		fault_in_pages_readable(buf, bytes);
+		maxlen = cur_iov->iov_len - iov_base;
+		if (maxlen > bytes)
+			maxlen = bytes;
+		fault_in_pages_readable(buf, maxlen);
 
 		page = __grab_cache_page(mapping,index,&cached_page,&lru_pvec);
 		if (!page) {
_



* [patch 3/6] generic_file_buffered_write() cleanup
       [not found]         ` <20061010231243.bc8b834c.akpm@osdl.org>
@ 2006-10-11  6:17           ` Andrew Morton
       [not found]           ` <20061010231339.a79c1fae.akpm@osdl.org>
  1 sibling, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:17 UTC (permalink / raw)
  To: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |   35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff -puN mm/filemap.c~generic_file_buffered_write-cleanup mm/filemap.c
--- a/mm/filemap.c~generic_file_buffered_write-cleanup
+++ a/mm/filemap.c
@@ -2064,16 +2064,15 @@ generic_file_buffered_write(struct kiocb
 		size_t count, ssize_t written)
 {
 	struct file *file = iocb->ki_filp;
-	struct address_space * mapping = file->f_mapping;
+	struct address_space *mapping = file->f_mapping;
 	const struct address_space_operations *a_ops = mapping->a_ops;
 	struct inode 	*inode = mapping->host;
 	long		status = 0;
 	struct page	*page;
 	struct page	*cached_page = NULL;
-	size_t		bytes;
 	struct pagevec	lru_pvec;
 	const struct iovec *cur_iov = iov; /* current iovec */
-	size_t		iov_base = 0;	   /* offset in the current iovec */
+	size_t		iov_offset = 0;	   /* offset in the current iovec */
 	char __user	*buf;
 
 	pagevec_init(&lru_pvec, 0);
@@ -2084,31 +2083,33 @@ generic_file_buffered_write(struct kiocb
 	if (likely(nr_segs == 1))
 		buf = iov->iov_base + written;
 	else {
-		filemap_set_next_iovec(&cur_iov, &iov_base, written);
-		buf = cur_iov->iov_base + iov_base;
+		filemap_set_next_iovec(&cur_iov, &iov_offset, written);
+		buf = cur_iov->iov_base + iov_offset;
 	}
 
 	do {
-		unsigned long index;
-		unsigned long offset;
-		unsigned long maxlen;
-		size_t copied;
+		pgoff_t index;		/* Pagecache index for current page */
+		unsigned long offset;	/* Offset into pagecache page */
+		unsigned long maxlen;	/* Bytes remaining in current iovec */
+		size_t bytes;		/* Bytes to write to page */
+		size_t copied;		/* Bytes copied from user */
 
-		offset = (pos & (PAGE_CACHE_SIZE -1)); /* Within page */
+		offset = (pos & (PAGE_CACHE_SIZE - 1));
 		index = pos >> PAGE_CACHE_SHIFT;
 		bytes = PAGE_CACHE_SIZE - offset;
 		if (bytes > count)
 			bytes = count;
 
+		maxlen = cur_iov->iov_len - iov_offset;
+		if (maxlen > bytes)
+			maxlen = bytes;
+
 		/*
 		 * Bring in the user page that we will copy from _first_.
 		 * Otherwise there's a nasty deadlock on copying from the
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
 		 */
-		maxlen = cur_iov->iov_len - iov_base;
-		if (maxlen > bytes)
-			maxlen = bytes;
 		fault_in_pages_readable(buf, maxlen);
 
 		page = __grab_cache_page(mapping,index,&cached_page,&lru_pvec);
@@ -2139,7 +2140,7 @@ generic_file_buffered_write(struct kiocb
 							buf, bytes);
 		else
 			copied = filemap_copy_from_user_iovec(page, offset,
-						cur_iov, iov_base, bytes);
+						cur_iov, iov_offset, bytes);
 		flush_dcache_page(page);
 		status = a_ops->commit_write(file, page, offset, offset+bytes);
 		if (status == AOP_TRUNCATED_PAGE) {
@@ -2157,12 +2158,12 @@ generic_file_buffered_write(struct kiocb
 				buf += status;
 				if (unlikely(nr_segs > 1)) {
 					filemap_set_next_iovec(&cur_iov,
-							&iov_base, status);
+							&iov_offset, status);
 					if (count)
 						buf = cur_iov->iov_base +
-							iov_base;
+							iov_offset;
 				} else {
-					iov_base += status;
+					iov_offset += status;
 				}
 			}
 		}
_



* [patch 4/6] generic_file_buffered_write(): fix page prefaulting
       [not found]           ` <20061010231339.a79c1fae.akpm@osdl.org>
@ 2006-10-11  6:18             ` Andrew Morton
       [not found]             ` <20061010231424.db88931f.akpm@osdl.org>
  1 sibling, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:18 UTC (permalink / raw)
  To: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

generic_file_buffered_write() is passing the wrong length arg to
fault_in_pages_readable() (I think - please check).


Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/filemap.c~generic_file_buffered_write-fix-page-prefaulting mm/filemap.c
--- a/mm/filemap.c~generic_file_buffered_write-fix-page-prefaulting
+++ a/mm/filemap.c
@@ -2110,7 +2110,7 @@ generic_file_buffered_write(struct kiocb
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
 		 */
-		fault_in_pages_readable(buf, maxlen);
+		fault_in_pages_readable(buf, bytes);
 
 		page = __grab_cache_page(mapping,index,&cached_page,&lru_pvec);
 		if (!page) {
_



* [patch 5/6] generic_file_buffered_write(): max_len cleanup
       [not found]             ` <20061010231424.db88931f.akpm@osdl.org>
@ 2006-10-11  6:18               ` Andrew Morton
       [not found]               ` <20061010231514.c1da7355.akpm@osdl.org>
  1 sibling, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:18 UTC (permalink / raw)
  To: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

More dirty code.

Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff -puN mm/filemap.c~generic_file_buffered_write-max_len-cleanup mm/filemap.c
--- a/mm/filemap.c~generic_file_buffered_write-max_len-cleanup
+++ a/mm/filemap.c
@@ -2090,7 +2090,6 @@ generic_file_buffered_write(struct kiocb
 	do {
 		pgoff_t index;		/* Pagecache index for current page */
 		unsigned long offset;	/* Offset into pagecache page */
-		unsigned long maxlen;	/* Bytes remaining in current iovec */
 		size_t bytes;		/* Bytes to write to page */
 		size_t copied;		/* Bytes copied from user */
 
@@ -2100,9 +2099,7 @@ generic_file_buffered_write(struct kiocb
 		if (bytes > count)
 			bytes = count;
 
-		maxlen = cur_iov->iov_len - iov_offset;
-		if (maxlen > bytes)
-			maxlen = bytes;
+		bytes = min(cur_iov->iov_len - iov_offset, bytes);
 
 		/*
 		 * Bring in the user page that we will copy from _first_.
_



* [patch 6/6] fix pagecache write deadlocks
       [not found]               ` <20061010231514.c1da7355.akpm@osdl.org>
@ 2006-10-11  6:18                 ` Andrew Morton
  0 siblings, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2006-10-11  6:18 UTC (permalink / raw)
  To: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel

From: Andrew Morton <akpm@osdl.org>

This is half-written and won't work.

The idea is to modify the core write() code so that it won't take a pagefault
while holding a lock on the pagecache page.

- Instead of copy_from_user(), use inc_preempt_count() and
  copy_from_user_inatomic().

- If the copy_from_user_inatomic() hits a pagefault, it'll return a short
  copy.

  - So zero out the remainder of the pagecache page (the uncopied bit).

    - but only if the page is not uptodate.

  - commit_write()

  - unlock_page()

  - adjust various pointers and counters

  - go back and try to fault the page in again, redo the lock_page,
    prepare_write, copy_from_user_inatomic(), etc.

  - After a certain number of retries, someone is being silly: give up.
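
The retry scheme in the list above can be mocked up in userspace.  Everything here is invented for illustration (the fault simulation, the helper names), and the sketch zeroes the uncopied tail unconditionally rather than only for not-uptodate pages:

```c
#include <stddef.h>
#include <string.h>

/* Simulated state of the user's source page: absent until faulted in. */
static int user_page_present;

/* Stand-in for copy_from_user_inatomic(): returns the number of bytes
 * NOT copied; fails entirely while the source page is "absent". */
static size_t copy_inatomic(char *dst, const char *src, size_t len)
{
	if (!user_page_present)
		return len;		/* simulated fault: short copy */
	memcpy(dst, src, len);
	return 0;
}

/* Stand-in for fault_in_pages_readable(): make the source present. */
static void fault_in(const char *src, size_t len)
{
	(void)src; (void)len;
	user_page_present = 1;
}

#define MAX_RETRIES 3

/* The loop from the list above, in miniature: copy atomically, zero the
 * uncopied tail (here: unconditionally), "commit", then fault the user
 * page in and go round again; give up after a few attempts. */
static size_t buffered_write(char *page, const char *buf, size_t bytes)
{
	int tries;

	for (tries = 0; tries < MAX_RETRIES; tries++) {
		/* lock_page() + prepare_write() would happen here */
		size_t left = copy_inatomic(page, buf, bytes);

		if (left)
			memset(page + (bytes - left), 0, left);
		/* commit_write() + unlock_page() would happen here */
		if (!left)
			return bytes;
		fault_in(buf, bytes);	/* then redo the copy */
	}
	return 0;			/* someone is being silly: give up */
}
```

The point of the shape: no pagefault can occur between the (simulated) lock and unlock, because the only copy done there is the atomic one.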


Now, the design objective here isn't just to fix the deadlock.  It's to be
able to copy multiple iovec segments into the pagecache page within a single
prepare-write/commit_write pair.  But to do that, we'll need to prefault them.

That could get complex.  Walk across the segments, touching each user page
until we reach the point where we see that this iovec segment doesn't fall
into the target page.

Alternatively, only prefault the *present* iovec segment.  The code as
designed will handle pagefaults against the user's pages quite happily.  But
is it efficient?  Needs thought.
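
A sketch of the first alternative, walking the segments and touching each user page that would fall into the current pagecache page.  Names and types are hypothetical simplifications, and it touches once per segment, ignoring segments that themselves span multiple user pages:

```c
#include <stddef.h>

struct iov { const char *iov_base; size_t iov_len; };

static unsigned long touched;	/* counts simulated prefaults */

/* Stand-in for touching the first byte of a user page. */
static void fault_in(const char *p) { (void)p; touched++; }

/* Walk the segments, prefaulting each one that would fall into the
 * current pagecache page, i.e. until 'page_bytes' is used up. */
static void prefault_segments(const struct iov *iov, unsigned nr,
			      size_t base, size_t page_bytes)
{
	unsigned i;

	for (i = 0; i < nr && page_bytes; i++) {
		size_t off = (i == 0) ? base : 0;
		size_t len = iov[i].iov_len - off;

		if (!len)
			continue;	/* zero-length segment: nothing to touch */
		if (len > page_bytes)
			len = page_bytes;
		fault_in(iov[i].iov_base + off);
		page_bytes -= len;
	}
}
```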

(I think we will end up with quite a bit of dead code as a result of this
exercise - some of the fancy user-copying inlines.  Needs checking when the
dust has settled).


Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/filemap.c |    7 ++--
 mm/filemap.h |   69 ++++++++++++++++++++++++++++++++++---------------
 2 files changed, 53 insertions(+), 23 deletions(-)

diff -puN mm/filemap.c~fix-pagecache-write-deadlocks mm/filemap.c
--- a/mm/filemap.c~fix-pagecache-write-deadlocks
+++ a/mm/filemap.c
@@ -2133,11 +2133,12 @@ generic_file_buffered_write(struct kiocb
 			break;
 		}
 		if (likely(nr_segs == 1))
-			copied = filemap_copy_from_user(page, offset,
+			copied = filemap_copy_from_user_atomic(page, offset,
 							buf, bytes);
 		else
-			copied = filemap_copy_from_user_iovec(page, offset,
-						cur_iov, iov_offset, bytes);
+			copied = filemap_copy_from_user_iovec_atomic(page,
+						offset, cur_iov, iov_offset,
+						bytes);
 		flush_dcache_page(page);
 		status = a_ops->commit_write(file, page, offset, offset+bytes);
 		if (status == AOP_TRUNCATED_PAGE) {
diff -puN mm/filemap.h~fix-pagecache-write-deadlocks mm/filemap.h
--- a/mm/filemap.h~fix-pagecache-write-deadlocks
+++ a/mm/filemap.h
@@ -22,19 +22,19 @@ __filemap_copy_from_user_iovec_inatomic(
 
 /*
  * Copy as much as we can into the page and return the number of bytes which
- * were sucessfully copied.  If a fault is encountered then clear the page
- * out to (offset+bytes) and return the number of bytes which were copied.
+ * were sucessfully copied.  If a fault is encountered then return the number of
+ * bytes which were copied.
  *
- * NOTE: For this to work reliably we really want copy_from_user_inatomic_nocache
- * to *NOT* zero any tail of the buffer that it failed to copy.  If it does,
- * and if the following non-atomic copy succeeds, then there is a small window
- * where the target page contains neither the data before the write, nor the
- * data after the write (it contains zero).  A read at this time will see
- * data that is inconsistent with any ordering of the read and the write.
- * (This has been detected in practice).
+ * NOTE: For this to work reliably we really want
+ * copy_from_user_inatomic_nocache to *NOT* zero any tail of the buffer that it
+ * failed to copy.  If it does, and if the following non-atomic copy succeeds,
+ * then there is a small window where the target page contains neither the data
+ * before the write, nor the data after the write (it contains zero).  A read at
+ * this time will see data that is inconsistent with any ordering of the read
+ * and the write.  (This has been detected in practice).
  */
 static inline size_t
-filemap_copy_from_user(struct page *page, unsigned long offset,
+filemap_copy_from_user_atomic(struct page *page, unsigned long offset,
 			const char __user *buf, unsigned bytes)
 {
 	char *kaddr;
@@ -53,14 +53,28 @@ filemap_copy_from_user(struct page *page
 	return bytes - left;
 }
 
+static inline size_t
+filemap_copy_from_user_nonatomic(struct page *page, unsigned long offset,
+			const char __user *buf, unsigned bytes)
+{
+	int left;
+	char *kaddr;
+
+	kaddr = kmap(page);
+	left = __copy_from_user_nocache(kaddr + offset, buf, bytes);
+	kunmap(page);
+	return bytes - left;
+}
+
 /*
- * This has the same sideeffects and return value as filemap_copy_from_user().
+ * This has the same sideeffects and return value as
+ * filemap_copy_from_user_atomic().
  * The difference is that on a fault we need to memset the remainder of the
  * page (out to offset+bytes), to emulate filemap_copy_from_user()'s
  * single-segment behaviour.
  */
 static inline size_t
-filemap_copy_from_user_iovec(struct page *page, unsigned long offset,
+filemap_copy_from_user_iovec_atomic(struct page *page, unsigned long offset,
 			const struct iovec *iov, size_t base, size_t bytes)
 {
 	char *kaddr;
@@ -70,14 +84,29 @@ filemap_copy_from_user_iovec(struct page
 	copied = __filemap_copy_from_user_iovec_inatomic(kaddr + offset, iov,
 							 base, bytes);
 	kunmap_atomic(kaddr, KM_USER0);
-	if (copied != bytes) {
-		kaddr = kmap(page);
-		copied = __filemap_copy_from_user_iovec_inatomic(kaddr + offset, iov,
-								 base, bytes);
-		if (bytes - copied)
-			memset(kaddr + offset + copied, 0, bytes - copied);
-		kunmap(page);
-	}
+	return copied;
+}
+
+/*
+ * This has the same sideeffects and return value as
+ * filemap_copy_from_user_nonatomic().
+ * The difference is that on a fault we need to memset the remainder of the
+ * page (out to offset+bytes), to emulate filemap_copy_from_user_nonatomic()'s
+ * single-segment behaviour.
+ */
+static inline size_t
+filemap_copy_from_user_iovec_nonatomic(struct page *page, unsigned long offset,
+			const struct iovec *iov, size_t base, size_t bytes)
+{
+	char *kaddr;
+	size_t copied;
+
+	kaddr = kmap(page);
+	copied = __filemap_copy_from_user_iovec_inatomic(kaddr + offset, iov,
+							 base, bytes);
+	if (bytes - copied)
+		memset(kaddr + offset + copied, 0, bytes - copied);
+	kunmap(page);
 	return copied;
 }
 
_



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  6:00       ` Andrew Morton
@ 2006-10-11  9:21         ` Nick Piggin
  2006-10-11 16:21         ` Linus Torvalds
  1 sibling, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-11  9:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Nick Piggin, Linux Memory Management, Linux Kernel, Linus Torvalds

On Tue, Oct 10, 2006 at 11:00:42PM -0700, Andrew Morton wrote:
> On Wed, 11 Oct 2006 15:39:22 +1000
> Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> 
> > But I see that it does read twice. Do you want that behaviour retained? It
> > seems like at this level it would be logical to read it once and let lower
> > layers take care of any retries?
> 
> argh.  Linus has good-sounding reasons for retrying the pagefault-path's
> read a single time, but I forget what they are.  Something to do with
> networked filesystems?  (adds cc)

While you're there, can anyone tell me why we want an external
ptracer to be able to access pages that are outside i_size? I
haven't removed the logic of course, but I'm curious about the
history and usage of such a thing.




* Re: [patch 4/5] mm: add vm_insert_pfn helpler
  2006-10-10 14:22 ` [patch 4/5] mm: add vm_insert_pfn helpler Nick Piggin
@ 2006-10-11 10:12   ` Thomas Hellstrom
  2006-10-11 11:24     ` Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Thomas Hellstrom @ 2006-10-11 10:12 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Kernel

Nick Piggin wrote:
> Add a vm_insert_pfn helper, so that ->fault handlers can have nopfn
> functionality by installing their own pte and returning NULL.
> 
> Signed-off-by: Nick Piggin <npiggin@suse.de>
> 
> Index: linux-2.6/include/linux/mm.h
> ===================================================================
> --- linux-2.6.orig/include/linux/mm.h
> +++ linux-2.6/include/linux/mm.h
> @@ -1121,6 +1121,7 @@ unsigned long vmalloc_to_pfn(void *addr)
>  int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
>  			unsigned long pfn, unsigned long size, pgprot_t);
>  int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
> +int vm_insert_pfn(struct vm_area_struct *, unsigned long addr, unsigned long pfn);
>  
>  struct page *follow_page(struct vm_area_struct *, unsigned long address,
>  			unsigned int foll_flags);
> Index: linux-2.6/mm/memory.c
> ===================================================================
> --- linux-2.6.orig/mm/memory.c
> +++ linux-2.6/mm/memory.c
> @@ -1267,6 +1267,50 @@ int vm_insert_page(struct vm_area_struct
>  }
>  EXPORT_SYMBOL(vm_insert_page);
>  
> +/**
> + * vm_insert_pfn - insert single pfn into user vma
> + * @vma: user vma to map to
> + * @addr: target user address of this page
> + * @pfn: source kernel pfn
> + *
> + * Similar to vm_inert_page, this allows drivers to insert individual pages
> + * they've allocated into a user vma. Same comments apply.
> + *
> + * This function should only be called from a vm_ops->fault handler, and
> + * in that case the handler should return NULL.
> + */
> +int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)

Nick, I just realized: would it be possible to have a pgprot_t argument 
to this one, instead of it using vma->vm_pgprot?

The motivation for this (DRM again) is that some architectures (powerpc) 
cannot map the AGP aperture through IO space, but need to remap the 
page from memory with a nocache attribute set. Others need special 
pgprot settings for write-combined mappings.

Now, there's a possibility to change vma->vm_pgprot during the first 
->fault(), but again, we only have the mmap_sem in read mode.

/Thomas






* Re: [patch 4/5] mm: add vm_insert_pfn helpler
  2006-10-11 10:12   ` Thomas Hellstrom
@ 2006-10-11 11:24     ` Nick Piggin
  2006-10-11 21:30       ` Thomas Hellström
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-11 11:24 UTC (permalink / raw)
  To: Thomas Hellstrom; +Cc: Linux Kernel

On Wed, Oct 11, 2006 at 12:12:19PM +0200, Thomas Hellstrom wrote:
> Nick, I just realized: would it be possible to have a pgprot_t argument 
> to this one, instead of it using vma->vm_pgprot?
> 
> The motivation for this (DRM again) is that some architectures (powerpc) 
> cannot map the AGP aperture through IO space, but needs to remap the 
> page from memory with a nocache attribute set. Others need special 
> pgprot settings for write-combined mappings.
> 
> Now, there's a possibility to change vma->vm_pgprot during the first 
> ->fault(), but again, we only have the mmap_sem in read mode.

I don't see a problem with that. It would be nice if vm_pgprot could
be kept in synch with the pte protections, but I guess a crazy
driver should be allowed to do anything it wants ;)




* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  6:00       ` Andrew Morton
  2006-10-11  9:21         ` Nick Piggin
@ 2006-10-11 16:21         ` Linus Torvalds
  2006-10-11 16:57           ` SPAM: " Nick Piggin
  1 sibling, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2006-10-11 16:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Nick Piggin, Nick Piggin, Linux Memory Management, Linux Kernel



On Tue, 10 Oct 2006, Andrew Morton wrote:
>
> On Wed, 11 Oct 2006 15:39:22 +1000
> Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> 
> > But I see that it does read twice. Do you want that behaviour retained? It
> > seems like at this level it would be logical to read it once and let lower
> > layers take care of any retries?
> 
> argh.  Linus has good-sounding reasons for retrying the pagefault-path's
> read a single time, but I forget what they are.  Something to do with
> networked filesystems?  (adds cc)

Indeed. We _have_ to re-try a failed IO that we didn't start ourselves.

The original IO could have been started by a person who didn't have 
permissions to actually carry it out successfully, so if you enter with 
the page locked (because somebody else started the IO), and you wait for 
the page and it's not up-to-date afterwards, you absolutely _have_ to try 
the IO, and can only return a real IO error after your _own_ IO has 
failed.

There is another issue too: even if the page was marked as having an error 
when we entered (and no longer locked - maybe the IO failed last time 
around), we should _still_ re-try. It might be a temporary error that has 
since gone away, and if we don't re-try, we can end up in the totally 
untenable situation where the kernel makes a soft error into a hard one. 

Neither case is acceptable. End result: only things like 
read-ahead should actually honor the "page exists but is not up-to-date" 
as a "don't even try".

		Linus


* Re: SPAM: Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 16:21         ` Linus Torvalds
@ 2006-10-11 16:57           ` Nick Piggin
  2006-10-11 17:11             ` Linus Torvalds
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-11 16:57 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel

On Wed, Oct 11, 2006 at 09:21:16AM -0700, Linus Torvalds wrote:
> 
> 
> On Tue, 10 Oct 2006, Andrew Morton wrote:
> >
> > On Wed, 11 Oct 2006 15:39:22 +1000
> > Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > 
> > > But I see that it does read twice. Do you want that behaviour retained? It
> > > seems like at this level it would be logical to read it once and let lower
> > > layers take care of any retries?
> > 
> > argh.  Linus has good-sounding reasons for retrying the pagefault-path's
> > read a single time, but I forget what they are.  Something to do with
> > networked filesystems?  (adds cc)
> 
> Indeed. We _have_ to re-try a failed IO that we didn't start ourselves.
> 
> The original IO could have been started by a person who didn't have 
> permissions to actually carry it out successfully, so if you enter with 
> the page locked (because somebody else started the IO), and you wait for 
> the page and it's not up-to-date afterwards, you absolutely _have_ to try 
> the IO, and can only return a real IO error after your _own_ IO has 
> failed.

Sure, but we currently try to read _twice_, don't we?

> There is another issue too: even if the page was marked as having an error 
> when we entered (and no longer locked - maybe the IO failed last time 
> around), we should _still_ re-try. It might be a temporary error that has 
> since gone away, and if we don't re-try, we can end up in the totally 
> untenable situation where the kernel makes a soft error into a hard one. 

Yes, and in that case I think the page should be !Uptodate, so no
problem there.


* Re: SPAM: Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 16:57           ` SPAM: " Nick Piggin
@ 2006-10-11 17:11             ` Linus Torvalds
  2006-10-11 17:21               ` SPAM: " Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2006-10-11 17:11 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel



On Wed, 11 Oct 2006, Nick Piggin wrote:
> > 
> > The original IO could have been started by a person who didn't have 
> > permissions to actually carry it out successfully, so if you enter with 
> > the page locked (because somebody else started the IO), and you wait for 
> > the page and it's not up-to-date afterwards, you absolutely _have_ to try 
> > the IO, and can only return a real IO error after your _own_ IO has 
> > failed.
> 
> Sure, but we currently try to read _twice_, don't we?

Well, we have the read-ahead, and then the real read. By the time we do 
the real read, we have forgotten about the read-ahead details, so..

We also end up often having a _third_ one, simply because the _user_ tries 
it twice: it gets a partial IO read first, and then tries to continue and 
won't give up until it gets a real error.

So yes, we can end up reading it even more than twice, if only due to 
standard UNIX interfaces: you always have to have one extra "read()" 
system call in order to get the final error (or - much more commonly - 
EOF, of course).

If we tracked the read-aheads that _we_ started, we could probably get rid 
of one of them.

			Linus


* Re: SPAM: Re: SPAM: Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 17:11             ` Linus Torvalds
@ 2006-10-11 17:21               ` Nick Piggin
  2006-10-11 17:38                 ` Linus Torvalds
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-11 17:21 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel

On Wed, Oct 11, 2006 at 10:11:43AM -0700, Linus Torvalds wrote:
> 
> 
> On Wed, 11 Oct 2006, Nick Piggin wrote:
> > > 
> > > The original IO could have been started by a person who didn't have 
> > > permissions to actually carry it out successfully, so if you enter with 
> > > the page locked (because somebody else started the IO), and you wait for 
> > > the page and it's not up-to-date afterwards, you absolutely _have_ to try 
> > > the IO, and can only return a real IO error after your _own_ IO has 
> > > failed.
> > 
> > Sure, but we currently try to read _twice_, don't we?
> 
> Well, we have the read-ahead, and then the real read. By the time we do 
> the real read, we have forgotten about the read-ahead details, so..

I mean filemap_nopage does *two* synchronous reads when finding a !uptodate
page. This is despite the comment saying that it retries once on error.



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 17:21               ` SPAM: " Nick Piggin
@ 2006-10-11 17:38                 ` Linus Torvalds
  2006-10-12  3:33                   ` Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2006-10-11 17:38 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel



On Wed, 11 Oct 2006, Nick Piggin wrote:
> 
> I mean filemap_nopage does *two* synchronous reads when finding a !uptodate
> page. This is despite the comment saying that it retries once on error.

Ahh. 

Yes, now that you point to the actual code, that does look ugly.

I think it's related to the

	ClearPageError(page);

thing, and probably related to that function being rather old and having 
gone through several re-organizations. I suspect we used to fall through 
to the error handling code regardless of whether we did the read ourselves 
etc.

Are you saying that something like this would be preferable?

		Linus

---
diff --git a/mm/filemap.c b/mm/filemap.c
index 3464b68..e5ecf42 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1496,6 +1496,8 @@ page_not_uptodate:
 		goto success;
 	}
 
+	/* Clear any potential old errors, and try to read.. */
+	ClearPageError(page);
 	error = mapping->a_ops->readpage(file, page);
 	if (!error) {
 		wait_on_page_locked(page);
@@ -1526,21 +1528,12 @@ page_not_uptodate:
 		unlock_page(page);
 		goto success;
 	}
-	ClearPageError(page);
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
 
 	/*
 	 * Things didn't work out. Return zero to tell the
 	 * mm layer so, possibly freeing the page cache page first.
 	 */
+	unlock_page(page);
 	shrink_readahead_size_eio(file, ra);
 	page_cache_release(page);
 	return NOPAGE_SIGBUS;


* Re: [patch 4/5] mm: add vm_insert_pfn helpler
  2006-10-11 11:24     ` Nick Piggin
@ 2006-10-11 21:30       ` Thomas Hellström
  0 siblings, 0 replies; 42+ messages in thread
From: Thomas Hellström @ 2006-10-11 21:30 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Kernel

Nick Piggin wrote:

>On Wed, Oct 11, 2006 at 12:12:19PM +0200, Thomas Hellstrom wrote:
>  
>
>>Nick, I just realized: would it be possible to have a pgprot_t argument 
>>to this one, instead of it using vma->vm_pgprot?
>>
>>The motivation for this (DRM again) is that some architectures (powerpc) 
>>cannot map the AGP aperture through IO space, but needs to remap the 
>>page from memory with a nocache attribute set. Others need special 
>>pgprot settings for write-combined mappings.
>>
>>Now, there's a possibility to change vma->vm_pgprot during the first 
>>->fault(), but again, we only have the mmap_sem in read mode.
>>    
>>
>
>I don't see a problem with that. It would be nice if vm_pgprot could
>be kept in synch with the pte protections, but I guess a crazy
>driver should be allowed to do anything it wants ;)
>
>
>  
>
:).
Actually, the caching bits are sort of left out from the mm code anyway. 
For example, mprotect will reset them, which is sort of a security risk, 
since an unprivileged user can call mprotect on uncached page mappings, 
causing inconsistent mappings and stability problems.

Thomas



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 17:38                 ` Linus Torvalds
@ 2006-10-12  3:33                   ` Nick Piggin
  2006-10-12 15:37                     ` Linus Torvalds
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-12  3:33 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel

On Wed, Oct 11, 2006 at 10:38:31AM -0700, Linus Torvalds wrote:
> 
> 
> On Wed, 11 Oct 2006, Nick Piggin wrote:
> > 
> > I mean filemap_nopage does *two* synchronous reads when finding a !uptodate
> > page. This is despite the comment saying that it retries once on error.
> 
> Ahh. 
> 
> Yes, now that you point to the actual code, that does look ugly.
> 
> I think it's related to the
> 
> 	ClearPageError(page);
> 
> thing, and probably related to that function being rather old and having 
> gone through several re-organizations. I suspect we used to fall through 
> to the error handling code regardless of whether we did the read ourselves 
> etc.

Yeah, it may have even been a mismerge at some point in time.

> Are you saying that something like this would be preferable?

I think so; it is neater and clearer. I actually didn't even bother relocking
and checking the page again on readpage error so got rid of quite a bit of
code.

> 
> 		Linus
> 
> ---
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3464b68..e5ecf42 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1496,6 +1496,8 @@ page_not_uptodate:
>  		goto success;
>  	}
>  
> +	/* Clear any potential old errors, and try to read.. */
> +	ClearPageError(page);
>  	error = mapping->a_ops->readpage(file, page);
>  	if (!error) {
>  		wait_on_page_locked(page);
> @@ -1526,21 +1528,12 @@ page_not_uptodate:
>  		unlock_page(page);
>  		goto success;
>  	}
> -	ClearPageError(page);
> -	error = mapping->a_ops->readpage(file, page);
> -	if (!error) {
> -		wait_on_page_locked(page);
> -		if (PageUptodate(page))
> -			goto success;
> -	} else if (error == AOP_TRUNCATED_PAGE) {
> -		page_cache_release(page);
> -		goto retry_find;
> -	}
>  
>  	/*
>  	 * Things didn't work out. Return zero to tell the
>  	 * mm layer so, possibly freeing the page cache page first.
>  	 */
> +	unlock_page(page);
>  	shrink_readahead_size_eio(file, ra);
>  	page_cache_release(page);
>  	return NOPAGE_SIGBUS;


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-12  3:33                   ` Nick Piggin
@ 2006-10-12 15:37                     ` Linus Torvalds
  2006-10-12 15:40                       ` Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2006-10-12 15:37 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel



On Thu, 12 Oct 2006, Nick Piggin wrote:
> 
> > Are you saying that something like this would be preferable?
> 
> I think so, it is neater and clearer. I actually didn't even bother relocking
> and checking the page again on readpage error so got rid of quite a bit of
> code.

Well, the readpage error should be rare (and for the _normal_ case we just 
do the "wait_on_page_locked()" thing). And I think we should lock the page 
in order to do the truncation check, no?

But I don't have any really strong feelings. I'm certainly ok with the 
patch I sent out. How about putting it through -mm? Here's my sign-off:

	Signed-off-by: Linus Torvalds <torvalds@osdl.org>

if you want to send it off to Andrew (or if Andrew wants to just take it 
himself ;)

Btw, how did you even notice this? Just by reading the source, or because 
you actually saw multiple errors reported?

		Linus


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-12 15:37                     ` Linus Torvalds
@ 2006-10-12 15:40                       ` Nick Piggin
  0 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-12 15:40 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel

On Thu, Oct 12, 2006 at 08:37:39AM -0700, Linus Torvalds wrote:
> 
> 
> On Thu, 12 Oct 2006, Nick Piggin wrote:
> > 
> > > Are you saying that something like this would be preferable?
> > 
> > I think so, it is neater and clearer. I actually didn't even bother relocking
> > and checking the page again on readpage error so got rid of quite a bit of
> > code.
> 
> Well, the readpage error should be rare (and for the _normal_ case we just 
> do the "wait_on_page_locked()" thing). And I think we should lock the page 
> in order to do the truncation check, no?

Definitely.

> But I don't have any really strong feelings. I'm certainly ok with the 
> patch I sent out. How about putting it through -mm? Here's my sign-off:
> 
> 	Signed-off-by: Linus Torvalds <torvalds@osdl.org>
> 
> if you want to send it off to Andrew (or if Andrew wants to just take it 
> himself ;)

OK... maybe it can wait for the other changes, and we can think about
it then. I'll carry around the split-out patch, though.

> Btw, how did you even notice this? Just by reading the source, or because 
> you actually saw multiple errors reported?

Reading the source, thinking about the cleanups we can do if filemap_nopage
takes the page lock...



* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11  5:50     ` Nick Piggin
                         ` (2 preceding siblings ...)
       [not found]       ` <20061010231150.fb9e30f5.akpm@osdl.org>
@ 2006-10-21  1:53       ` Benjamin Herrenschmidt
  3 siblings, 0 replies; 42+ messages in thread
From: Benjamin Herrenschmidt @ 2006-10-21  1:53 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Nick Piggin, Linux Memory Management, Linux Kernel


> Without looking at any code, perhaps we could instead run get_user_pages
> and copy the memory that way.

I have a deep hatred for get_user_pages().... maybe not totally rational
though :) It will also only work with things that are actually backed up
by struct page. Is that ok in your case?

Ben.




* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-11 18:34       ` Mark Fasheh
@ 2006-10-12  3:28         ` Nick Piggin
  0 siblings, 0 replies; 42+ messages in thread
From: Nick Piggin @ 2006-10-12  3:28 UTC (permalink / raw)
  To: Mark Fasheh
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

On Wed, Oct 11, 2006 at 11:34:04AM -0700, Mark Fasheh wrote:
> On Tue, Oct 10, 2006 at 11:10:42AM +1000, Nick Piggin wrote:
> 
> The test I run is over here btw:
> 
> http://oss.oracle.com/projects/ocfs2-test/src/trunk/programs/multi_node_mmap/multi_mmap.c
> 
> I ran it with the following parameters:
> 
> mpirun -np 6 n1-3 ./multi_mmap -w mmap -r mmap -i 1000 -b 1024 /ocfs2/mmap/test4.txt

Thanks, I'll see if I can reproduce.


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-10  1:10     ` Nick Piggin
@ 2006-10-11 18:34       ` Mark Fasheh
  2006-10-12  3:28         ` Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Mark Fasheh @ 2006-10-11 18:34 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

On Tue, Oct 10, 2006 at 11:10:42AM +1000, Nick Piggin wrote:
> If you want a stable patchset for testing, the previous one to linux-mm
> starting with "[patch 1/3] mm: fault vs invalidate/truncate check" went
> through some stress testing here...
Hmm, unfortunately my testing so far hasn't been particularly encouraging...

Shortly after my test starts, one of the "ocfs2-vote" processes on one of my
nodes will begin consuming cpu at a rate which indicates it might be in an
infinite loop. The soft lockup detection code seems to agree:

BUG: soft lockup detected on CPU#0!
Call Trace:
[C00000003795F220] [C000000000011310] .show_stack+0x50/0x1cc (unreliable)
[C00000003795F2D0] [C000000000086100] .softlockup_tick+0xf8/0x120
[C00000003795F380] [C000000000060DA8] .run_local_timers+0x1c/0x30
[C00000003795F400] [C000000000023B28] .timer_interrupt+0x110/0x500
[C00000003795F520] [C0000000000034EC] decrementer_common+0xec/0x100
--- Exception: 901 at ._raw_spin_lock+0x84/0x1a0
    LR = ._spin_lock+0x10/0x24
[C00000003795F810] [C000000000788FC8] init_thread_union+0xfc8/0x4000 (unreliable)
[C00000003795F8B0] [C0000000004A66B8] ._spin_lock+0x10/0x24
[C00000003795F930] [C00000000009EDBC] .unmap_mapping_range+0x88/0x2d4
[C00000003795FA90] [C0000000000967E4] .truncate_inode_pages_range+0x2b8/0x490
[C00000003795FBE0] [D0000000005FA8C0] .ocfs2_data_convert_worker+0x124/0x14c [ocfs2]
[C00000003795FC70] [D0000000005FB0BC] .ocfs2_process_blocked_lock+0x184/0xca4 [ocfs2]
[C00000003795FD50] [D000000000629DE8] .ocfs2_vote_thread+0x1a8/0xc18 [ocfs2]
[C00000003795FEE0] [C00000000007000C] .kthread+0x154/0x1a4
[C00000003795FF90] [C000000000027124] .kernel_thread+0x4c/0x68


A sysrq-t doesn't show anything interesting from any of the other OCFS2
processes. This is your patchset from the 10th, running against Linus' git
tree from that day, with my mmap patch merged in.

The stack seems to indicate that we're stuck in one of these
truncate_inode_pages_range() loops:

+                       while (page_mapped(page)) {
+                               unmap_mapping_range(mapping,
+                                 (loff_t)page_index<<PAGE_CACHE_SHIFT,
+                                 PAGE_CACHE_SIZE, 0);
+                       }


The test I run is over here btw:

http://oss.oracle.com/projects/ocfs2-test/src/trunk/programs/multi_node_mmap/multi_mmap.c

I ran it with the following parameters:

mpirun -np 6 n1-3 ./multi_mmap -w mmap -r mmap -i 1000 -b 1024 /ocfs2/mmap/test4.txt
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-09 21:10   ` Mark Fasheh
@ 2006-10-10  1:10     ` Nick Piggin
  2006-10-11 18:34       ` Mark Fasheh
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-10  1:10 UTC (permalink / raw)
  To: Mark Fasheh
  Cc: Nick Piggin, Hugh Dickins, Linux Memory Management,
	Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Ingo Molnar

Mark Fasheh wrote:
> Hi Nick,
> 
> On Mon, Oct 09, 2006 at 06:12:26PM +0200, Nick Piggin wrote:
> 
>>Complexity and documentation issues aside, the locking protocol fails
>>in the case where we would like to invalidate pagecache inside i_size.
> 
> That pretty much describes part of what ocfs2_data_convert_worker() does.
> It's called when another node wants to take a lock at an incompatible level
> on an inode's data.
> 
> This involves up to two steps, depending on the level of the lock requested.
> 
> 1) It always syncs dirty data.
> 
> 2) If it's dropping due to writes on another node, then pages will be
>    invalidated and mappings torn down.

Yep, your unmap_mapping_range, and invalidate_inode_pages2 calls in there
are all subject to this bug (provided the pages being invalidated are visible
and able to be mmap()ed).

> There's actually an ocfs2 patch to support shared writeable mappings in via
> the ->page_mkwrite() callback, but I haven't pushed it upstream due to a bug
> I found during some later testing. I believe the bug is a VM issue, and your
> description of the race Andrea identified leads me to wonder if you all
> might have just found it and fixed it for me :)
> 
> 
> In short, I have an MPI test program which rotates through a set of
> processes which have mmaped a pre-formatted file. One process writes some
> data, the rest verify that they see the new data. When I run multiple
> processes on multiple nodes, I will sometimes find that one of the processes
> fails because it sees stale data.

This is roughly similar to what my test program does that I wrote to
reproduce the bug. So it wouldn't surprise me.

> FWIW, the overall approach taken in the patch below seems fine to me, though
> I'm no VM expert :)
> 
> Not having ocfs2_data_convert_worker() call unmap_mapping_range() directly,
> is ok as long as the intent of the function is preserved. You seem to be
> doing this by having truncate_inode_pages() unmap instead.

truncate_inode_pages now unmaps the pages internally, so you should
be OK there. If you're expecting this to happen frequently with mapped
pages, it is probably more efficient to call the full unmap_mapping_range
before you call truncate_inode_pages...

[ Somewhere on my todo list is a cleanup of mm/truncate.c ;) ]

If you want a stable patchset for testing, the previous one to linux-mm
starting with "[patch 1/3] mm: fault vs invalidate/truncate check" went
through some stress testing here...

-- 
SUSE Labs, Novell Inc.


* Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-09 16:12 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
@ 2006-10-09 21:10   ` Mark Fasheh
  2006-10-10  1:10     ` Nick Piggin
  0 siblings, 1 reply; 42+ messages in thread
From: Mark Fasheh @ 2006-10-09 21:10 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Hugh Dickins, Linux Memory Management, Andrew Morton,
	Jes Sorensen, Benjamin Herrenschmidt, Linux Kernel, Ingo Molnar

Hi Nick,

On Mon, Oct 09, 2006 at 06:12:26PM +0200, Nick Piggin wrote:
> Complexity and documentation issues aside, the locking protocol fails
> in the case where we would like to invalidate pagecache inside i_size.
That pretty much describes part of what ocfs2_data_convert_worker() does.
It's called when another node wants to take a lock at an incompatible level
on an inode's data.

This involves up to two steps, depending on the level of the lock requested.

1) It always syncs dirty data.

2) If it's dropping due to writes on another node, then pages will be
   invalidated and mappings torn down.


There's actually an ocfs2 patch to support shared writeable mappings in via
the ->page_mkwrite() callback, but I haven't pushed it upstream due to a bug
I found during some later testing. I believe the bug is a VM issue, and your
description of the race Andrea identified leads me to wonder if you all
might have just found it and fixed it for me :)


In short, I have an MPI test program which rotates through a set of
processes which have mmaped a pre-formatted file. One process writes some
data, the rest verify that they see the new data. When I run multiple
processes on multiple nodes, I will sometimes find that one of the processes
fails because it sees stale data.


FWIW, the overall approach taken in the patch below seems fine to me, though
I'm no VM expert :)

Not having ocfs2_data_convert_worker() call unmap_mapping_range() directly
is ok as long as the intent of the function is preserved. You seem to be
doing this by having truncate_inode_pages() unmap instead.

Thanks,
	--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@oracle.com


* [patch 2/5] mm: fault vs invalidate/truncate race fix
  2006-10-09 16:12 Nick Piggin
@ 2006-10-09 16:12 ` Nick Piggin
  2006-10-09 21:10   ` Mark Fasheh
  0 siblings, 1 reply; 42+ messages in thread
From: Nick Piggin @ 2006-10-09 16:12 UTC (permalink / raw)
  To: Hugh Dickins, Linux Memory Management
  Cc: Andrew Morton, Jes Sorensen, Benjamin Herrenschmidt,
	Linux Kernel, Nick Piggin, Ingo Molnar

Fix the race between invalidate_inode_pages and do_no_page.

Andrea Arcangeli identified a subtle race between invalidation of
pages from pagecache with userspace mappings, and do_no_page.

The issue is that invalidation has to shoot down all mappings to the
page, before it can be discarded from the pagecache. Between shooting
down ptes to a particular page, and actually dropping the struct page
from the pagecache, do_no_page from any process might fault on that
page and establish a new mapping to the page just before it gets
discarded from the pagecache.
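The window can be shown with a toy userspace model (all names here are hypothetical sketch names, not the kernel code): an invalidator that unmaps and then drops the page, with nothing stopping a fault from re-mapping it in between the two steps:

```c
/* Toy model of the race: one page, no page lock (hypothetical names). */
struct page { int mapped; int in_cache; };
static struct page pg = { 1, 1 };

static int fault_racing;        /* test hook: make a fault fire mid-invalidate */

/* do_no_page(): finds the page in the pagecache and maps it. */
static void do_no_page(void)
{
	if (pg.in_cache)
		pg.mapped = 1;
}

/* invalidation sequence, without holding the page lock */
static void invalidate_page(void)
{
	pg.mapped = 0;          /* shoot down all ptes to the page... */
	if (fault_racing) {     /* ...a concurrent fault sneaks in here... */
		fault_racing = 0;
		do_no_page();
	}
	pg.in_cache = 0;        /* ...then drop the page from the pagecache */
}
```

After a racing fault, the page is gone from the pagecache but still mapped: exactly the dangling (->mapping == NULL) mapping described above.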

The most common case where such invalidation is used is in file
truncation. This case was catered for by doing a sort of open-coded
seqlock between the file's i_size and its truncate_count.

Truncation will decrease i_size, then increment truncate_count before
unmapping userspace pages; do_no_page will read truncate_count, then
find the page if it is within i_size, and then check truncate_count
under the page table lock and back out and retry if it had
subsequently been changed (ptl will serialise against unmapping, and
ensure a potentially updated truncate_count is actually visible).
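In userspace terms the old protocol looks roughly like this (a sketch with hypothetical names; the real code spreads these steps across vmtruncate, unmap_mapping_range and do_no_page, with memory barriers omitted here):

```c
/* Sketch of the open-coded seqlock between i_size and truncate_count. */
static unsigned long i_size_pages = 100;   /* file size, in pages */
static unsigned int truncate_count;
static int truncate_racing;                /* test hook: racing truncate */

static void simulated_truncate(unsigned long new_size_pages)
{
	i_size_pages = new_size_pages;     /* 1. shrink i_size... */
	truncate_count++;                  /* 2. ...then bump the counter */
	/* 3. ...then unmap userspace ptes (omitted here) */
}

/* Returns 0 when a page may be mapped, -1 for the SIGBUS case. */
static int fault_page(unsigned long pgoff)
{
	for (;;) {
		unsigned int seq = truncate_count; /* sample before i_size */

		if (pgoff >= i_size_pages)
			return -1;                 /* outside the file */

		/* the ->nopage lookup happens here; a truncate may race */
		if (truncate_racing) {
			truncate_racing = 0;
			simulated_truncate(pgoff);
		}

		/* under the page table lock: back out and retry if the
		 * counter moved, since i_size may have shrunk under us */
		if (seq != truncate_count)
			continue;
		return 0;
	}
}
```

A fault that races with the truncate retries and then sees the new, smaller i_size.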

Complexity and documentation issues aside, the locking protocol fails
in the case where we would like to invalidate pagecache inside i_size.
do_no_page can come in anytime and filemap_nopage is not aware of the
invalidation in progress (as it is when it is outside i_size). The
end result is that dangling (->mapping == NULL) pages that appear to
be from a particular file may be mapped into userspace with nonsense
data. Valid mappings to the same place will see a different page.

Andrea implemented two working fixes, one using a real seqlock,
another using a page->flags bit. He also proposed using the page lock
in do_no_page, but that was initially considered too heavyweight.
However, it is not a global or per-file lock, and the page cacheline
is modified in do_no_page to increment _count and _mapcount anyway, so
a further modification should not be a large performance hit.
Scalability is not an issue.

This patch implements this latter approach. ->nopage implementations
return with the page locked if it is possible for their underlying
file to be invalidated (in that case, they must set a special vm_flags
bit to indicate so). do_no_page only unlocks the page after setting
up the mapping completely. Invalidation is excluded because it holds
the page lock during invalidation of each page (and ensures that the
page is not mapped while holding the lock).

This allows significant simplifications in do_no_page.
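A minimal sketch of the resulting protocol (again with hypothetical names, one page): ->nopage returns the page locked, do_no_page unlocks only after the pte is installed, and invalidation does both of its steps under the same lock:

```c
/* Sketch of the page-lock protocol for a VM_CAN_INVALIDATE vma. */
struct lpage { int locked; int mapped; int in_cache; };
static struct lpage pg2 = { 0, 0, 1 };

static void lock_page(struct lpage *p)   { p->locked = 1; /* would sleep if held */ }
static void unlock_page(struct lpage *p) { p->locked = 0; }

/* ->nopage: return the page locked, or NULL for the SIGBUS case. */
static struct lpage *nopage(void)
{
	lock_page(&pg2);
	if (!pg2.in_cache) {           /* already invalidated */
		unlock_page(&pg2);
		return 0;
	}
	return &pg2;                   /* still locked */
}

/* do_no_page(): only drop the page lock after the mapping is set up. */
static int fault2(void)
{
	struct lpage *p = nopage();
	if (!p)
		return -1;
	p->mapped = 1;                 /* set_pte_at() under ptl */
	unlock_page(p);
	return 0;
}

/* Invalidation holds the page lock across unmap + removal, so no
 * fault can slip a new mapping in between the two steps. */
static void invalidate2(void)
{
	lock_page(&pg2);
	pg2.mapped = 0;                /* unmap_mapping_range() */
	pg2.in_cache = 0;              /* remove from pagecache */
	unlock_page(&pg2);
}
```

With the lock held across both steps, a racing fault blocks in nopage() and then sees the page gone, so it can never re-establish a mapping to a discarded page.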

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -166,6 +166,11 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
+#define VM_CAN_INVALIDATE	0x04000000	/* The mapping may be invalidated,
+					 * eg. truncate or invalidate_inode_*.
+					 * In this case, do_no_page must
+					 * return with the page locked.
+					 */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1363,9 +1363,10 @@ struct page *filemap_nopage(struct vm_ar
 	unsigned long size, pgoff;
 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
 
+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+
 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
 
-retry_all:
 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (pgoff >= size)
 		goto outside_data_content;
@@ -1387,7 +1388,7 @@ retry_all:
 	 * Do we have something in the page cache already?
 	 */
 retry_find:
-	page = find_get_page(mapping, pgoff);
+	page = find_lock_page(mapping, pgoff);
 	if (!page) {
 		unsigned long ra_pages;
 
@@ -1421,7 +1422,7 @@ retry_find:
 				start = pgoff - ra_pages / 2;
 			do_page_cache_readahead(mapping, file, start, ra_pages);
 		}
-		page = find_get_page(mapping, pgoff);
+		page = find_lock_page(mapping, pgoff);
 		if (!page)
 			goto no_cached_page;
 	}
@@ -1430,13 +1431,25 @@ retry_find:
 		ra->mmap_hit++;
 
 	/*
-	 * Ok, found a page in the page cache, now we need to check
-	 * that it's up-to-date.
+	 * We have a locked page in the page cache, now we need to check
+	 * that it's up-to-date. If not, it is going to be due to an error.
 	 */
-	if (!PageUptodate(page))
+	if (unlikely(!PageUptodate(page)))
 		goto page_not_uptodate;
 
-success:
+#if 0
+/*
+ * XXX: no we don't have to, because we check and unmap in
+ * truncate, when the page is locked. Verify and delete me.
+ */
+	/* Must recheck i_size under page lock */
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	if (unlikely(pgoff >= size)) {
+		unlock_page(page);
+		goto outside_data_content;
+	}
+#endif
+
 	/*
 	 * Found the page and have a reference on it.
 	 */
@@ -1479,34 +1492,11 @@ no_cached_page:
 	return NOPAGE_SIGBUS;
 
 page_not_uptodate:
+	/* IO error path */
 	if (!did_readaround) {
 		majmin = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
 	}
-	lock_page(page);
-
-	/* Did it get unhashed while we waited for it? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Did somebody else get it up-to-date? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
-
-	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
-		goto retry_find;
-	}
 
 	/*
 	 * Umm, take care of errors if the page isn't up-to-date.
@@ -1514,37 +1504,15 @@ page_not_uptodate:
 	 * because there really aren't any performance issues here
 	 * and we need to check for errors.
 	 */
-	lock_page(page);
-
-	/* Somebody truncated the page on us? */
-	if (!page->mapping) {
-		unlock_page(page);
-		page_cache_release(page);
-		goto retry_all;
-	}
-
-	/* Somebody else successfully read it in? */
-	if (PageUptodate(page)) {
-		unlock_page(page);
-		goto success;
-	}
 	ClearPageError(page);
 	error = mapping->a_ops->readpage(file, page);
-	if (!error) {
-		wait_on_page_locked(page);
-		if (PageUptodate(page))
-			goto success;
-	} else if (error == AOP_TRUNCATED_PAGE) {
-		page_cache_release(page);
+	page_cache_release(page);
+
+	if (!error || error == AOP_TRUNCATED_PAGE)
 		goto retry_find;
-	}
 
-	/*
-	 * Things didn't work out. Return zero to tell the
-	 * mm layer so, possibly freeing the page cache page first.
-	 */
+	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
-	page_cache_release(page);
 	return NOPAGE_SIGBUS;
 }
 EXPORT_SYMBOL(filemap_nopage);
@@ -1737,6 +1705,7 @@ int generic_file_mmap(struct file * file
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &generic_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1675,6 +1675,13 @@ static int unmap_mapping_range_vma(struc
 	unsigned long restart_addr;
 	int need_break;
 
+	/*
+	 * files that support invalidating or truncating portions of the
+	 * file from under mmaped areas must set the VM_CAN_INVALIDATE flag, and
+	 * have their .nopage function return the page locked.
+	 */
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 again:
 	restart_addr = vma->vm_truncate_count;
 	if (is_restart_addr(restart_addr) && start_addr < restart_addr) {
@@ -1805,17 +1812,8 @@ void unmap_mapping_range(struct address_
 
 	spin_lock(&mapping->i_mmap_lock);
 
-	/* serialize i_size write against truncate_count write */
-	smp_wmb();
-	/* Protect against page faults, and endless unmapping loops */
+	/* Protect against endless unmapping loops */
 	mapping->truncate_count++;
-	/*
-	 * For archs where spin_lock has inclusive semantics like ia64
-	 * this smp_mb() will prevent to read pagetable contents
-	 * before the truncate_count increment is visible to
-	 * other cpus.
-	 */
-	smp_mb();
 	if (unlikely(is_restart_addr(mapping->truncate_count))) {
 		if (mapping->truncate_count == 0)
 			reset_vma_truncate_counts(mapping);
@@ -1854,7 +1852,6 @@ int vmtruncate(struct inode * inode, lof
 	if (IS_SWAPFILE(inode))
 		goto out_busy;
 	i_size_write(inode, offset);
-	unmap_mapping_range(mapping, offset + PAGE_SIZE - 1, 0, 1);
 	truncate_inode_pages(mapping, offset);
 	goto out_truncate;
 
@@ -1893,7 +1890,6 @@ int vmtruncate_range(struct inode *inode
 
 	mutex_lock(&inode->i_mutex);
 	down_write(&inode->i_alloc_sem);
-	unmap_mapping_range(mapping, offset, (end - offset), 1);
 	truncate_inode_pages_range(mapping, offset, end);
 	inode->i_op->truncate_range(inode, offset, end);
 	up_write(&inode->i_alloc_sem);
@@ -2144,10 +2140,8 @@ static int do_no_page(struct mm_struct *
 		int write_access)
 {
 	spinlock_t *ptl;
-	struct page *new_page;
-	struct address_space *mapping = NULL;
+	struct page *page, *nopage_page;
 	pte_t entry;
-	unsigned int sequence = 0;
 	int ret = VM_FAULT_MINOR;
 	int anon = 0;
 	struct page *dirty_page = NULL;
@@ -2155,73 +2149,54 @@ static int do_no_page(struct mm_struct *
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
 
-	if (vma->vm_file) {
-		mapping = vma->vm_file->f_mapping;
-		sequence = mapping->truncate_count;
-		smp_rmb(); /* serializes i_size against truncate_count */
-	}
-retry:
-	new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
-	/*
-	 * No smp_rmb is needed here as long as there's a full
-	 * spin_lock/unlock sequence inside the ->nopage callback
-	 * (for the pagecache lookup) that acts as an implicit
-	 * smp_mb() and prevents the i_size read to happen
-	 * after the next truncate_count read.
-	 */
-
+	nopage_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
 	/* no page was available -- either SIGBUS, OOM or REFAULT */
-	if (unlikely(new_page == NOPAGE_SIGBUS))
+	if (unlikely(nopage_page == NOPAGE_SIGBUS))
 		return VM_FAULT_SIGBUS;
-	else if (unlikely(new_page == NOPAGE_OOM))
+	else if (unlikely(nopage_page == NOPAGE_OOM))
 		return VM_FAULT_OOM;
-	else if (unlikely(new_page == NOPAGE_REFAULT))
+	else if (unlikely(nopage_page == NOPAGE_REFAULT))
 		return VM_FAULT_MINOR;
 
+	BUG_ON(vma->vm_flags & VM_CAN_INVALIDATE && !PageLocked(nopage_page));
+	/*
+	 * For consistency in subsequent calls, make the nopage_page always
+	 * locked.  These should be in the minority but if they turn out to be
+	 * critical then this can always be revisited
+	 */
+	if (unlikely(!(vma->vm_flags & VM_CAN_INVALIDATE)))
+		lock_page(nopage_page);
+
 	/*
 	 * Should we do an early C-O-W break?
 	 */
+	page = nopage_page;
 	if (write_access) {
 		if (!(vma->vm_flags & VM_SHARED)) {
-			struct page *page;
-
-			if (unlikely(anon_vma_prepare(vma)))
-				goto oom;
+			if (unlikely(anon_vma_prepare(vma))) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
 			page = alloc_page_vma(GFP_HIGHUSER, vma, address);
-			if (!page)
-				goto oom;
-			copy_user_highpage(page, new_page, address);
-			page_cache_release(new_page);
-			new_page = page;
+			if (!page) {
+				ret = VM_FAULT_OOM;
+				goto out_error;
+			}
+			copy_user_highpage(page, nopage_page, address);
 			anon = 1;
-
 		} else {
 			/* if the page will be shareable, see if the backing
 			 * address space wants to know that the page is about
 			 * to become writable */
 			if (vma->vm_ops->page_mkwrite &&
-			    vma->vm_ops->page_mkwrite(vma, new_page) < 0
-			    ) {
-				page_cache_release(new_page);
-				return VM_FAULT_SIGBUS;
+			    vma->vm_ops->page_mkwrite(vma, page) < 0) {
+				ret = VM_FAULT_SIGBUS;
+				goto out_error;
 			}
 		}
 	}
 
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-	/*
-	 * For a file-backed vma, someone could have truncated or otherwise
-	 * invalidated this page.  If unmap_mapping_range got called,
-	 * retry getting the page.
-	 */
-	if (mapping && unlikely(sequence != mapping->truncate_count)) {
-		pte_unmap_unlock(page_table, ptl);
-		page_cache_release(new_page);
-		cond_resched();
-		sequence = mapping->truncate_count;
-		smp_rmb();
-		goto retry;
-	}
 
 	/*
 	 * This silly early PAGE_DIRTY setting removes a race
@@ -2234,43 +2209,51 @@ retry:
 	 * handle that later.
 	 */
 	/* Only go through if we didn't race with anybody else... */
-	if (pte_none(*page_table)) {
-		flush_icache_page(vma, new_page);
-		entry = mk_pte(new_page, vma->vm_page_prot);
+	if (likely(pte_none(*page_table))) {
+		flush_icache_page(vma, page);
+		entry = mk_pte(page, vma->vm_page_prot);
 		if (write_access)
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte_at(mm, address, page_table, entry);
 		if (anon) {
 			inc_mm_counter(mm, anon_rss);
-			lru_cache_add_active(new_page);
-			page_add_new_anon_rmap(new_page, vma, address);
+			lru_cache_add_active(page);
+			page_add_new_anon_rmap(page, vma, address);
 		} else {
 			inc_mm_counter(mm, file_rss);
-			page_add_file_rmap(new_page);
+			page_add_file_rmap(page);
 			if (write_access) {
-				dirty_page = new_page;
+				dirty_page = page;
 				get_page(dirty_page);
 			}
 		}
+
+		/* no need to invalidate: a not-present page won't be cached */
+		update_mmu_cache(vma, address, entry);
+		lazy_mmu_prot_update(entry);
 	} else {
-		/* One of our sibling threads was faster, back out. */
-		page_cache_release(new_page);
-		goto unlock;
+		if (anon)
+			page_cache_release(page);
+		else
+			anon = 1; /* not anon, but release nopage_page */
 	}
 
-	/* no need to invalidate: a not-present page shouldn't be cached */
-	update_mmu_cache(vma, address, entry);
-	lazy_mmu_prot_update(entry);
-unlock:
 	pte_unmap_unlock(page_table, ptl);
-	if (dirty_page) {
+
+out:
+	unlock_page(nopage_page);
+	if (anon)
+		page_cache_release(nopage_page);
+	else if (dirty_page) {
 		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
+
 	return ret;
-oom:
-	page_cache_release(new_page);
-	return VM_FAULT_OOM;
+
+out_error:
+	anon = 1; /* release nopage_page */
+	goto out;
 }
 
 /*
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -81,6 +81,7 @@ enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate page */
+	SGP_NOPAGE,	/* same as SGP_CACHE, return with page locked */
 };
 
 static int shmem_getpage(struct inode *inode, unsigned long idx,
@@ -1209,8 +1210,10 @@ repeat:
 	}
 done:
 	if (*pagep != filepage) {
-		unlock_page(filepage);
 		*pagep = filepage;
+		if (sgp != SGP_NOPAGE)
+			unlock_page(filepage);
+
 	}
 	return 0;
 
@@ -1229,13 +1232,15 @@ struct page *shmem_nopage(struct vm_area
 	unsigned long idx;
 	int error;
 
+	BUG_ON(!(vma->vm_flags & VM_CAN_INVALIDATE));
+
 	idx = (address - vma->vm_start) >> PAGE_SHIFT;
 	idx += vma->vm_pgoff;
 	idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
 	if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
 		return NOPAGE_SIGBUS;
 
-	error = shmem_getpage(inode, idx, &page, SGP_CACHE, type);
+	error = shmem_getpage(inode, idx, &page, SGP_NOPAGE, type);
 	if (error)
 		return (error == -ENOMEM)? NOPAGE_OOM: NOPAGE_SIGBUS;
 
@@ -1333,6 +1338,7 @@ int shmem_mmap(struct file *file, struct
 {
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
@@ -2445,5 +2451,6 @@ int shmem_zero_setup(struct vm_area_stru
 		fput(vma->vm_file);
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
Index: linux-2.6/fs/ncpfs/mmap.c
===================================================================
--- linux-2.6.orig/fs/ncpfs/mmap.c
+++ linux-2.6/fs/ncpfs/mmap.c
@@ -123,6 +123,7 @@ int ncp_mmap(struct file *file, struct v
 		return -EFBIG;
 
 	vma->vm_ops = &ncp_file_mmap;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	file_accessed(file);
 	return 0;
 }
Index: linux-2.6/fs/ocfs2/mmap.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/mmap.c
+++ linux-2.6/fs/ocfs2/mmap.c
@@ -93,6 +93,7 @@ int ocfs2_mmap(struct file *file, struct
 
 	file_accessed(file);
 	vma->vm_ops = &ocfs2_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 	return 0;
 }
 
Index: linux-2.6/fs/xfs/linux-2.6/xfs_file.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_file.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_file.c
@@ -343,6 +343,7 @@ xfs_file_mmap(
 	struct vm_area_struct *vma)
 {
 	vma->vm_ops = &xfs_file_vm_ops;
+	vma->vm_flags |= VM_CAN_INVALIDATE;
 
 #ifdef CONFIG_XFS_DMAPI
 	if (vn_from_inode(filp->f_dentry->d_inode)->v_vfsp->vfs_flag & VFS_DMI)
Index: linux-2.6/ipc/shm.c
===================================================================
--- linux-2.6.orig/ipc/shm.c
+++ linux-2.6/ipc/shm.c
@@ -230,6 +230,7 @@ static int shm_mmap(struct file * file, 
 	ret = shmem_mmap(file, vma);
 	if (ret == 0) {
 		vma->vm_ops = &shm_vm_ops;
+		vma->vm_flags |= VM_CAN_INVALIDATE;
 		if (!(vma->vm_flags & VM_WRITE))
 			vma->vm_flags &= ~VM_MAYWRITE;
 		shm_inc(shm_file_ns(file), file->f_dentry->d_inode->i_ino);
Index: linux-2.6/fs/ocfs2/dlmglue.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/dlmglue.c
+++ linux-2.6/fs/ocfs2/dlmglue.c
@@ -2656,7 +2656,6 @@ static int ocfs2_data_convert_worker(str
 	sync_mapping_buffers(mapping);
 	if (blocking == LKM_EXMODE) {
 		truncate_inode_pages(mapping, 0);
-		unmap_mapping_range(mapping, 0, 0, 0);
 	} else {
 		/* We only need to wait on the I/O if we're not also
 		 * truncating pages because truncate_inode_pages waits
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -163,6 +163,11 @@ void truncate_inode_pages_range(struct a
 				unlock_page(page);
 				continue;
 			}
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			truncate_complete_page(mapping, page);
 			unlock_page(page);
 		}
@@ -200,6 +205,11 @@ void truncate_inode_pages_range(struct a
 				break;
 			lock_page(page);
 			wait_on_page_writeback(page);
+			while (page_mapped(page)) {
+				unmap_mapping_range(mapping,
+				  (loff_t)page_index<<PAGE_CACHE_SHIFT,
+				  PAGE_CACHE_SIZE, 0);
+			}
 			if (page->index > next)
 				next = page->index;
 			next++;
Index: linux-2.6/fs/gfs2/ops_file.c
===================================================================
--- linux-2.6.orig/fs/gfs2/ops_file.c
+++ linux-2.6/fs/gfs2/ops_file.c
@@ -396,6 +396,8 @@ static int gfs2_mmap(struct file *file, 
 	else
 		vma->vm_ops = &gfs2_vm_ops_private;
 
+	vma->vm_flags |= VM_CAN_INVALIDATE;
+
 	gfs2_glock_dq_uninit(&i_gh);
 
 	return error;



Thread overview: 42+ messages
2006-10-10 14:21 [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
2006-10-10 14:21 ` [patch 1/5] mm: fault vs invalidate/truncate check Nick Piggin
2006-10-10 14:21 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
2006-10-11  4:38   ` Andrew Morton
2006-10-11  5:39     ` Nick Piggin
2006-10-11  6:00       ` Andrew Morton
2006-10-11  9:21         ` Nick Piggin
2006-10-11 16:21         ` Linus Torvalds
2006-10-11 16:57           ` SPAM: " Nick Piggin
2006-10-11 17:11             ` Linus Torvalds
2006-10-11 17:21               ` SPAM: " Nick Piggin
2006-10-11 17:38                 ` Linus Torvalds
2006-10-12  3:33                   ` Nick Piggin
2006-10-12 15:37                     ` Linus Torvalds
2006-10-12 15:40                       ` Nick Piggin
2006-10-11  5:13   ` Andrew Morton
2006-10-11  5:50     ` Nick Piggin
2006-10-11  6:10       ` Andrew Morton
2006-10-11  6:17       ` [patch 1/6] revert "generic_file_buffered_write(): handle zero length iovec segments" Andrew Morton
     [not found]       ` <20061010231150.fb9e30f5.akpm@osdl.org>
2006-10-11  6:17         ` [patch 2/6] revert "generic_file_buffered_write(): deadlock on vectored write" Andrew Morton
     [not found]         ` <20061010231243.bc8b834c.akpm@osdl.org>
2006-10-11  6:17           ` [patch 3/6] generic_file_buffered_write() cleanup Andrew Morton
     [not found]           ` <20061010231339.a79c1fae.akpm@osdl.org>
2006-10-11  6:18             ` [patch 4/6] generic_file_buffered_write(): fix page prefaulting Andrew Morton
     [not found]             ` <20061010231424.db88931f.akpm@osdl.org>
2006-10-11  6:18               ` [patch 5/6] generic_file_buffered_write(): max_len cleanup Andrew Morton
     [not found]               ` <20061010231514.c1da7355.akpm@osdl.org>
2006-10-11  6:18                 ` [patch 6/6] fix pagecache write deadlocks Andrew Morton
2006-10-21  1:53       ` [patch 2/5] mm: fault vs invalidate/truncate race fix Benjamin Herrenschmidt
2006-10-10 14:22 ` [patch 3/5] mm: fault handler to replace nopage and populate Nick Piggin
2006-10-10 14:22 ` [patch 4/5] mm: add vm_insert_pfn helpler Nick Piggin
2006-10-11 10:12   ` Thomas Hellstrom
2006-10-11 11:24     ` Nick Piggin
2006-10-11 21:30       ` Thomas Hellström
2006-10-10 14:22 ` [patch 5/5] mm: merge nopfn with fault handler Nick Piggin
2006-10-10 14:26 ` [rfc] 2.6.19-rc1-git5: consolidation of file backed fault handlers Nick Piggin
2006-10-10 14:33 ` Christoph Hellwig
2006-10-10 15:01   ` Nick Piggin
2006-10-10 16:09     ` Arjan van de Ven
2006-10-11  0:46       ` SPAM: " Nick Piggin
2006-10-10 15:07   ` Arjan van de Ven
  -- strict thread matches above, loose matches on Subject: below --
2006-10-09 16:12 Nick Piggin
2006-10-09 16:12 ` [patch 2/5] mm: fault vs invalidate/truncate race fix Nick Piggin
2006-10-09 21:10   ` Mark Fasheh
2006-10-10  1:10     ` Nick Piggin
2006-10-11 18:34       ` Mark Fasheh
2006-10-12  3:28         ` Nick Piggin
