linux-kernel.vger.kernel.org archive mirror
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@redhat.com>, Minchan Kim <minchan@kernel.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Bob Liu <bob.liu@oracle.com>,
	Christoph Hellwig <hch@infradead.org>,
	Dave Chinner <david@fromorbit.com>,
	Greg Thelen <gthelen@google.com>, Hugh Dickins <hughd@google.com>,
	Jan Kara <jack@suse.cz>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Luigi Semenzato <semenzato@google.com>,
	Mel Gorman <mgorman@suse.de>, Metin Doslu <metin@citusdata.com>,
	Michel Lespinasse <walken@google.com>,
	Ozgun Erdogan <ozgun@citusdata.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Roman Gushchin <klamm@yandex-team.ru>,
	Ryan Mallon <rmallon@gmail.com>, Tejun Heo <tj@kernel.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH 3.14 102/122] mm + fs: prepare for non-page entries in page cache radix trees
Date: Wed, 19 Nov 2014 12:52:32 -0800	[thread overview]
Message-ID: <20141119205212.209198738@linuxfoundation.org> (raw)
In-Reply-To: <20141119205208.812884198@linuxfoundation.org>

3.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Johannes Weiner <hannes@cmpxchg.org>

commit 0cd6144aadd2afd19d1aca880153530c52957604 upstream.

shmem mappings already contain exceptional entries where swap slot
information is remembered.

To be able to store eviction information for the regular page cache,
prepare every site that deals directly with the page cache radix trees to
handle entries other than pages.
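
For reference, the pattern that each prepared lookup site ends up with looks
roughly like the sketch below.  This is illustrative C rather than a hunk
from the patch; "mapping" and "index" stand in for whatever state the
caller already has.

	struct page *page;

	rcu_read_lock();
	page = radix_tree_lookup(&mapping->page_tree, index);
	rcu_read_unlock();

	/*
	 * A non-NULL result may now be an exceptional (shadow) entry
	 * rather than a struct page, so only treat it as a cached page
	 * if it really is one.
	 */
	if (page && !radix_tree_exceptional_entry(page)) {
		/* page is present in the page cache */
	}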

The common lookup functions will filter out non-page entries and return
NULL for page cache holes, just as before.  In addition, a raw version of
the API is provided that returns non-page entries as well, and shmem is
switched over to use it.
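
To make the distinction concrete, here is a minimal sketch of how the
filtered and raw lookups behave after this change (again illustrative, not
part of the patch; the shadow-entry branch is where shmem finds its swap
slots):

	struct page *page;

	/* Filtered lookup: holes and shadow entries both come back as NULL. */
	page = find_get_page(mapping, index);
	if (page) {
		/* a real page, returned with an elevated refcount */
		page_cache_release(page);
	}

	/* Raw lookup: exceptional (shadow) entries are returned as well. */
	page = find_get_entry(mapping, index);
	if (radix_tree_exceptional_entry(page)) {
		/* e.g. a shmem swap slot; no page reference was taken */
	} else if (page) {
		/* a real page, returned with an elevated refcount */
		page_cache_release(page);
	}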

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Metin Doslu <metin@citusdata.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ozgun Erdogan <ozgun@citusdata.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Roman Gushchin <klamm@yandex-team.ru>
Cc: Ryan Mallon <rmallon@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 fs/btrfs/compression.c   |    2 
 include/linux/mm.h       |    8 +
 include/linux/pagemap.h  |   15 ++-
 include/linux/pagevec.h  |    5 +
 include/linux/shmem_fs.h |    1 
 mm/filemap.c             |  202 +++++++++++++++++++++++++++++++++++++++++------
 mm/mincore.c             |   20 +++-
 mm/readahead.c           |    2 
 mm/shmem.c               |   97 ++++------------------
 mm/swap.c                |   51 +++++++++++
 mm/truncate.c            |   74 ++++++++++++++---
 11 files changed, 348 insertions(+), 129 deletions(-)

--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -472,7 +472,7 @@ static noinline int add_ra_bio_pages(str
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->page_tree, pg_index);
 		rcu_read_unlock();
-		if (page) {
+		if (page && !radix_tree_exceptional_entry(page)) {
 			misses++;
 			if (misses > 4)
 				break;
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1041,6 +1041,14 @@ extern void show_free_areas(unsigned int
 extern bool skip_free_areas_node(unsigned int flags, int nid);
 
 int shmem_zero_setup(struct vm_area_struct *);
+#ifdef CONFIG_SHMEM
+bool shmem_mapping(struct address_space *mapping);
+#else
+static inline bool shmem_mapping(struct address_space *mapping)
+{
+	return false;
+}
+#endif
 
 extern int can_do_mlock(void);
 extern int user_shm_lock(size_t, struct user_struct *);
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -248,12 +248,15 @@ pgoff_t page_cache_next_hole(struct addr
 pgoff_t page_cache_prev_hole(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan);
 
-extern struct page * find_get_page(struct address_space *mapping,
-				pgoff_t index);
-extern struct page * find_lock_page(struct address_space *mapping,
-				pgoff_t index);
-extern struct page * find_or_create_page(struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
+struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
+struct page *find_get_page(struct address_space *mapping, pgoff_t offset);
+struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset);
+struct page *find_lock_page(struct address_space *mapping, pgoff_t offset);
+struct page *find_or_create_page(struct address_space *mapping, pgoff_t index,
+				 gfp_t gfp_mask);
+unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+			  unsigned int nr_entries, struct page **entries,
+			  pgoff_t *indices);
 unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
 			unsigned int nr_pages, struct page **pages);
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -22,6 +22,11 @@ struct pagevec {
 
 void __pagevec_release(struct pagevec *pvec);
 void __pagevec_lru_add(struct pagevec *pvec);
+unsigned pagevec_lookup_entries(struct pagevec *pvec,
+				struct address_space *mapping,
+				pgoff_t start, unsigned nr_entries,
+				pgoff_t *indices);
+void pagevec_remove_exceptionals(struct pagevec *pvec);
 unsigned pagevec_lookup(struct pagevec *pvec, struct address_space *mapping,
 		pgoff_t start, unsigned nr_pages);
 unsigned pagevec_lookup_tag(struct pagevec *pvec,
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -51,6 +51,7 @@ extern struct file *shmem_kernel_file_se
 					    unsigned long flags);
 extern int shmem_zero_setup(struct vm_area_struct *);
 extern int shmem_lock(struct file *file, int lock, struct user_struct *user);
+extern bool shmem_mapping(struct address_space *mapping);
 extern void shmem_unlock_mapping(struct address_space *mapping);
 extern struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 					pgoff_t index, gfp_t gfp_mask);
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -448,6 +448,29 @@ int replace_page_cache_page(struct page
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
+static int page_cache_tree_insert(struct address_space *mapping,
+				  struct page *page)
+{
+	void **slot;
+	int error;
+
+	slot = radix_tree_lookup_slot(&mapping->page_tree, page->index);
+	if (slot) {
+		void *p;
+
+		p = radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+		if (!radix_tree_exceptional_entry(p))
+			return -EEXIST;
+		radix_tree_replace_slot(slot, page);
+		mapping->nrpages++;
+		return 0;
+	}
+	error = radix_tree_insert(&mapping->page_tree, page->index, page);
+	if (!error)
+		mapping->nrpages++;
+	return error;
+}
+
 /**
  * add_to_page_cache_locked - add a locked page to the pagecache
  * @page:	page to add
@@ -482,11 +505,10 @@ int add_to_page_cache_locked(struct page
 	page->index = offset;
 
 	spin_lock_irq(&mapping->tree_lock);
-	error = radix_tree_insert(&mapping->page_tree, offset, page);
+	error = page_cache_tree_insert(mapping, page);
 	radix_tree_preload_end();
 	if (unlikely(error))
 		goto err_insert;
-	mapping->nrpages++;
 	__inc_zone_page_state(page, NR_FILE_PAGES);
 	spin_unlock_irq(&mapping->tree_lock);
 	trace_mm_filemap_add_to_page_cache(page);
@@ -714,7 +736,10 @@ pgoff_t page_cache_next_hole(struct addr
 	unsigned long i;
 
 	for (i = 0; i < max_scan; i++) {
-		if (!radix_tree_lookup(&mapping->page_tree, index))
+		struct page *page;
+
+		page = radix_tree_lookup(&mapping->page_tree, index);
+		if (!page || radix_tree_exceptional_entry(page))
 			break;
 		index++;
 		if (index == 0)
@@ -752,7 +777,10 @@ pgoff_t page_cache_prev_hole(struct addr
 	unsigned long i;
 
 	for (i = 0; i < max_scan; i++) {
-		if (!radix_tree_lookup(&mapping->page_tree, index))
+		struct page *page;
+
+		page = radix_tree_lookup(&mapping->page_tree, index);
+		if (!page || radix_tree_exceptional_entry(page))
 			break;
 		index--;
 		if (index == ULONG_MAX)
@@ -764,14 +792,19 @@ pgoff_t page_cache_prev_hole(struct addr
 EXPORT_SYMBOL(page_cache_prev_hole);
 
 /**
- * find_get_page - find and get a page reference
+ * find_get_entry - find and get a page cache entry
  * @mapping: the address_space to search
- * @offset: the page index
+ * @offset: the page cache index
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned with an increased refcount.
+ *
+ * If the slot holds a shadow entry of a previously evicted page, it
+ * is returned.
  *
- * Is there a pagecache struct page at the given (mapping, offset) tuple?
- * If yes, increment its refcount and return it; if no, return NULL.
+ * Otherwise, %NULL is returned.
  */
-struct page *find_get_page(struct address_space *mapping, pgoff_t offset)
+struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 {
 	void **pagep;
 	struct page *page;
@@ -812,24 +845,50 @@ out:
 
 	return page;
 }
-EXPORT_SYMBOL(find_get_page);
+EXPORT_SYMBOL(find_get_entry);
 
 /**
- * find_lock_page - locate, pin and lock a pagecache page
+ * find_get_page - find and get a page reference
  * @mapping: the address_space to search
  * @offset: the page index
  *
- * Locates the desired pagecache page, locks it, increments its reference
- * count and returns its address.
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned with an increased refcount.
  *
- * Returns zero if the page was not present. find_lock_page() may sleep.
+ * Otherwise, %NULL is returned.
  */
-struct page *find_lock_page(struct address_space *mapping, pgoff_t offset)
+struct page *find_get_page(struct address_space *mapping, pgoff_t offset)
+{
+	struct page *page = find_get_entry(mapping, offset);
+
+	if (radix_tree_exceptional_entry(page))
+		page = NULL;
+	return page;
+}
+EXPORT_SYMBOL(find_get_page);
+
+/**
+ * find_lock_entry - locate, pin and lock a page cache entry
+ * @mapping: the address_space to search
+ * @offset: the page cache index
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned locked and with an increased
+ * refcount.
+ *
+ * If the slot holds a shadow entry of a previously evicted page, it
+ * is returned.
+ *
+ * Otherwise, %NULL is returned.
+ *
+ * find_lock_entry() may sleep.
+ */
+struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 {
 	struct page *page;
 
 repeat:
-	page = find_get_page(mapping, offset);
+	page = find_get_entry(mapping, offset);
 	if (page && !radix_tree_exception(page)) {
 		lock_page(page);
 		/* Has the page been truncated? */
@@ -842,6 +901,29 @@ repeat:
 	}
 	return page;
 }
+EXPORT_SYMBOL(find_lock_entry);
+
+/**
+ * find_lock_page - locate, pin and lock a pagecache page
+ * @mapping: the address_space to search
+ * @offset: the page index
+ *
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned locked and with an increased
+ * refcount.
+ *
+ * Otherwise, %NULL is returned.
+ *
+ * find_lock_page() may sleep.
+ */
+struct page *find_lock_page(struct address_space *mapping, pgoff_t offset)
+{
+	struct page *page = find_lock_entry(mapping, offset);
+
+	if (radix_tree_exceptional_entry(page))
+		page = NULL;
+	return page;
+}
 EXPORT_SYMBOL(find_lock_page);
 
 /**
@@ -850,16 +932,18 @@ EXPORT_SYMBOL(find_lock_page);
  * @index: the page's index into the mapping
  * @gfp_mask: page allocation mode
  *
- * Locates a page in the pagecache.  If the page is not present, a new page
- * is allocated using @gfp_mask and is added to the pagecache and to the VM's
- * LRU list.  The returned page is locked and has its reference count
- * incremented.
+ * Looks up the page cache slot at @mapping & @offset.  If there is a
+ * page cache page, it is returned locked and with an increased
+ * refcount.
+ *
+ * If the page is not present, a new page is allocated using @gfp_mask
+ * and added to the page cache and the VM's LRU list.  The page is
+ * returned locked and with an increased refcount.
  *
- * find_or_create_page() may sleep, even if @gfp_flags specifies an atomic
- * allocation!
+ * On memory exhaustion, %NULL is returned.
  *
- * find_or_create_page() returns the desired page's address, or zero on
- * memory exhaustion.
+ * find_or_create_page() may sleep, even if @gfp_flags specifies an
+ * atomic allocation!
  */
 struct page *find_or_create_page(struct address_space *mapping,
 		pgoff_t index, gfp_t gfp_mask)
@@ -892,6 +976,76 @@ repeat:
 EXPORT_SYMBOL(find_or_create_page);
 
 /**
+ * find_get_entries - gang pagecache lookup
+ * @mapping:	The address_space to search
+ * @start:	The starting page cache index
+ * @nr_entries:	The maximum number of entries
+ * @entries:	Where the resulting entries are placed
+ * @indices:	The cache indices corresponding to the entries in @entries
+ *
+ * find_get_entries() will search for and return a group of up to
+ * @nr_entries entries in the mapping.  The entries are placed at
+ * @entries.  find_get_entries() takes a reference against any actual
+ * pages it returns.
+ *
+ * The search returns a group of mapping-contiguous page cache entries
+ * with ascending indexes.  There may be holes in the indices due to
+ * not-present pages.
+ *
+ * Any shadow entries of evicted pages are included in the returned
+ * array.
+ *
+ * find_get_entries() returns the number of pages and shadow entries
+ * which were found.
+ */
+unsigned find_get_entries(struct address_space *mapping,
+			  pgoff_t start, unsigned int nr_entries,
+			  struct page **entries, pgoff_t *indices)
+{
+	void **slot;
+	unsigned int ret = 0;
+	struct radix_tree_iter iter;
+
+	if (!nr_entries)
+		return 0;
+
+	rcu_read_lock();
+restart:
+	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+		struct page *page;
+repeat:
+		page = radix_tree_deref_slot(slot);
+		if (unlikely(!page))
+			continue;
+		if (radix_tree_exception(page)) {
+			if (radix_tree_deref_retry(page))
+				goto restart;
+			/*
+			 * Otherwise, we must be storing a swap entry
+			 * here as an exceptional entry: so return it
+			 * without attempting to raise page count.
+			 */
+			goto export;
+		}
+		if (!page_cache_get_speculative(page))
+			goto repeat;
+
+		/* Has the page moved? */
+		if (unlikely(page != *slot)) {
+			page_cache_release(page);
+			goto repeat;
+		}
+export:
+		indices[ret] = iter.index;
+		entries[ret] = page;
+		if (++ret == nr_entries)
+			break;
+	}
+	rcu_read_unlock();
+	return ret;
+}
+
+/**
  * find_get_pages - gang pagecache lookup
  * @mapping:	The address_space to search
  * @start:	The starting page index
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -70,13 +70,21 @@ static unsigned char mincore_page(struct
 	 * any other file mapping (ie. marked !present and faulted in with
 	 * tmpfs's .fault). So swapped out tmpfs mappings are tested here.
 	 */
-	page = find_get_page(mapping, pgoff);
 #ifdef CONFIG_SWAP
-	/* shmem/tmpfs may return swap: account for swapcache page too. */
-	if (radix_tree_exceptional_entry(page)) {
-		swp_entry_t swap = radix_to_swp_entry(page);
-		page = find_get_page(swap_address_space(swap), swap.val);
-	}
+	if (shmem_mapping(mapping)) {
+		page = find_get_entry(mapping, pgoff);
+		/*
+		 * shmem/tmpfs may return swap: account for swapcache
+		 * page too.
+		 */
+		if (radix_tree_exceptional_entry(page)) {
+			swp_entry_t swp = radix_to_swp_entry(page);
+			page = find_get_page(swap_address_space(swp), swp.val);
+		}
+	} else
+		page = find_get_page(mapping, pgoff);
+#else
+	page = find_get_page(mapping, pgoff);
 #endif
 	if (page) {
 		present = PageUptodate(page);
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -179,7 +179,7 @@ __do_page_cache_readahead(struct address
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->page_tree, page_offset);
 		rcu_read_unlock();
-		if (page)
+		if (page && !radix_tree_exceptional_entry(page))
 			continue;
 
 		page = page_cache_alloc_readahead(mapping);
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -330,56 +330,6 @@ static void shmem_delete_from_page_cache
 }
 
 /*
- * Like find_get_pages, but collecting swap entries as well as pages.
- */
-static unsigned shmem_find_get_pages_and_swap(struct address_space *mapping,
-					pgoff_t start, unsigned int nr_pages,
-					struct page **pages, pgoff_t *indices)
-{
-	void **slot;
-	unsigned int ret = 0;
-	struct radix_tree_iter iter;
-
-	if (!nr_pages)
-		return 0;
-
-	rcu_read_lock();
-restart:
-	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
-		struct page *page;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
-			continue;
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page))
-				goto restart;
-			/*
-			 * Otherwise, we must be storing a swap entry
-			 * here as an exceptional entry: so return it
-			 * without attempting to raise page count.
-			 */
-			goto export;
-		}
-		if (!page_cache_get_speculative(page))
-			goto repeat;
-
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			page_cache_release(page);
-			goto repeat;
-		}
-export:
-		indices[ret] = iter.index;
-		pages[ret] = page;
-		if (++ret == nr_pages)
-			break;
-	}
-	rcu_read_unlock();
-	return ret;
-}
-
-/*
  * Remove swap entry from radix tree, free the swap and its page cache.
  */
 static int shmem_free_swap(struct address_space *mapping,
@@ -397,21 +347,6 @@ static int shmem_free_swap(struct addres
 }
 
 /*
- * Pagevec may contain swap entries, so shuffle up pages before releasing.
- */
-static void shmem_deswap_pagevec(struct pagevec *pvec)
-{
-	int i, j;
-
-	for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
-		if (!radix_tree_exceptional_entry(page))
-			pvec->pages[j++] = page;
-	}
-	pvec->nr = j;
-}
-
-/*
  * SysV IPC SHM_UNLOCK restore Unevictable pages to their evictable lists.
  */
 void shmem_unlock_mapping(struct address_space *mapping)
@@ -429,12 +364,12 @@ void shmem_unlock_mapping(struct address
 		 * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
 		 * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
 		 */
-		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
-					PAGEVEC_SIZE, pvec.pages, indices);
+		pvec.nr = find_get_entries(mapping, index,
+					   PAGEVEC_SIZE, pvec.pages, indices);
 		if (!pvec.nr)
 			break;
 		index = indices[pvec.nr - 1] + 1;
-		shmem_deswap_pagevec(&pvec);
+		pagevec_remove_exceptionals(&pvec);
 		check_move_unevictable_pages(pvec.pages, pvec.nr);
 		pagevec_release(&pvec);
 		cond_resched();
@@ -466,9 +401,9 @@ static void shmem_undo_range(struct inod
 	pagevec_init(&pvec, 0);
 	index = start;
 	while (index < end) {
-		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
-				min(end - index, (pgoff_t)PAGEVEC_SIZE),
-							pvec.pages, indices);
+		pvec.nr = find_get_entries(mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE),
+			pvec.pages, indices);
 		if (!pvec.nr)
 			break;
 		mem_cgroup_uncharge_start();
@@ -497,7 +432,7 @@ static void shmem_undo_range(struct inod
 			}
 			unlock_page(page);
 		}
-		shmem_deswap_pagevec(&pvec);
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		cond_resched();
@@ -535,9 +470,10 @@ static void shmem_undo_range(struct inod
 	index = start;
 	while (index < end) {
 		cond_resched();
-		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
+
+		pvec.nr = find_get_entries(mapping, index,
 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
-							pvec.pages, indices);
+				pvec.pages, indices);
 		if (!pvec.nr) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
@@ -580,7 +516,7 @@ static void shmem_undo_range(struct inod
 			}
 			unlock_page(page);
 		}
-		shmem_deswap_pagevec(&pvec);
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		index++;
@@ -1087,7 +1023,7 @@ static int shmem_getpage_gfp(struct inod
 		return -EFBIG;
 repeat:
 	swap.val = 0;
-	page = find_lock_page(mapping, index);
+	page = find_lock_entry(mapping, index);
 	if (radix_tree_exceptional_entry(page)) {
 		swap = radix_to_swp_entry(page);
 		page = NULL;
@@ -1482,6 +1418,11 @@ static struct inode *shmem_get_inode(str
 	return inode;
 }
 
+bool shmem_mapping(struct address_space *mapping)
+{
+	return mapping->backing_dev_info == &shmem_backing_dev_info;
+}
+
 #ifdef CONFIG_TMPFS
 static const struct inode_operations shmem_symlink_inode_operations;
 static const struct inode_operations shmem_short_symlink_operations;
@@ -1794,7 +1735,7 @@ static pgoff_t shmem_seek_hole_data(stru
 	pagevec_init(&pvec, 0);
 	pvec.nr = 1;		/* start small: we may be there already */
 	while (!done) {
-		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
+		pvec.nr = find_get_entries(mapping, index,
 					pvec.nr, pvec.pages, indices);
 		if (!pvec.nr) {
 			if (whence == SEEK_DATA)
@@ -1821,7 +1762,7 @@ static pgoff_t shmem_seek_hole_data(stru
 				break;
 			}
 		}
-		shmem_deswap_pagevec(&pvec);
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		pvec.nr = PAGEVEC_SIZE;
 		cond_resched();
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -948,6 +948,57 @@ void __pagevec_lru_add(struct pagevec *p
 EXPORT_SYMBOL(__pagevec_lru_add);
 
 /**
+ * pagevec_lookup_entries - gang pagecache lookup
+ * @pvec:	Where the resulting entries are placed
+ * @mapping:	The address_space to search
+ * @start:	The starting entry index
+ * @nr_entries:	The maximum number of entries
+ * @indices:	The cache indices corresponding to the entries in @pvec
+ *
+ * pagevec_lookup_entries() will search for and return a group of up
+ * to @nr_entries pages and shadow entries in the mapping.  All
+ * entries are placed in @pvec.  pagevec_lookup_entries() takes a
+ * reference against actual pages in @pvec.
+ *
+ * The search returns a group of mapping-contiguous entries with
+ * ascending indexes.  There may be holes in the indices due to
+ * not-present entries.
+ *
+ * pagevec_lookup_entries() returns the number of entries which were
+ * found.
+ */
+unsigned pagevec_lookup_entries(struct pagevec *pvec,
+				struct address_space *mapping,
+				pgoff_t start, unsigned nr_pages,
+				pgoff_t *indices)
+{
+	pvec->nr = find_get_entries(mapping, start, nr_pages,
+				    pvec->pages, indices);
+	return pagevec_count(pvec);
+}
+
+/**
+ * pagevec_remove_exceptionals - pagevec exceptionals pruning
+ * @pvec:	The pagevec to prune
+ *
+ * pagevec_lookup_entries() fills both pages and exceptional radix
+ * tree entries into the pagevec.  This function prunes all
+ * exceptionals from @pvec without leaving holes, so that it can be
+ * passed on to page-only pagevec operations.
+ */
+void pagevec_remove_exceptionals(struct pagevec *pvec)
+{
+	int i, j;
+
+	for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
+		struct page *page = pvec->pages[i];
+		if (!radix_tree_exceptional_entry(page))
+			pvec->pages[j++] = page;
+	}
+	pvec->nr = j;
+}
+
+/**
  * pagevec_lookup - gang pagecache lookup
  * @pvec:	Where the resulting pages are placed
  * @mapping:	The address_space to search
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -23,6 +23,22 @@
 #include <linux/rmap.h>
 #include "internal.h"
 
+static void clear_exceptional_entry(struct address_space *mapping,
+				    pgoff_t index, void *entry)
+{
+	/* Handled by shmem itself */
+	if (shmem_mapping(mapping))
+		return;
+
+	spin_lock_irq(&mapping->tree_lock);
+	/*
+	 * Regular page slots are stabilized by the page lock even
+	 * without the tree itself locked.  These unlocked entries
+	 * need verification under the tree lock.
+	 */
+	radix_tree_delete_item(&mapping->page_tree, index, entry);
+	spin_unlock_irq(&mapping->tree_lock);
+}
 
 /**
  * do_invalidatepage - invalidate part or all of a page
@@ -209,6 +225,7 @@ void truncate_inode_pages_range(struct a
 	unsigned int	partial_start;	/* inclusive */
 	unsigned int	partial_end;	/* exclusive */
 	struct pagevec	pvec;
+	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
 	int		i;
 
@@ -239,17 +256,23 @@ void truncate_inode_pages_range(struct a
 
 	pagevec_init(&pvec, 0);
 	index = start;
-	while (index < end && pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
+	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE),
+			indices)) {
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = page->index;
+			index = indices[i];
 			if (index >= end)
 				break;
 
+			if (radix_tree_exceptional_entry(page)) {
+				clear_exceptional_entry(mapping, index, page);
+				continue;
+			}
+
 			if (!trylock_page(page))
 				continue;
 			WARN_ON(page->index != index);
@@ -260,6 +283,7 @@ void truncate_inode_pages_range(struct a
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
 		}
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		cond_resched();
@@ -308,14 +332,16 @@ void truncate_inode_pages_range(struct a
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
-		if (!pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
+		if (!pagevec_lookup_entries(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE),
+			indices)) {
 			if (index == start)
 				break;
 			index = start;
 			continue;
 		}
-		if (index == start && pvec.pages[0]->index >= end) {
+		if (index == start && indices[0] >= end) {
+			pagevec_remove_exceptionals(&pvec);
 			pagevec_release(&pvec);
 			break;
 		}
@@ -324,16 +350,22 @@ void truncate_inode_pages_range(struct a
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = page->index;
+			index = indices[i];
 			if (index >= end)
 				break;
 
+			if (radix_tree_exceptional_entry(page)) {
+				clear_exceptional_entry(mapping, index, page);
+				continue;
+			}
+
 			lock_page(page);
 			WARN_ON(page->index != index);
 			wait_on_page_writeback(page);
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
 		}
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		index++;
@@ -376,6 +408,7 @@ EXPORT_SYMBOL(truncate_inode_pages);
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
+	pgoff_t indices[PAGEVEC_SIZE];
 	struct pagevec pvec;
 	pgoff_t index = start;
 	unsigned long ret;
@@ -391,17 +424,23 @@ unsigned long invalidate_mapping_pages(s
 	 */
 
 	pagevec_init(&pvec, 0);
-	while (index <= end && pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+	while (index <= end && pagevec_lookup_entries(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1,
+			indices)) {
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = page->index;
+			index = indices[i];
 			if (index > end)
 				break;
 
+			if (radix_tree_exceptional_entry(page)) {
+				clear_exceptional_entry(mapping, index, page);
+				continue;
+			}
+
 			if (!trylock_page(page))
 				continue;
 			WARN_ON(page->index != index);
@@ -415,6 +454,7 @@ unsigned long invalidate_mapping_pages(s
 				deactivate_page(page);
 			count += ret;
 		}
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		cond_resched();
@@ -482,6 +522,7 @@ static int do_launder_page(struct addres
 int invalidate_inode_pages2_range(struct address_space *mapping,
 				  pgoff_t start, pgoff_t end)
 {
+	pgoff_t indices[PAGEVEC_SIZE];
 	struct pagevec pvec;
 	pgoff_t index;
 	int i;
@@ -492,17 +533,23 @@ int invalidate_inode_pages2_range(struct
 	cleancache_invalidate_inode(mapping);
 	pagevec_init(&pvec, 0);
 	index = start;
-	while (index <= end && pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+	while (index <= end && pagevec_lookup_entries(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1,
+			indices)) {
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = page->index;
+			index = indices[i];
 			if (index > end)
 				break;
 
+			if (radix_tree_exceptional_entry(page)) {
+				clear_exceptional_entry(mapping, index, page);
+				continue;
+			}
+
 			lock_page(page);
 			WARN_ON(page->index != index);
 			if (page->mapping != mapping) {
@@ -540,6 +587,7 @@ int invalidate_inode_pages2_range(struct
 				ret = ret2;
 			unlock_page(page);
 		}
+		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		mem_cgroup_uncharge_end();
 		cond_resched();



