From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
	Jeff Layton <jlayton@redhat.com>,
	Lukas Czerner <lczerner@redhat.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>,
	Christoph Hellwig <hch@lst.de>,
	Goldwyn Rodrigues <rgoldwyn@suse.com>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>,
	linux-nilfs@vger.kernel.org, Jaegeuk Kim <jaegeuk@kernel.org>,
	Chao Yu <yuchao0@huawei.com>,
	linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 21/74] page cache: Convert hole search to XArray
Date: Sat, 16 Jun 2018 18:59:59 -0700	[thread overview]
Message-ID: <20180617020052.4759-22-willy@infradead.org> (raw)
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>

The page cache offers the ability to search for a miss in the previous or
next N locations.  Rather than teach the XArray about the page cache's
definition of a miss, use xas_prev() and xas_next() to search the page
array.  This should be more efficient as it does not have to start the
lookup from the top for each index.
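
For reference, a minimal usage sketch (not part of the patch), modelled on the
ondemand_readahead() call site converted below; the wrapper name and the
fallback policy are illustrative assumptions, not kernel code:

	/*
	 * Illustrative helper: find the first index after @offset with no
	 * page present, scanning at most @max_pages indices.
	 */
	static pgoff_t example_next_gap(struct address_space *mapping,
					pgoff_t offset, unsigned long max_pages)
	{
		pgoff_t start;

		rcu_read_lock();
		start = page_cache_next_gap(mapping, offset + 1, max_pages);
		rcu_read_unlock();

		/*
		 * No gap within range: the return value lies outside
		 * [offset + 1, offset + max_pages], so fall back to @offset.
		 */
		if (!start || start - offset > max_pages)
			return offset;
		return start;
	}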

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 fs/nfs/blocklayout/blocklayout.c |   2 +-
 include/linux/pagemap.h          |   4 +-
 mm/filemap.c                     | 110 ++++++++++++++-----------------
 mm/readahead.c                   |   4 +-
 4 files changed, 55 insertions(+), 65 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 7cb5c38c19e4..961901685007 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -895,7 +895,7 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
 	end = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (end != inode->i_mapping->nrpages) {
 		rcu_read_lock();
-		end = page_cache_next_hole(mapping, idx + 1, ULONG_MAX);
+		end = page_cache_next_gap(mapping, idx + 1, ULONG_MAX);
 		rcu_read_unlock();
 	}
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b1bd2186e6d2..2f5d2d3ebaac 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -241,9 +241,9 @@ static inline gfp_t readahead_gfp_mask(struct address_space *x)
 
 typedef int filler_t(void *, struct page *);
 
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan);
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan);
 
 #define FGP_ACCESSED		0x00000001
diff --git a/mm/filemap.c b/mm/filemap.c
index 4de14e75c4ec..8de36e14e22f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1326,86 +1326,76 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 }
 
 /**
- * page_cache_next_hole - find the next hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the
- * lowest indexed hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'return - index >=
- * max_scan' will be true). In rare cases of index wrap-around, 0 will
- * be returned.
- *
- * page_cache_next_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 5, then subsequently a hole is created at
- * index 10, page_cache_next_hole covering both indexes may return 10
- * if called under rcu_read_lock.
+ * page_cache_next_gap() - Find the next gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [index, min(index + max_scan - 1, ULONG_MAX)] for the
+ * gap with the lowest index.
+ *
+ * This function may be called under the rcu_read_lock.  However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 5, then subsequently a gap is
+ * created at index 10, page_cache_next_gap() covering both indices may
+ * return 10 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'return - index >= max_scan' will be true).
+ * In the rare case of index wrap-around, 0 will be returned.
  */
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
 {
-	unsigned long i;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-	for (i = 0; i < max_scan; i++) {
-		struct page *page;
-
-		page = radix_tree_lookup(&mapping->i_pages, index);
-		if (!page || xa_is_value(page))
+	while (max_scan--) {
+		void *entry = xas_next(&xas);
+		if (!entry || xa_is_value(entry))
 			break;
-		index++;
-		if (index == 0)
+		if (xas.xa_index == 0)
 			break;
 	}
 
-	return index;
+	return xas.xa_index;
 }
-EXPORT_SYMBOL(page_cache_next_hole);
+EXPORT_SYMBOL(page_cache_next_gap);
 
 /**
- * page_cache_prev_hole - find the prev hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search backwards in the range [max(index-max_scan+1, 0), index] for
- * the first hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'index - return >=
- * max_scan' will be true). In rare cases of wrap-around, ULONG_MAX
- * will be returned.
- *
- * page_cache_prev_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 10, then subsequently a hole is created at
- * index 5, page_cache_prev_hole covering both indexes may return 5 if
- * called under rcu_read_lock.
+ * page_cache_prev_gap() - Find the previous gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [max(index - max_scan + 1, 0), index] for the
+ * gap with the highest index.
+ *
+ * This function may be called under the rcu_read_lock.  However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 10, then subsequently a gap is
+ * created at index 5, page_cache_prev_gap() covering both indices may
+ * return 5 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'index - return >= max_scan' will be true).
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
  */
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
 {
-	unsigned long i;
-
-	for (i = 0; i < max_scan; i++) {
-		struct page *page;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-		page = radix_tree_lookup(&mapping->i_pages, index);
-		if (!page || xa_is_value(page))
+	while (max_scan--) {
+		void *entry = xas_prev(&xas);
+		if (!entry || xa_is_value(entry))
 			break;
-		index--;
-		if (index == ULONG_MAX)
+		if (xas.xa_index == ULONG_MAX)
 			break;
 	}
 
-	return index;
+	return xas.xa_index;
 }
-EXPORT_SYMBOL(page_cache_prev_hole);
+EXPORT_SYMBOL(page_cache_prev_gap);
 
 /**
  * find_get_entry - find and get a page cache entry
diff --git a/mm/readahead.c b/mm/readahead.c
index 3757aa549709..59998ca31f2a 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -334,7 +334,7 @@ static pgoff_t count_history_pages(struct address_space *mapping,
 	pgoff_t head;
 
 	rcu_read_lock();
-	head = page_cache_prev_hole(mapping, offset - 1, max);
+	head = page_cache_prev_gap(mapping, offset - 1, max);
 	rcu_read_unlock();
 
 	return offset - 1 - head;
@@ -422,7 +422,7 @@ ondemand_readahead(struct address_space *mapping,
 		pgoff_t start;
 
 		rcu_read_lock();
-		start = page_cache_next_hole(mapping, offset + 1, max_pages);
+		start = page_cache_next_gap(mapping, offset + 1, max_pages);
 		rcu_read_unlock();
 
 		if (!start || start - offset > max_pages)
-- 
2.17.1


Thread overview: 162+ messages
2018-06-17  1:59 [PATCH v14 00/74] Convert page cache to XArray Matthew Wilcox
2018-06-17  1:59 ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 01/74] Update email address Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 02/74] radix tree test suite: Enable ubsan Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 03/74] dax: Fix use of zero page Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 04/74] xarray: Replace exceptional entries Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 05/74] xarray: Change definition of sibling entries Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 06/74] xarray: Add definition of struct xarray Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 07/74] xarray: Define struct xa_node Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 08/74] xarray: Add documentation Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 09/74] xarray: Add XArray load operation Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 10/74] xarray: Add XArray tags Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 11/74] xarray: Add XArray unconditional store operations Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 12/74] xarray: Add XArray conditional " Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 13/74] xarray: Add XArray iterators Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 14/74] xarray: Extract entries from an XArray Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 15/74] xarray: Destroy " Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 16/74] xarray: Step through " Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 17/74] xarray: Add xas_for_each_conflict Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 18/74] xarray: Add xas_create_range Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 19/74] xarray: Add MAINTAINERS entry Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` [PATCH v14 20/74] page cache: Rearrange address_space Matthew Wilcox
2018-06-17  1:59   ` Matthew Wilcox
2018-06-17  1:59 ` Matthew Wilcox [this message]
2018-06-17  1:59   ` [PATCH v14 21/74] page cache: Convert hole search to XArray Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 22/74] page cache: Add and replace pages using the XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 23/74] page cache: Convert page deletion to XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 24/74] page cache: Convert find_get_entry " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 25/74] page cache: Convert find_get_entries " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 26/74] page cache: Convert find_get_pages_range " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 27/74] page cache: Convert find_get_pages_contig " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 28/74] page cache; Convert find_get_pages_range_tag " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 29/74] page cache: Convert find_get_entries_tag " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 30/74] page cache: Convert filemap_map_pages " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 31/74] radix tree test suite: Convert regression1 " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 32/74] page cache: Convert delete_batch " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 33/74] page cache: Remove stray radix comment Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 34/74] page cache: Convert filemap_range_has_page to XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 35/74] mm: Convert page-writeback " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 36/74] mm: Convert workingset " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 37/74] mm: Convert truncate " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 38/74] mm: Convert add_to_swap_cache " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 39/74] mm: Convert delete_from_swap_cache " Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 40/74] mm: Convert __do_page_cache_readahead " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 41/74] mm: Convert page migration " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 42/74] mm: Convert huge_memory " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 43/74] mm: Convert collapse_shmem " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 44/74] mm: Convert khugepaged_scan_shmem " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 45/74] mm: Convert is_page_cache_freeable " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 46/74] pagevec: Use xa_tag_t Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 47/74] shmem: Convert shmem_radix_tree_replace to XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 48/74] shmem: Convert shmem_confirm_swap " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 49/74] shmem: Convert find_swap_entry " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 50/74] shmem: Convert shmem_add_to_page_cache " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 51/74] shmem: Convert shmem_alloc_hugepage " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 52/74] shmem: Convert shmem_free_swap " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 53/74] shmem: Convert shmem_partial_swap_usage " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 54/74] memfd: Convert memfd_wait_for_pins " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 55/74] memfd: Convert memfd_tag_pins " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 56/74] shmem: Comment fixups Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 57/74] btrfs: Convert page cache to XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 58/74] fs: Convert buffer " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 59/74] fs: Convert writeback " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 60/74] nilfs2: Convert " Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 61/74] f2fs: " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 62/74] dax: Rename some functions Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 63/74] dax: Hash on XArray instead of mapping Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 64/74] dax: Convert dax_insert_pfn_mkwrite to XArray Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 65/74] dax: Convert dax_layout_busy_page " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 66/74] dax: Convert __dax_invalidate_entry " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 67/74] dax: Convert dax writeback " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 68/74] dax: Convert dax_lock_page " Matthew Wilcox
2018-06-29 17:30   ` Ross Zwisler
2018-06-29 17:30     ` Ross Zwisler
2018-07-06 11:54     ` [PATCH] fs/dax: remove unused function Anders Roxell
2018-07-06 13:04     ` [PATCH v14 68/74] dax: Convert dax_lock_page to XArray Matthew Wilcox
2018-07-06 13:04       ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 69/74] dax: Convert page fault handlers " Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 70/74] page cache: Finish XArray conversion Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 71/74] radix tree: Remove radix_tree_update_node_t Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 72/74] radix tree: Remove split/join code Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 73/74] radix tree: Remove radix_tree_maybe_preload_order Matthew Wilcox
2018-06-17  2:00 ` [PATCH v14 74/74] radix tree: Remove radix_tree_clear_tags Matthew Wilcox
2018-06-17  2:00   ` Matthew Wilcox
2018-06-19  3:12 ` [PATCH v14 00/74] Convert page cache to XArray Ross Zwisler
2018-06-19  9:22   ` Matthew Wilcox
2018-06-19 16:40     ` Ross Zwisler
2018-06-19 17:16       ` Matthew Wilcox
2018-06-27 11:05         ` Matthew Wilcox
2018-06-27 11:05           ` Matthew Wilcox
2018-06-27 19:44           ` Ross Zwisler
2018-06-28  8:39             ` Matthew Wilcox
2018-06-28 16:30               ` Ross Zwisler
2018-07-25 21:03             ` Matthew Wilcox
2018-07-25 21:12               ` Ross Zwisler
2018-07-27 17:20               ` Ross Zwisler
2018-07-27 17:20                 ` Ross Zwisler
2018-07-30 15:43                 ` Matthew Wilcox
