From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Jan Kara <jack@suse.cz>,
	William Kucharski <william.kucharski@oracle.com>
Subject: [PATCH 41/48] filemap: Return only folios from find_get_entries()
Date: Wed,  8 Dec 2021 04:22:49 +0000
Message-ID: <20211208042256.1923824-42-willy@infradead.org>
In-Reply-To: <20211208042256.1923824-1-willy@infradead.org>

The callers have all been converted to work on folios, so convert
find_get_entries() to return a batch of folios instead of pages.
A single call can also now return multiple large folios, where the old
page-based version stopped at the first large page it found.
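
For reference, here is a condensed sketch of the new calling convention.
It is not part of the patch; it abbreviates the caller pattern seen in
the shmem and truncate conversions below, with the per-folio work and
error handling elided:

	struct folio_batch fbatch;
	pgoff_t indices[PAGEVEC_SIZE];
	unsigned int i;

	folio_batch_init(&fbatch);
	while (find_get_entries(mapping, index, end, &fbatch, indices)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/* We rely on deletion not changing folio->index */
			index = indices[i];

			/* Shadow/swap entries are still mixed in */
			if (xa_is_value(folio))
				continue;

			/* ... operate on the (possibly large) folio ... */
		}
		folio_batch_remove_exceptionals(&fbatch);
		folio_batch_release(&fbatch);
		index++;
	}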

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
---
 include/linux/pagemap.h |  2 --
 mm/filemap.c            | 43 +++++++++++------------------------------
 mm/internal.h           |  4 ++++
 mm/shmem.c              | 36 +++++++++++++++++++---------------
 mm/truncate.c           | 43 +++++++++++++++++++++++------------------
 5 files changed, 59 insertions(+), 69 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index eb6e58e106c8..d2259a1da51c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -592,8 +592,6 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	return head + (index & (thp_nr_pages(head) - 1));
 }
 
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
-		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
 unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			pgoff_t end, unsigned int nr_pages,
 			struct page **pages);
diff --git a/mm/filemap.c b/mm/filemap.c
index 38726ca96f0e..4f00412d72d3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2015,57 +2015,36 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * @mapping:	The address_space to search
  * @start:	The starting page cache index
  * @end:	The final page index (inclusive).
- * @pvec:	Where the resulting entries are placed.
+ * @fbatch:	Where the resulting entries are placed.
  * @indices:	The cache indices corresponding to the entries in @entries
  *
  * find_get_entries() will search for and return a batch of entries in
- * the mapping.  The entries are placed in @pvec.  find_get_entries()
- * takes a reference on any actual pages it returns.
+ * the mapping.  The entries are placed in @fbatch.  find_get_entries()
+ * takes a reference on any actual folios it returns.
  *
- * The search returns a group of mapping-contiguous page cache entries
- * with ascending indexes.  There may be holes in the indices due to
- * not-present pages.
+ * The entries have ascending indexes.  The indices may not be consecutive
+ * due to not-present entries or large folios.
  *
- * Any shadow entries of evicted pages, or swap entries from
+ * Any shadow entries of evicted folios, or swap entries from
  * shmem/tmpfs, are included in the returned array.
  *
- * If it finds a Transparent Huge Page, head or tail, find_get_entries()
- * stops at that page: the caller is likely to have a better way to handle
- * the compound page as a whole, and then skip its extent, than repeatedly
- * calling find_get_entries() to return all its tails.
- *
- * Return: the number of pages and shadow entries which were found.
+ * Return: The number of entries which were found.
  */
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
-		pgoff_t end, struct pagevec *pvec, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
 	struct folio *folio;
-	unsigned int ret = 0;
-	unsigned nr_entries = PAGEVEC_SIZE;
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
-		struct page *page = &folio->page;
-		/*
-		 * Terminate early on finding a THP, to allow the caller to
-		 * handle it all at once; but continue if this is hugetlbfs.
-		 */
-		if (!xa_is_value(folio) && folio_test_large(folio) &&
-				!folio_test_hugetlb(folio)) {
-			page = folio_file_page(folio, xas.xa_index);
-			nr_entries = ret + 1;
-		}
-
-		indices[ret] = xas.xa_index;
-		pvec->pages[ret] = page;
-		if (++ret == nr_entries)
+		indices[fbatch->nr] = xas.xa_index;
+		if (!folio_batch_add(fbatch, folio))
 			break;
 	}
 	rcu_read_unlock();
 
-	pvec->nr = ret;
-	return ret;
+	return folio_batch_count(fbatch);
 }
 
 /**
diff --git a/mm/internal.h b/mm/internal.h
index d3c7b35934ed..36ad6ffe53bf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -12,6 +12,8 @@
 #include <linux/pagemap.h>
 #include <linux/tracepoint-defs.h>
 
+struct folio_batch;
+
 /*
  * The set of flags that only affect watermark checking and reclaim
  * behaviour. This is used by the MM to obey the caller constraints
@@ -113,6 +115,8 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
+unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index dbef008fb6e5..e909c163fb38 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -920,6 +920,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	unsigned int partial_start = lstart & (PAGE_SIZE - 1);
 	unsigned int partial_end = (lend + 1) & (PAGE_SIZE - 1);
 	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
 	long nr_swaps_freed = 0;
 	pgoff_t index;
@@ -987,11 +988,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	if (start >= end)
 		return;
 
+	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, index, end - 1, &pvec,
+		if (!find_get_entries(mapping, index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
@@ -1000,14 +1002,14 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			index = start;
 			continue;
 		}
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
 
 			index = indices[i];
-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, index, page)) {
+				if (shmem_free_swap(mapping, index, folio)) {
 					/* Swap was replaced by page: retry */
 					index--;
 					break;
@@ -1016,33 +1018,35 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				continue;
 			}
 
-			lock_page(page);
+			folio_lock(folio);
 
-			if (!unfalloc || !PageUptodate(page)) {
-				if (page_mapping(page) != mapping) {
+			if (!unfalloc || !folio_test_uptodate(folio)) {
+				struct page *page = folio_file_page(folio,
+									index);
+				if (folio_mapping(folio) != mapping) {
 					/* Page was replaced by swap: retry */
-					unlock_page(page);
+					folio_unlock(folio);
 					index--;
 					break;
 				}
-				VM_BUG_ON_PAGE(PageWriteback(page), page);
+				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
+						folio);
 				if (shmem_punch_compound(page, start, end))
-					truncate_inode_folio(mapping,
-							     page_folio(page));
+					truncate_inode_folio(mapping, folio);
 				else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 					/* Wipe the page and don't get stuck */
 					clear_highpage(page);
 					flush_dcache_page(page);
-					set_page_dirty(page);
+					folio_mark_dirty(folio);
 					if (index <
 					    round_up(start, HPAGE_PMD_NR))
 						start = index + 1;
 				}
 			}
-			unlock_page(page);
+			folio_unlock(folio);
 		}
-		pagevec_remove_exceptionals(&pvec);
-		pagevec_release(&pvec);
+		folio_batch_remove_exceptionals(&fbatch);
+		folio_batch_release(&fbatch);
 		index++;
 	}
 
diff --git a/mm/truncate.c b/mm/truncate.c
index 5370094641d6..357af144df63 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -108,6 +108,13 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 	pvec->nr = j;
 }
 
+static void truncate_folio_batch_exceptionals(struct address_space *mapping,
+				struct folio_batch *fbatch, pgoff_t *indices)
+{
+	truncate_exceptional_pvec_entries(mapping, (struct pagevec *)fbatch,
+						indices);
+}
+
 /*
  * Invalidate exceptional entry if easily possible. This handles exceptional
  * entries for invalidate_inode_pages().
@@ -297,6 +304,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	unsigned int	partial_start;	/* inclusive */
 	unsigned int	partial_end;	/* exclusive */
 	struct pagevec	pvec;
+	struct folio_batch fbatch;
 	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
 	int		i;
@@ -379,10 +387,11 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	if (start >= end)
 		goto out;
 
+	folio_batch_init(&fbatch);
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
-		if (!find_get_entries(mapping, index, end - 1, &pvec,
+		if (!find_get_entries(mapping, index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
@@ -392,16 +401,14 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			continue;
 		}
 
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
-			struct folio *folio;
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
 
-			if (xa_is_value(page))
+			if (xa_is_value(folio))
 				continue;
-			folio = page_folio(page);
 
 			folio_lock(folio);
 			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
@@ -410,8 +417,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			folio_unlock(folio);
 			index = folio_index(folio) + folio_nr_pages(folio) - 1;
 		}
-		truncate_exceptional_pvec_entries(mapping, &pvec, indices);
-		pagevec_release(&pvec);
+		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
+		folio_batch_release(&fbatch);
 		index++;
 	}
 
@@ -625,7 +632,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				  pgoff_t start, pgoff_t end)
 {
 	pgoff_t indices[PAGEVEC_SIZE];
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index;
 	int i;
 	int ret = 0;
@@ -635,23 +642,21 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	if (mapping_empty(mapping))
 		goto out;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, index, end, &pvec, indices)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
-			struct folio *folio;
+	while (find_get_entries(mapping, index, end, &fbatch, indices)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
 			index = indices[i];
 
-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				if (!invalidate_exceptional_entry2(mapping,
-								   index, page))
+						index, folio))
 					ret = -EBUSY;
 				continue;
 			}
-			folio = page_folio(page);
 
 			if (!did_range_unmap && folio_mapped(folio)) {
 				/*
@@ -684,8 +689,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				ret = ret2;
 			folio_unlock(folio);
 		}
-		pagevec_remove_exceptionals(&pvec);
-		pagevec_release(&pvec);
+		folio_batch_remove_exceptionals(&fbatch);
+		folio_batch_release(&fbatch);
 		cond_resched();
 		index++;
 	}
-- 
2.33.0

