From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: Matthew Wilcox <mawilcox@microsoft.com>, Jan Kara <jack@suse.cz>,
	Jeff Layton <jlayton@redhat.com>, Lukas Czerner <lczerner@redhat.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>, Christoph Hellwig <hch@lst.de>,
	Goldwyn Rodrigues <rgoldwyn@suse.com>, Nicholas Piggin <npiggin@gmail.com>,
	Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>, linux-nilfs@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>, Chao Yu <yuchao0@huawei.com>,
	linux-f2fs-devel@lists.sourceforge.net, Oleg Drokin <oleg.drokin@intel.com>,
	Andreas Dilger <andreas.dilger@intel.com>, James Simmons <jsimmons@infradead.org>,
	Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH v11 38/63] shmem: Convert shmem_add_to_page_cache to XArray
Date: Sat, 14 Apr 2018 07:12:51 -0700
Message-ID: <20180414141316.7167-39-willy@infradead.org>
In-Reply-To: <20180414141316.7167-1-willy@infradead.org>

From: Matthew Wilcox <mawilcox@microsoft.com>

This removes the last caller of radix_tree_maybe_preload_order().
Simpler code, unless we run out of memory for new xa_nodes partway
through inserting entries into the xarray.  Hopefully we can support
multi-index entries in the page cache soon and all the awful code
goes away.
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
---
 mm/shmem.c | 87 ++++++++++++++++++++++++------------------------------
 1 file changed, 39 insertions(+), 48 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 784a49aad902..8d24d59b0e67 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -567,9 +567,10 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected)
+				   pgoff_t index, void *expected, gfp_t gfp)
 {
-	int error, nr = hpage_nr_pages(page);
+	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned long i, nr = 1UL << compound_order(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -578,49 +579,47 @@ static int shmem_add_to_page_cache(struct page *page,
 	VM_BUG_ON(expected && PageTransHuge(page));
 
 	page_ref_add(page, nr);
-	page->mapping = mapping;
 	page->index = index;
+	page->mapping = mapping;
 
-	xa_lock_irq(&mapping->i_pages);
-	if (PageTransHuge(page)) {
-		void __rcu **results;
-		pgoff_t idx;
-		int i;
-
-		error = 0;
-		if (radix_tree_gang_lookup_slot(&mapping->i_pages,
-					&results, &idx, index, 1) &&
-		    idx < index + HPAGE_PMD_NR) {
-			error = -EEXIST;
+	do {
+		xas_lock_irq(&xas);
+		xas_create_range(&xas, index + nr - 1);
+		if (xas_error(&xas))
+			goto unlock;
+		for (i = 0; i < nr; i++) {
+			void *entry = xas_load(&xas);
+			if (entry != expected)
+				xas_set_err(&xas, -ENOENT);
+			if (xas_error(&xas))
+				goto undo;
+			xas_store(&xas, page + i);
+			xas_next(&xas);
 		}
-
-		if (!error) {
-			for (i = 0; i < HPAGE_PMD_NR; i++) {
-				error = radix_tree_insert(&mapping->i_pages,
-						index + i, page + i);
-				VM_BUG_ON(error);
-			}
+		if (PageTransHuge(page)) {
 			count_vm_event(THP_FILE_ALLOC);
+			__inc_node_page_state(page, NR_SHMEM_THPS);
 		}
-	} else if (!expected) {
-		error = radix_tree_insert(&mapping->i_pages, index, page);
-	} else {
-		error = shmem_xa_replace(mapping, index, expected, page);
-	}
-
-	if (!error) {
 		mapping->nrpages += nr;
-		if (PageTransHuge(page))
-			__inc_node_page_state(page, NR_SHMEM_THPS);
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
-		xa_unlock_irq(&mapping->i_pages);
-	} else {
+		goto unlock;
+undo:
+		while (i-- > 0) {
+			xas_store(&xas, NULL);
+			xas_prev(&xas);
+		}
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp));
+
+	if (xas_error(&xas)) {
 		page->mapping = NULL;
-		xa_unlock_irq(&mapping->i_pages);
 		page_ref_sub(page, nr);
+		return xas_error(&xas);
 	}
-	return error;
+
+	return 0;
 }
 
 /*
@@ -1168,7 +1167,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
 	 */
 	if (!error)
 		error = shmem_add_to_page_cache(*pagep, mapping, index,
-						radswap);
+						radswap, gfp);
 	if (error != -ENOMEM) {
 		/*
 		 * Truncation and eviction use free_swap_and_cache(), which
@@ -1689,7 +1688,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 						false);
 		if (!error) {
 			error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap));
+						swp_to_radix_entry(swap), gfp);
 			/*
 			 * We already confirmed swap under page lock, and make
 			 * no memory allocation here, so usually no possibility
@@ -1795,13 +1794,8 @@ alloc_nohuge:		page = shmem_alloc_and_acct_page(gfp, inode,
 				PageTransHuge(page));
 		if (error)
 			goto unacct;
-		error = radix_tree_maybe_preload_order(gfp & GFP_RECLAIM_MASK,
-				compound_order(page));
-		if (!error) {
-			error = shmem_add_to_page_cache(page, mapping, hindex,
-							NULL);
-			radix_tree_preload_end();
-		}
+		error = shmem_add_to_page_cache(page, mapping, hindex,
+						NULL, gfp & GFP_RECLAIM_MASK);
 		if (error) {
 			mem_cgroup_cancel_charge(page, memcg,
 						 PageTransHuge(page));
@@ -2268,11 +2262,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	ret = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK);
-	if (!ret) {
-		ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL);
-		radix_tree_preload_end();
-	}
+	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+				      gfp & GFP_RECLAIM_MASK);
 	if (ret)
 		goto out_release_uncharge;
-- 
2.17.0