From mboxrd@z Thu Jan  1 00:00:00 1970
From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin, Ryusuke Konishi,
	linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
	linux-f2fs-devel@lists.sourceforge.net, Oleg Drokin, Andreas Dilger,
	James Simmons, Mike Kravetz
Subject: [PATCH v10 01/62] page cache: Use xa_lock
Date: Thu, 29 Mar 2018 20:41:44 -0700
Message-Id: <20180330034245.10462-2-willy@infradead.org>
In-Reply-To: <20180330034245.10462-1-willy@infradead.org>
References: <20180330034245.10462-1-willy@infradead.org>
Sender: owner-linux-mm@kvack.org

From: Matthew Wilcox

Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root.  Rename the address_space ->page_tree to ->i_pages,
since we don't really care that it's a tree.

Signed-off-by: Matthew Wilcox
Acked-by: Jeff Layton
Reviewed-by: Josef Bacik
---
 arch/nds32/include/asm/cacheflush.h | 4 ++--
 fs/dax.c                            | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/nds32/include/asm/cacheflush.h b/arch/nds32/include/asm/cacheflush.h
index 7b9b20a381cb..1240f148ec0f 100644
--- a/arch/nds32/include/asm/cacheflush.h
+++ b/arch/nds32/include/asm/cacheflush.h
@@ -34,8 +34,8 @@ void flush_anon_page(struct vm_area_struct *vma,
 void flush_kernel_dcache_page(struct page *page);
 void flush_icache_range(unsigned long start, unsigned long end);
 void flush_icache_page(struct vm_area_struct *vma, struct page *page);
-#define flush_dcache_mmap_lock(mapping) spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&(mapping)->i_pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&(mapping)->i_pages)
 
 #else
 #include <asm-generic/cacheflush.h>
diff --git a/fs/dax.c b/fs/dax.c
index eeedecc8367b..9dc7337ef571 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -584,7 +584,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 	void *entry;
-	struct radix_tree_root *page_tree = &mapping->page_tree;
+	struct radix_tree_root *pages = &mapping->i_pages;
 
-	spin_lock_irq(&mapping->tree_lock);
+	xa_lock_irq(pages);
 	entry = get_unlocked_mapping_entry(mapping, index, NULL);
 	if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
 		goto out;
@@ -598,7 +598,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 	ret = 1;
 out:
 	put_unlocked_mapping_entry(mapping, index, entry);
-	spin_unlock_irq(&mapping->tree_lock);
+	xa_unlock_irq(pages);
 	return ret;
 }
 /*
@@ -685,7 +685,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 		unmap_mapping_pages(mapping, vmf->pgoff, 1, false);
 	}
 
-	spin_lock_irq(&mapping->tree_lock);
+	xa_lock_irq(pages);
 	new_entry = dax_radix_locked_entry(pfn, flags);
 	if (dax_entry_size(entry) != dax_entry_size(new_entry)) {
 		dax_disassociate_entry(entry, mapping, false);
-- 
2.16.2
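
A note for maintainers of out-of-tree code that walks the page cache
under its own locking: the conversion this patch asks for is mechanical.
Below is a minimal sketch of the idiom before and after the series; the
function name is illustrative only and is not something this patch adds.

#include <linux/fs.h>	/* struct address_space; pulls in the xa_lock wrappers */

/* Before this series, callers took the spinlock embedded in
 * struct address_space directly (compiles only against trees
 * that predate this series):
 *
 *	spin_lock_irq(&mapping->tree_lock);
 *	... walk or update mapping->page_tree ...
 *	spin_unlock_irq(&mapping->tree_lock);
 */

/* After this series, the spinlock lives in the radix_tree_root itself
 * and is taken through the xa_lock wrappers; the field is now ->i_pages.
 */
static void fs_walk_pagecache(struct address_space *mapping)
{
	xa_lock_irq(&mapping->i_pages);
	/* ... walk or update mapping->i_pages under the lock ... */
	xa_unlock_irq(&mapping->i_pages);
}

The wrappers come in the usual variants (xa_lock, xa_lock_irq,
xa_lock_irqsave), so existing interrupt-safety expectations carry over
unchanged.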