From: "Kirill A. Shutemov"
To: Andrea Arcangeli, Andrew Morton, Al Viro, Hugh Dickins
Cc: Wu Fengguang, Jan Kara, Mel Gorman, linux-mm@kvack.org, Andi Kleen, Matthew Wilcox, "Kirill A. Shutemov", Hillf Danton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCHv2, RFC 09/30] thp, mm: rewrite delete_from_page_cache() to support huge pages
Date: Thu, 14 Mar 2013 19:50:14 +0200
Message-Id: <1363283435-7666-10-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1363283435-7666-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1363283435-7666-1-git-send-email-kirill.shutemov@linux.intel.com>

From: "Kirill A. Shutemov"

As with add_to_page_cache_locked(), we now handle HPAGE_CACHE_NR pages at a
time: a transparent huge page occupies HPAGE_CACHE_NR consecutive radix-tree
slots, so all of them are removed and the page cache statistics are adjusted
by HPAGE_CACHE_NR rather than decremented one page at a time.

Signed-off-by: Kirill A. Shutemov
---
 mm/filemap.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6bac9e2..0ff3403 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -115,6 +115,7 @@ void __delete_from_page_cache(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
+	int nr = 1;
 
 	trace_mm_filemap_delete_from_page_cache(page);
 	/*
 	 * if we're uptodate, flush out into the cleancache, otherwise
@@ -127,13 +128,23 @@ void __delete_from_page_cache(struct page *page)
 	else
 		cleancache_invalidate_page(mapping, page);
 
-	radix_tree_delete(&mapping->page_tree, page->index);
+	if (PageTransHuge(page)) {
+		int i;
+
+		for (i = 0; i < HPAGE_CACHE_NR; i++)
+			radix_tree_delete(&mapping->page_tree, page->index + i);
+		nr = HPAGE_CACHE_NR;
+	} else {
+		radix_tree_delete(&mapping->page_tree, page->index);
+	}
+
 	page->mapping = NULL;
 	/* Leave page->index set: truncation lookup relies upon it */
-	mapping->nrpages--;
-	__dec_zone_page_state(page, NR_FILE_PAGES);
+
+	mapping->nrpages -= nr;
+	__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page))
-		__dec_zone_page_state(page, NR_SHMEM);
+		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
 	BUG_ON(page_mapped(page));
 
 	/*
@@ -144,8 +155,8 @@ void __delete_from_page_cache(struct page *page)
 	 * having removed the page entirely.
 	 */
 	if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
-		dec_zone_page_state(page, NR_FILE_DIRTY);
-		dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
+		mod_zone_page_state(page_zone(page), NR_FILE_DIRTY, -nr);
+		add_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE, -nr);
 	}
 }
 
@@ -161,6 +172,7 @@ void delete_from_page_cache(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
 	void (*freepage)(struct page *);
+	int i;
 
 	BUG_ON(!PageLocked(page));
 
@@ -172,6 +184,9 @@ void delete_from_page_cache(struct page *page)
 	if (freepage)
 		freepage(page);
 
+	if (PageTransHuge(page))
+		for (i = 1; i < HPAGE_CACHE_NR; i++)
+			page_cache_release(page + i);
 	page_cache_release(page);
 }
 EXPORT_SYMBOL(delete_from_page_cache);
-- 
1.7.10.4
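
A note on the add_to_page_cache_locked() comparison in the commit message: the
insertion side is not shown in this mail, so the sketch below is only a rough
illustration of what a batched counterpart could look like. It is not the code
from this patch series; the helper name add_huge_page_to_cache_sketch(), the
omission of radix-tree preloading, memcg charging and error unwinding, and the
exact locking shown are assumptions, kept consistent with the per-subpage
page_cache_release() loop in the delete path above.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>

/*
 * Illustrative sketch only (not from this series): take one reference per
 * subpage and insert all HPAGE_CACHE_NR subpages of a transparent huge page
 * at consecutive radix-tree indices, mirroring the batched removal in
 * __delete_from_page_cache() above. Radix-tree preloading, error unwinding
 * and memcg accounting are deliberately omitted.
 */
static int add_huge_page_to_cache_sketch(struct address_space *mapping,
					 struct page *page, pgoff_t offset)
{
	int i, nr = PageTransHuge(page) ? HPAGE_CACHE_NR : 1;
	int error = 0;

	spin_lock_irq(&mapping->tree_lock);
	page->mapping = mapping;
	page->index = offset;
	for (i = 0; i < nr; i++) {
		/*
		 * One reference per subpage, matching the release loop
		 * in delete_from_page_cache().
		 */
		page_cache_get(page + i);
		error = radix_tree_insert(&mapping->page_tree,
					  offset + i, page + i);
		if (error)
			break;	/* real code would unwind here */
	}
	if (!error) {
		mapping->nrpages += nr;
		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
	}
	spin_unlock_irq(&mapping->tree_lock);
	return error;
}

The property shared with the delete path is that only the head page's index is
relied upon: subpage i always lives at offset + i, which is what lets
__delete_from_page_cache() walk page->index + i without touching tail pages.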