From: "Huang, Ying"
Subject: [PATCH -mm -v3 3/6] mm, swap: Fix swap readahead marking
Date: Tue, 25 Jul 2017 09:51:48 +0800
Message-Id: <20170725015151.19502-4-ying.huang@intel.com>
In-Reply-To: <20170725015151.19502-1-ying.huang@intel.com>
References: <20170725015151.19502-1-ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Johannes Weiner, Minchan Kim, Rik van Riel, Shaohua Li, Hugh Dickins, Fengguang Wu, Tim Chen, Dave Hansen

From: Huang Ying

In the original implementation, it is possible for pages that are
already in the swap cache (and were not newly read ahead) to be marked
as readahead pages.  This makes the swap readahead statistics wrong and
in turn influences the swap readahead algorithm.  Fix this by marking a
page as a readahead page only if it is newly allocated and read from
disk.  When testing with linpack, this fix increased the swap readahead
hit rate from ~66% to ~86%.

Signed-off-by: "Huang, Ying"
Cc: Johannes Weiner
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Fengguang Wu
Cc: Tim Chen
Cc: Dave Hansen
---
 mm/swap_state.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8be7153967ed..d4d33c43ed36 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -508,7 +508,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	unsigned long start_offset, end_offset;
 	unsigned long mask;
 	struct blk_plug plug;
-	bool do_poll = true;
+	bool do_poll = true, page_allocated;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
@@ -524,14 +524,18 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
-						gfp_mask, vma, addr, false);
+		page = __read_swap_cache_async(
+			swp_entry(swp_type(entry), offset),
+			gfp_mask, vma, addr, &page_allocated);
 		if (!page)
 			continue;
-		if (offset != entry_offset &&
-		    likely(!PageTransCompound(page))) {
-			SetPageReadahead(page);
-			percpu_counter_inc(&swapin_readahead_total);
+		if (page_allocated) {
+			swap_readpage(page, false);
+			if (offset != entry_offset &&
+			    likely(!PageTransCompound(page))) {
+				SetPageReadahead(page);
+				percpu_counter_inc(&swapin_readahead_total);
+			}
 		}
 		put_page(page);
 	}
-- 
2.13.2
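
[Editor's note] For readers following the accounting change without the kernel tree at hand, below is a toy userspace sketch of the idea, not kernel code: the cache model, the window size, and the counter names are hypothetical. It shows why the old path, which marked every page it found in the readahead window, over-counts pages that were already in the swap cache, while the fixed path counts only pages it actually had to allocate and read.

/*
 * Toy userspace model of the accounting change (hypothetical names,
 * not kernel code).  A readahead window of 8 offsets is scanned and
 * half of them are pretended to be already in the swap cache.  The
 * old logic counted every page in the window (except the faulting
 * one) as readahead; the fixed logic counts only pages that were
 * newly allocated and read.
 */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW 8

/* Pretend the even offsets are already present in the swap cache. */
static bool in_swap_cache(unsigned long offset)
{
	return (offset % 2) == 0;
}

int main(void)
{
	unsigned long entry_offset = 3;	/* the faulting page */
	unsigned long offset;
	int old_count = 0, new_count = 0;

	for (offset = 0; offset < WINDOW; offset++) {
		/* Newly allocated only if it was not already cached. */
		bool page_allocated = !in_swap_cache(offset);

		/* Old logic: mark everything except the faulting page. */
		if (offset != entry_offset)
			old_count++;

		/* Fixed logic: mark only newly allocated pages. */
		if (page_allocated && offset != entry_offset)
			new_count++;
	}

	printf("old readahead count: %d\n", old_count);	/* prints 7 */
	printf("new readahead count: %d\n", new_count);	/* prints 3 */
	return 0;
}

With the old accounting, cache hits are reported as readahead pages, so the measured hit rate is diluted; counting only newly read pages is what lets the reported hit rate rise from ~66% to ~86% in the linpack test above.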