From: "Huang, Ying"
Subject: [PATCH -mm -V2 07/21] mm, THP, swap: Support PMD swap mapping in split_swap_cluster()
Date: Wed, 9 May 2018 16:38:32 +0800
Message-Id: <20180509083846.14823-8-ying.huang@intel.com>
In-Reply-To: <20180509083846.14823-1-ying.huang@intel.com>
References: <20180509083846.14823-1-ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko,
    Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
    Rik van Riel, Dave Hansen, Naoya Horiguchi, Zi Yan

From: Huang Ying

When a THP in swap cache is split, or when THP allocation fails while
swapping in a huge swap cluster, the huge swap cluster will be split.
In addition to clearing the huge flag of the swap cluster, the PMD
swap mapping count recorded in cluster_count() will be set to 0.  But
the PMD swap mappings themselves will not be touched, because it can
be hard to find them all at that point.  When a PMD swap mapping is
operated on later, it will be found that the huge swap cluster has
already been split, and the PMD swap mapping will be split at that
time.

Unless a THP in swap cache is being split (specified via the "force"
parameter), split_swap_cluster() will return -EEXIST if the
SWAP_HAS_CACHE flag is set in swap_map[offset], because this indicates
that a THP corresponds to the huge swap cluster, and it is not
desirable to split that THP.

When splitting a THP in swap cache, the call to split_swap_cluster()
is moved to before unlocking the sub-pages, so that all sub-pages are
kept locked from the time the THP is split until the huge swap cluster
is split.  This makes the code much easier to reason about.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
---
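Note for reviewers: as an illustration of the non-forced convention,
here is a sketch of a swapin-side caller.  The helper below is
hypothetical and not part of this series; only split_swap_cluster(),
swp_entry_t, SWAP_HAS_CACHE and the -EEXIST semantics described above
come from the patch:

/*
 * Hypothetical sketch: try to split a huge swap cluster when THP
 * allocation fails during swapin.  With force == false,
 * split_swap_cluster() returns -EEXIST if SWAP_HAS_CACHE is set on
 * the first slot, i.e. a THP in swap cache still corresponds to this
 * cluster, and the cluster is then left intact.
 */
static bool try_split_huge_swap_cluster(swp_entry_t entry)
{
	return split_swap_cluster(entry, false) == 0;
}

When this returns false, the caller would operate on the THP in swap
cache as a whole instead of splitting the cluster.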
Shutemov" Cc: Andrea Arcangeli Cc: Michal Hocko Cc: Johannes Weiner Cc: Shaohua Li Cc: Hugh Dickins Cc: Minchan Kim Cc: Rik van Riel Cc: Dave Hansen Cc: Naoya Horiguchi Cc: Zi Yan --- include/linux/swap.h | 4 ++-- mm/huge_memory.c | 18 ++++++++++++------ mm/swapfile.c | 45 ++++++++++++++++++++++++++++++--------------- 3 files changed, 44 insertions(+), 23 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index bb9de2cb952a..878f132dabc0 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -617,10 +617,10 @@ static inline swp_entry_t get_swap_page(struct page *page) #endif /* CONFIG_SWAP */ #ifdef CONFIG_THP_SWAP -extern int split_swap_cluster(swp_entry_t entry); +extern int split_swap_cluster(swp_entry_t entry, bool force); extern int split_swap_cluster_map(swp_entry_t entry); #else -static inline int split_swap_cluster(swp_entry_t entry) +static inline int split_swap_cluster(swp_entry_t entry, bool force) { return 0; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 86800ef7c61c..fea9dcba7dc1 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2505,6 +2505,17 @@ static void __split_huge_page(struct page *page, struct list_head *list, unfreeze_page(head); + /* + * Split swap cluster before unlocking sub-pages. So all + * sub-pages will be kept locked from THP has been split to + * swap cluster is split. + */ + if (PageSwapCache(head)) { + swp_entry_t entry = { .val = page_private(head) }; + + split_swap_cluster(entry, true); + } + for (i = 0; i < HPAGE_PMD_NR; i++) { struct page *subpage = head + i; if (subpage == page) @@ -2731,12 +2742,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) __dec_node_page_state(page, NR_SHMEM_THPS); spin_unlock(&pgdata->split_queue_lock); __split_huge_page(page, list, flags); - if (PageSwapCache(head)) { - swp_entry_t entry = { .val = page_private(head) }; - - ret = split_swap_cluster(entry); - } else - ret = 0; + ret = 0; } else { if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) { pr_alert("total_mapcount: %u, page_count(): %u\n", diff --git a/mm/swapfile.c b/mm/swapfile.c index acf2d0c30457..3316820cd3cd 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1414,21 +1414,6 @@ static void swapcache_free_cluster(swp_entry_t entry) } } } - -int split_swap_cluster(swp_entry_t entry) -{ - struct swap_info_struct *si; - struct swap_cluster_info *ci; - unsigned long offset = swp_offset(entry); - - si = _swap_info_get(entry); - if (!si) - return -EBUSY; - ci = lock_cluster(si, offset); - cluster_clear_huge(ci); - unlock_cluster(ci); - return 0; -} #else static inline void swapcache_free_cluster(swp_entry_t entry) { @@ -4067,6 +4052,36 @@ int split_swap_cluster_map(swp_entry_t entry) unlock_cluster(ci); return 0; } + +int split_swap_cluster(swp_entry_t entry, bool force) +{ + struct swap_info_struct *si; + struct swap_cluster_info *ci; + unsigned long offset = swp_offset(entry); + int ret = 0; + + si = get_swap_device(entry); + if (!si) + return -EINVAL; + ci = lock_cluster(si, offset); + /* The swap cluster has been split by someone else */ + if (!cluster_is_huge(ci)) + goto out; + VM_BUG_ON(!is_cluster_offset(offset)); + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER); + /* If not forced, don't split swap cluster has swap cache */ + if (!force && si->swap_map[offset] & SWAP_HAS_CACHE) { + ret = -EEXIST; + goto out; + } + cluster_set_count(ci, SWAPFILE_CLUSTER); + cluster_clear_huge(ci); + +out: + unlock_cluster(ci); + put_swap_device(si); + return ret; +} #endif static int __init swapfile_init(void) -- 
-- 
2.16.1