* [PATCH 0/6] swap: THP optimizing refactoring
@ 2018-07-12 23:36 Huang, Ying
  2018-07-12 23:36 ` [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info() Huang, Ying
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Dave Hansen, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams

This patchset is based on the 2018-07-10 head of the mmotm tree.

Currently, the THP (Transparent Huge Page) swap optimization is
implemented in the following way,

#ifdef CONFIG_THP_SWAP
huge_function(...)
{
}
#else
normal_function(...)
{
}
#endif

general_function(...)
{
	if (huge)
		return huge_function(...);
	else
		return normal_function(...);
}

As pointed out by Dave Hansen, this:

1. Creates a new, wholly untested code path for huge pages
2. Creates two places to patch bugs
3. Does not reuse code when possible

This patchset addresses these problems by merging the huge/normal
code paths and functions where possible.
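
After the refactoring, the pattern above roughly becomes the
following (an illustrative sketch, not the exact code of any single
patch):

general_function(...)
{
	if (IS_ENABLED(CONFIG_THP_SWAP) && huge) {
		/* huge page path, eliminated by the compiler
		 * as dead code when CONFIG_THP_SWAP=n */
	}
	/* normal/common path, shared where possible */
}

Because IS_ENABLED(CONFIG_THP_SWAP) expands to a compile-time
constant, the compiler can still drop the huge page code when THP
swap is disabled.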

One concern is that this may increase the code size when
!CONFIG_TRANSPARENT_HUGEPAGE.  The data shows that most of the
refactoring causes only a slight code size increase.

Best Regards,
Huang, Ying


* [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info()
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-13 10:48   ` Dave Hansen
  2018-07-12 23:36 ` [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED() Huang, Ying
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

From: Huang Ying <ying.huang@intel.com>

Add comments to lock_cluster_or_swap_info() to improve the code
readability.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index d8fddfb000ec..e31aa601d9c0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -297,6 +297,12 @@ static inline void unlock_cluster(struct swap_cluster_info *ci)
 		spin_unlock(&ci->lock);
 }
 
+/*
+ * At most times, fine grained cluster lock is sufficient to protect
+ * the operations on sis->swap_map.  No need to acquire gross grained
+ * sis->lock.  But cluster and cluster lock isn't available for HDD,
+ * so sis->lock will be instead for them.
+ */
 static inline struct swap_cluster_info *lock_cluster_or_swap_info(
 	struct swap_info_struct *si,
 	unsigned long offset)
-- 
2.16.4



* [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED()
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
  2018-07-12 23:36 ` [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info() Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-13 18:38   ` Daniel Jordan
  2018-07-12 23:36 ` [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped() Huang, Ying
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

From: Huang Ying <ying.huang@intel.com>

In mm/swapfile.c, THP (Transparent Huge Page) swap specific code is
enclosed in #ifdef CONFIG_THP_SWAP/#endif to avoid bloating the code
when THP isn't enabled.  But #ifdef/#endif in a .c file hurts code
readability, so Dave suggested using IS_ENABLED(CONFIG_THP_SWAP)
instead and letting the compiler do the dirty work for us.  This also
has the potential to remove some duplicated code.  From the output of
`size`,

		text	   data	    bss	    dec	    hex	filename
THP=y:         26269	   2076	    340	  28685	   700d	mm/swapfile.o
ifdef/endif:   24115	   2028	    340	  26483	   6773	mm/swapfile.o
IS_ENABLED:    24179	   2028	    340	  26547	   67b3	mm/swapfile.o

The IS_ENABLED() based solution works quite well, almost as well as
the #ifdef/#endif one.  And per the diffstat, more lines are removed
than added.

One #ifdef, for split_swap_cluster(), is kept, because it is a public
function with a stub implementation for CONFIG_THP_SWAP=n in swap.h.
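
For reference, IS_ENABLED() comes from include/linux/kconfig.h and
evaluates to a constant 1 or 0 at compile time, so a guard like

	if (!IS_ENABLED(CONFIG_THP_SWAP))
		return 0;

compiles down to an unconditional return when CONFIG_THP_SWAP=n, and
everything after it in the function is eliminated as dead code.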

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 56 ++++++++++++++++----------------------------------------
 1 file changed, 16 insertions(+), 40 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index e31aa601d9c0..75c84aa763a3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -870,7 +870,6 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 	return n_ret;
 }
 
-#ifdef CONFIG_THP_SWAP
 static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
 {
 	unsigned long idx;
@@ -878,6 +877,11 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
 	unsigned long offset, i;
 	unsigned char *map;
 
+	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
+		VM_WARN_ON_ONCE(1);
+		return 0;
+	}
+
 	if (cluster_list_empty(&si->free_clusters))
 		return 0;
 
@@ -908,13 +912,6 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
 	unlock_cluster(ci);
 	swap_range_free(si, offset, SWAPFILE_CLUSTER);
 }
-#else
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
-{
-	VM_WARN_ON_ONCE(1);
-	return 0;
-}
-#endif /* CONFIG_THP_SWAP */
 
 static unsigned long scan_swap_map(struct swap_info_struct *si,
 				   unsigned char usage)
@@ -1260,7 +1257,6 @@ static void swapcache_free(swp_entry_t entry)
 	}
 }
 
-#ifdef CONFIG_THP_SWAP
 static void swapcache_free_cluster(swp_entry_t entry)
 {
 	unsigned long offset = swp_offset(entry);
@@ -1271,6 +1267,9 @@ static void swapcache_free_cluster(swp_entry_t entry)
 	unsigned int i, free_entries = 0;
 	unsigned char val;
 
+	if (!IS_ENABLED(CONFIG_THP_SWAP))
+		return;
+
 	si = _swap_info_get(entry);
 	if (!si)
 		return;
@@ -1306,6 +1305,7 @@ static void swapcache_free_cluster(swp_entry_t entry)
 	}
 }
 
+#ifdef CONFIG_THP_SWAP
 int split_swap_cluster(swp_entry_t entry)
 {
 	struct swap_info_struct *si;
@@ -1320,11 +1320,7 @@ int split_swap_cluster(swp_entry_t entry)
 	unlock_cluster(ci);
 	return 0;
 }
-#else
-static inline void swapcache_free_cluster(swp_entry_t entry)
-{
-}
-#endif /* CONFIG_THP_SWAP */
+#endif
 
 void put_swap_page(struct page *page, swp_entry_t entry)
 {
@@ -1483,7 +1479,6 @@ int swp_swapcount(swp_entry_t entry)
 	return count;
 }
 
-#ifdef CONFIG_THP_SWAP
 static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 					 swp_entry_t entry)
 {
@@ -1494,6 +1489,9 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 	int i;
 	bool ret = false;
 
+	if (!IS_ENABLED(CONFIG_THP_SWAP))
+		return swap_swapcount(si, entry) != 0;
+
 	ci = lock_cluster_or_swap_info(si, offset);
 	if (!ci || !cluster_is_huge(ci)) {
 		if (map[roffset] != SWAP_HAS_CACHE)
@@ -1516,7 +1514,7 @@ static bool page_swapped(struct page *page)
 	swp_entry_t entry;
 	struct swap_info_struct *si;
 
-	if (likely(!PageTransCompound(page)))
+	if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!PageTransCompound(page)))
 		return page_swapcount(page) != 0;
 
 	page = compound_head(page);
@@ -1540,10 +1538,8 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
 	/* hugetlbfs shouldn't call it */
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 
-	if (likely(!PageTransCompound(page))) {
-		mapcount = atomic_read(&page->_mapcount) + 1;
-		if (total_mapcount)
-			*total_mapcount = mapcount;
+	if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!PageTransCompound(page))) {
+		mapcount = page_trans_huge_mapcount(page, total_mapcount);
 		if (PageSwapCache(page))
 			swapcount = page_swapcount(page);
 		if (total_swapcount)
@@ -1590,26 +1586,6 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
 
 	return map_swapcount;
 }
-#else
-#define swap_page_trans_huge_swapped(si, entry)	swap_swapcount(si, entry)
-#define page_swapped(page)			(page_swapcount(page) != 0)
-
-static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
-					 int *total_swapcount)
-{
-	int mapcount, swapcount = 0;
-
-	/* hugetlbfs shouldn't call it */
-	VM_BUG_ON_PAGE(PageHuge(page), page);
-
-	mapcount = page_trans_huge_mapcount(page, total_mapcount);
-	if (PageSwapCache(page))
-		swapcount = page_swapcount(page);
-	if (total_swapcount)
-		*total_swapcount = swapcount;
-	return mapcount + swapcount;
-}
-#endif
 
 /*
  * We can write to an anon page without COW if there are no other references
-- 
2.16.4



* [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped()
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
  2018-07-12 23:36 ` [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info() Huang, Ying
  2018-07-12 23:36 ` [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED() Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-13 20:15   ` Daniel Jordan
  2018-07-12 23:36 ` [PATCH 4/6] swap: Unify normal/huge code path in put_swap_page() Huang, Ying
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

From: Huang Ying <ying.huang@intel.com>

As suggested by Dave, we should unify the code paths for normal and
huge swap support where possible, to avoid duplicated code, bugs,
etc., and to make the code easier to review.

In this patch, the normal/huge code paths in swap_page_trans_huge_swapped()
are unified; the number of added lines equals the number of removed
lines.  And the binary size stays almost the same when
CONFIG_TRANSPARENT_HUGEPAGE=n.

		 text	   data	    bss	    dec	    hex	filename
base:		24179	   2028	    340	  26547	   67b3	mm/swapfile.o
unified:	24215	   2028	    340	  26583	   67d7	mm/swapfile.o
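
The unification works because cluster_is_huge() now folds to a
compile-time constant when THP swap is disabled (see the first hunk
below): with CONFIG_THP_SWAP=n, cluster_is_huge(ci) is always false,
so in

	ci = lock_cluster_or_swap_info(si, offset);
	if (!ci || !cluster_is_huge(ci)) {
		/* normal page path */
	}

the condition is always true and the compiler can eliminate the THP
loop that follows as dead code.  (A sketch of the intended effect,
not a claim about the exact generated code.)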

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 75c84aa763a3..160f78072667 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -270,7 +270,10 @@ static inline void cluster_set_null(struct swap_cluster_info *info)
 
 static inline bool cluster_is_huge(struct swap_cluster_info *info)
 {
-	return info->flags & CLUSTER_FLAG_HUGE;
+	if (IS_ENABLED(CONFIG_THP_SWAP))
+		return info->flags & CLUSTER_FLAG_HUGE;
+	else
+		return false;
 }
 
 static inline void cluster_clear_huge(struct swap_cluster_info *info)
@@ -1489,9 +1492,6 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 	int i;
 	bool ret = false;
 
-	if (!IS_ENABLED(CONFIG_THP_SWAP))
-		return swap_swapcount(si, entry) != 0;
-
 	ci = lock_cluster_or_swap_info(si, offset);
 	if (!ci || !cluster_is_huge(ci)) {
 		if (map[roffset] != SWAP_HAS_CACHE)
-- 
2.16.4



* [PATCH 4/6] swap: Unify normal/huge code path in put_swap_page()
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
                   ` (2 preceding siblings ...)
  2018-07-12 23:36 ` [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped() Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-12 23:36 ` [PATCH 5/6] swap: Add __swap_entry_free_locked() Huang, Ying
  2018-07-12 23:36 ` [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path Huang, Ying
  5 siblings, 0 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

From: Huang Ying <ying.huang@intel.com>

In this patch, the normal/huge code paths in put_swap_page() and
several helper functions are unified to avoid duplicated code, bugs,
etc., and to make the code easier to review.

More lines are removed than added.  And the binary size stays exactly
the same when CONFIG_TRANSPARENT_HUGEPAGE=n.
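
The binary size can stay exactly the same because of the
nr_swap_entries() macro introduced below: with CONFIG_THP_SWAP=n it
expands to the constant 1, so in put_swap_page() the huge page test
becomes provably false,

	int nr = nr_swap_entries(hpage_nr_pages(page));	/* always 1 */

	if (nr == SWAPFILE_CLUSTER) {	/* 1 == 256: never taken */
		...
	}

and the compiler can drop the whole huge page branch.  (An
illustrative reading of the code below, assuming the usual constant
folding.)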

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 83 ++++++++++++++++++++++++++---------------------------------
 1 file changed, 37 insertions(+), 46 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 160f78072667..e0df8d22ac92 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -205,8 +205,16 @@ static void discard_swap_cluster(struct swap_info_struct *si,
 
 #ifdef CONFIG_THP_SWAP
 #define SWAPFILE_CLUSTER	HPAGE_PMD_NR
+
+#define nr_swap_entries(nr)	(nr)
 #else
 #define SWAPFILE_CLUSTER	256
+
+/*
+ * Define nr_swap_entries() as a constant to let the compiler
+ * optimize out some code if !CONFIG_THP_SWAP
+ */
+#define nr_swap_entries(nr)	1
 #endif
 #define LATENCY_LIMIT		256
 
@@ -1249,18 +1257,7 @@ void swap_free(swp_entry_t entry)
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
  */
-static void swapcache_free(swp_entry_t entry)
-{
-	struct swap_info_struct *p;
-
-	p = _swap_info_get(entry);
-	if (p) {
-		if (!__swap_entry_free(p, entry, SWAP_HAS_CACHE))
-			free_swap_slot(entry);
-	}
-}
-
-static void swapcache_free_cluster(swp_entry_t entry)
+void put_swap_page(struct page *page, swp_entry_t entry)
 {
 	unsigned long offset = swp_offset(entry);
 	unsigned long idx = offset / SWAPFILE_CLUSTER;
@@ -1269,39 +1266,41 @@ static void swapcache_free_cluster(swp_entry_t entry)
 	unsigned char *map;
 	unsigned int i, free_entries = 0;
 	unsigned char val;
-
-	if (!IS_ENABLED(CONFIG_THP_SWAP))
-		return;
+	int nr = nr_swap_entries(hpage_nr_pages(page));
 
 	si = _swap_info_get(entry);
 	if (!si)
 		return;
 
-	ci = lock_cluster(si, offset);
-	VM_BUG_ON(!cluster_is_huge(ci));
-	map = si->swap_map + offset;
-	for (i = 0; i < SWAPFILE_CLUSTER; i++) {
-		val = map[i];
-		VM_BUG_ON(!(val & SWAP_HAS_CACHE));
-		if (val == SWAP_HAS_CACHE)
-			free_entries++;
-	}
-	if (!free_entries) {
-		for (i = 0; i < SWAPFILE_CLUSTER; i++)
-			map[i] &= ~SWAP_HAS_CACHE;
-	}
-	cluster_clear_huge(ci);
-	unlock_cluster(ci);
-	if (free_entries == SWAPFILE_CLUSTER) {
-		spin_lock(&si->lock);
+	if (nr == SWAPFILE_CLUSTER) {
 		ci = lock_cluster(si, offset);
-		memset(map, 0, SWAPFILE_CLUSTER);
+		VM_BUG_ON(!cluster_is_huge(ci));
+		map = si->swap_map + offset;
+		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+			val = map[i];
+			VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+			if (val == SWAP_HAS_CACHE)
+				free_entries++;
+		}
+		if (!free_entries) {
+			for (i = 0; i < SWAPFILE_CLUSTER; i++)
+				map[i] &= ~SWAP_HAS_CACHE;
+		}
+		cluster_clear_huge(ci);
 		unlock_cluster(ci);
-		mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
-		swap_free_cluster(si, idx);
-		spin_unlock(&si->lock);
-	} else if (free_entries) {
-		for (i = 0; i < SWAPFILE_CLUSTER; i++, entry.val++) {
+		if (free_entries == SWAPFILE_CLUSTER) {
+			spin_lock(&si->lock);
+			ci = lock_cluster(si, offset);
+			memset(map, 0, SWAPFILE_CLUSTER);
+			unlock_cluster(ci);
+			mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
+			swap_free_cluster(si, idx);
+			spin_unlock(&si->lock);
+			return;
+		}
+	}
+	if (nr == 1 || free_entries) {
+		for (i = 0; i < nr; i++, entry.val++) {
 			if (!__swap_entry_free(si, entry, SWAP_HAS_CACHE))
 				free_swap_slot(entry);
 		}
@@ -1325,14 +1324,6 @@ int split_swap_cluster(swp_entry_t entry)
 }
 #endif
 
-void put_swap_page(struct page *page, swp_entry_t entry)
-{
-	if (!PageTransHuge(page))
-		swapcache_free(entry);
-	else
-		swapcache_free_cluster(entry);
-}
-
 static int swp_entry_cmp(const void *ent1, const void *ent2)
 {
 	const swp_entry_t *e1 = ent1, *e2 = ent2;
-- 
2.16.4



* [PATCH 5/6] swap: Add __swap_entry_free_locked()
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
                   ` (3 preceding siblings ...)
  2018-07-12 23:36 ` [PATCH 4/6] swap: Unify normal/huge code path in put_swap_page() Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-12 23:36 ` [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path Huang, Ying
  5 siblings, 0 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams

From: Huang Ying <ying.huang@intel.com>

The part of __swap_entry_free() that runs with the lock held is split
out into a new function, __swap_entry_free_locked(), because we want
to reuse that piece of code in some other places.

This is purely mechanical code refactoring; there is no functional
change.
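
That reuse happens in the last patch of this series, where
put_swap_page() takes the cluster lock once and then frees multiple
entries under it, roughly (a simplified sketch of the patch 6 code,
omitting its early-return detail):

	ci = lock_cluster_or_swap_info(si, offset);
	for (i = 0; i < nr; i++, entry.val++) {
		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
			/* drop the lock around free_swap_slot(),
			 * then re-take it for the next entry */
			unlock_cluster_or_swap_info(si, ci);
			free_swap_slot(entry);
			lock_cluster_or_swap_info(si, offset);
		}
	}
	unlock_cluster_or_swap_info(si, ci);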

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index e0df8d22ac92..bc488bf36c86 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1180,16 +1180,13 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	return NULL;
 }
 
-static unsigned char __swap_entry_free(struct swap_info_struct *p,
-				       swp_entry_t entry, unsigned char usage)
+static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
+					      unsigned long offset,
+					      unsigned char usage)
 {
-	struct swap_cluster_info *ci;
-	unsigned long offset = swp_offset(entry);
 	unsigned char count;
 	unsigned char has_cache;
 
-	ci = lock_cluster_or_swap_info(p, offset);
-
 	count = p->swap_map[offset];
 
 	has_cache = count & SWAP_HAS_CACHE;
@@ -1217,6 +1214,17 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
 	usage = count | has_cache;
 	p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;
 
+	return usage;
+}
+
+static unsigned char __swap_entry_free(struct swap_info_struct *p,
+				       swp_entry_t entry, unsigned char usage)
+{
+	struct swap_cluster_info *ci;
+	unsigned long offset = swp_offset(entry);
+
+	ci = lock_cluster_or_swap_info(p, offset);
+	usage = __swap_entry_free_locked(p, offset, usage);
 	unlock_cluster_or_swap_info(p, ci);
 
 	return usage;
-- 
2.16.4



* [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path
  2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
                   ` (4 preceding siblings ...)
  2018-07-12 23:36 ` [PATCH 5/6] swap: Add __swap_entry_free_locked() Huang, Ying
@ 2018-07-12 23:36 ` Huang, Ying
  2018-07-13 20:18   ` Daniel Jordan
  5 siblings, 1 reply; 18+ messages in thread
From: Huang, Ying @ 2018-07-12 23:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams

From: Huang Ying <ying.huang@intel.com>

In this patch, the locking related code is shared between the
huge/normal code paths in put_swap_page() to reduce code duplication.
And the `free_entries == 0` case is merged into the more general
`free_entries != SWAPFILE_CLUSTER` case, because the new locking
method makes it easy.

The number of added lines equals the number of removed lines.  But
the code size increases when CONFIG_TRANSPARENT_HUGEPAGE=n.

		text	   data	    bss	    dec	    hex	filename
base:	       24215	   2028	    340	  26583	   67d7	mm/swapfile.o
unified:       24577	   2028	    340	  26945	   6941	mm/swapfile.o
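
Note that in the new loop the cluster lock is dropped around each
free_swap_slot() call and re-acquired afterwards (see the last hunk
below).  Presumably this is because free_swap_slot() may eventually
need si->lock (for example, when the swap slots cache is flushed),
and si->lock must be taken before the cluster lock.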

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---
 mm/swapfile.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index bc488bf36c86..17dce780b4c8 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1280,8 +1280,8 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 	if (!si)
 		return;
 
+	ci = lock_cluster_or_swap_info(si, offset);
 	if (nr == SWAPFILE_CLUSTER) {
-		ci = lock_cluster(si, offset);
 		VM_BUG_ON(!cluster_is_huge(ci));
 		map = si->swap_map + offset;
 		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
@@ -1290,13 +1290,9 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 			if (val == SWAP_HAS_CACHE)
 				free_entries++;
 		}
-		if (!free_entries) {
-			for (i = 0; i < SWAPFILE_CLUSTER; i++)
-				map[i] &= ~SWAP_HAS_CACHE;
-		}
 		cluster_clear_huge(ci);
-		unlock_cluster(ci);
 		if (free_entries == SWAPFILE_CLUSTER) {
+			unlock_cluster_or_swap_info(si, ci);
 			spin_lock(&si->lock);
 			ci = lock_cluster(si, offset);
 			memset(map, 0, SWAPFILE_CLUSTER);
@@ -1307,12 +1303,16 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 			return;
 		}
 	}
-	if (nr == 1 || free_entries) {
-		for (i = 0; i < nr; i++, entry.val++) {
-			if (!__swap_entry_free(si, entry, SWAP_HAS_CACHE))
-				free_swap_slot(entry);
+	for (i = 0; i < nr; i++, entry.val++) {
+		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
+			unlock_cluster_or_swap_info(si, ci);
+			free_swap_slot(entry);
+			if (i == nr - 1)
+				return;
+			lock_cluster_or_swap_info(si, offset);
 		}
 	}
+	unlock_cluster_or_swap_info(si, ci);
 }
 
 #ifdef CONFIG_THP_SWAP
-- 
2.16.4



* Re: [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info()
  2018-07-12 23:36 ` [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info() Huang, Ying
@ 2018-07-13 10:48   ` Dave Hansen
  2018-07-14  4:07       ` Huang, Ying
  0 siblings, 1 reply; 18+ messages in thread
From: Dave Hansen @ 2018-07-13 10:48 UTC (permalink / raw)
  To: Huang, Ying, Andrew Morton
  Cc: linux-mm, linux-kernel, Michal Hocko, Johannes Weiner,
	Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel,
	Daniel Jordan, Dan Williams

> +/*
> + * At most times, fine grained cluster lock is sufficient to protect

Can we call out those times, please?

> + * the operations on sis->swap_map.  

Please be careful with the naming.  You can call it 'si' because that's
what the function argument is named.  Or, swap_info_struct because
that's the struct name.  Calling it 'sis' is a bit sloppy, no?

> 					No need to acquire gross grained

"coarse" is a conventional antonym for "fine".

> + * sis->lock.  But cluster and cluster lock isn't available for HDD,
> + * so sis->lock will be instead for them.
> + */
>  static inline struct swap_cluster_info *lock_cluster_or_swap_info(
>  	struct swap_info_struct *si,
>  	unsigned long offset)

What I already knew was: there are two locks.  We use one sometimes and
the other at other times.

What I don't know is why there are two locks, and the heuristics why we
choose between them.  This comment doesn't help explain the things I
don't know.



* Re: [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED()
  2018-07-12 23:36 ` [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED() Huang, Ying
@ 2018-07-13 18:38   ` Daniel Jordan
  2018-07-13 18:55     ` Daniel Jordan
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Jordan @ 2018-07-13 18:38 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

On Fri, Jul 13, 2018 at 07:36:32AM +0800, Huang, Ying wrote:
> @@ -1260,7 +1257,6 @@ static void swapcache_free(swp_entry_t entry)
>  	}
>  }
>  
> -#ifdef CONFIG_THP_SWAP
>  static void swapcache_free_cluster(swp_entry_t entry)
>  {
>  	unsigned long offset = swp_offset(entry);
> @@ -1271,6 +1267,9 @@ static void swapcache_free_cluster(swp_entry_t entry)
>  	unsigned int i, free_entries = 0;
>  	unsigned char val;
>  
> +	if (!IS_ENABLED(CONFIG_THP_SWAP))
> +		return;
> +

VM_WARN_ON_ONCE(1) here too?

The rest of the patch looks good.


* Re: [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED()
  2018-07-13 18:38   ` Daniel Jordan
@ 2018-07-13 18:55     ` Daniel Jordan
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel Jordan @ 2018-07-13 18:55 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

On Fri, Jul 13, 2018 at 11:38:19AM -0700, Daniel Jordan wrote:
> On Fri, Jul 13, 2018 at 07:36:32AM +0800, Huang, Ying wrote:
> > @@ -1260,7 +1257,6 @@ static void swapcache_free(swp_entry_t entry)
> >  	}
> >  }
> >  
> > -#ifdef CONFIG_THP_SWAP
> >  static void swapcache_free_cluster(swp_entry_t entry)
> >  {
> >  	unsigned long offset = swp_offset(entry);
> > @@ -1271,6 +1267,9 @@ static void swapcache_free_cluster(swp_entry_t entry)
> >  	unsigned int i, free_entries = 0;
> >  	unsigned char val;
> >  
> > +	if (!IS_ENABLED(CONFIG_THP_SWAP))
> > +		return;
> > +
> 
> VM_WARN_ON_ONCE(1) here too?

Nevermind, this branch goes away later in the series.


* Re: [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped()
  2018-07-12 23:36 ` [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped() Huang, Ying
@ 2018-07-13 20:15   ` Daniel Jordan
  2018-07-14 12:53       ` Huang, Ying
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Jordan @ 2018-07-13 20:15 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams, Dave Hansen

On Fri, Jul 13, 2018 at 07:36:33AM +0800, Huang, Ying wrote:
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 75c84aa763a3..160f78072667 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -270,7 +270,10 @@ static inline void cluster_set_null(struct swap_cluster_info *info)
>  
>  static inline bool cluster_is_huge(struct swap_cluster_info *info)
>  {
> -	return info->flags & CLUSTER_FLAG_HUGE;
> +	if (IS_ENABLED(CONFIG_THP_SWAP))
> +		return info->flags & CLUSTER_FLAG_HUGE;
> +	else
> +		return false;
>  }
>  
>  static inline void cluster_clear_huge(struct swap_cluster_info *info)
> @@ -1489,9 +1492,6 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
>  	int i;
>  	bool ret = false;
>  
> -	if (!IS_ENABLED(CONFIG_THP_SWAP))
> -		return swap_swapcount(si, entry) != 0;

This tests the value returned from swap_count,

> -
>  	ci = lock_cluster_or_swap_info(si, offset);
>  	if (!ci || !cluster_is_huge(ci)) {
>  		if (map[roffset] != SWAP_HAS_CACHE)

and now we're testing

                    map[roffset] != SWAP_HAS_CACHE

instead.  The two seem to mean the same thing here, since the swap slot hasn't
been freed to the global pool and so can't be 0, but it might be better for
consistency and clarity to use swap_count here, and a few lines down too

        for (i = 0; i < SWAPFILE_CLUSTER; i++) {                                     
                if (map[offset + i] != SWAP_HAS_CACHE) {                             

since swap_count seems to be used everywhere else for this.
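
For reference, swap_count() in mm/swapfile.c is simply

	static inline unsigned char swap_count(unsigned char ent)
	{
		return ent & ~SWAP_HAS_CACHE;
	}

so whenever SWAP_HAS_CACHE is set in map[roffset], the two tests
map[roffset] != SWAP_HAS_CACHE and swap_count(map[roffset]) != 0 are
equivalent.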


* Re: [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path
  2018-07-12 23:36 ` [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path Huang, Ying
@ 2018-07-13 20:18   ` Daniel Jordan
  2018-07-14 12:57       ` Huang, Ying
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Jordan @ 2018-07-13 20:18 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Dave Hansen, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams

On Fri, Jul 13, 2018 at 07:36:36AM +0800, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
> 
> In this patch, the locking related code is shared between the
> huge/normal code paths in put_swap_page() to reduce code duplication.
> And the `free_entries == 0` case is merged into the more general
> `free_entries != SWAPFILE_CLUSTER` case, because the new locking
> method makes it easy.

Might be a bit easier to think about the two changes if they were split up.

Agree with Dave's comment from patch 1, but otherwise the series looks ok to
me.  I like the nr_swap_entries macro, that's clever.


* Re: [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info()
  2018-07-13 10:48   ` Dave Hansen
@ 2018-07-14  4:07       ` Huang, Ying
  0 siblings, 0 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-14  4:07 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Daniel Jordan, Dan Williams

Dave Hansen <dave.hansen@linux.intel.com> writes:

>> +/*
>> + * At most times, fine grained cluster lock is sufficient to protect
>
> Can we call out those times, please?

To protect si->swap_map[]: for HDD, si->lock is used; otherwise, the
cluster lock is used.  "At most times" is ambiguous here; I will fix it.

>> + * the operations on sis->swap_map.  
>
> Please be careful with the naming.  You can call it 'si' because that's
> what the function argument is named.  Or, swap_info_struct because
> that's the struct name.  Calling it 'sis' is a bit sloppy, no?
>
>> 					No need to acquire gross grained
>
> "coarse" is a conventional antonym for "fine".

Sorry for my poor English, will change this.

>> + * sis->lock.  But cluster and cluster lock isn't available for HDD,
>> + * so sis->lock will be instead for them.
>> + */
>>  static inline struct swap_cluster_info *lock_cluster_or_swap_info(
>>  	struct swap_info_struct *si,
>>  	unsigned long offset)
>
> What I already knew was: there are two locks.  We use one sometimes and
> the other at other times.
>
> What I don't know is why there are two locks, and the heuristics why we
> choose between them.  This comment doesn't help explain the things I
> don't know.

The cluster lock is used to protect the fields of struct
swap_cluster_info and si->swap_map[]; this is described in the
comments of struct swap_cluster_info.  si->lock is used to protect the
other fields of si.  If both locks need to be held, hold si->lock
first.  This is for non-HDD.  For HDD, there are no clusters, so
si->lock is used to protect si->swap_map[].

Best Regards,
Huang, Ying


* Re: [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped()
  2018-07-13 20:15   ` Daniel Jordan
@ 2018-07-14 12:53       ` Huang, Ying
  0 siblings, 0 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-14 12:53 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Dan Williams, Dave Hansen

Daniel Jordan <daniel.m.jordan@oracle.com> writes:

> On Fri, Jul 13, 2018 at 07:36:33AM +0800, Huang, Ying wrote:
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index 75c84aa763a3..160f78072667 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -270,7 +270,10 @@ static inline void cluster_set_null(struct swap_cluster_info *info)
>>  
>>  static inline bool cluster_is_huge(struct swap_cluster_info *info)
>>  {
>> -	return info->flags & CLUSTER_FLAG_HUGE;
>> +	if (IS_ENABLED(CONFIG_THP_SWAP))
>> +		return info->flags & CLUSTER_FLAG_HUGE;
>> +	else
>> +		return false;
>>  }
>>  
>>  static inline void cluster_clear_huge(struct swap_cluster_info *info)
>> @@ -1489,9 +1492,6 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
>>  	int i;
>>  	bool ret = false;
>>  
>> -	if (!IS_ENABLED(CONFIG_THP_SWAP))
>> -		return swap_swapcount(si, entry) != 0;
>
> This tests the value returned from swap_count,
>
>> -
>>  	ci = lock_cluster_or_swap_info(si, offset);
>>  	if (!ci || !cluster_is_huge(ci)) {
>>  		if (map[roffset] != SWAP_HAS_CACHE)
>
> and now we're testing
>
>                     map[roffset] != SWAP_HAS_CACHE
>
> instead.  The two seem to mean the same thing here, since the swap slot hasn't
> been freed to the global pool and so can't be 0, but it might be better for
> consistency and clarity to use swap_count here, and a few lines down too
>
>         for (i = 0; i < SWAPFILE_CLUSTER; i++) {                                     
>                 if (map[offset + i] != SWAP_HAS_CACHE) {                             
>
> since swap_count seems to be used everywhere else for this.

Yes.  swap_count() looks better here.  Will change this.

Best Regards,
Huang, Ying


* Re: [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path
  2018-07-13 20:18   ` Daniel Jordan
@ 2018-07-14 12:57       ` Huang, Ying
  0 siblings, 0 replies; 18+ messages in thread
From: Huang, Ying @ 2018-07-14 12:57 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: Andrew Morton, linux-mm, linux-kernel, Dave Hansen, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Dan Williams

Daniel Jordan <daniel.m.jordan@oracle.com> writes:

> On Fri, Jul 13, 2018 at 07:36:36AM +0800, Huang, Ying wrote:
>> From: Huang Ying <ying.huang@intel.com>
>> 
>> In this patch, the locking related code is shared between the
>> huge/normal code paths in put_swap_page() to reduce code duplication.
>> And the `free_entries == 0` case is merged into the more general
>> `free_entries != SWAPFILE_CLUSTER` case, because the new locking
>> method makes it easy.
>
> Might be a bit easier to think about the two changes if they were split up.

I just think the second change appears too trivial to be a separate patch.

> Agree with Dave's comment from patch 1, but otherwise the series looks ok to
> me.  I like the nr_swap_entries macro, that's clever.

Thanks!

Best Regards,
Huang, Ying


Thread overview: 15 messages
2018-07-12 23:36 [PATCH 0/6] swap: THP optimizing refactoring Huang, Ying
2018-07-12 23:36 ` [PATCH 1/6] swap: Add comments to lock_cluster_or_swap_info() Huang, Ying
2018-07-13 10:48   ` Dave Hansen
2018-07-14  4:07     ` Huang, Ying
2018-07-12 23:36 ` [PATCH 2/6] mm/swapfile.c: Replace some #ifdef with IS_ENABLED() Huang, Ying
2018-07-13 18:38   ` Daniel Jordan
2018-07-13 18:55     ` Daniel Jordan
2018-07-12 23:36 ` [PATCH 3/6] swap: Unify normal/huge code path in swap_page_trans_huge_swapped() Huang, Ying
2018-07-13 20:15   ` Daniel Jordan
2018-07-14 12:53     ` Huang, Ying
2018-07-12 23:36 ` [PATCH 4/6] swap: Unify normal/huge code path in put_swap_page() Huang, Ying
2018-07-12 23:36 ` [PATCH 5/6] swap: Add __swap_entry_free_locked() Huang, Ying
2018-07-12 23:36 ` [PATCH 6/6] swap, put_swap_page: Share more between huge/normal code path Huang, Ying
2018-07-13 20:18   ` Daniel Jordan
2018-07-14 12:57     ` Huang, Ying
