linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative
@ 2016-06-11 19:15 Ebru Akagunduz
  2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-11 19:15 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, Ebru Akagunduz

This patch series converts the THP collapse design from optimistic to
conservative, creates a sysfs integer knob for the conservative
threshold, and documents it.

Ebru Akagunduz (3):
  mm, thp: revert allocstall comparing
  mm, thp: convert from optimistic to conservative
  doc: add information about min_ptes_young

 Documentation/vm/transhuge.txt     |  7 ++++
 include/trace/events/huge_memory.h | 10 ++---
 mm/khugepaged.c                    | 81 ++++++++++++++++++++++----------------
 3 files changed, 59 insertions(+), 39 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCH 1/3] mm, thp: revert allocstall comparing
  2016-06-11 19:15 [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
@ 2016-06-11 19:15 ` Ebru Akagunduz
  2016-06-13 18:32   ` Ebru Akagunduz
  2016-06-14  7:07   ` Michal Hocko
  2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-11 19:15 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, Ebru Akagunduz

This patch reverts the allocstall comparison used when deciding
whether swapin is worthwhile, because the comparison does not
work when vm events are disabled.

Related commit:
http://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/?id=2548306628308aa6a326640d345a737bc898941d

Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
---
 mm/khugepaged.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0ac63f7..e3d8da7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -68,7 +68,6 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  */
 static unsigned int khugepaged_max_ptes_none __read_mostly;
 static unsigned int khugepaged_max_ptes_swap __read_mostly;
-static unsigned long allocstall;
 
 static int khugepaged(void *none);
 
@@ -926,7 +925,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int isolated = 0, result = 0;
-	unsigned long swap, curr_allocstall;
 	struct mem_cgroup *memcg;
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
@@ -955,8 +953,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}
 
-	swap = get_mm_counter(mm, MM_SWAPENTS);
-	curr_allocstall = sum_vm_event(ALLOCSTALL);
 	down_read(&mm->mmap_sem);
 	result = hugepage_vma_revalidate(mm, address);
 	if (result) {
@@ -972,22 +968,15 @@ static void collapse_huge_page(struct mm_struct *mm,
 		up_read(&mm->mmap_sem);
 		goto out_nolock;
 	}
-
 	/*
-	 * Don't perform swapin readahead when the system is under pressure,
-	 * to avoid unnecessary resource consumption.
+	 * __collapse_huge_page_swapin always returns with mmap_sem
+	 * locked.  If it fails, release mmap_sem and jump directly
+	 * out.  Continuing to collapse causes inconsistency.
 	 */
-	if (allocstall == curr_allocstall && swap != 0) {
-		/*
-		 * __collapse_huge_page_swapin always returns with mmap_sem
-		 * locked.  If it fails, release mmap_sem and jump directly
-		 * out.  Continuing to collapse causes inconsistency.
-		 */
-		if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
-			mem_cgroup_cancel_charge(new_page, memcg, true);
-			up_read(&mm->mmap_sem);
-			goto out_nolock;
-		}
+	if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
+		mem_cgroup_cancel_charge(new_page, memcg, true);
+		up_read(&mm->mmap_sem);
+		goto out_nolock;
 	}
 
 	up_read(&mm->mmap_sem);
@@ -1822,7 +1811,6 @@ static void khugepaged_wait_work(void)
 		if (!scan_sleep_jiffies)
 			return;
 
-		allocstall = sum_vm_event(ALLOCSTALL);
 		khugepaged_sleep_expire = jiffies + scan_sleep_jiffies;
 		wait_event_freezable_timeout(khugepaged_wait,
 					     khugepaged_should_wakeup(),
@@ -1830,10 +1818,8 @@ static void khugepaged_wait_work(void)
 		return;
 	}
 
-	if (khugepaged_enabled()) {
-		allocstall = sum_vm_event(ALLOCSTALL);
+	if (khugepaged_enabled())
 		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
-	}
 }
 
 static int khugepaged(void *none)
@@ -1842,7 +1828,6 @@ static int khugepaged(void *none)
 
 	set_freezable();
 	set_user_nice(current, MAX_NICE);
-	allocstall = sum_vm_event(ALLOCSTALL);
 
 	while (!kthread_should_stop()) {
 		khugepaged_do_scan();
-- 
1.9.1

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative
  2016-06-11 19:15 [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
@ 2016-06-11 19:16 ` Ebru Akagunduz
  2016-06-13 18:35   ` Ebru Akagunduz
                     ` (2 more replies)
  2016-06-11 19:16 ` [RFC PATCH 3/3] doc: add information about min_ptes_young Ebru Akagunduz
  2016-06-13 18:30 ` [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  3 siblings, 3 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-11 19:16 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, Ebru Akagunduz

Currently, khugepaged considers a single referenced page
enough to collapse the range into a THP.

This patch changes the design from optimistic to conservative:
it sets a default threshold for referenced pages, half of
HPAGE_PMD_NR, and introduces a new sysfs knob to tune it.

Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
---
 include/trace/events/huge_memory.h | 10 ++++----
 mm/khugepaged.c                    | 50 +++++++++++++++++++++++++++++---------
 2 files changed, 44 insertions(+), 16 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 830d47d..5f14025 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -13,7 +13,7 @@
 	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
 	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
 	EM( SCAN_PAGE_RO,		"no_writable_page")		\
-	EM( SCAN_NO_REFERENCED_PAGE,	"no_referenced_page")		\
+	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
 	EM( SCAN_PAGE_NULL,		"page_null")			\
 	EM( SCAN_SCAN_ABORT,		"scan_aborted")			\
 	EM( SCAN_PAGE_COUNT,		"not_suitable_page_count")	\
@@ -47,7 +47,7 @@ SCAN_STATUS
 TRACE_EVENT(mm_khugepaged_scan_pmd,
 
 	TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
-		 bool referenced, int none_or_zero, int status, int unmapped),
+		 int referenced, int none_or_zero, int status, int unmapped),
 
 	TP_ARGS(mm, page, writable, referenced, none_or_zero, status, unmapped),
 
@@ -55,7 +55,7 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
 		__field(struct mm_struct *, mm)
 		__field(unsigned long, pfn)
 		__field(bool, writable)
-		__field(bool, referenced)
+		__field(int, referenced)
 		__field(int, none_or_zero)
 		__field(int, status)
 		__field(int, unmapped)
@@ -108,14 +108,14 @@ TRACE_EVENT(mm_collapse_huge_page,
 TRACE_EVENT(mm_collapse_huge_page_isolate,
 
 	TP_PROTO(struct page *page, int none_or_zero,
-		 bool referenced, bool  writable, int status),
+		 int referenced, bool  writable, int status),
 
 	TP_ARGS(page, none_or_zero, referenced, writable, status),
 
 	TP_STRUCT__entry(
 		__field(unsigned long, pfn)
 		__field(int, none_or_zero)
-		__field(bool, referenced)
+		__field(int, referenced)
 		__field(bool, writable)
 		__field(int, status)
 	),
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e3d8da7..43fc41e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -27,7 +27,7 @@ enum scan_result {
 	SCAN_EXCEED_NONE_PTE,
 	SCAN_PTE_NON_PRESENT,
 	SCAN_PAGE_RO,
-	SCAN_NO_REFERENCED_PAGE,
+	SCAN_LACK_REFERENCED_PAGE,
 	SCAN_PAGE_NULL,
 	SCAN_SCAN_ABORT,
 	SCAN_PAGE_COUNT,
@@ -68,6 +68,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  */
 static unsigned int khugepaged_max_ptes_none __read_mostly;
 static unsigned int khugepaged_max_ptes_swap __read_mostly;
+static unsigned int khugepaged_min_ptes_young __read_mostly;
 
 static int khugepaged(void *none);
 
@@ -282,6 +283,32 @@ static struct kobj_attribute khugepaged_max_ptes_swap_attr =
 	__ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show,
 	       khugepaged_max_ptes_swap_store);
 
+static ssize_t khugepaged_min_ptes_young_show(struct kobject *kobj,
+					      struct kobj_attribute *attr,
+					      char *buf)
+{
+	return sprintf(buf, "%u\n", khugepaged_min_ptes_young);
+}
+
+static ssize_t khugepaged_min_ptes_young_store(struct kobject *kobj,
+					       struct kobj_attribute *attr,
+					       const char *buf, size_t count)
+{
+	int err;
+	unsigned long min_ptes_young;
+	err  = kstrtoul(buf, 10, &min_ptes_young);
+	if (err || min_ptes_young > HPAGE_PMD_NR-1)
+		return -EINVAL;
+
+	khugepaged_min_ptes_young = min_ptes_young;
+
+	return count;
+}
+
+static struct kobj_attribute khugepaged_min_ptes_young_attr =
+		__ATTR(min_ptes_young, 0644, khugepaged_min_ptes_young_show,
+		khugepaged_min_ptes_young_store);
+
 static struct attribute *khugepaged_attr[] = {
 	&khugepaged_defrag_attr.attr,
 	&khugepaged_max_ptes_none_attr.attr,
@@ -291,6 +318,7 @@ static struct attribute *khugepaged_attr[] = {
 	&scan_sleep_millisecs_attr.attr,
 	&alloc_sleep_millisecs_attr.attr,
 	&khugepaged_max_ptes_swap_attr.attr,
+	&khugepaged_min_ptes_young_attr.attr,
 	NULL,
 };
 
@@ -502,8 +530,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 {
 	struct page *page = NULL;
 	pte_t *_pte;
-	int none_or_zero = 0, result = 0;
-	bool referenced = false, writable = false;
+	int none_or_zero = 0, result = 0, referenced = 0;
+	bool writable = false;
 
 	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
 	     _pte++, address += PAGE_SIZE) {
@@ -582,14 +610,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 
-		/* If there is no mapped pte young don't collapse the page */
+		/* There should be enough young pte to collapse the page */
 		if (pte_young(pteval) ||
 		    page_is_young(page) || PageReferenced(page) ||
 		    mmu_notifier_test_young(vma->vm_mm, address))
-			referenced = true;
+			referenced++;
 	}
 	if (likely(writable)) {
-		if (likely(referenced)) {
+		if (referenced >= khugepaged_min_ptes_young) {
 			result = SCAN_SUCCEED;
 			trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 							    referenced, writable, result);
@@ -1082,11 +1110,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
 	int ret = 0, none_or_zero = 0, result = 0;
+	int node = NUMA_NO_NODE, unmapped = 0, referenced = 0;
 	struct page *page = NULL;
 	unsigned long _address;
 	spinlock_t *ptl;
-	int node = NUMA_NO_NODE, unmapped = 0;
-	bool writable = false, referenced = false;
+	bool writable = false;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
@@ -1174,14 +1202,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		if (pte_young(pteval) ||
 		    page_is_young(page) || PageReferenced(page) ||
 		    mmu_notifier_test_young(vma->vm_mm, address))
-			referenced = true;
+			referenced++;
 	}
 	if (writable) {
-		if (referenced) {
+		if (referenced >= khugepaged_min_ptes_young) {
 			result = SCAN_SUCCEED;
 			ret = 1;
 		} else {
-			result = SCAN_NO_REFERENCED_PAGE;
+			result = SCAN_LACK_REFERENCED_PAGE;
 		}
 	} else {
 		result = SCAN_PAGE_RO;
-- 
1.9.1

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCH 3/3] doc: add information about min_ptes_young
  2016-06-11 19:15 [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
  2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
@ 2016-06-11 19:16 ` Ebru Akagunduz
  2016-06-13 18:37   ` Ebru Akagunduz
  2016-06-13 18:30 ` [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  3 siblings, 1 reply; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-11 19:16 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, Ebru Akagunduz

min_ptes_young specifies the minimum number of young pages needed
to create a THP. This threshold also affects swapin readahead
(when needed) to create a THP: khugepaged looks at this value to
decide whether swapin readahead is worthwhile.

/sys/kernel/mm/transparent_hugepage/khugepaged/min_ptes_young

Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
---
 Documentation/vm/transhuge.txt | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index 2ec6adb..0ae713b 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -193,6 +193,13 @@ memory. A lower value can prevent THPs from being
 collapsed, resulting fewer pages being collapsed into
 THPs, and lower memory access performance.
 
+min_ptes_young specifies the minimum number of young pages needed
+to create a THP. This threshold also affects swapin readahead
+(when needed) to create a THP: khugepaged looks at this value to
+decide whether swapin readahead is worthwhile.
+
+/sys/kernel/mm/transparent_hugepage/khugepaged/min_ptes_young
+
 == Boot parameter ==
 
 You can change the sysfs boot time defaults of Transparent Hugepage
-- 
1.9.1

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative
  2016-06-11 19:15 [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
                   ` (2 preceding siblings ...)
  2016-06-11 19:16 ` [RFC PATCH 3/3] doc: add information about min_ptes_young Ebru Akagunduz
@ 2016-06-13 18:30 ` Ebru Akagunduz
  3 siblings, 0 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-13 18:30 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, minchan

On Sat, Jun 11, 2016 at 10:15:58PM +0300, Ebru Akagunduz wrote:
> This patch series converts the THP collapse design from optimistic to
> conservative, creates a sysfs integer knob for the conservative
> threshold, and documents it.
> 
This patchset follows Minchan Kim's suggestion.
Related discussion is here:
http://marc.info/?l=linux-mm&m=146373278424897&w=2

Cc'ed Minchan Kim.

> Ebru Akagunduz (3):
>   mm, thp: revert allocstall comparing
>   mm, thp: convert from optimistic to conservative
>   doc: add information about min_ptes_young
> 
>  Documentation/vm/transhuge.txt     |  7 ++++
>  include/trace/events/huge_memory.h | 10 ++---
>  mm/khugepaged.c                    | 81 ++++++++++++++++++++++----------------
>  3 files changed, 59 insertions(+), 39 deletions(-)
> 
> -- 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 1/3] mm, thp: revert allocstall comparing
  2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
@ 2016-06-13 18:32   ` Ebru Akagunduz
  2016-06-14  7:07   ` Michal Hocko
  1 sibling, 0 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-13 18:32 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, minchan

On Sat, Jun 11, 2016 at 10:15:59PM +0300, Ebru Akagunduz wrote:
> This patch reverts the allocstall comparison used when deciding
> whether swapin is worthwhile, because the comparison does not
> work when vm events are disabled.
> 
> Related commit:
> http://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/?id=2548306628308aa6a326640d345a737bc898941d
> 
> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
> ---
Cc'ed Minchan Kim.
>  mm/khugepaged.c | 31 ++++++++-----------------------
>  1 file changed, 8 insertions(+), 23 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 0ac63f7..e3d8da7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -68,7 +68,6 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> -static unsigned long allocstall;
>  
>  static int khugepaged(void *none);
>  
> @@ -926,7 +925,6 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	struct page *new_page;
>  	spinlock_t *pmd_ptl, *pte_ptl;
>  	int isolated = 0, result = 0;
> -	unsigned long swap, curr_allocstall;
>  	struct mem_cgroup *memcg;
>  	unsigned long mmun_start;	/* For mmu_notifiers */
>  	unsigned long mmun_end;		/* For mmu_notifiers */
> @@ -955,8 +953,6 @@ static void collapse_huge_page(struct mm_struct *mm,
>  		goto out_nolock;
>  	}
>  
> -	swap = get_mm_counter(mm, MM_SWAPENTS);
> -	curr_allocstall = sum_vm_event(ALLOCSTALL);
>  	down_read(&mm->mmap_sem);
>  	result = hugepage_vma_revalidate(mm, address);
>  	if (result) {
> @@ -972,22 +968,15 @@ static void collapse_huge_page(struct mm_struct *mm,
>  		up_read(&mm->mmap_sem);
>  		goto out_nolock;
>  	}
> -
>  	/*
> -	 * Don't perform swapin readahead when the system is under pressure,
> -	 * to avoid unnecessary resource consumption.
> +	 * __collapse_huge_page_swapin always returns with mmap_sem
> +	 * locked.  If it fails, release mmap_sem and jump directly
> +	 * out.  Continuing to collapse causes inconsistency.
>  	 */
> -	if (allocstall == curr_allocstall && swap != 0) {
> -		/*
> -		 * __collapse_huge_page_swapin always returns with mmap_sem
> -		 * locked.  If it fails, release mmap_sem and jump directly
> -		 * out.  Continuing to collapse causes inconsistency.
> -		 */
> -		if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
> -			mem_cgroup_cancel_charge(new_page, memcg, true);
> -			up_read(&mm->mmap_sem);
> -			goto out_nolock;
> -		}
> +	if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
> +		mem_cgroup_cancel_charge(new_page, memcg, true);
> +		up_read(&mm->mmap_sem);
> +		goto out_nolock;
>  	}
>  
>  	up_read(&mm->mmap_sem);
> @@ -1822,7 +1811,6 @@ static void khugepaged_wait_work(void)
>  		if (!scan_sleep_jiffies)
>  			return;
>  
> -		allocstall = sum_vm_event(ALLOCSTALL);
>  		khugepaged_sleep_expire = jiffies + scan_sleep_jiffies;
>  		wait_event_freezable_timeout(khugepaged_wait,
>  					     khugepaged_should_wakeup(),
> @@ -1830,10 +1818,8 @@ static void khugepaged_wait_work(void)
>  		return;
>  	}
>  
> -	if (khugepaged_enabled()) {
> -		allocstall = sum_vm_event(ALLOCSTALL);
> +	if (khugepaged_enabled())
>  		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
> -	}
>  }
>  
>  static int khugepaged(void *none)
> @@ -1842,7 +1828,6 @@ static int khugepaged(void *none)
>  
>  	set_freezable();
>  	set_user_nice(current, MAX_NICE);
> -	allocstall = sum_vm_event(ALLOCSTALL);
>  
>  	while (!kthread_should_stop()) {
>  		khugepaged_do_scan();
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative
  2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
@ 2016-06-13 18:35   ` Ebru Akagunduz
  2016-06-14  7:18   ` Michal Hocko
  2016-06-15  6:40   ` Minchan Kim
  2 siblings, 0 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-13 18:35 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, minchan

On Sat, Jun 11, 2016 at 10:16:00PM +0300, Ebru Akagunduz wrote:
> Currently, khugepaged considers a single referenced page
> enough to collapse the range into a THP.
> 
> This patch changes the design from optimistic to conservative:
> it sets a default threshold for referenced pages, half of
> HPAGE_PMD_NR, and introduces a new sysfs knob to tune it.
> 
> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org> 
> ---
Cc'ed Minchan Kim.
>  include/trace/events/huge_memory.h | 10 ++++----
>  mm/khugepaged.c                    | 50 +++++++++++++++++++++++++++++---------
>  2 files changed, 44 insertions(+), 16 deletions(-)
> 
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 830d47d..5f14025 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -13,7 +13,7 @@
>  	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
>  	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
>  	EM( SCAN_PAGE_RO,		"no_writable_page")		\
> -	EM( SCAN_NO_REFERENCED_PAGE,	"no_referenced_page")		\
> +	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
>  	EM( SCAN_PAGE_NULL,		"page_null")			\
>  	EM( SCAN_SCAN_ABORT,		"scan_aborted")			\
>  	EM( SCAN_PAGE_COUNT,		"not_suitable_page_count")	\
> @@ -47,7 +47,7 @@ SCAN_STATUS
>  TRACE_EVENT(mm_khugepaged_scan_pmd,
>  
>  	TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
> -		 bool referenced, int none_or_zero, int status, int unmapped),
> +		 int referenced, int none_or_zero, int status, int unmapped),
>  
>  	TP_ARGS(mm, page, writable, referenced, none_or_zero, status, unmapped),
>  
> @@ -55,7 +55,7 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
>  		__field(struct mm_struct *, mm)
>  		__field(unsigned long, pfn)
>  		__field(bool, writable)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(int, none_or_zero)
>  		__field(int, status)
>  		__field(int, unmapped)
> @@ -108,14 +108,14 @@ TRACE_EVENT(mm_collapse_huge_page,
>  TRACE_EVENT(mm_collapse_huge_page_isolate,
>  
>  	TP_PROTO(struct page *page, int none_or_zero,
> -		 bool referenced, bool  writable, int status),
> +		 int referenced, bool  writable, int status),
>  
>  	TP_ARGS(page, none_or_zero, referenced, writable, status),
>  
>  	TP_STRUCT__entry(
>  		__field(unsigned long, pfn)
>  		__field(int, none_or_zero)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(bool, writable)
>  		__field(int, status)
>  	),
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index e3d8da7..43fc41e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -27,7 +27,7 @@ enum scan_result {
>  	SCAN_EXCEED_NONE_PTE,
>  	SCAN_PTE_NON_PRESENT,
>  	SCAN_PAGE_RO,
> -	SCAN_NO_REFERENCED_PAGE,
> +	SCAN_LACK_REFERENCED_PAGE,
>  	SCAN_PAGE_NULL,
>  	SCAN_SCAN_ABORT,
>  	SCAN_PAGE_COUNT,
> @@ -68,6 +68,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> +static unsigned int khugepaged_min_ptes_young __read_mostly;
>  
>  static int khugepaged(void *none);
>  
> @@ -282,6 +283,32 @@ static struct kobj_attribute khugepaged_max_ptes_swap_attr =
>  	__ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show,
>  	       khugepaged_max_ptes_swap_store);
>  
> +static ssize_t khugepaged_min_ptes_young_show(struct kobject *kobj,
> +					      struct kobj_attribute *attr,
> +					      char *buf)
> +{
> +	return sprintf(buf, "%u\n", khugepaged_min_ptes_young);
> +}
> +
> +static ssize_t khugepaged_min_ptes_young_store(struct kobject *kobj,
> +					       struct kobj_attribute *attr,
> +					       const char *buf, size_t count)
> +{
> +	int err;
> +	unsigned long min_ptes_young;
> +	err  = kstrtoul(buf, 10, &min_ptes_young);
> +	if (err || min_ptes_young > HPAGE_PMD_NR-1)
> +		return -EINVAL;
> +
> +	khugepaged_min_ptes_young = min_ptes_young;
> +
> +	return count;
> +}
> +
> +static struct kobj_attribute khugepaged_min_ptes_young_attr =
> +		__ATTR(min_ptes_young, 0644, khugepaged_min_ptes_young_show,
> +		khugepaged_min_ptes_young_store);
> +
>  static struct attribute *khugepaged_attr[] = {
>  	&khugepaged_defrag_attr.attr,
>  	&khugepaged_max_ptes_none_attr.attr,
> @@ -291,6 +318,7 @@ static struct attribute *khugepaged_attr[] = {
>  	&scan_sleep_millisecs_attr.attr,
>  	&alloc_sleep_millisecs_attr.attr,
>  	&khugepaged_max_ptes_swap_attr.attr,
> +	&khugepaged_min_ptes_young_attr.attr,
>  	NULL,
>  };
>  
> @@ -502,8 +530,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  {
>  	struct page *page = NULL;
>  	pte_t *_pte;
> -	int none_or_zero = 0, result = 0;
> -	bool referenced = false, writable = false;
> +	int none_or_zero = 0, result = 0, referenced = 0;
> +	bool writable = false;
>  
>  	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
>  	     _pte++, address += PAGE_SIZE) {
> @@ -582,14 +610,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		VM_BUG_ON_PAGE(!PageLocked(page), page);
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
>  
> -		/* If there is no mapped pte young don't collapse the page */
> +		/* There should be enough young pte to collapse the page */
>  		if (pte_young(pteval) ||
>  		    page_is_young(page) || PageReferenced(page) ||
>  		    mmu_notifier_test_young(vma->vm_mm, address))
> -			referenced = true;
> +			referenced++;
>  	}
>  	if (likely(writable)) {
> -		if (likely(referenced)) {
> +		if (referenced >= khugepaged_min_ptes_young) {
>  			result = SCAN_SUCCEED;
>  			trace_mm_collapse_huge_page_isolate(page, none_or_zero,
>  							    referenced, writable, result);
> @@ -1082,11 +1110,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  	pmd_t *pmd;
>  	pte_t *pte, *_pte;
>  	int ret = 0, none_or_zero = 0, result = 0;
> +	int node = NUMA_NO_NODE, unmapped = 0, referenced = 0;
>  	struct page *page = NULL;
>  	unsigned long _address;
>  	spinlock_t *ptl;
> -	int node = NUMA_NO_NODE, unmapped = 0;
> -	bool writable = false, referenced = false;
> +	bool writable = false;
>  
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>  
> @@ -1174,14 +1202,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		if (pte_young(pteval) ||
>  		    page_is_young(page) || PageReferenced(page) ||
>  		    mmu_notifier_test_young(vma->vm_mm, address))
> -			referenced = true;
> +			referenced++;
>  	}
>  	if (writable) {
> -		if (referenced) {
> +		if (referenced >= khugepaged_min_ptes_young) {
>  			result = SCAN_SUCCEED;
>  			ret = 1;
>  		} else {
> -			result = SCAN_NO_REFERENCED_PAGE;
> +			result = SCAN_LACK_REFERENCED_PAGE;
>  		}
>  	} else {
>  		result = SCAN_PAGE_RO;
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 3/3] doc: add information about min_ptes_young
  2016-06-11 19:16 ` [RFC PATCH 3/3] doc: add information about min_ptes_young Ebru Akagunduz
@ 2016-06-13 18:37   ` Ebru Akagunduz
  0 siblings, 0 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-13 18:37 UTC (permalink / raw)
  To: linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz, minchan

On Sat, Jun 11, 2016 at 10:16:01PM +0300, Ebru Akagunduz wrote:
> min_ptes_young specifies the minimum number of young pages needed
> to create a THP. This threshold also affects swapin readahead
> (when needed) to create a THP: khugepaged looks at this value to
> decide whether swapin readahead is worthwhile.
> 
> /sys/kernel/mm/transparent_hugepage/khugepaged/min_ptes_young
> 
> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org> 
> ---
Cc'ed Minchan Kim.
>  Documentation/vm/transhuge.txt | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
> index 2ec6adb..0ae713b 100644
> --- a/Documentation/vm/transhuge.txt
> +++ b/Documentation/vm/transhuge.txt
> @@ -193,6 +193,13 @@ memory. A lower value can prevent THPs from being
>  collapsed, resulting fewer pages being collapsed into
>  THPs, and lower memory access performance.
>  
> +min_ptes_young specifies the minimum number of young pages needed
> +to create a THP. This threshold also affects swapin readahead
> +(when needed) to create a THP: khugepaged looks at this value to
> +decide whether swapin readahead is worthwhile.
> +
> +/sys/kernel/mm/transparent_hugepage/khugepaged/min_ptes_young
> +
>  == Boot parameter ==
>  
>  You can change the sysfs boot time defaults of Transparent Hugepage
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 1/3] mm, thp: revert allocstall comparing
  2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
  2016-06-13 18:32   ` Ebru Akagunduz
@ 2016-06-14  7:07   ` Michal Hocko
  1 sibling, 0 replies; 12+ messages in thread
From: Michal Hocko @ 2016-06-14  7:07 UTC (permalink / raw)
  To: Ebru Akagunduz
  Cc: linux-mm, hughd, riel, akpm, kirill.shutemov, n-horiguchi,
	aarcange, iamjoonsoo.kim, gorcunov, linux-kernel, mgorman,
	rientjes, vbabka, aneesh.kumar, hannes, boaz

On Sat 11-06-16 22:15:59, Ebru Akagunduz wrote:
> This patch reverts the allocstall comparison used when deciding
> whether swapin is worthwhile, because the comparison does not
> work when vm events are disabled.
> 
> Related commit:
> http://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/?id=2548306628308aa6a326640d345a737bc898941d

I guess it would be easier to simply drop
mm-thp-avoid-unnecessary-swapin-in-khugepaged.patch

> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
> ---
>  mm/khugepaged.c | 31 ++++++++-----------------------
>  1 file changed, 8 insertions(+), 23 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 0ac63f7..e3d8da7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -68,7 +68,6 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> -static unsigned long allocstall;
>  
>  static int khugepaged(void *none);
>  
> @@ -926,7 +925,6 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	struct page *new_page;
>  	spinlock_t *pmd_ptl, *pte_ptl;
>  	int isolated = 0, result = 0;
> -	unsigned long swap, curr_allocstall;
>  	struct mem_cgroup *memcg;
>  	unsigned long mmun_start;	/* For mmu_notifiers */
>  	unsigned long mmun_end;		/* For mmu_notifiers */
> @@ -955,8 +953,6 @@ static void collapse_huge_page(struct mm_struct *mm,
>  		goto out_nolock;
>  	}
>  
> -	swap = get_mm_counter(mm, MM_SWAPENTS);
> -	curr_allocstall = sum_vm_event(ALLOCSTALL);
>  	down_read(&mm->mmap_sem);
>  	result = hugepage_vma_revalidate(mm, address);
>  	if (result) {
> @@ -972,22 +968,15 @@ static void collapse_huge_page(struct mm_struct *mm,
>  		up_read(&mm->mmap_sem);
>  		goto out_nolock;
>  	}
> -
>  	/*
> -	 * Don't perform swapin readahead when the system is under pressure,
> -	 * to avoid unnecessary resource consumption.
> +	 * __collapse_huge_page_swapin always returns with mmap_sem
> +	 * locked.  If it fails, release mmap_sem and jump directly
> +	 * out.  Continuing to collapse causes inconsistency.
>  	 */
> -	if (allocstall == curr_allocstall && swap != 0) {
> -		/*
> -		 * __collapse_huge_page_swapin always returns with mmap_sem
> -		 * locked.  If it fails, release mmap_sem and jump directly
> -		 * out.  Continuing to collapse causes inconsistency.
> -		 */
> -		if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
> -			mem_cgroup_cancel_charge(new_page, memcg, true);
> -			up_read(&mm->mmap_sem);
> -			goto out_nolock;
> -		}
> +	if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
> +		mem_cgroup_cancel_charge(new_page, memcg, true);
> +		up_read(&mm->mmap_sem);
> +		goto out_nolock;
>  	}
>  
>  	up_read(&mm->mmap_sem);
> @@ -1822,7 +1811,6 @@ static void khugepaged_wait_work(void)
>  		if (!scan_sleep_jiffies)
>  			return;
>  
> -		allocstall = sum_vm_event(ALLOCSTALL);
>  		khugepaged_sleep_expire = jiffies + scan_sleep_jiffies;
>  		wait_event_freezable_timeout(khugepaged_wait,
>  					     khugepaged_should_wakeup(),
> @@ -1830,10 +1818,8 @@ static void khugepaged_wait_work(void)
>  		return;
>  	}
>  
> -	if (khugepaged_enabled()) {
> -		allocstall = sum_vm_event(ALLOCSTALL);
> +	if (khugepaged_enabled())
>  		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
> -	}
>  }
>  
>  static int khugepaged(void *none)
> @@ -1842,7 +1828,6 @@ static int khugepaged(void *none)
>  
>  	set_freezable();
>  	set_user_nice(current, MAX_NICE);
> -	allocstall = sum_vm_event(ALLOCSTALL);
>  
>  	while (!kthread_should_stop()) {
>  		khugepaged_do_scan();
> -- 
> 1.9.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative
  2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  2016-06-13 18:35   ` Ebru Akagunduz
@ 2016-06-14  7:18   ` Michal Hocko
  2016-06-15  6:40   ` Minchan Kim
  2 siblings, 0 replies; 12+ messages in thread
From: Michal Hocko @ 2016-06-14  7:18 UTC (permalink / raw)
  To: Ebru Akagunduz
  Cc: linux-mm, hughd, riel, akpm, kirill.shutemov, n-horiguchi,
	aarcange, iamjoonsoo.kim, gorcunov, linux-kernel, mgorman,
	rientjes, vbabka, aneesh.kumar, hannes, boaz

On Sat 11-06-16 22:16:00, Ebru Akagunduz wrote:
> Currently, khugepaged collapses pages considering a single
> referenced page enough to create a THP.
> 
> This patch changes the design from optimistic to conservative.
> It sets a default threshold of half of HPAGE_PMD_NR for
> referenced pages, and also introduces a new sysfs knob.

I am not really happy about yet another tunable;
khugepaged_max_ptes_none is too specific already. We do not want to
have one knob per page bit. Shouldn't we rather make the existing
knob more generic, allowing the implementation to decide whether the
young bit or the present bit is more important?

> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
> ---
>  include/trace/events/huge_memory.h | 10 ++++----
>  mm/khugepaged.c                    | 50 +++++++++++++++++++++++++++++---------
>  2 files changed, 44 insertions(+), 16 deletions(-)
> 
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 830d47d..5f14025 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -13,7 +13,7 @@
>  	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
>  	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
>  	EM( SCAN_PAGE_RO,		"no_writable_page")		\
> -	EM( SCAN_NO_REFERENCED_PAGE,	"no_referenced_page")		\
> +	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
>  	EM( SCAN_PAGE_NULL,		"page_null")			\
>  	EM( SCAN_SCAN_ABORT,		"scan_aborted")			\
>  	EM( SCAN_PAGE_COUNT,		"not_suitable_page_count")	\
> @@ -47,7 +47,7 @@ SCAN_STATUS
>  TRACE_EVENT(mm_khugepaged_scan_pmd,
>  
>  	TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
> -		 bool referenced, int none_or_zero, int status, int unmapped),
> +		 int referenced, int none_or_zero, int status, int unmapped),
>  
>  	TP_ARGS(mm, page, writable, referenced, none_or_zero, status, unmapped),
>  
> @@ -55,7 +55,7 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
>  		__field(struct mm_struct *, mm)
>  		__field(unsigned long, pfn)
>  		__field(bool, writable)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(int, none_or_zero)
>  		__field(int, status)
>  		__field(int, unmapped)
> @@ -108,14 +108,14 @@ TRACE_EVENT(mm_collapse_huge_page,
>  TRACE_EVENT(mm_collapse_huge_page_isolate,
>  
>  	TP_PROTO(struct page *page, int none_or_zero,
> -		 bool referenced, bool  writable, int status),
> +		 int referenced, bool  writable, int status),
>  
>  	TP_ARGS(page, none_or_zero, referenced, writable, status),
>  
>  	TP_STRUCT__entry(
>  		__field(unsigned long, pfn)
>  		__field(int, none_or_zero)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(bool, writable)
>  		__field(int, status)
>  	),
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index e3d8da7..43fc41e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -27,7 +27,7 @@ enum scan_result {
>  	SCAN_EXCEED_NONE_PTE,
>  	SCAN_PTE_NON_PRESENT,
>  	SCAN_PAGE_RO,
> -	SCAN_NO_REFERENCED_PAGE,
> +	SCAN_LACK_REFERENCED_PAGE,
>  	SCAN_PAGE_NULL,
>  	SCAN_SCAN_ABORT,
>  	SCAN_PAGE_COUNT,
> @@ -68,6 +68,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> +static unsigned int khugepaged_min_ptes_young __read_mostly;
>  
>  static int khugepaged(void *none);
>  
> @@ -282,6 +283,32 @@ static struct kobj_attribute khugepaged_max_ptes_swap_attr =
>  	__ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show,
>  	       khugepaged_max_ptes_swap_store);
>  
> +static ssize_t khugepaged_min_ptes_young_show(struct kobject *kobj,
> +					      struct kobj_attribute *attr,
> +					      char *buf)
> +{
> +	return sprintf(buf, "%u\n", khugepaged_min_ptes_young);
> +}
> +
> +static ssize_t khugepaged_min_ptes_young_store(struct kobject *kobj,
> +					       struct kobj_attribute *attr,
> +					       const char *buf, size_t count)
> +{
> +	int err;
> +	unsigned long min_ptes_young;
> +	err  = kstrtoul(buf, 10, &min_ptes_young);
> +	if (err || min_ptes_young > HPAGE_PMD_NR-1)
> +		return -EINVAL;
> +
> +	khugepaged_min_ptes_young = min_ptes_young;
> +
> +	return count;
> +}
> +
> +static struct kobj_attribute khugepaged_min_ptes_young_attr =
> +		__ATTR(min_ptes_young, 0644, khugepaged_min_ptes_young_show,
> +		khugepaged_min_ptes_young_store);
> +
>  static struct attribute *khugepaged_attr[] = {
>  	&khugepaged_defrag_attr.attr,
>  	&khugepaged_max_ptes_none_attr.attr,
> @@ -291,6 +318,7 @@ static struct attribute *khugepaged_attr[] = {
>  	&scan_sleep_millisecs_attr.attr,
>  	&alloc_sleep_millisecs_attr.attr,
>  	&khugepaged_max_ptes_swap_attr.attr,
> +	&khugepaged_min_ptes_young_attr.attr,
>  	NULL,
>  };
>  
> @@ -502,8 +530,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  {
>  	struct page *page = NULL;
>  	pte_t *_pte;
> -	int none_or_zero = 0, result = 0;
> -	bool referenced = false, writable = false;
> +	int none_or_zero = 0, result = 0, referenced = 0;
> +	bool writable = false;
>  
>  	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
>  	     _pte++, address += PAGE_SIZE) {
> @@ -582,14 +610,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		VM_BUG_ON_PAGE(!PageLocked(page), page);
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
>  
> -		/* If there is no mapped pte young don't collapse the page */
> +		/* There should be enough young pte to collapse the page */
>  		if (pte_young(pteval) ||
>  		    page_is_young(page) || PageReferenced(page) ||
>  		    mmu_notifier_test_young(vma->vm_mm, address))
> -			referenced = true;
> +			referenced++;
>  	}
>  	if (likely(writable)) {
> -		if (likely(referenced)) {
> +		if (referenced >= khugepaged_min_ptes_young) {
>  			result = SCAN_SUCCEED;
>  			trace_mm_collapse_huge_page_isolate(page, none_or_zero,
>  							    referenced, writable, result);
> @@ -1082,11 +1110,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  	pmd_t *pmd;
>  	pte_t *pte, *_pte;
>  	int ret = 0, none_or_zero = 0, result = 0;
> +	int node = NUMA_NO_NODE, unmapped = 0, referenced = 0;
>  	struct page *page = NULL;
>  	unsigned long _address;
>  	spinlock_t *ptl;
> -	int node = NUMA_NO_NODE, unmapped = 0;
> -	bool writable = false, referenced = false;
> +	bool writable = false;
>  
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>  
> @@ -1174,14 +1202,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		if (pte_young(pteval) ||
>  		    page_is_young(page) || PageReferenced(page) ||
>  		    mmu_notifier_test_young(vma->vm_mm, address))
> -			referenced = true;
> +			referenced++;
>  	}
>  	if (writable) {
> -		if (referenced) {
> +		if (referenced >= khugepaged_min_ptes_young) {
>  			result = SCAN_SUCCEED;
>  			ret = 1;
>  		} else {
> -			result = SCAN_NO_REFERENCED_PAGE;
> +			result = SCAN_LACK_REFERENCED_PAGE;
>  		}
>  	} else {
>  		result = SCAN_PAGE_RO;
> -- 
> 1.9.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative
  2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
  2016-06-13 18:35   ` Ebru Akagunduz
  2016-06-14  7:18   ` Michal Hocko
@ 2016-06-15  6:40   ` Minchan Kim
  2016-06-16  9:08     ` Ebru Akagunduz
  2 siblings, 1 reply; 12+ messages in thread
From: Minchan Kim @ 2016-06-15  6:40 UTC (permalink / raw)
  To: Ebru Akagunduz
  Cc: linux-mm, hughd, riel, akpm, kirill.shutemov, n-horiguchi,
	aarcange, iamjoonsoo.kim, gorcunov, linux-kernel, mgorman,
	rientjes, vbabka, aneesh.kumar, hannes, mhocko, boaz

Hello,

On Sat, Jun 11, 2016 at 10:16:00PM +0300, Ebru Akagunduz wrote:
> Currently, khugepaged collapses pages considering a single
> referenced page enough to create a THP.
> 
> This patch changes the design from optimistic to conservative.
> It sets a default threshold of half of HPAGE_PMD_NR for
> referenced pages, and also introduces a new sysfs knob.

Strictly speaking, it's not what I suggested.

I didn't mean that we should change the threshold for deciding
whether to collapse or not (although just *a* referenced page does
seem too optimistic) and export the knob to the user. In fact, I
cannot judge whether it's worthwhile, because I have no experience
with THP workloads in practice, although I believe it does make
sense.

What I suggested is that a swapin operation is much heavier than the
cost of collapsing already-populated anon pages, so it should be more
conservative than the THP collapse decision, at least. Given that,
the decision point for collapsing a THP is *a* referenced page now,
so requiring *half* of the populated pages to be referenced before
reading swapped-out pages back in is the more conservative choice.

> 
> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
> ---
>  include/trace/events/huge_memory.h | 10 ++++----
>  mm/khugepaged.c                    | 50 +++++++++++++++++++++++++++++---------
>  2 files changed, 44 insertions(+), 16 deletions(-)
> 
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 830d47d..5f14025 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -13,7 +13,7 @@
>  	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
>  	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
>  	EM( SCAN_PAGE_RO,		"no_writable_page")		\
> -	EM( SCAN_NO_REFERENCED_PAGE,	"no_referenced_page")		\
> +	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
>  	EM( SCAN_PAGE_NULL,		"page_null")			\
>  	EM( SCAN_SCAN_ABORT,		"scan_aborted")			\
>  	EM( SCAN_PAGE_COUNT,		"not_suitable_page_count")	\
> @@ -47,7 +47,7 @@ SCAN_STATUS
>  TRACE_EVENT(mm_khugepaged_scan_pmd,
>  
>  	TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
> -		 bool referenced, int none_or_zero, int status, int unmapped),
> +		 int referenced, int none_or_zero, int status, int unmapped),
>  
>  	TP_ARGS(mm, page, writable, referenced, none_or_zero, status, unmapped),
>  
> @@ -55,7 +55,7 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
>  		__field(struct mm_struct *, mm)
>  		__field(unsigned long, pfn)
>  		__field(bool, writable)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(int, none_or_zero)
>  		__field(int, status)
>  		__field(int, unmapped)
> @@ -108,14 +108,14 @@ TRACE_EVENT(mm_collapse_huge_page,
>  TRACE_EVENT(mm_collapse_huge_page_isolate,
>  
>  	TP_PROTO(struct page *page, int none_or_zero,
> -		 bool referenced, bool  writable, int status),
> +		 int referenced, bool  writable, int status),
>  
>  	TP_ARGS(page, none_or_zero, referenced, writable, status),
>  
>  	TP_STRUCT__entry(
>  		__field(unsigned long, pfn)
>  		__field(int, none_or_zero)
> -		__field(bool, referenced)
> +		__field(int, referenced)
>  		__field(bool, writable)
>  		__field(int, status)
>  	),
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index e3d8da7..43fc41e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -27,7 +27,7 @@ enum scan_result {
>  	SCAN_EXCEED_NONE_PTE,
>  	SCAN_PTE_NON_PRESENT,
>  	SCAN_PAGE_RO,
> -	SCAN_NO_REFERENCED_PAGE,
> +	SCAN_LACK_REFERENCED_PAGE,
>  	SCAN_PAGE_NULL,
>  	SCAN_SCAN_ABORT,
>  	SCAN_PAGE_COUNT,
> @@ -68,6 +68,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> +static unsigned int khugepaged_min_ptes_young __read_mostly;

We should set it to 1 to preserve old behavior.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative
  2016-06-15  6:40   ` Minchan Kim
@ 2016-06-16  9:08     ` Ebru Akagunduz
  0 siblings, 0 replies; 12+ messages in thread
From: Ebru Akagunduz @ 2016-06-16  9:08 UTC (permalink / raw)
  To: Minchan Kim, mhocko, linux-mm
  Cc: hughd, riel, akpm, kirill.shutemov, n-horiguchi, aarcange,
	iamjoonsoo.kim, gorcunov, linux-kernel, mgorman, rientjes,
	vbabka, aneesh.kumar, hannes, mhocko, boaz

On Wed, Jun 15, 2016 at 03:40:53PM +0900, Minchan Kim wrote:
> Hello,
> 
> On Sat, Jun 11, 2016 at 10:16:00PM +0300, Ebru Akagunduz wrote:
> > Currently, khugepaged collapses pages considering a single
> > referenced page enough to create a THP.
> > 
> > This patch changes the design from optimistic to conservative.
> > It sets a default threshold of half of HPAGE_PMD_NR for
> > referenced pages, and also introduces a new sysfs knob.
> 
> Strictly speaking, it's not what I suggested.
> 
> I didn't mean that we should change the threshold for deciding
> whether to collapse or not (although just *a* referenced page does
> seem too optimistic) and export the knob to the user. In fact, I
> cannot judge whether it's worthwhile, because I have no experience
> with THP workloads in practice, although I believe it does make
> sense.
> 
> What I suggested is that a swapin operation is much heavier than the
> cost of collapsing already-populated anon pages, so it should be more
> conservative than the THP collapse decision, at least. Given that,
> the decision point for collapsing a THP is *a* referenced page now,
> so requiring *half* of the populated pages to be referenced before
> reading swapped-out pages back in is the more conservative choice.
>
Then passing the referenced count from khugepaged_scan_pmd down to
__collapse_huge_page_swapin seems okay. A single referenced page
stays enough to create a THP; only when swapin is needed do we
require the count to be at least 256 (half of HPAGE_PMD_NR), so
we don't need a new sysfs knob.
 
> > 
> > Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
> > ---

> > +static unsigned int khugepaged_min_ptes_young __read_mostly;
> 
> We should set it to 1 to preserve old behavior.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-06-16  9:08 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-11 19:15 [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
2016-06-11 19:15 ` [RFC PATCH 1/3] mm, thp: revert allocstall comparing Ebru Akagunduz
2016-06-13 18:32   ` Ebru Akagunduz
2016-06-14  7:07   ` Michal Hocko
2016-06-11 19:16 ` [RFC PATCH 2/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
2016-06-13 18:35   ` Ebru Akagunduz
2016-06-14  7:18   ` Michal Hocko
2016-06-15  6:40   ` Minchan Kim
2016-06-16  9:08     ` Ebru Akagunduz
2016-06-11 19:16 ` [RFC PATCH 3/3] doc: add information about min_ptes_young Ebru Akagunduz
2016-06-13 18:37   ` Ebru Akagunduz
2016-06-13 18:30 ` [RFC PATCH 0/3] mm, thp: convert from optimistic to conservative Ebru Akagunduz
