From: Simon Jeons <simon.jeons@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Ingo Molnar <mingo@kernel.org>, Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Hugh Dickins <hughd@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Paul Turner <pjt@google.com>, Hillf Danton <dhillf@gmail.com>,
	David Rientjes <rientjes@google.com>,
	Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
	Alex Shi <lkml.alex@gmail.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 29/49] mm: numa: Add pte updates, hinting and migration stats
Date: Fri, 04 Jan 2013 05:42:24 -0600
Message-ID: <1357299744.5273.4.camel@kernel.cn.ibm.com>
In-Reply-To: <1354875832-9700-30-git-send-email-mgorman@suse.de>

On Fri, 2012-12-07 at 10:23 +0000, Mel Gorman wrote:
> It is tricky to quantify the basic cost of automatic NUMA placement in a
> meaningful manner. This patch adds some vmstats that can be used as part
> of a basic costing model.

Hi Mel,

> 
> u    = basic unit = sizeof(void *)
> Ca   = cost of struct page access = sizeof(struct page) / u
> Cpte = Cost of PTE access = Ca
> Cupdate = Cost PTE update = (2 * Cpte) + (2 * Wlock)
> 	where Cpte is incurred twice for a read and a write and Wlock
> 	is a constant representing the cost of taking or releasing a
> 	lock
> Cnumahint = Cost of a minor page fault = some high constant e.g. 1000
> Cpagerw = Cost to read or write a full page = Ca + PAGE_SIZE/u

Why is Cpagerw = Ca + PAGE_SIZE/u rather than Cpte + PAGE_SIZE/u?

> Ci = Cost of page isolation = Ca + Wi
> 	where Wi is a constant that should reflect the approximate cost
> 	of the locking operation
> Cpagecopy = Cpagerw + (Cpagerw * Wnuma) + Ci + (Ci * Wnuma)
> 	where Wnuma is the approximate NUMA factor. 1 is local. 1.2
> 	would imply that remote accesses are 20% more expensive
> 
> Balancing cost = Cpte * numa_pte_updates +
> 		Cnumahint * numa_hint_faults +
> 		Ci * numa_pages_migrated +
> 		Cpagecopy * numa_pages_migrated
> 

Since Cpagecopy already includes Ci, why is Ci counted a second time in
the balancing cost?
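
To make the arithmetic concrete, below is a minimal userspace sketch
(not part of the patch) that plugs illustrative x86-64 numbers into the
model exactly as written above, double-counted Ci included. Wlock does
not appear because the balancing cost uses Cpte rather than Cupdate.
The struct page size, Wi, Wnuma and the sample counter values are all
assumptions for illustration:

/* Sketch only: evaluate the cost model with assumed constants. */
#include <stdio.h>

int main(void)
{
	const double u = sizeof(void *);        /* basic unit, 8 on 64-bit */
	const double Ca = 64 / u;               /* assumes sizeof(struct page) == 64 */
	const double Cpte = Ca;
	const double Cnumahint = 1000;          /* high constant from the changelog */
	const double Cpagerw = Ca + 4096 / u;   /* assumes PAGE_SIZE == 4096 */
	const double Wi = 1, Wnuma = 1.2;       /* assumed tunables */
	const double Ci = Ca + Wi;
	const double Cpagecopy = Cpagerw + Cpagerw * Wnuma + Ci + Ci * Wnuma;

	/* assumed sample counters, as if read from /proc/vmstat */
	const double numa_pte_updates = 1000000;
	const double numa_hint_faults = 50000;
	const double numa_pages_migrated = 10000;

	const double cost = Cpte * numa_pte_updates +
			    Cnumahint * numa_hint_faults +
			    Ci * numa_pages_migrated +
			    Cpagecopy * numa_pages_migrated;

	printf("Cpagecopy = %.1f, balancing cost = %.0f units\n",
	       Cpagecopy, cost);
	return 0;
}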

> Note that numa_pages_migrated is used as a measure of how many pages
> were isolated even though it would miss pages that failed to migrate. A
> vmstat counter could have been added for it but the isolation cost is
> pretty marginal in comparison to the overall cost so it seemed overkill.
> 
> The ideal way to measure automatic placement benefit would be to count
> the number of remote accesses versus local accesses and do something like
> 
> 	benefit = (remote_accesses_before - remote_accesses_after) * Wnuma
> 
> but the information is not readily available. As a workload converges, the
> expectation would be that the number of remote numa hints would reduce to 0.
> 
> 	convergence = numa_hint_faults_local / numa_hint_faults
> 		where this is measured for the last N number of
> 		numa hints recorded. When the workload is fully
> 		converged the value is 1.
> 

Should convergence tend towards 0 or towards 1 to be considered good? If
towards 1, then once converged the Cpte * numa_pte_updates and
Cnumahint * numa_hint_faults terms look like pure overhead; what am I
missing?
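
For reference, here is a minimal sketch of how userspace could sample
the convergence ratio once these counters are exported. It assumes the
numa_hint_faults and numa_hint_faults_local names added to
vmstat_text[] in the patch below, and computes the cumulative ratio
rather than the last-N window described above:

/* Sketch only: read the hinting-fault counters from /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, faults = 0, local = 0;

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "numa_hint_faults"))
			faults = val;
		else if (!strcmp(name, "numa_hint_faults_local"))
			local = val;
	}
	fclose(f);

	if (faults)
		printf("convergence = %.3f\n", (double)local / faults);
	return 0;
}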

> This can measure if the placement policy is converging and how fast it is
> doing it.
> 
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Acked-by: Rik van Riel <riel@redhat.com>
> ---
>  include/linux/vm_event_item.h |    6 ++++++
>  include/linux/vmstat.h        |    8 ++++++++
>  mm/huge_memory.c              |    5 +++++
>  mm/memory.c                   |   12 ++++++++++++
>  mm/mempolicy.c                |    2 ++
>  mm/migrate.c                  |    3 ++-
>  mm/vmstat.c                   |    6 ++++++
>  7 files changed, 41 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index a1f750b..dded0af 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -38,6 +38,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  		KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY,
>  		KSWAPD_SKIP_CONGESTION_WAIT,
>  		PAGEOUTRUN, ALLOCSTALL, PGROTATED,
> +#ifdef CONFIG_BALANCE_NUMA
> +		NUMA_PTE_UPDATES,
> +		NUMA_HINT_FAULTS,
> +		NUMA_HINT_FAULTS_LOCAL,
> +		NUMA_PAGE_MIGRATE,
> +#endif
>  #ifdef CONFIG_MIGRATION
>  		PGMIGRATE_SUCCESS, PGMIGRATE_FAIL,
>  #endif
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> index 92a86b2..dffccfa 100644
> --- a/include/linux/vmstat.h
> +++ b/include/linux/vmstat.h
> @@ -80,6 +80,14 @@ static inline void vm_events_fold_cpu(int cpu)
>  
>  #endif /* CONFIG_VM_EVENT_COUNTERS */
>  
> +#ifdef CONFIG_BALANCE_NUMA
> +#define count_vm_numa_event(x)     count_vm_event(x)
> +#define count_vm_numa_events(x, y) count_vm_events(x, y)
> +#else
> +#define count_vm_numa_event(x) do {} while (0)
> +#define count_vm_numa_events(x, y) do {} while (0)
> +#endif /* CONFIG_BALANCE_NUMA */
> +
>  #define __count_zone_vm_events(item, zone, delta) \
>  		__count_vm_events(item##_NORMAL - ZONE_NORMAL + \
>  		zone_idx(zone), delta)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index b3d4c4b..66e73cc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1025,6 +1025,7 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	struct page *page = NULL;
>  	unsigned long haddr = addr & HPAGE_PMD_MASK;
>  	int target_nid;
> +	int current_nid = -1;
>  
>  	spin_lock(&mm->page_table_lock);
>  	if (unlikely(!pmd_same(pmd, *pmdp)))
> @@ -1033,6 +1034,10 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	page = pmd_page(pmd);
>  	get_page(page);
>  	spin_unlock(&mm->page_table_lock);
> +	current_nid = page_to_nid(page);
> +	count_vm_numa_event(NUMA_HINT_FAULTS);
> +	if (current_nid == numa_node_id())
> +		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>  
>  	target_nid = mpol_misplaced(page, vma, haddr);
>  	if (target_nid == -1)
> diff --git a/mm/memory.c b/mm/memory.c
> index 1d6f85a..47f5dd1 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3477,6 +3477,7 @@ int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	set_pte_at(mm, addr, ptep, pte);
>  	update_mmu_cache(vma, addr, ptep);
>  
> +	count_vm_numa_event(NUMA_HINT_FAULTS);
>  	page = vm_normal_page(vma, addr, pte);
>  	if (!page) {
>  		pte_unmap_unlock(ptep, ptl);
> @@ -3485,6 +3486,8 @@ int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  
>  	get_page(page);
>  	current_nid = page_to_nid(page);
> +	if (current_nid == numa_node_id())
> +		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>  	target_nid = mpol_misplaced(page, vma, addr);
>  	pte_unmap_unlock(ptep, ptl);
>  	if (target_nid == -1) {
> @@ -3517,6 +3520,9 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	unsigned long offset;
>  	spinlock_t *ptl;
>  	bool numa = false;
> +	int local_nid = numa_node_id();
> +	unsigned long nr_faults = 0;
> +	unsigned long nr_faults_local = 0;
>  
>  	spin_lock(&mm->page_table_lock);
>  	pmd = *pmdp;
> @@ -3565,10 +3571,16 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  		curr_nid = page_to_nid(page);
>  		task_numa_fault(curr_nid, 1);
>  
> +		nr_faults++;
> +		if (curr_nid == local_nid)
> +			nr_faults_local++;
> +
>  		pte = pte_offset_map_lock(mm, pmdp, addr, &ptl);
>  	}
>  	pte_unmap_unlock(orig_pte, ptl);
>  
> +	count_vm_numa_events(NUMA_HINT_FAULTS, nr_faults);
> +	count_vm_numa_events(NUMA_HINT_FAULTS_LOCAL, nr_faults_local);
>  	return 0;
>  }
>  #else
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index a7a62fe..516491f 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -583,6 +583,8 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>  	BUILD_BUG_ON(_PAGE_NUMA != _PAGE_PROTNONE);
>  
>  	nr_updated = change_protection(vma, addr, end, vma->vm_page_prot, 0, 1);
> +	if (nr_updated)
> +		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
>  
>  	return nr_updated;
>  }
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 49878d7..4f55694 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1514,7 +1514,8 @@ int migrate_misplaced_page(struct page *page, int node)
>  		if (nr_remaining) {
>  			putback_lru_pages(&migratepages);
>  			isolated = 0;
> -		}
> +		} else
> +			count_vm_numa_event(NUMA_PAGE_MIGRATE);
>  	}
>  	BUG_ON(!list_empty(&migratepages));
>  out:
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 3a067fa..cfa386da 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -774,6 +774,12 @@ const char * const vmstat_text[] = {
>  
>  	"pgrotated",
>  
> +#ifdef CONFIG_BALANCE_NUMA
> +	"numa_pte_updates",
> +	"numa_hint_faults",
> +	"numa_hint_faults_local",
> +	"numa_pages_migrated",
> +#endif
>  #ifdef CONFIG_MIGRATION
>  	"pgmigrate_success",
>  	"pgmigrate_fail",


