* [PATCH V2] powerpc/mm: Initialize kernel pagetable memory for PTE fragments
@ 2018-06-20  9:22 Anshuman Khandual
  2018-06-20  9:30 ` Aneesh Kumar K.V
  0 siblings, 1 reply; 2+ messages in thread
From: Anshuman Khandual @ 2018-06-20  9:22 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: mpe, aneesh.kumar, benh

Kernel pagetable pages for PTE fragments never go through the standard init
sequence, which can cause inaccurate utilization statistics in interfaces
such as /proc and sysfs. The allocated page also misses out on the pagetable
lock and page flag initialization. Fix this by making sure all pages
allocated for PTE fragments, whether for a user process or the kernel, go
through the same initialization.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
Changes in V2:

- Call the destructor function during free for all cases
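
For reference, the ctor/dtor pair in question is what sets up the split
pagetable lock, the pagetable page flag and the NR_PAGETABLE accounting for
a newly allocated pagetable page. A simplified sketch of that pair, not the
exact include/linux/mm.h code; details vary with kernel version and the
split-ptlock configuration:

/* Simplified; the real helpers differ across kernel versions/configs. */
static inline bool pgtable_page_ctor(struct page *page)
{
	/* Initialise the split PTE lock carried by this page. */
	if (!ptlock_init(page))
		return false;
	/* Mark and account the page as a pagetable page. */
	__SetPageTable(page);
	inc_zone_page_state(page, NR_PAGETABLE);
	return true;
}

static inline void pgtable_page_dtor(struct page *page)
{
	pte_lock_deinit(page);
	__ClearPageTable(page);
	dec_zone_page_state(page, NR_PAGETABLE);
}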

 arch/powerpc/mm/pgtable-book3s64.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index c1f4ca4..a820ee6 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -335,23 +335,21 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
 
 static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 {
+	gfp_t gfp_mask = PGALLOC_GFP;
 	void *ret = NULL;
 	struct page *page;
 
-	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
-			return NULL;
-		if (!pgtable_page_ctor(page)) {
-			__free_page(page);
-			return NULL;
-		}
-	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
-			return NULL;
-	}
+	if (!kernel)
+		gfp_mask |= __GFP_ACCOUNT;
 
+	page = alloc_page(gfp_mask);
+	if (!page)
+		return NULL;
+
+	if (!pgtable_page_ctor(page)) {
+		__free_page(page);
+		return NULL;
+	}
 
 	ret = page_address(page);
 	/*
@@ -391,8 +389,7 @@ void pte_fragment_free(unsigned long *table, int kernel)
 	struct page *page = virt_to_page(table);
 
 	if (put_page_testzero(page)) {
-		if (!kernel)
-			pgtable_page_dtor(page);
+		pgtable_page_dtor(page);
 		free_unref_page(page);
 	}
 }
-- 
1.8.3.1
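
For context on where the kernel flag comes from: pte_fragment_alloc() and
pte_fragment_free() are reached through the pte_alloc_one*()/pte_free*()
wrappers in arch/powerpc/include/asm/book3s/64/pgalloc.h, which at this
point looked roughly like the sketch below; exact prototypes vary between
kernel versions.

/* Simplified sketch; actual prototypes differ between kernel versions. */
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
					  unsigned long address)
{
	return (pte_t *)pte_fragment_alloc(mm, address, 1);
}

static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
				      unsigned long address)
{
	return (pgtable_t)pte_fragment_alloc(mm, address, 0);
}

static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
	pte_fragment_free((unsigned long *)pte, 1);
}

static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
	pte_fragment_free((unsigned long *)ptepage, 0);
}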


* Re: [PATCH V2] powerpc/mm: Initialize kernel pagetable memory for PTE fragments
  2018-06-20  9:22 [PATCH V2] powerpc/mm: Initialize kernel pagetable memory for PTE fragments Anshuman Khandual
@ 2018-06-20  9:30 ` Aneesh Kumar K.V
  0 siblings, 0 replies; 2+ messages in thread
From: Aneesh Kumar K.V @ 2018-06-20  9:30 UTC (permalink / raw)
  To: Anshuman Khandual, linuxppc-dev

On 06/20/2018 02:52 PM, Anshuman Khandual wrote:
> Kernel pagetable pages for PTE fragments never go through the standard init
> sequence, which can cause inaccurate utilization statistics in interfaces
> such as /proc and sysfs. The allocated page also misses out on the pagetable
> lock and page flag initialization. Fix this by making sure all pages
> allocated for PTE fragments, whether for a user process or the kernel, go
> through the same initialization.
> 
> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> ---
> Changes in V2:
> 
> - Call the destructor function during free for all cases
> 
>   arch/powerpc/mm/pgtable-book3s64.c | 27 ++++++++++++---------------
>   1 file changed, 12 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
> index c1f4ca4..a820ee6 100644
> --- a/arch/powerpc/mm/pgtable-book3s64.c
> +++ b/arch/powerpc/mm/pgtable-book3s64.c
> @@ -335,23 +335,21 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
>   
>   static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
>   {
> +	gfp_t gfp_mask = PGALLOC_GFP;
>   	void *ret = NULL;
>   	struct page *page;
>   
> -	if (!kernel) {
> -		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
> -		if (!page)
> -			return NULL;
> -		if (!pgtable_page_ctor(page)) {
> -			__free_page(page);
> -			return NULL;
> -		}
> -	} else {
> -		page = alloc_page(PGALLOC_GFP);
> -		if (!page)
> -			return NULL;
> -	}
> +	if (!kernel)
> +		gfp_mask |= __GFP_ACCOUNT;
>   
> +	page = alloc_page(gfp_mask);
> +	if (!page)
> +		return NULL;
> +
> +	if (!pgtable_page_ctor(page)) {
> +		__free_page(page);
> +		return NULL;
> +	}
>   
>   	ret = page_address(page);
>   	/*
> @@ -391,8 +389,7 @@ void pte_fragment_free(unsigned long *table, int kernel)
>   	struct page *page = virt_to_page(table);
>   
>   	if (put_page_testzero(page)) {
> -		if (!kernel)
> -			pgtable_page_dtor(page);
> +		pgtable_page_dtor(page);
>   		free_unref_page(page);
>   	}
>   }
> 

If we really are going to do this, maybe we can do it like
alloc_for_pmdcache: kill the kernel arg and compare against init_mm?

-aneesh
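
A rough sketch of what that suggestion could look like; this is an
illustration of the idea, not code from this thread, and anything not
already in the patch above is hypothetical:

/* Sketch of the suggestion (drop the kernel arg, key off init_mm). */
static pte_t *__alloc_for_ptecache(struct mm_struct *mm)
{
	gfp_t gfp_mask = PGALLOC_GFP;
	struct page *page;

	/*
	 * Only user pagetables are charged to the memcg; allocations on
	 * behalf of init_mm are kernel pagetables and stay unaccounted.
	 */
	if (mm != &init_mm)
		gfp_mask |= __GFP_ACCOUNT;

	page = alloc_page(gfp_mask);
	if (!page)
		return NULL;

	if (!pgtable_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}

	/* Remainder as in the existing __alloc_for_ptecache(). */
	return (pte_t *)page_address(page);
}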

