* [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area
@ 2012-01-13 11:44 Xiao Guangrong
  2012-01-13 11:44 ` [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown Xiao Guangrong
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-13 11:44 UTC (permalink / raw)
  To: Andrew Morton; +Cc: William Irwin, linux-mm, LKML

Use and update cached_hole_size and free_area_cache properly to speed up
finding a free region

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 fs/hugetlbfs/inode.c |   15 +++++++++++----
 1 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index e425ad9..9e0794a 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -154,10 +154,12 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 			return addr;
 	}

-	start_addr = mm->free_area_cache;
-
-	if (len <= mm->cached_hole_size)
+	if (len > mm->cached_hole_size)
+		start_addr = mm->free_area_cache;
+	else {
 		start_addr = TASK_UNMAPPED_BASE;
+		mm->cached_hole_size = 0;
+	}

 full_search:
 	addr = ALIGN(start_addr, huge_page_size(h));
@@ -171,13 +173,18 @@ full_search:
 			 */
 			if (start_addr != TASK_UNMAPPED_BASE) {
 				start_addr = TASK_UNMAPPED_BASE;
+				mm->cached_hole_size = 0;
 				goto full_search;
 			}
 			return -ENOMEM;
 		}

-		if (!vma || addr + len <= vma->vm_start)
+		if (!vma || addr + len <= vma->vm_start) {
+			mm->free_area_cache = addr + len;
 			return addr;
+		}
+		if (addr + mm->cached_hole_size < vma->vm_start)
+			mm->cached_hole_size = vma->vm_start - addr;
 		addr = ALIGN(vma->vm_end, huge_page_size(h));
 	}
 }
-- 
1.7.7.5
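
To see the scheme being fixed in isolation: free_area_cache remembers where
the last successful search ended, and cached_hole_size remembers the largest
hole skipped on the way up, so a request no larger than some skipped hole
must restart from the base.  Below is a minimal user-space sketch of the
patched logic, assuming a sorted singly linked vma list and made-up constants
in place of TASK_UNMAPPED_BASE/TASK_SIZE (huge-page alignment is omitted):

#include <stdio.h>

#define UNMAPPED_BASE	0x1000UL	/* stand-in for TASK_UNMAPPED_BASE */
#define SPACE_END	0x100000UL	/* stand-in for TASK_SIZE */

struct vma { unsigned long start, end; struct vma *next; };

static unsigned long free_area_cache = UNMAPPED_BASE;
static unsigned long cached_hole_size;

/* first vma with end > addr; models find_vma() on a sorted list */
static struct vma *find_vma(struct vma *head, unsigned long addr)
{
	struct vma *v;

	for (v = head; v; v = v->next)
		if (v->end > addr)
			return v;
	return NULL;
}

/* bottom-up first fit, mirroring the patched hugetlb_get_unmapped_area() */
static long get_unmapped(struct vma *vmas, unsigned long len)
{
	unsigned long addr, start_addr;

	if (len > cached_hole_size) {
		start_addr = free_area_cache;	/* no skipped hole can fit us */
	} else {
		start_addr = UNMAPPED_BASE;	/* some skipped hole may fit */
		cached_hole_size = 0;
	}

full_search:
	addr = start_addr;
	for (;;) {
		struct vma *v = find_vma(vmas, addr);

		if (addr + len > SPACE_END) {	/* ran out of address space */
			if (start_addr != UNMAPPED_BASE) {
				start_addr = UNMAPPED_BASE;
				cached_hole_size = 0;
				goto full_search;
			}
			return -1;		/* -ENOMEM */
		}
		if (!v || addr + len <= v->start) {
			free_area_cache = addr + len;	/* hint for next time */
			return (long)addr;
		}
		/* remember the largest hole we skipped */
		if (addr + cached_hole_size < v->start)
			cached_hole_size = v->start - addr;
		addr = v->end;
	}
}

int main(void)
{
	struct vma b = { 0x3000, 0x5000, NULL };
	struct vma a = { 0x1000, 0x2000, &b };

	printf("%lx\n", (unsigned long)get_unmapped(&a, 0x1000)); /* 2000 */
	printf("%lx\n", (unsigned long)get_unmapped(&a, 0x1000)); /* 5000 */
	return 0;
}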



* [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown
  2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
@ 2012-01-13 11:44 ` Xiao Guangrong
  2012-03-07 22:01   ` Andrew Morton
  2012-01-13 11:45 ` [PATCH 3/5] hugetlb: try to search again if it is really needed Xiao Guangrong
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-13 11:44 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Andrew Morton, William Irwin, linux-mm, LKML

After calling find_vma_prev(mm, addr, &prev_vma), the following condition is
always true:
	!prev_vma || (addr >= prev_vma->vm_end)
so we can happily drop prev_vma and use find_vma() instead of find_vma_prev().

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/mm/hugetlbpage.c |   21 ++++++---------------
 1 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index f581a18..e12debc 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -308,7 +308,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 {
 	struct hstate *h = hstate_file(file);
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev_vma;
+	struct vm_area_struct *vma;
 	unsigned long base = mm->mmap_base, addr = addr0;
 	unsigned long largest_hole = mm->cached_hole_size;
 	int first_time = 1;
@@ -333,24 +333,15 @@ try_again:
 		 * Lookup failure means no vma is above this address,
 		 * i.e. return with success:
 		 */
-		if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
-			return addr;
-
-		/*
-		 * new region fits between prev_vma->vm_end and
-		 * vma->vm_start, use it:
-		 */
-		if (addr + len <= vma->vm_start &&
-		            (!prev_vma || (addr >= prev_vma->vm_end))) {
+		vma = find_vma(mm, addr);
+		if (!vma || addr + len <= vma->vm_start) {
 			/* remember the address as a hint for next time */
 		        mm->cached_hole_size = largest_hole;
 		        return (mm->free_area_cache = addr);
-		} else {
+		} else if (mm->free_area_cache == vma->vm_end) {
 			/* pull free_area_cache down to the first hole */
-		        if (mm->free_area_cache == vma->vm_end) {
-				mm->free_area_cache = vma->vm_start;
-				mm->cached_hole_size = largest_hole;
-			}
+			mm->free_area_cache = vma->vm_start;
+			mm->cached_hole_size = largest_hole;
 		}

 		/* remember the largest hole we saw so far */
-- 
1.7.7.5
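
The dropped condition really is redundant: find_vma_prev() returns the first
vma whose vm_end lies above addr and reports the vma just before it, which by
construction ends at or below addr.  A small user-space sketch, assuming a
sorted singly linked list in place of the kernel's rbtree, checks the
invariant over a toy layout:

#include <assert.h>
#include <stddef.h>

struct vma { unsigned long start, end; struct vma *next; };

/* first vma with end > addr; *prev gets the vma just before it */
static struct vma *find_vma_prev(struct vma *head, unsigned long addr,
				 struct vma **prev)
{
	struct vma *v;

	*prev = NULL;
	for (v = head; v; *prev = v, v = v->next)
		if (v->end > addr)
			return v;
	return NULL;
}

int main(void)
{
	struct vma c = { 0x7000, 0x8000, NULL };
	struct vma b = { 0x4000, 0x5000, &c };
	struct vma a = { 0x1000, 0x2000, &b };
	unsigned long addr;

	for (addr = 0; addr < 0x9000; addr += 0x100) {
		struct vma *prev;
		struct vma *v = find_vma_prev(&a, addr, &prev);

		/* the condition the patch relies on: never fails */
		if (v)
			assert(!prev || addr >= prev->end);
	}
	return 0;
}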



* [PATCH 3/5] hugetlb: try to search again if it is really needed
  2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
  2012-01-13 11:44 ` [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown Xiao Guangrong
@ 2012-01-13 11:45 ` Xiao Guangrong
  2012-02-01 22:43   ` Andrew Morton
  2012-01-13 11:46 ` [PATCH 4/5] mm: do not reset cached_hole_size when vma is unmapped Xiao Guangrong
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-13 11:45 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Andrew Morton, William Irwin, linux-mm, LKML

Search again only if some holes may have been skipped in the first pass

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/mm/hugetlbpage.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index e12debc..6bf5735 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -309,9 +309,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	struct hstate *h = hstate_file(file);
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	unsigned long base = mm->mmap_base, addr = addr0;
+	unsigned long base = mm->mmap_base, addr = addr0, start_addr;
 	unsigned long largest_hole = mm->cached_hole_size;
-	int first_time = 1;

 	/* don't allow allocations above current base */
 	if (mm->free_area_cache > base)
@@ -322,6 +321,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		mm->free_area_cache  = base;
 	}
 try_again:
+	start_addr = mm->free_area_cache;
+
 	/* make sure it can fit in the remaining address space */
 	if (mm->free_area_cache < len)
 		goto fail;
@@ -357,10 +358,9 @@ fail:
 	 * if hint left us with no space for the requested
 	 * mapping then try again:
 	 */
-	if (first_time) {
+	if (start_addr != base) {
 		mm->free_area_cache = base;
 		largest_hole = 0;
-		first_time = 0;
 		goto try_again;
 	}
 	/*
-- 
1.7.7.5
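
The start_addr test also bounds the retries: the retry path resets
free_area_cache to base, so the second pass records start_addr == base and
the guard refuses a third pass.  A runnable sketch of just this control flow,
with the actual search stubbed out to always fail and made-up constants (not
the kernel code):

#include <assert.h>

static unsigned long base = 0x100000;		/* stand-in for mm->mmap_base */
static unsigned long free_area_cache = 0x40000;	/* a previously lowered hint */
static unsigned long largest_hole;

/* stub: pretend every candidate range is occupied */
static int range_is_free(unsigned long addr, unsigned long len)
{
	(void)addr;
	(void)len;
	return 0;
}

static long search_topdown(unsigned long len)
{
	unsigned long start_addr;
	int passes = 0;

try_again:
	passes++;
	start_addr = free_area_cache;
	if (free_area_cache >= len && range_is_free(free_area_cache - len, len))
		return (long)(free_area_cache - len);

	/* retry only if the first pass may have skipped holes */
	if (start_addr != base) {
		free_area_cache = base;
		largest_hole = 0;
		goto try_again;
	}
	assert(passes <= 2);	/* second pass started at base: no third pass */
	return -1;
}

int main(void)
{
	return search_topdown(0x2000) == -1 ? 0 : 1;
}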



* [PATCH 4/5] mm: do not reset cached_hole_size when vma is unmapped
  2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
  2012-01-13 11:44 ` [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown Xiao Guangrong
  2012-01-13 11:45 ` [PATCH 3/5] hugetlb: try to search again if it is really needed Xiao Guangrong
@ 2012-01-13 11:46 ` Xiao Guangrong
  2012-01-13 11:47 ` [PATCH 5/5] mm: search from free_area_cache for the bigger size Xiao Guangrong
  2012-01-31  6:05 ` [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
  4 siblings, 0 replies; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-13 11:46 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Andrew Morton, William Irwin, linux-mm, LKML

In the current code, cached_hole_size is set to the maximal value if the
unmapped vma is below free_area_cache, so the next search will start from the
base address.

Actually, we can keep cached_hole_size, so that if the next required size is
larger than cached_hole_size, the search can start from free_area_cache.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 mm/mmap.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 3f758c7..970f572 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1423,10 +1423,8 @@ void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
 	/*
 	 * Is this a new hole at the lowest possible address?
 	 */
-	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache) {
+	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache)
 		mm->free_area_cache = addr;
-		mm->cached_hole_size = ~0UL;
-	}
 }

 /*
-- 
1.7.7.5
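
The behavioral difference is easiest to see in isolation: unmapping a region
below the cache pulls free_area_cache down to it, while cached_hole_size
keeps describing the largest hole already seen, so a later request bigger
than that hole can still start from the cache.  A sketch of the patched hint
update, with made-up values standing in for TASK_UNMAPPED_BASE and the
current hints:

#define UNMAPPED_BASE	0x1000UL	/* stand-in for TASK_UNMAPPED_BASE */

static unsigned long free_area_cache = 0x80000;
static unsigned long cached_hole_size = 0x2000;	/* largest hole seen so far */

/* mirrors arch_unmap_area() after the patch */
static void unmap_area_hint(unsigned long addr)
{
	/* a new hole below the cache: later searches may start there... */
	if (addr >= UNMAPPED_BASE && addr < free_area_cache)
		free_area_cache = addr;
	/*
	 * ...but cached_hole_size is left alone.  The old code set it to
	 * ~0UL here, which forced every later search to restart from
	 * UNMAPPED_BASE; keeping it lets any request larger than 0x2000
	 * begin at free_area_cache directly.
	 */
}

int main(void)
{
	unmap_area_hint(0x30000);
	/* the cache moved down, the hole hint survived */
	return (free_area_cache == 0x30000 && cached_hole_size == 0x2000) ? 0 : 1;
}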



* [PATCH 5/5] mm: search from free_area_cache for the bigger size
  2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
                   ` (2 preceding siblings ...)
  2012-01-13 11:46 ` [PATCH 4/5] mm: do not reset cached_hole_size when vma is unmapped Xiao Guangrong
@ 2012-01-13 11:47 ` Xiao Guangrong
  2012-02-01 22:44   ` Andrew Morton
  2012-01-31  6:05 ` [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
  4 siblings, 1 reply; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-13 11:47 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Andrew Morton, William Irwin, linux-mm, LKML

If the required size is bigger than cached_hole_size, it is better to search
from free_area_cache: it is easier to find a free region there, especially
for a 64-bit process whose address space is large enough.

Do it just as hugetlb_get_unmapped_area_topdown() in arch/x86/mm/hugetlbpage.c

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kernel/sys_x86_64.c |   34 +++++++++++++++++-----------------
 mm/mmap.c                    |   36 +++++++++++++++++++++---------------
 2 files changed, 38 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 0514890..ef59642 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -195,7 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	unsigned long addr = addr0;
+	unsigned long addr = addr0, start_addr;

 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
@@ -223,25 +223,14 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		mm->free_area_cache = mm->mmap_base;
 	}

+try_again:
 	/* either no address requested or can't fit in requested address hole */
-	addr = mm->free_area_cache;
-
-	/* make sure it can fit in the remaining address space */
-	if (addr > len) {
-		unsigned long tmp_addr = align_addr(addr - len, filp,
-						    ALIGN_TOPDOWN);
-
-		vma = find_vma(mm, tmp_addr);
-		if (!vma || tmp_addr + len <= vma->vm_start)
-			/* remember the address as a hint for next time */
-			return mm->free_area_cache = tmp_addr;
-	}
-
-	if (mm->mmap_base < len)
-		goto bottomup;
+	start_addr = addr = mm->free_area_cache;

-	addr = mm->mmap_base-len;
+	if (addr < len)
+		goto fail;

+	addr -= len;
 	do {
 		addr = align_addr(addr, filp, ALIGN_TOPDOWN);

@@ -263,6 +252,17 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		addr = vma->vm_start-len;
 	} while (len < vma->vm_start);

+fail:
+	/*
+	 * if hint left us with no space for the requested
+	 * mapping then try again:
+	 */
+	if (start_addr != mm->mmap_base) {
+		mm->free_area_cache = mm->mmap_base;
+		mm->cached_hole_size = 0;
+		goto try_again;
+	}
+
 bottomup:
 	/*
 	 * A failed mmap() very likely causes application failure,
diff --git a/mm/mmap.c b/mm/mmap.c
index 970f572..e3c4b97 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1439,7 +1439,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	unsigned long addr = addr0;
+	unsigned long addr = addr0, start_addr;

 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
@@ -1463,22 +1463,14 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
  		mm->free_area_cache = mm->mmap_base;
  	}

+try_again:
 	/* either no address requested or can't fit in requested address hole */
-	addr = mm->free_area_cache;
+	start_addr = addr = mm->free_area_cache;

-	/* make sure it can fit in the remaining address space */
-	if (addr > len) {
-		vma = find_vma(mm, addr-len);
-		if (!vma || addr <= vma->vm_start)
-			/* remember the address as a hint for next time */
-			return (mm->free_area_cache = addr-len);
-	}
-
-	if (mm->mmap_base < len)
-		goto bottomup;
-
-	addr = mm->mmap_base-len;
+	if (addr < len)
+		goto fail;

+	addr -= len;
 	do {
 		/*
 		 * Lookup failure means no vma is above this address,
@@ -1498,7 +1490,21 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		addr = vma->vm_start-len;
 	} while (len < vma->vm_start);

-bottomup:
+fail:
+	/*
+	 * if hint left us with no space for the requested
+	 * mapping then try again:
+	 *
+	 * Note: this differs from the bottom-up case, which does a
+	 * full linear search; here we use find_vma(), which can skip
+	 * over some holes.
+	 */
+	if (start_addr != mm->mmap_base) {
+		mm->free_area_cache = mm->mmap_base;
+		mm->cached_hole_size = 0;
+		goto try_again;
+	}
+
 	/*
 	 * A failed mmap() very likely causes application failure,
 	 * so fall back to the bottom-up function here. This scenario
-- 
1.7.7.5
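
Taken together with patch 3, the reworked top-down path is: start at
free_area_cache, walk downwards with find_vma(), and retry once from
mmap_base only if the first pass started lower.  A compressed user-space
model of the patched mm/mmap.c logic, assuming the sorted-list find_vma()
from the first sketch (constants are made up; align_addr() and the security
checks are omitted):

#include <stddef.h>

struct vma { unsigned long start, end; struct vma *next; };

static unsigned long mmap_base = 0x100000;
static unsigned long free_area_cache = 0x100000;
static unsigned long cached_hole_size;

/* first vma with end > addr; models find_vma() on a sorted list */
static struct vma *find_vma(struct vma *head, unsigned long addr)
{
	struct vma *v;

	for (v = head; v; v = v->next)
		if (v->end > addr)
			return v;
	return NULL;
}

/* mirrors the patched arch_get_unmapped_area_topdown() in mm/mmap.c */
static long get_unmapped_topdown(struct vma *vmas, unsigned long len)
{
	unsigned long addr, start_addr;

try_again:
	start_addr = addr = free_area_cache;
	if (addr < len)
		goto fail;
	addr -= len;
	for (;;) {
		struct vma *v = find_vma(vmas, addr);

		if (!v || addr + len <= v->start) {
			free_area_cache = addr;	/* hint for next time */
			return (long)addr;
		}
		/* remember the largest hole we saw so far */
		if (addr + cached_hole_size < v->start)
			cached_hole_size = v->start - addr;
		if (v->start <= len)
			break;			/* no room below this vma */
		addr = v->start - len;
	}
fail:
	/* the first pass started below the top: retry once from there */
	if (start_addr != mmap_base) {
		free_area_cache = mmap_base;
		cached_hole_size = 0;
		goto try_again;
	}
	return -1;	/* the kernel falls back to a bottom-up search here */
}

int main(void)
{
	struct vma a = { 0x80000, 0x90000, NULL };

	return get_unmapped_topdown(&a, 0x10000) == 0xF0000 ? 0 : 1;
}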



* Re: [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area
  2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
                   ` (3 preceding siblings ...)
  2012-01-13 11:47 ` [PATCH 5/5] mm: search from free_area_cache for the bigger size Xiao Guangrong
@ 2012-01-31  6:05 ` Xiao Guangrong
  2012-02-01 22:43   ` Andrew Morton
  4 siblings, 1 reply; 13+ messages in thread
From: Xiao Guangrong @ 2012-01-31  6:05 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Andrew Morton, William Irwin, linux-mm, LKML

On 01/13/2012 07:44 PM, Xiao Guangrong wrote:

> Use and update cached_hole_size and free_area_cache properly to speed up
> finding a free region
> 


Ping...???



* Re: [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area
  2012-01-31  6:05 ` [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
@ 2012-02-01 22:43   ` Andrew Morton
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2012-02-01 22:43 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: William Irwin, linux-mm, LKML

On Tue, 31 Jan 2012 14:05:50 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> On 01/13/2012 07:44 PM, Xiao Guangrong wrote:
> 
> > Use and update cached_hole_size and free_area_cache properly to speed up
> > finding a free region
> > 
> 
> 
> Ping...???

I'd saved the patches away, hoping that someone who works on hugetlb
would comment.  They didn't.  We haven't heard from wli in a long time.


* Re: [PATCH 3/5] hugetlb: try to search again if it is really needed
  2012-01-13 11:45 ` [PATCH 3/5] hugetlb: try to search again if it is really needed Xiao Guangrong
@ 2012-02-01 22:43   ` Andrew Morton
  2012-02-02  5:19     ` Xiao Guangrong
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2012-02-01 22:43 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: William Irwin, linux-mm, LKML

On Fri, 13 Jan 2012 19:45:45 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> Search again only if some holes may have been skipped in the first pass
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
>  arch/x86/mm/hugetlbpage.c |    8 ++++----
>  1 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index e12debc..6bf5735 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -309,9 +309,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
>  	struct hstate *h = hstate_file(file);
>  	struct mm_struct *mm = current->mm;
>  	struct vm_area_struct *vma;
> -	unsigned long base = mm->mmap_base, addr = addr0;
> +	unsigned long base = mm->mmap_base, addr = addr0, start_addr;

grr.  The multiple-definitions-per-line thing is ugly, makes for more
patch conflicts and reduces opportunities to add useful comments.

--- a/arch/x86/mm/hugetlbpage.c~hugetlb-try-to-search-again-if-it-is-really-needed-fix
+++ a/arch/x86/mm/hugetlbpage.c
@@ -309,7 +309,9 @@ static unsigned long hugetlb_get_unmappe
 	struct hstate *h = hstate_file(file);
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	unsigned long base = mm->mmap_base, addr = addr0, start_addr;
+	unsigned long base = mm->mmap_base;
+	unsigned long addr = addr0;
+	unsigned long start_addr;
 	unsigned long largest_hole = mm->cached_hole_size;
 
 	/* don't allow allocations above current base */
_


>  	unsigned long largest_hole = mm->cached_hole_size;
> -	int first_time = 1;
> 
>  	/* don't allow allocations above current base */
>  	if (mm->free_area_cache > base)
> @@ -322,6 +321,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
>  		mm->free_area_cache  = base;
>  	}
>  try_again:
> +	start_addr = mm->free_area_cache;
> +
>  	/* make sure it can fit in the remaining address space */
>  	if (mm->free_area_cache < len)
>  		goto fail;
> @@ -357,10 +358,9 @@ fail:
>  	 * if hint left us with no space for the requested
>  	 * mapping then try again:
>  	 */
> -	if (first_time) {
> +	if (start_addr != base) {
>  		mm->free_area_cache = base;
>  		largest_hole = 0;
> -		first_time = 0;
>  		goto try_again;

The code used to retry a single time.  With this change the retrying is
potentially infinite.  What is the reason for this change?  What is the
potential for causing a lockup?



* Re: [PATCH 5/5] mm: search from free_area_cache for the bigger size
  2012-01-13 11:47 ` [PATCH 5/5] mm: search from free_area_cache for the bigger size Xiao Guangrong
@ 2012-02-01 22:44   ` Andrew Morton
  2012-02-02  7:07     ` Xiao Guangrong
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2012-02-01 22:44 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: William Irwin, linux-mm, LKML

On Fri, 13 Jan 2012 19:47:31 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> If the required size is bigger than cached_hole_size, it is better to search
> from free_area_cache: it is easier to find a free region there, especially
> for a 64-bit process whose address space is large enough.
> 
> Do it just as hugetlb_get_unmapped_area_topdown() in arch/x86/mm/hugetlbpage.c

Can this cause additional fragmentation of the virtual address region? 
If so, what might be the implications of this?



* Re: [PATCH 3/5] hugetlb: try to search again if it is really needed
  2012-02-01 22:43   ` Andrew Morton
@ 2012-02-02  5:19     ` Xiao Guangrong
  0 siblings, 0 replies; 13+ messages in thread
From: Xiao Guangrong @ 2012-02-02  5:19 UTC (permalink / raw)
  To: Andrew Morton; +Cc: William Irwin, linux-mm, LKML

Hi Andrew,

Thanks for your review!

On 02/02/2012 06:43 AM, Andrew Morton wrote:

> On Fri, 13 Jan 2012 19:45:45 +0800
> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> 
>> Search again only if some holes may have been skipped in the first pass
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>> ---
>>  arch/x86/mm/hugetlbpage.c |    8 ++++----
>>  1 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>> index e12debc..6bf5735 100644
>> --- a/arch/x86/mm/hugetlbpage.c
>> +++ b/arch/x86/mm/hugetlbpage.c
>> @@ -309,9 +309,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
>>  	struct hstate *h = hstate_file(file);
>>  	struct mm_struct *mm = current->mm;
>>  	struct vm_area_struct *vma;
>> -	unsigned long base = mm->mmap_base, addr = addr0;
>> +	unsigned long base = mm->mmap_base, addr = addr0, start_addr;
> 
> grr.  The multiple-definitions-per-line thing is ugly, makes for more
> patch conflicts and reduces opportunities to add useful comments.


Yes, thanks.

>>  try_again:
>> +	start_addr = mm->free_area_cache;
>> +
>>  	/* make sure it can fit in the remaining address space */
>>  	if (mm->free_area_cache < len)
>>  		goto fail;
>> @@ -357,10 +358,9 @@ fail:
>>  	 * if hint left us with no space for the requested
>>  	 * mapping then try again:
>>  	 */
>> -	if (first_time) {
>> +	if (start_addr != base) {
>>  		mm->free_area_cache = base;
>>  		largest_hole = 0;
>> -		first_time = 0;
>>  		goto try_again;
> 
> The code used to retry a single time.  With this change the retrying is
> potentially infinite.  What is the reason for this change?  What is the
> potential for causing a lockup?
> 

Actually, it is not infinite: after the retry we set "mm->free_area_cache = base",
so "start_addr" will be set to "base" in the second search, and the condition for
"goto try_again" will no longer be satisfied.



* Re: [PATCH 5/5] mm: search from free_area_cache for the bigger size
  2012-02-01 22:44   ` Andrew Morton
@ 2012-02-02  7:07     ` Xiao Guangrong
  0 siblings, 0 replies; 13+ messages in thread
From: Xiao Guangrong @ 2012-02-02  7:07 UTC (permalink / raw)
  To: Andrew Morton; +Cc: William Irwin, linux-mm, LKML

On 02/02/2012 06:44 AM, Andrew Morton wrote:

> On Fri, 13 Jan 2012 19:47:31 +0800
> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> 
>> If the required size is bigger than cached_hole_size, it is better to search
>> from free_area_cache: it is easier to find a free region there, especially
>> for a 64-bit process whose address space is large enough.
>>
>> Do it just as hugetlb_get_unmapped_area_topdown() in arch/x86/mm/hugetlbpage.c
> 
> Can this cause additional fragmentation of the virtual address region? 
> If so, what might be the implications of this?


Hmm, I think it is not bad since we have cached_hole_size, and this approach is
also used by other functions and architectures (arch_get_unmapped_area,
hugetlb_get_unmapped_area_bottomup, hugetlb_get_unmapped_area_topdown, ...).



* Re: [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown
  2012-01-13 11:44 ` [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown Xiao Guangrong
@ 2012-03-07 22:01   ` Andrew Morton
  2012-03-08  2:28     ` Xiao Guangrong
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2012-03-07 22:01 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: William Irwin, linux-mm, LKML

On Fri, 13 Jan 2012 19:44:53 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> After calling find_vma_prev(mm, addr, &prev_vma), the following condition is
> always true:
> 	!prev_vma || (addr >= prev_vma->vm_end)
> so we can happily drop prev_vma and use find_vma() instead of find_vma_prev().

I had to rework this patch due to 097d59106a8e4b ("vm: avoid using
find_vma_prev() unnecessarily") in mainline.  Can you please check my
handiwork?



From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Subject: hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown()

After the call find_vma_prev(mm, addr, &prev_vma), the following condition
is always true:

	!prev_vma || (addr >= prev_vma->vm_end)

so we can happily drop prev_vma and use find_vma() instead of find_vma_prev().

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/x86/mm/hugetlbpage.c |   21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff -puN arch/x86/mm/hugetlbpage.c~hugetlb-drop-prev_vma-in-hugetlb_get_unmapped_area_topdown arch/x86/mm/hugetlbpage.c
--- a/arch/x86/mm/hugetlbpage.c~hugetlb-drop-prev_vma-in-hugetlb_get_unmapped_area_topdown
+++ a/arch/x86/mm/hugetlbpage.c
@@ -308,7 +308,7 @@ static unsigned long hugetlb_get_unmappe
 {
 	struct hstate *h = hstate_file(file);
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev_vma;
+	struct vm_area_struct *vma;
 	unsigned long base = mm->mmap_base, addr = addr0;
 	unsigned long largest_hole = mm->cached_hole_size;
 	int first_time = 1;
@@ -334,25 +334,14 @@ try_again:
 		 * i.e. return with success:
 		 */
 		vma = find_vma(mm, addr);
-		if (!vma)
-			return addr;
-
-		/*
-		 * new region fits between prev_vma->vm_end and
-		 * vma->vm_start, use it:
-		 */
-		prev_vma = vma->vm_prev;
-		if (addr + len <= vma->vm_start &&
-		            (!prev_vma || (addr >= prev_vma->vm_end))) {
+		if (!vma || addr + len <= vma->vm_start) {
 			/* remember the address as a hint for next time */
 		        mm->cached_hole_size = largest_hole;
 		        return (mm->free_area_cache = addr);
-		} else {
+		} else if (mm->free_area_cache == vma->vm_end) {
 			/* pull free_area_cache down to the first hole */
-		        if (mm->free_area_cache == vma->vm_end) {
-				mm->free_area_cache = vma->vm_start;
-				mm->cached_hole_size = largest_hole;
-			}
+			mm->free_area_cache = vma->vm_start;
+			mm->cached_hole_size = largest_hole;
 		}
 
 		/* remember the largest hole we saw so far */
_



* Re: [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown
  2012-03-07 22:01   ` Andrew Morton
@ 2012-03-08  2:28     ` Xiao Guangrong
  0 siblings, 0 replies; 13+ messages in thread
From: Xiao Guangrong @ 2012-03-08  2:28 UTC (permalink / raw)
  To: Andrew Morton; +Cc: William Irwin, linux-mm, LKML

On 03/08/2012 06:01 AM, Andrew Morton wrote:

> On Fri, 13 Jan 2012 19:44:53 +0800
> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> 
>> After calling find_vma_prev(mm, addr, &prev_vma), the following condition is
>> always true:
>> 	!prev_vma || (addr >= prev_vma->vm_end)
>> so we can happily drop prev_vma and use find_vma() instead of find_vma_prev().
> 
> I had to rework this patch due to 097d59106a8e4b ("vm: avoid using
> find_vma_prev() unnecessarily") in mainline.  Can you please check my
> handiwork?
> 


It looks good to me, thanks Andrew!



Thread overview: 13 messages
2012-01-13 11:44 [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
2012-01-13 11:44 ` [PATCH 2/5] hugetlb: drop prev_vma in hugetlb_get_unmapped_area_topdown Xiao Guangrong
2012-03-07 22:01   ` Andrew Morton
2012-03-08  2:28     ` Xiao Guangrong
2012-01-13 11:45 ` [PATCH 3/5] hugetlb: try to search again if it is really needed Xiao Guangrong
2012-02-01 22:43   ` Andrew Morton
2012-02-02  5:19     ` Xiao Guangrong
2012-01-13 11:46 ` [PATCH 4/5] mm: do not reset cached_hole_size when vma is unmapped Xiao Guangrong
2012-01-13 11:47 ` [PATCH 5/5] mm: search from free_area_cache for the bigger size Xiao Guangrong
2012-02-01 22:44   ` Andrew Morton
2012-02-02  7:07     ` Xiao Guangrong
2012-01-31  6:05 ` [PATCH 1/5] hugetlbfs: fix hugetlb_get_unmapped_area Xiao Guangrong
2012-02-01 22:43   ` Andrew Morton
