* [PATCH] mm hugetlb x86: fix hugepage memory leak in mincore()
[not found] <1260172193-14397-1-git-send-email-n-horiguchi@ah.jp.nec.com>
@ 2009-12-07 7:59 ` Naoya Horiguchi
2009-12-08 22:35 ` Andrew Morton
2009-12-07 7:59 ` [PATCH 1/2] mm hugetlb x86: fix hugepage memory leak in walk_page_range() Naoya Horiguchi
2009-12-07 7:59 ` [PATCH 2/2] mm hugetlb: add hugepage support to pagemap Naoya Horiguchi
2 siblings, 1 reply; 4+ messages in thread
From: Naoya Horiguchi @ 2009-12-07 7:59 UTC (permalink / raw)
To: LKML; +Cc: hugh.dickins, linux-mm
Most callers of pmd_none_or_clear_bad() first check whether the target
page belongs to a hugepage, but mincore() and walk_page_range()
do not. So if mincore() is used on a hugepage on an x86 machine,
the hugepage memory is leaked as shown below.
This patch fixes it by extending the mincore() system call to support hugepages.
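For reference, the generic helper that clobbers the hugepage entry looked roughly like this at the time (a simplified, not independently buildable sketch of the include/asm-generic/pgtable.h helper): on x86 a hugetlb PMD carries _PAGE_PSE, so pmd_bad() treats it as corrupt and the entry is cleared, dropping the mapping without the hugepage ever being freed.

```c
/* Simplified sketch of the ~2.6.32-era generic helper.  A hugetlb PMD
 * has _PAGE_PSE set on x86, so pmd_bad() reports it as corrupt and
 * pmd_clear_bad() zaps it -- the mapping is lost, the page is not. */
static inline int pmd_none_or_clear_bad(pmd_t *pmd)
{
	if (pmd_none(*pmd))
		return 1;
	if (unlikely(pmd_bad(*pmd))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}
```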
Details
=======
My test program (leak_mincore) works as follows:
- creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
- read()/write() something on it,
- call mincore() on the first ten pages and printf() the values of *vec,
- munmap() and unlink() the file on hugetlbfs
Without my patch
----------------
$ cat /proc/meminfo| grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_mincore
vec[0] 0
vec[1] 0
vec[2] 0
vec[3] 0
vec[4] 0
vec[5] 0
vec[6] 0
vec[7] 0
vec[8] 0
vec[9] 0
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 999
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs/
$
The return values in *vec from mincore() are all 0, even though the
hugepage should be in memory, and one hugepage is still accounted as
used even though no file remains on hugetlbfs.
With my patch
-------------
$ cat /proc/meminfo| grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_mincore
vec[0] 1
vec[1] 1
vec[2] 1
vec[3] 1
vec[4] 1
vec[5] 1
vec[6] 1
vec[7] 1
vec[8] 1
vec[9] 1
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs/
$
The return values in *vec are set to 1, and no hugepage memory is leaked.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
---
mm/mincore.c | 34 ++++++++++++++++++++++++++++++++++
1 files changed, 34 insertions(+), 0 deletions(-)
diff --git a/mm/mincore.c b/mm/mincore.c
index 8cb508f..f977e3e 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -14,6 +14,7 @@
#include <linux/syscalls.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/hugetlb.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
@@ -72,6 +73,39 @@ static long do_mincore(unsigned long addr, unsigned char *vec, unsigned long pag
if (!vma || addr < vma->vm_start)
return -ENOMEM;
+ if (is_vm_hugetlb_page(vma)) {
+ struct hstate *h;
+ unsigned long nr_huge;
+ unsigned char present;
+ i = 0;
+ nr = min(pages, (vma->vm_end - addr) >> PAGE_SHIFT);
+ h = hstate_vma(vma);
+ nr_huge = ((addr + pages * PAGE_SIZE - 1) >> huge_page_shift(h))
+ - (addr >> huge_page_shift(h)) + 1;
+ nr_huge = min(nr_huge,
+ (vma->vm_end - addr) >> huge_page_shift(h));
+ while (1) {
+ /* hugepages are always in RAM for now,
+ * but generally this needs to be checked */
+ ptep = huge_pte_offset(current->mm,
+ addr & huge_page_mask(h));
+ present = !!(ptep &&
+ !huge_pte_none(huge_ptep_get(ptep)));
+ while (1) {
+ vec[i++] = present;
+ addr += PAGE_SIZE;
+ /* reach buffer limit */
+ if (i == nr)
+ return nr;
+ /* check hugepage border */
+ if (!((addr & ~huge_page_mask(h))
+ >> PAGE_SHIFT))
+ break;
+ }
+ }
+ return nr;
+ }
+
/*
* Calculate how many pages there are left in the last level of the
* PTE array for our address.
--
1.6.0.6
* [PATCH 1/2] mm hugetlb x86: fix hugepage memory leak in walk_page_range()
From: Naoya Horiguchi @ 2009-12-07 7:59 UTC (permalink / raw)
To: LKML; +Cc: hugh.dickins, linux-mm
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
---
mm/pagewalk.c | 15 ++++++++++++++-
1 files changed, 14 insertions(+), 1 deletions(-)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d5878be..3d88824 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -1,6 +1,7 @@
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/sched.h>
+#include <linux/hugetlb.h>
static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
struct mm_walk *walk)
@@ -107,6 +108,7 @@ int walk_page_range(unsigned long addr, unsigned long end,
pgd_t *pgd;
unsigned long next;
int err = 0;
+ struct vm_area_struct *vma;
if (addr >= end)
return err;
@@ -117,11 +119,21 @@ int walk_page_range(unsigned long addr, unsigned long end,
pgd = pgd_offset(walk->mm, addr);
do {
next = pgd_addr_end(addr, end);
+
+ /* skip hugetlb vma to avoid hugepage PMD being cleared
+ * in pmd_none_or_clear_bad(). */
+ vma = find_vma(walk->mm, addr);
+ if (is_vm_hugetlb_page(vma)) {
+ next = (vma->vm_end < next) ? vma->vm_end : next;
+ continue;
+ }
+
if (pgd_none_or_clear_bad(pgd)) {
if (walk->pte_hole)
err = walk->pte_hole(addr, next, walk);
if (err)
break;
+ pgd++;
continue;
}
if (walk->pgd_entry)
@@ -131,7 +143,8 @@ int walk_page_range(unsigned long addr, unsigned long end,
err = walk_pud_range(pgd, addr, next, walk);
if (err)
break;
- } while (pgd++, addr = next, addr != end);
+ pgd++;
+ } while (addr = next, addr != end);
return err;
}
--
1.6.0.6
* [PATCH 2/2] mm hugetlb: add hugepage support to pagemap
From: Naoya Horiguchi @ 2009-12-07 7:59 UTC (permalink / raw)
To: LKML; +Cc: ak, Wu Fengguang, linux-mm
This patch makes it possible to extract the pfn of a hugepage from
/proc/pid/pagemap in an architecture-independent manner.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
---
fs/proc/task_mmu.c | 43 +++++++++++++++++++++++++++++++++++++++++++
include/linux/mm.h | 3 +++
mm/pagewalk.c | 18 ++++++++++++++++--
3 files changed, 62 insertions(+), 2 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 2a1bef9..5d8e86b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -650,6 +650,48 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
return err;
}
+static u64 huge_pte_to_pagemap_entry(pte_t pte, int offset)
+{
+ u64 pme = 0;
+ if (pte_present(pte))
+ pme = PM_PFRAME(pte_pfn(pte) + offset)
+ | PM_PSHIFT(PAGE_SHIFT) | PM_PRESENT;
+ return pme;
+}
+
+static int pagemap_hugetlb_range(pte_t *pte, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+{
+ struct vm_area_struct *vma;
+ struct pagemapread *pm = walk->private;
+ struct hstate *hs = NULL;
+ int err = 0;
+
+ vma = find_vma(walk->mm, addr);
+ hs = hstate_vma(vma);
+ for (; addr != end; addr += PAGE_SIZE) {
+ u64 pfn = PM_NOT_PRESENT;
+
+ if (vma && (addr >= vma->vm_end)) {
+ vma = find_vma(walk->mm, addr);
+ hs = hstate_vma(vma);
+ }
+
+ if (vma && (vma->vm_start <= addr) && is_vm_hugetlb_page(vma)) {
+ /* calculate pfn of the "raw" page in the hugepage. */
+ int offset = (addr & ~huge_page_mask(hs)) >> PAGE_SHIFT;
+ pfn = huge_pte_to_pagemap_entry(*pte, offset);
+ }
+ err = add_to_pagemap(addr, pfn, pm);
+ if (err)
+ return err;
+ }
+
+ cond_resched();
+
+ return err;
+}
+
/*
* /proc/pid/pagemap - an array mapping virtual pages to pfns
*
@@ -742,6 +784,7 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
pagemap_walk.pmd_entry = pagemap_pte_range;
pagemap_walk.pte_hole = pagemap_pte_hole;
+ pagemap_walk.hugetlb_entry = pagemap_hugetlb_range;
pagemap_walk.mm = mm;
pagemap_walk.private = &pm;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4d33403..14835f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -758,6 +758,7 @@ unsigned long unmap_vmas(struct mmu_gather **tlb,
* @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
* @pte_entry: if set, called for each non-empty PTE (4th-level) entry
* @pte_hole: if set, called for each hole at all levels
+ * @hugetlb_entry: if set, called for each hugetlb entry
*
* (see walk_page_range for more details)
*/
@@ -767,6 +768,8 @@ struct mm_walk {
int (*pmd_entry)(pmd_t *, unsigned long, unsigned long, struct mm_walk *);
int (*pte_entry)(pte_t *, unsigned long, unsigned long, struct mm_walk *);
int (*pte_hole)(unsigned long, unsigned long, struct mm_walk *);
+ int (*hugetlb_entry)(pte_t *, unsigned long, unsigned long,
+ struct mm_walk *);
struct mm_struct *mm;
void *private;
};
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 3d88824..2d27a23 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -106,9 +106,11 @@ int walk_page_range(unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
pgd_t *pgd;
+ pte_t *pte;
unsigned long next;
int err = 0;
struct vm_area_struct *vma;
+ struct hstate *hs;
if (addr >= end)
return err;
@@ -120,11 +122,23 @@ int walk_page_range(unsigned long addr, unsigned long end,
do {
next = pgd_addr_end(addr, end);
- /* skip hugetlb vma to avoid hugepage PMD being cleared
- * in pmd_none_or_clear_bad(). */
+ /*
+ * handle hugetlb vmas individually because the page table walk
+ * for hugetlb pages is architecture dependent and we can't
+ * handle them in the same manner as normal pages.
+ */
vma = find_vma(walk->mm, addr);
if (is_vm_hugetlb_page(vma)) {
next = (vma->vm_end < next) ? vma->vm_end : next;
+ hs = hstate_vma(vma);
+ pte = huge_pte_offset(walk->mm,
+ addr & huge_page_mask(hs));
+ if (pte && !huge_pte_none(huge_ptep_get(pte))
+ && walk->hugetlb_entry)
+ err = walk->hugetlb_entry(pte, addr,
+ next, walk);
+ if (err)
+ break;
continue;
}
--
1.6.0.6
* Re: [PATCH] mm hugetlb x86: fix hugepage memory leak in mincore()
From: Andrew Morton @ 2009-12-08 22:35 UTC (permalink / raw)
To: n-horiguchi; +Cc: LKML, hugh.dickins, linux-mm, stable
On Mon, 07 Dec 2009 16:59:14 +0900
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> Most callers of pmd_none_or_clear_bad() first check whether the target
> page belongs to a hugepage, but mincore() and walk_page_range()
> do not. So if mincore() is used on a hugepage on an x86 machine,
> the hugepage memory is leaked as shown below.
> This patch fixes it by extending the mincore() system call to support hugepages.
This bug is fairly embarrassing. I tagged the patch for a -stable
backport.