linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] fixes on page table walker and hugepage rmapping
@ 2014-02-27  4:39 Naoya Horiguchi
  2014-02-27  4:39 ` [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range() Naoya Horiguchi
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Naoya Horiguchi @ 2014-02-27  4:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Andrew Morton, linux-mm, linux-kernel

Hi,

Sasha, could you test whether the bug you reported recently [1] reproduces
on the latest next tree with this patchset? (I'm not sure about this,
because the problem looks different in my own testing...)

[1] http://thread.gmane.org/gmane.linux.kernel.mm/113374/focus=113
---
Summary:

Naoya Horiguchi (3):
      mm/pagewalk.c: fix end address calculation in walk_page_range()
      mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff()
      mm: call vma_adjust_trans_huge() only for thp-enabled vma

 include/linux/pagemap.h | 13 +++++++++++++
 mm/huge_memory.c        |  2 +-
 mm/hugetlb.c            |  5 +++++
 mm/memory-failure.c     |  4 ++--
 mm/mmap.c               |  3 ++-
 mm/pagewalk.c           |  5 +++--
 mm/rmap.c               |  8 ++------
 7 files changed, 28 insertions(+), 12 deletions(-)


* [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range()
  2014-02-27  4:39 [PATCH 0/3] fixes on page table walker and hugepage rmapping Naoya Horiguchi
@ 2014-02-27  4:39 ` Naoya Horiguchi
  2014-02-27 21:03   ` Andrew Morton
  2014-02-27  4:39 ` [PATCH 2/3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff() Naoya Horiguchi
  2014-02-27  4:39 ` [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma Naoya Horiguchi
  2 siblings, 1 reply; 17+ messages in thread
From: Naoya Horiguchi @ 2014-02-27  4:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Andrew Morton, linux-mm, linux-kernel

When we try to walk a range inside a vma, walk_page_range() walks
until vma->vm_end even if the given end address is before that point.
So this patch takes the smaller of the two as the end address.
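
For example (an illustrative case, not taken from the patch), consider a
caller that asks to walk only a single page inside a large vma:

	/* hypothetical caller: only [start, start + PAGE_SIZE) should be walked */
	walk_page_range(start, start + PAGE_SIZE, &walk);

Previously the whole vma was walked, because next was unconditionally set to
vma->vm_end.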

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/pagewalk.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git next-20140220.orig/mm/pagewalk.c next-20140220/mm/pagewalk.c
index 416e981243b1..b418407ff4da 100644
--- next-20140220.orig/mm/pagewalk.c
+++ next-20140220/mm/pagewalk.c
@@ -321,8 +321,9 @@ int walk_page_range(unsigned long start, unsigned long end,
 			next = vma->vm_start;
 		} else { /* inside the found vma */
 			walk->vma = vma;
-			next = vma->vm_end;
-			err = walk_page_test(start, end, walk);
+			next = min_t(unsigned long, end, vma->vm_end);
+
+			err = walk_page_test(start, next, walk);
 			if (skip_lower_level_walking(walk))
 				continue;
 			if (err)
-- 
1.8.5.3



* [PATCH 2/3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff()
  2014-02-27  4:39 [PATCH 0/3] fixes on page table walker and hugepage rmapping Naoya Horiguchi
  2014-02-27  4:39 ` [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range() Naoya Horiguchi
@ 2014-02-27  4:39 ` Naoya Horiguchi
  2014-02-27 21:19   ` Andrew Morton
  2014-02-27  4:39 ` [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma Naoya Horiguchi
  2 siblings, 1 reply; 17+ messages in thread
From: Naoya Horiguchi @ 2014-02-27  4:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Andrew Morton, linux-mm, linux-kernel

page->index stores the pagecache index when the page is mapped into a file
mapping region, and the index is in pagecache-size units, so it depends on
the page size. Some users of reverse mapping obviously assume that page->index
is in PAGE_CACHE_SHIFT units, so they don't work for anonymous hugepages.

For example, consider that we have a 3-hugepage vma and try to mbind the 2nd
hugepage to migrate it to another node. Then the vma is split and migrate_page()
is called for the 2nd hugepage (belonging to the middle vma).
In the migrate operation, rmap_walk_anon() tries to find the relevant vma to
which the target hugepage belongs, but here we miscalculate pgoff.
So anon_vma_interval_tree_foreach() grabs an invalid vma, which fires a
VM_BUG_ON.

This patch introduces a new API that is usable both for normal pages and
hugepages to get the PAGE_SIZE-based offset from page->index. Users should
clearly distinguish page_index for the pagecache index and page_pgoff for
the page offset.
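
A worked example of the unit mismatch (with illustrative numbers): on x86_64
with 2MB hugepages, huge_page_order() is 9, so an anonymous hugepage with
page->index = 3 starts at PAGE_SIZE-unit pgoff 3 << 9 = 1536, while the
generic calculation yields just 3 (PAGE_CACHE_SHIFT == PAGE_SHIFT there), so
the anon_vma interval tree is searched at a bogus offset.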

Reported-by: Sasha Levin <sasha.levin@oracle.com> # if the reported problem is fixed
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: stable@vger.kernel.org # 3.12+
---
 include/linux/pagemap.h | 13 +++++++++++++
 mm/huge_memory.c        |  2 +-
 mm/hugetlb.c            |  5 +++++
 mm/memory-failure.c     |  4 ++--
 mm/rmap.c               |  8 ++------
 5 files changed, 23 insertions(+), 9 deletions(-)

diff --git next-20140220.orig/include/linux/pagemap.h next-20140220/include/linux/pagemap.h
index 4f591df66778..a8bd14f42032 100644
--- next-20140220.orig/include/linux/pagemap.h
+++ next-20140220/include/linux/pagemap.h
@@ -316,6 +316,19 @@ static inline loff_t page_file_offset(struct page *page)
 	return ((loff_t)page_file_index(page)) << PAGE_CACHE_SHIFT;
 }
 
+extern pgoff_t hugepage_pgoff(struct page *page);
+
+/*
+ * page->index stores pagecache index whose unit is not always PAGE_SIZE.
+ * This function converts it into PAGE_SIZE offset.
+ */
+#define page_pgoff(page)					\
+({								\
+	unlikely(PageHuge(page)) ?				\
+		hugepage_pgoff(page) :				\
+		page->index >> (PAGE_CACHE_SHIFT - PAGE_SHIFT);	\
+})
+
 extern pgoff_t linear_hugepage_index(struct vm_area_struct *vma,
 				     unsigned long address);
 
diff --git next-20140220.orig/mm/huge_memory.c next-20140220/mm/huge_memory.c
index 6ac89e9f82ef..ef96763a6abf 100644
--- next-20140220.orig/mm/huge_memory.c
+++ next-20140220/mm/huge_memory.c
@@ -1800,7 +1800,7 @@ static void __split_huge_page(struct page *page,
 			      struct list_head *list)
 {
 	int mapcount, mapcount2;
-	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	pgoff_t pgoff = page_pgoff(page);
 	struct anon_vma_chain *avc;
 
 	BUG_ON(!PageHead(page));
diff --git next-20140220.orig/mm/hugetlb.c next-20140220/mm/hugetlb.c
index 2252cacf98e8..e159e593d99f 100644
--- next-20140220.orig/mm/hugetlb.c
+++ next-20140220/mm/hugetlb.c
@@ -764,6 +764,11 @@ pgoff_t __basepage_index(struct page *page)
 	return (index << compound_order(page_head)) + compound_idx;
 }
 
+pgoff_t hugepage_pgoff(struct page *page)
+{
+	return page->index << huge_page_order(page_hstate(page));
+}
+
 static struct page *alloc_fresh_huge_page_node(struct hstate *h, int nid)
 {
 	struct page *page;
diff --git next-20140220.orig/mm/memory-failure.c next-20140220/mm/memory-failure.c
index 35ef28acf137..5d85a4afb22c 100644
--- next-20140220.orig/mm/memory-failure.c
+++ next-20140220/mm/memory-failure.c
@@ -404,7 +404,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 	if (av == NULL)	/* Not actually mapped anymore */
 		return;
 
-	pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	pgoff = page_pgoff(page);
 	read_lock(&tasklist_lock);
 	for_each_process (tsk) {
 		struct anon_vma_chain *vmac;
@@ -437,7 +437,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
 	mutex_lock(&mapping->i_mmap_mutex);
 	read_lock(&tasklist_lock);
 	for_each_process(tsk) {
-		pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+		pgoff_t pgoff = page_pgoff(page);
 
 		if (!task_early_kill(tsk))
 			continue;
diff --git next-20140220.orig/mm/rmap.c next-20140220/mm/rmap.c
index 9056a1f00b87..78405051474a 100644
--- next-20140220.orig/mm/rmap.c
+++ next-20140220/mm/rmap.c
@@ -515,11 +515,7 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
 static inline unsigned long
 __vma_address(struct page *page, struct vm_area_struct *vma)
 {
-	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
-
-	if (unlikely(is_vm_hugetlb_page(vma)))
-		pgoff = page->index << huge_page_order(page_hstate(page));
-
+	pgoff_t pgoff = page_pgoff(page);
 	return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 }
 
@@ -1598,7 +1594,7 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page,
 static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma;
-	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	pgoff_t pgoff = page_pgoff(page);
 	struct anon_vma_chain *avc;
 	int ret = SWAP_AGAIN;
 
-- 
1.8.5.3



* [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma
  2014-02-27  4:39 [PATCH 0/3] fixes on page table walker and hugepage rmapping Naoya Horiguchi
  2014-02-27  4:39 ` [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range() Naoya Horiguchi
  2014-02-27  4:39 ` [PATCH 2/3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff() Naoya Horiguchi
@ 2014-02-27  4:39 ` Naoya Horiguchi
  2014-02-27 21:23   ` Andrew Morton
  2014-02-27 22:56   ` Kirill A. Shutemov
  2 siblings, 2 replies; 17+ messages in thread
From: Naoya Horiguchi @ 2014-02-27  4:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Andrew Morton, linux-mm, linux-kernel

vma_adjust() is also called for vma(VM_HUGETLB), and it could happen that
we try to split a hugetlbfs hugepage. So exclude that possibility.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/mmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git next-20140220.orig/mm/mmap.c next-20140220/mm/mmap.c
index f53397806d7f..45a9c0d51e3f 100644
--- next-20140220.orig/mm/mmap.c
+++ next-20140220/mm/mmap.c
@@ -772,7 +772,8 @@ again:			remove_next = 1 + (end > next->vm_end);
 		}
 	}
 
-	vma_adjust_trans_huge(vma, start, end, adjust_next);
+	if (transparent_hugepage_enabled(vma))
+		vma_adjust_trans_huge(vma, start, end, adjust_next);
 
 	anon_vma = vma->anon_vma;
 	if (!anon_vma && adjust_next)
-- 
1.8.5.3



* Re: [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range()
  2014-02-27  4:39 ` [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range() Naoya Horiguchi
@ 2014-02-27 21:03   ` Andrew Morton
       [not found]     ` <530fabcf.05300f0a.7f7e.ffffc80dSMTPIN_ADDED_BROKEN@mx.google.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2014-02-27 21:03 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Sasha Levin, linux-mm, linux-kernel

On Wed, 26 Feb 2014 23:39:35 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> When we try to walk a range inside a vma, walk_page_range() walks
> until vma->vm_end even if the given end address is before that point.
> So this patch takes the smaller of the two as the end address.
> 
> ...
>
> --- next-20140220.orig/mm/pagewalk.c
> +++ next-20140220/mm/pagewalk.c
> @@ -321,8 +321,9 @@ int walk_page_range(unsigned long start, unsigned long end,
>  			next = vma->vm_start;
>  		} else { /* inside the found vma */
>  			walk->vma = vma;
> -			next = vma->vm_end;
> -			err = walk_page_test(start, end, walk);
> +			next = min_t(unsigned long, end, vma->vm_end);

min_t is unneeded, isn't it?  Everything here has type unsigned long.

> +			err = walk_page_test(start, next, walk);
>  			if (skip_lower_level_walking(walk))
>  				continue;
>  			if (err)

I'm assuming this is a fix against
pagewalk-update-page-table-walker-core.patch and shall eventually be
folded into that patch.  


* Re: [PATCH 2/3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff()
  2014-02-27  4:39 ` [PATCH 2/3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff() Naoya Horiguchi
@ 2014-02-27 21:19   ` Andrew Morton
       [not found]     ` <530fb3ee.03cb0e0a.407a.ffffffbcSMTPIN_ADDED_BROKEN@mx.google.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2014-02-27 21:19 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Sasha Levin, linux-mm, linux-kernel, Rik van Riel

On Wed, 26 Feb 2014 23:39:36 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> page->index stores the pagecache index when the page is mapped into a file
> mapping region, and the index is in pagecache-size units, so it depends on
> the page size. Some users of reverse mapping obviously assume that page->index
> is in PAGE_CACHE_SHIFT units, so they don't work for anonymous hugepages.
> 
> For example, consider that we have a 3-hugepage vma and try to mbind the 2nd
> hugepage to migrate it to another node. Then the vma is split and migrate_page()
> is called for the 2nd hugepage (belonging to the middle vma).
> In the migrate operation, rmap_walk_anon() tries to find the relevant vma to
> which the target hugepage belongs, but here we miscalculate pgoff.
> So anon_vma_interval_tree_foreach() grabs an invalid vma, which fires a
> VM_BUG_ON.
> 
> This patch introduces a new API that is usable both for normal pages and
> hugepages to get the PAGE_SIZE-based offset from page->index. Users should
> clearly distinguish page_index for the pagecache index and page_pgoff for
> the page offset.

So this patch is really independent of the page-walker changes, but the
page walker changes need it.  So it is appropriate that this patch be
staged before that series, and separately.  Agree?

> Reported-by: Sasha Levin <sasha.levin@oracle.com> # if the reported problem is fixed
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: stable@vger.kernel.org # 3.12+

Why cc:stable?  This problem will only cause runtime issues when the
page walker patches are present?

> index 4f591df66778..a8bd14f42032 100644
> --- next-20140220.orig/include/linux/pagemap.h
> +++ next-20140220/include/linux/pagemap.h
> @@ -316,6 +316,19 @@ static inline loff_t page_file_offset(struct page *page)
>  	return ((loff_t)page_file_index(page)) << PAGE_CACHE_SHIFT;
>  }
>  
> +extern pgoff_t hugepage_pgoff(struct page *page);
> +
> +/*
> + * page->index stores pagecache index whose unit is not always PAGE_SIZE.
> + * This function converts it into PAGE_SIZE offset.
> + */
> +#define page_pgoff(page)					\
> +({								\
> +	unlikely(PageHuge(page)) ?				\
> +		hugepage_pgoff(page) :				\
> +		page->index >> (PAGE_CACHE_SHIFT - PAGE_SHIFT);	\
> +})

- I don't think this needs to be implemented in a macro?  Can we do
  it in good old C?

- Is PageHuge() the appropriate test?
  /*
   * PageHuge() only returns true for hugetlbfs pages, but not for normal or
   * transparent huge pages.  See the PageTransHuge() documentation for more
   * details.
   */

- Should page->index be shifted right or left?  Probably left - I
  doubt if PAGE_CACHE_SHIFT will ever be less than PAGE_SHIFT.  (See
  the worked example after this list.)

- I'm surprised we don't have a general what-is-this-page's-order
  function, so you can just do

	static inline pgoff_t page_pgoff(struct page *page)
	{
		return page->index << page_size_order(page);
	}

  And I think this would be a better implementation, as the (new)
  page_size_order() could be used elsewhere.

  page_size_order() would be a crappy name - can't think of anything
  better at present.
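
A worked example of the shift direction (with hypothetical sizes, since
PAGE_CACHE_SHIFT equals PAGE_SHIFT today): if PAGE_CACHE_SIZE were 8kB and
PAGE_SIZE 4kB, pagecache index 3 would name byte offset 3 * 8kB = 24kB, i.e.
PAGE_SIZE-unit offset 6 = 3 << 1, whereas the macro's right shift gives 1.
So converting a pagecache index to a PAGE_SIZE offset is a left shift by
(PAGE_CACHE_SHIFT - PAGE_SHIFT).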

> --- next-20140220.orig/mm/memory-failure.c
> +++ next-20140220/mm/memory-failure.c
> @@ -404,7 +404,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
>  	if (av == NULL)	/* Not actually mapped anymore */
>  		return;
>  
> -	pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);

See this did a left shift.

> +	pgoff = page_pgoff(page);
>  	read_lock(&tasklist_lock);
>  	for_each_process (tsk) {
>  		struct anon_vma_chain *vmac;
> @@ -437,7 +437,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
>  	mutex_lock(&mapping->i_mmap_mutex);
>  	read_lock(&tasklist_lock);
>  	for_each_process(tsk) {
> -		pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);

him too.

> +		pgoff_t pgoff = page_pgoff(page);
>  
>  		if (!task_early_kill(tsk))
>  			continue;
> diff --git next-20140220.orig/mm/rmap.c next-20140220/mm/rmap.c
> index 9056a1f00b87..78405051474a 100644
> --- next-20140220.orig/mm/rmap.c
> +++ next-20140220/mm/rmap.c
> @@ -515,11 +515,7 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
>  static inline unsigned long
>  __vma_address(struct page *page, struct vm_area_struct *vma)
>  {
> -	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);

And him.

> -
> -	if (unlikely(is_vm_hugetlb_page(vma)))
> -		pgoff = page->index << huge_page_order(page_hstate(page));
> -
> +	pgoff_t pgoff = page_pgoff(page);
>  	return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>  }
>  
> @@ -1598,7 +1594,7 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page,
>  static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc)
>  {
>  	struct anon_vma *anon_vma;
> -	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);

again.

> +	pgoff_t pgoff = page_pgoff(page);
>  	struct anon_vma_chain *avc;
>  	int ret = SWAP_AGAIN;



* Re: [PATCH 1/3] mm/pagewalk.c: fix end address calculation in walk_page_range()
       [not found]     ` <530fabcf.05300f0a.7f7e.ffffc80dSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2014-02-27 21:20       ` Kirill A. Shutemov
  0 siblings, 0 replies; 17+ messages in thread
From: Kirill A. Shutemov @ 2014-02-27 21:20 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: akpm, sasha.levin, linux-mm, linux-kernel

On Thu, Feb 27, 2014 at 04:19:01PM -0500, Naoya Horiguchi wrote:
> On Thu, Feb 27, 2014 at 01:03:23PM -0800, Andrew Morton wrote:
> > On Wed, 26 Feb 2014 23:39:35 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> > 
> > > When we try to walk a range inside a vma, walk_page_range() walks
> > > until vma->vm_end even if the given end address is before that point.
> > > So this patch takes the smaller of the two as the end address.
> > > 
> > > ...
> > >
> > > --- next-20140220.orig/mm/pagewalk.c
> > > +++ next-20140220/mm/pagewalk.c
> > > @@ -321,8 +321,9 @@ int walk_page_range(unsigned long start, unsigned long end,
> > >  			next = vma->vm_start;
> > >  		} else { /* inside the found vma */
> > >  			walk->vma = vma;
> > > -			next = vma->vm_end;
> > > -			err = walk_page_test(start, end, walk);
> > > +			next = min_t(unsigned long, end, vma->vm_end);
> > 
> > min_t is unneeded, isn't it?  Everything here has type unsigned long.
> 
> Yes, so simply (end < vma->vm_end ? end : vma->vm_end) is enough.
> # I just used min_t as a simple minimum getter without thinking about the type check.

We have non-typed min() for that.
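
(i.e., next = min(end, vma->vm_end);)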

-- 
 Kirill A. Shutemov


* Re: [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma
  2014-02-27  4:39 ` [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma Naoya Horiguchi
@ 2014-02-27 21:23   ` Andrew Morton
  2014-02-27 22:56   ` Kirill A. Shutemov
  1 sibling, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2014-02-27 21:23 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Sasha Levin, linux-mm, linux-kernel

On Wed, 26 Feb 2014 23:39:37 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> vma_adjust() is also called for vma(VM_HUGETLB), and it could happen that
> we try to split a hugetlbfs hugepage. So exclude that possibility.
> 

It would be nice to have a more complete changelog here please.  Under
what circumstances does this cause problems and what are the
user-observable effects?

> --- next-20140220.orig/mm/mmap.c
> +++ next-20140220/mm/mmap.c
> @@ -772,7 +772,8 @@ again:			remove_next = 1 + (end > next->vm_end);
>  		}
>  	}
>  
> -	vma_adjust_trans_huge(vma, start, end, adjust_next);
> +	if (transparent_hugepage_enabled(vma))
> +		vma_adjust_trans_huge(vma, start, end, adjust_next);
>  
>  	anon_vma = vma->anon_vma;
>  	if (!anon_vma && adjust_next)
> -- 
> 1.8.5.3


* Re: [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma
  2014-02-27  4:39 ` [PATCH 3/3] mm: call vma_adjust_trans_huge() only for thp-enabled vma Naoya Horiguchi
  2014-02-27 21:23   ` Andrew Morton
@ 2014-02-27 22:56   ` Kirill A. Shutemov
  1 sibling, 0 replies; 17+ messages in thread
From: Kirill A. Shutemov @ 2014-02-27 22:56 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Sasha Levin, Andrew Morton, linux-mm, linux-kernel

On Wed, Feb 26, 2014 at 11:39:37PM -0500, Naoya Horiguchi wrote:
> vma_adjust() is also called for vma(VM_HUGETLB), and it could happen that
> we try to split a hugetlbfs hugepage. So exclude that possibility.

NAK.

It can't happen: vma_adjust_trans_huge() checks vma->vm_ops, and hugetlb
vmas always have it set, unlike THP vmas.
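
(For reference, a paraphrase of that guard rather than the exact source:
hugetlbfs sets vma->vm_ops = &hugetlb_vm_ops at mmap time, while anonymous
THP-capable vmas have vm_ops == NULL, so a check along the lines of

	if (vma->vm_ops)	/* has vm_ops: not anonymous, never THP */
		return;

in the THP split path already excludes hugetlb vmas.)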

-- 
 Kirill A. Shutemov


* Re: [PATCH v2] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff()
       [not found]       ` <5310ea8b.c425e00a.2cd9.ffffe097SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2014-02-28 23:14         ` Andrew Morton
       [not found]           ` <1393644926-49vw3qw9@n-horiguchi@ah.jp.nec.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2014-02-28 23:14 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: sasha.levin, linux-mm, linux-kernel, riel

On Fri, 28 Feb 2014 14:59:02 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> page->index stores the pagecache index when the page is mapped into a file
> mapping region, and the index is in pagecache-size units, so it depends on
> the page size. Some users of reverse mapping obviously assume that page->index
> is in PAGE_CACHE_SHIFT units, so they don't work for anonymous hugepages.
> 
> For example, consider that we have a 3-hugepage vma and try to mbind the 2nd
> hugepage to migrate it to another node. Then the vma is split and migrate_page()
> is called for the 2nd hugepage (belonging to the middle vma).
> In the migrate operation, rmap_walk_anon() tries to find the relevant vma to
> which the target hugepage belongs, but here we miscalculate pgoff.
> So anon_vma_interval_tree_foreach() grabs an invalid vma, which fires a
> VM_BUG_ON.
> 
> This patch introduces a new API that is usable both for normal pages and
> hugepages to get the PAGE_SIZE-based offset from page->index. Users should
> clearly distinguish page_index for the pagecache index and page_pgoff for
> the page offset.
> 
> ..
>
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -307,6 +307,22 @@ static inline loff_t page_file_offset(struct page *page)
>  	return ((loff_t)page_file_index(page)) << PAGE_CACHE_SHIFT;
>  }
>  
> +static inline unsigned int page_size_order(struct page *page)
> +{
> +	return unlikely(PageHuge(page)) ?
> +		huge_page_size_order(page) :
> +		(PAGE_CACHE_SHIFT - PAGE_SHIFT);
> +}

Could use some nice documentation, please.  Why it exists, what it
does.  Particularly: what sort of pages it can and can't operate on,
and why.

The presence of PAGE_CACHE_SIZE is unfortunate - it at least implies
that the page is a pagecache page.  I dunno, maybe just use "0"?
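
(i.e., something like the following sketch of that suggestion, valid only
while PAGE_CACHE_SHIFT == PAGE_SHIFT:

	static inline unsigned int page_size_order(struct page *page)
	{
		/* non-hugetlb pages: order 0 in PAGE_SIZE units */
		return unlikely(PageHuge(page)) ? huge_page_size_order(page) : 0;
	}
)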



* Re: [PATCH v3] mm, hugetlbfs: fix rmapping for anonymous hugepages with page_pgoff()
       [not found]           ` <1393644926-49vw3qw9@n-horiguchi@ah.jp.nec.com>
@ 2014-03-01 23:08             ` Sasha Levin
  2014-03-03  5:02               ` [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks Naoya Horiguchi
  0 siblings, 1 reply; 17+ messages in thread
From: Sasha Levin @ 2014-03-01 23:08 UTC (permalink / raw)
  To: Naoya Horiguchi, akpm; +Cc: linux-mm, linux-kernel, riel

On 02/28/2014 10:35 PM, Naoya Horiguchi wrote:
> On Fri, Feb 28, 2014 at 03:14:27PM -0800, Andrew Morton wrote:
>> On Fri, 28 Feb 2014 14:59:02 -0500 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
>>
>>> page->index stores the pagecache index when the page is mapped into a file
>>> mapping region, and the index is in pagecache-size units, so it depends on
>>> the page size. Some users of reverse mapping obviously assume that page->index
>>> is in PAGE_CACHE_SHIFT units, so they don't work for anonymous hugepages.
>>>
>>> For example, consider that we have a 3-hugepage vma and try to mbind the 2nd
>>> hugepage to migrate it to another node. Then the vma is split and migrate_page()
>>> is called for the 2nd hugepage (belonging to the middle vma).
>>> In the migrate operation, rmap_walk_anon() tries to find the relevant vma to
>>> which the target hugepage belongs, but here we miscalculate pgoff.
>>> So anon_vma_interval_tree_foreach() grabs an invalid vma, which fires a
>>> VM_BUG_ON.
>>>
>>> This patch introduces a new API that is usable both for normal pages and
>>> hugepages to get the PAGE_SIZE-based offset from page->index. Users should
>>> clearly distinguish page_index for the pagecache index and page_pgoff for
>>> the page offset.
>>>
>>> ..
>>>
>>> --- a/include/linux/pagemap.h
>>> +++ b/include/linux/pagemap.h
>>> @@ -307,6 +307,22 @@ static inline loff_t page_file_offset(struct page *page)
>>>   	return ((loff_t)page_file_index(page)) << PAGE_CACHE_SHIFT;
>>>   }
>>>   
>>> +static inline unsigned int page_size_order(struct page *page)
>>> +{
>>> +	return unlikely(PageHuge(page)) ?
>>> +		huge_page_size_order(page) :
> 
> I found that we have compound_order(page) for the same purpose, so we don't
> have to define this new function.
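> (i.e., roughly, a sketch of what v3 ends up doing:
> 
> 	static inline unsigned int page_size_order(struct page *page)
> 	{
> 		return unlikely(PageHuge(page)) ?
> 			compound_order(compound_head(page)) :
> 			(PAGE_CACHE_SHIFT - PAGE_SHIFT);
> 	}
> )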
> 
>>> +		(PAGE_CACHE_SHIFT - PAGE_SHIFT);
>>> +}
>>
>> Could use some nice documentation, please.  Why it exists, what it
>> does.  Particularly: what sort of pages it can and can't operate on,
>> and why.
> 
> OK.
> 
>> The presence of PAGE_CACHE_SIZE is unfortunate - it at least implies
>> that the page is a pagecache page.  I dunno, maybe just use "0"?
> 
> Yes, PAGE_CACHE_SHIFT makes the code messy if it is always equal to PAGE_SHIFT.
> But I guess that recently people have started thinking about changing the
> size of the pagecache (in the discussion around >4kB sector devices.)
> And from a readability perspective, "pagecache size" and "page size" are
> different things, so keeping it is better in the long run.
> 
> Anyway, I revised the patch again; could you take a look?
> 
> Thanks,
> Naoya
> ---
> From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Date: Fri, 28 Feb 2014 21:56:24 -0500
> Subject: [PATCH] mm, hugetlbfs: fix rmapping for anonymous hugepages with
>   page_pgoff()
> 
> page->index stores the pagecache index when the page is mapped into a file
> mapping region, and the index is in pagecache-size units, so it depends on
> the page size. Some users of reverse mapping obviously assume that page->index
> is in PAGE_CACHE_SHIFT units, so they don't work for anonymous hugepages.
> 
> For example, consider that we have a 3-hugepage vma and try to mbind the 2nd
> hugepage to migrate it to another node. Then the vma is split and migrate_page()
> is called for the 2nd hugepage (belonging to the middle vma).
> In the migrate operation, rmap_walk_anon() tries to find the relevant vma to
> which the target hugepage belongs, but here we miscalculate pgoff.
> So anon_vma_interval_tree_foreach() grabs an invalid vma, which fires a
> VM_BUG_ON.
> 
> This patch introduces a new API that is usable both for normal pages and
> hugepages to get the PAGE_SIZE-based offset from page->index. Users should
> clearly distinguish page_index for the pagecache index and page_pgoff for
> the page offset.
> 
> ChangeLog v3:
> - add comment on page_size_order()
> - use compound_order(compound_head(page)) instead of huge_page_size_order()
> - use page_pgoff() in rmap_walk_file() too
> - use page_size_order() in kill_proc()
> - fix space indent
> 
> ChangeLog v2:
> - fix wrong shift direction
> - introduce page_size_order() and huge_page_size_order()
> - move the declaration of PageHuge() to include/linux/hugetlb_inline.h
>    to avoid the macro definition.
> 
> Reported-by: Sasha Levin <sasha.levin@oracle.com> # if the reported problem is fixed
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: stable@vger.kernel.org # 3.12+

I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.


Thanks,
Sasha


* [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
  2014-03-01 23:08             ` [PATCH v3] " Sasha Levin
@ 2014-03-03  5:02               ` Naoya Horiguchi
  2014-03-03 20:06                 ` Sasha Levin
  0 siblings, 1 reply; 17+ messages in thread
From: Naoya Horiguchi @ 2014-03-03  5:02 UTC (permalink / raw)
  To: Sasha Levin, Andrew Morton; +Cc: linux-mm, linux-kernel, Rik van Riel

Hi Sasha,

> I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
> walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.

I spotted the cause of this problem.
Could you try testing if this patch fixes it?

Thanks,
Naoya
---
The page table walker doesn't check for non-present hugetlb entries in the
common path, so hugetlb_entry() callbacks must check for them. The reason for
this behavior is that some callers want to handle them in their own way.

However, some callers don't check them now, which causes unpredictable
results, for example when we have a race between migrating a hugepage and
reading /proc/pid/numa_maps (while a hugepage is under migration, its page
table entry is temporarily a non-present migration entry, so calling
pte_page() on it yields garbage). This patch fixes it by adding pte_present
checks to the buggy callbacks.

This bug has existed for a long time and became visible with the introduction
of hugepage migration.

Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: stable@vger.kernel.org # 3.12+
---
 fs/proc/task_mmu.c | 3 +++
 mm/mempolicy.c     | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git next-20140228.orig/fs/proc/task_mmu.c next-20140228/fs/proc/task_mmu.c
index 3746d89c768b..a4cecadce867 100644
--- next-20140228.orig/fs/proc/task_mmu.c
+++ next-20140228/fs/proc/task_mmu.c
@@ -1299,6 +1299,9 @@ static int gather_hugetlb_stats(pte_t *pte, unsigned long addr,
 	if (pte_none(*pte))
 		return 0;
 
+	if (pte_present(*pte))
+		return 0;
+
 	page = pte_page(*pte);
 	if (!page)
 		return 0;
diff --git next-20140228.orig/mm/mempolicy.c next-20140228/mm/mempolicy.c
index c0d1cbd68790..1e171186ee6d 100644
--- next-20140228.orig/mm/mempolicy.c
+++ next-20140228/mm/mempolicy.c
@@ -524,8 +524,12 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long addr,
 	unsigned long flags = qp->flags;
 	int nid;
 	struct page *page;
+	pte_t entry;
 
-	page = pte_page(huge_ptep_get(pte));
+	entry = huge_ptep_get(pte);
+	if (pte_present(entry))
+		return 0;
+	page = pte_page(entry);
 	nid = page_to_nid(page);
 	if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
 		return 0;
-- 
1.8.5.3



* Re: [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
  2014-03-03  5:02               ` [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks Naoya Horiguchi
@ 2014-03-03 20:06                 ` Sasha Levin
  2014-03-03 21:38                   ` Sasha Levin
  0 siblings, 1 reply; 17+ messages in thread
From: Sasha Levin @ 2014-03-03 20:06 UTC (permalink / raw)
  To: Naoya Horiguchi, Andrew Morton; +Cc: linux-mm, linux-kernel, Rik van Riel

On 03/03/2014 12:02 AM, Naoya Horiguchi wrote:
> Hi Sasha,
>
>> >I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
>> >walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.
> I spotted the cause of this problem.
> Could you try testing if this patch fixes it?

I'm seeing a different failure with this patch:

[ 1860.669114] BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
[ 1860.670498] IP: [<ffffffff8129c0bf>] vm_normal_page+0x3f/0x90
[ 1860.672795] PGD 6c1c84067 PUD 6e0a3d067 PMD 0
[ 1860.672795] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[ 1860.672795] Dumping ftrace buffer:
[ 1860.672795]    (ftrace buffer empty)
[ 1860.672795] Modules linked in:
[ 1860.672795] CPU: 4 PID: 34914 Comm: trinity-c184 Tainted: G        W    3.14.0-rc4-ne
[ 1860.672795] task: ffff880717d90000 ti: ffff88070b3da000 task.ti: ffff88070b3da000
[ 1860.672795] RIP: 0010:[<ffffffff8129c0bf>]  [<ffffffff8129c0bf>] vm_normal_page+0x3f/
[ 1860.672795] RSP: 0018:ffff88070b3dbba8  EFLAGS: 00010202
[ 1860.672795] RAX: 000000000000767f RBX: ffff88070b3dbdd8 RCX: ffff88070b3dbd78
[ 1860.672795] RDX: 800000000767f225 RSI: 0100000000699000 RDI: 800000000767f225
[ 1860.672795] RBP: ffff88070b3dbba8 R08: 0000000000000000 R09: 0000000000000000
[ 1860.672795] R10: 0000000000000001 R11: 0000000000000000 R12: ffff880717df24c8
[ 1860.672795] R13: 0000000000000020 R14: 0100000000699000 R15: 0100000000800000
[ 1860.672795] FS:  00007f20a3584700(0000) GS:ffff88052b800000(0000) knlGS:0000000000000
[ 1860.672795] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1860.672795] CR2: 0000000000000050 CR3: 00000006d73cf000 CR4: 00000000000006e0
[ 1860.672795] Stack:
[ 1860.672795]  ffff88070b3dbbd8 ffffffff812c2f3d ffffffff812b2dc0 010000000069a000
[ 1860.672795]  ffff880717df24c8 ffff88070b3dbd78 ffff88070b3dbc28 ffffffff812b2e00
[ 1860.672795]  0000000000000000 ffff88072956bcf0 ffff88070b3dbc28 ffff8806e0a3d018
[ 1860.672795] Call Trace:
[ 1860.672795]  [<ffffffff812c2f3d>] queue_pages_pte+0x3d/0xd0
[ 1860.672795]  [<ffffffff812b2dc0>] ? walk_pte_range+0xc0/0x180
[ 1860.672795]  [<ffffffff812b2e00>] walk_pte_range+0x100/0x180
[ 1860.672795]  [<ffffffff812b3091>] walk_pmd_range+0x211/0x240
[ 1860.672795]  [<ffffffff812b31eb>] walk_pud_range+0x12b/0x160
[ 1860.672795]  [<ffffffff812d2ee4>] ? __slab_free+0x384/0x5e0
[ 1860.672795]  [<ffffffff812b3329>] walk_pgd_range+0x109/0x140
[ 1860.672795]  [<ffffffff812b3395>] __walk_page_range+0x35/0x40
[ 1860.672795]  [<ffffffff812b3552>] walk_page_range+0xf2/0x130
[ 1860.672795]  [<ffffffff812c2e41>] queue_pages_range+0x71/0x90
[ 1860.672795]  [<ffffffff812c2f00>] ? queue_pages_hugetlb+0xa0/0xa0
[ 1860.672795]  [<ffffffff812c2e60>] ? queue_pages_range+0x90/0x90
[ 1860.672795]  [<ffffffff812c30d0>] ? change_prot_numa+0x30/0x30
[ 1860.672795]  [<ffffffff812c6c61>] do_mbind+0x321/0x340
[ 1860.672795]  [<ffffffff8129af2f>] ? might_fault+0x9f/0xb0
[ 1860.672795]  [<ffffffff8129aee6>] ? might_fault+0x56/0xb0
[ 1860.672795]  [<ffffffff812c6d09>] SYSC_mbind+0x89/0xb0
[ 1860.672795]  [<ffffffff81268db5>] ? context_tracking_user_exit+0x195/0x1d0
[ 1860.672795]  [<ffffffff812c6d3e>] SyS_mbind+0xe/0x10
[ 1860.672795]  [<ffffffff8447d890>] tracesys+0xdd/0xe2


Thanks,
Sasha


* Re: [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
  2014-03-03 20:06                 ` Sasha Levin
@ 2014-03-03 21:38                   ` Sasha Levin
       [not found]                     ` <1393968743-imrxpynb@n-horiguchi@ah.jp.nec.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Sasha Levin @ 2014-03-03 21:38 UTC (permalink / raw)
  To: Naoya Horiguchi, Andrew Morton; +Cc: linux-mm, linux-kernel, Rik van Riel

On 03/03/2014 03:06 PM, Sasha Levin wrote:
> On 03/03/2014 12:02 AM, Naoya Horiguchi wrote:
>> Hi Sasha,
>>
>>> >I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
>>> >walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.
>> I spotted the cause of this problem.
>> Could you try testing if this patch fixes it?
>
> I'm seeing a different failure with this patch:

And the NULL deref still happens.


Thanks,
Sasha


* Re: [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
       [not found]                     ` <1393968743-imrxpynb@n-horiguchi@ah.jp.nec.com>
@ 2014-03-04 22:46                       ` Sasha Levin
       [not found]                         ` <1393976967-lnmm5xcs@n-horiguchi@ah.jp.nec.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Sasha Levin @ 2014-03-04 22:46 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: akpm, linux-mm, linux-kernel, riel

On 03/04/2014 04:32 PM, Naoya Horiguchi wrote:
> # sorry if duplicate message
> 
> On Mon, Mar 03, 2014 at 04:38:41PM -0500, Sasha Levin wrote:
>> On 03/03/2014 03:06 PM, Sasha Levin wrote:
>>> On 03/03/2014 12:02 AM, Naoya Horiguchi wrote:
>>>> Hi Sasha,
>>>>
>>>>>> I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
>>>>>> walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.
>>>> I spotted the cause of this problem.
>>>> Could you try testing if this patch fixes it?
>>>
>>> I'm seeing a different failure with this patch:
>>
>> And the NULL deref still happens.
> 
> I haven't yet found the root reason why this issue remains.
> So I tried running trinity myself, but the problem didn't reproduce.
> (I simply ran something like "./trinity --group vm --dangerous" for a few hours.)
> Could you share more details or tips about how the problem occurs?

I run it as root in a disposable vm; that may be the difference here.


Thanks,
Sasha



* Re: [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
       [not found]                         ` <1393976967-lnmm5xcs@n-horiguchi@ah.jp.nec.com>
@ 2014-03-06  4:31                           ` Sasha Levin
       [not found]                             ` <1394122113-xsq3i6vw@n-horiguchi@ah.jp.nec.com>
  0 siblings, 1 reply; 17+ messages in thread
From: Sasha Levin @ 2014-03-06  4:31 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: akpm, linux-mm, linux-kernel, riel

On 03/04/2014 06:49 PM, Naoya Horiguchi wrote:
> On Tue, Mar 04, 2014 at 05:46:52PM -0500, Sasha Levin wrote:
>> On 03/04/2014 04:32 PM, Naoya Horiguchi wrote:
>>> # sorry if duplicate message
>>>
>>> On Mon, Mar 03, 2014 at 04:38:41PM -0500, Sasha Levin wrote:
>>>> On 03/03/2014 03:06 PM, Sasha Levin wrote:
>>>>> On 03/03/2014 12:02 AM, Naoya Horiguchi wrote:
>>>>>> Hi Sasha,
>>>>>>
>>>>>>>> I can confirm that with this patch the lockdep issue is gone. However, the NULL deref in
>>>>>>>> walk_pte_range() and the BUG at mm/hugemem.c:3580 still appear.
>>>>>> I spotted the cause of this problem.
>>>>>> Could you try testing if this patch fixes it?
>>>>>
>>>>> I'm seeing a different failure with this patch:
>>>>
>>>> And the NULL deref still happens.
>>>
>>> I haven't yet found the root reason why this issue remains.
>>> So I tried running trinity myself, but the problem didn't reproduce.
>>> (I simply ran something like "./trinity --group vm --dangerous" for a few hours.)
>>> Could you share more details or tips about how the problem occurs?
>>
>> I run it as root in a disposable vm; that may be the difference here.
> 
> Sorry, I didn't mention it, but I also run it as root on a VM, so the
> conditions are the same. It might depend on the kernel config, so I'm now
> trying the config you previously gave me, but it doesn't boot correctly in
> my environment (panic during initialization). I may need some time to get
> past this.

I'd be happy to help with anything off-list; it shouldn't be too difficult
to get that kernel to boot :)

I've also reverted the page walker series for now; it makes it impossible
to test anything else, since it seems that hitting one of the issues is
quite easy.


Thanks,
Sasha



* Re: [PATCH] mm: add pte_present() check on existing hugetlb_entry callbacks
       [not found]                             ` <1394122113-xsq3i6vw@n-horiguchi@ah.jp.nec.com>
@ 2014-03-06 21:16                               ` Sasha Levin
  0 siblings, 0 replies; 17+ messages in thread
From: Sasha Levin @ 2014-03-06 21:16 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: akpm, linux-mm, linux-kernel, riel

On 03/06/2014 11:08 AM, Naoya Horiguchi wrote:
> And I found my patch was totally wrong because it should check
> !pte_present(), not pte_present().
> I'm testing a fixed one (see below), and the problem seems not to reproduce
> in my environment, at least for now.
> But I'm not 100% sure, so I need you to double-check it.
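> 
> For clarity, the corrected check amounts to this sketch (not the literal
> resend):
> 
> 	entry = huge_ptep_get(pte);
> 	if (!pte_present(entry))	/* skip non-present, e.g. migration, entries */
> 		return 0;
> 	page = pte_page(entry);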

Nope, I still see the problem. Same NULL deref and trace as before.


Thanks,
Sasha

