From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mel@csn.ul.ie>, Hugh Dickins <hughd@google.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Andi Kleen <andi@firstfloor.org>, Hillf Danton <dhillf@gmail.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 09/10] memory-hotplug: enable memory hotplug to handle hugepage
Date: Tue, 26 Mar 2013 14:23:24 -0400
Message-ID: <1364322204-ah777uqs-mutt-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <20130325150952.GA2154@dhcp22.suse.cz>

On Mon, Mar 25, 2013 at 04:09:52PM +0100, Michal Hocko wrote:
> On Fri 22-03-13 16:23:54, Naoya Horiguchi wrote:
...
> > index d9d3dd7..ef79871 100644
> > --- v3.9-rc3.orig/mm/hugetlb.c
> > +++ v3.9-rc3/mm/hugetlb.c
> > @@ -844,6 +844,36 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >  	return ret;
> >  }
> >  
> > +/* Dissolve a given free hugepage into free pages. */
> > +static void dissolve_free_huge_page(struct page *page)
> > +{
> > +	spin_lock(&hugetlb_lock);
> > +	if (PageHuge(page) && !page_count(page)) {
> > +		struct hstate *h = page_hstate(page);
> > +		int nid = page_to_nid(page);
> > +		list_del(&page->lru);
> > +		h->free_huge_pages--;
> > +		h->free_huge_pages_node[nid]--;
> > +		update_and_free_page(h, page);
> > +	}
> 
> What about surplus pages?

This function is only for free hugepages, not for surplus hugepages
(which are considered in-use hugepages).
dissolve_free_huge_pages() can be called only when all source hugepages
are free, i.e. when all in-use hugepages have been successfully migrated.
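To illustrate the intended ordering, here is a rough sketch (hypothetical
and simplified, not an exact hunk from this series) of where the dissolve
step sits in the offline path, after migration has emptied the range:

	/*
	 * In __offline_pages(): by this point do_migrate_range() has moved
	 * all in-use pages out of [start_pfn, end_pfn), so only free
	 * hugepages can remain and dissolving them is safe.
	 */
	dissolve_free_huge_pages(start_pfn, end_pfn);
	/* check again that everything in the range is isolated/free */
	offlined_pages = check_pages_isolated(start_pfn, end_pfn);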

> > +	spin_unlock(&hugetlb_lock);
> > +}
> > +
> > +/* Dissolve free hugepages in a given pfn range. Used by memory hotplug. */
> > +void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> > +{
> > +	unsigned int order = 8 * sizeof(void *);
> > +	unsigned long pfn;
> > +	struct hstate *h;
> > +
> > +	/* Set scan step to minimum hugepage size */
> > +	for_each_hstate(h)
> > +		if (order > huge_page_order(h))
> > +			order = huge_page_order(h);
> > +	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> > +		dissolve_free_huge_page(pfn_to_page(pfn));
> 
> This assumes that start_pfn doesn't start at a tail page, otherwise you
> could end up traversing only tail pages. This shouldn't happen normally
> as start_pfn will be bound to a memblock, but it looks a bit fragile.

I think this function is never called for such a memblock, because
scan_movable_pages() (formerly named scan_lru_pages) skips any memblock
that starts with a tail page.
But OK, to make the code more robust I'll add a check for whether the
first pfn points to a tail page, roughly as sketched below.
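A minimal sketch of that check (an assumption about the eventual fix, not
the final patch): since free hugepages are naturally aligned to their
order, a start_pfn unaligned to the scan step can only point into the
middle of a hugepage, so an alignment assertion covers it:

	void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
	{
		unsigned int order = 8 * sizeof(void *);
		unsigned long pfn;
		struct hstate *h;

		/* Set scan step to minimum hugepage size */
		for_each_hstate(h)
			if (order > huge_page_order(h))
				order = huge_page_order(h);
		/* an unaligned start_pfn could be a tail page of a hugepage */
		VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
		for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
			dissolve_free_huge_page(pfn_to_page(pfn));
	}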

> 
> It is a bit unfortunate that the offlining code is pfn range oriented
> while hugetlb pages are organized by nodes.
> 
> > +}
> > +
> >  static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
> >  {
> >  	struct page *page;
> > @@ -3155,6 +3185,34 @@ static int is_hugepage_on_freelist(struct page *hpage)
> >  	return 0;
> >  }
> >  
> > +/* Returns true for head pages of in-use hugepages, otherwise returns false. */
> > +bool is_hugepage_movable(struct page *hpage)
> > +{
> > +	struct page *page;
> > +	struct hstate *h;
> > +	bool ret = false;
> > +
> > +	VM_BUG_ON(!PageHuge(hpage));
> > +	/*
> > +	 * This function can be called for a tail page because memory hotplug
> > +	 * scans movability of pages by pfn range of a memory block.
> > +	 * Larger hugepages (1GB for x86_64) are larger than memory block, so
> > +	 * the scan can start at the tail page of larger hugepages.
> > +	 * 1GB hugepage is not movable now, so we return with false for now.
> > +	 */
> > +	if (PageTail(hpage))
> > +		return false;
> > +	h = page_hstate(hpage);
> > +	spin_lock(&hugetlb_lock);
> > +	list_for_each_entry(page, &h->hugepage_activelist, lru)
> > +		if (page == hpage) {
> > +			ret = true;
> > +			break;
> > +		}
> 
> Why are you checking that the page is active?

This is the counterpart of the PageLRU check done for normal pages.

> It doesn't make much sense
> to me because nothing prevents it from being freed/allocated right after
> you release hugetlb_lock.

Such a race can also happen for normal pages, because scan_movable_pages()
just checks the PageLRU flag without holding any lock.
But the caller, __offline_pages(), calls scan_movable_pages() repeatedly
until no page in the memblock is judged movable, and within that retry loop
do_migrate_range() does nothing for free (unmovable) pages.
So there is no behavioral problem even if a movable page is freed just
after the PageLRU check in scan_movable_pages().
Note that within this loop, allocating pages from the memblock is forbidden,
because we have already done set_migratetype_isolate() on it, so we don't
have to worry about a page being allocated just after scan_movable_pages()
returns.
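For reference, a simplified sketch of that retry loop in __offline_pages()
(draining, signal checks, and error handling omitted):

	repeat:
		/* ... drain per-cpu pages, cond_resched(), etc. ... */
		pfn = scan_movable_pages(start_pfn, end_pfn);
		if (pfn) { /* a movable page still exists in the range */
			ret = do_migrate_range(pfn, end_pfn);
			if (!ret)
				goto repeat; /* rescan until nothing movable remains */
		}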

I want the same to hold for hugepages. As you pointed out,
is_hugepage_movable() is not safe from such a race, but in the "freed
just after is_hugepage_movable() returns true" case we have no problem,
for the same reason described above.
However, in the "allocated just after is_hugepage_movable() returns false"
case, it seems possible to hot-remove an active hugepage. I think we can
avoid this by adding a migratetype check in alloc_huge_page(), roughly as
sketched below.
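A rough sketch of that check (hypothetical; the helper name and the exact
placement in the dequeue path are assumptions, not part of this series):

	/*
	 * Skip free hugepages sitting in an isolated pageblock, so that a
	 * range being offlined can never satisfy a new hugepage allocation.
	 */
	static struct page *dequeue_huge_page_node(struct hstate *h, int nid)
	{
		struct page *page;

		list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
			if (!is_migrate_isolate_page(page))
				break;
		/* no non-isolated free hugepage found: fail the allocation */
		if (&h->hugepage_freelists[nid] == &page->lru)
			return NULL;
		list_move(&page->lru, &h->hugepage_activelist);
		set_page_refcounted(page);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		return page;
	}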

> > +	spin_unlock(&hugetlb_lock);
> > +	return ret;
> > +}
> > +
> >  /*
> >   * This function is called from memory failure code.
> >   * Assume the caller holds page lock of the head page.
> > diff --git v3.9-rc3.orig/mm/memory_hotplug.c v3.9-rc3/mm/memory_hotplug.c
> > index 9597eec..2d206e8 100644
> > --- v3.9-rc3.orig/mm/memory_hotplug.c
> > +++ v3.9-rc3/mm/memory_hotplug.c
> > @@ -30,6 +30,7 @@
> >  #include <linux/mm_inline.h>
> >  #include <linux/firmware-map.h>
> >  #include <linux/stop_machine.h>
> > +#include <linux/hugetlb.h>
> >  
> >  #include <asm/tlbflush.h>
> >  
> > @@ -1215,10 +1216,12 @@ static int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
> >  }
> >  
> >  /*
> > - * Scanning pfn is much easier than scanning lru list.
> > - * Scan pfn from start to end and Find LRU page.
> > + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> > + * and hugepages). We scan pfn because it's much easier than scanning over
> > + * linked list. This function returns the pfn of the first found movable
> > + * page if it's found, otherwise 0.
> >   */
> > -static unsigned long scan_lru_pages(unsigned long start, unsigned long end)
> > +static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> >  {
> >  	unsigned long pfn;
> >  	struct page *page;
> > @@ -1227,6 +1230,12 @@ static unsigned long scan_lru_pages(unsigned long start, unsigned long end)
> >  			page = pfn_to_page(pfn);
> >  			if (PageLRU(page))
> >  				return pfn;
> > +			if (PageHuge(page)) {
> > +				if (is_hugepage_movable(page))
> > +					return pfn;
> > +				else
> > +					pfn += (1 << compound_order(page)) - 1;
> 
> This doesn't look right to me. You have to consider where is your tail
> page.
> 					pfn += (1 << compound_order(page)) - (page - compound_head(page)) - 1;
> Or something nicer ;)

OK, I'll fix the computation; one equivalent spelling is sketched below.
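For example (assuming hugepages are naturally aligned to their order, and
taking the compound head explicitly since 'page' may be a tail page):

	pfn = round_up(pfn + 1, 1 << compound_order(compound_head(page))) - 1;

This jumps to the last pfn of the current hugepage, so the loop's pfn++
then lands on the first pfn past it.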

> > +			}
> >  		}
> >  	}
> >  	return 0;
> > @@ -1247,6 +1256,21 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  		if (!pfn_valid(pfn))
> >  			continue;
> >  		page = pfn_to_page(pfn);
> > +
> > +		if (PageHuge(page)) {
> > +			struct page *head = compound_head(page);
> > +			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
> > +			if (compound_order(head) > PFN_SECTION_SHIFT) {
> > +				ret = -EBUSY;
> > +				break;
> > +			}
> > +			if (!get_page_unless_zero(page))
> > +				continue;
> 
> s/page/hpage/

Yes, we should pin the head page.
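The corrected lines would look something like this (a sketch adapted from
the hunk above, not the final patch):

	/* pin the head page; 'page' may be a tail page with zero refcount */
	if (!get_page_unless_zero(head))
		continue;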

Thanks,
Naoya
