From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-media@vger.kernel.org, linux-mm@kvack.org,
	linaro-mm-sig@lists.linaro.org
Cc: Michal Nazarewicz <mina86@mina86.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Kyungmin Park <kyungmin.park@samsung.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Ankita Garg <ankita@in.ibm.com>,
	Daniel Walker <dwalker@codeaurora.org>,
	Johan MOSSBERG <johan.xx.mossberg@stericsson.com>,
	Mel Gorman <mel@csn.ul.ie>, Arnd Bergmann <arnd@arndb.de>,
	Jesse Barker <jesse.barker@linaro.org>
Subject: [PATCH 05/10] mm: alloc_contig_range() added
Date: Fri, 10 Jun 2011 11:54:53 +0200	[thread overview]
Message-ID: <1307699698-29369-6-git-send-email-m.szyprowski@samsung.com> (raw)
In-Reply-To: <1307699698-29369-1-git-send-email-m.szyprowski@samsung.com>

From: Michal Nazarewicz <m.nazarewicz@samsung.com>

This commit adds the alloc_contig_range() function, which tries to
allocate a given range of pages.  It first tries to migrate any
already-allocated pages that fall within the range, thus freeing them.
Once all pages in the range are free, they are removed from the buddy
system and are thus allocated for the caller to use.
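
For illustration only, a minimal caller sketch follows; it is not part
of this patch.  The base PFN, page count and GFP flags are made up,
use_the_buffer() is a placeholder, and it is assumed the caller
already controls the migrate type of the affected pageblocks:

	unsigned long pfn = 0x40000;	/* hypothetical base PFN */
	unsigned long count = 256;	/* hypothetical number of pages */

	if (!alloc_contig_range(pfn, pfn + count, GFP_KERNEL)) {
		/* PFNs [pfn, pfn + count) now belong to the caller */
		use_the_buffer(page_address(pfn_to_page(pfn)));
		free_contig_pages(pfn_to_page(pfn), count);
	}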

Signed-off-by: Michal Nazarewicz <m.nazarewicz@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
[m.szyprowski: renamed some variables for easier code reading]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
---
 include/linux/page-isolation.h |    2 +
 mm/page_alloc.c                |  144 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+), 0 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index f1417ed..c5d1a7c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,6 +34,8 @@ extern int set_migratetype_isolate(struct page *page);
 extern void unset_migratetype_isolate(struct page *page);
 extern unsigned long alloc_contig_freed_pages(unsigned long start,
 					      unsigned long end, gfp_t flag);
+extern int alloc_contig_range(unsigned long start, unsigned long end,
+			      gfp_t flags);
 extern void free_contig_pages(struct page *page, int nr_pages);
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 00e9b24..2cea044 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5638,6 +5638,150 @@ unsigned long alloc_contig_freed_pages(unsigned long start, unsigned long end,
 	return pfn;
 }
 
+static unsigned long pfn_to_maxpage(unsigned long pfn)
+{
+	return pfn & ~(MAX_ORDER_NR_PAGES - 1);
+}
+
+static unsigned long pfn_to_maxpage_up(unsigned long pfn)
+{
+	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
+}
+
+#define MIGRATION_RETRY	5
+static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
+{
+	int migration_failed = 0, ret;
+	unsigned long pfn = start;
+
+	/*
+	 * Some code "borrowed" from KAMEZAWA Hiroyuki's
+	 * __alloc_contig_pages().
+	 */
+
+	for (;;) {
+		pfn = scan_lru_pages(pfn, end);
+		if (!pfn || pfn >= end)
+			break;
+
+		ret = do_migrate_range(pfn, end);
+		if (!ret) {
+			migration_failed = 0;
+		} else if (ret != -EBUSY
+			|| ++migration_failed >= MIGRATION_RETRY) {
+			return ret;
+		} else {
+			/* There are unstable pages on the pagevec. */
+			lru_add_drain_all();
+			/*
+			 * There may be pages on the pcp lists from
+			 * before we marked the range as ISOLATED.
+			 */
+			drain_all_pages();
+		}
+		cond_resched();
+	}
+
+	if (!migration_failed) {
+		/* drop all pages in pagevec and pcp list */
+		lru_add_drain_all();
+		drain_all_pages();
+	}
+
+	/* Make sure all pages are isolated */
+	if (WARN_ON(test_pages_isolated(start, end)))
+		return -EBUSY;
+
+	return 0;
+}
+
+/**
+ * alloc_contig_range() -- tries to allocate a given range of pages
+ * @start:	start PFN to allocate
+ * @end:	one-past-the-last PFN to allocate
+ * @flags:	flags passed to alloc_contig_freed_pages().
+ *
+ * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
+ * aligned, however it is the caller's responsibility to guarantee
+ * that we are the only thread that changes the migrate type of the
+ * pageblocks the pages fall in.
+ *
+ * Returns zero on success or a negative error code.  On success all
+ * pages whose PFN is in [start, end) are allocated for the caller and
+ * need to be freed with free_contig_pages().
+ */
+int alloc_contig_range(unsigned long start, unsigned long end,
+		       gfp_t flags)
+{
+	unsigned long outer_start, outer_end;
+	int ret;
+
+	/*
+	 * What we do here is mark all pageblocks in the range as
+	 * MIGRATE_ISOLATE.  Because of the way the page allocator works,
+	 * we align the range to MAX_ORDER pages so that the page
+	 * allocator won't try to merge buddies from different pageblocks
+	 * and change MIGRATE_ISOLATE to some other migrate type.
+	 *
+	 * Once the pageblocks are marked as MIGRATE_ISOLATE, we
+	 * migrate the pages from the unaligned range (i.e. the pages
+	 * that we are interested in).  This puts all the pages in the
+	 * range back into the page allocator as MIGRATE_ISOLATE.
+	 *
+	 * When this is done, we take the pages in the range from the
+	 * page allocator, removing them from the buddy system.  This
+	 * way the page allocator will never consider using them.
+	 *
+	 * This lets us mark the pageblocks back as
+	 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
+	 * MAX_ORDER aligned range, but not in the unaligned original
+	 * range, are put back into the page allocator so that the buddy
+	 * system can use them.
+	 */
+
+	ret = start_isolate_page_range(pfn_to_maxpage(start),
+				       pfn_to_maxpage_up(end));
+	if (ret)
+		goto done;
+
+	ret = __alloc_contig_migrate_range(start, end);
+	if (ret)
+		goto done;
+
+	/*
+	 * Pages from [start, end) are within MAX_ORDER_NR_PAGES aligned
+	 * blocks that are marked as MIGRATE_ISOLATE.  What's more, all
+	 * pages in [start, end) are free in the page allocator.  What
+	 * we are going to do is allocate all pages from [start, end),
+	 * that is, remove them from the page allocator.
+	 *
+	 * The only problem is that pages at the beginning and at the
+	 * end of the interesting range may not be aligned with pages
+	 * that the page allocator holds, i.e. they can be part of
+	 * higher-order pages.  Because of this, we reserve the bigger
+	 * range and, once this is done, free the pages we are not
+	 * interested in.
+	 */
+
+	ret = 0;
+	while (!PageBuddy(pfn_to_page(start & (~0UL << ret))))
+		if (WARN_ON(++ret >= MAX_ORDER))
+			return -EINVAL;
+
+	outer_start = start & (~0UL << ret);
+	outer_end   = alloc_contig_freed_pages(outer_start, end, flags);
+
+	/* Free head and tail (if any) */
+	if (start != outer_start)
+		free_contig_pages(pfn_to_page(outer_start), start - outer_start);
+	if (end != outer_end)
+		free_contig_pages(pfn_to_page(end), outer_end - end);
+
+	ret = 0;
+done:
+	undo_isolate_page_range(pfn_to_maxpage(start), pfn_to_maxpage_up(end));
+	return ret;
+}
+
 void free_contig_pages(struct page *page, int nr_pages)
 {
 	for (; nr_pages; --nr_pages, ++page)
-- 
1.7.1.569.g6f426
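
For reference only, a standalone user-space sketch of the rounding
performed by the pfn_to_maxpage()/pfn_to_maxpage_up() helpers added
above.  The MAX_ORDER_NR_PAGES value of 1024 and the sample PFNs are
assumptions for illustration; the real value depends on the kernel's
MAX_ORDER configuration:

	#include <stdio.h>

	#define MAX_ORDER_NR_PAGES	1024UL	/* illustrative value only */
	#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

	/* round a PFN down to a MAX_ORDER_NR_PAGES boundary */
	static unsigned long pfn_to_maxpage(unsigned long pfn)
	{
		return pfn & ~(MAX_ORDER_NR_PAGES - 1);
	}

	/* round a PFN up to a MAX_ORDER_NR_PAGES boundary */
	static unsigned long pfn_to_maxpage_up(unsigned long pfn)
	{
		return ALIGN(pfn, MAX_ORDER_NR_PAGES);
	}

	int main(void)
	{
		/* prints 0x12000 0x12800: the MAX_ORDER-aligned outer range */
		printf("%#lx %#lx\n", pfn_to_maxpage(0x12345UL),
		       pfn_to_maxpage_up(0x12789UL));
		return 0;
	}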


Thread overview: 178+ messages
2011-06-10  9:54 [PATCHv10 0/10] Contiguous Memory Allocator Marek Szyprowski
2011-06-10  9:54 ` Marek Szyprowski
2011-06-10  9:54 ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 01/10] lib: bitmap: Added alignment offset for bitmap_find_next_zero_area() Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 02/10] lib: genalloc: Generic allocator improvements Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10 11:24   ` Alan Cox
2011-06-10 11:24     ` Alan Cox
2011-06-10 11:24     ` Alan Cox
2011-06-10 12:22     ` Marek Szyprowski
2011-06-10 12:22       ` Marek Szyprowski
2011-06-10 12:22       ` Marek Szyprowski
2011-06-10 12:52       ` Alan Cox
2011-06-10 12:52         ` Alan Cox
2011-06-10 12:52         ` Alan Cox
2011-06-10 17:16         ` Michal Nazarewicz
2011-06-10 17:16           ` Michal Nazarewicz
2011-06-10 17:16           ` Michal Nazarewicz
2011-06-14 15:49         ` [Linaro-mm-sig] " Jordan Crouse
2011-06-14 15:49           ` Jordan Crouse
2011-06-14 15:49           ` Jordan Crouse
2011-06-10  9:54 ` [PATCH 03/10] mm: move some functions from memory_hotplug.c to page_isolation.c Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 04/10] mm: alloc_contig_freed_pages() added Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` Marek Szyprowski [this message]
2011-06-10  9:54   ` [PATCH 05/10] mm: alloc_contig_range() added Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 06/10] mm: MIGRATE_CMA migration type added Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 07/10] mm: MIGRATE_CMA isolation functions added Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 08/10] mm: cma: Contiguous Memory Allocator added Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10 16:21   ` Arnd Bergmann
2011-06-10 16:21     ` Arnd Bergmann
2011-06-10 16:21     ` Arnd Bergmann
2011-06-13  9:05     ` Marek Szyprowski
2011-06-13  9:05       ` Marek Szyprowski
2011-06-13  9:05       ` Marek Szyprowski
2011-06-14 13:49       ` Arnd Bergmann
2011-06-14 13:49         ` Arnd Bergmann
2011-06-14 13:49         ` Arnd Bergmann
2011-06-14 13:55         ` Michal Nazarewicz
2011-06-14 13:55           ` Michal Nazarewicz
2011-06-14 13:55           ` Michal Nazarewicz
2011-06-14 16:03           ` Arnd Bergmann
2011-06-14 16:03             ` Arnd Bergmann
2011-06-14 16:03             ` Arnd Bergmann
2011-06-14 16:58             ` Michal Nazarewicz
2011-06-14 16:58               ` Michal Nazarewicz
2011-06-14 16:58               ` Michal Nazarewicz
2011-06-14 18:30               ` Arnd Bergmann
2011-06-14 18:30                 ` Arnd Bergmann
2011-06-14 18:30                 ` Arnd Bergmann
2011-06-14 18:40                 ` Michal Nazarewicz
2011-06-14 18:40                   ` Michal Nazarewicz
2011-06-14 18:40                   ` Michal Nazarewicz
2011-06-15  7:11                 ` Marek Szyprowski
2011-06-15  7:11                   ` Marek Szyprowski
2011-06-15  7:11                   ` Marek Szyprowski
2011-06-15  7:37                   ` Arnd Bergmann
2011-06-15  7:37                     ` Arnd Bergmann
2011-06-15  7:37                     ` Arnd Bergmann
2011-06-15  8:14                     ` Marek Szyprowski
2011-06-15  8:14                       ` Marek Szyprowski
2011-06-15  8:14                       ` Marek Szyprowski
2011-06-16  0:48                     ` [Linaro-mm-sig] " Philip Balister
2011-06-16  0:48                       ` Philip Balister
2011-06-16  0:48                       ` Philip Balister
2011-06-16  7:03                       ` Arnd Bergmann
2011-06-16  7:03                         ` Arnd Bergmann
2011-06-16  7:03                         ` Arnd Bergmann
2011-06-16  7:03                         ` Arnd Bergmann
2011-06-22  7:03                     ` Hans Verkuil
2011-06-22  7:03                       ` Hans Verkuil
2011-06-22  7:03                       ` Hans Verkuil
2011-06-22  7:32                       ` Michal Nazarewicz
2011-06-22  7:32                         ` Michal Nazarewicz
2011-06-22  7:32                         ` Michal Nazarewicz
2011-06-22 12:42                       ` Arnd Bergmann
2011-06-22 12:42                         ` Arnd Bergmann
2011-06-22 12:42                         ` Arnd Bergmann
2011-06-22 13:15                         ` Marek Szyprowski
2011-06-22 13:15                           ` Marek Szyprowski
2011-06-22 13:15                           ` Marek Szyprowski
2011-06-22 13:39                           ` Arnd Bergmann
2011-06-22 13:39                             ` Arnd Bergmann
2011-06-22 13:39                             ` Arnd Bergmann
2011-06-22 13:39                             ` Arnd Bergmann
2011-06-22 16:04                             ` Michal Nazarewicz
2011-06-22 16:04                               ` Michal Nazarewicz
2011-06-22 16:04                               ` Michal Nazarewicz
2011-06-22 15:54                           ` Michal Nazarewicz
2011-06-22 15:54                             ` Michal Nazarewicz
2011-06-22 15:54                             ` Michal Nazarewicz
2011-06-15 11:53                 ` Daniel Vetter
2011-06-15 11:53                   ` Daniel Vetter
2011-06-15 11:53                   ` Daniel Vetter
2011-06-15 13:12                   ` Thomas Hellstrom
2011-06-15 13:12                     ` Thomas Hellstrom
2011-06-15 13:12                     ` Thomas Hellstrom
2011-06-17 16:08                   ` Arnd Bergmann
2011-06-17 16:08                     ` Arnd Bergmann
2011-06-17 16:08                     ` Arnd Bergmann
2011-06-14 17:01             ` Daniel Stone
2011-06-14 17:01               ` Daniel Stone
2011-06-14 17:01               ` Daniel Stone
2011-06-14 18:58               ` Zach Pfeffer
2011-06-14 18:58                 ` Zach Pfeffer
2011-06-14 18:58                 ` Zach Pfeffer
2011-06-14 20:42                 ` Arnd Bergmann
2011-06-14 20:42                   ` Arnd Bergmann
2011-06-14 20:42                   ` Arnd Bergmann
2011-06-14 21:01                   ` Jordan Crouse
2011-06-14 21:01                     ` Jordan Crouse
2011-06-14 21:01                     ` Jordan Crouse
2011-06-15 11:27                     ` Arnd Bergmann
2011-06-15 11:27                       ` Arnd Bergmann
2011-06-15 11:27                       ` Arnd Bergmann
2011-06-15 11:27                       ` Arnd Bergmann
2011-06-15  6:29                   ` Subash Patel
2011-06-15  8:36                   ` Marek Szyprowski
2011-06-15  8:36                     ` Marek Szyprowski
2011-06-15  8:36                     ` Marek Szyprowski
2011-06-15 21:39                     ` Larry Bassel
2011-06-15 21:39                       ` Larry Bassel
2011-06-15 21:39                       ` Larry Bassel
2011-06-15 22:06                       ` Arnd Bergmann
2011-06-15 22:06                         ` Arnd Bergmann
2011-06-15 22:06                         ` Arnd Bergmann
2011-06-16 17:01                         ` Larry Bassel
2011-06-16 17:01                           ` Larry Bassel
2011-06-16 17:01                           ` Larry Bassel
2011-06-17 12:45                           ` Arnd Bergmann
2011-06-17 12:45                             ` Arnd Bergmann
2011-06-17 12:45                             ` Arnd Bergmann
2011-07-04  5:25                         ` Ankita Garg
2011-07-04  5:25                           ` Ankita Garg
2011-07-04  5:25                           ` Ankita Garg
2011-07-04 14:45                           ` Arnd Bergmann
2011-07-04 14:45                             ` Arnd Bergmann
2011-07-04 14:45                             ` Arnd Bergmann
2011-06-16  3:20                       ` Zach Pfeffer
2011-06-16  3:20                         ` Zach Pfeffer
2011-06-16  3:20                         ` Zach Pfeffer
2011-06-15  9:26                   ` Michal Nazarewicz
2011-06-15  9:26                     ` Michal Nazarewicz
2011-06-15  9:26                     ` Michal Nazarewicz
2011-06-15 11:20                     ` Arnd Bergmann
2011-06-15 11:20                       ` Arnd Bergmann
2011-06-15 11:20                       ` Arnd Bergmann
2011-06-15 11:30                       ` Michal Nazarewicz
2011-06-15 11:30                         ` Michal Nazarewicz
2011-06-15 11:30                         ` Michal Nazarewicz
2011-06-15  6:01             ` Subash Patel
2011-06-15  6:01               ` Subash Patel
2011-06-15  6:01               ` Subash Patel
2011-06-15  8:02         ` Marek Szyprowski
2011-06-15  8:02           ` Marek Szyprowski
2011-06-15  8:02           ` Marek Szyprowski
2011-06-15 11:14           ` Arnd Bergmann
2011-06-15 11:14             ` Arnd Bergmann
2011-06-15 11:14             ` Arnd Bergmann
2011-06-10  9:54 ` [PATCH 09/10] ARM: integrate CMA with dma-mapping subsystem Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54 ` [PATCH 10/10] ARM: S5PV210: add CMA support for FIMC devices on Aquila board Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
2011-06-10  9:54   ` Marek Szyprowski
