From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>, Greg Thelen <gthelen@google.com>, Vlastimil Babka <vbabka@suse.cz>, Minchan Kim <minchan@kernel.org>, Mel Gorman <mgorman@suse.de>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Michal Nazarewicz <mina86@mina86.com>, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>, Christoph Lameter <cl@linux.com>, Rik van Riel <riel@redhat.com>, David Rientjes <rientjes@google.com>
Subject: [RFC PATCH 4/6] mm, compaction: skip buddy pages by their order in the migrate scanner
Date: Wed, 4 Jun 2014 18:11:48 +0200
Message-ID: <1401898310-14525-4-git-send-email-vbabka@suse.cz>
In-Reply-To: <1401898310-14525-1-git-send-email-vbabka@suse.cz>

The migration scanner skips PageBuddy pages, but does not consider their
order, because checking page_order() is generally unsafe without holding
zone->lock, and acquiring the lock just for the check would not be a good
tradeoff. Still, using the order could avoid some iterations over the rest
of the buddy page. If we are careful, the race window between the
PageBuddy() check and the page_order() read is small, and the worst that
can happen is that we skip too much and miss some isolation candidates.
This is not that bad, as compaction can already fail for many other
reasons, such as parallel allocations, which have a much larger race
window.

This patch therefore makes the migration scanner obtain the buddy page
order and use it to skip the whole buddy page, if the order appears to be
in the valid range. It is important that page_order() is read only once,
so that the value used in the checks and in the pfn calculation is the
same. In theory, however, the compiler could replace the local variable
with multiple inlined calls of page_order(). The patch therefore
introduces page_order_unsafe(), which uses ACCESS_ONCE to prevent this.
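The single-read requirement above can be illustrated outside the kernel. The following is a hedged sketch, not kernel code: ONCE_READ() stands in for the kernel's ACCESS_ONCE() (a single volatile load the compiler cannot duplicate or split), fake_private stands in for page->private, and FAKE_MAX_ORDER for MAX_ORDER.

```c
#include <assert.h>

/*
 * Sketch only, not kernel code: ONCE_READ() mimics ACCESS_ONCE(),
 * fake_private stands in for page->private, FAKE_MAX_ORDER for
 * MAX_ORDER.
 */
#define ONCE_READ(x)	(*(volatile unsigned long *)&(x))
#define FAKE_MAX_ORDER	11

static unsigned long fake_private;

/*
 * Mirrors the migrate scanner hunk: read the order exactly once into a
 * local, trust it only if it is in the valid range, and skip the rest
 * of the buddy page. Because there is a single volatile load, the
 * range check and the pfn arithmetic agree on the same value even if
 * another CPU rewrites the field concurrently.
 */
static unsigned long skip_buddy(unsigned long low_pfn)
{
	unsigned long freepage_order = ONCE_READ(fake_private);

	if (freepage_order > 0 && freepage_order < FAKE_MAX_ORDER)
		low_pfn += (1UL << freepage_order) - 1;
	return low_pfn;
}
```

An out-of-range value, as could be observed in the race window, simply causes no skip, which is the graceful handling the patch relies on.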
Preliminary results with stress-highalloc from mmtests show a 10%
reduction in the number of pages scanned by the migration scanner. The
change is also important to later allow detecting when a cc->order block
of pages cannot be compacted, so that the scanner can skip to the next
block instead of wasting time.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
---
 mm/compaction.c | 20 +++++++++++++++++---
 mm/internal.h   | 20 +++++++++++++++++++-
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index ae7db5f..3dce5a7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -640,11 +640,18 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 		}
 
 		/*
-		 * Skip if free. page_order cannot be used without zone->lock
-		 * as nothing prevents parallel allocations or buddy merging.
+		 * Skip if free. We read page order here without zone lock
+		 * which is generally unsafe, but the race window is small and
+		 * the worst thing that can happen is that we skip some
+		 * potential isolation targets.
 		 */
-		if (PageBuddy(page))
+		if (PageBuddy(page)) {
+			unsigned long freepage_order = page_order_unsafe(page);
+
+			if (freepage_order > 0 && freepage_order < MAX_ORDER)
+				low_pfn += (1UL << freepage_order) - 1;
 			continue;
+		}
 
 		/*
 		 * Check may be lockless but that's ok as we recheck later.
@@ -733,6 +740,13 @@ next_pageblock:
 		low_pfn = ALIGN(low_pfn + 1, pageblock_nr_pages) - 1;
 	}
 
+	/*
+	 * The PageBuddy() check could have potentially brought us outside
+	 * the range to be scanned.
+	 */
+	if (unlikely(low_pfn > end_pfn))
+		end_pfn = low_pfn;
+
 	acct_isolated(zone, locked, cc);
 
 	if (locked)
diff --git a/mm/internal.h b/mm/internal.h
index 1a8a0d4..6aa1f74 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -164,7 +164,8 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
  * general, page_zone(page)->lock must be held by the caller to prevent the
  * page from being allocated in parallel and returning garbage as the order.
  * If a caller does not hold page_zone(page)->lock, it must guarantee that the
- * page cannot be allocated or merged in parallel.
+ * page cannot be allocated or merged in parallel. Alternatively, it must
+ * handle invalid values gracefully, and use page_order_unsafe() below.
  */
 static inline unsigned long page_order(struct page *page)
 {
@@ -172,6 +173,23 @@ static inline unsigned long page_order(struct page *page)
 	return page_private(page);
 }
 
+/*
+ * Like page_order(), but for callers who cannot afford to hold the zone lock,
+ * and handle invalid values gracefully. ACCESS_ONCE is used so that if the
+ * caller assigns the result into a local variable and e.g. tests it for valid
+ * range before using, the compiler cannot decide to remove the variable and
+ * inline the function multiple times, potentially observing different values
+ * in the tests and the actual use of the result.
+ */
+static inline unsigned long page_order_unsafe(struct page *page)
+{
+	/*
+	 * PageBuddy() should be checked by the caller to minimize race window,
+	 * and invalid values must be handled gracefully.
+	 */
+	return ACCESS_ONCE(page_private(page));
+}
+
 /* mm/util.c */
 void __vma_link_list(struct mm_struct *mm, struct vm_area_struct *vma,
 		struct vm_area_struct *prev, struct rb_node *rb_parent);
-- 
1.8.4.5
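The end_pfn clamp in the hunk above exists because skipping a large buddy page can push low_pfn past end_pfn. Below is a small standalone sketch of that arithmetic, using made-up pfn values and FAKE_MAX_ORDER as a stand-in for MAX_ORDER; it is an illustration of the patch's logic, not kernel code.

```c
#include <assert.h>

#define FAKE_MAX_ORDER	11 /* stand-in for the kernel's MAX_ORDER */

/*
 * Sketch of the migrate scanner's skip-and-clamp arithmetic with
 * made-up values: if a buddy page starting at low_pfn extends past
 * end_pfn, the skip overshoots the range, and end_pfn is widened up
 * to low_pfn as the patch does.
 */
static unsigned long scan_tail(unsigned long low_pfn, unsigned long *end_pfn,
			       unsigned long freepage_order)
{
	/* Same guard as the patch: only trust orders in the valid range. */
	if (freepage_order > 0 && freepage_order < FAKE_MAX_ORDER)
		low_pfn += (1UL << freepage_order) - 1;
	low_pfn++; /* the scan loop's own increment */

	/* The clamp added at the end of isolate_migratepages_range(). */
	if (low_pfn > *end_pfn)
		*end_pfn = low_pfn;
	return low_pfn;
}
```

With an order-4 buddy (16 pages) at pfn 1020 and end_pfn 1024, the scanner ends up at pfn 1036, and end_pfn is widened to match so the caller's accounting of the scanned range stays consistent.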