From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Greg Thelen <gthelen@google.com>, Hugh Dickins <hughd@google.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch v2 3/4] mm, compaction: add per-zone migration pfn cache for async compaction
Date: Thu, 1 May 2014 14:35:45 -0700 (PDT)
Message-ID: <alpine.DEB.2.02.1405011435000.23898@chino.kir.corp.google.com>
In-Reply-To: <alpine.DEB.2.02.1405011434140.23898@chino.kir.corp.google.com>

Each zone has a cached migration scanner pfn for memory compaction so that
subsequent calls to memory compaction can start where the previous call left
off.

Currently, the compaction migration scanner only updates the per-zone cached
pfn when pageblocks were not skipped for async compaction.  This creates a
dependency on calling sync compaction to prevent subsequent calls to async
compaction from scanning an enormous number of non-MOVABLE pageblocks each
time it is called.  On large machines, this could be potentially very
expensive.

This patch adds a per-zone cached migration scanner pfn only for async
compaction.  It is updated every time a pageblock has been scanned in its
entirety and no pages from it were successfully isolated.  The cached
migration scanner pfn for sync compaction is updated only when called for
sync compaction.
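The split between the two caches can be sketched in plain C.  This is a
hypothetical standalone model (the names `zone_model`, `update_migrate_cache`,
and `scanner_start` are invented for illustration, not kernel code): the async
cache at index 0 advances on every pass, the sync cache at index 1 advances
only during sync passes, and each mode resumes from its own cache, mirroring
`update_pageblock_skip()` and `compact_zone()` in the patch below.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the per-zone dual migration-pfn cache.
 * Index 0 = async compaction, index 1 = sync compaction. */
struct zone_model {
	unsigned long compact_cached_migrate_pfn[2];
};

/* A whole pageblock was scanned with nothing isolated: advance the
 * async cache unconditionally, the sync cache only on sync passes. */
void update_migrate_cache(struct zone_model *zone, bool sync,
			  unsigned long pfn)
{
	if (pfn > zone->compact_cached_migrate_pfn[0])
		zone->compact_cached_migrate_pfn[0] = pfn;
	if (sync && pfn > zone->compact_cached_migrate_pfn[1])
		zone->compact_cached_migrate_pfn[1] = pfn;
}

/* Each compaction mode restarts from its own cached position,
 * indexed directly by the sync flag as in the patch. */
unsigned long scanner_start(const struct zone_model *zone, bool sync)
{
	return zone->compact_cached_migrate_pfn[sync];
}
```

In this model an async pass that skips non-MOVABLE pageblocks still moves its
own restart point forward, so later async passes no longer depend on a sync
pass having run first.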
Signed-off-by: David Rientjes <rientjes@google.com>
---
 include/linux/mmzone.h |  5 +++--
 mm/compaction.c        | 55 ++++++++++++++++++++++++++------------------------
 2 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -360,9 +360,10 @@ struct zone {
 	/* Set to true when the PG_migrate_skip bits should be cleared */
 	bool			compact_blockskip_flush;
-	/* pfns where compaction scanners should start */
+	/* pfn where compaction free scanner should start */
 	unsigned long		compact_cached_free_pfn;
-	unsigned long		compact_cached_migrate_pfn;
+	/* pfn where async and sync compaction migration scanner should start */
+	unsigned long		compact_cached_migrate_pfn[2];
 #endif
 #ifdef CONFIG_MEMORY_HOTPLUG
 	/* see spanned/present_pages for more description */
diff --git a/mm/compaction.c b/mm/compaction.c
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -89,7 +89,8 @@ static void __reset_isolation_suitable(struct zone *zone)
 	unsigned long end_pfn = zone_end_pfn(zone);
 	unsigned long pfn;
 
-	zone->compact_cached_migrate_pfn = start_pfn;
+	zone->compact_cached_migrate_pfn[0] = start_pfn;
+	zone->compact_cached_migrate_pfn[1] = start_pfn;
 	zone->compact_cached_free_pfn = end_pfn;
 	zone->compact_blockskip_flush = false;
 
@@ -134,6 +135,7 @@ static void update_pageblock_skip(struct compact_control *cc,
 			bool migrate_scanner)
 {
 	struct zone *zone = cc->zone;
+	unsigned long pfn;
 
 	if (cc->ignore_skip_hint)
 		return;
@@ -141,20 +143,25 @@ static void update_pageblock_skip(struct compact_control *cc,
 	if (!page)
 		return;
 
-	if (!nr_isolated) {
-		unsigned long pfn = page_to_pfn(page);
-		set_pageblock_skip(page);
-
-		/* Update where compaction should restart */
-		if (migrate_scanner) {
-			if (!cc->finished_update_migrate &&
-			    pfn > zone->compact_cached_migrate_pfn)
-				zone->compact_cached_migrate_pfn = pfn;
-		} else {
-			if (!cc->finished_update_free &&
-			    pfn < zone->compact_cached_free_pfn)
-				zone->compact_cached_free_pfn = pfn;
-		}
+	if (nr_isolated)
+		return;
+
+	set_pageblock_skip(page);
+	pfn = page_to_pfn(page);
+
+	/* Update where async and sync compaction should restart */
+	if (migrate_scanner) {
+		if (cc->finished_update_migrate)
+			return;
+		if (pfn > zone->compact_cached_migrate_pfn[0])
+			zone->compact_cached_migrate_pfn[0] = pfn;
+		if (cc->sync && pfn > zone->compact_cached_migrate_pfn[1])
+			zone->compact_cached_migrate_pfn[1] = pfn;
+	} else {
+		if (cc->finished_update_free)
+			return;
+		if (pfn < zone->compact_cached_free_pfn)
+			zone->compact_cached_free_pfn = pfn;
 	}
 }
 #else
@@ -464,7 +471,6 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 	unsigned long flags;
 	bool locked = false;
 	struct page *page = NULL, *valid_page = NULL;
-	bool skipped_async_unsuitable = false;
 	const isolate_mode_t mode = (!cc->sync ? ISOLATE_ASYNC_MIGRATE : 0) |
 				    (unevictable ? ISOLATE_UNEVICTABLE : 0);
 
@@ -540,11 +546,8 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 			 * the minimum amount of work satisfies the allocation
 			 */
 			mt = get_pageblock_migratetype(page);
-			if (!cc->sync && !migrate_async_suitable(mt)) {
-				cc->finished_update_migrate = true;
-				skipped_async_unsuitable = true;
+			if (!cc->sync && !migrate_async_suitable(mt))
 				goto next_pageblock;
-			}
 		}
 
@@ -646,10 +649,8 @@ next_pageblock:
 	/*
 	 * Update the pageblock-skip information and cached scanner pfn,
 	 * if the whole pageblock was scanned without isolating any page.
-	 * This is not done when pageblock was skipped due to being unsuitable
-	 * for async compaction, so that eventual sync compaction can try.
 	 */
-	if (low_pfn == end_pfn && !skipped_async_unsuitable)
+	if (low_pfn == end_pfn)
 		update_pageblock_skip(cc, valid_page, nr_isolated, true);
 
 	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
@@ -875,7 +876,8 @@ static int compact_finished(struct zone *zone,
 	/* Compaction run completes if the migrate and free scanner meet */
 	if (cc->free_pfn <= cc->migrate_pfn) {
 		/* Let the next compaction start anew. */
-		zone->compact_cached_migrate_pfn = zone->zone_start_pfn;
+		zone->compact_cached_migrate_pfn[0] = zone->zone_start_pfn;
+		zone->compact_cached_migrate_pfn[1] = zone->zone_start_pfn;
 		zone->compact_cached_free_pfn = zone_end_pfn(zone);
 
 		/*
@@ -1000,7 +1002,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 	 * information on where the scanners should start but check that it
 	 * is initialised by ensuring the values are within zone boundaries.
	 */
-	cc->migrate_pfn = zone->compact_cached_migrate_pfn;
+	cc->migrate_pfn = zone->compact_cached_migrate_pfn[cc->sync];
 	cc->free_pfn = zone->compact_cached_free_pfn;
 	if (cc->free_pfn < start_pfn || cc->free_pfn > end_pfn) {
 		cc->free_pfn = end_pfn & ~(pageblock_nr_pages-1);
@@ -1008,7 +1010,8 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 	}
 	if (cc->migrate_pfn < start_pfn || cc->migrate_pfn > end_pfn) {
 		cc->migrate_pfn = start_pfn;
-		zone->compact_cached_migrate_pfn = cc->migrate_pfn;
+		zone->compact_cached_migrate_pfn[0] = cc->migrate_pfn;
+		zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
 	}
 
 	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn, cc->free_pfn, end_pfn);