From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Rik van Riel <riel@redhat.com>, Jiang Liu <jiang.liu@huawei.com>,
Mel Gorman <mgorman@suse.de>,
Cody P Schafer <cody@linux.vnet.ibm.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@suse.cz>, Minchan Kim <minchan@kernel.org>,
Michal Nazarewicz <mina86@mina86.com>,
Andi Kleen <ak@linux.intel.com>,
Wei Yongjun <yongjun_wei@trendmicro.com.cn>,
Tang Chen <tangchen@cn.fujitsu.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Joonsoo Kim <js1304@gmail.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH 1/7] mm/page_alloc: synchronize get/set pageblock
Date: Thu, 9 Jan 2014 16:04:41 +0900
Message-ID: <1389251087-10224-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1389251087-10224-1-git-send-email-iamjoonsoo.kim@lge.com>
Currently, get/set pageblock is done without any synchronization, so
there is a race condition and the migratetype can end up with an
unintended value. Sometimes we move a pageblock from one migratetype
to another while, at the same time, a page in that pageblock is being
freed. In this case we can read a completely unintended value, since
get/set pageblock is not atomic; the bitmap is accessed bit by bit.
Since set pageblock is used much less frequently than get pageblock,
I think a seqlock is the proper method to synchronize them. This type
of lock has minimal overhead when there are many readers and few
writers, so it fits this situation.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bd791e4..feaa607 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -79,6 +79,7 @@ static inline int get_pageblock_migratetype(struct page *page)
{
return get_pageblock_flags_group(page, PB_migrate, PB_migrate_end);
}
+void set_pageblock_migratetype(struct page *page, int migratetype);
struct free_area {
struct list_head free_list[MIGRATE_TYPES];
@@ -367,6 +368,7 @@ struct zone {
#endif
struct free_area free_area[MAX_ORDER];
+ seqlock_t pageblock_seqlock;
#ifndef CONFIG_SPARSEMEM
/*
* Flags for a pageblock_nr_pages block. See pageblock-flags.h.
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 3fff8e7..58e2a89 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -23,7 +23,6 @@ static inline bool is_migrate_isolate(int migratetype)
bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
bool skip_hwpoisoned_pages);
-void set_pageblock_migratetype(struct page *page, int migratetype);
int move_freepages_block(struct zone *zone, struct page *page,
int migratetype);
int move_freepages(struct zone *zone,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5248fe0..b36aa5a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4788,6 +4788,7 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
spin_lock_init(&zone->lock);
spin_lock_init(&zone->lru_lock);
zone_seqlock_init(zone);
+ seqlock_init(&zone->pageblock_seqlock);
zone->zone_pgdat = pgdat;
zone_pcp_init(zone);
@@ -5927,15 +5928,19 @@ unsigned long get_pageblock_flags_group(struct page *page,
unsigned long pfn, bitidx;
unsigned long flags = 0;
unsigned long value = 1;
+ unsigned int seq;
zone = page_zone(page);
pfn = page_to_pfn(page);
bitmap = get_pageblock_bitmap(zone, pfn);
bitidx = pfn_to_bitidx(zone, pfn);
- for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
- if (test_bit(bitidx + start_bitidx, bitmap))
- flags |= value;
+ do {
+ seq = read_seqbegin(&zone->pageblock_seqlock);
+ for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
+ if (test_bit(bitidx + start_bitidx, bitmap))
+ flags |= value;
+ } while (read_seqretry(&zone->pageblock_seqlock, seq));
return flags;
}
@@ -5954,6 +5959,7 @@ void set_pageblock_flags_group(struct page *page, unsigned long flags,
unsigned long *bitmap;
unsigned long pfn, bitidx;
unsigned long value = 1;
+ unsigned long irq_flags;
zone = page_zone(page);
pfn = page_to_pfn(page);
@@ -5961,11 +5967,13 @@ void set_pageblock_flags_group(struct page *page, unsigned long flags,
bitidx = pfn_to_bitidx(zone, pfn);
VM_BUG_ON(!zone_spans_pfn(zone, pfn));
+ write_seqlock_irqsave(&zone->pageblock_seqlock, irq_flags);
for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
if (flags & value)
__set_bit(bitidx + start_bitidx, bitmap);
else
__clear_bit(bitidx + start_bitidx, bitmap);
+ write_sequnlock_irqrestore(&zone->pageblock_seqlock, irq_flags);
}
/*
--
1.7.9.5
Thread overview: 24+ messages
2014-01-09 7:04 [PATCH 0/7] improve robustness on handling migratetype Joonsoo Kim
2014-01-09 7:04 ` Joonsoo Kim [this message]
2014-01-09 9:08 ` [PATCH 1/7] mm/page_alloc: synchronize get/set pageblock Michal Nazarewicz
2014-01-09 7:04 ` [PATCH 2/7] mm/cma: fix cma free page accounting Joonsoo Kim
2014-01-09 21:10 ` Laura Abbott
2014-01-10 8:50 ` Joonsoo Kim
2014-01-09 7:04 ` [PATCH 3/7] mm/page_alloc: move set_freepage_migratetype() to better place Joonsoo Kim
2014-01-09 7:04 ` [PATCH 4/7] mm/isolation: remove invalid check condition Joonsoo Kim
2014-01-09 7:04 ` [PATCH 5/7] mm/page_alloc: separate interface to set/get migratetype of freepage Joonsoo Kim
2014-01-09 9:18 ` Michal Nazarewicz
2014-01-09 7:04 ` [PATCH 6/7] mm/page_alloc: store freelist migratetype to the page on buddy properly Joonsoo Kim
2014-01-09 9:19 ` Michal Nazarewicz
2014-01-09 7:04 ` [PATCH 7/7] mm/page_alloc: don't merge MIGRATE_(CMA|ISOLATE) pages on buddy Joonsoo Kim
2014-01-09 9:22 ` Michal Nazarewicz
2014-01-09 9:06 ` [PATCH 0/7] improve robustness on handling migratetype Michal Nazarewicz
2014-01-09 14:05 ` Joonsoo Kim
2014-01-09 9:27 ` Mel Gorman
2014-01-10 8:48 ` Joonsoo Kim
2014-01-10 9:48 ` Mel Gorman
2014-01-13 1:57 ` Joonsoo Kim
2014-01-29 16:52 ` Vlastimil Babka
2014-01-31 15:39 ` Mel Gorman
2014-02-03 7:45 ` Joonsoo Kim
2014-02-03 9:16 ` Vlastimil Babka