* [PATCH 00/15] mm: page_alloc: style neatenings
@ 2017-03-16  1:59 ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  1:59 UTC (permalink / raw)
  To: Andrew Morton, linux-mm; +Cc: linux-kernel

Just neatening.  Maybe useful, maybe not.  Mostly whitespace changes.

Even after this series, checkpatch still reports many messages that
should probably just be ignored; one way to filter those message types
is sketched after the listings below.

Before:
$ ./scripts/checkpatch.pl --strict --terse --nosummary --show-types \
	-f mm/page_alloc.c | \
  cut -f4 -d":" | sort | uniq -c | sort -rn
    144 PARENTHESIS_ALIGNMENT
     38 SPLIT_STRING
     36 LINE_SPACING
     32 LONG_LINE
     28 SPACING
     14 LONG_LINE_COMMENT
     14 BRACES
     13 LOGGING_CONTINUATION
     12 PREFER_PR_LEVEL
      8 MISPLACED_INIT
      7 EXPORT_SYMBOL
      7 AVOID_BUG
      6 UNSPECIFIED_INT
      5 MACRO_ARG_PRECEDENCE
      4 MULTIPLE_ASSIGNMENTS
      4 LOGICAL_CONTINUATIONS
      4 COMPARISON_TO_NULL
      4 CAMELCASE
      3 UNNECESSARY_PARENTHESES
      3 PRINTK_WITHOUT_KERN_LEVEL
      3 MACRO_ARG_REUSE
      2 UNDOCUMENTED_SETUP
      2 MEMORY_BARRIER
      2 BLOCK_COMMENT_STYLE
      1 VOLATILE
      1 TYPO_SPELLING
      1 SYMBOLIC_PERMS
      1 SUSPECT_CODE_INDENT
      1 SPACE_BEFORE_TAB
      1 FUNCTION_ARGUMENTS
      1 CONSTANT_COMPARISON
      1 CONSIDER_KSTRTO

After:
$ ./scripts/checkpatch.pl --strict --terse --nosummary --show-types \
	-f mm/page_alloc.c | \
  cut -f4 -d":" | sort | uniq -c | sort -rn
     43 SPLIT_STRING
     21 LONG_LINE
     14 LONG_LINE_COMMENT
     13 LOGGING_CONTINUATION
     12 PREFER_PR_LEVEL
      8 PRINTK_WITHOUT_KERN_LEVEL
      7 AVOID_BUG
      5 MACRO_ARG_PRECEDENCE
      4 MULTIPLE_ASSIGNMENTS
      4 CAMELCASE
      3 MACRO_ARG_REUSE
      2 UNDOCUMENTED_SETUP
      2 MEMORY_BARRIER
      2 LEADING_SPACE
      1 VOLATILE
      1 SPACING
      1 FUNCTION_ARGUMENTS
      1 EXPORT_SYMBOL
      1 CONSTANT_COMPARISON
      1 CONSIDER_KSTRTO
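
One way to re-run the scan while skipping the message types considered
acceptable here is checkpatch's --ignore option.  The particular type
list below is only an illustrative subset taken from the "After" counts
above, not a recommendation from this series:

# illustrative --ignore selection only; adjust to taste
$ ./scripts/checkpatch.pl --strict --terse --nosummary --show-types \
	--ignore SPLIT_STRING,LONG_LINE,LONG_LINE_COMMENT,LOGGING_CONTINUATION \
	-f mm/page_alloc.c

The remaining output is then limited to the types that still warrant a
closer look.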

Joe Perches (15):
  mm: page_alloc: whitespace neatening
  mm: page_alloc: align arguments to parenthesis
  mm: page_alloc: fix brace positions
  mm: page_alloc: fix blank lines
  mm: page_alloc: Move __meminitdata and __initdata uses
  mm: page_alloc: Use unsigned int instead of unsigned
  mm: page_alloc: Move labels to column 1
  mm: page_alloc: Fix typo acording -> according & the the -> to the
  mm: page_alloc: Use the common commenting style
  mm: page_alloc: 80 column neatening
  mm: page_alloc: Move EXPORT_SYMBOL uses
  mm: page_alloc: Avoid pointer comparisons to NULL
  mm: page_alloc: Remove unnecessary parentheses
  mm: page_alloc: Use octal permissions
  mm: page_alloc: Move logical continuations to EOL

 mm/page_alloc.c | 845 ++++++++++++++++++++++++++++++--------------------------
 1 file changed, 458 insertions(+), 387 deletions(-)

-- 
2.10.0.rc2.1.g053435c

* [PATCH 01/15] mm: page_alloc: whitespace neatening
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  1:59   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  1:59 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

whitespace changes only - git diff -w shows no difference
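
For example, with this patch applied as the topmost commit of a branch
based on the same tree, a whitespace-ignoring diff should print nothing
(a sketch; the HEAD~1 HEAD range is an assumption about where the patch
sits locally):

$ git diff -w HEAD~1 HEAD -- mm/page_alloc.c   # -w ignores all whitespace differences

No output confirms the change is whitespace-only.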

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d3c10734874..504749032400 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -202,7 +202,7 @@ static void __free_pages_ok(struct page *page, unsigned int order);
  * TBD: should special case ZONE_DMA32 machines here - in those we normally
  * don't need any ZONE_NORMAL reservation
  */
-int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES-1] = {
+int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
 #ifdef CONFIG_ZONE_DMA
 	 256,
 #endif
@@ -366,7 +366,7 @@ static inline unsigned long *get_pageblock_bitmap(struct page *page,
 static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
-	pfn &= (PAGES_PER_SECTION-1);
+	pfn &= (PAGES_PER_SECTION - 1);
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 #else
 	pfn = pfn - round_down(page_zone(page)->zone_start_pfn, pageblock_nr_pages);
@@ -395,7 +395,7 @@ static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
 	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
+	bitidx &= (BITS_PER_LONG - 1);
 
 	word = bitmap[word_bitidx];
 	bitidx += end_bitidx;
@@ -436,7 +436,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
 	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
+	bitidx &= (BITS_PER_LONG - 1);
 
 	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
 
@@ -867,7 +867,7 @@ static inline void __free_one_page(struct page *page,
 	 * so it's less likely to be used soon and more likely to be merged
 	 * as a higher order page
 	 */
-	if ((order < MAX_ORDER-2) && pfn_valid_within(buddy_pfn)) {
+	if ((order < MAX_ORDER - 2) && pfn_valid_within(buddy_pfn)) {
 		struct page *higher_page, *higher_buddy;
 		combined_pfn = buddy_pfn & pfn;
 		higher_page = page + (combined_pfn - pfn);
@@ -1681,7 +1681,7 @@ static void check_new_page_bad(struct page *page)
 static inline int check_new_page(struct page *page)
 {
 	if (likely(page_expected_state(page,
-				PAGE_FLAGS_CHECK_AT_PREP|__PG_HWPOISON)))
+				PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
 		return 0;
 
 	check_new_page_bad(page);
@@ -1899,7 +1899,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	struct page *start_page, *end_page;
 
 	start_pfn = page_to_pfn(page);
-	start_pfn = start_pfn & ~(pageblock_nr_pages-1);
+	start_pfn = start_pfn & ~(pageblock_nr_pages - 1);
 	start_page = pfn_to_page(start_pfn);
 	end_page = start_page + pageblock_nr_pages - 1;
 	end_pfn = start_pfn + pageblock_nr_pages - 1;
@@ -2021,7 +2021,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * If a sufficient number of pages in the block are either free or of
 	 * comparable migratability as our allocation, claim the whole block.
 	 */
-	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
+	if (free_pages + alike_pages >= (1 << (pageblock_order - 1)) ||
 			page_group_by_mobility_disabled)
 		set_pageblock_migratetype(page, start_type);
 
@@ -2205,8 +2205,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	bool can_steal;
 
 	/* Find the largest possible block of pages in the other list */
-	for (current_order = MAX_ORDER-1;
-				current_order >= order && current_order <= MAX_ORDER-1;
+	for (current_order = MAX_ORDER - 1;
+				current_order >= order && current_order <= MAX_ORDER - 1;
 				--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
@@ -3188,7 +3188,7 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 	struct page *page;
 
 	page = get_page_from_freelist(gfp_mask, order,
-			alloc_flags|ALLOC_CPUSET, ac);
+			alloc_flags | ALLOC_CPUSET, ac);
 	/*
 	 * fallback to ignore cpuset restriction if our nodes
 	 * are depleted
@@ -3231,7 +3231,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 * we're still under heavy pressure.
 	 */
 	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
-					ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
+					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
 
@@ -3518,7 +3518,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
 	/* __GFP_HIGH is assumed to be the same as ALLOC_HIGH to save a branch. */
-	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH);
+	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t)ALLOC_HIGH);
 
 	/*
 	 * The caller may dip into page reserves a bit more if the caller
@@ -3526,7 +3526,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	 * policy or is asking for __GFP_HIGH memory.  GFP_ATOMIC requests will
 	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_HIGH (__GFP_HIGH).
 	 */
-	alloc_flags |= (__force int) (gfp_mask & __GFP_HIGH);
+	alloc_flags |= (__force int)(gfp_mask & __GFP_HIGH);
 
 	if (gfp_mask & __GFP_ATOMIC) {
 		/*
@@ -3642,7 +3642,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 							NR_ZONE_WRITE_PENDING);
 
 				if (2 * write_pending > reclaimable) {
-					congestion_wait(BLK_RW_ASYNC, HZ/10);
+					congestion_wait(BLK_RW_ASYNC, HZ / 10);
 					return true;
 				}
 			}
@@ -3700,8 +3700,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * We also sanity check to catch abuse of atomic reserves being used by
 	 * callers that are not in atomic context.
 	 */
-	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
-				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
+	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)) ==
+				(__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
 		gfp_mask &= ~__GFP_ATOMIC;
 
 retry_cpuset:
@@ -3816,7 +3816,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (time_after(jiffies, alloc_start + stall_timeout)) {
 		warn_alloc(gfp_mask & ~__GFP_NOWARN, ac->nodemask,
 			"page allocation stalls for %ums, order:%u",
-			jiffies_to_msecs(jiffies-alloc_start), order);
+			jiffies_to_msecs(jiffies - alloc_start), order);
 		stall_timeout += 10 * HZ;
 	}
 
@@ -4063,7 +4063,7 @@ unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order)
 	page = alloc_pages(gfp_mask, order);
 	if (!page)
 		return 0;
-	return (unsigned long) page_address(page);
+	return (unsigned long)page_address(page);
 }
 EXPORT_SYMBOL(__get_free_pages);
 
@@ -4452,7 +4452,7 @@ static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask
 	return !node_isset(nid, *nodemask);
 }
 
-#define K(x) ((x) << (PAGE_SHIFT-10))
+#define K(x) ((x) << (PAGE_SHIFT - 10))
 
 static void show_migration_types(unsigned char type)
 {
@@ -4754,7 +4754,7 @@ char numa_zonelist_order[16] = "default";
  * interface for configure zonelist ordering.
  * command line option "numa_zonelist_order"
  *	= "[dD]efault	- default, automatic configuration.
- *	= "[nN]ode 	- order by node locality, then by zone within node
+ *	= "[nN]ode	- order by node locality, then by zone within node
  *	= "[zZ]one      - order by zone, then by locality within zone
  */
 
@@ -4881,7 +4881,7 @@ static int find_next_best_node(int node, nodemask_t *used_node_mask)
 			val += PENALTY_FOR_NODE_WITH_CPUS;
 
 		/* Slight preference for less loaded node */
-		val *= (MAX_NODE_LOAD*MAX_NUMNODES);
+		val *= (MAX_NODE_LOAD * MAX_NUMNODES);
 		val += node_load[n];
 
 		if (val < min_val) {
@@ -5381,7 +5381,7 @@ static int zone_batchsize(struct zone *zone)
 	 * of pages of one half of the possible page colors
 	 * and the other with pages of the other colors.
 	 */
-	batch = rounddown_pow_of_two(batch + batch/2) - 1;
+	batch = rounddown_pow_of_two(batch + batch / 2) - 1;
 
 	return batch;
 
@@ -5914,7 +5914,7 @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
 {
 	unsigned long usemapsize;
 
-	zonesize += zone_start_pfn & (pageblock_nr_pages-1);
+	zonesize += zone_start_pfn & (pageblock_nr_pages - 1);
 	usemapsize = roundup(zonesize, pageblock_nr_pages);
 	usemapsize = usemapsize >> pageblock_order;
 	usemapsize *= NR_PAGEBLOCK_BITS;
@@ -7224,7 +7224,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 
 		/* It isn't necessary when PAGE_SIZE >= 1MB */
 		if (PAGE_SHIFT < 20)
-			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
+			numentries = round_up(numentries, (1 << 20) / PAGE_SIZE);
 
 		if (flags & HASH_ADAPT) {
 			unsigned long adapt;
@@ -7345,7 +7345,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
-			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
+			iter = round_up(iter + 1, 1 << compound_order(page)) - 1;
 			continue;
 		}
 
@@ -7718,7 +7718,7 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		rmv_page_order(page);
 		zone->free_area[order].nr_free--;
 		for (i = 0; i < (1 << order); i++)
-			SetPageReserved((page+i));
+			SetPageReserved((page + i));
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
-- 
2.10.0.rc2.1.g053435c

* [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  1:59   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  1:59 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

whitespace changes only - git diff -w shows no difference

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 552 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 276 insertions(+), 276 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 504749032400..79fc996892c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -204,33 +204,33 @@ static void __free_pages_ok(struct page *page, unsigned int order);
  */
 int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
 #ifdef CONFIG_ZONE_DMA
-	 256,
+	256,
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	 256,
+	256,
 #endif
 #ifdef CONFIG_HIGHMEM
-	 32,
+	32,
 #endif
-	 32,
+	32,
 };
 
 EXPORT_SYMBOL(totalram_pages);
 
 static char * const zone_names[MAX_NR_ZONES] = {
 #ifdef CONFIG_ZONE_DMA
-	 "DMA",
+	"DMA",
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	 "DMA32",
+	"DMA32",
 #endif
-	 "Normal",
+	"Normal",
 #ifdef CONFIG_HIGHMEM
-	 "HighMem",
+	"HighMem",
 #endif
-	 "Movable",
+	"Movable",
 #ifdef CONFIG_ZONE_DEVICE
-	 "Device",
+	"Device",
 #endif
 };
 
@@ -310,8 +310,8 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
  * later in the boot cycle when it can be parallelised.
  */
 static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+				     unsigned long pfn, unsigned long zone_end,
+				     unsigned long *nr_initialised)
 {
 	unsigned long max_initialise;
 
@@ -323,7 +323,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
 	 * two large system hashes that can take up 1GB for 0.25TB/node.
 	 */
 	max_initialise = max(2UL << (30 - PAGE_SHIFT),
-		(pgdat->node_spanned_pages >> 8));
+			     (pgdat->node_spanned_pages >> 8));
 
 	(*nr_initialised)++;
 	if ((*nr_initialised > max_initialise) &&
@@ -345,8 +345,8 @@ static inline bool early_page_uninitialised(unsigned long pfn)
 }
 
 static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+				     unsigned long pfn, unsigned long zone_end,
+				     unsigned long *nr_initialised)
 {
 	return true;
 }
@@ -354,7 +354,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
 static inline unsigned long *get_pageblock_bitmap(struct page *page,
-							unsigned long pfn)
+						  unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
 	return __pfn_to_section(pfn)->pageblock_flags;
@@ -384,9 +384,9 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  * Return: pageblock_bits flags
  */
 static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
-					unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+							       unsigned long pfn,
+							       unsigned long end_bitidx,
+							       unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -403,8 +403,8 @@ static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
 }
 
 unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+				      unsigned long end_bitidx,
+				      unsigned long mask)
 {
 	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
 }
@@ -423,9 +423,9 @@ static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned
  * @mask: mask of bits that the caller is interested in
  */
 void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
-					unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+			     unsigned long pfn,
+			     unsigned long end_bitidx,
+			     unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -460,7 +460,7 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 		migratetype = MIGRATE_UNMOVABLE;
 
 	set_pageblock_flags_group(page, (unsigned long)migratetype,
-					PB_migrate, PB_migrate_end);
+				  PB_migrate, PB_migrate_end);
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -481,8 +481,8 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 
 	if (ret)
 		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
-			pfn, zone_to_nid(zone), zone->name,
-			start_pfn, start_pfn + sp);
+		       pfn, zone_to_nid(zone), zone->name,
+		       start_pfn, start_pfn + sp);
 
 	return ret;
 }
@@ -516,7 +516,7 @@ static inline int bad_range(struct zone *zone, struct page *page)
 #endif
 
 static void bad_page(struct page *page, const char *reason,
-		unsigned long bad_flags)
+		     unsigned long bad_flags)
 {
 	static unsigned long resume;
 	static unsigned long nr_shown;
@@ -533,7 +533,7 @@ static void bad_page(struct page *page, const char *reason,
 		}
 		if (nr_unshown) {
 			pr_alert(
-			      "BUG: Bad page state: %lu messages suppressed\n",
+				"BUG: Bad page state: %lu messages suppressed\n",
 				nr_unshown);
 			nr_unshown = 0;
 		}
@@ -543,12 +543,12 @@ static void bad_page(struct page *page, const char *reason,
 		resume = jiffies + 60 * HZ;
 
 	pr_alert("BUG: Bad page state in process %s  pfn:%05lx\n",
-		current->comm, page_to_pfn(page));
+		 current->comm, page_to_pfn(page));
 	__dump_page(page, reason);
 	bad_flags &= page->flags;
 	if (bad_flags)
 		pr_alert("bad because of flags: %#lx(%pGp)\n",
-						bad_flags, &bad_flags);
+			 bad_flags, &bad_flags);
 	dump_page_owner(page);
 
 	print_modules();
@@ -599,7 +599,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 unsigned int _debug_guardpage_minorder;
 bool _debug_pagealloc_enabled __read_mostly
-			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
 EXPORT_SYMBOL(_debug_pagealloc_enabled);
 bool _debug_guardpage_enabled __read_mostly;
 
@@ -654,7 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
+				  unsigned int order, int migratetype)
 {
 	struct page_ext *page_ext;
 
@@ -679,7 +679,7 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
 }
 
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
+				    unsigned int order, int migratetype)
 {
 	struct page_ext *page_ext;
 
@@ -699,9 +699,9 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 #else
 struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
+				  unsigned int order, int migratetype) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype) {}
+				    unsigned int order, int migratetype) {}
 #endif
 
 static inline void set_page_order(struct page *page, unsigned int order)
@@ -732,7 +732,7 @@ static inline void rmv_page_order(struct page *page)
  * For recording page's order, we use page_private(page).
  */
 static inline int page_is_buddy(struct page *page, struct page *buddy,
-							unsigned int order)
+				unsigned int order)
 {
 	if (page_is_guard(buddy) && page_order(buddy) == order) {
 		if (page_zone_id(page) != page_zone_id(buddy))
@@ -785,9 +785,9 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
  */
 
 static inline void __free_one_page(struct page *page,
-		unsigned long pfn,
-		struct zone *zone, unsigned int order,
-		int migratetype)
+				   unsigned long pfn,
+				   struct zone *zone, unsigned int order,
+				   int migratetype)
 {
 	unsigned long combined_pfn;
 	unsigned long uninitialized_var(buddy_pfn);
@@ -848,8 +848,8 @@ static inline void __free_one_page(struct page *page,
 			buddy_mt = get_pageblock_migratetype(buddy);
 
 			if (migratetype != buddy_mt
-					&& (is_migrate_isolate(migratetype) ||
-						is_migrate_isolate(buddy_mt)))
+			    && (is_migrate_isolate(migratetype) ||
+				is_migrate_isolate(buddy_mt)))
 				goto done_merging;
 		}
 		max_order++;
@@ -876,7 +876,7 @@ static inline void __free_one_page(struct page *page,
 		if (pfn_valid_within(buddy_pfn) &&
 		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
 			list_add_tail(&page->lru,
-				&zone->free_area[order].free_list[migratetype]);
+				      &zone->free_area[order].free_list[migratetype]);
 			goto out;
 		}
 	}
@@ -892,17 +892,17 @@ static inline void __free_one_page(struct page *page,
  * check if necessary.
  */
 static inline bool page_expected_state(struct page *page,
-					unsigned long check_flags)
+				       unsigned long check_flags)
 {
 	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		return false;
 
 	if (unlikely((unsigned long)page->mapping |
-			page_ref_count(page) |
+		     page_ref_count(page) |
 #ifdef CONFIG_MEMCG
-			(unsigned long)page->mem_cgroup |
+		     (unsigned long)page->mem_cgroup |
 #endif
-			(page->flags & check_flags)))
+		     (page->flags & check_flags)))
 		return false;
 
 	return true;
@@ -994,7 +994,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 }
 
 static __always_inline bool free_pages_prepare(struct page *page,
-					unsigned int order, bool check_free)
+					       unsigned int order, bool check_free)
 {
 	int bad = 0;
 
@@ -1042,7 +1042,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		debug_check_no_locks_freed(page_address(page),
 					   PAGE_SIZE << order);
 		debug_check_no_obj_freed(page_address(page),
-					   PAGE_SIZE << order);
+					 PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
 	kernel_poison_pages(page, 1 << order, 0);
@@ -1086,7 +1086,7 @@ static bool bulkfree_pcp_prepare(struct page *page)
  * pinned" detection logic.
  */
 static void free_pcppages_bulk(struct zone *zone, int count,
-					struct per_cpu_pages *pcp)
+			       struct per_cpu_pages *pcp)
 {
 	int migratetype = 0;
 	int batch_free = 0;
@@ -1142,16 +1142,16 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 }
 
 static void free_one_page(struct zone *zone,
-				struct page *page, unsigned long pfn,
-				unsigned int order,
-				int migratetype)
+			  struct page *page, unsigned long pfn,
+			  unsigned int order,
+			  int migratetype)
 {
 	unsigned long flags;
 
 	spin_lock_irqsave(&zone->lock, flags);
 	__count_vm_events(PGFREE, 1 << order);
 	if (unlikely(has_isolate_pageblock(zone) ||
-		is_migrate_isolate(migratetype))) {
+		     is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
 	__free_one_page(page, pfn, zone, order, migratetype);
@@ -1159,7 +1159,7 @@ static void free_one_page(struct zone *zone,
 }
 
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+					 unsigned long zone, int nid)
 {
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
@@ -1263,7 +1263,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
 	__free_pages(page, order);
 }
 
-#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \
+#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
 	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
 
 static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
@@ -1285,7 +1285,7 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 
 #ifdef CONFIG_NODES_SPAN_OTHER_NODES
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-					struct mminit_pfnnid_cache *state)
+						struct mminit_pfnnid_cache *state)
 {
 	int nid;
 
@@ -1308,7 +1308,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 	return true;
 }
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-					struct mminit_pfnnid_cache *state)
+						struct mminit_pfnnid_cache *state)
 {
 	return true;
 }
@@ -1316,7 +1316,7 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 
 
 void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
-							unsigned int order)
+				 unsigned int order)
 {
 	if (early_page_uninitialised(pfn))
 		return;
@@ -1373,8 +1373,8 @@ void set_zone_contiguous(struct zone *zone)
 
 	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
 	for (; block_start_pfn < zone_end_pfn(zone);
-			block_start_pfn = block_end_pfn,
-			 block_end_pfn += pageblock_nr_pages) {
+	     block_start_pfn = block_end_pfn,
+		     block_end_pfn += pageblock_nr_pages) {
 
 		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
 
@@ -1394,7 +1394,7 @@ void clear_zone_contiguous(struct zone *zone)
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static void __init deferred_free_range(struct page *page,
-					unsigned long pfn, int nr_pages)
+				       unsigned long pfn, int nr_pages)
 {
 	int i;
 
@@ -1501,7 +1501,7 @@ static int __init deferred_init_memmap(void *data)
 			} else {
 				nr_pages += nr_to_free;
 				deferred_free_range(free_base_page,
-						free_base_pfn, nr_to_free);
+						    free_base_pfn, nr_to_free);
 				free_base_page = NULL;
 				free_base_pfn = nr_to_free = 0;
 
@@ -1524,11 +1524,11 @@ static int __init deferred_init_memmap(void *data)
 
 			/* Where possible, batch up pages for a single free */
 			continue;
-free_range:
+		free_range:
 			/* Free the current block of pages to allocator */
 			nr_pages += nr_to_free;
 			deferred_free_range(free_base_page, free_base_pfn,
-								nr_to_free);
+					    nr_to_free);
 			free_base_page = NULL;
 			free_base_pfn = nr_to_free = 0;
 		}
@@ -1543,7 +1543,7 @@ static int __init deferred_init_memmap(void *data)
 	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
 
 	pr_info("node %d initialised, %lu pages in %ums\n", nid, nr_pages,
-					jiffies_to_msecs(jiffies - start));
+		jiffies_to_msecs(jiffies - start));
 
 	pgdat_init_report_one_done();
 	return 0;
@@ -1620,8 +1620,8 @@ void __init init_cma_reserved_pageblock(struct page *page)
  * -- nyc
  */
 static inline void expand(struct zone *zone, struct page *page,
-	int low, int high, struct free_area *area,
-	int migratetype)
+			  int low, int high, struct free_area *area,
+			  int migratetype)
 {
 	unsigned long size = 1 << high;
 
@@ -1681,7 +1681,7 @@ static void check_new_page_bad(struct page *page)
 static inline int check_new_page(struct page *page)
 {
 	if (likely(page_expected_state(page,
-				PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
+				       PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
 		return 0;
 
 	check_new_page_bad(page);
@@ -1729,7 +1729,7 @@ static bool check_new_pages(struct page *page, unsigned int order)
 }
 
 inline void post_alloc_hook(struct page *page, unsigned int order,
-				gfp_t gfp_flags)
+			    gfp_t gfp_flags)
 {
 	set_page_private(page, 0);
 	set_page_refcounted(page);
@@ -1742,7 +1742,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-							unsigned int alloc_flags)
+			  unsigned int alloc_flags)
 {
 	int i;
 	bool poisoned = true;
@@ -1780,7 +1780,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
  */
 static inline
 struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
-						int migratetype)
+				int migratetype)
 {
 	unsigned int current_order;
 	struct free_area *area;
@@ -1790,7 +1790,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
 		area = &(zone->free_area[current_order]);
 		page = list_first_entry_or_null(&area->free_list[migratetype],
-							struct page, lru);
+						struct page, lru);
 		if (!page)
 			continue;
 		list_del(&page->lru);
@@ -1823,13 +1823,13 @@ static int fallbacks[MIGRATE_TYPES][4] = {
 
 #ifdef CONFIG_CMA
 static struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order)
+					   unsigned int order)
 {
 	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
 }
 #else
 static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order) { return NULL; }
+						  unsigned int order) { return NULL; }
 #endif
 
 /*
@@ -1875,7 +1875,7 @@ static int move_freepages(struct zone *zone,
 			 * isolating, as that would be expensive.
 			 */
 			if (num_movable &&
-					(PageLRU(page) || __PageMovable(page)))
+			    (PageLRU(page) || __PageMovable(page)))
 				(*num_movable)++;
 
 			page++;
@@ -1893,7 +1893,7 @@ static int move_freepages(struct zone *zone,
 }
 
 int move_freepages_block(struct zone *zone, struct page *page,
-				int migratetype, int *num_movable)
+			 int migratetype, int *num_movable)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *start_page, *end_page;
@@ -1911,11 +1911,11 @@ int move_freepages_block(struct zone *zone, struct page *page,
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype,
-								num_movable);
+			      num_movable);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
-					int start_order, int migratetype)
+				   int start_order, int migratetype)
 {
 	int nr_pageblocks = 1 << (start_order - pageblock_order);
 
@@ -1950,9 +1950,9 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 		return true;
 
 	if (order >= pageblock_order / 2 ||
-		start_mt == MIGRATE_RECLAIMABLE ||
-		start_mt == MIGRATE_UNMOVABLE ||
-		page_group_by_mobility_disabled)
+	    start_mt == MIGRATE_RECLAIMABLE ||
+	    start_mt == MIGRATE_UNMOVABLE ||
+	    page_group_by_mobility_disabled)
 		return true;
 
 	return false;
@@ -1967,7 +1967,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
  * itself, so pages freed in the future will be put on the correct free list.
  */
 static void steal_suitable_fallback(struct zone *zone, struct page *page,
-					int start_type, bool whole_block)
+				    int start_type, bool whole_block)
 {
 	unsigned int current_order = page_order(page);
 	struct free_area *area;
@@ -1994,7 +1994,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		goto single_page;
 
 	free_pages = move_freepages_block(zone, page, start_type,
-						&movable_pages);
+					  &movable_pages);
 	/*
 	 * Determine how many pages are compatible with our allocation.
 	 * For movable allocation, it's the number of movable pages which
@@ -2012,7 +2012,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		 */
 		if (old_block_type == MIGRATE_MOVABLE)
 			alike_pages = pageblock_nr_pages
-						- (free_pages + movable_pages);
+				- (free_pages + movable_pages);
 		else
 			alike_pages = 0;
 	}
@@ -2022,7 +2022,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * comparable migratability as our allocation, claim the whole block.
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order - 1)) ||
-			page_group_by_mobility_disabled)
+	    page_group_by_mobility_disabled)
 		set_pageblock_migratetype(page, start_type);
 
 	return;
@@ -2039,7 +2039,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal)
+			   int migratetype, bool only_stealable, bool *can_steal)
 {
 	int i;
 	int fallback_mt;
@@ -2074,7 +2074,7 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
  * there are no empty page blocks that contain a page with a suitable order
  */
 static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
-				unsigned int alloc_order)
+					 unsigned int alloc_order)
 {
 	int mt;
 	unsigned long max_managed, flags;
@@ -2116,7 +2116,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * pageblock is exhausted.
  */
 static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
-						bool force)
+					   bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2127,13 +2127,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	bool ret;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
-								ac->nodemask) {
+					ac->nodemask) {
 		/*
 		 * Preserve at least one pageblock unless memory pressure
 		 * is really high.
 		 */
 		if (!force && zone->nr_reserved_highatomic <=
-					pageblock_nr_pages)
+		    pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2141,8 +2141,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			struct free_area *area = &(zone->free_area[order]);
 
 			page = list_first_entry_or_null(
-					&area->free_list[MIGRATE_HIGHATOMIC],
-					struct page, lru);
+				&area->free_list[MIGRATE_HIGHATOMIC],
+				struct page, lru);
 			if (!page)
 				continue;
 
@@ -2162,8 +2162,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 				 * underflows.
 				 */
 				zone->nr_reserved_highatomic -= min(
-						pageblock_nr_pages,
-						zone->nr_reserved_highatomic);
+					pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
 			}
 
 			/*
@@ -2177,7 +2177,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype,
-									NULL);
+						   NULL);
 			if (ret) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
@@ -2206,22 +2206,22 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 
 	/* Find the largest possible block of pages in the other list */
 	for (current_order = MAX_ORDER - 1;
-				current_order >= order && current_order <= MAX_ORDER - 1;
-				--current_order) {
+	     current_order >= order && current_order <= MAX_ORDER - 1;
+	     --current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+						     start_migratetype, false, &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
 		page = list_first_entry(&area->free_list[fallback_mt],
-						struct page, lru);
+					struct page, lru);
 
 		steal_suitable_fallback(zone, page, start_migratetype,
-								can_steal);
+					can_steal);
 
 		trace_mm_page_alloc_extfrag(page, order, current_order,
-			start_migratetype, fallback_mt);
+					    start_migratetype, fallback_mt);
 
 		return true;
 	}
@@ -2234,7 +2234,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
  * Call me with the zone->lock already held.
  */
 static struct page *__rmqueue(struct zone *zone, unsigned int order,
-				int migratetype)
+			      int migratetype)
 {
 	struct page *page;
 
@@ -2508,7 +2508,7 @@ void mark_free_pages(struct zone *zone)
 
 	for_each_migratetype_order(order, t) {
 		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], lru) {
+				    &zone->free_area[order].free_list[t], lru) {
 			unsigned long i;
 
 			pfn = page_to_pfn(page);
@@ -2692,8 +2692,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 
 /* Remove page from the per-cpu list, caller must protect the list */
 static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
-			bool cold, struct per_cpu_pages *pcp,
-			struct list_head *list)
+				      bool cold, struct per_cpu_pages *pcp,
+				      struct list_head *list)
 {
 	struct page *page;
 
@@ -2702,8 +2702,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 	do {
 		if (list_empty(list)) {
 			pcp->count += rmqueue_bulk(zone, 0,
-					pcp->batch, list,
-					migratetype, cold);
+						   pcp->batch, list,
+						   migratetype, cold);
 			if (unlikely(list_empty(list)))
 				return NULL;
 		}
@@ -2722,8 +2722,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 
 /* Lock and remove page from the per-cpu list */
 static struct page *rmqueue_pcplist(struct zone *preferred_zone,
-			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, int migratetype)
+				    struct zone *zone, unsigned int order,
+				    gfp_t gfp_flags, int migratetype)
 {
 	struct per_cpu_pages *pcp;
 	struct list_head *list;
@@ -2747,16 +2747,16 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
  */
 static inline
 struct page *rmqueue(struct zone *preferred_zone,
-			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, unsigned int alloc_flags,
-			int migratetype)
+		     struct zone *zone, unsigned int order,
+		     gfp_t gfp_flags, unsigned int alloc_flags,
+		     int migratetype)
 {
 	unsigned long flags;
 	struct page *page;
 
 	if (likely(order == 0) && !in_interrupt()) {
 		page = rmqueue_pcplist(preferred_zone, zone, order,
-				gfp_flags, migratetype);
+				       gfp_flags, migratetype);
 		goto out;
 	}
 
@@ -2826,7 +2826,7 @@ static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
 		return false;
 	if (fail_page_alloc.ignore_gfp_reclaim &&
-			(gfp_mask & __GFP_DIRECT_RECLAIM))
+	    (gfp_mask & __GFP_DIRECT_RECLAIM))
 		return false;
 
 	return should_fail(&fail_page_alloc.attr, 1 << order);
@@ -2845,10 +2845,10 @@ static int __init fail_page_alloc_debugfs(void)
 		return PTR_ERR(dir);
 
 	if (!debugfs_create_bool("ignore-gfp-wait", mode, dir,
-				&fail_page_alloc.ignore_gfp_reclaim))
+				 &fail_page_alloc.ignore_gfp_reclaim))
 		goto fail;
 	if (!debugfs_create_bool("ignore-gfp-highmem", mode, dir,
-				&fail_page_alloc.ignore_gfp_highmem))
+				 &fail_page_alloc.ignore_gfp_highmem))
 		goto fail;
 	if (!debugfs_create_u32("min-order", mode, dir,
 				&fail_page_alloc.min_order))
@@ -2949,14 +2949,14 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
-		      int classzone_idx, unsigned int alloc_flags)
+		       int classzone_idx, unsigned int alloc_flags)
 {
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					zone_page_state(z, NR_FREE_PAGES));
+				   zone_page_state(z, NR_FREE_PAGES));
 }
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
-		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
+				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -2978,11 +2978,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 		return true;
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					free_pages);
+				   free_pages);
 }
 
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
-			unsigned long mark, int classzone_idx)
+			    unsigned long mark, int classzone_idx)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 
@@ -2990,14 +2990,14 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
 		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
-								free_pages);
+				   free_pages);
 }
 
 #ifdef CONFIG_NUMA
 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
 {
 	return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
-				RECLAIM_DISTANCE;
+		RECLAIM_DISTANCE;
 }
 #else	/* CONFIG_NUMA */
 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
@@ -3012,7 +3012,7 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
  */
 static struct page *
 get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
-						const struct alloc_context *ac)
+		       const struct alloc_context *ac)
 {
 	struct zoneref *z = ac->preferred_zoneref;
 	struct zone *zone;
@@ -3023,14 +3023,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
 	 */
 	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
-								ac->nodemask) {
+					ac->nodemask) {
 		struct page *page;
 		unsigned long mark;
 
 		if (cpusets_enabled() &&
-			(alloc_flags & ALLOC_CPUSET) &&
-			!__cpuset_zone_allowed(zone, gfp_mask))
-				continue;
+		    (alloc_flags & ALLOC_CPUSET) &&
+		    !__cpuset_zone_allowed(zone, gfp_mask))
+			continue;
 		/*
 		 * When allocating a page cache page for writing, we
 		 * want to get it from a node that is within its dirty
@@ -3062,7 +3062,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 
 		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
 		if (!zone_watermark_fast(zone, order, mark,
-				       ac_classzone_idx(ac), alloc_flags)) {
+					 ac_classzone_idx(ac), alloc_flags)) {
 			int ret;
 
 			/* Checked here to keep the fast path fast */
@@ -3085,16 +3085,16 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			default:
 				/* did we reclaim enough */
 				if (zone_watermark_ok(zone, order, mark,
-						ac_classzone_idx(ac), alloc_flags))
+						      ac_classzone_idx(ac), alloc_flags))
 					goto try_this_zone;
 
 				continue;
 			}
 		}
 
-try_this_zone:
+	try_this_zone:
 		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
-				gfp_mask, alloc_flags, ac->migratetype);
+			       gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
 			prep_new_page(page, order, gfp_mask, alloc_flags);
 
@@ -3188,21 +3188,21 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 	struct page *page;
 
 	page = get_page_from_freelist(gfp_mask, order,
-			alloc_flags | ALLOC_CPUSET, ac);
+				      alloc_flags | ALLOC_CPUSET, ac);
 	/*
 	 * fallback to ignore cpuset restriction if our nodes
 	 * are depleted
 	 */
 	if (!page)
 		page = get_page_from_freelist(gfp_mask, order,
-				alloc_flags, ac);
+					      alloc_flags, ac);
 
 	return page;
 }
 
 static inline struct page *
 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
-	const struct alloc_context *ac, unsigned long *did_some_progress)
+		      const struct alloc_context *ac, unsigned long *did_some_progress)
 {
 	struct oom_control oc = {
 		.zonelist = ac->zonelist,
@@ -3231,7 +3231,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 * we're still under heavy pressure.
 	 */
 	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
-					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
+				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
 
@@ -3270,7 +3270,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (gfp_mask & __GFP_NOFAIL)
 			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
-					ALLOC_NO_WATERMARKS, ac);
+							     ALLOC_NO_WATERMARKS, ac);
 	}
 out:
 	mutex_unlock(&oom_lock);
@@ -3287,8 +3287,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 /* Try memory compaction for high-order allocations before reclaim */
 static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     enum compact_priority prio, enum compact_result *compact_result)
 {
 	struct page *page;
 
@@ -3297,7 +3297,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
 	current->flags |= PF_MEMALLOC;
 	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
-									prio);
+					       prio);
 	current->flags &= ~PF_MEMALLOC;
 
 	if (*compact_result <= COMPACT_INACTIVE)
@@ -3389,7 +3389,7 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 	 */
 check_priority:
 	min_priority = (order > PAGE_ALLOC_COSTLY_ORDER) ?
-			MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
+		MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
 
 	if (*compact_priority > min_priority) {
 		(*compact_priority)--;
@@ -3403,8 +3403,8 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 #else
 static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     enum compact_priority prio, enum compact_result *compact_result)
 {
 	*compact_result = COMPACT_SKIPPED;
 	return NULL;
@@ -3431,7 +3431,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
 					ac->nodemask) {
 		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
-					ac_classzone_idx(ac), alloc_flags))
+				      ac_classzone_idx(ac), alloc_flags))
 			return true;
 	}
 	return false;
@@ -3441,7 +3441,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
 /* Perform direct synchronous page reclaim */
 static int
 __perform_reclaim(gfp_t gfp_mask, unsigned int order,
-					const struct alloc_context *ac)
+		  const struct alloc_context *ac)
 {
 	struct reclaim_state reclaim_state;
 	int progress;
@@ -3456,7 +3456,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 	current->reclaim_state = &reclaim_state;
 
 	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
-								ac->nodemask);
+				     ac->nodemask);
 
 	current->reclaim_state = NULL;
 	lockdep_clear_current_reclaim_state();
@@ -3470,8 +3470,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 /* The really slow allocator path where we enter direct reclaim */
 static inline struct page *
 __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		unsigned long *did_some_progress)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     unsigned long *did_some_progress)
 {
 	struct page *page = NULL;
 	bool drained = false;
@@ -3560,8 +3560,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 	if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
 		return true;
 	if (!in_interrupt() &&
-			((current->flags & PF_MEMALLOC) ||
-			 unlikely(test_thread_flag(TIF_MEMDIE))))
+	    ((current->flags & PF_MEMALLOC) ||
+	     unlikely(test_thread_flag(TIF_MEMDIE))))
 		return true;
 
 	return false;
@@ -3625,9 +3625,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 		 * reclaimable pages?
 		 */
 		wmark = __zone_watermark_ok(zone, order, min_wmark,
-				ac_classzone_idx(ac), alloc_flags, available);
+					    ac_classzone_idx(ac), alloc_flags, available);
 		trace_reclaim_retry_zone(z, order, reclaimable,
-				available, min_wmark, *no_progress_loops, wmark);
+					 available, min_wmark, *no_progress_loops, wmark);
 		if (wmark) {
 			/*
 			 * If we didn't make any progress and have a lot of
@@ -3639,7 +3639,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 				unsigned long write_pending;
 
 				write_pending = zone_page_state_snapshot(zone,
-							NR_ZONE_WRITE_PENDING);
+									 NR_ZONE_WRITE_PENDING);
 
 				if (2 * write_pending > reclaimable) {
 					congestion_wait(BLK_RW_ASYNC, HZ / 10);
@@ -3670,7 +3670,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
-						struct alloc_context *ac)
+		       struct alloc_context *ac)
 {
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
@@ -3701,7 +3701,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * callers that are not in atomic context.
 	 */
 	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)) ==
-				(__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
+			 (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
 		gfp_mask &= ~__GFP_ATOMIC;
 
 retry_cpuset:
@@ -3724,7 +3724,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * could end up iterating over non-eligible zones endlessly.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx, ac->nodemask);
 	if (!ac->preferred_zoneref->zone)
 		goto nopage;
 
@@ -3749,13 +3749,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
 	 */
 	if (can_direct_reclaim &&
-			(costly_order ||
-			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
-			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
+	    (costly_order ||
+	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
+	    && !gfp_pfmemalloc_allowed(gfp_mask)) {
 		page = __alloc_pages_direct_compact(gfp_mask, order,
-						alloc_flags, ac,
-						INIT_COMPACT_PRIORITY,
-						&compact_result);
+						    alloc_flags, ac,
+						    INIT_COMPACT_PRIORITY,
+						    &compact_result);
 		if (page)
 			goto got_pg;
 
@@ -3800,7 +3800,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
 		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+							     ac->high_zoneidx, ac->nodemask);
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
@@ -3815,8 +3815,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	/* Make sure we know about allocations which stall for too long */
 	if (time_after(jiffies, alloc_start + stall_timeout)) {
 		warn_alloc(gfp_mask & ~__GFP_NOWARN, ac->nodemask,
-			"page allocation stalls for %ums, order:%u",
-			jiffies_to_msecs(jiffies - alloc_start), order);
+			   "page allocation stalls for %ums, order:%u",
+			   jiffies_to_msecs(jiffies - alloc_start), order);
 		stall_timeout += 10 * HZ;
 	}
 
@@ -3826,13 +3826,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
-							&did_some_progress);
+					    &did_some_progress);
 	if (page)
 		goto got_pg;
 
 	/* Try direct compaction and then allocating */
 	page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
-					compact_priority, &compact_result);
+					    compact_priority, &compact_result);
 	if (page)
 		goto got_pg;
 
@@ -3858,9 +3858,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * of free memory (see __compaction_suitable)
 	 */
 	if (did_some_progress > 0 &&
-			should_compact_retry(ac, order, alloc_flags,
-				compact_result, &compact_priority,
-				&compaction_retries))
+	    should_compact_retry(ac, order, alloc_flags,
+				 compact_result, &compact_priority,
+				 &compaction_retries))
 		goto retry;
 
 	/*
@@ -3938,15 +3938,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 fail:
 	warn_alloc(gfp_mask, ac->nodemask,
-			"page allocation failure: order:%u", order);
+		   "page allocation failure: order:%u", order);
 got_pg:
 	return page;
 }
 
 static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
-		struct zonelist *zonelist, nodemask_t *nodemask,
-		struct alloc_context *ac, gfp_t *alloc_mask,
-		unsigned int *alloc_flags)
+				       struct zonelist *zonelist, nodemask_t *nodemask,
+				       struct alloc_context *ac, gfp_t *alloc_mask,
+				       unsigned int *alloc_flags)
 {
 	ac->high_zoneidx = gfp_zone(gfp_mask);
 	ac->zonelist = zonelist;
@@ -3976,7 +3976,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 /* Determine whether to spread dirty pages and what the first usable zone */
 static inline void finalise_ac(gfp_t gfp_mask,
-		unsigned int order, struct alloc_context *ac)
+			       unsigned int order, struct alloc_context *ac)
 {
 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -3987,7 +3987,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
 	 * may get reset for allocations that ignore memory policies.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx, ac->nodemask);
 }
 
 /*
@@ -3995,7 +3995,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
  */
 struct page *
 __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
-			struct zonelist *zonelist, nodemask_t *nodemask)
+		       struct zonelist *zonelist, nodemask_t *nodemask)
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -4114,7 +4114,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
+		__GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
 	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
@@ -4150,7 +4150,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 	int offset;
 
 	if (unlikely(!nc->va)) {
-refill:
+	refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
@@ -4209,7 +4209,7 @@ void page_frag_free(void *addr)
 EXPORT_SYMBOL(page_frag_free);
 
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
-		size_t size)
+			      size_t size)
 {
 	if (addr) {
 		unsigned long alloc_end = addr + (PAGE_SIZE << order);
@@ -4378,7 +4378,7 @@ long si_mem_available(void)
 	 * and cannot be freed. Cap this estimate at the low watermark.
 	 */
 	available += global_page_state(NR_SLAB_RECLAIMABLE) -
-		     min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
+		min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
 
 	if (available < 0)
 		available = 0;
@@ -4714,7 +4714,7 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 		zone = pgdat->node_zones + zone_type;
 		if (managed_zone(zone)) {
 			zoneref_set_zone(zone,
-				&zonelist->_zonerefs[nr_zones++]);
+					 &zonelist->_zonerefs[nr_zones++]);
 			check_highest_zone(zone_type);
 		}
 	} while (zone_type);
@@ -4792,8 +4792,8 @@ early_param("numa_zonelist_order", setup_numa_zonelist_order);
  * sysctl handler for numa_zonelist_order
  */
 int numa_zonelist_order_handler(struct ctl_table *table, int write,
-		void __user *buffer, size_t *length,
-		loff_t *ppos)
+				void __user *buffer, size_t *length,
+				loff_t *ppos)
 {
 	char saved_string[NUMA_ZONELIST_ORDER_LEN];
 	int ret;
@@ -4952,7 +4952,7 @@ static void build_zonelists_in_zone_order(pg_data_t *pgdat, int nr_nodes)
 			z = &NODE_DATA(node)->node_zones[zone_type];
 			if (managed_zone(z)) {
 				zoneref_set_zone(z,
-					&zonelist->_zonerefs[pos++]);
+						 &zonelist->_zonerefs[pos++]);
 				check_highest_zone(zone_type);
 			}
 		}
@@ -5056,8 +5056,8 @@ int local_memory_node(int node)
 	struct zoneref *z;
 
 	z = first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
-				   gfp_zone(GFP_KERNEL),
-				   NULL);
+				 gfp_zone(GFP_KERNEL),
+				 NULL);
 	return z->zone->node;
 }
 #endif
@@ -5248,7 +5248,7 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
  * done. Non-atomic initialization, single-pass.
  */
 void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
-		unsigned long start_pfn, enum memmap_context context)
+				unsigned long start_pfn, enum memmap_context context)
 {
 	struct vmem_altmap *altmap = to_vmem_altmap(__pfn_to_phys(start_pfn));
 	unsigned long end_pfn = start_pfn + size;
@@ -5315,7 +5315,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		}
 #endif
 
-not_early:
+	not_early:
 		/*
 		 * Mark the block movable so that blocks are reserved for
 		 * movable at startup. This will force kernel allocations
@@ -5349,7 +5349,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 }
 
 #ifndef __HAVE_ARCH_MEMMAP_INIT
-#define memmap_init(size, nid, zone, start_pfn) \
+#define memmap_init(size, nid, zone, start_pfn)				\
 	memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY)
 #endif
 
@@ -5417,13 +5417,13 @@ static int zone_batchsize(struct zone *zone)
  * exist).
  */
 static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
-		unsigned long batch)
+			   unsigned long batch)
 {
-       /* start with a fail safe value for batch */
+	/* start with a fail safe value for batch */
 	pcp->batch = 1;
 	smp_wmb();
 
-       /* Update high, then batch, in order */
+	/* Update high, then batch, in order */
 	pcp->high = high;
 	smp_wmb();
 
@@ -5460,7 +5460,7 @@ static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
  * to the value high for the pageset p.
  */
 static void pageset_set_high(struct per_cpu_pageset *p,
-				unsigned long high)
+			     unsigned long high)
 {
 	unsigned long batch = max(1UL, high / 4);
 	if ((high / 4) > (PAGE_SHIFT * 8))
@@ -5474,8 +5474,8 @@ static void pageset_set_high_and_batch(struct zone *zone,
 {
 	if (percpu_pagelist_fraction)
 		pageset_set_high(pcp,
-			(zone->managed_pages /
-				percpu_pagelist_fraction));
+				 (zone->managed_pages /
+				  percpu_pagelist_fraction));
 	else
 		pageset_set_batch(pcp, zone_batchsize(zone));
 }
@@ -5510,7 +5510,7 @@ void __init setup_per_cpu_pageset(void)
 
 	for_each_online_pgdat(pgdat)
 		pgdat->per_cpu_nodestats =
-			alloc_percpu(struct per_cpu_nodestat);
+		alloc_percpu(struct per_cpu_nodestat);
 }
 
 static __meminit void zone_pcp_init(struct zone *zone)
@@ -5538,10 +5538,10 @@ int __meminit init_currently_empty_zone(struct zone *zone,
 	zone->zone_start_pfn = zone_start_pfn;
 
 	mminit_dprintk(MMINIT_TRACE, "memmap_init",
-			"Initialising map node %d zone %lu pfns %lu -> %lu\n",
-			pgdat->node_id,
-			(unsigned long)zone_idx(zone),
-			zone_start_pfn, (zone_start_pfn + size));
+		       "Initialising map node %d zone %lu pfns %lu -> %lu\n",
+		       pgdat->node_id,
+		       (unsigned long)zone_idx(zone),
+		       zone_start_pfn, (zone_start_pfn + size));
 
 	zone_init_free_lists(zone);
 	zone->initialized = 1;
@@ -5556,7 +5556,7 @@ int __meminit init_currently_empty_zone(struct zone *zone,
  * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
  */
 int __meminit __early_pfn_to_nid(unsigned long pfn,
-					struct mminit_pfnnid_cache *state)
+				 struct mminit_pfnnid_cache *state)
 {
 	unsigned long start_pfn, end_pfn;
 	int nid;
@@ -5595,8 +5595,8 @@ void __init free_bootmem_with_active_regions(int nid, unsigned long max_low_pfn)
 
 		if (start_pfn < end_pfn)
 			memblock_free_early_nid(PFN_PHYS(start_pfn),
-					(end_pfn - start_pfn) << PAGE_SHIFT,
-					this_nid);
+						(end_pfn - start_pfn) << PAGE_SHIFT,
+						this_nid);
 	}
 }
 
@@ -5628,7 +5628,7 @@ void __init sparse_memory_present_with_active_regions(int nid)
  * PFNs will be 0.
  */
 void __meminit get_pfn_range_for_nid(unsigned int nid,
-			unsigned long *start_pfn, unsigned long *end_pfn)
+				     unsigned long *start_pfn, unsigned long *end_pfn)
 {
 	unsigned long this_start_pfn, this_end_pfn;
 	int i;
@@ -5658,7 +5658,7 @@ static void __init find_usable_zone_for_movable(void)
 			continue;
 
 		if (arch_zone_highest_possible_pfn[zone_index] >
-				arch_zone_lowest_possible_pfn[zone_index])
+		    arch_zone_lowest_possible_pfn[zone_index])
 			break;
 	}
 
@@ -5677,11 +5677,11 @@ static void __init find_usable_zone_for_movable(void)
  * zones within a node are in order of monotonic increases memory addresses
  */
 static void __meminit adjust_zone_range_for_zone_movable(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn)
+							 unsigned long zone_type,
+							 unsigned long node_start_pfn,
+							 unsigned long node_end_pfn,
+							 unsigned long *zone_start_pfn,
+							 unsigned long *zone_end_pfn)
 {
 	/* Only adjust if ZONE_MOVABLE is on this node */
 	if (zone_movable_pfn[nid]) {
@@ -5689,15 +5689,15 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
 		if (zone_type == ZONE_MOVABLE) {
 			*zone_start_pfn = zone_movable_pfn[nid];
 			*zone_end_pfn = min(node_end_pfn,
-				arch_zone_highest_possible_pfn[movable_zone]);
+					    arch_zone_highest_possible_pfn[movable_zone]);
 
-		/* Adjust for ZONE_MOVABLE starting within this range */
+			/* Adjust for ZONE_MOVABLE starting within this range */
 		} else if (!mirrored_kernelcore &&
-			*zone_start_pfn < zone_movable_pfn[nid] &&
-			*zone_end_pfn > zone_movable_pfn[nid]) {
+			   *zone_start_pfn < zone_movable_pfn[nid] &&
+			   *zone_end_pfn > zone_movable_pfn[nid]) {
 			*zone_end_pfn = zone_movable_pfn[nid];
 
-		/* Check if this whole range is within ZONE_MOVABLE */
+			/* Check if this whole range is within ZONE_MOVABLE */
 		} else if (*zone_start_pfn >= zone_movable_pfn[nid])
 			*zone_start_pfn = *zone_end_pfn;
 	}
@@ -5708,12 +5708,12 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
  * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
  */
 static unsigned long __meminit zone_spanned_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn,
-					unsigned long *ignored)
+							  unsigned long zone_type,
+							  unsigned long node_start_pfn,
+							  unsigned long node_end_pfn,
+							  unsigned long *zone_start_pfn,
+							  unsigned long *zone_end_pfn,
+							  unsigned long *ignored)
 {
 	/* When hotadd a new node from cpu_up(), the node should be empty */
 	if (!node_start_pfn && !node_end_pfn)
@@ -5723,8 +5723,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
 	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
 	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
 	adjust_zone_range_for_zone_movable(nid, zone_type,
-				node_start_pfn, node_end_pfn,
-				zone_start_pfn, zone_end_pfn);
+					   node_start_pfn, node_end_pfn,
+					   zone_start_pfn, zone_end_pfn);
 
 	/* Check that this node has pages within the zone's required range */
 	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
@@ -5743,8 +5743,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
  * then all holes in the requested range will be accounted for.
  */
 unsigned long __meminit __absent_pages_in_range(int nid,
-				unsigned long range_start_pfn,
-				unsigned long range_end_pfn)
+						unsigned long range_start_pfn,
+						unsigned long range_end_pfn)
 {
 	unsigned long nr_absent = range_end_pfn - range_start_pfn;
 	unsigned long start_pfn, end_pfn;
@@ -5766,17 +5766,17 @@ unsigned long __meminit __absent_pages_in_range(int nid,
  * It returns the number of pages frames in memory holes within a range.
  */
 unsigned long __init absent_pages_in_range(unsigned long start_pfn,
-							unsigned long end_pfn)
+					   unsigned long end_pfn)
 {
 	return __absent_pages_in_range(MAX_NUMNODES, start_pfn, end_pfn);
 }
 
 /* Return the number of page frames in holes in a zone on a node */
 static unsigned long __meminit zone_absent_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *ignored)
+							 unsigned long zone_type,
+							 unsigned long node_start_pfn,
+							 unsigned long node_end_pfn,
+							 unsigned long *ignored)
 {
 	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
 	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
@@ -5791,8 +5791,8 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 	zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
 
 	adjust_zone_range_for_zone_movable(nid, zone_type,
-			node_start_pfn, node_end_pfn,
-			&zone_start_pfn, &zone_end_pfn);
+					   node_start_pfn, node_end_pfn,
+					   &zone_start_pfn, &zone_end_pfn);
 
 	/* If this node has no page within this zone, return 0. */
 	if (zone_start_pfn == zone_end_pfn)
@@ -5830,12 +5830,12 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 
 #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn,
-					unsigned long *zones_size)
+								 unsigned long zone_type,
+								 unsigned long node_start_pfn,
+								 unsigned long node_end_pfn,
+								 unsigned long *zone_start_pfn,
+								 unsigned long *zone_end_pfn,
+								 unsigned long *zones_size)
 {
 	unsigned int zone;
 
@@ -5849,10 +5849,10 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
 }
 
 static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
-						unsigned long zone_type,
-						unsigned long node_start_pfn,
-						unsigned long node_end_pfn,
-						unsigned long *zholes_size)
+								unsigned long zone_type,
+								unsigned long node_start_pfn,
+								unsigned long node_end_pfn,
+								unsigned long *zholes_size)
 {
 	if (!zholes_size)
 		return 0;
@@ -5883,8 +5883,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
 						  &zone_end_pfn,
 						  zones_size);
 		real_size = size - zone_absent_pages_in_node(pgdat->node_id, i,
-						  node_start_pfn, node_end_pfn,
-						  zholes_size);
+							     node_start_pfn, node_end_pfn,
+							     zholes_size);
 		if (size)
 			zone->zone_start_pfn = zone_start_pfn;
 		else
@@ -6143,7 +6143,7 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
 }
 
 void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
-		unsigned long node_start_pfn, unsigned long *zholes_size)
+				      unsigned long node_start_pfn, unsigned long *zholes_size)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long start_pfn = 0;
@@ -6428,12 +6428,12 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
 				kernel_pages = min(end_pfn, usable_startpfn)
-								- start_pfn;
+					- start_pfn;
 
 				kernelcore_remaining -= min(kernel_pages,
-							kernelcore_remaining);
+							    kernelcore_remaining);
 				required_kernelcore -= min(kernel_pages,
-							required_kernelcore);
+							   required_kernelcore);
 
 				/* Continue if range is now fully accounted */
 				if (end_pfn <= usable_startpfn) {
@@ -6466,7 +6466,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			 * satisfied
 			 */
 			required_kernelcore -= min(required_kernelcore,
-								size_pages);
+						   size_pages);
 			kernelcore_remaining -= size_pages;
 			if (!kernelcore_remaining)
 				break;
@@ -6534,9 +6534,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 
 	/* Record where the zone boundaries are */
 	memset(arch_zone_lowest_possible_pfn, 0,
-				sizeof(arch_zone_lowest_possible_pfn));
+	       sizeof(arch_zone_lowest_possible_pfn));
 	memset(arch_zone_highest_possible_pfn, 0,
-				sizeof(arch_zone_highest_possible_pfn));
+	       sizeof(arch_zone_highest_possible_pfn));
 
 	start_pfn = find_min_pfn_with_active_regions();
 
@@ -6562,14 +6562,14 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 			continue;
 		pr_info("  %-8s ", zone_names[i]);
 		if (arch_zone_lowest_possible_pfn[i] ==
-				arch_zone_highest_possible_pfn[i])
+		    arch_zone_highest_possible_pfn[i])
 			pr_cont("empty\n");
 		else
 			pr_cont("[mem %#018Lx-%#018Lx]\n",
 				(u64)arch_zone_lowest_possible_pfn[i]
-					<< PAGE_SHIFT,
+				<< PAGE_SHIFT,
 				((u64)arch_zone_highest_possible_pfn[i]
-					<< PAGE_SHIFT) - 1);
+				 << PAGE_SHIFT) - 1);
 	}
 
 	/* Print out the PFNs ZONE_MOVABLE begins at in each node */
@@ -6577,7 +6577,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	for (i = 0; i < MAX_NUMNODES; i++) {
 		if (zone_movable_pfn[i])
 			pr_info("  Node %d: %#018Lx\n", i,
-			       (u64)zone_movable_pfn[i] << PAGE_SHIFT);
+				(u64)zone_movable_pfn[i] << PAGE_SHIFT);
 	}
 
 	/* Print out the early node map */
@@ -6593,7 +6593,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
 		free_area_init_node(nid, NULL,
-				find_min_pfn_for_node(nid), NULL);
+				    find_min_pfn_for_node(nid), NULL);
 
 		/* Any memory on that node */
 		if (pgdat->node_present_pages)
@@ -6711,14 +6711,14 @@ void __init mem_init_print_info(const char *str)
 	 *    please refer to arch/tile/kernel/vmlinux.lds.S.
 	 * 3) .rodata.* may be embedded into .text or .data sections.
 	 */
-#define adj_init_size(start, end, size, pos, adj) \
-	do { \
-		if (start <= pos && pos < end && size > adj) \
-			size -= adj; \
+#define adj_init_size(start, end, size, pos, adj)		\
+	do {							\
+		if (start <= pos && pos < end && size > adj)	\
+			size -= adj;				\
 	} while (0)
 
 	adj_init_size(__init_begin, __init_end, init_data_size,
-		     _sinittext, init_code_size);
+		      _sinittext, init_code_size);
 	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
 	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
 	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
@@ -6762,7 +6762,7 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
 void __init free_area_init(unsigned long *zones_size)
 {
 	free_area_init_node(0, zones_size,
-			__pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
+			    __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
 }
 
 static int page_alloc_cpu_dead(unsigned int cpu)
@@ -6992,7 +6992,7 @@ int __meminit init_per_zone_wmark_min(void)
 			min_free_kbytes = 65536;
 	} else {
 		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
-				new_min_free_kbytes, user_min_free_kbytes);
+			new_min_free_kbytes, user_min_free_kbytes);
 	}
 	setup_per_zone_wmarks();
 	refresh_zone_stat_thresholds();
@@ -7013,7 +7013,7 @@ core_initcall(init_per_zone_wmark_min)
  *	changes.
  */
 int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+				   void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7029,7 +7029,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 }
 
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					  void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7054,12 +7054,12 @@ static void setup_min_unmapped_ratio(void)
 
 	for_each_zone(zone)
 		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
-				sysctl_min_unmapped_ratio) / 100;
+							 sysctl_min_unmapped_ratio) / 100;
 }
 
 
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					     void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7082,11 +7082,11 @@ static void setup_min_slab_ratio(void)
 
 	for_each_zone(zone)
 		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
-				sysctl_min_slab_ratio) / 100;
+						     sysctl_min_slab_ratio) / 100;
 }
 
 int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					 void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7110,7 +7110,7 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
  * if in function of the boot time zone sizes.
  */
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec_minmax(table, write, buffer, length, ppos);
 	setup_per_zone_lowmem_reserve();
@@ -7123,7 +7123,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
  * pagelist can have before it gets flushed back to buddy allocator.
  */
 int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					    void __user *buffer, size_t *length, loff_t *ppos)
 {
 	struct zone *zone;
 	int old_percpu_pagelist_fraction;
@@ -7153,7 +7153,7 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
 
 		for_each_possible_cpu(cpu)
 			pageset_set_high_and_batch(zone,
-					per_cpu_ptr(zone->pageset, cpu));
+						   per_cpu_ptr(zone->pageset, cpu));
 	}
 out:
 	mutex_unlock(&pcp_batch_high_lock);
@@ -7461,7 +7461,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		}
 
 		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
-							&cc->migratepages);
+							     &cc->migratepages);
 		cc->nr_migratepages -= nr_reclaimed;
 
 		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
@@ -7645,7 +7645,7 @@ void __meminit zone_pcp_update(struct zone *zone)
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_possible_cpu(cpu)
 		pageset_set_high_and_batch(zone,
-				per_cpu_ptr(zone->pageset, cpu));
+					   per_cpu_ptr(zone->pageset, cpu));
 	mutex_unlock(&pcp_batch_high_lock);
 }
 #endif
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
@ 2017-03-16  1:59   ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  1:59 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Whitespace changes only; git diff -w shows no difference.
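
For illustration, this is the kind of change made throughout the file
(a made-up declaration, not code from mm/page_alloc.c): continuation
lines of a wrapped argument list are re-indented so they start in the
column just past the opening parenthesis, e.g.

	/* before: continuation indented with tabs only */
	static void example_set_range(unsigned long start_pfn,
			unsigned long end_pfn, int migratetype);

	/* after: continuation aligned to the opening parenthesis */
	static void example_set_range(unsigned long start_pfn,
				      unsigned long end_pfn, int migratetype);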

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 552 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 276 insertions(+), 276 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 504749032400..79fc996892c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -204,33 +204,33 @@ static void __free_pages_ok(struct page *page, unsigned int order);
  */
 int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
 #ifdef CONFIG_ZONE_DMA
-	 256,
+	256,
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	 256,
+	256,
 #endif
 #ifdef CONFIG_HIGHMEM
-	 32,
+	32,
 #endif
-	 32,
+	32,
 };
 
 EXPORT_SYMBOL(totalram_pages);
 
 static char * const zone_names[MAX_NR_ZONES] = {
 #ifdef CONFIG_ZONE_DMA
-	 "DMA",
+	"DMA",
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	 "DMA32",
+	"DMA32",
 #endif
-	 "Normal",
+	"Normal",
 #ifdef CONFIG_HIGHMEM
-	 "HighMem",
+	"HighMem",
 #endif
-	 "Movable",
+	"Movable",
 #ifdef CONFIG_ZONE_DEVICE
-	 "Device",
+	"Device",
 #endif
 };
 
@@ -310,8 +310,8 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
  * later in the boot cycle when it can be parallelised.
  */
 static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+				     unsigned long pfn, unsigned long zone_end,
+				     unsigned long *nr_initialised)
 {
 	unsigned long max_initialise;
 
@@ -323,7 +323,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
 	 * two large system hashes that can take up 1GB for 0.25TB/node.
 	 */
 	max_initialise = max(2UL << (30 - PAGE_SHIFT),
-		(pgdat->node_spanned_pages >> 8));
+			     (pgdat->node_spanned_pages >> 8));
 
 	(*nr_initialised)++;
 	if ((*nr_initialised > max_initialise) &&
@@ -345,8 +345,8 @@ static inline bool early_page_uninitialised(unsigned long pfn)
 }
 
 static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+				     unsigned long pfn, unsigned long zone_end,
+				     unsigned long *nr_initialised)
 {
 	return true;
 }
@@ -354,7 +354,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
 static inline unsigned long *get_pageblock_bitmap(struct page *page,
-							unsigned long pfn)
+						  unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
 	return __pfn_to_section(pfn)->pageblock_flags;
@@ -384,9 +384,9 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  * Return: pageblock_bits flags
  */
 static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
-					unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+							       unsigned long pfn,
+							       unsigned long end_bitidx,
+							       unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -403,8 +403,8 @@ static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
 }
 
 unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+				      unsigned long end_bitidx,
+				      unsigned long mask)
 {
 	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
 }
@@ -423,9 +423,9 @@ static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned
  * @mask: mask of bits that the caller is interested in
  */
 void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
-					unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+			     unsigned long pfn,
+			     unsigned long end_bitidx,
+			     unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -460,7 +460,7 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 		migratetype = MIGRATE_UNMOVABLE;
 
 	set_pageblock_flags_group(page, (unsigned long)migratetype,
-					PB_migrate, PB_migrate_end);
+				  PB_migrate, PB_migrate_end);
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -481,8 +481,8 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 
 	if (ret)
 		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
-			pfn, zone_to_nid(zone), zone->name,
-			start_pfn, start_pfn + sp);
+		       pfn, zone_to_nid(zone), zone->name,
+		       start_pfn, start_pfn + sp);
 
 	return ret;
 }
@@ -516,7 +516,7 @@ static inline int bad_range(struct zone *zone, struct page *page)
 #endif
 
 static void bad_page(struct page *page, const char *reason,
-		unsigned long bad_flags)
+		     unsigned long bad_flags)
 {
 	static unsigned long resume;
 	static unsigned long nr_shown;
@@ -533,7 +533,7 @@ static void bad_page(struct page *page, const char *reason,
 		}
 		if (nr_unshown) {
 			pr_alert(
-			      "BUG: Bad page state: %lu messages suppressed\n",
+				"BUG: Bad page state: %lu messages suppressed\n",
 				nr_unshown);
 			nr_unshown = 0;
 		}
@@ -543,12 +543,12 @@ static void bad_page(struct page *page, const char *reason,
 		resume = jiffies + 60 * HZ;
 
 	pr_alert("BUG: Bad page state in process %s  pfn:%05lx\n",
-		current->comm, page_to_pfn(page));
+		 current->comm, page_to_pfn(page));
 	__dump_page(page, reason);
 	bad_flags &= page->flags;
 	if (bad_flags)
 		pr_alert("bad because of flags: %#lx(%pGp)\n",
-						bad_flags, &bad_flags);
+			 bad_flags, &bad_flags);
 	dump_page_owner(page);
 
 	print_modules();
@@ -599,7 +599,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 unsigned int _debug_guardpage_minorder;
 bool _debug_pagealloc_enabled __read_mostly
-			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
 EXPORT_SYMBOL(_debug_pagealloc_enabled);
 bool _debug_guardpage_enabled __read_mostly;
 
@@ -654,7 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
+				  unsigned int order, int migratetype)
 {
 	struct page_ext *page_ext;
 
@@ -679,7 +679,7 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
 }
 
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
+				    unsigned int order, int migratetype)
 {
 	struct page_ext *page_ext;
 
@@ -699,9 +699,9 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 #else
 struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
+				  unsigned int order, int migratetype) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype) {}
+				    unsigned int order, int migratetype) {}
 #endif
 
 static inline void set_page_order(struct page *page, unsigned int order)
@@ -732,7 +732,7 @@ static inline void rmv_page_order(struct page *page)
  * For recording page's order, we use page_private(page).
  */
 static inline int page_is_buddy(struct page *page, struct page *buddy,
-							unsigned int order)
+				unsigned int order)
 {
 	if (page_is_guard(buddy) && page_order(buddy) == order) {
 		if (page_zone_id(page) != page_zone_id(buddy))
@@ -785,9 +785,9 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
  */
 
 static inline void __free_one_page(struct page *page,
-		unsigned long pfn,
-		struct zone *zone, unsigned int order,
-		int migratetype)
+				   unsigned long pfn,
+				   struct zone *zone, unsigned int order,
+				   int migratetype)
 {
 	unsigned long combined_pfn;
 	unsigned long uninitialized_var(buddy_pfn);
@@ -848,8 +848,8 @@ static inline void __free_one_page(struct page *page,
 			buddy_mt = get_pageblock_migratetype(buddy);
 
 			if (migratetype != buddy_mt
-					&& (is_migrate_isolate(migratetype) ||
-						is_migrate_isolate(buddy_mt)))
+			    && (is_migrate_isolate(migratetype) ||
+				is_migrate_isolate(buddy_mt)))
 				goto done_merging;
 		}
 		max_order++;
@@ -876,7 +876,7 @@ static inline void __free_one_page(struct page *page,
 		if (pfn_valid_within(buddy_pfn) &&
 		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
 			list_add_tail(&page->lru,
-				&zone->free_area[order].free_list[migratetype]);
+				      &zone->free_area[order].free_list[migratetype]);
 			goto out;
 		}
 	}
@@ -892,17 +892,17 @@ static inline void __free_one_page(struct page *page,
  * check if necessary.
  */
 static inline bool page_expected_state(struct page *page,
-					unsigned long check_flags)
+				       unsigned long check_flags)
 {
 	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		return false;
 
 	if (unlikely((unsigned long)page->mapping |
-			page_ref_count(page) |
+		     page_ref_count(page) |
 #ifdef CONFIG_MEMCG
-			(unsigned long)page->mem_cgroup |
+		     (unsigned long)page->mem_cgroup |
 #endif
-			(page->flags & check_flags)))
+		     (page->flags & check_flags)))
 		return false;
 
 	return true;
@@ -994,7 +994,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 }
 
 static __always_inline bool free_pages_prepare(struct page *page,
-					unsigned int order, bool check_free)
+					       unsigned int order, bool check_free)
 {
 	int bad = 0;
 
@@ -1042,7 +1042,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		debug_check_no_locks_freed(page_address(page),
 					   PAGE_SIZE << order);
 		debug_check_no_obj_freed(page_address(page),
-					   PAGE_SIZE << order);
+					 PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
 	kernel_poison_pages(page, 1 << order, 0);
@@ -1086,7 +1086,7 @@ static bool bulkfree_pcp_prepare(struct page *page)
  * pinned" detection logic.
  */
 static void free_pcppages_bulk(struct zone *zone, int count,
-					struct per_cpu_pages *pcp)
+			       struct per_cpu_pages *pcp)
 {
 	int migratetype = 0;
 	int batch_free = 0;
@@ -1142,16 +1142,16 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 }
 
 static void free_one_page(struct zone *zone,
-				struct page *page, unsigned long pfn,
-				unsigned int order,
-				int migratetype)
+			  struct page *page, unsigned long pfn,
+			  unsigned int order,
+			  int migratetype)
 {
 	unsigned long flags;
 
 	spin_lock_irqsave(&zone->lock, flags);
 	__count_vm_events(PGFREE, 1 << order);
 	if (unlikely(has_isolate_pageblock(zone) ||
-		is_migrate_isolate(migratetype))) {
+		     is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
 	__free_one_page(page, pfn, zone, order, migratetype);
@@ -1159,7 +1159,7 @@ static void free_one_page(struct zone *zone,
 }
 
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+					 unsigned long zone, int nid)
 {
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
@@ -1263,7 +1263,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
 	__free_pages(page, order);
 }
 
-#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \
+#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
 	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
 
 static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
@@ -1285,7 +1285,7 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 
 #ifdef CONFIG_NODES_SPAN_OTHER_NODES
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-					struct mminit_pfnnid_cache *state)
+						struct mminit_pfnnid_cache *state)
 {
 	int nid;
 
@@ -1308,7 +1308,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 	return true;
 }
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-					struct mminit_pfnnid_cache *state)
+						struct mminit_pfnnid_cache *state)
 {
 	return true;
 }
@@ -1316,7 +1316,7 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 
 
 void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
-							unsigned int order)
+				 unsigned int order)
 {
 	if (early_page_uninitialised(pfn))
 		return;
@@ -1373,8 +1373,8 @@ void set_zone_contiguous(struct zone *zone)
 
 	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
 	for (; block_start_pfn < zone_end_pfn(zone);
-			block_start_pfn = block_end_pfn,
-			 block_end_pfn += pageblock_nr_pages) {
+	     block_start_pfn = block_end_pfn,
+		     block_end_pfn += pageblock_nr_pages) {
 
 		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
 
@@ -1394,7 +1394,7 @@ void clear_zone_contiguous(struct zone *zone)
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static void __init deferred_free_range(struct page *page,
-					unsigned long pfn, int nr_pages)
+				       unsigned long pfn, int nr_pages)
 {
 	int i;
 
@@ -1501,7 +1501,7 @@ static int __init deferred_init_memmap(void *data)
 			} else {
 				nr_pages += nr_to_free;
 				deferred_free_range(free_base_page,
-						free_base_pfn, nr_to_free);
+						    free_base_pfn, nr_to_free);
 				free_base_page = NULL;
 				free_base_pfn = nr_to_free = 0;
 
@@ -1524,11 +1524,11 @@ static int __init deferred_init_memmap(void *data)
 
 			/* Where possible, batch up pages for a single free */
 			continue;
-free_range:
+		free_range:
 			/* Free the current block of pages to allocator */
 			nr_pages += nr_to_free;
 			deferred_free_range(free_base_page, free_base_pfn,
-								nr_to_free);
+					    nr_to_free);
 			free_base_page = NULL;
 			free_base_pfn = nr_to_free = 0;
 		}
@@ -1543,7 +1543,7 @@ static int __init deferred_init_memmap(void *data)
 	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
 
 	pr_info("node %d initialised, %lu pages in %ums\n", nid, nr_pages,
-					jiffies_to_msecs(jiffies - start));
+		jiffies_to_msecs(jiffies - start));
 
 	pgdat_init_report_one_done();
 	return 0;
@@ -1620,8 +1620,8 @@ void __init init_cma_reserved_pageblock(struct page *page)
  * -- nyc
  */
 static inline void expand(struct zone *zone, struct page *page,
-	int low, int high, struct free_area *area,
-	int migratetype)
+			  int low, int high, struct free_area *area,
+			  int migratetype)
 {
 	unsigned long size = 1 << high;
 
@@ -1681,7 +1681,7 @@ static void check_new_page_bad(struct page *page)
 static inline int check_new_page(struct page *page)
 {
 	if (likely(page_expected_state(page,
-				PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
+				       PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
 		return 0;
 
 	check_new_page_bad(page);
@@ -1729,7 +1729,7 @@ static bool check_new_pages(struct page *page, unsigned int order)
 }
 
 inline void post_alloc_hook(struct page *page, unsigned int order,
-				gfp_t gfp_flags)
+			    gfp_t gfp_flags)
 {
 	set_page_private(page, 0);
 	set_page_refcounted(page);
@@ -1742,7 +1742,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-							unsigned int alloc_flags)
+			  unsigned int alloc_flags)
 {
 	int i;
 	bool poisoned = true;
@@ -1780,7 +1780,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
  */
 static inline
 struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
-						int migratetype)
+				int migratetype)
 {
 	unsigned int current_order;
 	struct free_area *area;
@@ -1790,7 +1790,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
 		area = &(zone->free_area[current_order]);
 		page = list_first_entry_or_null(&area->free_list[migratetype],
-							struct page, lru);
+						struct page, lru);
 		if (!page)
 			continue;
 		list_del(&page->lru);
@@ -1823,13 +1823,13 @@ static int fallbacks[MIGRATE_TYPES][4] = {
 
 #ifdef CONFIG_CMA
 static struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order)
+					   unsigned int order)
 {
 	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
 }
 #else
 static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order) { return NULL; }
+						  unsigned int order) { return NULL; }
 #endif
 
 /*
@@ -1875,7 +1875,7 @@ static int move_freepages(struct zone *zone,
 			 * isolating, as that would be expensive.
 			 */
 			if (num_movable &&
-					(PageLRU(page) || __PageMovable(page)))
+			    (PageLRU(page) || __PageMovable(page)))
 				(*num_movable)++;
 
 			page++;
@@ -1893,7 +1893,7 @@ static int move_freepages(struct zone *zone,
 }
 
 int move_freepages_block(struct zone *zone, struct page *page,
-				int migratetype, int *num_movable)
+			 int migratetype, int *num_movable)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *start_page, *end_page;
@@ -1911,11 +1911,11 @@ int move_freepages_block(struct zone *zone, struct page *page,
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype,
-								num_movable);
+			      num_movable);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
-					int start_order, int migratetype)
+				   int start_order, int migratetype)
 {
 	int nr_pageblocks = 1 << (start_order - pageblock_order);
 
@@ -1950,9 +1950,9 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 		return true;
 
 	if (order >= pageblock_order / 2 ||
-		start_mt == MIGRATE_RECLAIMABLE ||
-		start_mt == MIGRATE_UNMOVABLE ||
-		page_group_by_mobility_disabled)
+	    start_mt == MIGRATE_RECLAIMABLE ||
+	    start_mt == MIGRATE_UNMOVABLE ||
+	    page_group_by_mobility_disabled)
 		return true;
 
 	return false;
@@ -1967,7 +1967,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
  * itself, so pages freed in the future will be put on the correct free list.
  */
 static void steal_suitable_fallback(struct zone *zone, struct page *page,
-					int start_type, bool whole_block)
+				    int start_type, bool whole_block)
 {
 	unsigned int current_order = page_order(page);
 	struct free_area *area;
@@ -1994,7 +1994,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		goto single_page;
 
 	free_pages = move_freepages_block(zone, page, start_type,
-						&movable_pages);
+					  &movable_pages);
 	/*
 	 * Determine how many pages are compatible with our allocation.
 	 * For movable allocation, it's the number of movable pages which
@@ -2012,7 +2012,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		 */
 		if (old_block_type == MIGRATE_MOVABLE)
 			alike_pages = pageblock_nr_pages
-						- (free_pages + movable_pages);
+				- (free_pages + movable_pages);
 		else
 			alike_pages = 0;
 	}
@@ -2022,7 +2022,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * comparable migratability as our allocation, claim the whole block.
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order - 1)) ||
-			page_group_by_mobility_disabled)
+	    page_group_by_mobility_disabled)
 		set_pageblock_migratetype(page, start_type);
 
 	return;
@@ -2039,7 +2039,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_stealable, bool *can_steal)
+			   int migratetype, bool only_stealable, bool *can_steal)
 {
 	int i;
 	int fallback_mt;
@@ -2074,7 +2074,7 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
  * there are no empty page blocks that contain a page with a suitable order
  */
 static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
-				unsigned int alloc_order)
+					 unsigned int alloc_order)
 {
 	int mt;
 	unsigned long max_managed, flags;
@@ -2116,7 +2116,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * pageblock is exhausted.
  */
 static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
-						bool force)
+					   bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2127,13 +2127,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	bool ret;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
-								ac->nodemask) {
+					ac->nodemask) {
 		/*
 		 * Preserve at least one pageblock unless memory pressure
 		 * is really high.
 		 */
 		if (!force && zone->nr_reserved_highatomic <=
-					pageblock_nr_pages)
+		    pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2141,8 +2141,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			struct free_area *area = &(zone->free_area[order]);
 
 			page = list_first_entry_or_null(
-					&area->free_list[MIGRATE_HIGHATOMIC],
-					struct page, lru);
+				&area->free_list[MIGRATE_HIGHATOMIC],
+				struct page, lru);
 			if (!page)
 				continue;
 
@@ -2162,8 +2162,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 				 * underflows.
 				 */
 				zone->nr_reserved_highatomic -= min(
-						pageblock_nr_pages,
-						zone->nr_reserved_highatomic);
+					pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
 			}
 
 			/*
@@ -2177,7 +2177,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype,
-									NULL);
+						   NULL);
 			if (ret) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
@@ -2206,22 +2206,22 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 
 	/* Find the largest possible block of pages in the other list */
 	for (current_order = MAX_ORDER - 1;
-				current_order >= order && current_order <= MAX_ORDER - 1;
-				--current_order) {
+	     current_order >= order && current_order <= MAX_ORDER - 1;
+	     --current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &can_steal);
+						     start_migratetype, false, &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
 		page = list_first_entry(&area->free_list[fallback_mt],
-						struct page, lru);
+					struct page, lru);
 
 		steal_suitable_fallback(zone, page, start_migratetype,
-								can_steal);
+					can_steal);
 
 		trace_mm_page_alloc_extfrag(page, order, current_order,
-			start_migratetype, fallback_mt);
+					    start_migratetype, fallback_mt);
 
 		return true;
 	}
@@ -2234,7 +2234,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
  * Call me with the zone->lock already held.
  */
 static struct page *__rmqueue(struct zone *zone, unsigned int order,
-				int migratetype)
+			      int migratetype)
 {
 	struct page *page;
 
@@ -2508,7 +2508,7 @@ void mark_free_pages(struct zone *zone)
 
 	for_each_migratetype_order(order, t) {
 		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], lru) {
+				    &zone->free_area[order].free_list[t], lru) {
 			unsigned long i;
 
 			pfn = page_to_pfn(page);
@@ -2692,8 +2692,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 
 /* Remove page from the per-cpu list, caller must protect the list */
 static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
-			bool cold, struct per_cpu_pages *pcp,
-			struct list_head *list)
+				      bool cold, struct per_cpu_pages *pcp,
+				      struct list_head *list)
 {
 	struct page *page;
 
@@ -2702,8 +2702,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 	do {
 		if (list_empty(list)) {
 			pcp->count += rmqueue_bulk(zone, 0,
-					pcp->batch, list,
-					migratetype, cold);
+						   pcp->batch, list,
+						   migratetype, cold);
 			if (unlikely(list_empty(list)))
 				return NULL;
 		}
@@ -2722,8 +2722,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 
 /* Lock and remove page from the per-cpu list */
 static struct page *rmqueue_pcplist(struct zone *preferred_zone,
-			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, int migratetype)
+				    struct zone *zone, unsigned int order,
+				    gfp_t gfp_flags, int migratetype)
 {
 	struct per_cpu_pages *pcp;
 	struct list_head *list;
@@ -2747,16 +2747,16 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
  */
 static inline
 struct page *rmqueue(struct zone *preferred_zone,
-			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, unsigned int alloc_flags,
-			int migratetype)
+		     struct zone *zone, unsigned int order,
+		     gfp_t gfp_flags, unsigned int alloc_flags,
+		     int migratetype)
 {
 	unsigned long flags;
 	struct page *page;
 
 	if (likely(order == 0) && !in_interrupt()) {
 		page = rmqueue_pcplist(preferred_zone, zone, order,
-				gfp_flags, migratetype);
+				       gfp_flags, migratetype);
 		goto out;
 	}
 
@@ -2826,7 +2826,7 @@ static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
 		return false;
 	if (fail_page_alloc.ignore_gfp_reclaim &&
-			(gfp_mask & __GFP_DIRECT_RECLAIM))
+	    (gfp_mask & __GFP_DIRECT_RECLAIM))
 		return false;
 
 	return should_fail(&fail_page_alloc.attr, 1 << order);
@@ -2845,10 +2845,10 @@ static int __init fail_page_alloc_debugfs(void)
 		return PTR_ERR(dir);
 
 	if (!debugfs_create_bool("ignore-gfp-wait", mode, dir,
-				&fail_page_alloc.ignore_gfp_reclaim))
+				 &fail_page_alloc.ignore_gfp_reclaim))
 		goto fail;
 	if (!debugfs_create_bool("ignore-gfp-highmem", mode, dir,
-				&fail_page_alloc.ignore_gfp_highmem))
+				 &fail_page_alloc.ignore_gfp_highmem))
 		goto fail;
 	if (!debugfs_create_u32("min-order", mode, dir,
 				&fail_page_alloc.min_order))
@@ -2949,14 +2949,14 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
-		      int classzone_idx, unsigned int alloc_flags)
+		       int classzone_idx, unsigned int alloc_flags)
 {
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					zone_page_state(z, NR_FREE_PAGES));
+				   zone_page_state(z, NR_FREE_PAGES));
 }
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
-		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
+				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -2978,11 +2978,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 		return true;
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					free_pages);
+				   free_pages);
 }
 
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
-			unsigned long mark, int classzone_idx)
+			    unsigned long mark, int classzone_idx)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 
@@ -2990,14 +2990,14 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
 		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
-								free_pages);
+				   free_pages);
 }
 
 #ifdef CONFIG_NUMA
 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
 {
 	return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
-				RECLAIM_DISTANCE;
+		RECLAIM_DISTANCE;
 }
 #else	/* CONFIG_NUMA */
 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
@@ -3012,7 +3012,7 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
  */
 static struct page *
 get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
-						const struct alloc_context *ac)
+		       const struct alloc_context *ac)
 {
 	struct zoneref *z = ac->preferred_zoneref;
 	struct zone *zone;
@@ -3023,14 +3023,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
 	 */
 	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
-								ac->nodemask) {
+					ac->nodemask) {
 		struct page *page;
 		unsigned long mark;
 
 		if (cpusets_enabled() &&
-			(alloc_flags & ALLOC_CPUSET) &&
-			!__cpuset_zone_allowed(zone, gfp_mask))
-				continue;
+		    (alloc_flags & ALLOC_CPUSET) &&
+		    !__cpuset_zone_allowed(zone, gfp_mask))
+			continue;
 		/*
 		 * When allocating a page cache page for writing, we
 		 * want to get it from a node that is within its dirty
@@ -3062,7 +3062,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 
 		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
 		if (!zone_watermark_fast(zone, order, mark,
-				       ac_classzone_idx(ac), alloc_flags)) {
+					 ac_classzone_idx(ac), alloc_flags)) {
 			int ret;
 
 			/* Checked here to keep the fast path fast */
@@ -3085,16 +3085,16 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			default:
 				/* did we reclaim enough */
 				if (zone_watermark_ok(zone, order, mark,
-						ac_classzone_idx(ac), alloc_flags))
+						      ac_classzone_idx(ac), alloc_flags))
 					goto try_this_zone;
 
 				continue;
 			}
 		}
 
-try_this_zone:
+	try_this_zone:
 		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
-				gfp_mask, alloc_flags, ac->migratetype);
+			       gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
 			prep_new_page(page, order, gfp_mask, alloc_flags);
 
@@ -3188,21 +3188,21 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 	struct page *page;
 
 	page = get_page_from_freelist(gfp_mask, order,
-			alloc_flags | ALLOC_CPUSET, ac);
+				      alloc_flags | ALLOC_CPUSET, ac);
 	/*
 	 * fallback to ignore cpuset restriction if our nodes
 	 * are depleted
 	 */
 	if (!page)
 		page = get_page_from_freelist(gfp_mask, order,
-				alloc_flags, ac);
+					      alloc_flags, ac);
 
 	return page;
 }
 
 static inline struct page *
 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
-	const struct alloc_context *ac, unsigned long *did_some_progress)
+		      const struct alloc_context *ac, unsigned long *did_some_progress)
 {
 	struct oom_control oc = {
 		.zonelist = ac->zonelist,
@@ -3231,7 +3231,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 * we're still under heavy pressure.
 	 */
 	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
-					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
+				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
 
@@ -3270,7 +3270,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (gfp_mask & __GFP_NOFAIL)
 			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
-					ALLOC_NO_WATERMARKS, ac);
+							     ALLOC_NO_WATERMARKS, ac);
 	}
 out:
 	mutex_unlock(&oom_lock);
@@ -3287,8 +3287,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 /* Try memory compaction for high-order allocations before reclaim */
 static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     enum compact_priority prio, enum compact_result *compact_result)
 {
 	struct page *page;
 
@@ -3297,7 +3297,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
 	current->flags |= PF_MEMALLOC;
 	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
-									prio);
+					       prio);
 	current->flags &= ~PF_MEMALLOC;
 
 	if (*compact_result <= COMPACT_INACTIVE)
@@ -3389,7 +3389,7 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 	 */
 check_priority:
 	min_priority = (order > PAGE_ALLOC_COSTLY_ORDER) ?
-			MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
+		MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
 
 	if (*compact_priority > min_priority) {
 		(*compact_priority)--;
@@ -3403,8 +3403,8 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 #else
 static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     enum compact_priority prio, enum compact_result *compact_result)
 {
 	*compact_result = COMPACT_SKIPPED;
 	return NULL;
@@ -3431,7 +3431,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
 					ac->nodemask) {
 		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
-					ac_classzone_idx(ac), alloc_flags))
+				      ac_classzone_idx(ac), alloc_flags))
 			return true;
 	}
 	return false;
@@ -3441,7 +3441,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
 /* Perform direct synchronous page reclaim */
 static int
 __perform_reclaim(gfp_t gfp_mask, unsigned int order,
-					const struct alloc_context *ac)
+		  const struct alloc_context *ac)
 {
 	struct reclaim_state reclaim_state;
 	int progress;
@@ -3456,7 +3456,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 	current->reclaim_state = &reclaim_state;
 
 	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
-								ac->nodemask);
+				     ac->nodemask);
 
 	current->reclaim_state = NULL;
 	lockdep_clear_current_reclaim_state();
@@ -3470,8 +3470,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 /* The really slow allocator path where we enter direct reclaim */
 static inline struct page *
 __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
-		unsigned int alloc_flags, const struct alloc_context *ac,
-		unsigned long *did_some_progress)
+			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     unsigned long *did_some_progress)
 {
 	struct page *page = NULL;
 	bool drained = false;
@@ -3560,8 +3560,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 	if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
 		return true;
 	if (!in_interrupt() &&
-			((current->flags & PF_MEMALLOC) ||
-			 unlikely(test_thread_flag(TIF_MEMDIE))))
+	    ((current->flags & PF_MEMALLOC) ||
+	     unlikely(test_thread_flag(TIF_MEMDIE))))
 		return true;
 
 	return false;
@@ -3625,9 +3625,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 		 * reclaimable pages?
 		 */
 		wmark = __zone_watermark_ok(zone, order, min_wmark,
-				ac_classzone_idx(ac), alloc_flags, available);
+					    ac_classzone_idx(ac), alloc_flags, available);
 		trace_reclaim_retry_zone(z, order, reclaimable,
-				available, min_wmark, *no_progress_loops, wmark);
+					 available, min_wmark, *no_progress_loops, wmark);
 		if (wmark) {
 			/*
 			 * If we didn't make any progress and have a lot of
@@ -3639,7 +3639,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 				unsigned long write_pending;
 
 				write_pending = zone_page_state_snapshot(zone,
-							NR_ZONE_WRITE_PENDING);
+									 NR_ZONE_WRITE_PENDING);
 
 				if (2 * write_pending > reclaimable) {
 					congestion_wait(BLK_RW_ASYNC, HZ / 10);
@@ -3670,7 +3670,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
-						struct alloc_context *ac)
+		       struct alloc_context *ac)
 {
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
@@ -3701,7 +3701,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * callers that are not in atomic context.
 	 */
 	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)) ==
-				(__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
+			 (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
 		gfp_mask &= ~__GFP_ATOMIC;
 
 retry_cpuset:
@@ -3724,7 +3724,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * could end up iterating over non-eligible zones endlessly.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx, ac->nodemask);
 	if (!ac->preferred_zoneref->zone)
 		goto nopage;
 
@@ -3749,13 +3749,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
 	 */
 	if (can_direct_reclaim &&
-			(costly_order ||
-			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
-			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
+	    (costly_order ||
+	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
+	    && !gfp_pfmemalloc_allowed(gfp_mask)) {
 		page = __alloc_pages_direct_compact(gfp_mask, order,
-						alloc_flags, ac,
-						INIT_COMPACT_PRIORITY,
-						&compact_result);
+						    alloc_flags, ac,
+						    INIT_COMPACT_PRIORITY,
+						    &compact_result);
 		if (page)
 			goto got_pg;
 
@@ -3800,7 +3800,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
 		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+							     ac->high_zoneidx, ac->nodemask);
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
@@ -3815,8 +3815,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	/* Make sure we know about allocations which stall for too long */
 	if (time_after(jiffies, alloc_start + stall_timeout)) {
 		warn_alloc(gfp_mask & ~__GFP_NOWARN, ac->nodemask,
-			"page allocation stalls for %ums, order:%u",
-			jiffies_to_msecs(jiffies - alloc_start), order);
+			   "page allocation stalls for %ums, order:%u",
+			   jiffies_to_msecs(jiffies - alloc_start), order);
 		stall_timeout += 10 * HZ;
 	}
 
@@ -3826,13 +3826,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
-							&did_some_progress);
+					    &did_some_progress);
 	if (page)
 		goto got_pg;
 
 	/* Try direct compaction and then allocating */
 	page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
-					compact_priority, &compact_result);
+					    compact_priority, &compact_result);
 	if (page)
 		goto got_pg;
 
@@ -3858,9 +3858,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * of free memory (see __compaction_suitable)
 	 */
 	if (did_some_progress > 0 &&
-			should_compact_retry(ac, order, alloc_flags,
-				compact_result, &compact_priority,
-				&compaction_retries))
+	    should_compact_retry(ac, order, alloc_flags,
+				 compact_result, &compact_priority,
+				 &compaction_retries))
 		goto retry;
 
 	/*
@@ -3938,15 +3938,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 fail:
 	warn_alloc(gfp_mask, ac->nodemask,
-			"page allocation failure: order:%u", order);
+		   "page allocation failure: order:%u", order);
 got_pg:
 	return page;
 }
 
 static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
-		struct zonelist *zonelist, nodemask_t *nodemask,
-		struct alloc_context *ac, gfp_t *alloc_mask,
-		unsigned int *alloc_flags)
+				       struct zonelist *zonelist, nodemask_t *nodemask,
+				       struct alloc_context *ac, gfp_t *alloc_mask,
+				       unsigned int *alloc_flags)
 {
 	ac->high_zoneidx = gfp_zone(gfp_mask);
 	ac->zonelist = zonelist;
@@ -3976,7 +3976,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 /* Determine whether to spread dirty pages and what the first usable zone */
 static inline void finalise_ac(gfp_t gfp_mask,
-		unsigned int order, struct alloc_context *ac)
+			       unsigned int order, struct alloc_context *ac)
 {
 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -3987,7 +3987,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
 	 * may get reset for allocations that ignore memory policies.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-					ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx, ac->nodemask);
 }
 
 /*
@@ -3995,7 +3995,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
  */
 struct page *
 __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
-			struct zonelist *zonelist, nodemask_t *nodemask)
+		       struct zonelist *zonelist, nodemask_t *nodemask)
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -4114,7 +4114,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
+		__GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
 	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
@@ -4150,7 +4150,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 	int offset;
 
 	if (unlikely(!nc->va)) {
-refill:
+	refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
@@ -4209,7 +4209,7 @@ void page_frag_free(void *addr)
 EXPORT_SYMBOL(page_frag_free);
 
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
-		size_t size)
+			      size_t size)
 {
 	if (addr) {
 		unsigned long alloc_end = addr + (PAGE_SIZE << order);
@@ -4378,7 +4378,7 @@ long si_mem_available(void)
 	 * and cannot be freed. Cap this estimate at the low watermark.
 	 */
 	available += global_page_state(NR_SLAB_RECLAIMABLE) -
-		     min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
+		min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
 
 	if (available < 0)
 		available = 0;
@@ -4714,7 +4714,7 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 		zone = pgdat->node_zones + zone_type;
 		if (managed_zone(zone)) {
 			zoneref_set_zone(zone,
-				&zonelist->_zonerefs[nr_zones++]);
+					 &zonelist->_zonerefs[nr_zones++]);
 			check_highest_zone(zone_type);
 		}
 	} while (zone_type);
@@ -4792,8 +4792,8 @@ early_param("numa_zonelist_order", setup_numa_zonelist_order);
  * sysctl handler for numa_zonelist_order
  */
 int numa_zonelist_order_handler(struct ctl_table *table, int write,
-		void __user *buffer, size_t *length,
-		loff_t *ppos)
+				void __user *buffer, size_t *length,
+				loff_t *ppos)
 {
 	char saved_string[NUMA_ZONELIST_ORDER_LEN];
 	int ret;
@@ -4952,7 +4952,7 @@ static void build_zonelists_in_zone_order(pg_data_t *pgdat, int nr_nodes)
 			z = &NODE_DATA(node)->node_zones[zone_type];
 			if (managed_zone(z)) {
 				zoneref_set_zone(z,
-					&zonelist->_zonerefs[pos++]);
+						 &zonelist->_zonerefs[pos++]);
 				check_highest_zone(zone_type);
 			}
 		}
@@ -5056,8 +5056,8 @@ int local_memory_node(int node)
 	struct zoneref *z;
 
 	z = first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
-				   gfp_zone(GFP_KERNEL),
-				   NULL);
+				 gfp_zone(GFP_KERNEL),
+				 NULL);
 	return z->zone->node;
 }
 #endif
@@ -5248,7 +5248,7 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
  * done. Non-atomic initialization, single-pass.
  */
 void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
-		unsigned long start_pfn, enum memmap_context context)
+				unsigned long start_pfn, enum memmap_context context)
 {
 	struct vmem_altmap *altmap = to_vmem_altmap(__pfn_to_phys(start_pfn));
 	unsigned long end_pfn = start_pfn + size;
@@ -5315,7 +5315,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		}
 #endif
 
-not_early:
+	not_early:
 		/*
 		 * Mark the block movable so that blocks are reserved for
 		 * movable at startup. This will force kernel allocations
@@ -5349,7 +5349,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 }
 
 #ifndef __HAVE_ARCH_MEMMAP_INIT
-#define memmap_init(size, nid, zone, start_pfn) \
+#define memmap_init(size, nid, zone, start_pfn)				\
 	memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY)
 #endif
 
@@ -5417,13 +5417,13 @@ static int zone_batchsize(struct zone *zone)
  * exist).
  */
 static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
-		unsigned long batch)
+			   unsigned long batch)
 {
-       /* start with a fail safe value for batch */
+	/* start with a fail safe value for batch */
 	pcp->batch = 1;
 	smp_wmb();
 
-       /* Update high, then batch, in order */
+	/* Update high, then batch, in order */
 	pcp->high = high;
 	smp_wmb();
 
@@ -5460,7 +5460,7 @@ static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
  * to the value high for the pageset p.
  */
 static void pageset_set_high(struct per_cpu_pageset *p,
-				unsigned long high)
+			     unsigned long high)
 {
 	unsigned long batch = max(1UL, high / 4);
 	if ((high / 4) > (PAGE_SHIFT * 8))
@@ -5474,8 +5474,8 @@ static void pageset_set_high_and_batch(struct zone *zone,
 {
 	if (percpu_pagelist_fraction)
 		pageset_set_high(pcp,
-			(zone->managed_pages /
-				percpu_pagelist_fraction));
+				 (zone->managed_pages /
+				  percpu_pagelist_fraction));
 	else
 		pageset_set_batch(pcp, zone_batchsize(zone));
 }
@@ -5510,7 +5510,7 @@ void __init setup_per_cpu_pageset(void)
 
 	for_each_online_pgdat(pgdat)
 		pgdat->per_cpu_nodestats =
-			alloc_percpu(struct per_cpu_nodestat);
+		alloc_percpu(struct per_cpu_nodestat);
 }
 
 static __meminit void zone_pcp_init(struct zone *zone)
@@ -5538,10 +5538,10 @@ int __meminit init_currently_empty_zone(struct zone *zone,
 	zone->zone_start_pfn = zone_start_pfn;
 
 	mminit_dprintk(MMINIT_TRACE, "memmap_init",
-			"Initialising map node %d zone %lu pfns %lu -> %lu\n",
-			pgdat->node_id,
-			(unsigned long)zone_idx(zone),
-			zone_start_pfn, (zone_start_pfn + size));
+		       "Initialising map node %d zone %lu pfns %lu -> %lu\n",
+		       pgdat->node_id,
+		       (unsigned long)zone_idx(zone),
+		       zone_start_pfn, (zone_start_pfn + size));
 
 	zone_init_free_lists(zone);
 	zone->initialized = 1;
@@ -5556,7 +5556,7 @@ int __meminit init_currently_empty_zone(struct zone *zone,
  * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
  */
 int __meminit __early_pfn_to_nid(unsigned long pfn,
-					struct mminit_pfnnid_cache *state)
+				 struct mminit_pfnnid_cache *state)
 {
 	unsigned long start_pfn, end_pfn;
 	int nid;
@@ -5595,8 +5595,8 @@ void __init free_bootmem_with_active_regions(int nid, unsigned long max_low_pfn)
 
 		if (start_pfn < end_pfn)
 			memblock_free_early_nid(PFN_PHYS(start_pfn),
-					(end_pfn - start_pfn) << PAGE_SHIFT,
-					this_nid);
+						(end_pfn - start_pfn) << PAGE_SHIFT,
+						this_nid);
 	}
 }
 
@@ -5628,7 +5628,7 @@ void __init sparse_memory_present_with_active_regions(int nid)
  * PFNs will be 0.
  */
 void __meminit get_pfn_range_for_nid(unsigned int nid,
-			unsigned long *start_pfn, unsigned long *end_pfn)
+				     unsigned long *start_pfn, unsigned long *end_pfn)
 {
 	unsigned long this_start_pfn, this_end_pfn;
 	int i;
@@ -5658,7 +5658,7 @@ static void __init find_usable_zone_for_movable(void)
 			continue;
 
 		if (arch_zone_highest_possible_pfn[zone_index] >
-				arch_zone_lowest_possible_pfn[zone_index])
+		    arch_zone_lowest_possible_pfn[zone_index])
 			break;
 	}
 
@@ -5677,11 +5677,11 @@ static void __init find_usable_zone_for_movable(void)
  * zones within a node are in order of monotonic increases memory addresses
  */
 static void __meminit adjust_zone_range_for_zone_movable(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn)
+							 unsigned long zone_type,
+							 unsigned long node_start_pfn,
+							 unsigned long node_end_pfn,
+							 unsigned long *zone_start_pfn,
+							 unsigned long *zone_end_pfn)
 {
 	/* Only adjust if ZONE_MOVABLE is on this node */
 	if (zone_movable_pfn[nid]) {
@@ -5689,15 +5689,15 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
 		if (zone_type == ZONE_MOVABLE) {
 			*zone_start_pfn = zone_movable_pfn[nid];
 			*zone_end_pfn = min(node_end_pfn,
-				arch_zone_highest_possible_pfn[movable_zone]);
+					    arch_zone_highest_possible_pfn[movable_zone]);
 
-		/* Adjust for ZONE_MOVABLE starting within this range */
+			/* Adjust for ZONE_MOVABLE starting within this range */
 		} else if (!mirrored_kernelcore &&
-			*zone_start_pfn < zone_movable_pfn[nid] &&
-			*zone_end_pfn > zone_movable_pfn[nid]) {
+			   *zone_start_pfn < zone_movable_pfn[nid] &&
+			   *zone_end_pfn > zone_movable_pfn[nid]) {
 			*zone_end_pfn = zone_movable_pfn[nid];
 
-		/* Check if this whole range is within ZONE_MOVABLE */
+			/* Check if this whole range is within ZONE_MOVABLE */
 		} else if (*zone_start_pfn >= zone_movable_pfn[nid])
 			*zone_start_pfn = *zone_end_pfn;
 	}
@@ -5708,12 +5708,12 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
  * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
  */
 static unsigned long __meminit zone_spanned_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn,
-					unsigned long *ignored)
+							  unsigned long zone_type,
+							  unsigned long node_start_pfn,
+							  unsigned long node_end_pfn,
+							  unsigned long *zone_start_pfn,
+							  unsigned long *zone_end_pfn,
+							  unsigned long *ignored)
 {
 	/* When hotadd a new node from cpu_up(), the node should be empty */
 	if (!node_start_pfn && !node_end_pfn)
@@ -5723,8 +5723,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
 	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
 	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
 	adjust_zone_range_for_zone_movable(nid, zone_type,
-				node_start_pfn, node_end_pfn,
-				zone_start_pfn, zone_end_pfn);
+					   node_start_pfn, node_end_pfn,
+					   zone_start_pfn, zone_end_pfn);
 
 	/* Check that this node has pages within the zone's required range */
 	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
@@ -5743,8 +5743,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
  * then all holes in the requested range will be accounted for.
  */
 unsigned long __meminit __absent_pages_in_range(int nid,
-				unsigned long range_start_pfn,
-				unsigned long range_end_pfn)
+						unsigned long range_start_pfn,
+						unsigned long range_end_pfn)
 {
 	unsigned long nr_absent = range_end_pfn - range_start_pfn;
 	unsigned long start_pfn, end_pfn;
@@ -5766,17 +5766,17 @@ unsigned long __meminit __absent_pages_in_range(int nid,
  * It returns the number of pages frames in memory holes within a range.
  */
 unsigned long __init absent_pages_in_range(unsigned long start_pfn,
-							unsigned long end_pfn)
+					   unsigned long end_pfn)
 {
 	return __absent_pages_in_range(MAX_NUMNODES, start_pfn, end_pfn);
 }
 
 /* Return the number of page frames in holes in a zone on a node */
 static unsigned long __meminit zone_absent_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *ignored)
+							 unsigned long zone_type,
+							 unsigned long node_start_pfn,
+							 unsigned long node_end_pfn,
+							 unsigned long *ignored)
 {
 	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
 	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
@@ -5791,8 +5791,8 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 	zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
 
 	adjust_zone_range_for_zone_movable(nid, zone_type,
-			node_start_pfn, node_end_pfn,
-			&zone_start_pfn, &zone_end_pfn);
+					   node_start_pfn, node_end_pfn,
+					   &zone_start_pfn, &zone_end_pfn);
 
 	/* If this node has no page within this zone, return 0. */
 	if (zone_start_pfn == zone_end_pfn)
@@ -5830,12 +5830,12 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 
 #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
-					unsigned long zone_type,
-					unsigned long node_start_pfn,
-					unsigned long node_end_pfn,
-					unsigned long *zone_start_pfn,
-					unsigned long *zone_end_pfn,
-					unsigned long *zones_size)
+								 unsigned long zone_type,
+								 unsigned long node_start_pfn,
+								 unsigned long node_end_pfn,
+								 unsigned long *zone_start_pfn,
+								 unsigned long *zone_end_pfn,
+								 unsigned long *zones_size)
 {
 	unsigned int zone;
 
@@ -5849,10 +5849,10 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
 }
 
 static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
-						unsigned long zone_type,
-						unsigned long node_start_pfn,
-						unsigned long node_end_pfn,
-						unsigned long *zholes_size)
+								unsigned long zone_type,
+								unsigned long node_start_pfn,
+								unsigned long node_end_pfn,
+								unsigned long *zholes_size)
 {
 	if (!zholes_size)
 		return 0;
@@ -5883,8 +5883,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
 						  &zone_end_pfn,
 						  zones_size);
 		real_size = size - zone_absent_pages_in_node(pgdat->node_id, i,
-						  node_start_pfn, node_end_pfn,
-						  zholes_size);
+							     node_start_pfn, node_end_pfn,
+							     zholes_size);
 		if (size)
 			zone->zone_start_pfn = zone_start_pfn;
 		else
@@ -6143,7 +6143,7 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
 }
 
 void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
-		unsigned long node_start_pfn, unsigned long *zholes_size)
+				      unsigned long node_start_pfn, unsigned long *zholes_size)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long start_pfn = 0;
@@ -6428,12 +6428,12 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
 				kernel_pages = min(end_pfn, usable_startpfn)
-								- start_pfn;
+					- start_pfn;
 
 				kernelcore_remaining -= min(kernel_pages,
-							kernelcore_remaining);
+							    kernelcore_remaining);
 				required_kernelcore -= min(kernel_pages,
-							required_kernelcore);
+							   required_kernelcore);
 
 				/* Continue if range is now fully accounted */
 				if (end_pfn <= usable_startpfn) {
@@ -6466,7 +6466,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			 * satisfied
 			 */
 			required_kernelcore -= min(required_kernelcore,
-								size_pages);
+						   size_pages);
 			kernelcore_remaining -= size_pages;
 			if (!kernelcore_remaining)
 				break;
@@ -6534,9 +6534,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 
 	/* Record where the zone boundaries are */
 	memset(arch_zone_lowest_possible_pfn, 0,
-				sizeof(arch_zone_lowest_possible_pfn));
+	       sizeof(arch_zone_lowest_possible_pfn));
 	memset(arch_zone_highest_possible_pfn, 0,
-				sizeof(arch_zone_highest_possible_pfn));
+	       sizeof(arch_zone_highest_possible_pfn));
 
 	start_pfn = find_min_pfn_with_active_regions();
 
@@ -6562,14 +6562,14 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 			continue;
 		pr_info("  %-8s ", zone_names[i]);
 		if (arch_zone_lowest_possible_pfn[i] ==
-				arch_zone_highest_possible_pfn[i])
+		    arch_zone_highest_possible_pfn[i])
 			pr_cont("empty\n");
 		else
 			pr_cont("[mem %#018Lx-%#018Lx]\n",
 				(u64)arch_zone_lowest_possible_pfn[i]
-					<< PAGE_SHIFT,
+				<< PAGE_SHIFT,
 				((u64)arch_zone_highest_possible_pfn[i]
-					<< PAGE_SHIFT) - 1);
+				 << PAGE_SHIFT) - 1);
 	}
 
 	/* Print out the PFNs ZONE_MOVABLE begins at in each node */
@@ -6577,7 +6577,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	for (i = 0; i < MAX_NUMNODES; i++) {
 		if (zone_movable_pfn[i])
 			pr_info("  Node %d: %#018Lx\n", i,
-			       (u64)zone_movable_pfn[i] << PAGE_SHIFT);
+				(u64)zone_movable_pfn[i] << PAGE_SHIFT);
 	}
 
 	/* Print out the early node map */
@@ -6593,7 +6593,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
 		free_area_init_node(nid, NULL,
-				find_min_pfn_for_node(nid), NULL);
+				    find_min_pfn_for_node(nid), NULL);
 
 		/* Any memory on that node */
 		if (pgdat->node_present_pages)
@@ -6711,14 +6711,14 @@ void __init mem_init_print_info(const char *str)
 	 *    please refer to arch/tile/kernel/vmlinux.lds.S.
 	 * 3) .rodata.* may be embedded into .text or .data sections.
 	 */
-#define adj_init_size(start, end, size, pos, adj) \
-	do { \
-		if (start <= pos && pos < end && size > adj) \
-			size -= adj; \
+#define adj_init_size(start, end, size, pos, adj)		\
+	do {							\
+		if (start <= pos && pos < end && size > adj)	\
+			size -= adj;				\
 	} while (0)
 
 	adj_init_size(__init_begin, __init_end, init_data_size,
-		     _sinittext, init_code_size);
+		      _sinittext, init_code_size);
 	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
 	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
 	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
@@ -6762,7 +6762,7 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
 void __init free_area_init(unsigned long *zones_size)
 {
 	free_area_init_node(0, zones_size,
-			__pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
+			    __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
 }
 
 static int page_alloc_cpu_dead(unsigned int cpu)
@@ -6992,7 +6992,7 @@ int __meminit init_per_zone_wmark_min(void)
 			min_free_kbytes = 65536;
 	} else {
 		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
-				new_min_free_kbytes, user_min_free_kbytes);
+			new_min_free_kbytes, user_min_free_kbytes);
 	}
 	setup_per_zone_wmarks();
 	refresh_zone_stat_thresholds();
@@ -7013,7 +7013,7 @@ core_initcall(init_per_zone_wmark_min)
  *	changes.
  */
 int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+				   void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7029,7 +7029,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 }
 
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					  void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7054,12 +7054,12 @@ static void setup_min_unmapped_ratio(void)
 
 	for_each_zone(zone)
 		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
-				sysctl_min_unmapped_ratio) / 100;
+							 sysctl_min_unmapped_ratio) / 100;
 }
 
 
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					     void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7082,11 +7082,11 @@ static void setup_min_slab_ratio(void)
 
 	for_each_zone(zone)
 		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
-				sysctl_min_slab_ratio) / 100;
+						     sysctl_min_slab_ratio) / 100;
 }
 
 int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					 void __user *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
 
@@ -7110,7 +7110,7 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
  * if in function of the boot time zone sizes.
  */
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec_minmax(table, write, buffer, length, ppos);
 	setup_per_zone_lowmem_reserve();
@@ -7123,7 +7123,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
  * pagelist can have before it gets flushed back to buddy allocator.
  */
 int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
-	void __user *buffer, size_t *length, loff_t *ppos)
+					    void __user *buffer, size_t *length, loff_t *ppos)
 {
 	struct zone *zone;
 	int old_percpu_pagelist_fraction;
@@ -7153,7 +7153,7 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
 
 		for_each_possible_cpu(cpu)
 			pageset_set_high_and_batch(zone,
-					per_cpu_ptr(zone->pageset, cpu));
+						   per_cpu_ptr(zone->pageset, cpu));
 	}
 out:
 	mutex_unlock(&pcp_batch_high_lock);
@@ -7461,7 +7461,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		}
 
 		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
-							&cc->migratepages);
+							     &cc->migratepages);
 		cc->nr_migratepages -= nr_reclaimed;
 
 		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
@@ -7645,7 +7645,7 @@ void __meminit zone_pcp_update(struct zone *zone)
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_possible_cpu(cpu)
 		pageset_set_high_and_batch(zone,
-				per_cpu_ptr(zone->pageset, cpu));
+					   per_cpu_ptr(zone->pageset, cpu));
 	mutex_unlock(&pcp_batch_high_lock);
 }
 #endif
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 03/15] mm: page_alloc: fix brace positions
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Remove blank lines that immediately follow an open brace.
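
For illustration only (a made-up example; zero_all() is not a function
in this file), the change is of this form:

  static void zero_all(int *v, int n)
  {

          int i;

          for (i = 0; i < n; i++)
                  v[i] = 0;
  }

becomes:

  static void zero_all(int *v, int n)
  {
          int i;

          for (i = 0; i < n; i++)
                  v[i] = 0;
  }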

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 79fc996892c6..1029a1dd59d9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1375,7 +1375,6 @@ void set_zone_contiguous(struct zone *zone)
 	for (; block_start_pfn < zone_end_pfn(zone);
 	     block_start_pfn = block_end_pfn,
 		     block_end_pfn += pageblock_nr_pages) {
-
 		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
 
 		if (!__pageblock_pfn_to_page(block_start_pfn,
@@ -4864,7 +4863,6 @@ static int find_next_best_node(int node, nodemask_t *used_node_mask)
 	}
 
 	for_each_node_state(n, N_MEMORY) {
-
 		/* Don't want a node to appear more than once */
 		if (node_isset(n, *used_node_mask))
 			continue;
@@ -6437,7 +6435,6 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 
 				/* Continue if range is now fully accounted */
 				if (end_pfn <= usable_startpfn) {
-
 					/*
 					 * Push zone_movable_pfn to the end so
 					 * that if we have to rebalance
@@ -6767,7 +6764,6 @@ void __init free_area_init(unsigned long *zones_size)
 
 static int page_alloc_cpu_dead(unsigned int cpu)
 {
-
 	lru_add_drain_cpu(cpu);
 	drain_pages(cpu);
 
@@ -6811,7 +6807,6 @@ static void calculate_totalreserve_pages(void)
 	enum zone_type i, j;
 
 	for_each_online_pgdat(pgdat) {
-
 		pgdat->totalreserve_pages = 0;
 
 		for (i = 0; i < MAX_NR_ZONES; i++) {
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 04/15] mm: page_alloc: fix blank lines
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Add missing blank lines after local variable declarations and after
function definitions; remove extra consecutive blank lines.
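
For illustration only (a made-up example; struct item, get_item() and
process_item() are not from this file), the declaration style becomes:

  static void process_all(int count)
  {
          int i;

          for (i = 0; i < count; i++) {
                  struct item *it = get_item(i);

                  process_item(it);
          }
  }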

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1029a1dd59d9..ec9832d15d07 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -496,6 +496,7 @@ static int page_is_consistent(struct zone *zone, struct page *page)
 
 	return 1;
 }
+
 /*
  * Temporary debugging check for pages not lying within a given zone.
  */
@@ -589,6 +590,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
+
 		set_page_count(p, 0);
 		p->mapping = TAIL_MAPPING;
 		set_compound_head(p, page);
@@ -609,6 +611,7 @@ static int __init early_debug_pagealloc(char *buf)
 		return -EINVAL;
 	return kstrtobool(buf, &_debug_pagealloc_enabled);
 }
+
 early_param("debug_pagealloc", early_debug_pagealloc);
 
 static bool need_debug_guardpage(void)
@@ -651,6 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
 	return 0;
 }
+
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
 static inline bool set_page_guard(struct zone *zone, struct page *page,
@@ -869,6 +873,7 @@ static inline void __free_one_page(struct page *page,
 	 */
 	if ((order < MAX_ORDER - 2) && pfn_valid_within(buddy_pfn)) {
 		struct page *higher_page, *higher_buddy;
+
 		combined_pfn = buddy_pfn & pfn;
 		higher_page = page + (combined_pfn - pfn);
 		buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
@@ -1307,6 +1312,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 {
 	return true;
 }
+
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 						struct mminit_pfnnid_cache *state)
 {
@@ -1314,7 +1320,6 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 }
 #endif
 
-
 void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
 				 unsigned int order)
 {
@@ -1708,6 +1713,7 @@ static bool check_pcp_refill(struct page *page)
 {
 	return check_new_page(page);
 }
+
 static bool check_new_pcp(struct page *page)
 {
 	return false;
@@ -1717,6 +1723,7 @@ static bool check_new_pcp(struct page *page)
 static bool check_new_pages(struct page *page, unsigned int order)
 {
 	int i;
+
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 
@@ -1748,6 +1755,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
+
 		if (poisoned)
 			poisoned &= page_is_poisoned(p);
 	}
@@ -1803,7 +1811,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	return NULL;
 }
 
-
 /*
  * This array describes the order lists are fallen back to when
  * the free lists for the desirable migrate type are depleted
@@ -2266,6 +2273,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	spin_lock_irqsave(&zone->lock, flags);
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype);
+
 		if (unlikely(page == NULL))
 			break;
 
@@ -2470,6 +2478,7 @@ void drain_all_pages(struct zone *zone)
 
 	for_each_cpu(cpu, &cpus_with_pcps) {
 		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);
+
 		INIT_WORK(work, drain_local_pages_wq);
 		queue_work_on(cpu, mm_percpu_wq, work);
 	}
@@ -2566,6 +2575,7 @@ void free_hot_cold_page(struct page *page, bool cold)
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
+
 		free_pcppages_bulk(zone, batch, pcp);
 		pcp->count -= batch;
 	}
@@ -2653,8 +2663,10 @@ int __isolate_free_page(struct page *page, unsigned int order)
 	 */
 	if (order >= pageblock_order - 1) {
 		struct page *endpage = page + (1 << order) - 1;
+
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
+
 			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
 			    && !is_migrate_highatomic(mt))
 				set_pageblock_migratetype(page,
@@ -2662,7 +2674,6 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		}
 	}
 
-
 	return 1UL << order;
 }
 
@@ -4260,6 +4271,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	struct page *p = alloc_pages_node(nid, gfp_mask, order);
+
 	if (!p)
 		return NULL;
 	return make_alloc_exact((unsigned long)page_address(p), order, size);
@@ -4306,6 +4318,7 @@ static unsigned long nr_free_zone_pages(int offset)
 	for_each_zone_zonelist(zone, z, zonelist, offset) {
 		unsigned long size = zone->managed_pages;
 		unsigned long high = high_wmark_pages(zone);
+
 		if (size > high)
 			sum += size - high;
 	}
@@ -4721,7 +4734,6 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 	return nr_zones;
 }
 
-
 /*
  *  zonelist_order:
  *  0 = automatic detection of better ordering.
@@ -4741,7 +4753,6 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 static int current_zonelist_order = ZONELIST_ORDER_DEFAULT;
 static char zonelist_order_name[3][8] = {"Default", "Node", "Zone"};
 
-
 #ifdef CONFIG_NUMA
 /* The value user specified ....changed by config */
 static int user_zonelist_order = ZONELIST_ORDER_DEFAULT;
@@ -4785,6 +4796,7 @@ static __init int setup_numa_zonelist_order(char *s)
 
 	return ret;
 }
+
 early_param("numa_zonelist_order", setup_numa_zonelist_order);
 
 /*
@@ -4831,7 +4843,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 	return ret;
 }
 
-
 #define MAX_NODE_LOAD (nr_online_nodes)
 static int node_load[MAX_NUMNODES];
 
@@ -4894,7 +4905,6 @@ static int find_next_best_node(int node, nodemask_t *used_node_mask)
 	return best_node;
 }
 
-
 /*
  * Build zonelists ordered by node and zones within node.
  * This results in maximum locality--normal zone overflows into local
@@ -5340,6 +5350,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 static void __meminit zone_init_free_lists(struct zone *zone)
 {
 	unsigned int order, t;
+
 	for_each_migratetype_order(order, t) {
 		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
 		zone->free_area[order].nr_free = 0;
@@ -5461,6 +5472,7 @@ static void pageset_set_high(struct per_cpu_pageset *p,
 			     unsigned long high)
 {
 	unsigned long batch = max(1UL, high / 4);
+
 	if ((high / 4) > (PAGE_SHIFT * 8))
 		batch = PAGE_SHIFT * 8;
 
@@ -5489,6 +5501,7 @@ static void __meminit zone_pageset_init(struct zone *zone, int cpu)
 static void __meminit setup_zone_pageset(struct zone *zone)
 {
 	int cpu;
+
 	zone->pageset = alloc_percpu(struct per_cpu_pageset);
 	for_each_possible_cpu(cpu)
 		zone_pageset_init(zone, cpu);
@@ -5651,6 +5664,7 @@ void __meminit get_pfn_range_for_nid(unsigned int nid,
 static void __init find_usable_zone_for_movable(void)
 {
 	int zone_index;
+
 	for (zone_index = MAX_NR_ZONES - 1; zone_index >= 0; zone_index--) {
 		if (zone_index == ZONE_MOVABLE)
 			continue;
@@ -5927,6 +5941,7 @@ static void __init setup_usemap(struct pglist_data *pgdat,
 				unsigned long zonesize)
 {
 	unsigned long usemapsize = usemap_size(zone_start_pfn, zonesize);
+
 	zone->pageblock_flags = NULL;
 	if (usemapsize)
 		zone->pageblock_flags =
@@ -6425,6 +6440,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			/* Account for what is only usable for kernelcore */
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
+
 				kernel_pages = min(end_pfn, usable_startpfn)
 					- start_pfn;
 
@@ -6501,6 +6517,7 @@ static void check_for_memory(pg_data_t *pgdat, int nid)
 
 	for (zone_type = 0; zone_type <= ZONE_MOVABLE - 1; zone_type++) {
 		struct zone *zone = &pgdat->node_zones[zone_type];
+
 		if (populated_zone(zone)) {
 			node_set_state(nid, N_HIGH_MEMORY);
 			if (N_NORMAL_MEMORY != N_HIGH_MEMORY &&
@@ -6589,6 +6606,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	setup_nr_node_ids();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
+
 		free_area_init_node(nid, NULL,
 				    find_min_pfn_for_node(nid), NULL);
 
@@ -6602,6 +6620,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 static int __init cmdline_parse_core(char *p, unsigned long *core)
 {
 	unsigned long long coremem;
+
 	if (!p)
 		return -EINVAL;
 
@@ -6687,7 +6706,6 @@ void free_highmem_page(struct page *page)
 }
 #endif
 
-
 void __init mem_init_print_info(const char *str)
 {
 	unsigned long physpages, codesize, datasize, rosize, bss_size;
@@ -7052,7 +7070,6 @@ static void setup_min_unmapped_ratio(void)
 							 sysctl_min_unmapped_ratio) / 100;
 }
 
-
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
 					     void __user *buffer, size_t *length, loff_t *ppos)
 {
@@ -7637,6 +7654,7 @@ void free_contig_range(unsigned long pfn, unsigned nr_pages)
 void __meminit zone_pcp_update(struct zone *zone)
 {
 	unsigned cpu;
+
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_possible_cpu(cpu)
 		pageset_set_high_and_batch(zone,
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 04/15] mm: page_alloc: fix blank lines
@ 2017-03-16  2:00   ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Add and remove a few blank lines.

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1029a1dd59d9..ec9832d15d07 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -496,6 +496,7 @@ static int page_is_consistent(struct zone *zone, struct page *page)
 
 	return 1;
 }
+
 /*
  * Temporary debugging check for pages not lying within a given zone.
  */
@@ -589,6 +590,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
+
 		set_page_count(p, 0);
 		p->mapping = TAIL_MAPPING;
 		set_compound_head(p, page);
@@ -609,6 +611,7 @@ static int __init early_debug_pagealloc(char *buf)
 		return -EINVAL;
 	return kstrtobool(buf, &_debug_pagealloc_enabled);
 }
+
 early_param("debug_pagealloc", early_debug_pagealloc);
 
 static bool need_debug_guardpage(void)
@@ -651,6 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
 	return 0;
 }
+
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
 static inline bool set_page_guard(struct zone *zone, struct page *page,
@@ -869,6 +873,7 @@ static inline void __free_one_page(struct page *page,
 	 */
 	if ((order < MAX_ORDER - 2) && pfn_valid_within(buddy_pfn)) {
 		struct page *higher_page, *higher_buddy;
+
 		combined_pfn = buddy_pfn & pfn;
 		higher_page = page + (combined_pfn - pfn);
 		buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
@@ -1307,6 +1312,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 {
 	return true;
 }
+
 static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 						struct mminit_pfnnid_cache *state)
 {
@@ -1314,7 +1320,6 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
 }
 #endif
 
-
 void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
 				 unsigned int order)
 {
@@ -1708,6 +1713,7 @@ static bool check_pcp_refill(struct page *page)
 {
 	return check_new_page(page);
 }
+
 static bool check_new_pcp(struct page *page)
 {
 	return false;
@@ -1717,6 +1723,7 @@ static bool check_new_pcp(struct page *page)
 static bool check_new_pages(struct page *page, unsigned int order)
 {
 	int i;
+
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 
@@ -1748,6 +1755,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
+
 		if (poisoned)
 			poisoned &= page_is_poisoned(p);
 	}
@@ -1803,7 +1811,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	return NULL;
 }
 
-
 /*
  * This array describes the order lists are fallen back to when
  * the free lists for the desirable migrate type are depleted
@@ -2266,6 +2273,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	spin_lock_irqsave(&zone->lock, flags);
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype);
+
 		if (unlikely(page == NULL))
 			break;
 
@@ -2470,6 +2478,7 @@ void drain_all_pages(struct zone *zone)
 
 	for_each_cpu(cpu, &cpus_with_pcps) {
 		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);
+
 		INIT_WORK(work, drain_local_pages_wq);
 		queue_work_on(cpu, mm_percpu_wq, work);
 	}
@@ -2566,6 +2575,7 @@ void free_hot_cold_page(struct page *page, bool cold)
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
+
 		free_pcppages_bulk(zone, batch, pcp);
 		pcp->count -= batch;
 	}
@@ -2653,8 +2663,10 @@ int __isolate_free_page(struct page *page, unsigned int order)
 	 */
 	if (order >= pageblock_order - 1) {
 		struct page *endpage = page + (1 << order) - 1;
+
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
+
 			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
 			    && !is_migrate_highatomic(mt))
 				set_pageblock_migratetype(page,
@@ -2662,7 +2674,6 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		}
 	}
 
-
 	return 1UL << order;
 }
 
@@ -4260,6 +4271,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
 {
 	unsigned int order = get_order(size);
 	struct page *p = alloc_pages_node(nid, gfp_mask, order);
+
 	if (!p)
 		return NULL;
 	return make_alloc_exact((unsigned long)page_address(p), order, size);
@@ -4306,6 +4318,7 @@ static unsigned long nr_free_zone_pages(int offset)
 	for_each_zone_zonelist(zone, z, zonelist, offset) {
 		unsigned long size = zone->managed_pages;
 		unsigned long high = high_wmark_pages(zone);
+
 		if (size > high)
 			sum += size - high;
 	}
@@ -4721,7 +4734,6 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 	return nr_zones;
 }
 
-
 /*
  *  zonelist_order:
  *  0 = automatic detection of better ordering.
@@ -4741,7 +4753,6 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
 static int current_zonelist_order = ZONELIST_ORDER_DEFAULT;
 static char zonelist_order_name[3][8] = {"Default", "Node", "Zone"};
 
-
 #ifdef CONFIG_NUMA
 /* The value user specified ....changed by config */
 static int user_zonelist_order = ZONELIST_ORDER_DEFAULT;
@@ -4785,6 +4796,7 @@ static __init int setup_numa_zonelist_order(char *s)
 
 	return ret;
 }
+
 early_param("numa_zonelist_order", setup_numa_zonelist_order);
 
 /*
@@ -4831,7 +4843,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 	return ret;
 }
 
-
 #define MAX_NODE_LOAD (nr_online_nodes)
 static int node_load[MAX_NUMNODES];
 
@@ -4894,7 +4905,6 @@ static int find_next_best_node(int node, nodemask_t *used_node_mask)
 	return best_node;
 }
 
-
 /*
  * Build zonelists ordered by node and zones within node.
  * This results in maximum locality--normal zone overflows into local
@@ -5340,6 +5350,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 static void __meminit zone_init_free_lists(struct zone *zone)
 {
 	unsigned int order, t;
+
 	for_each_migratetype_order(order, t) {
 		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
 		zone->free_area[order].nr_free = 0;
@@ -5461,6 +5472,7 @@ static void pageset_set_high(struct per_cpu_pageset *p,
 			     unsigned long high)
 {
 	unsigned long batch = max(1UL, high / 4);
+
 	if ((high / 4) > (PAGE_SHIFT * 8))
 		batch = PAGE_SHIFT * 8;
 
@@ -5489,6 +5501,7 @@ static void __meminit zone_pageset_init(struct zone *zone, int cpu)
 static void __meminit setup_zone_pageset(struct zone *zone)
 {
 	int cpu;
+
 	zone->pageset = alloc_percpu(struct per_cpu_pageset);
 	for_each_possible_cpu(cpu)
 		zone_pageset_init(zone, cpu);
@@ -5651,6 +5664,7 @@ void __meminit get_pfn_range_for_nid(unsigned int nid,
 static void __init find_usable_zone_for_movable(void)
 {
 	int zone_index;
+
 	for (zone_index = MAX_NR_ZONES - 1; zone_index >= 0; zone_index--) {
 		if (zone_index == ZONE_MOVABLE)
 			continue;
@@ -5927,6 +5941,7 @@ static void __init setup_usemap(struct pglist_data *pgdat,
 				unsigned long zonesize)
 {
 	unsigned long usemapsize = usemap_size(zone_start_pfn, zonesize);
+
 	zone->pageblock_flags = NULL;
 	if (usemapsize)
 		zone->pageblock_flags =
@@ -6425,6 +6440,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			/* Account for what is only usable for kernelcore */
 			if (start_pfn < usable_startpfn) {
 				unsigned long kernel_pages;
+
 				kernel_pages = min(end_pfn, usable_startpfn)
 					- start_pfn;
 
@@ -6501,6 +6517,7 @@ static void check_for_memory(pg_data_t *pgdat, int nid)
 
 	for (zone_type = 0; zone_type <= ZONE_MOVABLE - 1; zone_type++) {
 		struct zone *zone = &pgdat->node_zones[zone_type];
+
 		if (populated_zone(zone)) {
 			node_set_state(nid, N_HIGH_MEMORY);
 			if (N_NORMAL_MEMORY != N_HIGH_MEMORY &&
@@ -6589,6 +6606,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	setup_nr_node_ids();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
+
 		free_area_init_node(nid, NULL,
 				    find_min_pfn_for_node(nid), NULL);
 
@@ -6602,6 +6620,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 static int __init cmdline_parse_core(char *p, unsigned long *core)
 {
 	unsigned long long coremem;
+
 	if (!p)
 		return -EINVAL;
 
@@ -6687,7 +6706,6 @@ void free_highmem_page(struct page *page)
 }
 #endif
 
-
 void __init mem_init_print_info(const char *str)
 {
 	unsigned long physpages, codesize, datasize, rosize, bss_size;
@@ -7052,7 +7070,6 @@ static void setup_min_unmapped_ratio(void)
 							 sysctl_min_unmapped_ratio) / 100;
 }
 
-
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
 					     void __user *buffer, size_t *length, loff_t *ppos)
 {
@@ -7637,6 +7654,7 @@ void free_contig_range(unsigned long pfn, unsigned nr_pages)
 void __meminit zone_pcp_update(struct zone *zone)
 {
 	unsigned cpu;
+
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_possible_cpu(cpu)
 		pageset_set_high_and_batch(zone,
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 05/15] mm: page_alloc: Move __meminitdata and __initdata uses
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

It's preferred to have these section attributes after the variable name in the declaration.
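
For example (a made-up variable, not a line from this file), checkpatch
flags the first form as MISPLACED_INIT and prefers the second:

	static unsigned long __initdata foo;	/* attribute before the name */
	static unsigned long foo __initdata;	/* preferred placement */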

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ec9832d15d07..2933a8a11927 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -262,16 +262,16 @@ int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
 int watermark_scale_factor = 10;
 
-static unsigned long __meminitdata nr_kernel_pages;
-static unsigned long __meminitdata nr_all_pages;
-static unsigned long __meminitdata dma_reserve;
+static unsigned long nr_kernel_pages __meminitdata;
+static unsigned long nr_all_pages __meminitdata;
+static unsigned long dma_reserve __meminitdata;
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
-static unsigned long __meminitdata arch_zone_lowest_possible_pfn[MAX_NR_ZONES];
-static unsigned long __meminitdata arch_zone_highest_possible_pfn[MAX_NR_ZONES];
-static unsigned long __initdata required_kernelcore;
-static unsigned long __initdata required_movablecore;
-static unsigned long __meminitdata zone_movable_pfn[MAX_NUMNODES];
+static unsigned long arch_zone_lowest_possible_pfn[MAX_NR_ZONES] __meminitdata;
+static unsigned long arch_zone_highest_possible_pfn[MAX_NR_ZONES] __meminitdata;
+static unsigned long required_kernelcore __initdata;
+static unsigned long required_movablecore __initdata;
+static unsigned long zone_movable_pfn[MAX_NUMNODES] __meminitdata;
 static bool mirrored_kernelcore;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 06/15] mm: page_alloc: Use unsigned int instead of unsigned
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

The spelled-out "unsigned int" is what's generally desired over bare "unsigned".
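
Both spellings declare the same type; the change is purely a matter of
kernel style.  For instance (hypothetical variable, not taken from the
diff below):

	unsigned width;		/* bare "unsigned" */
	unsigned int width;	/* preferred spelling */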

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2933a8a11927..dca8904bbe2e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -467,7 +467,7 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 {
 	int ret = 0;
-	unsigned seq;
+	unsigned int seq;
 	unsigned long pfn = page_to_pfn(page);
 	unsigned long sp, start_pfn;
 
@@ -1582,7 +1582,7 @@ void __init page_alloc_init_late(void)
 /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
 void __init init_cma_reserved_pageblock(struct page *page)
 {
-	unsigned i = pageblock_nr_pages;
+	unsigned int i = pageblock_nr_pages;
 	struct page *p = page;
 
 	do {
@@ -3588,7 +3588,7 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
  * Returns true if a retry is viable or false to enter the oom path.
  */
 static inline bool
-should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+should_reclaim_retry(gfp_t gfp_mask, unsigned int order,
 		     struct alloc_context *ac, int alloc_flags,
 		     bool did_some_progress, int *no_progress_loops)
 {
@@ -7508,7 +7508,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
  * need to be freed with free_contig_range().
  */
 int alloc_contig_range(unsigned long start, unsigned long end,
-		       unsigned migratetype, gfp_t gfp_mask)
+		       unsigned int migratetype, gfp_t gfp_mask)
 {
 	unsigned long outer_start, outer_end;
 	unsigned int order;
@@ -7632,7 +7632,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	return ret;
 }
 
-void free_contig_range(unsigned long pfn, unsigned nr_pages)
+void free_contig_range(unsigned long pfn, unsigned int nr_pages)
 {
 	unsigned int count = 0;
 
@@ -7653,7 +7653,7 @@ void free_contig_range(unsigned long pfn, unsigned nr_pages)
  */
 void __meminit zone_pcp_update(struct zone *zone)
 {
-	unsigned cpu;
+	unsigned int cpu;
 
 	mutex_lock(&pcp_batch_high_lock);
 	for_each_possible_cpu(cpu)
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 07/15] mm: page_alloc: Move labels to column 1
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Where the kernel style generally has them.
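
That is, goto labels sit in column 1 rather than being indented with the
surrounding block.  Illustrative fragment (hypothetical label and code,
not from this file):

	if (!page)
		goto out;
	/* ... */
out:
	return ret;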

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dca8904bbe2e..60ec74894a56 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1528,7 +1528,7 @@ static int __init deferred_init_memmap(void *data)
 
 			/* Where possible, batch up pages for a single free */
 			continue;
-		free_range:
+free_range:
 			/* Free the current block of pages to allocator */
 			nr_pages += nr_to_free;
 			deferred_free_range(free_base_page, free_base_pfn,
@@ -3102,7 +3102,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			}
 		}
 
-	try_this_zone:
+try_this_zone:
 		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
 			       gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
@@ -4160,7 +4160,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 	int offset;
 
 	if (unlikely(!nc->va)) {
-	refill:
+refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
@@ -5323,7 +5323,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		}
 #endif
 
-	not_early:
+not_early:
 		/*
 		 * Mark the block movable so that blocks are reserved for
 		 * movable at startup. This will force kernel allocations
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 08/15] mm: page_alloc: Fix typo acording -> according & the the -> to the
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Just a typo fix...

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 60ec74894a56..e417d52b9de9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5419,7 +5419,7 @@ static int zone_batchsize(struct zone *zone)
  * locking.
  *
  * Any new users of pcp->batch and pcp->high should ensure they can cope with
- * those fields changing asynchronously (acording the the above rule).
+ * those fields changing asynchronously (according to the above rule).
  *
  * mutex_is_locked(&pcp_batch_high_lock) required when calling this function
  * outside of boot time (or some other assurance that no concurrent updaters
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 09/15] mm: page_alloc: Use the common commenting style
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Just neatening
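
For multi-line comments, the preferred form in the kernel coding-style
documentation looks like this:

	/*
	 * This is the preferred style for multi-line comments.
	 * Each text line starts with an aligned asterisk, and the
	 * closing marker sits on its own line.
	 */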

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e417d52b9de9..3e1d377201b8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5222,8 +5222,10 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
 		if (zone)
 			setup_zone_pageset(zone);
 #endif
-		/* we have to stop all cpus to guarantee there is no user
-		   of zonelist */
+		/*
+		 * we have to stop all cpus to guarantee
+		 * there is no user of zonelist
+		 */
 		stop_machine(__build_all_zonelists, pgdat, NULL);
 		/* cpuset refresh routine should be here */
 	}
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 10/15] mm: page_alloc: 80 column neatening
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Wrap some long lines to make the code easier to read.
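
Mostly this means breaking long argument lists at the 80-column limit
and aligning the continuation with the open parenthesis.  A hypothetical
call (not taken from the diff below):

	/* before: runs well past 80 columns */
	ret = do_something(zone, order, gfp_mask, alloc_flags, migratetype, extra_arg);

	/* after: wrapped, continuation aligned to the open parenthesis */
	ret = do_something(zone, order, gfp_mask, alloc_flags, migratetype,
			   extra_arg);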

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 259 ++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 157 insertions(+), 102 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3e1d377201b8..286b01b4c3e7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -383,10 +383,11 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  *
  * Return: pageblock_bits flags
  */
-static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
-							       unsigned long pfn,
-							       unsigned long end_bitidx,
-							       unsigned long mask)
+static __always_inline
+unsigned long __get_pfnblock_flags_mask(struct page *page,
+					unsigned long pfn,
+					unsigned long end_bitidx,
+					unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -409,9 +410,11 @@ unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
 }
 
-static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
+static __always_inline
+int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
 {
-	return __get_pfnblock_flags_mask(page, pfn, PB_migrate_end, MIGRATETYPE_MASK);
+	return __get_pfnblock_flags_mask(page, pfn, PB_migrate_end,
+					 MIGRATETYPE_MASK);
 }
 
 /**
@@ -446,7 +449,8 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 
 	word = READ_ONCE(bitmap[word_bitidx]);
 	for (;;) {
-		old_word = cmpxchg(&bitmap[word_bitidx], word, (word & ~mask) | flags);
+		old_word = cmpxchg(&bitmap[word_bitidx],
+				   word, (word & ~mask) | flags);
 		if (word == old_word)
 			break;
 		word = old_word;
@@ -533,9 +537,8 @@ static void bad_page(struct page *page, const char *reason,
 			goto out;
 		}
 		if (nr_unshown) {
-			pr_alert(
-				"BUG: Bad page state: %lu messages suppressed\n",
-				nr_unshown);
+			pr_alert("BUG: Bad page state: %lu messages suppressed\n",
+				 nr_unshown);
 			nr_unshown = 0;
 		}
 		nr_shown = 0;
@@ -600,8 +603,8 @@ void prep_compound_page(struct page *page, unsigned int order)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 unsigned int _debug_guardpage_minorder;
-bool _debug_pagealloc_enabled __read_mostly
-= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+bool _debug_pagealloc_enabled __read_mostly =
+	IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
 EXPORT_SYMBOL(_debug_pagealloc_enabled);
 bool _debug_guardpage_enabled __read_mostly;
 
@@ -703,9 +706,15 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 #else
 struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype) { return false; }
+				  unsigned int order, int migratetype)
+{
+	return false;
+}
+
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype) {}
+				    unsigned int order, int migratetype)
+{
+}
 #endif
 
 static inline void set_page_order(struct page *page, unsigned int order)
@@ -998,8 +1007,8 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 	return ret;
 }
 
-static __always_inline bool free_pages_prepare(struct page *page,
-					       unsigned int order, bool check_free)
+static __always_inline
+bool free_pages_prepare(struct page *page, unsigned int order, bool check_free)
 {
 	int bad = 0;
 
@@ -1269,7 +1278,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
 }
 
 #if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
-	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
+    defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
 
 static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
 
@@ -1289,8 +1298,9 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 #endif
 
 #ifdef CONFIG_NODES_SPAN_OTHER_NODES
-static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-						struct mminit_pfnnid_cache *state)
+static inline
+bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
+				  struct mminit_pfnnid_cache *state)
 {
 	int nid;
 
@@ -1313,8 +1323,9 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 	return true;
 }
 
-static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-						struct mminit_pfnnid_cache *state)
+static inline
+bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
+				  struct mminit_pfnnid_cache *state)
 {
 	return true;
 }
@@ -1564,7 +1575,8 @@ void __init page_alloc_init_late(void)
 	/* There will be num_node_state(N_MEMORY) threads */
 	atomic_set(&pgdat_init_n_undone, num_node_state(N_MEMORY));
 	for_each_node_state(nid, N_MEMORY) {
-		kthread_run(deferred_init_memmap, NODE_DATA(nid), "pgdatinit%d", nid);
+		kthread_run(deferred_init_memmap, NODE_DATA(nid),
+			    "pgdatinit%d", nid);
 	}
 
 	/* Block until all are initialised */
@@ -1747,8 +1759,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_owner(page, order, gfp_flags);
 }
 
-static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-			  unsigned int alloc_flags)
+static void prep_new_page(struct page *page, unsigned int order,
+			  gfp_t gfp_flags, unsigned int alloc_flags)
 {
 	int i;
 	bool poisoned = true;
@@ -1835,7 +1847,10 @@ static struct page *__rmqueue_cma_fallback(struct zone *zone,
 }
 #else
 static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
-						  unsigned int order) { return NULL; }
+						  unsigned int order)
+{
+	return NULL;
+}
 #endif
 
 /*
@@ -2216,7 +2231,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	     --current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-						     start_migratetype, false, &can_steal);
+						     start_migratetype, false,
+						     &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
@@ -2780,9 +2796,11 @@ struct page *rmqueue(struct zone *preferred_zone,
 	do {
 		page = NULL;
 		if (alloc_flags & ALLOC_HARDER) {
-			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+			page = __rmqueue_smallest(zone, order,
+						  MIGRATE_HIGHATOMIC);
 			if (page)
-				trace_mm_page_alloc_zone_locked(page, order, migratetype);
+				trace_mm_page_alloc_zone_locked(page, order,
+								migratetype);
 		}
 		if (!page)
 			page = __rmqueue(zone, order, migratetype);
@@ -2966,7 +2984,8 @@ bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
-				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
+				       unsigned long mark, int classzone_idx,
+				       unsigned int alloc_flags)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -2984,7 +3003,8 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	 * the caller is !atomic then it'll uselessly search the free
 	 * list. That corner case is then slower but it is harmless.
 	 */
-	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
+	if (!order &&
+	    (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
 		return true;
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
@@ -3081,7 +3101,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 				goto try_this_zone;
 
 			if (node_reclaim_mode == 0 ||
-			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
+			    !zone_allows_reclaim(ac->preferred_zoneref->zone,
+						 zone))
 				continue;
 
 			ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
@@ -3095,7 +3116,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			default:
 				/* did we reclaim enough */
 				if (zone_watermark_ok(zone, order, mark,
-						      ac_classzone_idx(ac), alloc_flags))
+						      ac_classzone_idx(ac),
+						      alloc_flags))
 					goto try_this_zone;
 
 				continue;
@@ -3212,7 +3234,8 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 
 static inline struct page *
 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
-		      const struct alloc_context *ac, unsigned long *did_some_progress)
+		      const struct alloc_context *ac,
+		      unsigned long *did_some_progress)
 {
 	struct oom_control oc = {
 		.zonelist = ac->zonelist,
@@ -3280,7 +3303,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (gfp_mask & __GFP_NOFAIL)
 			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
-							     ALLOC_NO_WATERMARKS, ac);
+							     ALLOC_NO_WATERMARKS,
+							     ac);
 	}
 out:
 	mutex_unlock(&oom_lock);
@@ -3297,8 +3321,10 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 /* Try memory compaction for high-order allocations before reclaim */
 static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
-			     enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
+			     enum compact_priority prio,
+			     enum compact_result *compact_result)
 {
 	struct page *page;
 
@@ -3413,16 +3439,18 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 #else
 static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
-			     enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
+			     enum compact_priority prio,
+			     enum compact_result *compact_result)
 {
 	*compact_result = COMPACT_SKIPPED;
 	return NULL;
 }
 
 static inline bool
-should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
-		     enum compact_result compact_result,
+should_compact_retry(struct alloc_context *ac, unsigned int order,
+		     int alloc_flags, enum compact_result compact_result,
 		     enum compact_priority *compact_priority,
 		     int *compaction_retries)
 {
@@ -3480,7 +3508,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 /* The really slow allocator path where we enter direct reclaim */
 static inline struct page *
 __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
 			     unsigned long *did_some_progress)
 {
 	struct page *page = NULL;
@@ -3522,8 +3551,7 @@ static void wake_all_kswapds(unsigned int order, const struct alloc_context *ac)
 	}
 }
 
-static inline unsigned int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
 {
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
@@ -3635,9 +3663,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned int order,
 		 * reclaimable pages?
 		 */
 		wmark = __zone_watermark_ok(zone, order, min_wmark,
-					    ac_classzone_idx(ac), alloc_flags, available);
+					    ac_classzone_idx(ac), alloc_flags,
+					    available);
 		trace_reclaim_retry_zone(z, order, reclaimable,
-					 available, min_wmark, *no_progress_loops, wmark);
+					 available, min_wmark,
+					 *no_progress_loops, wmark);
 		if (wmark) {
 			/*
 			 * If we didn't make any progress and have a lot of
@@ -3734,7 +3764,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * could end up iterating over non-eligible zones endlessly.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-						     ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx,
+						     ac->nodemask);
 	if (!ac->preferred_zoneref->zone)
 		goto nopage;
 
@@ -3807,10 +3838,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * These allocations are high priority and system rather than user
 	 * orientated.
 	 */
-	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
+	if (!(alloc_flags & ALLOC_CPUSET) ||
+	    (alloc_flags & ALLOC_NO_WATERMARKS)) {
 		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-							     ac->high_zoneidx, ac->nodemask);
+							     ac->high_zoneidx,
+							     ac->nodemask);
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
@@ -3939,7 +3972,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * could deplete whole memory reserves which would just make
 		 * the situation worse
 		 */
-		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+		page = __alloc_pages_cpuset_fallback(gfp_mask, order,
+						     ALLOC_HARDER, ac);
 		if (page)
 			goto got_pg;
 
@@ -3953,10 +3987,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	return page;
 }
 
-static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
-				       struct zonelist *zonelist, nodemask_t *nodemask,
-				       struct alloc_context *ac, gfp_t *alloc_mask,
-				       unsigned int *alloc_flags)
+static inline
+bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
+			 struct zonelist *zonelist, nodemask_t *nodemask,
+			 struct alloc_context *ac, gfp_t *alloc_mask,
+			 unsigned int *alloc_flags)
 {
 	ac->high_zoneidx = gfp_zone(gfp_mask);
 	ac->zonelist = zonelist;
@@ -3997,7 +4032,8 @@ static inline void finalise_ac(gfp_t gfp_mask,
 	 * may get reset for allocations that ignore memory policies.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-						     ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx,
+						     ac->nodemask);
 }
 
 /*
@@ -4013,7 +4049,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	struct alloc_context ac = { };
 
 	gfp_mask &= gfp_allowed_mask;
-	if (!prepare_alloc_pages(gfp_mask, order, zonelist, nodemask, &ac, &alloc_mask, &alloc_flags))
+	if (!prepare_alloc_pages(gfp_mask, order, zonelist, nodemask, &ac,
+				 &alloc_mask, &alloc_flags))
 		return NULL;
 
 	finalise_ac(gfp_mask, order, &ac);
@@ -4448,7 +4485,8 @@ void si_meminfo_node(struct sysinfo *val, int nid)
  * Determine whether the node should be displayed or not, depending on whether
  * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
  */
-static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
+static bool show_mem_node_skip(unsigned int flags, int nid,
+			       nodemask_t *nodemask)
 {
 	if (!(flags & SHOW_MEM_FILTER_NODES))
 		return false;
@@ -5187,7 +5225,8 @@ static int __build_all_zonelists(void *data)
 		 * node/memory hotplug, we'll fixup all on-line cpus.
 		 */
 		if (cpu_online(cpu))
-			set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
+			set_cpu_numa_mem(cpu,
+					 local_memory_node(cpu_to_node(cpu)));
 #endif
 	}
 
@@ -5690,12 +5729,13 @@ static void __init find_usable_zone_for_movable(void)
  * highest usable zone for ZONE_MOVABLE. This preserves the assumption that
  * zones within a node are in order of monotonic increases memory addresses
  */
-static void __meminit adjust_zone_range_for_zone_movable(int nid,
-							 unsigned long zone_type,
-							 unsigned long node_start_pfn,
-							 unsigned long node_end_pfn,
-							 unsigned long *zone_start_pfn,
-							 unsigned long *zone_end_pfn)
+static void __meminit
+adjust_zone_range_for_zone_movable(int nid,
+				   unsigned long zone_type,
+				   unsigned long node_start_pfn,
+				   unsigned long node_end_pfn,
+				   unsigned long *zone_start_pfn,
+				   unsigned long *zone_end_pfn)
 {
 	/* Only adjust if ZONE_MOVABLE is on this node */
 	if (zone_movable_pfn[nid]) {
@@ -5721,13 +5761,14 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
  * Return the number of pages a zone spans in a node, including holes
  * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
  */
-static unsigned long __meminit zone_spanned_pages_in_node(int nid,
-							  unsigned long zone_type,
-							  unsigned long node_start_pfn,
-							  unsigned long node_end_pfn,
-							  unsigned long *zone_start_pfn,
-							  unsigned long *zone_end_pfn,
-							  unsigned long *ignored)
+static unsigned long __meminit
+zone_spanned_pages_in_node(int nid,
+			   unsigned long zone_type,
+			   unsigned long node_start_pfn,
+			   unsigned long node_end_pfn,
+			   unsigned long *zone_start_pfn,
+			   unsigned long *zone_end_pfn,
+			   unsigned long *ignored)
 {
 	/* When hotadd a new node from cpu_up(), the node should be empty */
 	if (!node_start_pfn && !node_end_pfn)
@@ -5786,11 +5827,12 @@ unsigned long __init absent_pages_in_range(unsigned long start_pfn,
 }
 
 /* Return the number of page frames in holes in a zone on a node */
-static unsigned long __meminit zone_absent_pages_in_node(int nid,
-							 unsigned long zone_type,
-							 unsigned long node_start_pfn,
-							 unsigned long node_end_pfn,
-							 unsigned long *ignored)
+static unsigned long __meminit
+zone_absent_pages_in_node(int nid,
+			  unsigned long zone_type,
+			  unsigned long node_start_pfn,
+			  unsigned long node_end_pfn,
+			  unsigned long *ignored)
 {
 	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
 	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
@@ -5843,13 +5885,14 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 }
 
 #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
-static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
-								 unsigned long zone_type,
-								 unsigned long node_start_pfn,
-								 unsigned long node_end_pfn,
-								 unsigned long *zone_start_pfn,
-								 unsigned long *zone_end_pfn,
-								 unsigned long *zones_size)
+static inline unsigned long __meminit
+zone_spanned_pages_in_node(int nid,
+			   unsigned long zone_type,
+			   unsigned long node_start_pfn,
+			   unsigned long node_end_pfn,
+			   unsigned long *zone_start_pfn,
+			   unsigned long *zone_end_pfn,
+			   unsigned long *zones_size)
 {
 	unsigned int zone;
 
@@ -5862,11 +5905,12 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
 	return zones_size[zone_type];
 }
 
-static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
-								unsigned long zone_type,
-								unsigned long node_start_pfn,
-								unsigned long node_end_pfn,
-								unsigned long *zholes_size)
+static inline unsigned long __meminit
+zone_absent_pages_in_node(int nid,
+			  unsigned long zone_type,
+			  unsigned long node_start_pfn,
+			  unsigned long node_end_pfn,
+			  unsigned long *zholes_size)
 {
 	if (!zholes_size)
 		return 0;
@@ -5924,7 +5968,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
  * round what is now in bits to nearest long in bits, then return it in
  * bytes.
  */
-static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned long zonesize)
+static unsigned long __init usemap_size(unsigned long zone_start_pfn,
+					unsigned long zonesize)
 {
 	unsigned long usemapsize;
 
@@ -6158,7 +6203,8 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
 }
 
 void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
-				      unsigned long node_start_pfn, unsigned long *zholes_size)
+				      unsigned long node_start_pfn,
+				      unsigned long *zholes_size)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long start_pfn = 0;
@@ -7028,7 +7074,8 @@ core_initcall(init_per_zone_wmark_min)
  *	changes.
  */
 int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
-				   void __user *buffer, size_t *length, loff_t *ppos)
+				   void __user *buffer, size_t *length,
+				   loff_t *ppos)
 {
 	int rc;
 
@@ -7044,7 +7091,8 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 }
 
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
-					  void __user *buffer, size_t *length, loff_t *ppos)
+					  void __user *buffer, size_t *length,
+					  loff_t *ppos)
 {
 	int rc;
 
@@ -7068,8 +7116,8 @@ static void setup_min_unmapped_ratio(void)
 		pgdat->min_unmapped_pages = 0;
 
 	for_each_zone(zone)
-		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
-							 sysctl_min_unmapped_ratio) / 100;
+		zone->zone_pgdat->min_unmapped_pages +=
+			(zone->managed_pages * sysctl_min_unmapped_ratio) / 100;
 }
 
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
@@ -7095,12 +7143,13 @@ static void setup_min_slab_ratio(void)
 		pgdat->min_slab_pages = 0;
 
 	for_each_zone(zone)
-		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
-						     sysctl_min_slab_ratio) / 100;
+		zone->zone_pgdat->min_slab_pages +=
+			(zone->managed_pages * sysctl_min_slab_ratio) / 100;
 }
 
 int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
-					 void __user *buffer, size_t *length, loff_t *ppos)
+					 void __user *buffer, size_t *length,
+					 loff_t *ppos)
 {
 	int rc;
 
@@ -7124,7 +7173,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
  * if in function of the boot time zone sizes.
  */
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-					void __user *buffer, size_t *length, loff_t *ppos)
+					void __user *buffer, size_t *length,
+					loff_t *ppos)
 {
 	proc_dointvec_minmax(table, write, buffer, length, ppos);
 	setup_per_zone_lowmem_reserve();
@@ -7137,7 +7187,8 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
  * pagelist can have before it gets flushed back to buddy allocator.
  */
 int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
-					    void __user *buffer, size_t *length, loff_t *ppos)
+					    void __user *buffer, size_t *length,
+					    loff_t *ppos)
 {
 	struct zone *zone;
 	int old_percpu_pagelist_fraction;
@@ -7167,7 +7218,8 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
 
 		for_each_possible_cpu(cpu)
 			pageset_set_high_and_batch(zone,
-						   per_cpu_ptr(zone->pageset, cpu));
+						   per_cpu_ptr(zone->pageset,
+							       cpu));
 	}
 out:
 	mutex_unlock(&pcp_batch_high_lock);
@@ -7238,7 +7290,8 @@ void *__init alloc_large_system_hash(const char *tablename,
 
 		/* It isn't necessary when PAGE_SIZE >= 1MB */
 		if (PAGE_SHIFT < 20)
-			numentries = round_up(numentries, (1 << 20) / PAGE_SIZE);
+			numentries = round_up(numentries,
+					      (1 << 20) / PAGE_SIZE);
 
 		if (flags & HASH_ADAPT) {
 			unsigned long adapt;
@@ -7359,7 +7412,8 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
-			iter = round_up(iter + 1, 1 << compound_order(page)) - 1;
+			iter = round_up(iter + 1,
+					1 << compound_order(page)) - 1;
 			continue;
 		}
 
@@ -7429,7 +7483,8 @@ bool is_pageblock_removable_nolock(struct page *page)
 	return !has_unmovable_pages(zone, page, 0, true);
 }
 
-#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
+#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || \
+     defined(CONFIG_CMA)
 
 static unsigned long pfn_max_align_down(unsigned long pfn)
 {
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 10/15] mm: page_alloc: 80 column neatening
@ 2017-03-16  2:00   ` Joe Perches
  0 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Wrap some lines to make it easier to read.

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 259 ++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 157 insertions(+), 102 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3e1d377201b8..286b01b4c3e7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -383,10 +383,11 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  *
  * Return: pageblock_bits flags
  */
-static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
-							       unsigned long pfn,
-							       unsigned long end_bitidx,
-							       unsigned long mask)
+static __always_inline
+unsigned long __get_pfnblock_flags_mask(struct page *page,
+					unsigned long pfn,
+					unsigned long end_bitidx,
+					unsigned long mask)
 {
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
@@ -409,9 +410,11 @@ unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
 }
 
-static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
+static __always_inline
+int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
 {
-	return __get_pfnblock_flags_mask(page, pfn, PB_migrate_end, MIGRATETYPE_MASK);
+	return __get_pfnblock_flags_mask(page, pfn, PB_migrate_end,
+					 MIGRATETYPE_MASK);
 }
 
 /**
@@ -446,7 +449,8 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 
 	word = READ_ONCE(bitmap[word_bitidx]);
 	for (;;) {
-		old_word = cmpxchg(&bitmap[word_bitidx], word, (word & ~mask) | flags);
+		old_word = cmpxchg(&bitmap[word_bitidx],
+				   word, (word & ~mask) | flags);
 		if (word == old_word)
 			break;
 		word = old_word;
@@ -533,9 +537,8 @@ static void bad_page(struct page *page, const char *reason,
 			goto out;
 		}
 		if (nr_unshown) {
-			pr_alert(
-				"BUG: Bad page state: %lu messages suppressed\n",
-				nr_unshown);
+			pr_alert("BUG: Bad page state: %lu messages suppressed\n",
+				 nr_unshown);
 			nr_unshown = 0;
 		}
 		nr_shown = 0;
@@ -600,8 +603,8 @@ void prep_compound_page(struct page *page, unsigned int order)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 unsigned int _debug_guardpage_minorder;
-bool _debug_pagealloc_enabled __read_mostly
-= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+bool _debug_pagealloc_enabled __read_mostly =
+	IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
 EXPORT_SYMBOL(_debug_pagealloc_enabled);
 bool _debug_guardpage_enabled __read_mostly;
 
@@ -703,9 +706,15 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 #else
 struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype) { return false; }
+				  unsigned int order, int migratetype)
+{
+	return false;
+}
+
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype) {}
+				    unsigned int order, int migratetype)
+{
+}
 #endif
 
 static inline void set_page_order(struct page *page, unsigned int order)
@@ -998,8 +1007,8 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 	return ret;
 }
 
-static __always_inline bool free_pages_prepare(struct page *page,
-					       unsigned int order, bool check_free)
+static __always_inline
+bool free_pages_prepare(struct page *page, unsigned int order, bool check_free)
 {
 	int bad = 0;
 
@@ -1269,7 +1278,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
 }
 
 #if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
-	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
+    defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
 
 static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
 
@@ -1289,8 +1298,9 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 #endif
 
 #ifdef CONFIG_NODES_SPAN_OTHER_NODES
-static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-						struct mminit_pfnnid_cache *state)
+static inline
+bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
+				  struct mminit_pfnnid_cache *state)
 {
 	int nid;
 
@@ -1313,8 +1323,9 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 	return true;
 }
 
-static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
-						struct mminit_pfnnid_cache *state)
+static inline
+bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
+				  struct mminit_pfnnid_cache *state)
 {
 	return true;
 }
@@ -1564,7 +1575,8 @@ void __init page_alloc_init_late(void)
 	/* There will be num_node_state(N_MEMORY) threads */
 	atomic_set(&pgdat_init_n_undone, num_node_state(N_MEMORY));
 	for_each_node_state(nid, N_MEMORY) {
-		kthread_run(deferred_init_memmap, NODE_DATA(nid), "pgdatinit%d", nid);
+		kthread_run(deferred_init_memmap, NODE_DATA(nid),
+			    "pgdatinit%d", nid);
 	}
 
 	/* Block until all are initialised */
@@ -1747,8 +1759,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_owner(page, order, gfp_flags);
 }
 
-static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-			  unsigned int alloc_flags)
+static void prep_new_page(struct page *page, unsigned int order,
+			  gfp_t gfp_flags, unsigned int alloc_flags)
 {
 	int i;
 	bool poisoned = true;
@@ -1835,7 +1847,10 @@ static struct page *__rmqueue_cma_fallback(struct zone *zone,
 }
 #else
 static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
-						  unsigned int order) { return NULL; }
+						  unsigned int order)
+{
+	return NULL;
+}
 #endif
 
 /*
@@ -2216,7 +2231,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	     --current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-						     start_migratetype, false, &can_steal);
+						     start_migratetype, false,
+						     &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
@@ -2780,9 +2796,11 @@ struct page *rmqueue(struct zone *preferred_zone,
 	do {
 		page = NULL;
 		if (alloc_flags & ALLOC_HARDER) {
-			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+			page = __rmqueue_smallest(zone, order,
+						  MIGRATE_HIGHATOMIC);
 			if (page)
-				trace_mm_page_alloc_zone_locked(page, order, migratetype);
+				trace_mm_page_alloc_zone_locked(page, order,
+								migratetype);
 		}
 		if (!page)
 			page = __rmqueue(zone, order, migratetype);
@@ -2966,7 +2984,8 @@ bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
-				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
+				       unsigned long mark, int classzone_idx,
+				       unsigned int alloc_flags)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -2984,7 +3003,8 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	 * the caller is !atomic then it'll uselessly search the free
 	 * list. That corner case is then slower but it is harmless.
 	 */
-	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
+	if (!order &&
+	    (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
 		return true;
 
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
@@ -3081,7 +3101,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 				goto try_this_zone;
 
 			if (node_reclaim_mode == 0 ||
-			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
+			    !zone_allows_reclaim(ac->preferred_zoneref->zone,
+						 zone))
 				continue;
 
 			ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
@@ -3095,7 +3116,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			default:
 				/* did we reclaim enough */
 				if (zone_watermark_ok(zone, order, mark,
-						      ac_classzone_idx(ac), alloc_flags))
+						      ac_classzone_idx(ac),
+						      alloc_flags))
 					goto try_this_zone;
 
 				continue;
@@ -3212,7 +3234,8 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 
 static inline struct page *
 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
-		      const struct alloc_context *ac, unsigned long *did_some_progress)
+		      const struct alloc_context *ac,
+		      unsigned long *did_some_progress)
 {
 	struct oom_control oc = {
 		.zonelist = ac->zonelist,
@@ -3280,7 +3303,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (gfp_mask & __GFP_NOFAIL)
 			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
-							     ALLOC_NO_WATERMARKS, ac);
+							     ALLOC_NO_WATERMARKS,
+							     ac);
 	}
 out:
 	mutex_unlock(&oom_lock);
@@ -3297,8 +3321,10 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 /* Try memory compaction for high-order allocations before reclaim */
 static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
-			     enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
+			     enum compact_priority prio,
+			     enum compact_result *compact_result)
 {
 	struct page *page;
 
@@ -3413,16 +3439,18 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
 #else
 static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
-			     enum compact_priority prio, enum compact_result *compact_result)
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
+			     enum compact_priority prio,
+			     enum compact_result *compact_result)
 {
 	*compact_result = COMPACT_SKIPPED;
 	return NULL;
 }
 
 static inline bool
-should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
-		     enum compact_result compact_result,
+should_compact_retry(struct alloc_context *ac, unsigned int order,
+		     int alloc_flags, enum compact_result compact_result,
 		     enum compact_priority *compact_priority,
 		     int *compaction_retries)
 {
@@ -3480,7 +3508,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 /* The really slow allocator path where we enter direct reclaim */
 static inline struct page *
 __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
-			     unsigned int alloc_flags, const struct alloc_context *ac,
+			     unsigned int alloc_flags,
+			     const struct alloc_context *ac,
 			     unsigned long *did_some_progress)
 {
 	struct page *page = NULL;
@@ -3522,8 +3551,7 @@ static void wake_all_kswapds(unsigned int order, const struct alloc_context *ac)
 	}
 }
 
-static inline unsigned int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
 {
 	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
 
@@ -3635,9 +3663,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned int order,
 		 * reclaimable pages?
 		 */
 		wmark = __zone_watermark_ok(zone, order, min_wmark,
-					    ac_classzone_idx(ac), alloc_flags, available);
+					    ac_classzone_idx(ac), alloc_flags,
+					    available);
 		trace_reclaim_retry_zone(z, order, reclaimable,
-					 available, min_wmark, *no_progress_loops, wmark);
+					 available, min_wmark,
+					 *no_progress_loops, wmark);
 		if (wmark) {
 			/*
 			 * If we didn't make any progress and have a lot of
@@ -3734,7 +3764,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * could end up iterating over non-eligible zones endlessly.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-						     ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx,
+						     ac->nodemask);
 	if (!ac->preferred_zoneref->zone)
 		goto nopage;
 
@@ -3807,10 +3838,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * These allocations are high priority and system rather than user
 	 * orientated.
 	 */
-	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
+	if (!(alloc_flags & ALLOC_CPUSET) ||
+	    (alloc_flags & ALLOC_NO_WATERMARKS)) {
 		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-							     ac->high_zoneidx, ac->nodemask);
+							     ac->high_zoneidx,
+							     ac->nodemask);
 	}
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
@@ -3939,7 +3972,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * could deplete whole memory reserves which would just make
 		 * the situation worse
 		 */
-		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
+		page = __alloc_pages_cpuset_fallback(gfp_mask, order,
+						     ALLOC_HARDER, ac);
 		if (page)
 			goto got_pg;
 
@@ -3953,10 +3987,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	return page;
 }
 
-static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
-				       struct zonelist *zonelist, nodemask_t *nodemask,
-				       struct alloc_context *ac, gfp_t *alloc_mask,
-				       unsigned int *alloc_flags)
+static inline
+bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
+			 struct zonelist *zonelist, nodemask_t *nodemask,
+			 struct alloc_context *ac, gfp_t *alloc_mask,
+			 unsigned int *alloc_flags)
 {
 	ac->high_zoneidx = gfp_zone(gfp_mask);
 	ac->zonelist = zonelist;
@@ -3997,7 +4032,8 @@ static inline void finalise_ac(gfp_t gfp_mask,
 	 * may get reset for allocations that ignore memory policies.
 	 */
 	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
-						     ac->high_zoneidx, ac->nodemask);
+						     ac->high_zoneidx,
+						     ac->nodemask);
 }
 
 /*
@@ -4013,7 +4049,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	struct alloc_context ac = { };
 
 	gfp_mask &= gfp_allowed_mask;
-	if (!prepare_alloc_pages(gfp_mask, order, zonelist, nodemask, &ac, &alloc_mask, &alloc_flags))
+	if (!prepare_alloc_pages(gfp_mask, order, zonelist, nodemask, &ac,
+				 &alloc_mask, &alloc_flags))
 		return NULL;
 
 	finalise_ac(gfp_mask, order, &ac);
@@ -4448,7 +4485,8 @@ void si_meminfo_node(struct sysinfo *val, int nid)
  * Determine whether the node should be displayed or not, depending on whether
  * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
  */
-static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
+static bool show_mem_node_skip(unsigned int flags, int nid,
+			       nodemask_t *nodemask)
 {
 	if (!(flags & SHOW_MEM_FILTER_NODES))
 		return false;
@@ -5187,7 +5225,8 @@ static int __build_all_zonelists(void *data)
 		 * node/memory hotplug, we'll fixup all on-line cpus.
 		 */
 		if (cpu_online(cpu))
-			set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
+			set_cpu_numa_mem(cpu,
+					 local_memory_node(cpu_to_node(cpu)));
 #endif
 	}
 
@@ -5690,12 +5729,13 @@ static void __init find_usable_zone_for_movable(void)
  * highest usable zone for ZONE_MOVABLE. This preserves the assumption that
  * zones within a node are in order of monotonic increases memory addresses
  */
-static void __meminit adjust_zone_range_for_zone_movable(int nid,
-							 unsigned long zone_type,
-							 unsigned long node_start_pfn,
-							 unsigned long node_end_pfn,
-							 unsigned long *zone_start_pfn,
-							 unsigned long *zone_end_pfn)
+static void __meminit
+adjust_zone_range_for_zone_movable(int nid,
+				   unsigned long zone_type,
+				   unsigned long node_start_pfn,
+				   unsigned long node_end_pfn,
+				   unsigned long *zone_start_pfn,
+				   unsigned long *zone_end_pfn)
 {
 	/* Only adjust if ZONE_MOVABLE is on this node */
 	if (zone_movable_pfn[nid]) {
@@ -5721,13 +5761,14 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
  * Return the number of pages a zone spans in a node, including holes
  * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
  */
-static unsigned long __meminit zone_spanned_pages_in_node(int nid,
-							  unsigned long zone_type,
-							  unsigned long node_start_pfn,
-							  unsigned long node_end_pfn,
-							  unsigned long *zone_start_pfn,
-							  unsigned long *zone_end_pfn,
-							  unsigned long *ignored)
+static unsigned long __meminit
+zone_spanned_pages_in_node(int nid,
+			   unsigned long zone_type,
+			   unsigned long node_start_pfn,
+			   unsigned long node_end_pfn,
+			   unsigned long *zone_start_pfn,
+			   unsigned long *zone_end_pfn,
+			   unsigned long *ignored)
 {
 	/* When hotadd a new node from cpu_up(), the node should be empty */
 	if (!node_start_pfn && !node_end_pfn)
@@ -5786,11 +5827,12 @@ unsigned long __init absent_pages_in_range(unsigned long start_pfn,
 }
 
 /* Return the number of page frames in holes in a zone on a node */
-static unsigned long __meminit zone_absent_pages_in_node(int nid,
-							 unsigned long zone_type,
-							 unsigned long node_start_pfn,
-							 unsigned long node_end_pfn,
-							 unsigned long *ignored)
+static unsigned long __meminit
+zone_absent_pages_in_node(int nid,
+			  unsigned long zone_type,
+			  unsigned long node_start_pfn,
+			  unsigned long node_end_pfn,
+			  unsigned long *ignored)
 {
 	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
 	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
@@ -5843,13 +5885,14 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
 }
 
 #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
-static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
-								 unsigned long zone_type,
-								 unsigned long node_start_pfn,
-								 unsigned long node_end_pfn,
-								 unsigned long *zone_start_pfn,
-								 unsigned long *zone_end_pfn,
-								 unsigned long *zones_size)
+static inline unsigned long __meminit
+zone_spanned_pages_in_node(int nid,
+			   unsigned long zone_type,
+			   unsigned long node_start_pfn,
+			   unsigned long node_end_pfn,
+			   unsigned long *zone_start_pfn,
+			   unsigned long *zone_end_pfn,
+			   unsigned long *zones_size)
 {
 	unsigned int zone;
 
@@ -5862,11 +5905,12 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
 	return zones_size[zone_type];
 }
 
-static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
-								unsigned long zone_type,
-								unsigned long node_start_pfn,
-								unsigned long node_end_pfn,
-								unsigned long *zholes_size)
+static inline unsigned long __meminit
+zone_absent_pages_in_node(int nid,
+			  unsigned long zone_type,
+			  unsigned long node_start_pfn,
+			  unsigned long node_end_pfn,
+			  unsigned long *zholes_size)
 {
 	if (!zholes_size)
 		return 0;
@@ -5924,7 +5968,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
  * round what is now in bits to nearest long in bits, then return it in
  * bytes.
  */
-static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned long zonesize)
+static unsigned long __init usemap_size(unsigned long zone_start_pfn,
+					unsigned long zonesize)
 {
 	unsigned long usemapsize;
 
@@ -6158,7 +6203,8 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
 }
 
 void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
-				      unsigned long node_start_pfn, unsigned long *zholes_size)
+				      unsigned long node_start_pfn,
+				      unsigned long *zholes_size)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long start_pfn = 0;
@@ -7028,7 +7074,8 @@ core_initcall(init_per_zone_wmark_min)
  *	changes.
  */
 int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
-				   void __user *buffer, size_t *length, loff_t *ppos)
+				   void __user *buffer, size_t *length,
+				   loff_t *ppos)
 {
 	int rc;
 
@@ -7044,7 +7091,8 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 }
 
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
-					  void __user *buffer, size_t *length, loff_t *ppos)
+					  void __user *buffer, size_t *length,
+					  loff_t *ppos)
 {
 	int rc;
 
@@ -7068,8 +7116,8 @@ static void setup_min_unmapped_ratio(void)
 		pgdat->min_unmapped_pages = 0;
 
 	for_each_zone(zone)
-		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
-							 sysctl_min_unmapped_ratio) / 100;
+		zone->zone_pgdat->min_unmapped_pages +=
+			(zone->managed_pages * sysctl_min_unmapped_ratio) / 100;
 }
 
 int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
@@ -7095,12 +7143,13 @@ static void setup_min_slab_ratio(void)
 		pgdat->min_slab_pages = 0;
 
 	for_each_zone(zone)
-		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
-						     sysctl_min_slab_ratio) / 100;
+		zone->zone_pgdat->min_slab_pages +=
+			(zone->managed_pages * sysctl_min_slab_ratio) / 100;
 }
 
 int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
-					 void __user *buffer, size_t *length, loff_t *ppos)
+					 void __user *buffer, size_t *length,
+					 loff_t *ppos)
 {
 	int rc;
 
@@ -7124,7 +7173,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
  * if in function of the boot time zone sizes.
  */
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-					void __user *buffer, size_t *length, loff_t *ppos)
+					void __user *buffer, size_t *length,
+					loff_t *ppos)
 {
 	proc_dointvec_minmax(table, write, buffer, length, ppos);
 	setup_per_zone_lowmem_reserve();
@@ -7137,7 +7187,8 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
  * pagelist can have before it gets flushed back to buddy allocator.
  */
 int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
-					    void __user *buffer, size_t *length, loff_t *ppos)
+					    void __user *buffer, size_t *length,
+					    loff_t *ppos)
 {
 	struct zone *zone;
 	int old_percpu_pagelist_fraction;
@@ -7167,7 +7218,8 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
 
 		for_each_possible_cpu(cpu)
 			pageset_set_high_and_batch(zone,
-						   per_cpu_ptr(zone->pageset, cpu));
+						   per_cpu_ptr(zone->pageset,
+							       cpu));
 	}
 out:
 	mutex_unlock(&pcp_batch_high_lock);
@@ -7238,7 +7290,8 @@ void *__init alloc_large_system_hash(const char *tablename,
 
 		/* It isn't necessary when PAGE_SIZE >= 1MB */
 		if (PAGE_SHIFT < 20)
-			numentries = round_up(numentries, (1 << 20) / PAGE_SIZE);
+			numentries = round_up(numentries,
+					      (1 << 20) / PAGE_SIZE);
 
 		if (flags & HASH_ADAPT) {
 			unsigned long adapt;
@@ -7359,7 +7412,8 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
-			iter = round_up(iter + 1, 1 << compound_order(page)) - 1;
+			iter = round_up(iter + 1,
+					1 << compound_order(page)) - 1;
 			continue;
 		}
 
@@ -7429,7 +7483,8 @@ bool is_pageblock_removable_nolock(struct page *page)
 	return !has_unmovable_pages(zone, page, 0, true);
 }
 
-#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
+#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || \
+     defined(CONFIG_CMA)
 
 static unsigned long pfn_max_align_down(unsigned long pfn)
 {
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 11/15] mm: page_alloc: Move EXPORT_SYMBOL uses
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Move EXPORT_SYMBOL uses to immediately after the corresponding declarations.
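
In isolation, the pattern is (a minimal sketch mirroring the
totalram_pages hunk below):

    /* before: the export is separated from its definition */
    unsigned long totalram_pages __read_mostly;
    /* ... other declarations ... */
    EXPORT_SYMBOL(totalram_pages);

    /* after: the export sits directly below the definition */
    unsigned long totalram_pages __read_mostly;
    EXPORT_SYMBOL(totalram_pages);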

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 286b01b4c3e7..f9e6387c0ad4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -124,6 +124,7 @@ EXPORT_SYMBOL(node_states);
 static DEFINE_SPINLOCK(managed_page_count_lock);
 
 unsigned long totalram_pages __read_mostly;
+EXPORT_SYMBOL(totalram_pages);
 unsigned long totalreserve_pages __read_mostly;
 unsigned long totalcma_pages __read_mostly;
 
@@ -215,8 +216,6 @@ int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
 	32,
 };
 
-EXPORT_SYMBOL(totalram_pages);
-
 static char * const zone_names[MAX_NR_ZONES] = {
 #ifdef CONFIG_ZONE_DMA
 	"DMA",
@@ -281,8 +280,8 @@ EXPORT_SYMBOL(movable_zone);
 
 #if MAX_NUMNODES > 1
 int nr_node_ids __read_mostly = MAX_NUMNODES;
-int nr_online_nodes __read_mostly = 1;
 EXPORT_SYMBOL(nr_node_ids);
+int nr_online_nodes __read_mostly = 1;
 EXPORT_SYMBOL(nr_online_nodes);
 #endif
 
@@ -2706,9 +2705,9 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 	if (z->node != numa_node_id())
 		local_stat = NUMA_OTHER;
 
-	if (z->node == preferred_zone->node)
+	if (z->node == preferred_zone->node) {
 		__inc_zone_state(z, NUMA_HIT);
-	else {
+	} else {
 		__inc_zone_state(z, NUMA_MISS);
 		__inc_zone_state(preferred_zone, NUMA_FOREIGN);
 	}
@@ -3578,8 +3577,9 @@ static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
 		 * comment for __cpuset_node_allowed().
 		 */
 		alloc_flags &= ~ALLOC_CPUSET;
-	} else if (unlikely(rt_task(current)) && !in_interrupt())
+	} else if (unlikely(rt_task(current)) && !in_interrupt()) {
 		alloc_flags |= ALLOC_HARDER;
+	}
 
 #ifdef CONFIG_CMA
 	if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE)
@@ -4129,7 +4129,6 @@ void __free_pages(struct page *page, unsigned int order)
 			__free_pages_ok(page, order);
 	}
 }
-
 EXPORT_SYMBOL(__free_pages);
 
 void free_pages(unsigned long addr, unsigned int order)
@@ -4139,7 +4138,6 @@ void free_pages(unsigned long addr, unsigned int order)
 		__free_pages(virt_to_page((void *)addr), order);
 	}
 }
-
 EXPORT_SYMBOL(free_pages);
 
 /*
@@ -4445,7 +4443,6 @@ void si_meminfo(struct sysinfo *val)
 	val->freehigh = nr_free_highpages();
 	val->mem_unit = PAGE_SIZE;
 }
-
 EXPORT_SYMBOL(si_meminfo);
 
 #ifdef CONFIG_NUMA
@@ -5189,9 +5186,8 @@ static int __build_all_zonelists(void *data)
 	memset(node_load, 0, sizeof(node_load));
 #endif
 
-	if (self && !node_online(self->node_id)) {
+	if (self && !node_online(self->node_id))
 		build_zonelists(self);
-	}
 
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
@@ -5752,8 +5748,9 @@ adjust_zone_range_for_zone_movable(int nid,
 			*zone_end_pfn = zone_movable_pfn[nid];
 
 			/* Check if this whole range is within ZONE_MOVABLE */
-		} else if (*zone_start_pfn >= zone_movable_pfn[nid])
+		} else if (*zone_start_pfn >= zone_movable_pfn[nid]) {
 			*zone_start_pfn = *zone_end_pfn;
+		}
 	}
 }
 
@@ -6111,9 +6108,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
 				if (memmap_pages)
 					printk(KERN_DEBUG "  %s zone: %lu pages used for memmap\n",
 					       zone_names[j], memmap_pages);
-			} else
+			} else {
 				pr_warn("  %s zone: %lu pages exceeds freesize %lu\n",
 					zone_names[j], memmap_pages, freesize);
+			}
 		}
 
 		/* Account for reserved pages */
@@ -7315,8 +7313,9 @@ void *__init alloc_large_system_hash(const char *tablename,
 				numentries = 1UL << *_hash_shift;
 				BUG_ON(!numentries);
 			}
-		} else if (unlikely((numentries * bucketsize) < PAGE_SIZE))
+		} else if (unlikely((numentries * bucketsize) < PAGE_SIZE)) {
 			numentries = PAGE_SIZE / bucketsize;
+		}
 	}
 	numentries = roundup_pow_of_two(numentries);
 
@@ -7341,11 +7340,11 @@ void *__init alloc_large_system_hash(const char *tablename,
 	gfp_flags = (flags & HASH_ZERO) ? GFP_ATOMIC | __GFP_ZERO : GFP_ATOMIC;
 	do {
 		size = bucketsize << log2qty;
-		if (flags & HASH_EARLY)
+		if (flags & HASH_EARLY) {
 			table = memblock_virt_alloc_nopanic(size, 0);
-		else if (hashdist)
+		} else if (hashdist) {
 			table = __vmalloc(size, gfp_flags, PAGE_KERNEL);
-		else {
+		} else {
 			/*
 			 * If bucketsize is not a power-of-two, we may free
 			 * some pages at the end of hash table which
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 12/15] mm: page_alloc: Avoid pointer comparisons to NULL
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Use a direct logical test instead of an explicit comparison against NULL.
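
For example, the transformation applied by the hunks below is:

    /* before: explicit comparison against NULL */
    if (unlikely(page->mapping != NULL))
            bad_reason = "non-NULL mapping";

    /* after: direct logical test, same behaviour */
    if (unlikely(page->mapping))
            bad_reason = "non-NULL mapping";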

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f9e6387c0ad4..b6605b077053 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -931,7 +931,7 @@ static void free_pages_check_bad(struct page *page)
 
 	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		bad_reason = "nonzero mapcount";
-	if (unlikely(page->mapping != NULL))
+	if (unlikely(page->mapping))
 		bad_reason = "non-NULL mapping";
 	if (unlikely(page_ref_count(page) != 0))
 		bad_reason = "nonzero _refcount";
@@ -1668,7 +1668,7 @@ static void check_new_page_bad(struct page *page)
 
 	if (unlikely(atomic_read(&page->_mapcount) != -1))
 		bad_reason = "nonzero mapcount";
-	if (unlikely(page->mapping != NULL))
+	if (unlikely(page->mapping))
 		bad_reason = "non-NULL mapping";
 	if (unlikely(page_ref_count(page) != 0))
 		bad_reason = "nonzero _count";
@@ -2289,7 +2289,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype);
 
-		if (unlikely(page == NULL))
+		if (unlikely(!page))
 			break;
 
 		if (unlikely(check_pcp_refill(page)))
@@ -4951,7 +4951,7 @@ static void build_zonelists_in_node_order(pg_data_t *pgdat, int node)
 	struct zonelist *zonelist;
 
 	zonelist = &pgdat->node_zonelists[ZONELIST_FALLBACK];
-	for (j = 0; zonelist->_zonerefs[j].zone != NULL; j++)
+	for (j = 0; zonelist->_zonerefs[j].zone; j++)
 		;
 	j = build_zonelists_node(NODE_DATA(node), zonelist, j);
 	zonelist->_zonerefs[j].zone = NULL;
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 13/15] mm: page_alloc: Remove unnecessary parentheses
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Just removing what isn't necessary for human comprehension.
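
For example:

    /* before */
    area = &(zone->free_area[current_order]);

    /* after: [] binds tighter than unary &, so the parentheses are redundant */
    area = &zone->free_area[current_order];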

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b6605b077053..efc3184aa6bc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1806,7 +1806,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 
 	/* Find a page of the appropriate size in the preferred list */
 	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
-		area = &(zone->free_area[current_order]);
+		area = &zone->free_area[current_order];
 		page = list_first_entry_or_null(&area->free_list[migratetype],
 						struct page, lru);
 		if (!page)
@@ -2158,7 +2158,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 
 		spin_lock_irqsave(&zone->lock, flags);
 		for (order = 0; order < MAX_ORDER; order++) {
-			struct free_area *area = &(zone->free_area[order]);
+			struct free_area *area = &zone->free_area[order];
 
 			page = list_first_entry_or_null(
 				&area->free_list[MIGRATE_HIGHATOMIC],
@@ -2228,7 +2228,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	for (current_order = MAX_ORDER - 1;
 	     current_order >= order && current_order <= MAX_ORDER - 1;
 	     --current_order) {
-		area = &(zone->free_area[current_order]);
+		area = &zone->free_area[current_order];
 		fallback_mt = find_suitable_fallback(area, current_order,
 						     start_migratetype, false,
 						     &can_steal);
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 14/15] mm: page_alloc: Use octal permissions
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Using S_<FOO> permissions can be hard to parse.
Using octal is typical.
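
Concretely, the single change below amounts to:

    /* before: symbolic permission flags */
    umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;

    /* after: 0600 == S_IRUSR | S_IWUSR (owner read/write) */
    umode_t mode = S_IFREG | 0600;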

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index efc3184aa6bc..930773b03b26 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2863,7 +2863,7 @@ static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 
 static int __init fail_page_alloc_debugfs(void)
 {
-	umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
+	umode_t mode = S_IFREG | 0600;
 	struct dentry *dir;
 
 	dir = fault_create_debugfs_attr("fail_page_alloc", NULL,
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 15/15] mm: page_alloc: Move logical continuations to EOL
  2017-03-16  1:59 ` Joe Perches
@ 2017-03-16  2:00   ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16  2:00 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel; +Cc: linux-mm

Just more code style conformance/neatening.
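
For example, from the reserve_highatomic_pageblock() hunk below:

    /* before: && starts the continuation line */
    if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
        && !is_migrate_cma(mt)) {

    /* after: the operator ends the line instead */
    if (!is_migrate_highatomic(mt) &&
        !is_migrate_isolate(mt) &&
        !is_migrate_cma(mt)) {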

Signed-off-by: Joe Perches <joe@perches.com>
---
 mm/page_alloc.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 930773b03b26..011a8e057639 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -859,9 +859,9 @@ static inline void __free_one_page(struct page *page,
 			buddy = page + (buddy_pfn - pfn);
 			buddy_mt = get_pageblock_migratetype(buddy);
 
-			if (migratetype != buddy_mt
-			    && (is_migrate_isolate(migratetype) ||
-				is_migrate_isolate(buddy_mt)))
+			if (migratetype != buddy_mt &&
+			    (is_migrate_isolate(migratetype) ||
+			     is_migrate_isolate(buddy_mt)))
 				goto done_merging;
 		}
 		max_order++;
@@ -2115,8 +2115,9 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
-	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
-	    && !is_migrate_cma(mt)) {
+	if (!is_migrate_highatomic(mt) &&
+	    !is_migrate_isolate(mt) &&
+	    !is_migrate_cma(mt)) {
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
@@ -2682,8 +2683,9 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
 
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
-			    && !is_migrate_highatomic(mt))
+			if (!is_migrate_isolate(mt) &&
+			    !is_migrate_cma(mt) &&
+			    !is_migrate_highatomic(mt))
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
 		}
@@ -3791,8 +3793,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 */
 	if (can_direct_reclaim &&
 	    (costly_order ||
-	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
-	    && !gfp_pfmemalloc_allowed(gfp_mask)) {
+	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE)) &&
+	    !gfp_pfmemalloc_allowed(gfp_mask)) {
 		page = __alloc_pages_direct_compact(gfp_mask, order,
 						    alloc_flags, ac,
 						    INIT_COMPACT_PRIORITY,
-- 
2.10.0.rc2.1.g053435c

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
  2017-03-16  1:59   ` Joe Perches
@ 2017-03-16  8:02     ` Michal Hocko
  -1 siblings, 0 replies; 40+ messages in thread
From: Michal Hocko @ 2017-03-16  8:02 UTC (permalink / raw)
  To: Joe Perches; +Cc: Andrew Morton, linux-kernel, linux-mm

On Wed 15-03-17 18:59:59, Joe Perches wrote:
> whitespace changes only - git diff -w shows no difference

What is the point of this whitespace noise? Does it help readability?
To be honest, I do not think so. Such a patch would make sense only if it
were part of a larger series where other patches would actually do
something useful.

> Signed-off-by: Joe Perches <joe@perches.com>
> ---
>  mm/page_alloc.c | 552 ++++++++++++++++++++++++++++----------------------------
>  1 file changed, 276 insertions(+), 276 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 504749032400..79fc996892c6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -204,33 +204,33 @@ static void __free_pages_ok(struct page *page, unsigned int order);
>   */
>  int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
>  #ifdef CONFIG_ZONE_DMA
> -	 256,
> +	256,
>  #endif
>  #ifdef CONFIG_ZONE_DMA32
> -	 256,
> +	256,
>  #endif
>  #ifdef CONFIG_HIGHMEM
> -	 32,
> +	32,
>  #endif
> -	 32,
> +	32,
>  };
>  
>  EXPORT_SYMBOL(totalram_pages);
>  
>  static char * const zone_names[MAX_NR_ZONES] = {
>  #ifdef CONFIG_ZONE_DMA
> -	 "DMA",
> +	"DMA",
>  #endif
>  #ifdef CONFIG_ZONE_DMA32
> -	 "DMA32",
> +	"DMA32",
>  #endif
> -	 "Normal",
> +	"Normal",
>  #ifdef CONFIG_HIGHMEM
> -	 "HighMem",
> +	"HighMem",
>  #endif
> -	 "Movable",
> +	"Movable",
>  #ifdef CONFIG_ZONE_DEVICE
> -	 "Device",
> +	"Device",
>  #endif
>  };
>  
> @@ -310,8 +310,8 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
>   * later in the boot cycle when it can be parallelised.
>   */
>  static inline bool update_defer_init(pg_data_t *pgdat,
> -				unsigned long pfn, unsigned long zone_end,
> -				unsigned long *nr_initialised)
> +				     unsigned long pfn, unsigned long zone_end,
> +				     unsigned long *nr_initialised)
>  {
>  	unsigned long max_initialise;
>  
> @@ -323,7 +323,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
>  	 * two large system hashes that can take up 1GB for 0.25TB/node.
>  	 */
>  	max_initialise = max(2UL << (30 - PAGE_SHIFT),
> -		(pgdat->node_spanned_pages >> 8));
> +			     (pgdat->node_spanned_pages >> 8));
>  
>  	(*nr_initialised)++;
>  	if ((*nr_initialised > max_initialise) &&
> @@ -345,8 +345,8 @@ static inline bool early_page_uninitialised(unsigned long pfn)
>  }
>  
>  static inline bool update_defer_init(pg_data_t *pgdat,
> -				unsigned long pfn, unsigned long zone_end,
> -				unsigned long *nr_initialised)
> +				     unsigned long pfn, unsigned long zone_end,
> +				     unsigned long *nr_initialised)
>  {
>  	return true;
>  }
> @@ -354,7 +354,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
>  
>  /* Return a pointer to the bitmap storing bits affecting a block of pages */
>  static inline unsigned long *get_pageblock_bitmap(struct page *page,
> -							unsigned long pfn)
> +						  unsigned long pfn)
>  {
>  #ifdef CONFIG_SPARSEMEM
>  	return __pfn_to_section(pfn)->pageblock_flags;
> @@ -384,9 +384,9 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
>   * Return: pageblock_bits flags
>   */
>  static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
> -					unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +							       unsigned long pfn,
> +							       unsigned long end_bitidx,
> +							       unsigned long mask)
>  {
>  	unsigned long *bitmap;
>  	unsigned long bitidx, word_bitidx;
> @@ -403,8 +403,8 @@ static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
>  }
>  
>  unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +				      unsigned long end_bitidx,
> +				      unsigned long mask)
>  {
>  	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
>  }
> @@ -423,9 +423,9 @@ static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned
>   * @mask: mask of bits that the caller is interested in
>   */
>  void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
> -					unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +			     unsigned long pfn,
> +			     unsigned long end_bitidx,
> +			     unsigned long mask)
>  {
>  	unsigned long *bitmap;
>  	unsigned long bitidx, word_bitidx;
> @@ -460,7 +460,7 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
>  		migratetype = MIGRATE_UNMOVABLE;
>  
>  	set_pageblock_flags_group(page, (unsigned long)migratetype,
> -					PB_migrate, PB_migrate_end);
> +				  PB_migrate, PB_migrate_end);
>  }
>  
>  #ifdef CONFIG_DEBUG_VM
> @@ -481,8 +481,8 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  
>  	if (ret)
>  		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
> -			pfn, zone_to_nid(zone), zone->name,
> -			start_pfn, start_pfn + sp);
> +		       pfn, zone_to_nid(zone), zone->name,
> +		       start_pfn, start_pfn + sp);
>  
>  	return ret;
>  }
> @@ -516,7 +516,7 @@ static inline int bad_range(struct zone *zone, struct page *page)
>  #endif
>  
>  static void bad_page(struct page *page, const char *reason,
> -		unsigned long bad_flags)
> +		     unsigned long bad_flags)
>  {
>  	static unsigned long resume;
>  	static unsigned long nr_shown;
> @@ -533,7 +533,7 @@ static void bad_page(struct page *page, const char *reason,
>  		}
>  		if (nr_unshown) {
>  			pr_alert(
> -			      "BUG: Bad page state: %lu messages suppressed\n",
> +				"BUG: Bad page state: %lu messages suppressed\n",
>  				nr_unshown);
>  			nr_unshown = 0;
>  		}
> @@ -543,12 +543,12 @@ static void bad_page(struct page *page, const char *reason,
>  		resume = jiffies + 60 * HZ;
>  
>  	pr_alert("BUG: Bad page state in process %s  pfn:%05lx\n",
> -		current->comm, page_to_pfn(page));
> +		 current->comm, page_to_pfn(page));
>  	__dump_page(page, reason);
>  	bad_flags &= page->flags;
>  	if (bad_flags)
>  		pr_alert("bad because of flags: %#lx(%pGp)\n",
> -						bad_flags, &bad_flags);
> +			 bad_flags, &bad_flags);
>  	dump_page_owner(page);
>  
>  	print_modules();
> @@ -599,7 +599,7 @@ void prep_compound_page(struct page *page, unsigned int order)
>  #ifdef CONFIG_DEBUG_PAGEALLOC
>  unsigned int _debug_guardpage_minorder;
>  bool _debug_pagealloc_enabled __read_mostly
> -			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
> += IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
>  EXPORT_SYMBOL(_debug_pagealloc_enabled);
>  bool _debug_guardpage_enabled __read_mostly;
>  
> @@ -654,7 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
>  early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
>  
>  static inline bool set_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype)
> +				  unsigned int order, int migratetype)
>  {
>  	struct page_ext *page_ext;
>  
> @@ -679,7 +679,7 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
>  }
>  
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype)
> +				    unsigned int order, int migratetype)
>  {
>  	struct page_ext *page_ext;
>  
> @@ -699,9 +699,9 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>  #else
>  struct page_ext_operations debug_guardpage_ops;
>  static inline bool set_page_guard(struct zone *zone, struct page *page,
> -			unsigned int order, int migratetype) { return false; }
> +				  unsigned int order, int migratetype) { return false; }
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype) {}
> +				    unsigned int order, int migratetype) {}
>  #endif
>  
>  static inline void set_page_order(struct page *page, unsigned int order)
> @@ -732,7 +732,7 @@ static inline void rmv_page_order(struct page *page)
>   * For recording page's order, we use page_private(page).
>   */
>  static inline int page_is_buddy(struct page *page, struct page *buddy,
> -							unsigned int order)
> +				unsigned int order)
>  {
>  	if (page_is_guard(buddy) && page_order(buddy) == order) {
>  		if (page_zone_id(page) != page_zone_id(buddy))
> @@ -785,9 +785,9 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
>   */
>  
>  static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype)
> +				   unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype)
>  {
>  	unsigned long combined_pfn;
>  	unsigned long uninitialized_var(buddy_pfn);
> @@ -848,8 +848,8 @@ static inline void __free_one_page(struct page *page,
>  			buddy_mt = get_pageblock_migratetype(buddy);
>  
>  			if (migratetype != buddy_mt
> -					&& (is_migrate_isolate(migratetype) ||
> -						is_migrate_isolate(buddy_mt)))
> +			    && (is_migrate_isolate(migratetype) ||
> +				is_migrate_isolate(buddy_mt)))
>  				goto done_merging;
>  		}
>  		max_order++;
> @@ -876,7 +876,7 @@ static inline void __free_one_page(struct page *page,
>  		if (pfn_valid_within(buddy_pfn) &&
>  		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
>  			list_add_tail(&page->lru,
> -				&zone->free_area[order].free_list[migratetype]);
> +				      &zone->free_area[order].free_list[migratetype]);
>  			goto out;
>  		}
>  	}
> @@ -892,17 +892,17 @@ static inline void __free_one_page(struct page *page,
>   * check if necessary.
>   */
>  static inline bool page_expected_state(struct page *page,
> -					unsigned long check_flags)
> +				       unsigned long check_flags)
>  {
>  	if (unlikely(atomic_read(&page->_mapcount) != -1))
>  		return false;
>  
>  	if (unlikely((unsigned long)page->mapping |
> -			page_ref_count(page) |
> +		     page_ref_count(page) |
>  #ifdef CONFIG_MEMCG
> -			(unsigned long)page->mem_cgroup |
> +		     (unsigned long)page->mem_cgroup |
>  #endif
> -			(page->flags & check_flags)))
> +		     (page->flags & check_flags)))
>  		return false;
>  
>  	return true;
> @@ -994,7 +994,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
>  }
>  
>  static __always_inline bool free_pages_prepare(struct page *page,
> -					unsigned int order, bool check_free)
> +					       unsigned int order, bool check_free)
>  {
>  	int bad = 0;
>  
> @@ -1042,7 +1042,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  		debug_check_no_locks_freed(page_address(page),
>  					   PAGE_SIZE << order);
>  		debug_check_no_obj_freed(page_address(page),
> -					   PAGE_SIZE << order);
> +					 PAGE_SIZE << order);
>  	}
>  	arch_free_page(page, order);
>  	kernel_poison_pages(page, 1 << order, 0);
> @@ -1086,7 +1086,7 @@ static bool bulkfree_pcp_prepare(struct page *page)
>   * pinned" detection logic.
>   */
>  static void free_pcppages_bulk(struct zone *zone, int count,
> -					struct per_cpu_pages *pcp)
> +			       struct per_cpu_pages *pcp)
>  {
>  	int migratetype = 0;
>  	int batch_free = 0;
> @@ -1142,16 +1142,16 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  }
>  
>  static void free_one_page(struct zone *zone,
> -				struct page *page, unsigned long pfn,
> -				unsigned int order,
> -				int migratetype)
> +			  struct page *page, unsigned long pfn,
> +			  unsigned int order,
> +			  int migratetype)
>  {
>  	unsigned long flags;
>  
>  	spin_lock_irqsave(&zone->lock, flags);
>  	__count_vm_events(PGFREE, 1 << order);
>  	if (unlikely(has_isolate_pageblock(zone) ||
> -		is_migrate_isolate(migratetype))) {
> +		     is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
>  	__free_one_page(page, pfn, zone, order, migratetype);
> @@ -1159,7 +1159,7 @@ static void free_one_page(struct zone *zone,
>  }
>  
>  static void __meminit __init_single_page(struct page *page, unsigned long pfn,
> -				unsigned long zone, int nid)
> +					 unsigned long zone, int nid)
>  {
>  	set_page_links(page, zone, nid, pfn);
>  	init_page_count(page);
> @@ -1263,7 +1263,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
>  	__free_pages(page, order);
>  }
>  
> -#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \
> +#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
>  	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
>  
>  static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
> @@ -1285,7 +1285,7 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
>  
>  #ifdef CONFIG_NODES_SPAN_OTHER_NODES
>  static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
> -					struct mminit_pfnnid_cache *state)
> +						struct mminit_pfnnid_cache *state)
>  {
>  	int nid;
>  
> @@ -1308,7 +1308,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
>  	return true;
>  }
>  static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
> -					struct mminit_pfnnid_cache *state)
> +						struct mminit_pfnnid_cache *state)
>  {
>  	return true;
>  }
> @@ -1316,7 +1316,7 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
>  
>  
>  void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
> -							unsigned int order)
> +				 unsigned int order)
>  {
>  	if (early_page_uninitialised(pfn))
>  		return;
> @@ -1373,8 +1373,8 @@ void set_zone_contiguous(struct zone *zone)
>  
>  	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
>  	for (; block_start_pfn < zone_end_pfn(zone);
> -			block_start_pfn = block_end_pfn,
> -			 block_end_pfn += pageblock_nr_pages) {
> +	     block_start_pfn = block_end_pfn,
> +		     block_end_pfn += pageblock_nr_pages) {
>  
>  		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
>  
> @@ -1394,7 +1394,7 @@ void clear_zone_contiguous(struct zone *zone)
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>  static void __init deferred_free_range(struct page *page,
> -					unsigned long pfn, int nr_pages)
> +				       unsigned long pfn, int nr_pages)
>  {
>  	int i;
>  
> @@ -1501,7 +1501,7 @@ static int __init deferred_init_memmap(void *data)
>  			} else {
>  				nr_pages += nr_to_free;
>  				deferred_free_range(free_base_page,
> -						free_base_pfn, nr_to_free);
> +						    free_base_pfn, nr_to_free);
>  				free_base_page = NULL;
>  				free_base_pfn = nr_to_free = 0;
>  
> @@ -1524,11 +1524,11 @@ static int __init deferred_init_memmap(void *data)
>  
>  			/* Where possible, batch up pages for a single free */
>  			continue;
> -free_range:
> +		free_range:
>  			/* Free the current block of pages to allocator */
>  			nr_pages += nr_to_free;
>  			deferred_free_range(free_base_page, free_base_pfn,
> -								nr_to_free);
> +					    nr_to_free);
>  			free_base_page = NULL;
>  			free_base_pfn = nr_to_free = 0;
>  		}
> @@ -1543,7 +1543,7 @@ static int __init deferred_init_memmap(void *data)
>  	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
>  
>  	pr_info("node %d initialised, %lu pages in %ums\n", nid, nr_pages,
> -					jiffies_to_msecs(jiffies - start));
> +		jiffies_to_msecs(jiffies - start));
>  
>  	pgdat_init_report_one_done();
>  	return 0;
> @@ -1620,8 +1620,8 @@ void __init init_cma_reserved_pageblock(struct page *page)
>   * -- nyc
>   */
>  static inline void expand(struct zone *zone, struct page *page,
> -	int low, int high, struct free_area *area,
> -	int migratetype)
> +			  int low, int high, struct free_area *area,
> +			  int migratetype)
>  {
>  	unsigned long size = 1 << high;
>  
> @@ -1681,7 +1681,7 @@ static void check_new_page_bad(struct page *page)
>  static inline int check_new_page(struct page *page)
>  {
>  	if (likely(page_expected_state(page,
> -				PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
> +				       PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
>  		return 0;
>  
>  	check_new_page_bad(page);
> @@ -1729,7 +1729,7 @@ static bool check_new_pages(struct page *page, unsigned int order)
>  }
>  
>  inline void post_alloc_hook(struct page *page, unsigned int order,
> -				gfp_t gfp_flags)
> +			    gfp_t gfp_flags)
>  {
>  	set_page_private(page, 0);
>  	set_page_refcounted(page);
> @@ -1742,7 +1742,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>  }
>  
>  static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
> -							unsigned int alloc_flags)
> +			  unsigned int alloc_flags)
>  {
>  	int i;
>  	bool poisoned = true;
> @@ -1780,7 +1780,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
>   */
>  static inline
>  struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
> -						int migratetype)
> +				int migratetype)
>  {
>  	unsigned int current_order;
>  	struct free_area *area;
> @@ -1790,7 +1790,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
>  		area = &(zone->free_area[current_order]);
>  		page = list_first_entry_or_null(&area->free_list[migratetype],
> -							struct page, lru);
> +						struct page, lru);
>  		if (!page)
>  			continue;
>  		list_del(&page->lru);
> @@ -1823,13 +1823,13 @@ static int fallbacks[MIGRATE_TYPES][4] = {
>  
>  #ifdef CONFIG_CMA
>  static struct page *__rmqueue_cma_fallback(struct zone *zone,
> -					unsigned int order)
> +					   unsigned int order)
>  {
>  	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
>  }
>  #else
>  static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
> -					unsigned int order) { return NULL; }
> +						  unsigned int order) { return NULL; }
>  #endif
>  
>  /*
> @@ -1875,7 +1875,7 @@ static int move_freepages(struct zone *zone,
>  			 * isolating, as that would be expensive.
>  			 */
>  			if (num_movable &&
> -					(PageLRU(page) || __PageMovable(page)))
> +			    (PageLRU(page) || __PageMovable(page)))
>  				(*num_movable)++;
>  
>  			page++;
> @@ -1893,7 +1893,7 @@ static int move_freepages(struct zone *zone,
>  }
>  
>  int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable)
> +			 int migratetype, int *num_movable)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	struct page *start_page, *end_page;
> @@ -1911,11 +1911,11 @@ int move_freepages_block(struct zone *zone, struct page *page,
>  		return 0;
>  
>  	return move_freepages(zone, start_page, end_page, migratetype,
> -								num_movable);
> +			      num_movable);
>  }
>  
>  static void change_pageblock_range(struct page *pageblock_page,
> -					int start_order, int migratetype)
> +				   int start_order, int migratetype)
>  {
>  	int nr_pageblocks = 1 << (start_order - pageblock_order);
>  
> @@ -1950,9 +1950,9 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>  		return true;
>  
>  	if (order >= pageblock_order / 2 ||
> -		start_mt == MIGRATE_RECLAIMABLE ||
> -		start_mt == MIGRATE_UNMOVABLE ||
> -		page_group_by_mobility_disabled)
> +	    start_mt == MIGRATE_RECLAIMABLE ||
> +	    start_mt == MIGRATE_UNMOVABLE ||
> +	    page_group_by_mobility_disabled)
>  		return true;
>  
>  	return false;
> @@ -1967,7 +1967,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>   * itself, so pages freed in the future will be put on the correct free list.
>   */
>  static void steal_suitable_fallback(struct zone *zone, struct page *page,
> -					int start_type, bool whole_block)
> +				    int start_type, bool whole_block)
>  {
>  	unsigned int current_order = page_order(page);
>  	struct free_area *area;
> @@ -1994,7 +1994,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		goto single_page;
>  
>  	free_pages = move_freepages_block(zone, page, start_type,
> -						&movable_pages);
> +					  &movable_pages);
>  	/*
>  	 * Determine how many pages are compatible with our allocation.
>  	 * For movable allocation, it's the number of movable pages which
> @@ -2012,7 +2012,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		 */
>  		if (old_block_type == MIGRATE_MOVABLE)
>  			alike_pages = pageblock_nr_pages
> -						- (free_pages + movable_pages);
> +				- (free_pages + movable_pages);
>  		else
>  			alike_pages = 0;
>  	}
> @@ -2022,7 +2022,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	 * comparable migratability as our allocation, claim the whole block.
>  	 */
>  	if (free_pages + alike_pages >= (1 << (pageblock_order - 1)) ||
> -			page_group_by_mobility_disabled)
> +	    page_group_by_mobility_disabled)
>  		set_pageblock_migratetype(page, start_type);
>  
>  	return;
> @@ -2039,7 +2039,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>   * fragmentation due to mixed migratetype pages in one pageblock.
>   */
>  int find_suitable_fallback(struct free_area *area, unsigned int order,
> -			int migratetype, bool only_stealable, bool *can_steal)
> +			   int migratetype, bool only_stealable, bool *can_steal)
>  {
>  	int i;
>  	int fallback_mt;
> @@ -2074,7 +2074,7 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
>   * there are no empty page blocks that contain a page with a suitable order
>   */
>  static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> -				unsigned int alloc_order)
> +					 unsigned int alloc_order)
>  {
>  	int mt;
>  	unsigned long max_managed, flags;
> @@ -2116,7 +2116,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * pageblock is exhausted.
>   */
>  static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> -						bool force)
> +					   bool force)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2127,13 +2127,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	bool ret;
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> -								ac->nodemask) {
> +					ac->nodemask) {
>  		/*
>  		 * Preserve at least one pageblock unless memory pressure
>  		 * is really high.
>  		 */
>  		if (!force && zone->nr_reserved_highatomic <=
> -					pageblock_nr_pages)
> +		    pageblock_nr_pages)
>  			continue;
>  
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2141,8 +2141,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			struct free_area *area = &(zone->free_area[order]);
>  
>  			page = list_first_entry_or_null(
> -					&area->free_list[MIGRATE_HIGHATOMIC],
> -					struct page, lru);
> +				&area->free_list[MIGRATE_HIGHATOMIC],
> +				struct page, lru);
>  			if (!page)
>  				continue;
>  
> @@ -2162,8 +2162,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  				 * underflows.
>  				 */
>  				zone->nr_reserved_highatomic -= min(
> -						pageblock_nr_pages,
> -						zone->nr_reserved_highatomic);
> +					pageblock_nr_pages,
> +					zone->nr_reserved_highatomic);
>  			}
>  
>  			/*
> @@ -2177,7 +2177,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype,
> -									NULL);
> +						   NULL);
>  			if (ret) {
>  				spin_unlock_irqrestore(&zone->lock, flags);
>  				return ret;
> @@ -2206,22 +2206,22 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  
>  	/* Find the largest possible block of pages in the other list */
>  	for (current_order = MAX_ORDER - 1;
> -				current_order >= order && current_order <= MAX_ORDER - 1;
> -				--current_order) {
> +	     current_order >= order && current_order <= MAX_ORDER - 1;
> +	     --current_order) {
>  		area = &(zone->free_area[current_order]);
>  		fallback_mt = find_suitable_fallback(area, current_order,
> -				start_migratetype, false, &can_steal);
> +						     start_migratetype, false, &can_steal);
>  		if (fallback_mt == -1)
>  			continue;
>  
>  		page = list_first_entry(&area->free_list[fallback_mt],
> -						struct page, lru);
> +					struct page, lru);
>  
>  		steal_suitable_fallback(zone, page, start_migratetype,
> -								can_steal);
> +					can_steal);
>  
>  		trace_mm_page_alloc_extfrag(page, order, current_order,
> -			start_migratetype, fallback_mt);
> +					    start_migratetype, fallback_mt);
>  
>  		return true;
>  	}
> @@ -2234,7 +2234,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>   * Call me with the zone->lock already held.
>   */
>  static struct page *__rmqueue(struct zone *zone, unsigned int order,
> -				int migratetype)
> +			      int migratetype)
>  {
>  	struct page *page;
>  
> @@ -2508,7 +2508,7 @@ void mark_free_pages(struct zone *zone)
>  
>  	for_each_migratetype_order(order, t) {
>  		list_for_each_entry(page,
> -				&zone->free_area[order].free_list[t], lru) {
> +				    &zone->free_area[order].free_list[t], lru) {
>  			unsigned long i;
>  
>  			pfn = page_to_pfn(page);
> @@ -2692,8 +2692,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
>  
>  /* Remove page from the per-cpu list, caller must protect the list */
>  static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
> -			bool cold, struct per_cpu_pages *pcp,
> -			struct list_head *list)
> +				      bool cold, struct per_cpu_pages *pcp,
> +				      struct list_head *list)
>  {
>  	struct page *page;
>  
> @@ -2702,8 +2702,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
>  	do {
>  		if (list_empty(list)) {
>  			pcp->count += rmqueue_bulk(zone, 0,
> -					pcp->batch, list,
> -					migratetype, cold);
> +						   pcp->batch, list,
> +						   migratetype, cold);
>  			if (unlikely(list_empty(list)))
>  				return NULL;
>  		}
> @@ -2722,8 +2722,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
>  
>  /* Lock and remove page from the per-cpu list */
>  static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> -			struct zone *zone, unsigned int order,
> -			gfp_t gfp_flags, int migratetype)
> +				    struct zone *zone, unsigned int order,
> +				    gfp_t gfp_flags, int migratetype)
>  {
>  	struct per_cpu_pages *pcp;
>  	struct list_head *list;
> @@ -2747,16 +2747,16 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>   */
>  static inline
>  struct page *rmqueue(struct zone *preferred_zone,
> -			struct zone *zone, unsigned int order,
> -			gfp_t gfp_flags, unsigned int alloc_flags,
> -			int migratetype)
> +		     struct zone *zone, unsigned int order,
> +		     gfp_t gfp_flags, unsigned int alloc_flags,
> +		     int migratetype)
>  {
>  	unsigned long flags;
>  	struct page *page;
>  
>  	if (likely(order == 0) && !in_interrupt()) {
>  		page = rmqueue_pcplist(preferred_zone, zone, order,
> -				gfp_flags, migratetype);
> +				       gfp_flags, migratetype);
>  		goto out;
>  	}
>  
> @@ -2826,7 +2826,7 @@ static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
>  	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
>  		return false;
>  	if (fail_page_alloc.ignore_gfp_reclaim &&
> -			(gfp_mask & __GFP_DIRECT_RECLAIM))
> +	    (gfp_mask & __GFP_DIRECT_RECLAIM))
>  		return false;
>  
>  	return should_fail(&fail_page_alloc.attr, 1 << order);
> @@ -2845,10 +2845,10 @@ static int __init fail_page_alloc_debugfs(void)
>  		return PTR_ERR(dir);
>  
>  	if (!debugfs_create_bool("ignore-gfp-wait", mode, dir,
> -				&fail_page_alloc.ignore_gfp_reclaim))
> +				 &fail_page_alloc.ignore_gfp_reclaim))
>  		goto fail;
>  	if (!debugfs_create_bool("ignore-gfp-highmem", mode, dir,
> -				&fail_page_alloc.ignore_gfp_highmem))
> +				 &fail_page_alloc.ignore_gfp_highmem))
>  		goto fail;
>  	if (!debugfs_create_u32("min-order", mode, dir,
>  				&fail_page_alloc.min_order))
> @@ -2949,14 +2949,14 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>  }
>  
>  bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
> -		      int classzone_idx, unsigned int alloc_flags)
> +		       int classzone_idx, unsigned int alloc_flags)
>  {
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
> -					zone_page_state(z, NR_FREE_PAGES));
> +				   zone_page_state(z, NR_FREE_PAGES));
>  }
>  
>  static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> -		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
> +				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
>  {
>  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
>  	long cma_pages = 0;
> @@ -2978,11 +2978,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
>  		return true;
>  
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
> -					free_pages);
> +				   free_pages);
>  }
>  
>  bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
> -			unsigned long mark, int classzone_idx)
> +			    unsigned long mark, int classzone_idx)
>  {
>  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
>  
> @@ -2990,14 +2990,14 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
>  		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
>  
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
> -								free_pages);
> +				   free_pages);
>  }
>  
>  #ifdef CONFIG_NUMA
>  static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
>  {
>  	return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
> -				RECLAIM_DISTANCE;
> +		RECLAIM_DISTANCE;
>  }
>  #else	/* CONFIG_NUMA */
>  static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
> @@ -3012,7 +3012,7 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
>   */
>  static struct page *
>  get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> -						const struct alloc_context *ac)
> +		       const struct alloc_context *ac)
>  {
>  	struct zoneref *z = ac->preferred_zoneref;
>  	struct zone *zone;
> @@ -3023,14 +3023,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
>  	 */
>  	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
> -								ac->nodemask) {
> +					ac->nodemask) {
>  		struct page *page;
>  		unsigned long mark;
>  
>  		if (cpusets_enabled() &&
> -			(alloc_flags & ALLOC_CPUSET) &&
> -			!__cpuset_zone_allowed(zone, gfp_mask))
> -				continue;
> +		    (alloc_flags & ALLOC_CPUSET) &&
> +		    !__cpuset_zone_allowed(zone, gfp_mask))
> +			continue;
>  		/*
>  		 * When allocating a page cache page for writing, we
>  		 * want to get it from a node that is within its dirty
> @@ -3062,7 +3062,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  
>  		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
>  		if (!zone_watermark_fast(zone, order, mark,
> -				       ac_classzone_idx(ac), alloc_flags)) {
> +					 ac_classzone_idx(ac), alloc_flags)) {
>  			int ret;
>  
>  			/* Checked here to keep the fast path fast */
> @@ -3085,16 +3085,16 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  			default:
>  				/* did we reclaim enough */
>  				if (zone_watermark_ok(zone, order, mark,
> -						ac_classzone_idx(ac), alloc_flags))
> +						      ac_classzone_idx(ac), alloc_flags))
>  					goto try_this_zone;
>  
>  				continue;
>  			}
>  		}
>  
> -try_this_zone:
> +	try_this_zone:
>  		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
> -				gfp_mask, alloc_flags, ac->migratetype);
> +			       gfp_mask, alloc_flags, ac->migratetype);
>  		if (page) {
>  			prep_new_page(page, order, gfp_mask, alloc_flags);
>  
> @@ -3188,21 +3188,21 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
>  	struct page *page;
>  
>  	page = get_page_from_freelist(gfp_mask, order,
> -			alloc_flags | ALLOC_CPUSET, ac);
> +				      alloc_flags | ALLOC_CPUSET, ac);
>  	/*
>  	 * fallback to ignore cpuset restriction if our nodes
>  	 * are depleted
>  	 */
>  	if (!page)
>  		page = get_page_from_freelist(gfp_mask, order,
> -				alloc_flags, ac);
> +					      alloc_flags, ac);
>  
>  	return page;
>  }
>  
>  static inline struct page *
>  __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
> -	const struct alloc_context *ac, unsigned long *did_some_progress)
> +		      const struct alloc_context *ac, unsigned long *did_some_progress)
>  {
>  	struct oom_control oc = {
>  		.zonelist = ac->zonelist,
> @@ -3231,7 +3231,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  	 * we're still under heavy pressure.
>  	 */
>  	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
> -					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
> +				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
>  	if (page)
>  		goto out;
>  
> @@ -3270,7 +3270,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (gfp_mask & __GFP_NOFAIL)
>  			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
> -					ALLOC_NO_WATERMARKS, ac);
> +							     ALLOC_NO_WATERMARKS, ac);
>  	}
>  out:
>  	mutex_unlock(&oom_lock);
> @@ -3287,8 +3287,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  /* Try memory compaction for high-order allocations before reclaim */
>  static struct page *
>  __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		enum compact_priority prio, enum compact_result *compact_result)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     enum compact_priority prio, enum compact_result *compact_result)
>  {
>  	struct page *page;
>  
> @@ -3297,7 +3297,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
>  
>  	current->flags |= PF_MEMALLOC;
>  	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
> -									prio);
> +					       prio);
>  	current->flags &= ~PF_MEMALLOC;
>  
>  	if (*compact_result <= COMPACT_INACTIVE)
> @@ -3389,7 +3389,7 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
>  	 */
>  check_priority:
>  	min_priority = (order > PAGE_ALLOC_COSTLY_ORDER) ?
> -			MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
> +		MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
>  
>  	if (*compact_priority > min_priority) {
>  		(*compact_priority)--;
> @@ -3403,8 +3403,8 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
>  #else
>  static inline struct page *
>  __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		enum compact_priority prio, enum compact_result *compact_result)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     enum compact_priority prio, enum compact_result *compact_result)
>  {
>  	*compact_result = COMPACT_SKIPPED;
>  	return NULL;
> @@ -3431,7 +3431,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
>  	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
>  					ac->nodemask) {
>  		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> -					ac_classzone_idx(ac), alloc_flags))
> +				      ac_classzone_idx(ac), alloc_flags))
>  			return true;
>  	}
>  	return false;
> @@ -3441,7 +3441,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
>  /* Perform direct synchronous page reclaim */
>  static int
>  __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> -					const struct alloc_context *ac)
> +		  const struct alloc_context *ac)
>  {
>  	struct reclaim_state reclaim_state;
>  	int progress;
> @@ -3456,7 +3456,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  	current->reclaim_state = &reclaim_state;
>  
>  	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
> -								ac->nodemask);
> +				     ac->nodemask);
>  
>  	current->reclaim_state = NULL;
>  	lockdep_clear_current_reclaim_state();
> @@ -3470,8 +3470,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  /* The really slow allocator path where we enter direct reclaim */
>  static inline struct page *
>  __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		unsigned long *did_some_progress)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     unsigned long *did_some_progress)
>  {
>  	struct page *page = NULL;
>  	bool drained = false;
> @@ -3560,8 +3560,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
>  	if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
>  		return true;
>  	if (!in_interrupt() &&
> -			((current->flags & PF_MEMALLOC) ||
> -			 unlikely(test_thread_flag(TIF_MEMDIE))))
> +	    ((current->flags & PF_MEMALLOC) ||
> +	     unlikely(test_thread_flag(TIF_MEMDIE))))
>  		return true;
>  
>  	return false;
> @@ -3625,9 +3625,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  		 * reclaimable pages?
>  		 */
>  		wmark = __zone_watermark_ok(zone, order, min_wmark,
> -				ac_classzone_idx(ac), alloc_flags, available);
> +					    ac_classzone_idx(ac), alloc_flags, available);
>  		trace_reclaim_retry_zone(z, order, reclaimable,
> -				available, min_wmark, *no_progress_loops, wmark);
> +					 available, min_wmark, *no_progress_loops, wmark);
>  		if (wmark) {
>  			/*
>  			 * If we didn't make any progress and have a lot of
> @@ -3639,7 +3639,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  				unsigned long write_pending;
>  
>  				write_pending = zone_page_state_snapshot(zone,
> -							NR_ZONE_WRITE_PENDING);
> +									 NR_ZONE_WRITE_PENDING);
>  
>  				if (2 * write_pending > reclaimable) {
>  					congestion_wait(BLK_RW_ASYNC, HZ / 10);
> @@ -3670,7 +3670,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> -						struct alloc_context *ac)
> +		       struct alloc_context *ac)
>  {
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
> @@ -3701,7 +3701,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * callers that are not in atomic context.
>  	 */
>  	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)) ==
> -				(__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
> +			 (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
>  		gfp_mask &= ~__GFP_ATOMIC;
>  
>  retry_cpuset:
> @@ -3724,7 +3724,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * could end up iterating over non-eligible zones endlessly.
>  	 */
>  	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +						     ac->high_zoneidx, ac->nodemask);
>  	if (!ac->preferred_zoneref->zone)
>  		goto nopage;
>  
> @@ -3749,13 +3749,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
>  	 */
>  	if (can_direct_reclaim &&
> -			(costly_order ||
> -			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
> -			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
> +	    (costly_order ||
> +	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
> +	    && !gfp_pfmemalloc_allowed(gfp_mask)) {
>  		page = __alloc_pages_direct_compact(gfp_mask, order,
> -						alloc_flags, ac,
> -						INIT_COMPACT_PRIORITY,
> -						&compact_result);
> +						    alloc_flags, ac,
> +						    INIT_COMPACT_PRIORITY,
> +						    &compact_result);
>  		if (page)
>  			goto got_pg;
>  
> @@ -3800,7 +3800,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
>  		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
>  		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +							     ac->high_zoneidx, ac->nodemask);
>  	}
>  
>  	/* Attempt with potentially adjusted zonelist and alloc_flags */
> @@ -3815,8 +3815,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	/* Make sure we know about allocations which stall for too long */
>  	if (time_after(jiffies, alloc_start + stall_timeout)) {
>  		warn_alloc(gfp_mask & ~__GFP_NOWARN, ac->nodemask,
> -			"page allocation stalls for %ums, order:%u",
> -			jiffies_to_msecs(jiffies - alloc_start), order);
> +			   "page allocation stalls for %ums, order:%u",
> +			   jiffies_to_msecs(jiffies - alloc_start), order);
>  		stall_timeout += 10 * HZ;
>  	}
>  
> @@ -3826,13 +3826,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  
>  	/* Try direct reclaim and then allocating */
>  	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
> -							&did_some_progress);
> +					    &did_some_progress);
>  	if (page)
>  		goto got_pg;
>  
>  	/* Try direct compaction and then allocating */
>  	page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
> -					compact_priority, &compact_result);
> +					    compact_priority, &compact_result);
>  	if (page)
>  		goto got_pg;
>  
> @@ -3858,9 +3858,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * of free memory (see __compaction_suitable)
>  	 */
>  	if (did_some_progress > 0 &&
> -			should_compact_retry(ac, order, alloc_flags,
> -				compact_result, &compact_priority,
> -				&compaction_retries))
> +	    should_compact_retry(ac, order, alloc_flags,
> +				 compact_result, &compact_priority,
> +				 &compaction_retries))
>  		goto retry;
>  
>  	/*
> @@ -3938,15 +3938,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	}
>  fail:
>  	warn_alloc(gfp_mask, ac->nodemask,
> -			"page allocation failure: order:%u", order);
> +		   "page allocation failure: order:%u", order);
>  got_pg:
>  	return page;
>  }
>  
>  static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> -		struct zonelist *zonelist, nodemask_t *nodemask,
> -		struct alloc_context *ac, gfp_t *alloc_mask,
> -		unsigned int *alloc_flags)
> +				       struct zonelist *zonelist, nodemask_t *nodemask,
> +				       struct alloc_context *ac, gfp_t *alloc_mask,
> +				       unsigned int *alloc_flags)
>  {
>  	ac->high_zoneidx = gfp_zone(gfp_mask);
>  	ac->zonelist = zonelist;
> @@ -3976,7 +3976,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  
>  /* Determine whether to spread dirty pages and what the first usable zone */
>  static inline void finalise_ac(gfp_t gfp_mask,
> -		unsigned int order, struct alloc_context *ac)
> +			       unsigned int order, struct alloc_context *ac)
>  {
>  	/* Dirty zone balancing only done in the fast path */
>  	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
> @@ -3987,7 +3987,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
>  	 * may get reset for allocations that ignore memory policies.
>  	 */
>  	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +						     ac->high_zoneidx, ac->nodemask);
>  }
>  
>  /*
> @@ -3995,7 +3995,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
>   */
>  struct page *
>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
> -			struct zonelist *zonelist, nodemask_t *nodemask)
> +		       struct zonelist *zonelist, nodemask_t *nodemask)
>  {
>  	struct page *page;
>  	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> @@ -4114,7 +4114,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>  	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
> -		    __GFP_NOMEMALLOC;
> +		__GFP_NOMEMALLOC;
>  	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
>  				PAGE_FRAG_CACHE_MAX_ORDER);
>  	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
> @@ -4150,7 +4150,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>  	int offset;
>  
>  	if (unlikely(!nc->va)) {
> -refill:
> +	refill:
>  		page = __page_frag_cache_refill(nc, gfp_mask);
>  		if (!page)
>  			return NULL;
> @@ -4209,7 +4209,7 @@ void page_frag_free(void *addr)
>  EXPORT_SYMBOL(page_frag_free);
>  
>  static void *make_alloc_exact(unsigned long addr, unsigned int order,
> -		size_t size)
> +			      size_t size)
>  {
>  	if (addr) {
>  		unsigned long alloc_end = addr + (PAGE_SIZE << order);
> @@ -4378,7 +4378,7 @@ long si_mem_available(void)
>  	 * and cannot be freed. Cap this estimate at the low watermark.
>  	 */
>  	available += global_page_state(NR_SLAB_RECLAIMABLE) -
> -		     min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
> +		min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
>  
>  	if (available < 0)
>  		available = 0;
> @@ -4714,7 +4714,7 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
>  		zone = pgdat->node_zones + zone_type;
>  		if (managed_zone(zone)) {
>  			zoneref_set_zone(zone,
> -				&zonelist->_zonerefs[nr_zones++]);
> +					 &zonelist->_zonerefs[nr_zones++]);
>  			check_highest_zone(zone_type);
>  		}
>  	} while (zone_type);
> @@ -4792,8 +4792,8 @@ early_param("numa_zonelist_order", setup_numa_zonelist_order);
>   * sysctl handler for numa_zonelist_order
>   */
>  int numa_zonelist_order_handler(struct ctl_table *table, int write,
> -		void __user *buffer, size_t *length,
> -		loff_t *ppos)
> +				void __user *buffer, size_t *length,
> +				loff_t *ppos)
>  {
>  	char saved_string[NUMA_ZONELIST_ORDER_LEN];
>  	int ret;
> @@ -4952,7 +4952,7 @@ static void build_zonelists_in_zone_order(pg_data_t *pgdat, int nr_nodes)
>  			z = &NODE_DATA(node)->node_zones[zone_type];
>  			if (managed_zone(z)) {
>  				zoneref_set_zone(z,
> -					&zonelist->_zonerefs[pos++]);
> +						 &zonelist->_zonerefs[pos++]);
>  				check_highest_zone(zone_type);
>  			}
>  		}
> @@ -5056,8 +5056,8 @@ int local_memory_node(int node)
>  	struct zoneref *z;
>  
>  	z = first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
> -				   gfp_zone(GFP_KERNEL),
> -				   NULL);
> +				 gfp_zone(GFP_KERNEL),
> +				 NULL);
>  	return z->zone->node;
>  }
>  #endif
> @@ -5248,7 +5248,7 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
>   * done. Non-atomic initialization, single-pass.
>   */
>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> -		unsigned long start_pfn, enum memmap_context context)
> +				unsigned long start_pfn, enum memmap_context context)
>  {
>  	struct vmem_altmap *altmap = to_vmem_altmap(__pfn_to_phys(start_pfn));
>  	unsigned long end_pfn = start_pfn + size;
> @@ -5315,7 +5315,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		}
>  #endif
>  
> -not_early:
> +	not_early:
>  		/*
>  		 * Mark the block movable so that blocks are reserved for
>  		 * movable at startup. This will force kernel allocations
> @@ -5349,7 +5349,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
>  }
>  
>  #ifndef __HAVE_ARCH_MEMMAP_INIT
> -#define memmap_init(size, nid, zone, start_pfn) \
> +#define memmap_init(size, nid, zone, start_pfn)				\
>  	memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY)
>  #endif
>  
> @@ -5417,13 +5417,13 @@ static int zone_batchsize(struct zone *zone)
>   * exist).
>   */
>  static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
> -		unsigned long batch)
> +			   unsigned long batch)
>  {
> -       /* start with a fail safe value for batch */
> +	/* start with a fail safe value for batch */
>  	pcp->batch = 1;
>  	smp_wmb();
>  
> -       /* Update high, then batch, in order */
> +	/* Update high, then batch, in order */
>  	pcp->high = high;
>  	smp_wmb();
>  
> @@ -5460,7 +5460,7 @@ static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
>   * to the value high for the pageset p.
>   */
>  static void pageset_set_high(struct per_cpu_pageset *p,
> -				unsigned long high)
> +			     unsigned long high)
>  {
>  	unsigned long batch = max(1UL, high / 4);
>  	if ((high / 4) > (PAGE_SHIFT * 8))
> @@ -5474,8 +5474,8 @@ static void pageset_set_high_and_batch(struct zone *zone,
>  {
>  	if (percpu_pagelist_fraction)
>  		pageset_set_high(pcp,
> -			(zone->managed_pages /
> -				percpu_pagelist_fraction));
> +				 (zone->managed_pages /
> +				  percpu_pagelist_fraction));
>  	else
>  		pageset_set_batch(pcp, zone_batchsize(zone));
>  }
> @@ -5510,7 +5510,7 @@ void __init setup_per_cpu_pageset(void)
>  
>  	for_each_online_pgdat(pgdat)
>  		pgdat->per_cpu_nodestats =
> -			alloc_percpu(struct per_cpu_nodestat);
> +		alloc_percpu(struct per_cpu_nodestat);
>  }
>  
>  static __meminit void zone_pcp_init(struct zone *zone)
> @@ -5538,10 +5538,10 @@ int __meminit init_currently_empty_zone(struct zone *zone,
>  	zone->zone_start_pfn = zone_start_pfn;
>  
>  	mminit_dprintk(MMINIT_TRACE, "memmap_init",
> -			"Initialising map node %d zone %lu pfns %lu -> %lu\n",
> -			pgdat->node_id,
> -			(unsigned long)zone_idx(zone),
> -			zone_start_pfn, (zone_start_pfn + size));
> +		       "Initialising map node %d zone %lu pfns %lu -> %lu\n",
> +		       pgdat->node_id,
> +		       (unsigned long)zone_idx(zone),
> +		       zone_start_pfn, (zone_start_pfn + size));
>  
>  	zone_init_free_lists(zone);
>  	zone->initialized = 1;
> @@ -5556,7 +5556,7 @@ int __meminit init_currently_empty_zone(struct zone *zone,
>   * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
>   */
>  int __meminit __early_pfn_to_nid(unsigned long pfn,
> -					struct mminit_pfnnid_cache *state)
> +				 struct mminit_pfnnid_cache *state)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	int nid;
> @@ -5595,8 +5595,8 @@ void __init free_bootmem_with_active_regions(int nid, unsigned long max_low_pfn)
>  
>  		if (start_pfn < end_pfn)
>  			memblock_free_early_nid(PFN_PHYS(start_pfn),
> -					(end_pfn - start_pfn) << PAGE_SHIFT,
> -					this_nid);
> +						(end_pfn - start_pfn) << PAGE_SHIFT,
> +						this_nid);
>  	}
>  }
>  
> @@ -5628,7 +5628,7 @@ void __init sparse_memory_present_with_active_regions(int nid)
>   * PFNs will be 0.
>   */
>  void __meminit get_pfn_range_for_nid(unsigned int nid,
> -			unsigned long *start_pfn, unsigned long *end_pfn)
> +				     unsigned long *start_pfn, unsigned long *end_pfn)
>  {
>  	unsigned long this_start_pfn, this_end_pfn;
>  	int i;
> @@ -5658,7 +5658,7 @@ static void __init find_usable_zone_for_movable(void)
>  			continue;
>  
>  		if (arch_zone_highest_possible_pfn[zone_index] >
> -				arch_zone_lowest_possible_pfn[zone_index])
> +		    arch_zone_lowest_possible_pfn[zone_index])
>  			break;
>  	}
>  
> @@ -5677,11 +5677,11 @@ static void __init find_usable_zone_for_movable(void)
>   * zones within a node are in order of monotonic increases memory addresses
>   */
>  static void __meminit adjust_zone_range_for_zone_movable(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn)
> +							 unsigned long zone_type,
> +							 unsigned long node_start_pfn,
> +							 unsigned long node_end_pfn,
> +							 unsigned long *zone_start_pfn,
> +							 unsigned long *zone_end_pfn)
>  {
>  	/* Only adjust if ZONE_MOVABLE is on this node */
>  	if (zone_movable_pfn[nid]) {
> @@ -5689,15 +5689,15 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
>  		if (zone_type == ZONE_MOVABLE) {
>  			*zone_start_pfn = zone_movable_pfn[nid];
>  			*zone_end_pfn = min(node_end_pfn,
> -				arch_zone_highest_possible_pfn[movable_zone]);
> +					    arch_zone_highest_possible_pfn[movable_zone]);
>  
> -		/* Adjust for ZONE_MOVABLE starting within this range */
> +			/* Adjust for ZONE_MOVABLE starting within this range */
>  		} else if (!mirrored_kernelcore &&
> -			*zone_start_pfn < zone_movable_pfn[nid] &&
> -			*zone_end_pfn > zone_movable_pfn[nid]) {
> +			   *zone_start_pfn < zone_movable_pfn[nid] &&
> +			   *zone_end_pfn > zone_movable_pfn[nid]) {
>  			*zone_end_pfn = zone_movable_pfn[nid];
>  
> -		/* Check if this whole range is within ZONE_MOVABLE */
> +			/* Check if this whole range is within ZONE_MOVABLE */
>  		} else if (*zone_start_pfn >= zone_movable_pfn[nid])
>  			*zone_start_pfn = *zone_end_pfn;
>  	}
> @@ -5708,12 +5708,12 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
>   * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
>   */
>  static unsigned long __meminit zone_spanned_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn,
> -					unsigned long *ignored)
> +							  unsigned long zone_type,
> +							  unsigned long node_start_pfn,
> +							  unsigned long node_end_pfn,
> +							  unsigned long *zone_start_pfn,
> +							  unsigned long *zone_end_pfn,
> +							  unsigned long *ignored)
>  {
>  	/* When hotadd a new node from cpu_up(), the node should be empty */
>  	if (!node_start_pfn && !node_end_pfn)
> @@ -5723,8 +5723,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
>  	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
>  	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
>  	adjust_zone_range_for_zone_movable(nid, zone_type,
> -				node_start_pfn, node_end_pfn,
> -				zone_start_pfn, zone_end_pfn);
> +					   node_start_pfn, node_end_pfn,
> +					   zone_start_pfn, zone_end_pfn);
>  
>  	/* Check that this node has pages within the zone's required range */
>  	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
> @@ -5743,8 +5743,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
>   * then all holes in the requested range will be accounted for.
>   */
>  unsigned long __meminit __absent_pages_in_range(int nid,
> -				unsigned long range_start_pfn,
> -				unsigned long range_end_pfn)
> +						unsigned long range_start_pfn,
> +						unsigned long range_end_pfn)
>  {
>  	unsigned long nr_absent = range_end_pfn - range_start_pfn;
>  	unsigned long start_pfn, end_pfn;
> @@ -5766,17 +5766,17 @@ unsigned long __meminit __absent_pages_in_range(int nid,
>   * It returns the number of pages frames in memory holes within a range.
>   */
>  unsigned long __init absent_pages_in_range(unsigned long start_pfn,
> -							unsigned long end_pfn)
> +					   unsigned long end_pfn)
>  {
>  	return __absent_pages_in_range(MAX_NUMNODES, start_pfn, end_pfn);
>  }
>  
>  /* Return the number of page frames in holes in a zone on a node */
>  static unsigned long __meminit zone_absent_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *ignored)
> +							 unsigned long zone_type,
> +							 unsigned long node_start_pfn,
> +							 unsigned long node_end_pfn,
> +							 unsigned long *ignored)
>  {
>  	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
>  	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
> @@ -5791,8 +5791,8 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
>  	zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
>  
>  	adjust_zone_range_for_zone_movable(nid, zone_type,
> -			node_start_pfn, node_end_pfn,
> -			&zone_start_pfn, &zone_end_pfn);
> +					   node_start_pfn, node_end_pfn,
> +					   &zone_start_pfn, &zone_end_pfn);
>  
>  	/* If this node has no page within this zone, return 0. */
>  	if (zone_start_pfn == zone_end_pfn)
> @@ -5830,12 +5830,12 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
>  
>  #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>  static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn,
> -					unsigned long *zones_size)
> +								 unsigned long zone_type,
> +								 unsigned long node_start_pfn,
> +								 unsigned long node_end_pfn,
> +								 unsigned long *zone_start_pfn,
> +								 unsigned long *zone_end_pfn,
> +								 unsigned long *zones_size)
>  {
>  	unsigned int zone;
>  
> @@ -5849,10 +5849,10 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
>  }
>  
>  static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
> -						unsigned long zone_type,
> -						unsigned long node_start_pfn,
> -						unsigned long node_end_pfn,
> -						unsigned long *zholes_size)
> +								unsigned long zone_type,
> +								unsigned long node_start_pfn,
> +								unsigned long node_end_pfn,
> +								unsigned long *zholes_size)
>  {
>  	if (!zholes_size)
>  		return 0;
> @@ -5883,8 +5883,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
>  						  &zone_end_pfn,
>  						  zones_size);
>  		real_size = size - zone_absent_pages_in_node(pgdat->node_id, i,
> -						  node_start_pfn, node_end_pfn,
> -						  zholes_size);
> +							     node_start_pfn, node_end_pfn,
> +							     zholes_size);
>  		if (size)
>  			zone->zone_start_pfn = zone_start_pfn;
>  		else
> @@ -6143,7 +6143,7 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
>  }
>  
>  void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
> -		unsigned long node_start_pfn, unsigned long *zholes_size)
> +				      unsigned long node_start_pfn, unsigned long *zholes_size)
>  {
>  	pg_data_t *pgdat = NODE_DATA(nid);
>  	unsigned long start_pfn = 0;
> @@ -6428,12 +6428,12 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  			if (start_pfn < usable_startpfn) {
>  				unsigned long kernel_pages;
>  				kernel_pages = min(end_pfn, usable_startpfn)
> -								- start_pfn;
> +					- start_pfn;
>  
>  				kernelcore_remaining -= min(kernel_pages,
> -							kernelcore_remaining);
> +							    kernelcore_remaining);
>  				required_kernelcore -= min(kernel_pages,
> -							required_kernelcore);
> +							   required_kernelcore);
>  
>  				/* Continue if range is now fully accounted */
>  				if (end_pfn <= usable_startpfn) {
> @@ -6466,7 +6466,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  			 * satisfied
>  			 */
>  			required_kernelcore -= min(required_kernelcore,
> -								size_pages);
> +						   size_pages);
>  			kernelcore_remaining -= size_pages;
>  			if (!kernelcore_remaining)
>  				break;
> @@ -6534,9 +6534,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  
>  	/* Record where the zone boundaries are */
>  	memset(arch_zone_lowest_possible_pfn, 0,
> -				sizeof(arch_zone_lowest_possible_pfn));
> +	       sizeof(arch_zone_lowest_possible_pfn));
>  	memset(arch_zone_highest_possible_pfn, 0,
> -				sizeof(arch_zone_highest_possible_pfn));
> +	       sizeof(arch_zone_highest_possible_pfn));
>  
>  	start_pfn = find_min_pfn_with_active_regions();
>  
> @@ -6562,14 +6562,14 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  			continue;
>  		pr_info("  %-8s ", zone_names[i]);
>  		if (arch_zone_lowest_possible_pfn[i] ==
> -				arch_zone_highest_possible_pfn[i])
> +		    arch_zone_highest_possible_pfn[i])
>  			pr_cont("empty\n");
>  		else
>  			pr_cont("[mem %#018Lx-%#018Lx]\n",
>  				(u64)arch_zone_lowest_possible_pfn[i]
> -					<< PAGE_SHIFT,
> +				<< PAGE_SHIFT,
>  				((u64)arch_zone_highest_possible_pfn[i]
> -					<< PAGE_SHIFT) - 1);
> +				 << PAGE_SHIFT) - 1);
>  	}
>  
>  	/* Print out the PFNs ZONE_MOVABLE begins at in each node */
> @@ -6577,7 +6577,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  	for (i = 0; i < MAX_NUMNODES; i++) {
>  		if (zone_movable_pfn[i])
>  			pr_info("  Node %d: %#018Lx\n", i,
> -			       (u64)zone_movable_pfn[i] << PAGE_SHIFT);
> +				(u64)zone_movable_pfn[i] << PAGE_SHIFT);
>  	}
>  
>  	/* Print out the early node map */
> @@ -6593,7 +6593,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  	for_each_online_node(nid) {
>  		pg_data_t *pgdat = NODE_DATA(nid);
>  		free_area_init_node(nid, NULL,
> -				find_min_pfn_for_node(nid), NULL);
> +				    find_min_pfn_for_node(nid), NULL);
>  
>  		/* Any memory on that node */
>  		if (pgdat->node_present_pages)
> @@ -6711,14 +6711,14 @@ void __init mem_init_print_info(const char *str)
>  	 *    please refer to arch/tile/kernel/vmlinux.lds.S.
>  	 * 3) .rodata.* may be embedded into .text or .data sections.
>  	 */
> -#define adj_init_size(start, end, size, pos, adj) \
> -	do { \
> -		if (start <= pos && pos < end && size > adj) \
> -			size -= adj; \
> +#define adj_init_size(start, end, size, pos, adj)		\
> +	do {							\
> +		if (start <= pos && pos < end && size > adj)	\
> +			size -= adj;				\
>  	} while (0)
>  
>  	adj_init_size(__init_begin, __init_end, init_data_size,
> -		     _sinittext, init_code_size);
> +		      _sinittext, init_code_size);
>  	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
>  	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
>  	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
> @@ -6762,7 +6762,7 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
>  void __init free_area_init(unsigned long *zones_size)
>  {
>  	free_area_init_node(0, zones_size,
> -			__pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
> +			    __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
>  }
>  
>  static int page_alloc_cpu_dead(unsigned int cpu)
> @@ -6992,7 +6992,7 @@ int __meminit init_per_zone_wmark_min(void)
>  			min_free_kbytes = 65536;
>  	} else {
>  		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
> -				new_min_free_kbytes, user_min_free_kbytes);
> +			new_min_free_kbytes, user_min_free_kbytes);
>  	}
>  	setup_per_zone_wmarks();
>  	refresh_zone_stat_thresholds();
> @@ -7013,7 +7013,7 @@ core_initcall(init_per_zone_wmark_min)
>   *	changes.
>   */
>  int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +				   void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7029,7 +7029,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
>  }
>  
>  int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					  void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7054,12 +7054,12 @@ static void setup_min_unmapped_ratio(void)
>  
>  	for_each_zone(zone)
>  		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
> -				sysctl_min_unmapped_ratio) / 100;
> +							 sysctl_min_unmapped_ratio) / 100;
>  }
>  
>  
>  int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					     void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7082,11 +7082,11 @@ static void setup_min_slab_ratio(void)
>  
>  	for_each_zone(zone)
>  		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
> -				sysctl_min_slab_ratio) / 100;
> +						     sysctl_min_slab_ratio) / 100;
>  }
>  
>  int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					 void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7110,7 +7110,7 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
>   * if in function of the boot time zone sizes.
>   */
>  int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	proc_dointvec_minmax(table, write, buffer, length, ppos);
>  	setup_per_zone_lowmem_reserve();
> @@ -7123,7 +7123,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
>   * pagelist can have before it gets flushed back to buddy allocator.
>   */
>  int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					    void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	struct zone *zone;
>  	int old_percpu_pagelist_fraction;
> @@ -7153,7 +7153,7 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
>  
>  		for_each_possible_cpu(cpu)
>  			pageset_set_high_and_batch(zone,
> -					per_cpu_ptr(zone->pageset, cpu));
> +						   per_cpu_ptr(zone->pageset, cpu));
>  	}
>  out:
>  	mutex_unlock(&pcp_batch_high_lock);
> @@ -7461,7 +7461,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>  		}
>  
>  		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
> -							&cc->migratepages);
> +							     &cc->migratepages);
>  		cc->nr_migratepages -= nr_reclaimed;
>  
>  		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
> @@ -7645,7 +7645,7 @@ void __meminit zone_pcp_update(struct zone *zone)
>  	mutex_lock(&pcp_batch_high_lock);
>  	for_each_possible_cpu(cpu)
>  		pageset_set_high_and_batch(zone,
> -				per_cpu_ptr(zone->pageset, cpu));
> +					   per_cpu_ptr(zone->pageset, cpu));
>  	mutex_unlock(&pcp_batch_high_lock);
>  }
>  #endif
> -- 
> 2.10.0.rc2.1.g053435c
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
@ 2017-03-16  8:02     ` Michal Hocko
  0 siblings, 0 replies; 40+ messages in thread
From: Michal Hocko @ 2017-03-16  8:02 UTC (permalink / raw)
  To: Joe Perches; +Cc: Andrew Morton, linux-kernel, linux-mm

On Wed 15-03-17 18:59:59, Joe Perches wrote:
> whitespace changes only - git diff -w shows no difference

What is the point of this whitespace noise? Does it help readability?
To be honest, I do not think so. Such a patch would make sense only if
it were part of a larger series where the other patches actually did
something useful.
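
As a side note on the "git diff -w shows no difference" claim quoted
above, here is a minimal sketch of how to re-check that locally
(assuming the series is applied with this patch as the top commit, and
using the one file it touches):

	# -w tells git to ignore all whitespace when generating the diff,
	# so a purely cosmetic patch should produce no output at all here.
	git diff -w HEAD~1 HEAD -- mm/page_alloc.c

Empty output there means the only differences introduced are whitespace.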

> Signed-off-by: Joe Perches <joe@perches.com>
> ---
>  mm/page_alloc.c | 552 ++++++++++++++++++++++++++++----------------------------
>  1 file changed, 276 insertions(+), 276 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 504749032400..79fc996892c6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -204,33 +204,33 @@ static void __free_pages_ok(struct page *page, unsigned int order);
>   */
>  int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES - 1] = {
>  #ifdef CONFIG_ZONE_DMA
> -	 256,
> +	256,
>  #endif
>  #ifdef CONFIG_ZONE_DMA32
> -	 256,
> +	256,
>  #endif
>  #ifdef CONFIG_HIGHMEM
> -	 32,
> +	32,
>  #endif
> -	 32,
> +	32,
>  };
>  
>  EXPORT_SYMBOL(totalram_pages);
>  
>  static char * const zone_names[MAX_NR_ZONES] = {
>  #ifdef CONFIG_ZONE_DMA
> -	 "DMA",
> +	"DMA",
>  #endif
>  #ifdef CONFIG_ZONE_DMA32
> -	 "DMA32",
> +	"DMA32",
>  #endif
> -	 "Normal",
> +	"Normal",
>  #ifdef CONFIG_HIGHMEM
> -	 "HighMem",
> +	"HighMem",
>  #endif
> -	 "Movable",
> +	"Movable",
>  #ifdef CONFIG_ZONE_DEVICE
> -	 "Device",
> +	"Device",
>  #endif
>  };
>  
> @@ -310,8 +310,8 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
>   * later in the boot cycle when it can be parallelised.
>   */
>  static inline bool update_defer_init(pg_data_t *pgdat,
> -				unsigned long pfn, unsigned long zone_end,
> -				unsigned long *nr_initialised)
> +				     unsigned long pfn, unsigned long zone_end,
> +				     unsigned long *nr_initialised)
>  {
>  	unsigned long max_initialise;
>  
> @@ -323,7 +323,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
>  	 * two large system hashes that can take up 1GB for 0.25TB/node.
>  	 */
>  	max_initialise = max(2UL << (30 - PAGE_SHIFT),
> -		(pgdat->node_spanned_pages >> 8));
> +			     (pgdat->node_spanned_pages >> 8));
>  
>  	(*nr_initialised)++;
>  	if ((*nr_initialised > max_initialise) &&
> @@ -345,8 +345,8 @@ static inline bool early_page_uninitialised(unsigned long pfn)
>  }
>  
>  static inline bool update_defer_init(pg_data_t *pgdat,
> -				unsigned long pfn, unsigned long zone_end,
> -				unsigned long *nr_initialised)
> +				     unsigned long pfn, unsigned long zone_end,
> +				     unsigned long *nr_initialised)
>  {
>  	return true;
>  }
> @@ -354,7 +354,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
>  
>  /* Return a pointer to the bitmap storing bits affecting a block of pages */
>  static inline unsigned long *get_pageblock_bitmap(struct page *page,
> -							unsigned long pfn)
> +						  unsigned long pfn)
>  {
>  #ifdef CONFIG_SPARSEMEM
>  	return __pfn_to_section(pfn)->pageblock_flags;
> @@ -384,9 +384,9 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
>   * Return: pageblock_bits flags
>   */
>  static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
> -					unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +							       unsigned long pfn,
> +							       unsigned long end_bitidx,
> +							       unsigned long mask)
>  {
>  	unsigned long *bitmap;
>  	unsigned long bitidx, word_bitidx;
> @@ -403,8 +403,8 @@ static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
>  }
>  
>  unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +				      unsigned long end_bitidx,
> +				      unsigned long mask)
>  {
>  	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
>  }
> @@ -423,9 +423,9 @@ static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned
>   * @mask: mask of bits that the caller is interested in
>   */
>  void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
> -					unsigned long pfn,
> -					unsigned long end_bitidx,
> -					unsigned long mask)
> +			     unsigned long pfn,
> +			     unsigned long end_bitidx,
> +			     unsigned long mask)
>  {
>  	unsigned long *bitmap;
>  	unsigned long bitidx, word_bitidx;
> @@ -460,7 +460,7 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
>  		migratetype = MIGRATE_UNMOVABLE;
>  
>  	set_pageblock_flags_group(page, (unsigned long)migratetype,
> -					PB_migrate, PB_migrate_end);
> +				  PB_migrate, PB_migrate_end);
>  }
>  
>  #ifdef CONFIG_DEBUG_VM
> @@ -481,8 +481,8 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  
>  	if (ret)
>  		pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
> -			pfn, zone_to_nid(zone), zone->name,
> -			start_pfn, start_pfn + sp);
> +		       pfn, zone_to_nid(zone), zone->name,
> +		       start_pfn, start_pfn + sp);
>  
>  	return ret;
>  }
> @@ -516,7 +516,7 @@ static inline int bad_range(struct zone *zone, struct page *page)
>  #endif
>  
>  static void bad_page(struct page *page, const char *reason,
> -		unsigned long bad_flags)
> +		     unsigned long bad_flags)
>  {
>  	static unsigned long resume;
>  	static unsigned long nr_shown;
> @@ -533,7 +533,7 @@ static void bad_page(struct page *page, const char *reason,
>  		}
>  		if (nr_unshown) {
>  			pr_alert(
> -			      "BUG: Bad page state: %lu messages suppressed\n",
> +				"BUG: Bad page state: %lu messages suppressed\n",
>  				nr_unshown);
>  			nr_unshown = 0;
>  		}
> @@ -543,12 +543,12 @@ static void bad_page(struct page *page, const char *reason,
>  		resume = jiffies + 60 * HZ;
>  
>  	pr_alert("BUG: Bad page state in process %s  pfn:%05lx\n",
> -		current->comm, page_to_pfn(page));
> +		 current->comm, page_to_pfn(page));
>  	__dump_page(page, reason);
>  	bad_flags &= page->flags;
>  	if (bad_flags)
>  		pr_alert("bad because of flags: %#lx(%pGp)\n",
> -						bad_flags, &bad_flags);
> +			 bad_flags, &bad_flags);
>  	dump_page_owner(page);
>  
>  	print_modules();
> @@ -599,7 +599,7 @@ void prep_compound_page(struct page *page, unsigned int order)
>  #ifdef CONFIG_DEBUG_PAGEALLOC
>  unsigned int _debug_guardpage_minorder;
>  bool _debug_pagealloc_enabled __read_mostly
> -			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
> += IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
>  EXPORT_SYMBOL(_debug_pagealloc_enabled);
>  bool _debug_guardpage_enabled __read_mostly;
>  
> @@ -654,7 +654,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
>  early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
>  
>  static inline bool set_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype)
> +				  unsigned int order, int migratetype)
>  {
>  	struct page_ext *page_ext;
>  
> @@ -679,7 +679,7 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
>  }
>  
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype)
> +				    unsigned int order, int migratetype)
>  {
>  	struct page_ext *page_ext;
>  
> @@ -699,9 +699,9 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>  #else
>  struct page_ext_operations debug_guardpage_ops;
>  static inline bool set_page_guard(struct zone *zone, struct page *page,
> -			unsigned int order, int migratetype) { return false; }
> +				  unsigned int order, int migratetype) { return false; }
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> -				unsigned int order, int migratetype) {}
> +				    unsigned int order, int migratetype) {}
>  #endif
>  
>  static inline void set_page_order(struct page *page, unsigned int order)
> @@ -732,7 +732,7 @@ static inline void rmv_page_order(struct page *page)
>   * For recording page's order, we use page_private(page).
>   */
>  static inline int page_is_buddy(struct page *page, struct page *buddy,
> -							unsigned int order)
> +				unsigned int order)
>  {
>  	if (page_is_guard(buddy) && page_order(buddy) == order) {
>  		if (page_zone_id(page) != page_zone_id(buddy))
> @@ -785,9 +785,9 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
>   */
>  
>  static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype)
> +				   unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype)
>  {
>  	unsigned long combined_pfn;
>  	unsigned long uninitialized_var(buddy_pfn);
> @@ -848,8 +848,8 @@ static inline void __free_one_page(struct page *page,
>  			buddy_mt = get_pageblock_migratetype(buddy);
>  
>  			if (migratetype != buddy_mt
> -					&& (is_migrate_isolate(migratetype) ||
> -						is_migrate_isolate(buddy_mt)))
> +			    && (is_migrate_isolate(migratetype) ||
> +				is_migrate_isolate(buddy_mt)))
>  				goto done_merging;
>  		}
>  		max_order++;
> @@ -876,7 +876,7 @@ static inline void __free_one_page(struct page *page,
>  		if (pfn_valid_within(buddy_pfn) &&
>  		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
>  			list_add_tail(&page->lru,
> -				&zone->free_area[order].free_list[migratetype]);
> +				      &zone->free_area[order].free_list[migratetype]);
>  			goto out;
>  		}
>  	}
> @@ -892,17 +892,17 @@ static inline void __free_one_page(struct page *page,
>   * check if necessary.
>   */
>  static inline bool page_expected_state(struct page *page,
> -					unsigned long check_flags)
> +				       unsigned long check_flags)
>  {
>  	if (unlikely(atomic_read(&page->_mapcount) != -1))
>  		return false;
>  
>  	if (unlikely((unsigned long)page->mapping |
> -			page_ref_count(page) |
> +		     page_ref_count(page) |
>  #ifdef CONFIG_MEMCG
> -			(unsigned long)page->mem_cgroup |
> +		     (unsigned long)page->mem_cgroup |
>  #endif
> -			(page->flags & check_flags)))
> +		     (page->flags & check_flags)))
>  		return false;
>  
>  	return true;
> @@ -994,7 +994,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
>  }
>  
>  static __always_inline bool free_pages_prepare(struct page *page,
> -					unsigned int order, bool check_free)
> +					       unsigned int order, bool check_free)
>  {
>  	int bad = 0;
>  
> @@ -1042,7 +1042,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  		debug_check_no_locks_freed(page_address(page),
>  					   PAGE_SIZE << order);
>  		debug_check_no_obj_freed(page_address(page),
> -					   PAGE_SIZE << order);
> +					 PAGE_SIZE << order);
>  	}
>  	arch_free_page(page, order);
>  	kernel_poison_pages(page, 1 << order, 0);
> @@ -1086,7 +1086,7 @@ static bool bulkfree_pcp_prepare(struct page *page)
>   * pinned" detection logic.
>   */
>  static void free_pcppages_bulk(struct zone *zone, int count,
> -					struct per_cpu_pages *pcp)
> +			       struct per_cpu_pages *pcp)
>  {
>  	int migratetype = 0;
>  	int batch_free = 0;
> @@ -1142,16 +1142,16 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  }
>  
>  static void free_one_page(struct zone *zone,
> -				struct page *page, unsigned long pfn,
> -				unsigned int order,
> -				int migratetype)
> +			  struct page *page, unsigned long pfn,
> +			  unsigned int order,
> +			  int migratetype)
>  {
>  	unsigned long flags;
>  
>  	spin_lock_irqsave(&zone->lock, flags);
>  	__count_vm_events(PGFREE, 1 << order);
>  	if (unlikely(has_isolate_pageblock(zone) ||
> -		is_migrate_isolate(migratetype))) {
> +		     is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
>  	__free_one_page(page, pfn, zone, order, migratetype);
> @@ -1159,7 +1159,7 @@ static void free_one_page(struct zone *zone,
>  }
>  
>  static void __meminit __init_single_page(struct page *page, unsigned long pfn,
> -				unsigned long zone, int nid)
> +					 unsigned long zone, int nid)
>  {
>  	set_page_links(page, zone, nid, pfn);
>  	init_page_count(page);
> @@ -1263,7 +1263,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
>  	__free_pages(page, order);
>  }
>  
> -#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \
> +#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) ||	\
>  	defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
>  
>  static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
> @@ -1285,7 +1285,7 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
>  
>  #ifdef CONFIG_NODES_SPAN_OTHER_NODES
>  static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
> -					struct mminit_pfnnid_cache *state)
> +						struct mminit_pfnnid_cache *state)
>  {
>  	int nid;
>  
> @@ -1308,7 +1308,7 @@ static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
>  	return true;
>  }
>  static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
> -					struct mminit_pfnnid_cache *state)
> +						struct mminit_pfnnid_cache *state)
>  {
>  	return true;
>  }
> @@ -1316,7 +1316,7 @@ static inline bool __meminit meminit_pfn_in_nid(unsigned long pfn, int node,
>  
>  
>  void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
> -							unsigned int order)
> +				 unsigned int order)
>  {
>  	if (early_page_uninitialised(pfn))
>  		return;
> @@ -1373,8 +1373,8 @@ void set_zone_contiguous(struct zone *zone)
>  
>  	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
>  	for (; block_start_pfn < zone_end_pfn(zone);
> -			block_start_pfn = block_end_pfn,
> -			 block_end_pfn += pageblock_nr_pages) {
> +	     block_start_pfn = block_end_pfn,
> +		     block_end_pfn += pageblock_nr_pages) {
>  
>  		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
>  
> @@ -1394,7 +1394,7 @@ void clear_zone_contiguous(struct zone *zone)
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>  static void __init deferred_free_range(struct page *page,
> -					unsigned long pfn, int nr_pages)
> +				       unsigned long pfn, int nr_pages)
>  {
>  	int i;
>  
> @@ -1501,7 +1501,7 @@ static int __init deferred_init_memmap(void *data)
>  			} else {
>  				nr_pages += nr_to_free;
>  				deferred_free_range(free_base_page,
> -						free_base_pfn, nr_to_free);
> +						    free_base_pfn, nr_to_free);
>  				free_base_page = NULL;
>  				free_base_pfn = nr_to_free = 0;
>  
> @@ -1524,11 +1524,11 @@ static int __init deferred_init_memmap(void *data)
>  
>  			/* Where possible, batch up pages for a single free */
>  			continue;
> -free_range:
> +		free_range:
>  			/* Free the current block of pages to allocator */
>  			nr_pages += nr_to_free;
>  			deferred_free_range(free_base_page, free_base_pfn,
> -								nr_to_free);
> +					    nr_to_free);
>  			free_base_page = NULL;
>  			free_base_pfn = nr_to_free = 0;
>  		}
> @@ -1543,7 +1543,7 @@ static int __init deferred_init_memmap(void *data)
>  	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
>  
>  	pr_info("node %d initialised, %lu pages in %ums\n", nid, nr_pages,
> -					jiffies_to_msecs(jiffies - start));
> +		jiffies_to_msecs(jiffies - start));
>  
>  	pgdat_init_report_one_done();
>  	return 0;
> @@ -1620,8 +1620,8 @@ void __init init_cma_reserved_pageblock(struct page *page)
>   * -- nyc
>   */
>  static inline void expand(struct zone *zone, struct page *page,
> -	int low, int high, struct free_area *area,
> -	int migratetype)
> +			  int low, int high, struct free_area *area,
> +			  int migratetype)
>  {
>  	unsigned long size = 1 << high;
>  
> @@ -1681,7 +1681,7 @@ static void check_new_page_bad(struct page *page)
>  static inline int check_new_page(struct page *page)
>  {
>  	if (likely(page_expected_state(page,
> -				PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
> +				       PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON)))
>  		return 0;
>  
>  	check_new_page_bad(page);
> @@ -1729,7 +1729,7 @@ static bool check_new_pages(struct page *page, unsigned int order)
>  }
>  
>  inline void post_alloc_hook(struct page *page, unsigned int order,
> -				gfp_t gfp_flags)
> +			    gfp_t gfp_flags)
>  {
>  	set_page_private(page, 0);
>  	set_page_refcounted(page);
> @@ -1742,7 +1742,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>  }
>  
>  static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
> -							unsigned int alloc_flags)
> +			  unsigned int alloc_flags)
>  {
>  	int i;
>  	bool poisoned = true;
> @@ -1780,7 +1780,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
>   */
>  static inline
>  struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
> -						int migratetype)
> +				int migratetype)
>  {
>  	unsigned int current_order;
>  	struct free_area *area;
> @@ -1790,7 +1790,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
>  		area = &(zone->free_area[current_order]);
>  		page = list_first_entry_or_null(&area->free_list[migratetype],
> -							struct page, lru);
> +						struct page, lru);
>  		if (!page)
>  			continue;
>  		list_del(&page->lru);
> @@ -1823,13 +1823,13 @@ static int fallbacks[MIGRATE_TYPES][4] = {
>  
>  #ifdef CONFIG_CMA
>  static struct page *__rmqueue_cma_fallback(struct zone *zone,
> -					unsigned int order)
> +					   unsigned int order)
>  {
>  	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
>  }
>  #else
>  static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
> -					unsigned int order) { return NULL; }
> +						  unsigned int order) { return NULL; }
>  #endif
>  
>  /*
> @@ -1875,7 +1875,7 @@ static int move_freepages(struct zone *zone,
>  			 * isolating, as that would be expensive.
>  			 */
>  			if (num_movable &&
> -					(PageLRU(page) || __PageMovable(page)))
> +			    (PageLRU(page) || __PageMovable(page)))
>  				(*num_movable)++;
>  
>  			page++;
> @@ -1893,7 +1893,7 @@ static int move_freepages(struct zone *zone,
>  }
>  
>  int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable)
> +			 int migratetype, int *num_movable)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	struct page *start_page, *end_page;
> @@ -1911,11 +1911,11 @@ int move_freepages_block(struct zone *zone, struct page *page,
>  		return 0;
>  
>  	return move_freepages(zone, start_page, end_page, migratetype,
> -								num_movable);
> +			      num_movable);
>  }
>  
>  static void change_pageblock_range(struct page *pageblock_page,
> -					int start_order, int migratetype)
> +				   int start_order, int migratetype)
>  {
>  	int nr_pageblocks = 1 << (start_order - pageblock_order);
>  
> @@ -1950,9 +1950,9 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>  		return true;
>  
>  	if (order >= pageblock_order / 2 ||
> -		start_mt == MIGRATE_RECLAIMABLE ||
> -		start_mt == MIGRATE_UNMOVABLE ||
> -		page_group_by_mobility_disabled)
> +	    start_mt == MIGRATE_RECLAIMABLE ||
> +	    start_mt == MIGRATE_UNMOVABLE ||
> +	    page_group_by_mobility_disabled)
>  		return true;
>  
>  	return false;
> @@ -1967,7 +1967,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>   * itself, so pages freed in the future will be put on the correct free list.
>   */
>  static void steal_suitable_fallback(struct zone *zone, struct page *page,
> -					int start_type, bool whole_block)
> +				    int start_type, bool whole_block)
>  {
>  	unsigned int current_order = page_order(page);
>  	struct free_area *area;
> @@ -1994,7 +1994,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		goto single_page;
>  
>  	free_pages = move_freepages_block(zone, page, start_type,
> -						&movable_pages);
> +					  &movable_pages);
>  	/*
>  	 * Determine how many pages are compatible with our allocation.
>  	 * For movable allocation, it's the number of movable pages which
> @@ -2012,7 +2012,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		 */
>  		if (old_block_type == MIGRATE_MOVABLE)
>  			alike_pages = pageblock_nr_pages
> -						- (free_pages + movable_pages);
> +				- (free_pages + movable_pages);
>  		else
>  			alike_pages = 0;
>  	}
> @@ -2022,7 +2022,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	 * comparable migratability as our allocation, claim the whole block.
>  	 */
>  	if (free_pages + alike_pages >= (1 << (pageblock_order - 1)) ||
> -			page_group_by_mobility_disabled)
> +	    page_group_by_mobility_disabled)
>  		set_pageblock_migratetype(page, start_type);
>  
>  	return;
> @@ -2039,7 +2039,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>   * fragmentation due to mixed migratetype pages in one pageblock.
>   */
>  int find_suitable_fallback(struct free_area *area, unsigned int order,
> -			int migratetype, bool only_stealable, bool *can_steal)
> +			   int migratetype, bool only_stealable, bool *can_steal)
>  {
>  	int i;
>  	int fallback_mt;
> @@ -2074,7 +2074,7 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
>   * there are no empty page blocks that contain a page with a suitable order
>   */
>  static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> -				unsigned int alloc_order)
> +					 unsigned int alloc_order)
>  {
>  	int mt;
>  	unsigned long max_managed, flags;
> @@ -2116,7 +2116,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * pageblock is exhausted.
>   */
>  static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> -						bool force)
> +					   bool force)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2127,13 +2127,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	bool ret;
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> -								ac->nodemask) {
> +					ac->nodemask) {
>  		/*
>  		 * Preserve at least one pageblock unless memory pressure
>  		 * is really high.
>  		 */
>  		if (!force && zone->nr_reserved_highatomic <=
> -					pageblock_nr_pages)
> +		    pageblock_nr_pages)
>  			continue;
>  
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2141,8 +2141,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			struct free_area *area = &(zone->free_area[order]);
>  
>  			page = list_first_entry_or_null(
> -					&area->free_list[MIGRATE_HIGHATOMIC],
> -					struct page, lru);
> +				&area->free_list[MIGRATE_HIGHATOMIC],
> +				struct page, lru);
>  			if (!page)
>  				continue;
>  
> @@ -2162,8 +2162,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  				 * underflows.
>  				 */
>  				zone->nr_reserved_highatomic -= min(
> -						pageblock_nr_pages,
> -						zone->nr_reserved_highatomic);
> +					pageblock_nr_pages,
> +					zone->nr_reserved_highatomic);
>  			}
>  
>  			/*
> @@ -2177,7 +2177,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype,
> -									NULL);
> +						   NULL);
>  			if (ret) {
>  				spin_unlock_irqrestore(&zone->lock, flags);
>  				return ret;
> @@ -2206,22 +2206,22 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  
>  	/* Find the largest possible block of pages in the other list */
>  	for (current_order = MAX_ORDER - 1;
> -				current_order >= order && current_order <= MAX_ORDER - 1;
> -				--current_order) {
> +	     current_order >= order && current_order <= MAX_ORDER - 1;
> +	     --current_order) {
>  		area = &(zone->free_area[current_order]);
>  		fallback_mt = find_suitable_fallback(area, current_order,
> -				start_migratetype, false, &can_steal);
> +						     start_migratetype, false, &can_steal);
>  		if (fallback_mt == -1)
>  			continue;
>  
>  		page = list_first_entry(&area->free_list[fallback_mt],
> -						struct page, lru);
> +					struct page, lru);
>  
>  		steal_suitable_fallback(zone, page, start_migratetype,
> -								can_steal);
> +					can_steal);
>  
>  		trace_mm_page_alloc_extfrag(page, order, current_order,
> -			start_migratetype, fallback_mt);
> +					    start_migratetype, fallback_mt);
>  
>  		return true;
>  	}
> @@ -2234,7 +2234,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>   * Call me with the zone->lock already held.
>   */
>  static struct page *__rmqueue(struct zone *zone, unsigned int order,
> -				int migratetype)
> +			      int migratetype)
>  {
>  	struct page *page;
>  
> @@ -2508,7 +2508,7 @@ void mark_free_pages(struct zone *zone)
>  
>  	for_each_migratetype_order(order, t) {
>  		list_for_each_entry(page,
> -				&zone->free_area[order].free_list[t], lru) {
> +				    &zone->free_area[order].free_list[t], lru) {
>  			unsigned long i;
>  
>  			pfn = page_to_pfn(page);
> @@ -2692,8 +2692,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
>  
>  /* Remove page from the per-cpu list, caller must protect the list */
>  static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
> -			bool cold, struct per_cpu_pages *pcp,
> -			struct list_head *list)
> +				      bool cold, struct per_cpu_pages *pcp,
> +				      struct list_head *list)
>  {
>  	struct page *page;
>  
> @@ -2702,8 +2702,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
>  	do {
>  		if (list_empty(list)) {
>  			pcp->count += rmqueue_bulk(zone, 0,
> -					pcp->batch, list,
> -					migratetype, cold);
> +						   pcp->batch, list,
> +						   migratetype, cold);
>  			if (unlikely(list_empty(list)))
>  				return NULL;
>  		}
> @@ -2722,8 +2722,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
>  
>  /* Lock and remove page from the per-cpu list */
>  static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> -			struct zone *zone, unsigned int order,
> -			gfp_t gfp_flags, int migratetype)
> +				    struct zone *zone, unsigned int order,
> +				    gfp_t gfp_flags, int migratetype)
>  {
>  	struct per_cpu_pages *pcp;
>  	struct list_head *list;
> @@ -2747,16 +2747,16 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>   */
>  static inline
>  struct page *rmqueue(struct zone *preferred_zone,
> -			struct zone *zone, unsigned int order,
> -			gfp_t gfp_flags, unsigned int alloc_flags,
> -			int migratetype)
> +		     struct zone *zone, unsigned int order,
> +		     gfp_t gfp_flags, unsigned int alloc_flags,
> +		     int migratetype)
>  {
>  	unsigned long flags;
>  	struct page *page;
>  
>  	if (likely(order == 0) && !in_interrupt()) {
>  		page = rmqueue_pcplist(preferred_zone, zone, order,
> -				gfp_flags, migratetype);
> +				       gfp_flags, migratetype);
>  		goto out;
>  	}
>  
> @@ -2826,7 +2826,7 @@ static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
>  	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
>  		return false;
>  	if (fail_page_alloc.ignore_gfp_reclaim &&
> -			(gfp_mask & __GFP_DIRECT_RECLAIM))
> +	    (gfp_mask & __GFP_DIRECT_RECLAIM))
>  		return false;
>  
>  	return should_fail(&fail_page_alloc.attr, 1 << order);
> @@ -2845,10 +2845,10 @@ static int __init fail_page_alloc_debugfs(void)
>  		return PTR_ERR(dir);
>  
>  	if (!debugfs_create_bool("ignore-gfp-wait", mode, dir,
> -				&fail_page_alloc.ignore_gfp_reclaim))
> +				 &fail_page_alloc.ignore_gfp_reclaim))
>  		goto fail;
>  	if (!debugfs_create_bool("ignore-gfp-highmem", mode, dir,
> -				&fail_page_alloc.ignore_gfp_highmem))
> +				 &fail_page_alloc.ignore_gfp_highmem))
>  		goto fail;
>  	if (!debugfs_create_u32("min-order", mode, dir,
>  				&fail_page_alloc.min_order))
> @@ -2949,14 +2949,14 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>  }
>  
>  bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
> -		      int classzone_idx, unsigned int alloc_flags)
> +		       int classzone_idx, unsigned int alloc_flags)
>  {
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
> -					zone_page_state(z, NR_FREE_PAGES));
> +				   zone_page_state(z, NR_FREE_PAGES));
>  }
>  
>  static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> -		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
> +				       unsigned long mark, int classzone_idx, unsigned int alloc_flags)
>  {
>  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
>  	long cma_pages = 0;
> @@ -2978,11 +2978,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
>  		return true;
>  
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
> -					free_pages);
> +				   free_pages);
>  }
>  
>  bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
> -			unsigned long mark, int classzone_idx)
> +			    unsigned long mark, int classzone_idx)
>  {
>  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
>  
> @@ -2990,14 +2990,14 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
>  		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
>  
>  	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
> -								free_pages);
> +				   free_pages);
>  }
>  
>  #ifdef CONFIG_NUMA
>  static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
>  {
>  	return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
> -				RECLAIM_DISTANCE;
> +		RECLAIM_DISTANCE;
>  }
>  #else	/* CONFIG_NUMA */
>  static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
> @@ -3012,7 +3012,7 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
>   */
>  static struct page *
>  get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> -						const struct alloc_context *ac)
> +		       const struct alloc_context *ac)
>  {
>  	struct zoneref *z = ac->preferred_zoneref;
>  	struct zone *zone;
> @@ -3023,14 +3023,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
>  	 */
>  	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
> -								ac->nodemask) {
> +					ac->nodemask) {
>  		struct page *page;
>  		unsigned long mark;
>  
>  		if (cpusets_enabled() &&
> -			(alloc_flags & ALLOC_CPUSET) &&
> -			!__cpuset_zone_allowed(zone, gfp_mask))
> -				continue;
> +		    (alloc_flags & ALLOC_CPUSET) &&
> +		    !__cpuset_zone_allowed(zone, gfp_mask))
> +			continue;
>  		/*
>  		 * When allocating a page cache page for writing, we
>  		 * want to get it from a node that is within its dirty
> @@ -3062,7 +3062,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  
>  		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
>  		if (!zone_watermark_fast(zone, order, mark,
> -				       ac_classzone_idx(ac), alloc_flags)) {
> +					 ac_classzone_idx(ac), alloc_flags)) {
>  			int ret;
>  
>  			/* Checked here to keep the fast path fast */
> @@ -3085,16 +3085,16 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>  			default:
>  				/* did we reclaim enough */
>  				if (zone_watermark_ok(zone, order, mark,
> -						ac_classzone_idx(ac), alloc_flags))
> +						      ac_classzone_idx(ac), alloc_flags))
>  					goto try_this_zone;
>  
>  				continue;
>  			}
>  		}
>  
> -try_this_zone:
> +	try_this_zone:
>  		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
> -				gfp_mask, alloc_flags, ac->migratetype);
> +			       gfp_mask, alloc_flags, ac->migratetype);
>  		if (page) {
>  			prep_new_page(page, order, gfp_mask, alloc_flags);
>  
> @@ -3188,21 +3188,21 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
>  	struct page *page;
>  
>  	page = get_page_from_freelist(gfp_mask, order,
> -			alloc_flags | ALLOC_CPUSET, ac);
> +				      alloc_flags | ALLOC_CPUSET, ac);
>  	/*
>  	 * fallback to ignore cpuset restriction if our nodes
>  	 * are depleted
>  	 */
>  	if (!page)
>  		page = get_page_from_freelist(gfp_mask, order,
> -				alloc_flags, ac);
> +					      alloc_flags, ac);
>  
>  	return page;
>  }
>  
>  static inline struct page *
>  __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
> -	const struct alloc_context *ac, unsigned long *did_some_progress)
> +		      const struct alloc_context *ac, unsigned long *did_some_progress)
>  {
>  	struct oom_control oc = {
>  		.zonelist = ac->zonelist,
> @@ -3231,7 +3231,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  	 * we're still under heavy pressure.
>  	 */
>  	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
> -					ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
> +				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
>  	if (page)
>  		goto out;
>  
> @@ -3270,7 +3270,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (gfp_mask & __GFP_NOFAIL)
>  			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
> -					ALLOC_NO_WATERMARKS, ac);
> +							     ALLOC_NO_WATERMARKS, ac);
>  	}
>  out:
>  	mutex_unlock(&oom_lock);
> @@ -3287,8 +3287,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  /* Try memory compaction for high-order allocations before reclaim */
>  static struct page *
>  __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		enum compact_priority prio, enum compact_result *compact_result)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     enum compact_priority prio, enum compact_result *compact_result)
>  {
>  	struct page *page;
>  
> @@ -3297,7 +3297,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
>  
>  	current->flags |= PF_MEMALLOC;
>  	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
> -									prio);
> +					       prio);
>  	current->flags &= ~PF_MEMALLOC;
>  
>  	if (*compact_result <= COMPACT_INACTIVE)
> @@ -3389,7 +3389,7 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
>  	 */
>  check_priority:
>  	min_priority = (order > PAGE_ALLOC_COSTLY_ORDER) ?
> -			MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
> +		MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
>  
>  	if (*compact_priority > min_priority) {
>  		(*compact_priority)--;
> @@ -3403,8 +3403,8 @@ should_compact_retry(struct alloc_context *ac, int order, int alloc_flags,
>  #else
>  static inline struct page *
>  __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		enum compact_priority prio, enum compact_result *compact_result)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     enum compact_priority prio, enum compact_result *compact_result)
>  {
>  	*compact_result = COMPACT_SKIPPED;
>  	return NULL;
> @@ -3431,7 +3431,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
>  	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
>  					ac->nodemask) {
>  		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> -					ac_classzone_idx(ac), alloc_flags))
> +				      ac_classzone_idx(ac), alloc_flags))
>  			return true;
>  	}
>  	return false;
> @@ -3441,7 +3441,7 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
>  /* Perform direct synchronous page reclaim */
>  static int
>  __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> -					const struct alloc_context *ac)
> +		  const struct alloc_context *ac)
>  {
>  	struct reclaim_state reclaim_state;
>  	int progress;
> @@ -3456,7 +3456,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  	current->reclaim_state = &reclaim_state;
>  
>  	progress = try_to_free_pages(ac->zonelist, order, gfp_mask,
> -								ac->nodemask);
> +				     ac->nodemask);
>  
>  	current->reclaim_state = NULL;
>  	lockdep_clear_current_reclaim_state();
> @@ -3470,8 +3470,8 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  /* The really slow allocator path where we enter direct reclaim */
>  static inline struct page *
>  __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> -		unsigned int alloc_flags, const struct alloc_context *ac,
> -		unsigned long *did_some_progress)
> +			     unsigned int alloc_flags, const struct alloc_context *ac,
> +			     unsigned long *did_some_progress)
>  {
>  	struct page *page = NULL;
>  	bool drained = false;
> @@ -3560,8 +3560,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
>  	if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
>  		return true;
>  	if (!in_interrupt() &&
> -			((current->flags & PF_MEMALLOC) ||
> -			 unlikely(test_thread_flag(TIF_MEMDIE))))
> +	    ((current->flags & PF_MEMALLOC) ||
> +	     unlikely(test_thread_flag(TIF_MEMDIE))))
>  		return true;
>  
>  	return false;
> @@ -3625,9 +3625,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  		 * reclaimable pages?
>  		 */
>  		wmark = __zone_watermark_ok(zone, order, min_wmark,
> -				ac_classzone_idx(ac), alloc_flags, available);
> +					    ac_classzone_idx(ac), alloc_flags, available);
>  		trace_reclaim_retry_zone(z, order, reclaimable,
> -				available, min_wmark, *no_progress_loops, wmark);
> +					 available, min_wmark, *no_progress_loops, wmark);
>  		if (wmark) {
>  			/*
>  			 * If we didn't make any progress and have a lot of
> @@ -3639,7 +3639,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  				unsigned long write_pending;
>  
>  				write_pending = zone_page_state_snapshot(zone,
> -							NR_ZONE_WRITE_PENDING);
> +									 NR_ZONE_WRITE_PENDING);
>  
>  				if (2 * write_pending > reclaimable) {
>  					congestion_wait(BLK_RW_ASYNC, HZ / 10);
> @@ -3670,7 +3670,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> -						struct alloc_context *ac)
> +		       struct alloc_context *ac)
>  {
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
> @@ -3701,7 +3701,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * callers that are not in atomic context.
>  	 */
>  	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)) ==
> -				(__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
> +			 (__GFP_ATOMIC | __GFP_DIRECT_RECLAIM)))
>  		gfp_mask &= ~__GFP_ATOMIC;
>  
>  retry_cpuset:
> @@ -3724,7 +3724,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * could end up iterating over non-eligible zones endlessly.
>  	 */
>  	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +						     ac->high_zoneidx, ac->nodemask);
>  	if (!ac->preferred_zoneref->zone)
>  		goto nopage;
>  
> @@ -3749,13 +3749,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
>  	 */
>  	if (can_direct_reclaim &&
> -			(costly_order ||
> -			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
> -			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
> +	    (costly_order ||
> +	     (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
> +	    && !gfp_pfmemalloc_allowed(gfp_mask)) {
>  		page = __alloc_pages_direct_compact(gfp_mask, order,
> -						alloc_flags, ac,
> -						INIT_COMPACT_PRIORITY,
> -						&compact_result);
> +						    alloc_flags, ac,
> +						    INIT_COMPACT_PRIORITY,
> +						    &compact_result);
>  		if (page)
>  			goto got_pg;
>  
> @@ -3800,7 +3800,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
>  		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
>  		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +							     ac->high_zoneidx, ac->nodemask);
>  	}
>  
>  	/* Attempt with potentially adjusted zonelist and alloc_flags */
> @@ -3815,8 +3815,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	/* Make sure we know about allocations which stall for too long */
>  	if (time_after(jiffies, alloc_start + stall_timeout)) {
>  		warn_alloc(gfp_mask & ~__GFP_NOWARN, ac->nodemask,
> -			"page allocation stalls for %ums, order:%u",
> -			jiffies_to_msecs(jiffies - alloc_start), order);
> +			   "page allocation stalls for %ums, order:%u",
> +			   jiffies_to_msecs(jiffies - alloc_start), order);
>  		stall_timeout += 10 * HZ;
>  	}
>  
> @@ -3826,13 +3826,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  
>  	/* Try direct reclaim and then allocating */
>  	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
> -							&did_some_progress);
> +					    &did_some_progress);
>  	if (page)
>  		goto got_pg;
>  
>  	/* Try direct compaction and then allocating */
>  	page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
> -					compact_priority, &compact_result);
> +					    compact_priority, &compact_result);
>  	if (page)
>  		goto got_pg;
>  
> @@ -3858,9 +3858,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * of free memory (see __compaction_suitable)
>  	 */
>  	if (did_some_progress > 0 &&
> -			should_compact_retry(ac, order, alloc_flags,
> -				compact_result, &compact_priority,
> -				&compaction_retries))
> +	    should_compact_retry(ac, order, alloc_flags,
> +				 compact_result, &compact_priority,
> +				 &compaction_retries))
>  		goto retry;
>  
>  	/*
> @@ -3938,15 +3938,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	}
>  fail:
>  	warn_alloc(gfp_mask, ac->nodemask,
> -			"page allocation failure: order:%u", order);
> +		   "page allocation failure: order:%u", order);
>  got_pg:
>  	return page;
>  }
>  
>  static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> -		struct zonelist *zonelist, nodemask_t *nodemask,
> -		struct alloc_context *ac, gfp_t *alloc_mask,
> -		unsigned int *alloc_flags)
> +				       struct zonelist *zonelist, nodemask_t *nodemask,
> +				       struct alloc_context *ac, gfp_t *alloc_mask,
> +				       unsigned int *alloc_flags)
>  {
>  	ac->high_zoneidx = gfp_zone(gfp_mask);
>  	ac->zonelist = zonelist;
> @@ -3976,7 +3976,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  
>  /* Determine whether to spread dirty pages and what the first usable zone */
>  static inline void finalise_ac(gfp_t gfp_mask,
> -		unsigned int order, struct alloc_context *ac)
> +			       unsigned int order, struct alloc_context *ac)
>  {
>  	/* Dirty zone balancing only done in the fast path */
>  	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
> @@ -3987,7 +3987,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
>  	 * may get reset for allocations that ignore memory policies.
>  	 */
>  	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> -					ac->high_zoneidx, ac->nodemask);
> +						     ac->high_zoneidx, ac->nodemask);
>  }
>  
>  /*
> @@ -3995,7 +3995,7 @@ static inline void finalise_ac(gfp_t gfp_mask,
>   */
>  struct page *
>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
> -			struct zonelist *zonelist, nodemask_t *nodemask)
> +		       struct zonelist *zonelist, nodemask_t *nodemask)
>  {
>  	struct page *page;
>  	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> @@ -4114,7 +4114,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>  	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
> -		    __GFP_NOMEMALLOC;
> +		__GFP_NOMEMALLOC;
>  	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
>  				PAGE_FRAG_CACHE_MAX_ORDER);
>  	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
> @@ -4150,7 +4150,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>  	int offset;
>  
>  	if (unlikely(!nc->va)) {
> -refill:
> +	refill:
>  		page = __page_frag_cache_refill(nc, gfp_mask);
>  		if (!page)
>  			return NULL;
> @@ -4209,7 +4209,7 @@ void page_frag_free(void *addr)
>  EXPORT_SYMBOL(page_frag_free);
>  
>  static void *make_alloc_exact(unsigned long addr, unsigned int order,
> -		size_t size)
> +			      size_t size)
>  {
>  	if (addr) {
>  		unsigned long alloc_end = addr + (PAGE_SIZE << order);
> @@ -4378,7 +4378,7 @@ long si_mem_available(void)
>  	 * and cannot be freed. Cap this estimate at the low watermark.
>  	 */
>  	available += global_page_state(NR_SLAB_RECLAIMABLE) -
> -		     min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
> +		min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
>  
>  	if (available < 0)
>  		available = 0;
> @@ -4714,7 +4714,7 @@ static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
>  		zone = pgdat->node_zones + zone_type;
>  		if (managed_zone(zone)) {
>  			zoneref_set_zone(zone,
> -				&zonelist->_zonerefs[nr_zones++]);
> +					 &zonelist->_zonerefs[nr_zones++]);
>  			check_highest_zone(zone_type);
>  		}
>  	} while (zone_type);
> @@ -4792,8 +4792,8 @@ early_param("numa_zonelist_order", setup_numa_zonelist_order);
>   * sysctl handler for numa_zonelist_order
>   */
>  int numa_zonelist_order_handler(struct ctl_table *table, int write,
> -		void __user *buffer, size_t *length,
> -		loff_t *ppos)
> +				void __user *buffer, size_t *length,
> +				loff_t *ppos)
>  {
>  	char saved_string[NUMA_ZONELIST_ORDER_LEN];
>  	int ret;
> @@ -4952,7 +4952,7 @@ static void build_zonelists_in_zone_order(pg_data_t *pgdat, int nr_nodes)
>  			z = &NODE_DATA(node)->node_zones[zone_type];
>  			if (managed_zone(z)) {
>  				zoneref_set_zone(z,
> -					&zonelist->_zonerefs[pos++]);
> +						 &zonelist->_zonerefs[pos++]);
>  				check_highest_zone(zone_type);
>  			}
>  		}
> @@ -5056,8 +5056,8 @@ int local_memory_node(int node)
>  	struct zoneref *z;
>  
>  	z = first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
> -				   gfp_zone(GFP_KERNEL),
> -				   NULL);
> +				 gfp_zone(GFP_KERNEL),
> +				 NULL);
>  	return z->zone->node;
>  }
>  #endif
> @@ -5248,7 +5248,7 @@ void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
>   * done. Non-atomic initialization, single-pass.
>   */
>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> -		unsigned long start_pfn, enum memmap_context context)
> +				unsigned long start_pfn, enum memmap_context context)
>  {
>  	struct vmem_altmap *altmap = to_vmem_altmap(__pfn_to_phys(start_pfn));
>  	unsigned long end_pfn = start_pfn + size;
> @@ -5315,7 +5315,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		}
>  #endif
>  
> -not_early:
> +	not_early:
>  		/*
>  		 * Mark the block movable so that blocks are reserved for
>  		 * movable at startup. This will force kernel allocations
> @@ -5349,7 +5349,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
>  }
>  
>  #ifndef __HAVE_ARCH_MEMMAP_INIT
> -#define memmap_init(size, nid, zone, start_pfn) \
> +#define memmap_init(size, nid, zone, start_pfn)				\
>  	memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY)
>  #endif
>  
> @@ -5417,13 +5417,13 @@ static int zone_batchsize(struct zone *zone)
>   * exist).
>   */
>  static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
> -		unsigned long batch)
> +			   unsigned long batch)
>  {
> -       /* start with a fail safe value for batch */
> +	/* start with a fail safe value for batch */
>  	pcp->batch = 1;
>  	smp_wmb();
>  
> -       /* Update high, then batch, in order */
> +	/* Update high, then batch, in order */
>  	pcp->high = high;
>  	smp_wmb();
>  
> @@ -5460,7 +5460,7 @@ static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
>   * to the value high for the pageset p.
>   */
>  static void pageset_set_high(struct per_cpu_pageset *p,
> -				unsigned long high)
> +			     unsigned long high)
>  {
>  	unsigned long batch = max(1UL, high / 4);
>  	if ((high / 4) > (PAGE_SHIFT * 8))
> @@ -5474,8 +5474,8 @@ static void pageset_set_high_and_batch(struct zone *zone,
>  {
>  	if (percpu_pagelist_fraction)
>  		pageset_set_high(pcp,
> -			(zone->managed_pages /
> -				percpu_pagelist_fraction));
> +				 (zone->managed_pages /
> +				  percpu_pagelist_fraction));
>  	else
>  		pageset_set_batch(pcp, zone_batchsize(zone));
>  }
> @@ -5510,7 +5510,7 @@ void __init setup_per_cpu_pageset(void)
>  
>  	for_each_online_pgdat(pgdat)
>  		pgdat->per_cpu_nodestats =
> -			alloc_percpu(struct per_cpu_nodestat);
> +		alloc_percpu(struct per_cpu_nodestat);
>  }
>  
>  static __meminit void zone_pcp_init(struct zone *zone)
> @@ -5538,10 +5538,10 @@ int __meminit init_currently_empty_zone(struct zone *zone,
>  	zone->zone_start_pfn = zone_start_pfn;
>  
>  	mminit_dprintk(MMINIT_TRACE, "memmap_init",
> -			"Initialising map node %d zone %lu pfns %lu -> %lu\n",
> -			pgdat->node_id,
> -			(unsigned long)zone_idx(zone),
> -			zone_start_pfn, (zone_start_pfn + size));
> +		       "Initialising map node %d zone %lu pfns %lu -> %lu\n",
> +		       pgdat->node_id,
> +		       (unsigned long)zone_idx(zone),
> +		       zone_start_pfn, (zone_start_pfn + size));
>  
>  	zone_init_free_lists(zone);
>  	zone->initialized = 1;
> @@ -5556,7 +5556,7 @@ int __meminit init_currently_empty_zone(struct zone *zone,
>   * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
>   */
>  int __meminit __early_pfn_to_nid(unsigned long pfn,
> -					struct mminit_pfnnid_cache *state)
> +				 struct mminit_pfnnid_cache *state)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	int nid;
> @@ -5595,8 +5595,8 @@ void __init free_bootmem_with_active_regions(int nid, unsigned long max_low_pfn)
>  
>  		if (start_pfn < end_pfn)
>  			memblock_free_early_nid(PFN_PHYS(start_pfn),
> -					(end_pfn - start_pfn) << PAGE_SHIFT,
> -					this_nid);
> +						(end_pfn - start_pfn) << PAGE_SHIFT,
> +						this_nid);
>  	}
>  }
>  
> @@ -5628,7 +5628,7 @@ void __init sparse_memory_present_with_active_regions(int nid)
>   * PFNs will be 0.
>   */
>  void __meminit get_pfn_range_for_nid(unsigned int nid,
> -			unsigned long *start_pfn, unsigned long *end_pfn)
> +				     unsigned long *start_pfn, unsigned long *end_pfn)
>  {
>  	unsigned long this_start_pfn, this_end_pfn;
>  	int i;
> @@ -5658,7 +5658,7 @@ static void __init find_usable_zone_for_movable(void)
>  			continue;
>  
>  		if (arch_zone_highest_possible_pfn[zone_index] >
> -				arch_zone_lowest_possible_pfn[zone_index])
> +		    arch_zone_lowest_possible_pfn[zone_index])
>  			break;
>  	}
>  
> @@ -5677,11 +5677,11 @@ static void __init find_usable_zone_for_movable(void)
>   * zones within a node are in order of monotonic increases memory addresses
>   */
>  static void __meminit adjust_zone_range_for_zone_movable(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn)
> +							 unsigned long zone_type,
> +							 unsigned long node_start_pfn,
> +							 unsigned long node_end_pfn,
> +							 unsigned long *zone_start_pfn,
> +							 unsigned long *zone_end_pfn)
>  {
>  	/* Only adjust if ZONE_MOVABLE is on this node */
>  	if (zone_movable_pfn[nid]) {
> @@ -5689,15 +5689,15 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
>  		if (zone_type == ZONE_MOVABLE) {
>  			*zone_start_pfn = zone_movable_pfn[nid];
>  			*zone_end_pfn = min(node_end_pfn,
> -				arch_zone_highest_possible_pfn[movable_zone]);
> +					    arch_zone_highest_possible_pfn[movable_zone]);
>  
> -		/* Adjust for ZONE_MOVABLE starting within this range */
> +			/* Adjust for ZONE_MOVABLE starting within this range */
>  		} else if (!mirrored_kernelcore &&
> -			*zone_start_pfn < zone_movable_pfn[nid] &&
> -			*zone_end_pfn > zone_movable_pfn[nid]) {
> +			   *zone_start_pfn < zone_movable_pfn[nid] &&
> +			   *zone_end_pfn > zone_movable_pfn[nid]) {
>  			*zone_end_pfn = zone_movable_pfn[nid];
>  
> -		/* Check if this whole range is within ZONE_MOVABLE */
> +			/* Check if this whole range is within ZONE_MOVABLE */
>  		} else if (*zone_start_pfn >= zone_movable_pfn[nid])
>  			*zone_start_pfn = *zone_end_pfn;
>  	}
> @@ -5708,12 +5708,12 @@ static void __meminit adjust_zone_range_for_zone_movable(int nid,
>   * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
>   */
>  static unsigned long __meminit zone_spanned_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn,
> -					unsigned long *ignored)
> +							  unsigned long zone_type,
> +							  unsigned long node_start_pfn,
> +							  unsigned long node_end_pfn,
> +							  unsigned long *zone_start_pfn,
> +							  unsigned long *zone_end_pfn,
> +							  unsigned long *ignored)
>  {
>  	/* When hotadd a new node from cpu_up(), the node should be empty */
>  	if (!node_start_pfn && !node_end_pfn)
> @@ -5723,8 +5723,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
>  	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
>  	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
>  	adjust_zone_range_for_zone_movable(nid, zone_type,
> -				node_start_pfn, node_end_pfn,
> -				zone_start_pfn, zone_end_pfn);
> +					   node_start_pfn, node_end_pfn,
> +					   zone_start_pfn, zone_end_pfn);
>  
>  	/* Check that this node has pages within the zone's required range */
>  	if (*zone_end_pfn < node_start_pfn || *zone_start_pfn > node_end_pfn)
> @@ -5743,8 +5743,8 @@ static unsigned long __meminit zone_spanned_pages_in_node(int nid,
>   * then all holes in the requested range will be accounted for.
>   */
>  unsigned long __meminit __absent_pages_in_range(int nid,
> -				unsigned long range_start_pfn,
> -				unsigned long range_end_pfn)
> +						unsigned long range_start_pfn,
> +						unsigned long range_end_pfn)
>  {
>  	unsigned long nr_absent = range_end_pfn - range_start_pfn;
>  	unsigned long start_pfn, end_pfn;
> @@ -5766,17 +5766,17 @@ unsigned long __meminit __absent_pages_in_range(int nid,
>   * It returns the number of pages frames in memory holes within a range.
>   */
>  unsigned long __init absent_pages_in_range(unsigned long start_pfn,
> -							unsigned long end_pfn)
> +					   unsigned long end_pfn)
>  {
>  	return __absent_pages_in_range(MAX_NUMNODES, start_pfn, end_pfn);
>  }
>  
>  /* Return the number of page frames in holes in a zone on a node */
>  static unsigned long __meminit zone_absent_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *ignored)
> +							 unsigned long zone_type,
> +							 unsigned long node_start_pfn,
> +							 unsigned long node_end_pfn,
> +							 unsigned long *ignored)
>  {
>  	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
>  	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
> @@ -5791,8 +5791,8 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
>  	zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
>  
>  	adjust_zone_range_for_zone_movable(nid, zone_type,
> -			node_start_pfn, node_end_pfn,
> -			&zone_start_pfn, &zone_end_pfn);
> +					   node_start_pfn, node_end_pfn,
> +					   &zone_start_pfn, &zone_end_pfn);
>  
>  	/* If this node has no page within this zone, return 0. */
>  	if (zone_start_pfn == zone_end_pfn)
> @@ -5830,12 +5830,12 @@ static unsigned long __meminit zone_absent_pages_in_node(int nid,
>  
>  #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>  static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
> -					unsigned long zone_type,
> -					unsigned long node_start_pfn,
> -					unsigned long node_end_pfn,
> -					unsigned long *zone_start_pfn,
> -					unsigned long *zone_end_pfn,
> -					unsigned long *zones_size)
> +								 unsigned long zone_type,
> +								 unsigned long node_start_pfn,
> +								 unsigned long node_end_pfn,
> +								 unsigned long *zone_start_pfn,
> +								 unsigned long *zone_end_pfn,
> +								 unsigned long *zones_size)
>  {
>  	unsigned int zone;
>  
> @@ -5849,10 +5849,10 @@ static inline unsigned long __meminit zone_spanned_pages_in_node(int nid,
>  }
>  
>  static inline unsigned long __meminit zone_absent_pages_in_node(int nid,
> -						unsigned long zone_type,
> -						unsigned long node_start_pfn,
> -						unsigned long node_end_pfn,
> -						unsigned long *zholes_size)
> +								unsigned long zone_type,
> +								unsigned long node_start_pfn,
> +								unsigned long node_end_pfn,
> +								unsigned long *zholes_size)
>  {
>  	if (!zholes_size)
>  		return 0;
> @@ -5883,8 +5883,8 @@ static void __meminit calculate_node_totalpages(struct pglist_data *pgdat,
>  						  &zone_end_pfn,
>  						  zones_size);
>  		real_size = size - zone_absent_pages_in_node(pgdat->node_id, i,
> -						  node_start_pfn, node_end_pfn,
> -						  zholes_size);
> +							     node_start_pfn, node_end_pfn,
> +							     zholes_size);
>  		if (size)
>  			zone->zone_start_pfn = zone_start_pfn;
>  		else
> @@ -6143,7 +6143,7 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
>  }
>  
>  void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
> -		unsigned long node_start_pfn, unsigned long *zholes_size)
> +				      unsigned long node_start_pfn, unsigned long *zholes_size)
>  {
>  	pg_data_t *pgdat = NODE_DATA(nid);
>  	unsigned long start_pfn = 0;
> @@ -6428,12 +6428,12 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  			if (start_pfn < usable_startpfn) {
>  				unsigned long kernel_pages;
>  				kernel_pages = min(end_pfn, usable_startpfn)
> -								- start_pfn;
> +					- start_pfn;
>  
>  				kernelcore_remaining -= min(kernel_pages,
> -							kernelcore_remaining);
> +							    kernelcore_remaining);
>  				required_kernelcore -= min(kernel_pages,
> -							required_kernelcore);
> +							   required_kernelcore);
>  
>  				/* Continue if range is now fully accounted */
>  				if (end_pfn <= usable_startpfn) {
> @@ -6466,7 +6466,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  			 * satisfied
>  			 */
>  			required_kernelcore -= min(required_kernelcore,
> -								size_pages);
> +						   size_pages);
>  			kernelcore_remaining -= size_pages;
>  			if (!kernelcore_remaining)
>  				break;
> @@ -6534,9 +6534,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  
>  	/* Record where the zone boundaries are */
>  	memset(arch_zone_lowest_possible_pfn, 0,
> -				sizeof(arch_zone_lowest_possible_pfn));
> +	       sizeof(arch_zone_lowest_possible_pfn));
>  	memset(arch_zone_highest_possible_pfn, 0,
> -				sizeof(arch_zone_highest_possible_pfn));
> +	       sizeof(arch_zone_highest_possible_pfn));
>  
>  	start_pfn = find_min_pfn_with_active_regions();
>  
> @@ -6562,14 +6562,14 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  			continue;
>  		pr_info("  %-8s ", zone_names[i]);
>  		if (arch_zone_lowest_possible_pfn[i] ==
> -				arch_zone_highest_possible_pfn[i])
> +		    arch_zone_highest_possible_pfn[i])
>  			pr_cont("empty\n");
>  		else
>  			pr_cont("[mem %#018Lx-%#018Lx]\n",
>  				(u64)arch_zone_lowest_possible_pfn[i]
> -					<< PAGE_SHIFT,
> +				<< PAGE_SHIFT,
>  				((u64)arch_zone_highest_possible_pfn[i]
> -					<< PAGE_SHIFT) - 1);
> +				 << PAGE_SHIFT) - 1);
>  	}
>  
>  	/* Print out the PFNs ZONE_MOVABLE begins at in each node */
> @@ -6577,7 +6577,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  	for (i = 0; i < MAX_NUMNODES; i++) {
>  		if (zone_movable_pfn[i])
>  			pr_info("  Node %d: %#018Lx\n", i,
> -			       (u64)zone_movable_pfn[i] << PAGE_SHIFT);
> +				(u64)zone_movable_pfn[i] << PAGE_SHIFT);
>  	}
>  
>  	/* Print out the early node map */
> @@ -6593,7 +6593,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>  	for_each_online_node(nid) {
>  		pg_data_t *pgdat = NODE_DATA(nid);
>  		free_area_init_node(nid, NULL,
> -				find_min_pfn_for_node(nid), NULL);
> +				    find_min_pfn_for_node(nid), NULL);
>  
>  		/* Any memory on that node */
>  		if (pgdat->node_present_pages)
> @@ -6711,14 +6711,14 @@ void __init mem_init_print_info(const char *str)
>  	 *    please refer to arch/tile/kernel/vmlinux.lds.S.
>  	 * 3) .rodata.* may be embedded into .text or .data sections.
>  	 */
> -#define adj_init_size(start, end, size, pos, adj) \
> -	do { \
> -		if (start <= pos && pos < end && size > adj) \
> -			size -= adj; \
> +#define adj_init_size(start, end, size, pos, adj)		\
> +	do {							\
> +		if (start <= pos && pos < end && size > adj)	\
> +			size -= adj;				\
>  	} while (0)
>  
>  	adj_init_size(__init_begin, __init_end, init_data_size,
> -		     _sinittext, init_code_size);
> +		      _sinittext, init_code_size);
>  	adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
>  	adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
>  	adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
> @@ -6762,7 +6762,7 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
>  void __init free_area_init(unsigned long *zones_size)
>  {
>  	free_area_init_node(0, zones_size,
> -			__pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
> +			    __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
>  }
>  
>  static int page_alloc_cpu_dead(unsigned int cpu)
> @@ -6992,7 +6992,7 @@ int __meminit init_per_zone_wmark_min(void)
>  			min_free_kbytes = 65536;
>  	} else {
>  		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
> -				new_min_free_kbytes, user_min_free_kbytes);
> +			new_min_free_kbytes, user_min_free_kbytes);
>  	}
>  	setup_per_zone_wmarks();
>  	refresh_zone_stat_thresholds();
> @@ -7013,7 +7013,7 @@ core_initcall(init_per_zone_wmark_min)
>   *	changes.
>   */
>  int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +				   void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7029,7 +7029,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
>  }
>  
>  int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					  void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7054,12 +7054,12 @@ static void setup_min_unmapped_ratio(void)
>  
>  	for_each_zone(zone)
>  		zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
> -				sysctl_min_unmapped_ratio) / 100;
> +							 sysctl_min_unmapped_ratio) / 100;
>  }
>  
>  
>  int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					     void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7082,11 +7082,11 @@ static void setup_min_slab_ratio(void)
>  
>  	for_each_zone(zone)
>  		zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
> -				sysctl_min_slab_ratio) / 100;
> +						     sysctl_min_slab_ratio) / 100;
>  }
>  
>  int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					 void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	int rc;
>  
> @@ -7110,7 +7110,7 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
>   * if in function of the boot time zone sizes.
>   */
>  int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	proc_dointvec_minmax(table, write, buffer, length, ppos);
>  	setup_per_zone_lowmem_reserve();
> @@ -7123,7 +7123,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
>   * pagelist can have before it gets flushed back to buddy allocator.
>   */
>  int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
> -	void __user *buffer, size_t *length, loff_t *ppos)
> +					    void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	struct zone *zone;
>  	int old_percpu_pagelist_fraction;
> @@ -7153,7 +7153,7 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
>  
>  		for_each_possible_cpu(cpu)
>  			pageset_set_high_and_batch(zone,
> -					per_cpu_ptr(zone->pageset, cpu));
> +						   per_cpu_ptr(zone->pageset, cpu));
>  	}
>  out:
>  	mutex_unlock(&pcp_batch_high_lock);
> @@ -7461,7 +7461,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>  		}
>  
>  		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
> -							&cc->migratepages);
> +							     &cc->migratepages);
>  		cc->nr_migratepages -= nr_reclaimed;
>  
>  		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
> @@ -7645,7 +7645,7 @@ void __meminit zone_pcp_update(struct zone *zone)
>  	mutex_lock(&pcp_batch_high_lock);
>  	for_each_possible_cpu(cpu)
>  		pageset_set_high_and_batch(zone,
> -				per_cpu_ptr(zone->pageset, cpu));
> +					   per_cpu_ptr(zone->pageset, cpu));
>  	mutex_unlock(&pcp_batch_high_lock);
>  }
>  #endif
> -- 
> 2.10.0.rc2.1.g053435c
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/15] mm: page_alloc: 80 column neatening
  2017-03-16  2:00   ` Joe Perches
@ 2017-03-16  8:59     ` Sergey Senozhatsky
  -1 siblings, 0 replies; 40+ messages in thread
From: Sergey Senozhatsky @ 2017-03-16  8:59 UTC (permalink / raw)
  To: Joe Perches; +Cc: Andrew Morton, linux-kernel, linux-mm

On (03/15/17 19:00), Joe Perches wrote:
> Wrap some lines to make it easier to read.

Hm, I thought the general rule was "don't fix styles in code that has
left /staging", because it adds noise, messes with git annotate, etc.
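
(As an aside, most of the annotate noise from whitespace-only changes can
be sidestepped with git's whitespace-ignoring flags; a minimal sketch,
where the revision range is only a placeholder:

	$ git diff -w <before>..<after> -- mm/page_alloc.c	# confirm no functional change
	$ git blame -w mm/page_alloc.c				# ignore whitespace when assigning blame
)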

	-ss

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
  2017-03-16  8:02     ` Michal Hocko
@ 2017-03-16 10:29       ` Joe Perches
  -1 siblings, 0 replies; 40+ messages in thread
From: Joe Perches @ 2017-03-16 10:29 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Andrew Morton, linux-kernel, linux-mm

On Thu, 2017-03-16 at 09:02 +0100, Michal Hocko wrote:
> On Wed 15-03-17 18:59:59, Joe Perches wrote:
> > whitespace changes only - git diff -w shows no difference
> 
> what is the point of this whitespace noise? Does it help readability?

Yes.  Consistency helps.

> To be honest I do not think so.

Opinions always vary.

>  Such a patch would make sense only if it
> was a part of a larger series where other patches would actually do
> something useful.

Do please read the 0/n introduction to this series.

And do remember to always strip the useless stuff you
unnecessarily quoted too.  66kb in this case.

There was a separate patch series 0/3 that actually did
the more useful stuff.

This patch series was purposely separated.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/15] mm: page_alloc: align arguments to parenthesis
  2017-03-16 10:29       ` Joe Perches
@ 2017-03-16 10:33         ` Michal Hocko
  -1 siblings, 0 replies; 40+ messages in thread
From: Michal Hocko @ 2017-03-16 10:33 UTC (permalink / raw)
  To: Joe Perches; +Cc: Andrew Morton, linux-kernel, linux-mm

On Thu 16-03-17 03:29:27, Joe Perches wrote:
> On Thu, 2017-03-16 at 09:02 +0100, Michal Hocko wrote:
> > On Wed 15-03-17 18:59:59, Joe Perches wrote:
> > > whitespace changes only - git diff -w shows no difference
> > 
> > what is the point of this whitespace noise? Does it help readability?
> 
> Yes.  Consistency helps.

Causing context conflicts doesn't though.
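
(Pure-whitespace context conflicts can often be resolved mechanically; a
sketch of git's space-ignoring merge strategy options, with <topic> as a
placeholder branch name:

	$ git merge -X ignore-space-change <topic>
	$ git rebase -m -X ignore-space-change <topic>
)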

> > To be honest I do not think so.
> 
> Opinions always vary.

Yes, they vary, and I do not like this. If you find somebody to ack this
then I will not complain, but I would appreciate it if more useful stuff
were done in mm/page_alloc.c.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2017-03-16 10:33 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-16  1:59 [PATCH 00/15] mm: page_alloc: style neatenings Joe Perches
2017-03-16  1:59 ` Joe Perches
2017-03-16  1:59 ` [PATCH 01/15] mm: page_alloc: whitespace neatening Joe Perches
2017-03-16  1:59   ` Joe Perches
2017-03-16  1:59 ` [PATCH 02/15] mm: page_alloc: align arguments to parenthesis Joe Perches
2017-03-16  1:59   ` Joe Perches
2017-03-16  8:02   ` Michal Hocko
2017-03-16  8:02     ` Michal Hocko
2017-03-16 10:29     ` Joe Perches
2017-03-16 10:29       ` Joe Perches
2017-03-16 10:33       ` Michal Hocko
2017-03-16 10:33         ` Michal Hocko
2017-03-16  2:00 ` [PATCH 03/15] mm: page_alloc: fix brace positions Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 04/15] mm: page_alloc: fix blank lines Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 05/15] mm: page_alloc: Move __meminitdata and __initdata uses Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 06/15] mm: page_alloc: Use unsigned int instead of unsigned Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 07/15] mm: page_alloc: Move labels to column 1 Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 08/15] mm: page_alloc: Fix typo acording -> according & the the -> to the Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 09/15] mm: page_alloc: Use the common commenting style Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 10/15] mm: page_alloc: 80 column neatening Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  8:59   ` Sergey Senozhatsky
2017-03-16  8:59     ` Sergey Senozhatsky
2017-03-16  2:00 ` [PATCH 11/15] mm: page_alloc: Move EXPORT_SYMBOL uses Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 12/15] mm: page_alloc: Avoid pointer comparisons to NULL Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 13/15] mm: page_alloc: Remove unnecessary parentheses Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 14/15] mm: page_alloc: Use octal permissions Joe Perches
2017-03-16  2:00   ` Joe Perches
2017-03-16  2:00 ` [PATCH 15/15] mm: page_alloc: Move logical continuations to EOL Joe Perches
2017-03-16  2:00   ` Joe Perches

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.