* [PATCH 0/11] Reduce compaction-related stalls and improve asynchronous migration of dirty pages v5
From: Mel Gorman @ 2011-12-01 17:36 UTC
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

Short summary: Stalls that occur when a USB stick using VFAT is in
use are reduced by this series. If you are experiencing this problem,
please test and report back.

Changelog since V4
o Added reviewed-bys, credited Andrea properly for sync-light
o Allow dirty pages without mappings to be considered for migration
o Bound the number of pages freed for compaction
o Isolate PageReclaim pages on their own LRU list

This is against 3.2-rc3 and follows on from discussions on "mm: Do
not stall in synchronous compaction for THP allocations" and "[RFC
PATCH 0/5] Reduce compaction-related stalls". Initially, the proposed
patch eliminated stalls due to compaction, which sometimes resulted
in user-visible interactivity problems in browsers, by simply never
using sync compaction. The downside was that THP allocation success
rates were lower because dirty pages were not being migrated, as
reported by Andrea. His approach to fixing this was nacked on the
grounds that it reverted fixes from Rik that reduced the number of
pages reclaimed, as the extra reclaim severely impacted his workloads'
performance.

This series attempts to reconcile the requirements of maximising THP
usage without stalling in a user-visible fashion due to compaction,
and without cheating by reclaiming an excessive number of pages.

Patch 1 partially reverts commit 39deaf85 to allow migration to isolate
	dirty pages. This is because migration can move some dirty
	pages without blocking.
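
	For illustration, the generic path already moves such pages by
	copying data and transferring the dirty bit; below is a
	simplified sketch mirroring migrate_page() in mm/migrate.c
	(illustrative only, not the patch itself):

        /*
         * Simplified sketch: moving a dirty page without blocking. No
         * ->writepage is involved; data and flags (including PageDirty)
         * are simply copied to the new page.
         */
        static int sketch_migrate_page(struct address_space *mapping,
                        struct page *newpage, struct page *page)
        {
                int rc;

                /* Replace the page in the radix tree; fails if pinned */
                rc = migrate_page_move_mapping(mapping, newpage, page);
                if (rc)
                        return rc;

                migrate_page_copy(newpage, page);
                return 0;
        }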

Patch 2 notes that the /proc/sys/vm/compact_memory handler is not using
	synchronous compaction when it should be. This is unrelated
	to the reported stalls but is worth fixing.

Patch 3 checks if we isolated a compound page during lumpy scan and
	accounts for it properly. For the most part, this affects
	tracing so it's unrelated to the stalls but worth fixing.

Patch 4 notes that it is possible to abort reclaim early for compaction
	and return 0 to the page allocator potentially entering the
	"may oom" path. This has not been observed in practice but
	the rest of the series potentially makes it easier to happen.

Patch 5 adds a sync parameter to the migratepage callback and gives
	the callback responsibility for migrating the page without
	blocking if sync==false. For example, fallback_migrate_page
	will not call writepage if sync==false. This increases the
	number of pages that can be handled by asynchronous compaction
	thereby reducing stalls.
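
	A minimal sketch of what a sync-aware fallback might look like
	(hypothetical code assuming the existing writeout() and
	migrate_page() helpers in mm/migrate.c; not the patch itself):

        /*
         * Hypothetical sketch: the callback decides whether it may block.
         * With sync==false, a dirty page is left in place rather than
         * written out, so asynchronous compaction never calls ->writepage.
         */
        static int fallback_migrate_page(struct address_space *mapping,
                        struct page *newpage, struct page *page, bool sync)
        {
                if (PageDirty(page)) {
                        if (!sync)
                                return -EBUSY;  /* caller retries or skips */
                        return writeout(mapping, page);
                }

                /* Clean pages take the normal copy path */
                return migrate_page(mapping, newpage, page);
        }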

Patch 6 restores filter-awareness to isolate_lru_page for migration.
	In practice, it means that pages under writeback and pages
	without a ->migratepage callback will not be isolated
	for migration.

Patch 7 avoids calling direct reclaim if compaction is deferred but
	makes sure that compaction is only deferred if sync
	compaction was used.

Patch 8 introduces a sync-light migration mechanism that sync compaction
	uses. The objective is to allow some stalls but to not call
	->writepage which can lead to significant user-visible stalls.
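
	Conceptually, this splits migration into three levels of
	aggression. A sketch of the distinction (the names below are
	illustrative):

        /*
         * Illustrative sketch of the migration modes implied by sync-light.
         */
        enum migrate_mode {
                MIGRATE_ASYNC,          /* never block */
                MIGRATE_SYNC_LIGHT,     /* allow some blocking, e.g. on the
                                           page lock, but never ->writepage */
                MIGRATE_SYNC,           /* may block and write pages back */
        };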

Patch 9 notes that while we want to abort reclaim ASAP to allow
	compaction to go ahead, we leave only a very small window of
	opportunity for compaction to run. This patch allows more pages
	to be freed by reclaim but bounds the number to a reasonable
	level based on the high watermark on each zone.
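
	As an illustration of the kind of bound described here (a
	hypothetical helper; the real logic lives in mm/vmscan.c and
	may differ):

        /*
         * Hypothetical sketch: keep reclaiming on behalf of compaction
         * until enough pages are free for an order-sized allocation,
         * using the zone's high watermark as a reasonable upper bound.
         */
        static bool reclaim_more_for_compaction(struct zone *zone, int order)
        {
                unsigned long target;

                target = high_wmark_pages(zone) + (2UL << order);
                return zone_page_state(zone, NR_FREE_PAGES) < target;
        }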

Patch 10 allows slabs to be shrunk even after compaction_ready() is
	true for one zone. This is to avoid a problem whereby a single
	small zone can abort reclaim even though no pages have been
	reclaimed and no suitably large zone is in a usable state.

Patch 11 fixes a problem with the rate of page scanning. As reclaim
	rarely stalls on pages under writeback, scan rates are very
	high. This is particularly true for direct reclaim, which is
	not calling writepage. The vmstat figures implied that much of
	this was busy work with PageReclaim pages marked for immediate
	reclaim. This patch is a prototype that moves these pages to
	their own LRU list.
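
	Conceptually, the prototype extends the LRU enumeration with an
	extra list (a sketch only; the LRU_IMMEDIATE name also appears
	in the vmstat notes below):

        /*
         * Illustrative sketch: pages marked PageReclaim move to their own
         * list so the normal LRUs are not repeatedly rescanned while the
         * pages wait for writeback to complete.
         */
        enum lru_list {
                LRU_INACTIVE_ANON,
                LRU_ACTIVE_ANON,
                LRU_INACTIVE_FILE,
                LRU_ACTIVE_FILE,
                LRU_IMMEDIATE,          /* PageReclaim, writeback pending */
                LRU_UNEVICTABLE,
                NR_LRU_LISTS
        };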

This has been tested and other than 2 USB keys getting trashed,
nothing horrible fell out.  That said, patch 11 was hacked together
pretty quickly and alternative ideas on how it could be implemented
better are welcome. I'm unhappy with the rescue logic in particular
but did not want to delay the rest of the series because of it and
wanted to include it to illustrate what it does to System CPU time.

What is of critical importance is that stalls due to compaction are
massively reduced even though sync compaction is still allowed. Testing
from people complaining about stalls when copying to USB sticks with
THP enabled is particularly welcome.

The following tests all involve THP usage and USB keys in some
way. Each test follows this type of pattern

1. Read from some fast storage, be it a raw device or file. Each time
   the copy finishes, start again until the test ends
2. Write a large file to a filesystem on a USB stick. Each time the copy
   finishes, start again until the test ends
3. When memory is low, start an alloc process that creates a mapping
   the size of physical memory to stress THP allocation. This is the
   "real" part of the test and the part that is meant to trigger
   stalls when THP is enabled. Copying continues in the background.
   (A sketch of this step follows the list.)
4. Record the CPU usage and execution time of the alloc process
5. Record the number of THP allocs and fallbacks as well as the number of THP
   pages in use at the end of the test just before alloc exited
6. Run the test 5 times to get an idea of variability
7. Between each run, sync is run and caches dropped and the test
   waits until nr_dirty is a small number to avoid interference
   or caching between iterations that would skew the figures.
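
For reference, step 3 can be approximated by a small program like the
following (a hypothetical reconstruction of the alloc process, not the
actual MMTests code):

        /*
         * Hypothetical reconstruction of the "alloc" step: map roughly
         * the size of physical memory and touch every page so the kernel
         * must attempt THP allocations under memory pressure.
         */
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
                long pages = sysconf(_SC_PHYS_PAGES);
                long pagesize = sysconf(_SC_PAGE_SIZE);
                size_t length = (size_t)pages * pagesize;
                char *map;

                map = mmap(NULL, length, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (map == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }

                /* Hint THP in case it is not enabled system-wide */
                madvise(map, length, MADV_HUGEPAGE);

                memset(map, 1, length); /* fault in every page */
                munmap(map, length);
                return 0;
        }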

The individual tests were then

writebackCPDeviceBasevfat
	Disable THP, read from a raw device (sda), vfat on USB stick
writebackCPDeviceBaseext4
	Disable THP, read from a raw device (sda), ext4 on USB stick
writebackCPDevicevfat
	THP enabled, read from a raw device (sda), vfat on USB stick
writebackCPDeviceext4
	THP enabled, read from a raw device (sda), ext4 on USB stick
writebackCPFilevfat
	THP enabled, read from a file on fast storage and USB, both vfat
writebackCPFileext4
	THP enabled, read from a file on fast storage and USB, both ext4

The kernels tested were

vanilla		3.2-rc3
lessdirect	Patches 1-7
synclight	Patches 1-8
freemore	Patches 1-9
revertAbort	Patches 1-10 (The name revert is misleading in retrospect)
immediate	Patches 1-11
andrea		The 8 patches Andrea posted as a basis of comparison

The results are unfortunately very long. I'll start with the case
where we are not using THP at all.

writebackCPDeviceBasevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time        47.95 (    0.00%)   51.55 (   -7.50%)   48.72 (   -1.61%)   48.19 (   -0.49%)   51.82 (   -8.06%)    4.73 (   90.13%)   48.08 (   -0.26%)
+/-                 5.27 (    0.00%)    4.59 (   12.91%)    4.82 (    8.60%)    4.67 (   11.44%)    4.89 (    7.20%)    7.56 (  -43.40%)    5.73 (   -8.68%)
User Time           0.05 (    0.00%)    0.06 (  -11.11%)    0.06 (  -14.81%)    0.07 (  -22.22%)    0.08 (  -40.74%)    0.06 (  -11.11%)    0.06 (  -11.11%)
+/-                 0.01 (    0.00%)    0.02 (  -23.36%)    0.02 (  -17.95%)    0.02 (  -78.15%)    0.01 (   41.02%)    0.01 (    6.75%)    0.01 (   53.37%)
Elapsed Time       50.60 (    0.00%)   52.36 (   -3.48%)   50.68 (   -0.15%)   51.00 (   -0.79%)   53.72 (   -6.15%)   11.48 (   77.31%)   50.45 (    0.30%)
+/-                 5.53 (    0.00%)    4.57 (   17.34%)    4.47 (   19.18%)    5.03 (    9.08%)    4.80 (   13.11%)    6.59 (  -19.17%)    5.51 (    0.33%)
THP Active          0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Alloc         0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Fallback      0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)        644.51    702.99    662.61    643.68    708.07     68.34    651.44
Total Elapsed Time (seconds)                408.30    414.63    415.78    419.48    438.63    209.57    426.63

                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         1.28 (    0.00%)    1.63 (  -27.19%)    1.20 (    5.94%)    1.38 (   -7.50%)    1.34 (   -4.53%)    0.91 (   29.06%)    1.50 (  -17.34%)
+/-                 0.72 (    0.00%)    0.16 (   78.24%)    0.33 (   54.54%)    0.54 (   24.48%)    0.38 (   47.33%)    0.11 (   84.30%)    0.45 (   37.83%)
User Time           0.08 (    0.00%)    0.07 (   15.00%)    0.08 (    2.50%)    0.07 (   17.50%)    0.07 (   12.50%)    0.07 (    7.50%)    0.07 (   15.00%)
+/-                 0.01 (    0.00%)    0.02 (  -21.66%)    0.01 (    6.19%)    0.02 (  -31.15%)    0.01 (   10.56%)    0.01 (   15.15%)    0.01 (   17.54%)
Elapsed Time      143.00 (    0.00%)   50.97 (   64.36%)  131.85 (    7.80%)  113.76 (   20.45%)  140.47 (    1.76%)   14.12 (   90.12%)   90.66 (   36.60%)
+/-                55.83 (    0.00%)   44.46 (   20.37%)   11.70 (   79.05%)   64.86 (  -16.16%)   18.42 (   67.02%)    5.94 (   89.36%)   66.22 (  -18.61%)
THP Active          0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Alloc         0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Fallback      0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         23.25     25.42     21.45     22.21     20.48     15.27     26.22
Total Elapsed Time (seconds)               1219.15    775.84   1225.77   1345.05   1128.21    734.50   1119.47

The THP figures are all 0 because THP was disabled for these tests. The
main thing to watch is the elapsed times and how they compare to the
times when THP is enabled later. One may note that vfat completed far
faster than ext4, but also that the system CPU usage for vfat was way
higher. Looking at the vmstat figures, vfat is scanning far more
aggressively, so I expect what is happening is that ext4 is getting
stalled writing back pages.

writebackCPDevicevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         2.42 (    0.00%)    4.64 (  -92.06%)   48.13 (-1890.57%)   48.06 (-1887.43%)   46.05 (-1804.47%)    4.07 (  -68.24%)   46.78 (-1834.57%)
+/-                 3.17 (    0.00%)    6.34 (  -99.91%)    4.33 (  -36.54%)    3.89 (  -22.58%)    3.21 (   -1.16%)    5.83 (  -83.70%)    9.85 ( -210.54%)
User Time           0.06 (    0.00%)    0.06 (    0.00%)    0.07 (  -13.79%)    0.06 (   -3.45%)    0.04 (   24.14%)    0.07 (  -17.24%)    0.03 (   41.38%)
+/-                 0.00 (    0.00%)    0.01 (  -87.08%)    0.02 ( -483.10%)    0.00 (    0.00%)    0.01 ( -154.95%)    0.02 ( -330.12%)    0.01 ( -100.00%)
Elapsed Time     1627.12 (    0.00%) 2187.36 (  -34.43%)   51.04 (   96.86%)   49.16 (   96.98%)   74.48 (   95.42%)   18.53 (   98.86%)  453.58 (   72.12%)
+/-                77.40 (    0.00%)  561.41 ( -625.30%)    4.57 (   94.10%)    3.75 (   95.16%)   16.18 (   79.09%)   10.44 (   86.52%)   64.07 (   17.23%)
THP Active         12.20 (    0.00%)   20.00 (  163.93%)   49.40 (  404.92%)   61.00 (  500.00%)   62.00 (  508.20%)   39.40 (  322.95%)   78.00 (  639.34%)
+/-                 7.55 (    0.00%)   15.94 (  211.17%)   23.10 (  306.03%)   37.12 (  491.79%)   42.53 (  563.52%)   31.10 (  412.12%)   47.80 (  633.40%)
Fault Alloc        28.80 (    0.00%)   44.80 (  155.56%)  142.60 (  495.14%)  140.20 (  486.81%)  161.60 (  561.11%)  181.20 (  629.17%)  329.60 ( 1144.44%)
+/-                13.17 (    0.00%)    5.46 (   41.43%)   32.38 (  245.90%)   35.37 (  268.63%)   89.29 (  678.12%)   59.04 (  448.43%)  111.90 (  849.90%)
Fault Fallback    974.40 (    0.00%)  958.60 (    1.62%)  860.40 (   11.70%)  862.80 (   11.45%)  841.60 (   13.63%)  822.00 (   15.64%)  673.80 (   30.85%)
+/-                12.94 (    0.00%)    5.35 (   58.64%)   32.38 ( -150.21%)   35.37 ( -173.33%)   88.89 ( -586.98%)   59.17 ( -357.25%)  111.96 ( -765.26%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       1228.79   1683.09    656.72    644.92    731.17     56.95   1804.35
Total Elapsed Time (seconds)               8314.20  11126.44    428.74    410.35    525.44    246.16   2459.52

The first thing to note is the "Elapsed Time" for the vanilla kernel:
1627 seconds versus 50 seconds with THP disabled, which might explain
the reports of USB stalls with THP enabled. Moving to synclight and
avoiding writeback in compaction brings THP in line with base pages
but at the cost of System CPU usage. Isolating PageReclaim pages on
their own LRU cuts the System CPU usage down.

It is very interesting to note that with the "immediate" kernel the
completion time is better than the base page case. I do not know
exactly why that is, but it may be due to batch reclaiming more pages
when THP is used.

The "Fault Alloc" success rate figures are also improved. The vanilla
kernel only managed to allocate 28.8 pages on average over the course
of 5 iterations. synclight brings that up to 142.6 while immediate
brings it up to 181.20. Of course, there is a lot of variability
which is to be expected with all the IO going on , particularly when
reading from a raw device backing a live filesystem (which is hostile
to fragmentation avoidance).

Andrea's series had a higher success rate for THP allocations but at
a severe cost to elapsed time, which is still better than vanilla
but much worse than disabling THP altogether. One can bring my
series close to Andrea's by removing this check:

        /*
         * If compaction is deferred for high-order allocations, it is because
         * sync compaction recently failed. If this is the case and the caller
         * has requested the system not be heavily disrupted, fail the
         * allocation now instead of entering direct reclaim
         */
        if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
                goto nopage;

If that is done, the average time to complete the test increases from
18.53 seconds (immediate kernel) to 367.44 seconds but brings THP
allocation success rates close to in line with Andrea's series. It
could probably be pushed higher by deferring compaction less and
combining aggressive reclaim with aggressive compaction, but all at
the cost of overall performance.

I didn't include a patch that removed the above check because hurting
overall performance to improve the THP figure is not what the average
user wants. It's something to consider though if someone really wants
to maximise THP usage no matter what it does to the workload initially.

                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
MMTests Statistics: vmstat
Page Ins                                   849238418  1112644112    11522374    10078704    19644823    11465123   153979370
Page Outs                                   20589862    25868815     3455835     3406331     3673611     3981797     7824154
Swap Ins                                        3812        3481        7076        5377        5966        5691        4961
Swap Outs                                     255283      352820      624734      620676      616675      862611      700161
Direct pages scanned                      1350821305  2228775837  1547403976  1560132463  1840632272    98025275  5448209970
Kswapd pages scanned                        10182797    15963121     2364959     2114433     1560570     2164608     2036422
Kswapd pages reclaimed                       7068564    12342958     1449634     1274347      971730     1426735     1648304
Direct pages reclaimed                     210120758   271789656     1902991     1902799     4580919     1946606    38478437
Kswapd efficiency                                69%         77%         61%         60%         62%         65%         80%
Kswapd velocity                             1224.748    1434.702    5516.068    5152.755    2970.025    8793.500     827.975
Direct efficiency                                15%         12%          0%          0%          0%          1%          0%
Direct velocity                           162471.591  200313.473 3609189.663 3801955.557 3503030.359  398217.724 2215151.725
Percentage direct scans                          99%         99%         99%         99%         99%         97%         99%
Page writes by reclaim                        256842      355827      624879      620803      616744      862730      702252
Page writes file                                1559        3007         145         127          69         119        2091
Page writes anon                              255283      352820      624734      620676      616675      862611      700161
Page reclaim immediate                    1066897311  1818638577  1436791650  1448010323  1705255453    95383606  5081414834
Page rescued immediate                             0           0           0           0           0      104874           0
Slabs scanned                                   9216       10240        9216        8192        8192        8192        9216
Direct inode steals                                0           0           0           0           0           0           0
Kswapd inode steals                                0           0           0           0           0           0           0
Kswapd skipped wait                             1176         400           1           1           2          15           8
THP fault alloc                                  144         224         713         701         808         906        1648
THP collapse alloc                                 3          18           0           0           0           0           0
THP splits                                        85         132         468         396         503         713        1286
THP fault fallback                              4872        4793        4302        4314        4208        4110        3369
THP collapse fail                                 91          37           0           0           0           0           1
Compaction stalls                                417        2527         540         527         740         637        3240
Compaction success                                44         232          58          45          71         102         276
Compaction failures                              373        2295         482         482         669         535        2964
Compaction pages moved                         69404      144762      166506      183062      213251      223125      436501
Compaction move failure                         9124       11395        8757       15337       17023       20845       67949

This is a summary of the vmstat figures from the same test. Sorry
about the formatting. The main things to look at are:

1. Page In/out figures are much reduced by the series.

2. Direct page scanning is incredibly high (162471.591 pages scanned
   per second on the vanilla kernel) but isolating PageReclaim pages
   on their own list reduces the number of pages scanned by 95% (Direct
   pages scanned line).

3. The fact that "Page rescued immediate" is a positive number implies
   that we sometimes race removing pages from the LRU_IMMEDIATE list
   that need to be put back on a normal LRU but it happens only for
   0.1% of the pages marked for immediate reclaim.

writebackCPDeviceext4
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         1.94 (    0.00%)    2.11 (   -8.54%)    1.54 (   20.68%)    1.54 (   20.99%)    1.58 (   18.62%)    1.20 (   38.17%)    1.71 (   12.04%)
+/-                 0.55 (    0.00%)    0.29 (   47.94%)    0.42 (   24.45%)    0.36 (   35.63%)    0.30 (   45.20%)    0.21 (   61.75%)    0.22 (   60.50%)
User Time           0.06 (    0.00%)    0.04 (   35.71%)    0.06 (   -3.57%)    0.05 (   14.29%)    0.03 (   42.86%)    0.06 (  -10.71%)    0.03 (   50.00%)
+/-                 0.02 (    0.00%)    0.01 (   56.28%)    0.02 (   12.55%)    0.01 (   57.99%)    0.01 (   67.92%)    0.02 (   31.40%)    0.01 (   67.92%)
Elapsed Time       62.39 (    0.00%)   98.66 (  -58.14%)  101.12 (  -62.08%)  114.45 (  -83.45%)   94.62 (  -51.68%)   42.73 (   31.51%)  226.70 ( -263.38%)
+/-                55.11 (    0.00%)   47.33 (   14.12%)   54.36 (    1.36%)   26.80 (   51.37%)   56.09 (   -1.79%)    6.76 (   87.74%)  149.78 ( -171.80%)
THP Active         99.80 (    0.00%)   95.40 (   95.59%)   72.40 (   72.55%)  120.60 (  120.84%)  145.60 (  145.89%)   44.20 (   44.29%)   94.60 (   94.79%)
+/-                54.95 (    0.00%)   35.71 (   64.98%)   19.28 (   35.09%)   27.08 (   49.28%)   77.75 (  141.48%)   31.31 (   56.98%)   49.08 (   89.32%)
Fault Alloc       244.20 (    0.00%)  250.60 (  102.62%)  152.80 (   62.57%)  217.60 (   89.11%)  272.00 (  111.38%)  167.20 (   68.47%)  396.40 (  162.33%)
+/-                22.82 (    0.00%)   47.58 (  208.52%)   42.23 (  185.11%)   30.57 (  133.99%)  135.52 (  593.95%)  100.20 (  439.15%)  104.59 (  458.40%)
Fault Fallback    758.80 (    0.00%)  752.80 (    0.79%)  850.20 (  -12.05%)  785.80 (   -3.56%)  731.40 (    3.61%)  836.00 (  -10.17%)  606.80 (   20.03%)
+/-                22.82 (    0.00%)   47.49 ( -108.15%)   42.23 (  -85.11%)   30.80 (  -34.99%)  135.83 ( -495.31%)  100.24 ( -339.33%)  104.43 ( -357.73%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         34.47     34.29     26.27     24.76     32.13     32.67    104.88
Total Elapsed Time (seconds)                993.38   1217.66   1021.32   1030.08   1026.61    758.28   1688.14

Similar test, but the USB stick is using ext4 instead of vfat. As
ext4 does not use writepage for migration, the large stalls due to
compaction when THP is enabled are not observed. Still, isolating
PageReclaim pages on their own list helped completion time, largely
by reducing the number of pages scanned by direct reclaim, although
time spent in congestion_wait could also be a factor.

Again, Andrea's series had far higher success rates for THP allocation
at the cost of elapsed time. I didn't look too closely but a quick
look at the vmstat figures tells me kswapd reclaimed 6 times more
pages than "immediate" and direct reclaim reclaimed roughly twice
as many pages. It follows that if memory is aggressively reclaimed,
there will be more available for THP.

writebackCPFilevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time        47.67 (    0.00%)   27.95 (   41.37%)   39.35 (   17.46%)   45.70 (    4.14%)   46.49 (    2.48%)    4.91 (   89.69%)   54.42 (  -14.17%)
+/-                17.29 (    0.00%)   26.04 (  -50.62%)   19.21 (  -11.12%)    1.15 (   93.33%)    3.67 (   78.78%)    7.01 (   59.46%)   10.31 (   40.39%)
User Time           0.08 (    0.00%)    0.05 (   34.21%)    0.07 (    5.26%)    0.05 (   28.95%)    0.04 (   42.11%)    0.06 (   18.42%)    0.05 (   36.84%)
+/-                 0.02 (    0.00%)    0.01 (   31.32%)    0.01 (   28.63%)    0.01 (   50.47%)    0.01 (   27.32%)    0.01 (   35.57%)    0.01 (   35.57%)
Elapsed Time     1013.87 (    0.00%) 2009.56 (  -98.21%)   96.54 (   90.48%)   54.48 (   94.63%)   76.83 (   92.42%)   23.04 (   97.73%)  252.74 (   75.07%)
+/-              1164.19 (    0.00%) 1833.78 (  -57.52%)   82.29 (   92.93%)    5.59 (   99.52%)   27.40 (   97.65%)    7.76 (   99.33%)   45.62 (   96.08%)
THP Active          1.20 (    0.00%)   27.60 ( 2300.00%)   25.80 ( 2150.00%)   24.20 ( 2016.67%)   24.20 ( 2016.67%)   18.20 ( 1516.67%)   24.40 ( 2033.33%)
+/-                 1.94 (    0.00%)   24.63 ( 1270.20%)   33.27 ( 1715.82%)   34.65 ( 1786.89%)   28.17 ( 1452.58%)   10.07 (  519.21%)   47.31 ( 2439.61%)
Fault Alloc        42.80 (    0.00%)   87.20 (  203.74%)   71.80 (  167.76%)  147.40 (  344.39%)  110.00 (  257.01%)  123.40 (  288.32%)  152.00 (  355.14%)
+/-                23.71 (    0.00%)   37.49 (  158.11%)   23.45 (   98.89%)   50.07 (  211.18%)   35.29 (  148.83%)   55.19 (  232.77%)   76.58 (  322.97%)
Fault Fallback    960.40 (    0.00%)  916.40 (    4.58%)  931.40 (    3.02%)  855.60 (   10.91%)  893.20 (    7.00%)  879.60 (    8.41%)  851.00 (   11.39%)
+/-                23.81 (    0.00%)   37.31 (  -56.67%)   23.48 (    1.39%)   50.07 ( -110.27%)   35.23 (  -47.96%)   55.19 ( -131.76%)   76.58 ( -221.58%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       2240.06    2527.8    553.11    625.51    748.34     74.92   1271.33
Total Elapsed Time (seconds)               5289.06  10250.24    689.22    483.55    605.43    342.07   1472.99

In this case, the test is reading/writing only from filesystems but as
it's vfat, it's slow due to calling writepage during compaction. Little
to observe really - the time to complete the test goes way down with
the series applied and THP allocation success rates go up.

As before, Andrea's series allocates more THPs at the cost of overall
performance. Again I did not look too closely but it paged in a lot
more and scanned a lot more pages (see system CPU time) although
the actual reclaim figures look similar. It might be getting stuck
in congestion_wait but the tests that would have confirmed that did
not get the chance to run.

writebackCPFileext4
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5r8      synclight-v5r8      freemore-v5r20   revertAbort-v5r20     immediate-v5r20         andrea-v1r1
System Time         2.14 (    0.00%)    2.31 (   -8.04%)    1.78 (   16.84%)    2.38 (  -11.23%)    2.02 (    5.43%)    1.50 (   29.84%)    1.79 (   16.46%)
+/-                 0.42 (    0.00%)    0.41 (    2.49%)    0.47 (  -12.67%)    0.99 ( -136.58%)    0.34 (   19.14%)    0.34 (   19.84%)    0.27 (   35.84%)
User Time           0.06 (    0.00%)    0.04 (   35.48%)    0.05 (   19.35%)    0.05 (   19.35%)    0.06 (    9.68%)    0.04 (   35.48%)    0.05 (   22.58%)
+/-                 0.02 (    0.00%)    0.01 (   27.07%)    0.02 (   20.11%)    0.02 (   13.71%)    0.01 (   47.41%)    0.00 (    0.00%)    0.02 (   11.27%)
Elapsed Time       65.66 (    0.00%)  105.82 (  -61.16%)  110.34 (  -68.04%)   91.03 (  -38.64%)  122.48 (  -86.53%)   28.35 (   56.82%)  245.87 ( -274.45%)
+/-                52.07 (    0.00%)   50.31 (    3.37%)   75.33 (  -44.67%)   55.36 (   -6.32%)   53.90 (   -3.53%)    7.39 (   85.80%)   91.44 (  -75.63%)
THP Active         35.20 (    0.00%)  122.40 (  347.73%)   80.80 (  229.55%)   73.80 (  209.66%)  130.40 (  370.45%)   82.00 (  232.95%)   14.80 (   42.05%)
+/-                17.03 (    0.00%)   74.02 (  434.53%)   92.15 (  540.99%)   40.95 (  240.38%)   44.21 (  259.55%)   68.21 (  400.46%)   18.73 (  109.98%)
Fault Alloc        90.80 (    0.00%)  293.80 (  323.57%)  258.40 (  284.58%)  216.40 (  238.33%)  330.00 (  363.44%)  346.60 (  381.72%)  165.80 (  182.60%)
+/-                22.66 (    0.00%)   67.76 (  299.05%)  109.14 (  481.69%)  138.36 (  610.66%)   76.60 (  338.06%)  122.98 (  542.77%)  120.34 (  531.14%)
Fault Fallback    912.20 (    0.00%)  709.20 (   22.25%)  745.00 (   18.33%)  786.60 (   13.77%)  673.40 (   26.18%)  656.80 (   28.00%)  837.40 (    8.20%)
+/-                22.66 (    0.00%)   67.76 ( -199.05%)  108.89 ( -380.60%)  138.36 ( -510.66%)   76.72 ( -238.63%)  123.07 ( -443.18%)  120.51 ( -431.86%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         47.14     51.17     41.11     45.13     46.68     33.81    125.39
Total Elapsed Time (seconds)               1032.94   1203.01   1287.99   1085.57   1008.24    764.42   1939.48

This is interesting in that the Elapsed Time goes up for parts of
the series until PageReclaim pages are isolated from the LRU. This
may be because the stalls were not that bad in the first place for
ext4, which may explain why this was missed in earlier testing but
was severe once someone plugged in a USB stick with VFAT on it. What
is interesting in this test is that, unlike the other tests, the
allocation success rate for Andrea's series was lower while the
Elapsed Time is still high, but I am not sure why that is.

Overall the series does reduce latencies and, while the tests are
inherently racy as alloc competes with the cp processes, the
variability figures were included. The THP allocation rates are not
as high as they could be, but that is because we would have to be
more aggressive about reclaim and compaction, impacting overall
performance. Any comments on what is required to get this into a
suitable shape for merging are welcome. Testing is also welcome.

 fs/btrfs/disk-io.c            |    5 +-
 fs/nfs/internal.h             |    2 +-
 fs/nfs/write.c                |    4 +-
 include/linux/fs.h            |   11 ++-
 include/linux/migrate.h       |   23 +++++-
 include/linux/mmzone.h        |    4 +
 include/linux/vm_event_item.h |    1 +
 mm/compaction.c               |    5 +-
 mm/memory-failure.c           |    2 +-
 mm/memory_hotplug.c           |    2 +-
 mm/mempolicy.c                |    2 +-
 mm/migrate.c                  |  171 ++++++++++++++++++++++++++++-------------
 mm/page_alloc.c               |   50 +++++++++---
 mm/swap.c                     |   74 +++++++++++++++++-
 mm/vmscan.c                   |  114 ++++++++++++++++++++++++----
 mm/vmstat.c                   |    2 +
 16 files changed, 369 insertions(+), 103 deletions(-)

-- 
1.7.3.4


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 0/11] Reduce compaction-related stalls and improve asynchronous migration of dirty pages v5
@ 2011-12-01 17:36 ` Mel Gorman
  0 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

Short summary: Stalls when a USB stick using VFAT is used are reduced
by this series. If you are experiencing this problem, please test
and report back.

Changelog since V4
o Added reviewed-bys, credited Andrea properly for sync-light
o Allow dirty pages without mappings to be considered for migration
o Bound the number of pages freed for compaction
o Isolate PageReclaim pages on their own LRU list

This is against 3.2-rc3 and follows on from discussions on "mm: Do
not stall in synchronous compaction for THP allocations" and "[RFC
PATCH 0/5] Reduce compaction-related stalls". Initially, the proposed
patch eliminated stalls due to compaction which sometimes resulted in
user-visible interactivity problems on browsers by simply never using
sync compaction. The downside was that THP success allocation rates
were lower because dirty pages were not being migrated as reported by
Andrea. His approach at fixing this was nacked on the grounds that
it reverted fixes from Rik merged that reduced the amount of pages
reclaimed as it severely impacted his workloads performance.

This series attempts to reconcile the requirements of maximising THP
usage, without stalling in a user-visible fashion due to compaction
or cheating by reclaiming an excessive number of pages.

Patch 1 partially reverts commit 39deaf85 to allow migration to isolate
	dirty pages. This is because migration can move some dirty
	pages without blocking.

Patch 2 notes that the /proc/sys/vm/compact_memory handler is not using
	synchronous compaction when it should be. This is unrelated
	to the reported stalls but is worth fixing.

Patch 3 checks if we isolated a compound page during lumpy scan and
	account for it properly. For the most part, this affects
	tracing so it's unrelated to the stalls but worth fixing.

Patch 4 notes that it is possible to abort reclaim early for compaction
	and return 0 to the page allocator potentially entering the
	"may oom" path. This has not been observed in practice but
	the rest of the series potentially makes it easier to happen.

Patch 5 adds a sync parameter to the migratepage callback and gives
	the callback responsibility for migrating the page without
	blocking if sync==false. For example, fallback_migrate_page
	will not call writepage if sync==false. This increases the
	number of pages that can be handled by asynchronous compaction
	thereby reducing stalls.

Patch 6 restores filter-awareness to isolate_lru_page for migration.
	In practice, it means that pages under writeback and pages
	without a ->migratepage callback will not be isolated
	for migration.

Patch 7 avoids calling direct reclaim if compaction is deferred but
	makes sure that compaction is only deferred if sync
	compaction was used.

Patch 8 introduces a sync-light migration mechanism that sync compaction
	uses. The objective is to allow some stalls but to not call
	->writepage which can lead to significant user-visible stalls.

Patch 9 notes that while we want to abort reclaim ASAP to allow
	compation to go ahead that we leave a very small window of
	opportunity for compaction to run. This patch allows more pages
	to be freed by reclaim but bounds the number to a reasonable
	level based on the high watermark on each zone.

Patch 10 allows slabs to be shrunk even after compaction_ready() is
	true for one zone. This is to avoid a problem whereby a single
	small zone can abort reclaim even though no pages have been
	reclaimed and no suitably large zone is in a usable state.

Patch 11 fixes a problem with the rate of page scanning. As reclaim is
	rarely stalling on pages under writeback it means that scan
	rates are very high. This is particularly true for direct
	reclaim which is not calling writepage. The vmstat figures
	implied that much of this was busy work with PageReclaim pages
	marked for immediate reclaim. This patch is a prototype that
	moves these pages to their own LRU list.

This has been tested and other than 2 USB keys getting trashed,
nothing horrible fell out.  That said, patch 11 was hacked together
pretty quickly and alternative ideas on how it could be implemented
better are welcome. I'm unhappy with the rescue logic in particular
but did not want to delay the rest of the series because of it and
wanted to include it to illustrate what it does to System CPU time.

What is of critical importance is that stalls due to compaction
are massively reduced even though sync compaction was still
allowed. Testing from people complaining about stalls copying to USBs
with THP enabled are particularly welcome.

The following tests all involve THP usage and USB keys in some
way. Each test follows this type of pattern

1. Read from some fast fast storage, be it raw device or file. Each time
   the copy finishes, start again until the test ends
2. Write a large file to a filesystem on a USB stick. Each time the copy
   finishes, start again until the test ends
3. When memory is low, start an alloc process that creates a mapping
   the size of physical memory to stress THP allocation. This is the
   "real" part of the test and the part that is meant to trigger
   stalls when THP is enabled. Copying continues in the background.
4. Record the CPU usage and time to execute of the alloc process
5. Record the number of THP allocs and fallbacks as well as the number of THP
   pages in use a the end of the test just before alloc exited
6. Run the test 5 times to get an idea of variability
7. Between each run, sync is run and caches dropped and the test
   waits until nr_dirty is a small number to avoid interference
   or caching between iterations that would skew the figures.

The individual tests were then

writebackCPDeviceBasevfat
	Disable THP, read from a raw device (sda), vfat on USB stick
writebackCPDeviceBaseext4
	Disable THP, read from a raw device (sda), ext4 on USB stick
writebackCPDevicevfat
	THP enabled, read from a raw device (sda), vfat on USB stick
writebackCPDeviceext4
	THP enabled, read from a raw device (sda), ext4 on USB stick
writebackCPFilevfat
	THP enabled, read from a file on fast storage and USB, both vfat
writebackCPFileext4
	THP enabled, read from a file on fast storage and USB, both ext4

The kernels tested were

vanilla		3.2-rc3
lessdirect	Patches 1-7
synclight	Patches 1-8
freemore	Patches 1-9
revertAbort	Patches 1-10 (The name revert is misleading in retrospect)
immediate	Patches 1-11
andrea		The 8 patches Andrea posted as a basis of comparison

The results are very long unfortunately. I'll start with the case
where we are not using THP at all

writebackCPDeviceBasevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time        47.95 (    0.00%)   51.55 (   -7.50%)   48.72 (   -1.61%)   48.19 (   -0.49%)   51.82 (   -8.06%)    4.73 (   90.13%)   48.08 (   -0.26%)
+/-                 5.27 (    0.00%)    4.59 (   12.91%)    4.82 (    8.60%)    4.67 (   11.44%)    4.89 (    7.20%)    7.56 (  -43.40%)    5.73 (   -8.68%)
User Time           0.05 (    0.00%)    0.06 (  -11.11%)    0.06 (  -14.81%)    0.07 (  -22.22%)    0.08 (  -40.74%)    0.06 (  -11.11%)    0.06 (  -11.11%)
+/-                 0.01 (    0.00%)    0.02 (  -23.36%)    0.02 (  -17.95%)    0.02 (  -78.15%)    0.01 (   41.02%)    0.01 (    6.75%)    0.01 (   53.37%)
Elapsed Time       50.60 (    0.00%)   52.36 (   -3.48%)   50.68 (   -0.15%)   51.00 (   -0.79%)   53.72 (   -6.15%)   11.48 (   77.31%)   50.45 (    0.30%)
+/-                 5.53 (    0.00%)    4.57 (   17.34%)    4.47 (   19.18%)    5.03 (    9.08%)    4.80 (   13.11%)    6.59 (  -19.17%)    5.51 (    0.33%)
THP Active          0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Alloc         0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Fallback      0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)        644.51    702.99    662.61    643.68    708.07     68.34    651.44
Total Elapsed Time (seconds)                408.30    414.63    415.78    419.48    438.63    209.57    426.63

                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         1.28 (    0.00%)    1.63 (  -27.19%)    1.20 (    5.94%)    1.38 (   -7.50%)    1.34 (   -4.53%)    0.91 (   29.06%)    1.50 (  -17.34%)
+/-                 0.72 (    0.00%)    0.16 (   78.24%)    0.33 (   54.54%)    0.54 (   24.48%)    0.38 (   47.33%)    0.11 (   84.30%)    0.45 (   37.83%)
User Time           0.08 (    0.00%)    0.07 (   15.00%)    0.08 (    2.50%)    0.07 (   17.50%)    0.07 (   12.50%)    0.07 (    7.50%)    0.07 (   15.00%)
+/-                 0.01 (    0.00%)    0.02 (  -21.66%)    0.01 (    6.19%)    0.02 (  -31.15%)    0.01 (   10.56%)    0.01 (   15.15%)    0.01 (   17.54%)
Elapsed Time      143.00 (    0.00%)   50.97 (   64.36%)  131.85 (    7.80%)  113.76 (   20.45%)  140.47 (    1.76%)   14.12 (   90.12%)   90.66 (   36.60%)
+/-                55.83 (    0.00%)   44.46 (   20.37%)   11.70 (   79.05%)   64.86 (  -16.16%)   18.42 (   67.02%)    5.94 (   89.36%)   66.22 (  -18.61%)
THP Active          0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Alloc         0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
Fault Fallback      0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
+/-                 0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)    0.00 (    0.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         23.25     25.42     21.45     22.21     20.48     15.27     26.22
Total Elapsed Time (seconds)               1219.15    775.84   1225.77   1345.05   1128.21    734.50   1119.47

The THP figures are obviously all 0 because THP was enabled. The
main thing to watch is the elapsed times and how they compare to
times when THP is enabled later. One may note that vfat completed far
faster than ext4 but you may also note that the system CPU usage for
vfat was way higher. Looking at the vmstat figures, vfat is scanning
far more aggressively so I expect what is happening is that ext4 is
getting stalled on writing back pages.

writebackCPDevicevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         2.42 (    0.00%)    4.64 (  -92.06%)   48.13 (-1890.57%)   48.06 (-1887.43%)   46.05 (-1804.47%)    4.07 (  -68.24%)   46.78 (-1834.57%)
+/-                 3.17 (    0.00%)    6.34 (  -99.91%)    4.33 (  -36.54%)    3.89 (  -22.58%)    3.21 (   -1.16%)    5.83 (  -83.70%)    9.85 ( -210.54%)
User Time           0.06 (    0.00%)    0.06 (    0.00%)    0.07 (  -13.79%)    0.06 (   -3.45%)    0.04 (   24.14%)    0.07 (  -17.24%)    0.03 (   41.38%)
+/-                 0.00 (    0.00%)    0.01 (  -87.08%)    0.02 ( -483.10%)    0.00 (    0.00%)    0.01 ( -154.95%)    0.02 ( -330.12%)    0.01 ( -100.00%)
Elapsed Time     1627.12 (    0.00%) 2187.36 (  -34.43%)   51.04 (   96.86%)   49.16 (   96.98%)   74.48 (   95.42%)   18.53 (   98.86%)  453.58 (   72.12%)
+/-                77.40 (    0.00%)  561.41 ( -625.30%)    4.57 (   94.10%)    3.75 (   95.16%)   16.18 (   79.09%)   10.44 (   86.52%)   64.07 (   17.23%)
THP Active         12.20 (    0.00%)   20.00 (  163.93%)   49.40 (  404.92%)   61.00 (  500.00%)   62.00 (  508.20%)   39.40 (  322.95%)   78.00 (  639.34%)
+/-                 7.55 (    0.00%)   15.94 (  211.17%)   23.10 (  306.03%)   37.12 (  491.79%)   42.53 (  563.52%)   31.10 (  412.12%)   47.80 (  633.40%)
Fault Alloc        28.80 (    0.00%)   44.80 (  155.56%)  142.60 (  495.14%)  140.20 (  486.81%)  161.60 (  561.11%)  181.20 (  629.17%)  329.60 ( 1144.44%)
+/-                13.17 (    0.00%)    5.46 (   41.43%)   32.38 (  245.90%)   35.37 (  268.63%)   89.29 (  678.12%)   59.04 (  448.43%)  111.90 (  849.90%)
Fault Fallback    974.40 (    0.00%)  958.60 (    1.62%)  860.40 (   11.70%)  862.80 (   11.45%)  841.60 (   13.63%)  822.00 (   15.64%)  673.80 (   30.85%)
+/-                12.94 (    0.00%)    5.35 (   58.64%)   32.38 ( -150.21%)   35.37 ( -173.33%)   88.89 ( -586.98%)   59.17 ( -357.25%)  111.96 ( -765.26%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       1228.79   1683.09    656.72    644.92    731.17     56.95   1804.35
Total Elapsed Time (seconds)               8314.20  11126.44    428.74    410.35    525.44    246.16   2459.52

The first thing to note is the "Elapsed Time" for the vanilla kernels
of 1627 seconds versus 50 with THP disabled which might explain
the reports of USB stalls with THP enabled. Moving to synclight and
avoiding writeback in compaction brings THP in line with base pages
but at the cost of System CPU usage. Isolating PageReclaim pages on
their own LRU cuts the System CPU usage down.

It is very interesting to note that with the "immediate" kernel that
the completion time is better than the base page case. I do not know
exactly why that is but it may be due to batch reclaiming more pages
when THP is used.

The "Fault Alloc" success rate figures are also improved. The vanilla
kernel only managed to allocate 28.8 pages on average over the course
of 5 iterations. synclight brings that up to 142.6 while immediate
brings it up to 181.20. Of course, there is a lot of variability
which is to be expected with all the IO going on , particularly when
reading from a raw device backing a live filesystem (which is hostile
to fragmentation avoidance).

Andrea's series had a higher success rate for THP allocations but
at a severe cost to elapsed time which is still better than vanilla
but still much worse than disabling THP altogether. One can bring my
series close to Andrea's by removing this check

        /*
         * If compaction is deferred for high-order allocations, it is because
         * sync compaction recently failed. In this is the case and the caller
         * has requested the system not be heavily disrupted, fail the
         * allocation now instead of entering direct reclaim
         */
        if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
                goto nopage;

If that is done the average time to complete the test increases from
18.53 seconds (immediate kernel) to 367.44 seconds but brings THP
allocation success rates close to in line with Andreas series. It
could probably be pushed higher by deferring compaction less and
combining aggressive reclaim with aggressive compaction but all at
the cost of overall performance.

I didn't include a patch that removed the above check because hurting
overall performance to improve the THP figure is not what the average
user wants. It's something to consider though if someone really wants
to maximise THP usage no matter what it does to the workload initially.

                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
MMTests Statistics: vmstat
Page Ins                                   849238418  1112644112    11522374    10078704    19644823    11465123   153979370
Page Outs                                   20589862    25868815     3455835     3406331     3673611     3981797     7824154
Swap Ins                                        3812        3481        7076        5377        5966        5691        4961
Swap Outs                                     255283      352820      624734      620676      616675      862611      700161
Direct pages scanned                      1350821305  2228775837  1547403976  1560132463  1840632272    98025275  5448209970
Kswapd pages scanned                        10182797    15963121     2364959     2114433     1560570     2164608     2036422
Kswapd pages reclaimed                       7068564    12342958     1449634     1274347      971730     1426735     1648304
Direct pages reclaimed                     210120758   271789656     1902991     1902799     4580919     1946606    38478437
Kswapd efficiency                                69%         77%         61%         60%         62%         65%         80%
Kswapd velocity                             1224.748    1434.702    5516.068    5152.755    2970.025    8793.500     827.975
Direct efficiency                                15%         12%          0%          0%          0%          1%          0%
Direct velocity                           162471.591  200313.473 3609189.663 3801955.557 3503030.359  398217.724 2215151.725
Percentage direct scans                          99%         99%         99%         99%         99%         97%         99%
Page writes by reclaim                        256842      355827      624879      620803      616744      862730      702252
Page writes file                                1559        3007         145         127          69         119        2091
Page writes anon                              255283      352820      624734      620676      616675      862611      700161
Page reclaim immediate                    1066897311  1818638577  1436791650  1448010323  1705255453    95383606  5081414834
Page rescued immediate                             0           0           0           0           0      104874           0
Slabs scanned                                   9216       10240        9216        8192        8192        8192        9216
Direct inode steals                                0           0           0           0           0           0           0
Kswapd inode steals                                0           0           0           0           0           0           0
Kswapd skipped wait                             1176         400           1           1           2          15           8
THP fault alloc                                  144         224         713         701         808         906        1648
THP collapse alloc                                 3          18           0           0           0           0           0
THP splits                                        85         132         468         396         503         713        1286
THP fault fallback                              4872        4793        4302        4314        4208        4110        3369
THP collapse fail                                 91          37           0           0           0           0           1
Compaction stalls                                417        2527         540         527         740         637        3240
Compaction success                                44         232          58          45          71         102         276
Compaction failures                              373        2295         482         482         669         535        2964
Compaction pages moved                         69404      144762      166506      183062      213251      223125      436501
Compaction move failure                         9124       11395        8757       15337       17023       20845       67949

This is summary of vmstat figures from the same test. Sorry about
the formatting. The main things to look at are

1. Page In/out figures are much reduced by the series.

2. Direct page scanning is incredibly high (162471.591 pages scanned
   per second on the vanilla kernel) but isolating PageReclaim pages
   on their own list reduces the number of pages scanned by 95% (Direct
   pages scanned line).

3. The fact that "Page rescued immediate" is a positive number implies
   that we sometimes race removing pages from the LRU_IMMEDIATE list
   that need to be put back on a normal LRU but it happens only for
   0.1% of the pages marked for immediate reclaim.

writebackCPDeviceext4
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time         1.94 (    0.00%)    2.11 (   -8.54%)    1.54 (   20.68%)    1.54 (   20.99%)    1.58 (   18.62%)    1.20 (   38.17%)    1.71 (   12.04%)
+/-                 0.55 (    0.00%)    0.29 (   47.94%)    0.42 (   24.45%)    0.36 (   35.63%)    0.30 (   45.20%)    0.21 (   61.75%)    0.22 (   60.50%)
User Time           0.06 (    0.00%)    0.04 (   35.71%)    0.06 (   -3.57%)    0.05 (   14.29%)    0.03 (   42.86%)    0.06 (  -10.71%)    0.03 (   50.00%)
+/-                 0.02 (    0.00%)    0.01 (   56.28%)    0.02 (   12.55%)    0.01 (   57.99%)    0.01 (   67.92%)    0.02 (   31.40%)    0.01 (   67.92%)
Elapsed Time       62.39 (    0.00%)   98.66 (  -58.14%)  101.12 (  -62.08%)  114.45 (  -83.45%)   94.62 (  -51.68%)   42.73 (   31.51%)  226.70 ( -263.38%)
+/-                55.11 (    0.00%)   47.33 (   14.12%)   54.36 (    1.36%)   26.80 (   51.37%)   56.09 (   -1.79%)    6.76 (   87.74%)  149.78 ( -171.80%)
THP Active         99.80 (    0.00%)   95.40 (   95.59%)   72.40 (   72.55%)  120.60 (  120.84%)  145.60 (  145.89%)   44.20 (   44.29%)   94.60 (   94.79%)
+/-                54.95 (    0.00%)   35.71 (   64.98%)   19.28 (   35.09%)   27.08 (   49.28%)   77.75 (  141.48%)   31.31 (   56.98%)   49.08 (   89.32%)
Fault Alloc       244.20 (    0.00%)  250.60 (  102.62%)  152.80 (   62.57%)  217.60 (   89.11%)  272.00 (  111.38%)  167.20 (   68.47%)  396.40 (  162.33%)
+/-                22.82 (    0.00%)   47.58 (  208.52%)   42.23 (  185.11%)   30.57 (  133.99%)  135.52 (  593.95%)  100.20 (  439.15%)  104.59 (  458.40%)
Fault Fallback    758.80 (    0.00%)  752.80 (    0.79%)  850.20 (  -12.05%)  785.80 (   -3.56%)  731.40 (    3.61%)  836.00 (  -10.17%)  606.80 (   20.03%)
+/-                22.82 (    0.00%)   47.49 ( -108.15%)   42.23 (  -85.11%)   30.80 (  -34.99%)  135.83 ( -495.31%)  100.24 ( -339.33%)  104.43 ( -357.73%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         34.47     34.29     26.27     24.76     32.13     32.67    104.88
Total Elapsed Time (seconds)                993.38   1217.66   1021.32   1030.08   1026.61    758.28   1688.14

This is a similar test except the USB stick is using ext4 instead of
vfat. As ext4 does not use writepage for migration, the large stalls
due to compaction when THP is enabled are not observed. Still,
isolating PageReclaim pages on their own list helped completion time,
largely by reducing the number of pages scanned by direct reclaim,
although time spent in congestion_wait could also be a factor.

Again, Andrea's series had far higher success rates for THP allocation
at the cost of elapsed time. I didn't look too closely but a quick
look at the vmstat figures tells me kswapd reclaimed 6 times more
pages than "immediate" and direct reclaim reclaimed roughly twice
as many pages. It follows that if memory is aggressively reclaimed,
there will be more available for THP.

writebackCPFilevfat
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5      synclight-v5      freemore-v5   revertAbort-v5     immediate-v5         andrea-v1r1
System Time        47.67 (    0.00%)   27.95 (   41.37%)   39.35 (   17.46%)   45.70 (    4.14%)   46.49 (    2.48%)    4.91 (   89.69%)   54.42 (  -14.17%)
+/-                17.29 (    0.00%)   26.04 (  -50.62%)   19.21 (  -11.12%)    1.15 (   93.33%)    3.67 (   78.78%)    7.01 (   59.46%)   10.31 (   40.39%)
User Time           0.08 (    0.00%)    0.05 (   34.21%)    0.07 (    5.26%)    0.05 (   28.95%)    0.04 (   42.11%)    0.06 (   18.42%)    0.05 (   36.84%)
+/-                 0.02 (    0.00%)    0.01 (   31.32%)    0.01 (   28.63%)    0.01 (   50.47%)    0.01 (   27.32%)    0.01 (   35.57%)    0.01 (   35.57%)
Elapsed Time     1013.87 (    0.00%) 2009.56 (  -98.21%)   96.54 (   90.48%)   54.48 (   94.63%)   76.83 (   92.42%)   23.04 (   97.73%)  252.74 (   75.07%)
+/-              1164.19 (    0.00%) 1833.78 (  -57.52%)   82.29 (   92.93%)    5.59 (   99.52%)   27.40 (   97.65%)    7.76 (   99.33%)   45.62 (   96.08%)
THP Active          1.20 (    0.00%)   27.60 ( 2300.00%)   25.80 ( 2150.00%)   24.20 ( 2016.67%)   24.20 ( 2016.67%)   18.20 ( 1516.67%)   24.40 ( 2033.33%)
+/-                 1.94 (    0.00%)   24.63 ( 1270.20%)   33.27 ( 1715.82%)   34.65 ( 1786.89%)   28.17 ( 1452.58%)   10.07 (  519.21%)   47.31 ( 2439.61%)
Fault Alloc        42.80 (    0.00%)   87.20 (  203.74%)   71.80 (  167.76%)  147.40 (  344.39%)  110.00 (  257.01%)  123.40 (  288.32%)  152.00 (  355.14%)
+/-                23.71 (    0.00%)   37.49 (  158.11%)   23.45 (   98.89%)   50.07 (  211.18%)   35.29 (  148.83%)   55.19 (  232.77%)   76.58 (  322.97%)
Fault Fallback    960.40 (    0.00%)  916.40 (    4.58%)  931.40 (    3.02%)  855.60 (   10.91%)  893.20 (    7.00%)  879.60 (    8.41%)  851.00 (   11.39%)
+/-                23.81 (    0.00%)   37.31 (  -56.67%)   23.48 (    1.39%)   50.07 ( -110.27%)   35.23 (  -47.96%)   55.19 ( -131.76%)   76.58 ( -221.58%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)       2240.06    2527.8    553.11    625.51    748.34     74.92   1271.33
Total Elapsed Time (seconds)               5289.06  10250.24    689.22    483.55    605.43    342.07   1472.99

In this case, the test is reading/writing only from filesystems but as
it's vfat, it's slow due to calling writepage during compaction. There
is little to observe really - the time to complete the test goes way
down with the series applied and THP allocation success rates go up.

As before, Andrea's series allocates more THPs at the cost of overall
performance. Again, I did not look too closely but it paged in a lot
more and scanned a lot more pages (see the system CPU time) although
the actual reclaim figures look similar. It might be getting stuck in
congestion_wait but the tests that would have confirmed that did not
get the chance to run.

writebackCPFileext4
                  thpavail-3.2.0           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3           3.2.0-rc3
                     rc3-vanilla     lessdirect-v5r8      synclight-v5r8      freemore-v5r20   revertAbort-v5r20     immediate-v5r20         andrea-v1r1
System Time         2.14 (    0.00%)    2.31 (   -8.04%)    1.78 (   16.84%)    2.38 (  -11.23%)    2.02 (    5.43%)    1.50 (   29.84%)    1.79 (   16.46%)
+/-                 0.42 (    0.00%)    0.41 (    2.49%)    0.47 (  -12.67%)    0.99 ( -136.58%)    0.34 (   19.14%)    0.34 (   19.84%)    0.27 (   35.84%)
User Time           0.06 (    0.00%)    0.04 (   35.48%)    0.05 (   19.35%)    0.05 (   19.35%)    0.06 (    9.68%)    0.04 (   35.48%)    0.05 (   22.58%)
+/-                 0.02 (    0.00%)    0.01 (   27.07%)    0.02 (   20.11%)    0.02 (   13.71%)    0.01 (   47.41%)    0.00 (    0.00%)    0.02 (   11.27%)
Elapsed Time       65.66 (    0.00%)  105.82 (  -61.16%)  110.34 (  -68.04%)   91.03 (  -38.64%)  122.48 (  -86.53%)   28.35 (   56.82%)  245.87 ( -274.45%)
+/-                52.07 (    0.00%)   50.31 (    3.37%)   75.33 (  -44.67%)   55.36 (   -6.32%)   53.90 (   -3.53%)    7.39 (   85.80%)   91.44 (  -75.63%)
THP Active         35.20 (    0.00%)  122.40 (  347.73%)   80.80 (  229.55%)   73.80 (  209.66%)  130.40 (  370.45%)   82.00 (  232.95%)   14.80 (   42.05%)
+/-                17.03 (    0.00%)   74.02 (  434.53%)   92.15 (  540.99%)   40.95 (  240.38%)   44.21 (  259.55%)   68.21 (  400.46%)   18.73 (  109.98%)
Fault Alloc        90.80 (    0.00%)  293.80 (  323.57%)  258.40 (  284.58%)  216.40 (  238.33%)  330.00 (  363.44%)  346.60 (  381.72%)  165.80 (  182.60%)
+/-                22.66 (    0.00%)   67.76 (  299.05%)  109.14 (  481.69%)  138.36 (  610.66%)   76.60 (  338.06%)  122.98 (  542.77%)  120.34 (  531.14%)
Fault Fallback    912.20 (    0.00%)  709.20 (   22.25%)  745.00 (   18.33%)  786.60 (   13.77%)  673.40 (   26.18%)  656.80 (   28.00%)  837.40 (    8.20%)
+/-                22.66 (    0.00%)   67.76 ( -199.05%)  108.89 ( -380.60%)  138.36 ( -510.66%)   76.72 ( -238.63%)  123.07 ( -443.18%)  120.51 ( -431.86%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)         47.14     51.17     41.11     45.13     46.68     33.81    125.39
Total Elapsed Time (seconds)               1032.94   1203.01   1287.99   1085.57   1008.24    764.42   1939.48

This is interesting in that the Elapsed Time goes up for parts of the
series until PageReclaim pages are isolated from the LRU. This may be
because the stalls were never that bad for ext4 in the first place,
which would explain why the problem was missed in earlier testing but
became severe once someone plugged in a USB stick with VFAT on it.
What is also interesting in this test is that, unlike the other tests,
the allocation success rate for Andrea's series was lower while the
Elapsed Time is still high, but I am not sure why that is.

Overall the series does reduce latencies and, while the tests are
inherently racy as alloc competes with the cp processes, the
variability is reported. The THP allocation rates are not as high as
they could be, but that is because being more aggressive about reclaim
and compaction would impact overall performance. Any comments on what
is required to get this into a suitable shape for merging are welcome.
Testing is also welcome.

 fs/btrfs/disk-io.c            |    5 +-
 fs/nfs/internal.h             |    2 +-
 fs/nfs/write.c                |    4 +-
 include/linux/fs.h            |   11 ++-
 include/linux/migrate.h       |   23 +++++-
 include/linux/mmzone.h        |    4 +
 include/linux/vm_event_item.h |    1 +
 mm/compaction.c               |    5 +-
 mm/memory-failure.c           |    2 +-
 mm/memory_hotplug.c           |    2 +-
 mm/mempolicy.c                |    2 +-
 mm/migrate.c                  |  171 ++++++++++++++++++++++++++++-------------
 mm/page_alloc.c               |   50 +++++++++---
 mm/swap.c                     |   74 +++++++++++++++++-
 mm/vmscan.c                   |  114 ++++++++++++++++++++++++----
 mm/vmstat.c                   |    2 +
 16 files changed, 369 insertions(+), 103 deletions(-)

-- 
1.7.3.4


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 01/11] mm: compaction: Allow compaction to isolate dirty pages
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

Commit [39deaf85: mm: compaction: make isolate_lru_page() filter-aware]
noted that compaction does not migrate dirty or writeback pages and
that it was meaningless to pick the page and re-add it to the LRU list.

What was missed during review is that asynchronous migration moves
dirty pages if their ->migratepage callback is migrate_page() because
these can be moved without blocking. This potentially impacted
hugepage allocation success rates by a factor depending on how many
dirty pages are in the system.
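
As an illustration, the nonblocking case is the one where the page
either has no mapping or its ->migratepage callback is the generic
migrate_page(), which only copies data without touching storage. A
minimal sketch of that condition (illustrative only, not part of the
patch):

	/*
	 * Illustrative sketch, not part of the patch: a dirty page can
	 * be migrated without blocking when it has no mapping (e.g.
	 * anonymous) or its ->migratepage is the generic migrate_page()
	 * which only copies data and updates the mapping.
	 */
	static bool dirty_page_migrates_without_blocking(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		return !mapping || mapping->a_ops->migratepage == migrate_page;
	}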

This patch partially reverts 39deaf85 to allow migration to isolate
dirty pages again. This increases how much compaction disrupts the
LRU but that is addressed later in the series.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/compaction.c |    3 ---
 1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 899d956..237560e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -349,9 +349,6 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			continue;
 		}
 
-		if (!cc->sync)
-			mode |= ISOLATE_CLEAN;
-
 		/* Try isolate the page */
 		if (__isolate_lru_page(page, mode, 0) != 0)
 			continue;
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 02/11] mm: compaction: Use synchronous compaction for /proc/sys/vm/compact_memory
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

When asynchronous compaction was introduced, the
/proc/sys/vm/compact_memory handler should have been updated to always
use synchronous compaction. This did not happen, so this patch
addresses it. The assumption is that if a user writes to
/proc/sys/vm/compact_memory, they are willing for that process to
stall.
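
For reference, a minimal userspace trigger looks something like the
following sketch (any write to the file starts compaction; after this
patch, the writing process may stall while all zones are compacted
synchronously):

	#include <fcntl.h>
	#include <unistd.h>

	/*
	 * Illustrative sketch: trigger full compaction from userspace,
	 * equivalent to `echo 1 > /proc/sys/vm/compact_memory`.
	 */
	int trigger_compaction(void)
	{
		int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);
		ssize_t ret;

		if (fd < 0)
			return -1;
		ret = write(fd, "1", 1);
		close(fd);
		return ret == 1 ? 0 : -1;
	}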

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/compaction.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 237560e..615502b 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -666,6 +666,7 @@ static int compact_node(int nid)
 			.nr_freepages = 0,
 			.nr_migratepages = 0,
 			.order = -1,
+			.sync = true,
 		};
 
 		zone = &pgdat->node_zones[zoneid];
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 03/11] mm: vmscan: Check if we isolated a compound page during lumpy scan
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

From: Andrea Arcangeli <aarcange@redhat.com>

Properly take into account if we isolated a compound page during the
lumpy scan in reclaim and skip over the tail pages when encountered.
This corrects the values given to the tracepoint for the number of
lumpy pages isolated and will avoid breaking the loop early when
compound pages smaller than the requested allocation size are
encountered.
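
As a worked example, assume a 2MB THP so that hpage_nr_pages() returns
512 (a sketch restating the hunk below with concrete numbers; the
variables are those used in isolate_lru_pages()):

	/*
	 * Worked example, assuming hpage_nr_pages(page) == 512 for a
	 * 2MB THP. Previously the counters advanced by only 1 for the
	 * whole compound page and the cursor walked into the 511 tails.
	 */
	unsigned int isolated_pages = hpage_nr_pages(page);	/* 512 */

	nr_taken += isolated_pages;		/* all 512 pages accounted */
	nr_lumpy_taken += isolated_pages;	/* tracepoint now accurate */
	pfn += isolated_pages - 1;		/* the loop's pfn++ supplies
						   the final step past the tails */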

[mgorman@suse.de: Updated changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/vmscan.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a1893c0..3421746 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1183,13 +1183,16 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 				break;
 
 			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+				unsigned int isolated_pages;
 				list_move(&cursor_page->lru, dst);
 				mem_cgroup_del_lru(cursor_page);
-				nr_taken += hpage_nr_pages(page);
-				nr_lumpy_taken++;
+				isolated_pages = hpage_nr_pages(page);
+				nr_taken += isolated_pages;
+				nr_lumpy_taken += isolated_pages;
 				if (PageDirty(cursor_page))
-					nr_lumpy_dirty++;
+					nr_lumpy_dirty += isolated_pages;
 				scan++;
+				pfn += isolated_pages-1;
 			} else {
 				/*
 				 * Check if the page is freed already.
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 04/11] mm: vmscan: Do not OOM if aborting reclaim to start compaction
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

When direct reclaim is entered, it is possible that reclaim will be
aborted so that compaction can be attempted to satisfy a high-order
allocation. If this decision is made before any pages are reclaimed,
it is possible for 0 to be returned to the page allocator, potentially
triggering an OOM. This has not been observed in practice but it is a
possibility, so this patch addresses it.
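
The guarantee the return value must provide can be sketched as follows
(an illustrative restatement of the logic below, not the patched
function itself):

	/*
	 * Sketch of the contract (not the patched code): 0 means "no
	 * progress" and can send the allocator down the "may oom" path,
	 * so return non-zero when reclaim was aborted purely to let
	 * compaction run.
	 */
	static unsigned long reclaim_progress(bool aborted_for_compaction,
					      unsigned long nr_reclaimed)
	{
		if (nr_reclaimed)
			return nr_reclaimed;

		/* Aborting reclaim to try compaction? don't OOM, then */
		if (aborted_for_compaction)
			return 1;

		return 0;
	}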

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3421746..5f4c789 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2222,6 +2222,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	struct zoneref *z;
 	struct zone *zone;
 	unsigned long writeback_threshold;
+	bool should_abort_reclaim;
 
 	get_mems_allowed();
 	delayacct_freepages_start();
@@ -2233,7 +2234,8 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		sc->nr_scanned = 0;
 		if (!priority)
 			disable_swap_token(sc->mem_cgroup);
-		if (shrink_zones(priority, zonelist, sc))
+		should_abort_reclaim = shrink_zones(priority, zonelist, sc);
+		if (should_abort_reclaim)
 			break;
 
 		/*
@@ -2301,6 +2303,10 @@ out:
 	if (oom_killer_disabled)
 		return 0;
 
+	/* Aborting reclaim to try compaction? don't OOM, then */
+	if (should_abort_reclaim)
+		return 1;
+
 	/* top priority shrink_zones still had more to do? don't OOM, then */
 	if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
 		return 1;
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 05/11] mm: compaction: Determine if dirty pages can be migrated without blocking within ->migratepage
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

Asynchronous compaction is used when allocating transparent hugepages
to avoid blocking for long periods of time. Due to reports of
stalling, there was a debate on disabling synchronous compaction
but this severely impacted allocation success rates. Part of the
reason was that, when deciding whether to migrate dirty pages, the
following check is made:

	if (PageDirty(page) && !sync &&
		mapping->a_ops->migratepage != migrate_page)
			rc = -EBUSY;

This skips over all pages using buffer_migrate_page() even though
it is possible to migrate some of these pages without blocking. This
patch updates the ->migratepage callback with a "sync" parameter. It
is the responsibility of the callback to gracefully fail migration of
the page if it would block.
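
Under the new contract, a filesystem callback might look like the
following (a hypothetical example for illustration, not one of the
conversions in this patch):

	/*
	 * Hypothetical example of the new contract: when sync is false,
	 * fail with -EAGAIN rather than block so that async compaction
	 * can move on to another page; migrate_pages() may retry later.
	 */
	static int example_migratepage(struct address_space *mapping,
			struct page *newpage, struct page *page, bool sync)
	{
		if (!sync && PageDirty(page))
			return -EAGAIN;	/* would block on writeback/buffers */

		return migrate_page(mapping, newpage, page, sync);
	}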

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 fs/btrfs/disk-io.c      |    4 +-
 fs/nfs/internal.h       |    2 +-
 fs/nfs/write.c          |    4 +-
 include/linux/fs.h      |    9 ++-
 include/linux/migrate.h |    2 +-
 mm/migrate.c            |  129 +++++++++++++++++++++++++++++++++-------------
 6 files changed, 104 insertions(+), 46 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 632f8f3..896b87a 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -872,7 +872,7 @@ static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
 
 #ifdef CONFIG_MIGRATION
 static int btree_migratepage(struct address_space *mapping,
-			struct page *newpage, struct page *page)
+			struct page *newpage, struct page *page, bool sync)
 {
 	/*
 	 * we can't safely write a btree page from here,
@@ -887,7 +887,7 @@ static int btree_migratepage(struct address_space *mapping,
 	if (page_has_private(page) &&
 	    !try_to_release_page(page, GFP_KERNEL))
 		return -EAGAIN;
-	return migrate_page(mapping, newpage, page);
+	return migrate_page(mapping, newpage, page, sync);
 }
 #endif
 
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 3f4d957..8d96ed6 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -330,7 +330,7 @@ void nfs_commit_release_pages(struct nfs_write_data *data);
 
 #ifdef CONFIG_MIGRATION
 extern int nfs_migrate_page(struct address_space *,
-		struct page *, struct page *);
+		struct page *, struct page *, bool);
 #else
 #define nfs_migrate_page NULL
 #endif
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 1dda78d..33475df 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1711,7 +1711,7 @@ out_error:
 
 #ifdef CONFIG_MIGRATION
 int nfs_migrate_page(struct address_space *mapping, struct page *newpage,
-		struct page *page)
+		struct page *page, bool sync)
 {
 	/*
 	 * If PagePrivate is set, then the page is currently associated with
@@ -1726,7 +1726,7 @@ int nfs_migrate_page(struct address_space *mapping, struct page *newpage,
 
 	nfs_fscache_release_page(page, GFP_KERNEL);
 
-	return migrate_page(mapping, newpage, page);
+	return migrate_page(mapping, newpage, page, sync);
 }
 #endif
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e313022..07dae2a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -609,9 +609,12 @@ struct address_space_operations {
 			loff_t offset, unsigned long nr_segs);
 	int (*get_xip_mem)(struct address_space *, pgoff_t, int,
 						void **, unsigned long *);
-	/* migrate the contents of a page to the specified target */
+	/*
+	 * migrate the contents of a page to the specified target. If sync
+	 * is false, it must not block.
+	 */
 	int (*migratepage) (struct address_space *,
-			struct page *, struct page *);
+			struct page *, struct page *, bool);
 	int (*launder_page) (struct page *);
 	int (*is_partially_uptodate) (struct page *, read_descriptor_t *,
 					unsigned long);
@@ -2578,7 +2581,7 @@ extern int generic_check_addressable(unsigned, u64);
 
 #ifdef CONFIG_MIGRATION
 extern int buffer_migrate_page(struct address_space *,
-				struct page *, struct page *);
+				struct page *, struct page *, bool);
 #else
 #define buffer_migrate_page NULL
 #endif
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e39aeec..14e6d2a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -11,7 +11,7 @@ typedef struct page *new_page_t(struct page *, unsigned long private, int **);
 
 extern void putback_lru_pages(struct list_head *l);
 extern int migrate_page(struct address_space *,
-			struct page *, struct page *);
+			struct page *, struct page *, bool);
 extern int migrate_pages(struct list_head *l, new_page_t x,
 			unsigned long private, bool offlining,
 			bool sync);
diff --git a/mm/migrate.c b/mm/migrate.c
index 578e291..a5be362 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -220,6 +220,55 @@ out:
 	pte_unmap_unlock(ptep, ptl);
 }
 
+#ifdef CONFIG_BLOCK
+/* Returns true if all buffers are successfully locked */
+static bool buffer_migrate_lock_buffers(struct buffer_head *head, bool sync)
+{
+	struct buffer_head *bh = head;
+
+	/* Simple case, sync compaction */
+	if (sync) {
+		do {
+			get_bh(bh);
+			lock_buffer(bh);
+			bh = bh->b_this_page;
+
+		} while (bh != head);
+
+		return true;
+	}
+
+	/* async case, we cannot block on lock_buffer so use trylock_buffer */
+	do {
+		get_bh(bh);
+		if (!trylock_buffer(bh)) {
+			/*
+			 * We failed to lock the buffer and cannot stall in
+			 * async migration. Release the taken locks
+			 */
+			struct buffer_head *failed_bh = bh;
+			put_bh(failed_bh);
+			bh = head;
+			while (bh != failed_bh) {
+				unlock_buffer(bh);
+				put_bh(bh);
+				bh = bh->b_this_page;
+			}
+			return false;
+		}
+
+		bh = bh->b_this_page;
+	} while (bh != head);
+	return true;
+}
+#else
+static inline bool buffer_migrate_lock_buffers(struct buffer_head *head,
+								bool sync)
+{
+	return true;
+}
+#endif /* CONFIG_BLOCK */
+
 /*
  * Replace the page in the mapping.
  *
@@ -229,7 +278,8 @@ out:
  * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
  */
 static int migrate_page_move_mapping(struct address_space *mapping,
-		struct page *newpage, struct page *page)
+		struct page *newpage, struct page *page,
+		struct buffer_head *head, bool sync)
 {
 	int expected_count;
 	void **pslot;
@@ -259,6 +309,19 @@ static int migrate_page_move_mapping(struct address_space *mapping,
 	}
 
 	/*
+	 * In the async migration case of moving a page with buffers, lock the
+	 * buffers using trylock before the mapping is moved. If the mapping
+	 * were moved and we then failed to lock the buffers, we could not
+	 * move the mapping back due to an elevated page count and would have
+	 * to block waiting on other references to be dropped.
+	 */
+	if (!sync && head && !buffer_migrate_lock_buffers(head, sync)) {
+		page_unfreeze_refs(page, expected_count);
+		spin_unlock_irq(&mapping->tree_lock);
+		return -EAGAIN;
+	}
+
+	/*
 	 * Now we know that no one else is looking at the page.
 	 */
 	get_page(newpage);	/* add cache reference */
@@ -415,13 +478,13 @@ EXPORT_SYMBOL(fail_migrate_page);
  * Pages are locked upon entry and exit.
  */
 int migrate_page(struct address_space *mapping,
-		struct page *newpage, struct page *page)
+		struct page *newpage, struct page *page, bool sync)
 {
 	int rc;
 
 	BUG_ON(PageWriteback(page));	/* Writeback must be complete */
 
-	rc = migrate_page_move_mapping(mapping, newpage, page);
+	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, sync);
 
 	if (rc)
 		return rc;
@@ -438,28 +501,28 @@ EXPORT_SYMBOL(migrate_page);
  * exist.
  */
 int buffer_migrate_page(struct address_space *mapping,
-		struct page *newpage, struct page *page)
+		struct page *newpage, struct page *page, bool sync)
 {
 	struct buffer_head *bh, *head;
 	int rc;
 
 	if (!page_has_buffers(page))
-		return migrate_page(mapping, newpage, page);
+		return migrate_page(mapping, newpage, page, sync);
 
 	head = page_buffers(page);
 
-	rc = migrate_page_move_mapping(mapping, newpage, page);
+	rc = migrate_page_move_mapping(mapping, newpage, page, head, sync);
 
 	if (rc)
 		return rc;
 
-	bh = head;
-	do {
-		get_bh(bh);
-		lock_buffer(bh);
-		bh = bh->b_this_page;
-
-	} while (bh != head);
+	/*
+	 * In the async case, migrate_page_move_mapping locked the buffers
+	 * with an IRQ-safe spinlock held. In the sync case, the buffers
+	 * need to be locked now.
+	 */
+	if (sync)
+		BUG_ON(!buffer_migrate_lock_buffers(head, sync));
 
 	ClearPagePrivate(page);
 	set_page_private(newpage, page_private(page));
@@ -536,10 +599,13 @@ static int writeout(struct address_space *mapping, struct page *page)
  * Default handling if a filesystem does not provide a migration function.
  */
 static int fallback_migrate_page(struct address_space *mapping,
-	struct page *newpage, struct page *page)
+	struct page *newpage, struct page *page, bool sync)
 {
-	if (PageDirty(page))
+	if (PageDirty(page)) {
+		if (!sync)
+			return -EBUSY;
 		return writeout(mapping, page);
+	}
 
 	/*
 	 * Buffers may be managed in a filesystem specific way.
@@ -549,7 +615,7 @@ static int fallback_migrate_page(struct address_space *mapping,
 	    !try_to_release_page(page, GFP_KERNEL))
 		return -EAGAIN;
 
-	return migrate_page(mapping, newpage, page);
+	return migrate_page(mapping, newpage, page, sync);
 }
 
 /*
@@ -585,29 +651,18 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 
 	mapping = page_mapping(page);
 	if (!mapping)
-		rc = migrate_page(mapping, newpage, page);
-	else {
+		rc = migrate_page(mapping, newpage, page, sync);
+	else if (mapping->a_ops->migratepage)
 		/*
-		 * Do not writeback pages if !sync and migratepage is
-		 * not pointing to migrate_page() which is nonblocking
-		 * (swapcache/tmpfs uses migratepage = migrate_page).
+		 * Most pages have a mapping and most filesystems provide a
+		 * migratepage callback. Anonymous pages are part of swap
+		 * space which also has its own migratepage callback. This
+		 * is the most common path for page migration.
 		 */
-		if (PageDirty(page) && !sync &&
-		    mapping->a_ops->migratepage != migrate_page)
-			rc = -EBUSY;
-		else if (mapping->a_ops->migratepage)
-			/*
-			 * Most pages have a mapping and most filesystems
-			 * should provide a migration function. Anonymous
-			 * pages are part of swap space which also has its
-			 * own migration function. This is the most common
-			 * path for page migration.
-			 */
-			rc = mapping->a_ops->migratepage(mapping,
-							newpage, page);
-		else
-			rc = fallback_migrate_page(mapping, newpage, page);
-	}
+		rc = mapping->a_ops->migratepage(mapping,
+						newpage, page, sync);
+	else
+		rc = fallback_migrate_page(mapping, newpage, page, sync);
 
 	if (rc) {
 		newpage->mapping = NULL;
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 06/11] mm: compaction: make isolate_lru_page() filter-aware again
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

Commit [39deaf85: mm: compaction: make isolate_lru_page() filter-aware]
noted that compaction does not migrate dirty or writeback pages and
that it was meaningless to pick the page and re-add it to the LRU list.
This had to be partially reverted because some dirty pages can be
migrated by compaction without blocking.

This patch updates "mm: compaction: make isolate_lru_page" by skipping
over pages that migration has no possibility of migrating without
blocking, to minimise LRU disruption.
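
In effect, the new ISOLATE_ASYNC_MIGRATE filter can be summarised as
follows (an illustrative sketch of the logic added to
__isolate_lru_page() below, not the patch itself):

	/*
	 * Illustrative summary, not the patch itself: skip any page
	 * that async migration could only handle by blocking.
	 */
	static bool async_migrate_would_block(struct page *page)
	{
		struct address_space *mapping;

		/* All the caller can do on PageWriteback is block */
		if (PageWriteback(page))
			return true;

		/* Clean pages can be migrated without blocking */
		if (!PageDirty(page))
			return false;

		/*
		 * Dirty pages are only movable without blocking if they
		 * have no mapping or provide a ->migratepage callback
		 */
		mapping = page_mapping(page);
		return mapping && !mapping->a_ops->migratepage;
	}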

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/mmzone.h |    2 ++
 mm/compaction.c        |    3 +++
 mm/vmscan.c            |   35 +++++++++++++++++++++++++++++++++--
 3 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 188cb2f..ac5b522 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -173,6 +173,8 @@ static inline int is_unevictable_lru(enum lru_list l)
 #define ISOLATE_CLEAN		((__force isolate_mode_t)0x4)
 /* Isolate unmapped file */
 #define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x8)
+/* Isolate for asynchronous migration */
+#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x10)
 
 /* LRU Isolation modes. */
 typedef unsigned __bitwise__ isolate_mode_t;
diff --git a/mm/compaction.c b/mm/compaction.c
index 615502b..0379263 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -349,6 +349,9 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			continue;
 		}
 
+		if (!cc->sync)
+			mode |= ISOLATE_ASYNC_MIGRATE;
+
 		/* Try isolate the page */
 		if (__isolate_lru_page(page, mode, 0) != 0)
 			continue;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5f4c789..d2b701a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1061,8 +1061,39 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
 
 	ret = -EBUSY;
 
-	if ((mode & ISOLATE_CLEAN) && (PageDirty(page) || PageWriteback(page)))
-		return ret;
+	/*
+	 * To minimise LRU disruption, the caller can indicate that it only
+	 * wants to isolate pages it will be able to operate on without
+	 * blocking - clean pages for the most part.
+	 *
+	 * ISOLATE_CLEAN means that only clean pages should be isolated. This
+	 * is used by reclaim when it cannot write to backing storage.
+	 *
+	 * ISOLATE_ASYNC_MIGRATE indicates that the caller only wants pages
+	 * that can be migrated without blocking.
+	 */
+	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
+		/* All the caller can do on PageWriteback is block */
+		if (PageWriteback(page))
+			return ret;
+
+		if (PageDirty(page)) {
+			struct address_space *mapping;
+
+			/* ISOLATE_CLEAN means only clean pages */
+			if (mode & ISOLATE_CLEAN)
+				return ret;
+
+			/*
+			 * Only pages without mappings or that have a
+			 * ->migratepage callback are possible to migrate
+			 * without blocking
+			 */
+			mapping = page_mapping(page);
+			if (mapping && !mapping->a_ops->migratepage)
+				return ret;
+		}
+	}
 
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
 		return ret;
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 07/11] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

If compaction is deferred, direct reclaim is used to try to free
enough pages for the allocation to succeed. For small high-orders,
this has a reasonable chance of success. However, if the caller has
specified __GFP_NO_KSWAPD to limit the disruption to the system, it
makes more sense to fail the allocation rather than stall the caller
in direct reclaim. This patch skips direct reclaim if compaction is
deferred and the caller specifies __GFP_NO_KSWAPD.

Async compaction only considers a subset of pages so it is possible for
compaction to be deferred prematurely and not enter direct reclaim even
in cases where it should. To compensate for this, this patch also defers
compaction only if sync compaction failed.
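
The caller-visible effect can be sketched as follows (a hypothetical
THP-style allocation for illustration; the key flag is
__GFP_NO_KSWAPD, and fallback_to_base_pages() is a made-up helper):

	/*
	 * Illustrative sketch: with compaction deferred, an allocation
	 * carrying __GFP_NO_KSWAPD now returns NULL quickly instead of
	 * stalling in direct reclaim, and the caller falls back.
	 */
	static struct page *alloc_thp_or_fallback(void)
	{
		struct page *hpage = alloc_pages(GFP_HIGHUSER_MOVABLE |
					__GFP_COMP | __GFP_NORETRY |
					__GFP_NO_KSWAPD, HPAGE_PMD_ORDER);
		if (!hpage)
			return fallback_to_base_pages();	/* hypothetical */
		return hpage;
	}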

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/page_alloc.c |   45 +++++++++++++++++++++++++++++++++++----------
 1 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9dd443d..d979376 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1886,14 +1886,20 @@ static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, unsigned long *did_some_progress,
-	bool sync_migration)
+	int migratetype, bool sync_migration,
+	bool *deferred_compaction,
+	unsigned long *did_some_progress)
 {
 	struct page *page;
 
-	if (!order || compaction_deferred(preferred_zone))
+	if (!order)
 		return NULL;
 
+	if (compaction_deferred(preferred_zone)) {
+		*deferred_compaction = true;
+		return NULL;
+	}
+
 	current->flags |= PF_MEMALLOC;
 	*did_some_progress = try_to_compact_pages(zonelist, order, gfp_mask,
 						nodemask, sync_migration);
@@ -1921,7 +1927,13 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 		 * but not enough to satisfy watermarks.
 		 */
 		count_vm_event(COMPACTFAIL);
-		defer_compaction(preferred_zone);
+
+		/*
+		 * As async compaction considers a subset of pageblocks, only
+		 * defer if the failure was a sync compaction failure.
+		 */
+		if (sync_migration)
+			defer_compaction(preferred_zone);
 
 		cond_resched();
 	}
@@ -1933,8 +1945,9 @@ static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, unsigned long *did_some_progress,
-	bool sync_migration)
+	int migratetype, bool sync_migration,
+	bool *deferred_compaction,
+	unsigned long *did_some_progress)
 {
 	return NULL;
 }
@@ -2084,6 +2097,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	bool sync_migration = false;
+	bool deferred_compaction = false;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -2164,12 +2178,22 @@ rebalance:
 					zonelist, high_zoneidx,
 					nodemask,
 					alloc_flags, preferred_zone,
-					migratetype, &did_some_progress,
-					sync_migration);
+					migratetype, sync_migration,
+					&deferred_compaction,
+					&did_some_progress);
 	if (page)
 		goto got_pg;
 	sync_migration = true;
 
+	/*
+	 * If compaction is deferred for high-order allocations, it is because
+	 * sync compaction recently failed. If this is the case and the caller
+	 * has requested the system not be heavily disrupted, fail the
+	 * allocation now instead of entering direct reclaim
+	 */
+	if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
+		goto nopage;
+
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order,
 					zonelist, high_zoneidx,
@@ -2232,8 +2256,9 @@ rebalance:
 					zonelist, high_zoneidx,
 					nodemask,
 					alloc_flags, preferred_zone,
-					migratetype, &did_some_progress,
-					sync_migration);
+					migratetype, sync_migration,
+					&deferred_compaction,
+					&did_some_progress);
 		if (page)
 			goto got_pg;
 	}
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 08/11] mm: compaction: Introduce sync-light migration for use by compaction
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

This patch adds a lightweight sync migration mode, MIGRATE_SYNC_LIGHT,
that avoids writing back pages to backing storage. Async
compaction maps to MIGRATE_ASYNC while sync compaction maps to
MIGRATE_SYNC_LIGHT. For other migrate_pages users such as memory
hotplug, MIGRATE_SYNC is used.

This avoids sync compaction stalling for an excessive length of time,
particularly when copying files to a USB stick where there might be
a large number of dirty pages backed by a filesystem that does not
support ->writepages.
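
A minimal sketch of the resulting mapping, reusing the enum this
patch introduces (the helper and its arguments are illustrative
only, not kernel code):

enum migrate_mode {
	MIGRATE_ASYNC,		/* never block */
	MIGRATE_SYNC_LIGHT,	/* may block, but never on ->writepage */
	MIGRATE_SYNC,		/* may block, including on writeback */
};

/* Hypothetical helper: which mode a class of caller would pick */
static enum migrate_mode pick_mode(int is_compaction, int sync_requested)
{
	if (!is_compaction)
		return MIGRATE_SYNC;	/* hotplug, soft offline, mempolicy */
	return sync_requested ? MIGRATE_SYNC_LIGHT : MIGRATE_ASYNC;
}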

[aarcange@redhat.com: This patch is heavily based on Andrea's work]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 fs/btrfs/disk-io.c      |    3 +-
 fs/nfs/internal.h       |    2 +-
 fs/nfs/write.c          |    2 +-
 include/linux/fs.h      |    6 ++-
 include/linux/migrate.h |   23 +++++++++++---
 mm/compaction.c         |    2 +-
 mm/memory-failure.c     |    2 +-
 mm/memory_hotplug.c     |    2 +-
 mm/mempolicy.c          |    2 +-
 mm/migrate.c            |   78 ++++++++++++++++++++++++++---------------------
 10 files changed, 73 insertions(+), 49 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 896b87a..dbe9518 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -872,7 +872,8 @@ static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
 
 #ifdef CONFIG_MIGRATION
 static int btree_migratepage(struct address_space *mapping,
-			struct page *newpage, struct page *page, bool sync)
+			struct page *newpage, struct page *page,
+			enum migrate_mode sync)
 {
 	/*
 	 * we can't safely write a btree page from here,
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 8d96ed6..68b3f20 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -330,7 +330,7 @@ void nfs_commit_release_pages(struct nfs_write_data *data);
 
 #ifdef CONFIG_MIGRATION
 extern int nfs_migrate_page(struct address_space *,
-		struct page *, struct page *, bool);
+		struct page *, struct page *, enum migrate_mode);
 #else
 #define nfs_migrate_page NULL
 #endif
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 33475df..adb87d9 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1711,7 +1711,7 @@ out_error:
 
 #ifdef CONFIG_MIGRATION
 int nfs_migrate_page(struct address_space *mapping, struct page *newpage,
-		struct page *page, bool sync)
+		struct page *page, enum migrate_mode sync)
 {
 	/*
 	 * If PagePrivate is set, then the page is currently associated with
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 07dae2a..715b344 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -525,6 +525,7 @@ enum positive_aop_returns {
 struct page;
 struct address_space;
 struct writeback_control;
+enum migrate_mode;
 
 struct iov_iter {
 	const struct iovec *iov;
@@ -614,7 +615,7 @@ struct address_space_operations {
 	 * is false, it must not block.
 	 */
 	int (*migratepage) (struct address_space *,
-			struct page *, struct page *, bool);
+			struct page *, struct page *, enum migrate_mode);
 	int (*launder_page) (struct page *);
 	int (*is_partially_uptodate) (struct page *, read_descriptor_t *,
 					unsigned long);
@@ -2581,7 +2582,8 @@ extern int generic_check_addressable(unsigned, u64);
 
 #ifdef CONFIG_MIGRATION
 extern int buffer_migrate_page(struct address_space *,
-				struct page *, struct page *, bool);
+				struct page *, struct page *,
+				enum migrate_mode);
 #else
 #define buffer_migrate_page NULL
 #endif
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 14e6d2a..775787c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -6,18 +6,31 @@
 
 typedef struct page *new_page_t(struct page *, unsigned long private, int **);
 
+/*
+ * MIGRATE_ASYNC means never block
+ * MIGRATE_SYNC_LIGHT in the current implementation means to allow blocking
+ *	on most operations but not ->writepage as the potential stall time
+ *	is too significant
+ * MIGRATE_SYNC will block when migrating pages
+ */
+enum migrate_mode {
+	MIGRATE_ASYNC,
+	MIGRATE_SYNC_LIGHT,
+	MIGRATE_SYNC,
+};
+
 #ifdef CONFIG_MIGRATION
 #define PAGE_MIGRATION 1
 
 extern void putback_lru_pages(struct list_head *l);
 extern int migrate_page(struct address_space *,
-			struct page *, struct page *, bool);
+			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t x,
 			unsigned long private, bool offlining,
-			bool sync);
+			enum migrate_mode sync);
 extern int migrate_huge_pages(struct list_head *l, new_page_t x,
 			unsigned long private, bool offlining,
-			bool sync);
+			enum migrate_mode sync);
 
 extern int fail_migrate_page(struct address_space *,
 			struct page *, struct page *);
@@ -36,10 +49,10 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 static inline void putback_lru_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t x,
 		unsigned long private, bool offlining,
-		bool sync) { return -ENOSYS; }
+		enum migrate_mode sync) { return -ENOSYS; }
 static inline int migrate_huge_pages(struct list_head *l, new_page_t x,
 		unsigned long private, bool offlining,
-		bool sync) { return -ENOSYS; }
+		enum migrate_mode sync) { return -ENOSYS; }
 
 static inline int migrate_prep(void) { return -ENOSYS; }
 static inline int migrate_prep_local(void) { return -ENOSYS; }
diff --git a/mm/compaction.c b/mm/compaction.c
index 0379263..dbe1da0 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -555,7 +555,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 		nr_migrate = cc->nr_migratepages;
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
 				(unsigned long)cc, false,
-				cc->sync);
+				cc->sync ? MIGRATE_SYNC_LIGHT : MIGRATE_ASYNC);
 		update_nr_listpages(cc);
 		nr_remaining = cc->nr_migratepages;
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 06d3479..56080ea 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1557,7 +1557,7 @@ int soft_offline_page(struct page *page, int flags)
 					    page_is_file_cache(page));
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL,
-								0, true);
+							0, MIGRATE_SYNC);
 		if (ret) {
 			putback_lru_pages(&pagelist);
 			pr_info("soft offline: %#lx: migration failed %d, type %lx\n",
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2168489..6629faf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -809,7 +809,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		}
 		/* this function returns # of failed pages */
 		ret = migrate_pages(&source, hotremove_migrate_alloc, 0,
-								true, true);
+							true, MIGRATE_SYNC);
 		if (ret)
 			putback_lru_pages(&source);
 	}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index adc3954..97009a4 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -933,7 +933,7 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 
 	if (!list_empty(&pagelist)) {
 		err = migrate_pages(&pagelist, new_node_page, dest,
-								false, true);
+							false, MIGRATE_SYNC);
 		if (err)
 			putback_lru_pages(&pagelist);
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index a5be362..44071dc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -222,12 +222,13 @@ out:
 
 #ifdef CONFIG_BLOCK
 /* Returns true if all buffers are successfully locked */
-static bool buffer_migrate_lock_buffers(struct buffer_head *head, bool sync)
+static bool buffer_migrate_lock_buffers(struct buffer_head *head,
+							enum migrate_mode mode)
 {
 	struct buffer_head *bh = head;
 
 	/* Simple case, sync compaction */
-	if (sync) {
+	if (mode != MIGRATE_ASYNC) {
 		do {
 			get_bh(bh);
 			lock_buffer(bh);
@@ -263,7 +264,7 @@ static bool buffer_migrate_lock_buffers(struct buffer_head *head, bool sync)
 }
 #else
 static inline bool buffer_migrate_lock_buffers(struct buffer_head *head,
-								bool sync)
+							enum migrate_mode mode)
 {
 	return true;
 }
@@ -279,7 +280,7 @@ static inline bool buffer_migrate_lock_buffers(struct buffer_head *head,
  */
 static int migrate_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page,
-		struct buffer_head *head, bool sync)
+		struct buffer_head *head, enum migrate_mode mode)
 {
 	int expected_count;
 	void **pslot;
@@ -315,7 +316,8 @@ static int migrate_page_move_mapping(struct address_space *mapping,
 	 * the mapping back due to an elevated page count, we would have to
 	 * block waiting on other references to be dropped.
 	 */
-	if (!sync && head && !buffer_migrate_lock_buffers(head, sync)) {
+	if (mode == MIGRATE_ASYNC && head &&
+			!buffer_migrate_lock_buffers(head, mode)) {
 		page_unfreeze_refs(page, expected_count);
 		spin_unlock_irq(&mapping->tree_lock);
 		return -EAGAIN;
@@ -478,13 +480,14 @@ EXPORT_SYMBOL(fail_migrate_page);
  * Pages are locked upon entry and exit.
  */
 int migrate_page(struct address_space *mapping,
-		struct page *newpage, struct page *page, bool sync)
+		struct page *newpage, struct page *page,
+		enum migrate_mode mode)
 {
 	int rc;
 
 	BUG_ON(PageWriteback(page));	/* Writeback must be complete */
 
-	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, sync);
+	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode);
 
 	if (rc)
 		return rc;
@@ -501,17 +504,17 @@ EXPORT_SYMBOL(migrate_page);
  * exist.
  */
 int buffer_migrate_page(struct address_space *mapping,
-		struct page *newpage, struct page *page, bool sync)
+		struct page *newpage, struct page *page, enum migrate_mode mode)
 {
 	struct buffer_head *bh, *head;
 	int rc;
 
 	if (!page_has_buffers(page))
-		return migrate_page(mapping, newpage, page, sync);
+		return migrate_page(mapping, newpage, page, mode);
 
 	head = page_buffers(page);
 
-	rc = migrate_page_move_mapping(mapping, newpage, page, head, sync);
+	rc = migrate_page_move_mapping(mapping, newpage, page, head, mode);
 
 	if (rc)
 		return rc;
@@ -521,8 +524,8 @@ int buffer_migrate_page(struct address_space *mapping,
 	 * with an IRQ-safe spinlock held. In the sync case, the buffers
 	 * need to be locked now
 	 */
-	if (sync)
-		BUG_ON(!buffer_migrate_lock_buffers(head, sync));
+	if (mode != MIGRATE_ASYNC)
+		BUG_ON(!buffer_migrate_lock_buffers(head, mode));
 
 	ClearPagePrivate(page);
 	set_page_private(newpage, page_private(page));
@@ -599,10 +602,11 @@ static int writeout(struct address_space *mapping, struct page *page)
  * Default handling if a filesystem does not provide a migration function.
  */
 static int fallback_migrate_page(struct address_space *mapping,
-	struct page *newpage, struct page *page, bool sync)
+	struct page *newpage, struct page *page, enum migrate_mode mode)
 {
 	if (PageDirty(page)) {
-		if (!sync)
+		/* Only writeback pages in full synchronous migration */
+		if (mode != MIGRATE_SYNC)
 			return -EBUSY;
 		return writeout(mapping, page);
 	}
@@ -615,7 +619,7 @@ static int fallback_migrate_page(struct address_space *mapping,
 	    !try_to_release_page(page, GFP_KERNEL))
 		return -EAGAIN;
 
-	return migrate_page(mapping, newpage, page, sync);
+	return migrate_page(mapping, newpage, page, mode);
 }
 
 /*
@@ -630,7 +634,7 @@ static int fallback_migrate_page(struct address_space *mapping,
  *  == 0 - success
  */
 static int move_to_new_page(struct page *newpage, struct page *page,
-					int remap_swapcache, bool sync)
+				int remap_swapcache, enum migrate_mode mode)
 {
 	struct address_space *mapping;
 	int rc;
@@ -651,7 +655,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 
 	mapping = page_mapping(page);
 	if (!mapping)
-		rc = migrate_page(mapping, newpage, page, sync);
+		rc = migrate_page(mapping, newpage, page, mode);
 	else if (mapping->a_ops->migratepage)
 		/*
 		 * Most pages have a mapping and most filesystems provide a
@@ -660,9 +664,9 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 		 * is the most common path for page migration.
 		 */
 		rc = mapping->a_ops->migratepage(mapping,
-						newpage, page, sync);
+						newpage, page, mode);
 	else
-		rc = fallback_migrate_page(mapping, newpage, page, sync);
+		rc = fallback_migrate_page(mapping, newpage, page, mode);
 
 	if (rc) {
 		newpage->mapping = NULL;
@@ -677,7 +681,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 }
 
 static int __unmap_and_move(struct page *page, struct page *newpage,
-				int force, bool offlining, bool sync)
+			int force, bool offlining, enum migrate_mode mode)
 {
 	int rc = -EAGAIN;
 	int remap_swapcache = 1;
@@ -686,7 +690,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	struct anon_vma *anon_vma = NULL;
 
 	if (!trylock_page(page)) {
-		if (!force || !sync)
+		if (!force || mode == MIGRATE_ASYNC)
 			goto out;
 
 		/*
@@ -732,10 +736,12 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 
 	if (PageWriteback(page)) {
 		/*
-		 * For !sync, there is no point retrying as the retry loop
-		 * is expected to be too short for PageWriteback to be cleared
+		 * Only in the case of a full synchronous migration is it
+		 * necessary to wait for PageWriteback. In the async case,
+		 * the retry loop is too short and in the sync-light case,
+		 * the overhead of stalling is too much
 		 */
-		if (!sync) {
+		if (mode != MIGRATE_SYNC) {
 			rc = -EBUSY;
 			goto uncharge;
 		}
@@ -806,7 +812,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 
 skip_unmap:
 	if (!page_mapped(page))
-		rc = move_to_new_page(newpage, page, remap_swapcache, sync);
+		rc = move_to_new_page(newpage, page, remap_swapcache, mode);
 
 	if (rc && remap_swapcache)
 		remove_migration_ptes(page, page);
@@ -829,7 +835,8 @@ out:
  * to the newly allocated page in newpage.
  */
 static int unmap_and_move(new_page_t get_new_page, unsigned long private,
-			struct page *page, int force, bool offlining, bool sync)
+			struct page *page, int force, bool offlining,
+			enum migrate_mode mode)
 {
 	int rc = 0;
 	int *result = NULL;
@@ -847,7 +854,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 		if (unlikely(split_huge_page(page)))
 			goto out;
 
-	rc = __unmap_and_move(page, newpage, force, offlining, sync);
+	rc = __unmap_and_move(page, newpage, force, offlining, mode);
 out:
 	if (rc != -EAGAIN) {
 		/*
@@ -895,7 +902,8 @@ out:
  */
 static int unmap_and_move_huge_page(new_page_t get_new_page,
 				unsigned long private, struct page *hpage,
-				int force, bool offlining, bool sync)
+				int force, bool offlining,
+				enum migrate_mode mode)
 {
 	int rc = 0;
 	int *result = NULL;
@@ -908,7 +916,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	rc = -EAGAIN;
 
 	if (!trylock_page(hpage)) {
-		if (!force || !sync)
+		if (!force || mode != MIGRATE_SYNC)
 			goto out;
 		lock_page(hpage);
 	}
@@ -919,7 +927,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	try_to_unmap(hpage, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
 
 	if (!page_mapped(hpage))
-		rc = move_to_new_page(new_hpage, hpage, 1, sync);
+		rc = move_to_new_page(new_hpage, hpage, 1, mode);
 
 	if (rc)
 		remove_migration_ptes(hpage, hpage);
@@ -962,7 +970,7 @@ out:
  */
 int migrate_pages(struct list_head *from,
 		new_page_t get_new_page, unsigned long private, bool offlining,
-		bool sync)
+		enum migrate_mode mode)
 {
 	int retry = 1;
 	int nr_failed = 0;
@@ -983,7 +991,7 @@ int migrate_pages(struct list_head *from,
 
 			rc = unmap_and_move(get_new_page, private,
 						page, pass > 2, offlining,
-						sync);
+						mode);
 
 			switch(rc) {
 			case -ENOMEM:
@@ -1013,7 +1021,7 @@ out:
 
 int migrate_huge_pages(struct list_head *from,
 		new_page_t get_new_page, unsigned long private, bool offlining,
-		bool sync)
+		enum migrate_mode mode)
 {
 	int retry = 1;
 	int nr_failed = 0;
@@ -1030,7 +1038,7 @@ int migrate_huge_pages(struct list_head *from,
 
 			rc = unmap_and_move_huge_page(get_new_page,
 					private, page, pass > 2, offlining,
-					sync);
+					mode);
 
 			switch(rc) {
 			case -ENOMEM:
@@ -1159,7 +1167,7 @@ set_status:
 	err = 0;
 	if (!list_empty(&pagelist)) {
 		err = migrate_pages(&pagelist, new_page_node,
-				(unsigned long)pm, 0, true);
+				(unsigned long)pm, 0, MIGRATE_SYNC);
 		if (err)
 			putback_lru_pages(&pagelist);
 	}
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 09/11] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

In commit [e0887c19: vmscan: limit direct reclaim for higher order
allocations], Rik noted that reclaim was too aggressive when THP was
enabled. In his initial patch he used the number of free pages to
decide if reclaim should abort for compaction. My feedback was that
reclaim and compaction should be using the same logic when deciding if
reclaim should be aborted.

Unfortunately, this had the effect of reducing THP success rates when
the workload included something like streaming reads that continually
allocated pages. The window during which compaction could run and return
a THP was too small.

This patch combines Rik's two patches together. compaction_suitable()
is still used to decide if reclaim should be aborted to allow
compaction to go ahead. However, it will also ensure that there is a
reasonable buffer of free pages available. This improves THP
allocation success rates while bounding the number of pages that are
freed for compaction.
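
As a rough worked example (assuming 4K base pages and illustrative
watermark values, not taken from a real zone), the free-page buffer
demanded before reclaim may abort for compaction is:

#include <stdio.h>

int main(void)
{
	unsigned long high_wmark = 12288;  /* example high watermark, pages */
	unsigned long balance_gap = 1024;  /* example gap, capped by low wmark */
	unsigned int order = 9;            /* a THP on x86-64 */
	unsigned long watermark;

	/* high watermark + balance gap + 2^(order+1) pages */
	watermark = high_wmark + balance_gap + (2UL << order);

	/* 2UL << 9 == 1024 pages == 4MB of extra free memory */
	printf("abort reclaim only above %lu free pages\n", watermark);
	return 0;
}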

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |   44 +++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d2b701a..6c7085d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2122,6 +2122,42 @@ restart:
 	throttle_vm_writeout(sc->gfp_mask);
 }
 
+/* Returns true if compaction should go ahead for a high-order request */
+static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
+{
+	unsigned long balance_gap, watermark;
+	bool watermark_ok;
+
+	/* Do not consider compaction for orders reclaim is meant to satisfy */
+	if (sc->order <= PAGE_ALLOC_COSTLY_ORDER)
+		return false;
+
+	/*
+	 * Compaction takes time to run and there are potentially other
+	 * callers using the pages just freed. Continue reclaiming until
+	 * there is a buffer of free pages available to give compaction
+	 * a reasonable chance of completing and allocating the page
+	 */
+	balance_gap = min(low_wmark_pages(zone),
+		(zone->present_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
+			KSWAPD_ZONE_BALANCE_GAP_RATIO);
+	watermark = high_wmark_pages(zone) + balance_gap + (2UL << sc->order);
+	watermark_ok = zone_watermark_ok_safe(zone, 0, watermark, 0, 0);
+
+	/*
+	 * If compaction is deferred, reclaim up to a point where
+	 * compaction will have a chance of success when re-enabled
+	 */
+	if (compaction_deferred(zone))
+		return watermark_ok;
+
+	/* If compaction is not ready to start, keep reclaiming */
+	if (!compaction_suitable(zone, sc->order))
+		return false;
+
+	return watermark_ok;
+}
+
 /*
  * This is the direct reclaim path, for page-allocating processes.  We only
  * try to reclaim pages from zones which will satisfy the caller's allocation
@@ -2139,8 +2175,8 @@ restart:
  * scan then give up on it.
  *
  * This function returns true if a zone is being reclaimed for a costly
- * high-order allocation and compaction is either ready to begin or deferred.
- * This indicates to the caller that it should retry the allocation or fail.
+ * high-order allocation and compaction is ready to begin. This indicates to
+ * the caller that it should retry the allocation or fail.
  */
 static bool shrink_zones(int priority, struct zonelist *zonelist,
 					struct scan_control *sc)
@@ -2174,9 +2210,7 @@ static bool shrink_zones(int priority, struct zonelist *zonelist,
 				 * noticeable problem, like transparent huge page
 				 * allocations.
 				 */
-				if (sc->order > PAGE_ALLOC_COSTLY_ORDER &&
-					(compaction_suitable(zone, sc->order) ||
-					 compaction_deferred(zone))) {
+				if (compaction_ready(zone, sc)) {
 					should_abort_reclaim = true;
 					continue;
 				}
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 10/11] mm: vmscan: Check if reclaim should really abort even if compaction_ready() is true for one zone
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

If compaction can proceed for a given zone, shrink_zones() does not
reclaim any more pages from it. After commit [e0c2327: vmscan: abort
reclaim/compaction if compaction can proceed], do_try_to_free_pages()
tries to finish as soon as possible once one zone can compact.

This was intended to prevent slabs being shrunk unnecessarily but
there are side-effects. One is that a small zone that is ready for
compaction will abort reclaim even if the chances of successfully
allocating a THP from that zone are small. It also means that reclaim
can return too early even though sc->nr_to_reclaim pages were not
reclaimed.

This partially reverts the commit until it is proven that slabs really
are being shrunk unnecessarily, but preserves the check that returns
1 to avoid an OOM if reclaim was aborted prematurely.
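
Schematically (all names here are illustrative stubs, not the
kernel's), the priority loop changes from breaking out early to
merely recording the fact for the final OOM decision:

#include <stdbool.h>

bool shrink_zones_stub(int priority);	/* true if a zone looked compactable */
bool reclaimed_enough_stub(void);

int reclaim_sketch(void)
{
	bool aborted_reclaim = false;
	int priority;

	for (priority = 12; priority >= 0; priority--) {
		aborted_reclaim = shrink_zones_stub(priority);
		/* previously: if (aborted_reclaim) break; -- now removed */
		if (reclaimed_enough_stub())
			break;
	}

	/* 1 tells the caller to retry the allocation rather than OOM */
	return aborted_reclaim ? 1 : 0;
}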

[aarcange@redhat.com: This patch replaces a revert from Andrea]
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |   19 +++++++++----------
 1 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6c7085d..b0eeec7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2176,7 +2176,8 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
  *
  * This function returns true if a zone is being reclaimed for a costly
  * high-order allocation and compaction is ready to begin. This indicates to
- * the caller that it should retry the allocation or fail.
+ * the caller that it should consider retrying the allocation instead of
+ * further reclaim.
  */
 static bool shrink_zones(int priority, struct zonelist *zonelist,
 					struct scan_control *sc)
@@ -2185,7 +2186,7 @@ static bool shrink_zones(int priority, struct zonelist *zonelist,
 	struct zone *zone;
 	unsigned long nr_soft_reclaimed;
 	unsigned long nr_soft_scanned;
-	bool should_abort_reclaim = false;
+	bool aborted_reclaim = false;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 					gfp_zone(sc->gfp_mask), sc->nodemask) {
@@ -2211,7 +2212,7 @@ static bool shrink_zones(int priority, struct zonelist *zonelist,
 				 * allocations.
 				 */
 				if (compaction_ready(zone, sc)) {
-					should_abort_reclaim = true;
+					aborted_reclaim = true;
 					continue;
 				}
 			}
@@ -2233,7 +2234,7 @@ static bool shrink_zones(int priority, struct zonelist *zonelist,
 		shrink_zone(priority, zone, sc);
 	}
 
-	return should_abort_reclaim;
+	return aborted_reclaim;
 }
 
 static bool zone_reclaimable(struct zone *zone)
@@ -2287,7 +2288,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	struct zoneref *z;
 	struct zone *zone;
 	unsigned long writeback_threshold;
-	bool should_abort_reclaim;
+	bool aborted_reclaim;
 
 	get_mems_allowed();
 	delayacct_freepages_start();
@@ -2299,9 +2300,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		sc->nr_scanned = 0;
 		if (!priority)
 			disable_swap_token(sc->mem_cgroup);
-		should_abort_reclaim = shrink_zones(priority, zonelist, sc);
-		if (should_abort_reclaim)
-			break;
+		aborted_reclaim = shrink_zones(priority, zonelist, sc);
 
 		/*
 		 * Don't shrink slabs when reclaiming memory from
@@ -2368,8 +2367,8 @@ out:
 	if (oom_killer_disabled)
 		return 0;
 
-	/* Aborting reclaim to try compaction? don't OOM, then */
-	if (should_abort_reclaim)
+	/* Aborted reclaim to try compaction? don't OOM, then */
+	if (aborted_reclaim)
 		return 1;
 
 	/* top priority shrink_zones still had more to do? don't OOM, then */
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 11/11] mm: Isolate pages for immediate reclaim on their own LRU
  2011-12-01 17:36 ` Mel Gorman
@ 2011-12-01 17:36   ` Mel Gorman
  -1 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-01 17:36 UTC (permalink / raw)
  To: Linux-MM
  Cc: Andrea Arcangeli, Minchan Kim, Jan Kara, Andy Isaacson,
	Johannes Weiner, Mel Gorman, Rik van Riel, Nai Xia, LKML

It was observed that scan rates from direct reclaim during tests
writing to both fast and slow storage were extraordinarily high. The
problem was that, although pages were marked for immediate reclaim
once writeback completed, the same pages were encountered over and
over again by the LRU scanner while still under writeback.

This patch puts file-backed pages that will be reclaimed as soon as
their writeback completes on their own LRU list, where the scanner no
longer encounters them repeatedly.
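
Roughly, the intended lifecycle is as follows (a simplified sketch, not
part of the patch; page_still_reclaimable(), move_to_inactive_tail()
and putback_to_normal_lru() are hypothetical stand-ins for the hunks
below):

	/* reclaim finds a dirty page already under writeback, tags it
	 * and parks it where the LRU scanner will not see it again */
	SetPageReclaim(page);
	list_move_tail(&page->lru, &zone->lru[LRU_IMMEDIATE].list);

	/* ... LRU scanning continues without re-encountering the page ... */

	/* writeback completion calls rotate_reclaimable_page() */
	if (page_still_reclaimable(page))
		move_to_inactive_tail(page);	/* reclaimed shortly after */
	else
		putback_to_normal_lru(page);	/* rescued to a normal LRU */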

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/mmzone.h        |    2 +
 include/linux/vm_event_item.h |    1 +
 mm/page_alloc.c               |    5 ++-
 mm/swap.c                     |   74 ++++++++++++++++++++++++++++++++++++++---
 mm/vmscan.c                   |   11 ++++++
 mm/vmstat.c                   |    2 +
 6 files changed, 89 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ac5b522..80834eb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -84,6 +84,7 @@ enum zone_stat_item {
 	NR_ACTIVE_ANON,		/*  "     "     "   "       "         */
 	NR_INACTIVE_FILE,	/*  "     "     "   "       "         */
 	NR_ACTIVE_FILE,		/*  "     "     "   "       "         */
+	NR_IMMEDIATE,		/*  "     "     "   "       "         */
 	NR_UNEVICTABLE,		/*  "     "     "   "       "         */
 	NR_MLOCK,		/* mlock()ed pages found and moved off LRU */
 	NR_ANON_PAGES,	/* Mapped anonymous pages */
@@ -136,6 +137,7 @@ enum lru_list {
 	LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
 	LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
 	LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
+	LRU_IMMEDIATE,
 	LRU_UNEVICTABLE,
 	NR_LRU_LISTS
 };
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 03b90cdc..9696fda 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -36,6 +36,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY,
 		KSWAPD_SKIP_CONGESTION_WAIT,
 		PAGEOUTRUN, ALLOCSTALL, PGROTATED,
+		PGRESCUED,
 #ifdef CONFIG_COMPACTION
 		COMPACTBLOCKS, COMPACTPAGES, COMPACTPAGEFAILED,
 		COMPACTSTALL, COMPACTFAIL, COMPACTSUCCESS,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d979376..9e3cd8d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2590,7 +2590,7 @@ void show_free_areas(unsigned int filter)
 
 	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
 		" active_file:%lu inactive_file:%lu isolated_file:%lu\n"
-		" unevictable:%lu"
+		" immediate:%lu unevictable:%lu"
 		" dirty:%lu writeback:%lu unstable:%lu\n"
 		" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
 		" mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n",
@@ -2600,6 +2600,7 @@ void show_free_areas(unsigned int filter)
 		global_page_state(NR_ACTIVE_FILE),
 		global_page_state(NR_INACTIVE_FILE),
 		global_page_state(NR_ISOLATED_FILE),
+		global_page_state(NR_IMMEDIATE),
 		global_page_state(NR_UNEVICTABLE),
 		global_page_state(NR_FILE_DIRTY),
 		global_page_state(NR_WRITEBACK),
@@ -2627,6 +2628,7 @@ void show_free_areas(unsigned int filter)
 			" inactive_anon:%lukB"
 			" active_file:%lukB"
 			" inactive_file:%lukB"
+			" immediate:%lukB"
 			" unevictable:%lukB"
 			" isolated(anon):%lukB"
 			" isolated(file):%lukB"
@@ -2655,6 +2657,7 @@ void show_free_areas(unsigned int filter)
 			K(zone_page_state(zone, NR_INACTIVE_ANON)),
 			K(zone_page_state(zone, NR_ACTIVE_FILE)),
 			K(zone_page_state(zone, NR_INACTIVE_FILE)),
+			K(zone_page_state(zone, NR_IMMEDIATE)),
 			K(zone_page_state(zone, NR_UNEVICTABLE)),
 			K(zone_page_state(zone, NR_ISOLATED_ANON)),
 			K(zone_page_state(zone, NR_ISOLATED_FILE)),
diff --git a/mm/swap.c b/mm/swap.c
index a91caf7..9973975 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -39,6 +39,7 @@ int page_cluster;
 
 static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
 static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
+static DEFINE_PER_CPU(struct pagevec, lru_putback_immediate_pvecs);
 static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
 
 /*
@@ -255,24 +256,80 @@ static void pagevec_move_tail(struct pagevec *pvec)
 }
 
 /*
+ * A pair of functions similar to pagevec_move_tail, except they are
+ * called when moving a page from the LRU_IMMEDIATE list to one of the
+ * [in]active_[file|anon] lists
+ */
+static void pagevec_putback_immediate_fn(struct page *page, void *arg)
+{
+	struct zone *zone = page_zone(page);
+
+	if (PageLRU(page)) {
+		enum lru_list lru = page_lru(page);
+		list_move(&page->lru, &zone->lru[lru].list);
+	}
+}
+
+static void pagevec_putback_immediate(struct pagevec *pvec)
+{
+	pagevec_lru_move_fn(pvec, pagevec_putback_immediate_fn, NULL);
+}
+
+/*
  * Writeback is about to end against a page which has been marked for immediate
  * reclaim.  If it still appears to be reclaimable, move it to the tail of the
  * inactive list.
  */
 void rotate_reclaimable_page(struct page *page)
 {
+	struct zone *zone = page_zone(page);
+	struct list_head *page_list;
+	struct pagevec *pvec;
+	unsigned long flags;
+
+	page_cache_get(page);
+	local_irq_save(flags);
+	__mod_zone_page_state(zone, NR_IMMEDIATE, -1);
+
 	if (!PageLocked(page) && !PageDirty(page) && !PageActive(page) &&
 	    !PageUnevictable(page) && PageLRU(page)) {
-		struct pagevec *pvec;
-		unsigned long flags;
 
-		page_cache_get(page);
-		local_irq_save(flags);
 		pvec = &__get_cpu_var(lru_rotate_pvecs);
 		if (!pagevec_add(pvec, page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+	} else {
+		pvec = &__get_cpu_var(lru_putback_immediate_pvecs);
+		if (!pagevec_add(pvec, page))
+			pagevec_putback_immediate(pvec);
+	}
+
+	/*
+	 * There is a potential race: if a page is set PageReclaim and
+	 * moved to the LRU_IMMEDIATE list after writeback has already
+	 * completed, it can be left on the LRU_IMMEDIATE list with no
+	 * way for reclaim to find it.
+	 *
+	 * This race should be very rare, but count how often it happens.
+	 * If it turns out to be a continual race, that is unsatisfactory:
+	 * there is no guarantee that rotate_reclaimable_page() will be
+	 * called to rescue these pages, and finding them in page reclaim
+	 * is also problematic because it is hard to decide when the
+	 * right time to scan this list is.
+	 */
+	page_list = &zone->lru[LRU_IMMEDIATE].list;
+	if (!zone_page_state(zone, NR_IMMEDIATE) && !list_empty(page_list)) {
+		struct page *page;
+
+		spin_lock(&zone->lru_lock);
+		while (!list_empty(page_list)) {
+			page = list_entry(page_list->prev, struct page, lru);
+			list_move(&page->lru, &zone->lru[page_lru(page)].list);
+			__count_vm_event(PGRESCUED);
+		}
+		spin_unlock(&zone->lru_lock);
 	}
+
+	local_irq_restore(flags);
 }
 
 static void update_page_reclaim_stat(struct zone *zone, struct page *page,
@@ -475,6 +532,13 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 		 * is _really_ small and  it's non-critical problem.
 		 */
 		SetPageReclaim(page);
+
+		/*
+		 * Move to the LRU_IMMEDIATE list to avoid being uselessly
+		 * scanned by page reclaim.
+		 */
+		list_move_tail(&page->lru, &zone->lru[LRU_IMMEDIATE].list);
+		__mod_zone_page_state(zone, NR_IMMEDIATE, 1);
 	} else {
 		/*
 		 * The page's writeback ends up during pagevec
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b0eeec7..9879ae5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1404,6 +1404,17 @@ putback_lru_pages(struct zone *zone, struct scan_control *sc,
 		}
 		SetPageLRU(page);
 		lru = page_lru(page);
+
+		/*
+		 * If reclaim has tagged a file page for immediate reclaim,
+		 * move it to a separate LRU list to avoid it being scanned
+		 * by other users. It is expected that, as writeback
+		 * completes, such pages are taken back off and moved to
+		 * the normal LRU lists.
+		 */
+		if (lru == LRU_INACTIVE_FILE &&
+				PageReclaim(page) && PageWriteback(page))
+			lru = LRU_IMMEDIATE;
+
 		add_page_to_lru_list(zone, page, lru);
 		if (is_active_lru(lru)) {
 			int file = is_file_lru(lru);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8fd603b..dbfec4c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -688,6 +688,7 @@ const char * const vmstat_text[] = {
 	"nr_active_anon",
 	"nr_inactive_file",
 	"nr_active_file",
+	"nr_immediate",
 	"nr_unevictable",
 	"nr_mlock",
 	"nr_anon_pages",
@@ -756,6 +757,7 @@ const char * const vmstat_text[] = {
 	"allocstall",
 
 	"pgrotated",
+	"pgrescued",
 
 #ifdef CONFIG_COMPACTION
 	"compact_blocks_moved",
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 03/11] mm: vmscan: Check if we isolated a compound page during lumpy scan
  2011-12-14 15:41   ` Mel Gorman
@ 2011-12-15 23:21     ` Rik van Riel
  -1 siblings, 0 replies; 28+ messages in thread
From: Rik van Riel @ 2011-12-15 23:21 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Dave Jones,
	Jan Kara, Andy Isaacson, Johannes Weiner, Nai Xia, Linux-MM,
	LKML

On 12/14/2011 10:41 AM, Mel Gorman wrote:
> From: Andrea Arcangeli<aarcange@redhat.com>
>
> Properly take into account if we isolated a compound page during the
> lumpy scan in reclaim and skip over the tail pages when encountered.
> This corrects the values given to the tracepoint for number of lumpy
> pages isolated and will avoid breaking the loop early if compound
> pages smaller than the requested allocation size are requested.
>
> [mgorman@suse.de: Updated changelog]
> Signed-off-by: Andrea Arcangeli<aarcange@redhat.com>
> Signed-off-by: Mel Gorman<mgorman@suse.de>
> Reviewed-by: Minchan Kim<minchan.kim@gmail.com>

Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All rights reversed

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 03/11] mm: vmscan: Check if we isolated a compound page during lumpy scan
  2011-12-14 15:41 [PATCH 0/11] Reduce compaction-related stalls and improve asynchronous migration of dirty pages v6 Mel Gorman
@ 2011-12-14 15:41   ` Mel Gorman
  0 siblings, 0 replies; 28+ messages in thread
From: Mel Gorman @ 2011-12-14 15:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Dave Jones, Jan Kara,
	Andy Isaacson, Johannes Weiner, Mel Gorman, Rik van Riel,
	Nai Xia, Linux-MM, LKML

From: Andrea Arcangeli <aarcange@redhat.com>

Properly take into account if we isolated a compound page during the
lumpy scan in reclaim and skip over the tail pages when encountered.
This corrects the values given to the tracepoint for the number of
lumpy pages isolated and avoids breaking the loop early if compound
pages smaller than the requested allocation size are encountered.
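
As a worked example of the accounting (illustrative only, following the
hunk below): on x86-64 with 4K base pages, hpage_nr_pages() returns 512
for a 2M THP and 1 for a base page, so isolating one compound page
accounts for all of its subpages and advances the pfn cursor past the
tail pages:

	unsigned int isolated_pages = hpage_nr_pages(page); /* 512 for a THP */

	nr_taken       += isolated_pages;
	nr_lumpy_taken += isolated_pages;
	if (PageDirty(cursor_page))
		nr_lumpy_dirty += isolated_pages;
	pfn += isolated_pages - 1;	/* skip the 511 tail pages */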

[mgorman@suse.de: Updated changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/vmscan.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f54a05b..faf88b8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1183,13 +1183,16 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 				break;
 
 			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+				unsigned int isolated_pages;
 				list_move(&cursor_page->lru, dst);
 				mem_cgroup_del_lru(cursor_page);
-				nr_taken += hpage_nr_pages(page);
-				nr_lumpy_taken++;
+				isolated_pages = hpage_nr_pages(page);
+				nr_taken += isolated_pages;
+				nr_lumpy_taken += isolated_pages;
 				if (PageDirty(cursor_page))
-					nr_lumpy_dirty++;
+					nr_lumpy_dirty += isolated_pages;
 				scan++;
+				pfn += isolated_pages-1;
 			} else {
 				/*
 				 * Check if the page is freed already.
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2011-12-15 23:22 UTC | newest]

Thread overview: 28+ messages
2011-12-01 17:36 [PATCH 0/11] Reduce compaction-related stalls and improve asynchronous migration of dirty pages v5 Mel Gorman
2011-12-01 17:36 ` [PATCH 01/11] mm: compaction: Allow compaction to isolate dirty pages Mel Gorman
2011-12-01 17:36 ` [PATCH 02/11] mm: compaction: Use synchronous compaction for /proc/sys/vm/compact_memory Mel Gorman
2011-12-01 17:36 ` [PATCH 03/11] mm: vmscan: Check if we isolated a compound page during lumpy scan Mel Gorman
2011-12-01 17:36 ` [PATCH 04/11] mm: vmscan: Do not OOM if aborting reclaim to start compaction Mel Gorman
2011-12-01 17:36 ` [PATCH 05/11] mm: compaction: Determine if dirty pages can be migrated without blocking within ->migratepage Mel Gorman
2011-12-01 17:36 ` [PATCH 06/11] mm: compaction: make isolate_lru_page() filter-aware again Mel Gorman
2011-12-01 17:36 ` [PATCH 07/11] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred Mel Gorman
2011-12-01 17:36 ` [PATCH 08/11] mm: compaction: Introduce sync-light migration for use by compaction Mel Gorman
2011-12-01 17:36 ` [PATCH 09/11] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available Mel Gorman
2011-12-01 17:36 ` [PATCH 10/11] mm: vmscan: Check if reclaim should really abort even if compaction_ready() is true for one zone Mel Gorman
2011-12-01 17:36 ` [PATCH 11/11] mm: Isolate pages for immediate reclaim on their own LRU Mel Gorman
2011-12-14 15:41 [PATCH 0/11] Reduce compaction-related stalls and improve asynchronous migration of dirty pages v6 Mel Gorman
2011-12-14 15:41 ` [PATCH 03/11] mm: vmscan: Check if we isolated a compound page during lumpy scan Mel Gorman
2011-12-15 23:21   ` Rik van Riel
