[7/7] mm: vmscan: move dirty pages out of the way until they're flushed fix

Message ID 20170202191957.22872-8-hannes@cmpxchg.org
Series
  • mm: vmscan: fix kswapd writeback regression v2

Commit Message

Johannes Weiner Feb. 2, 2017, 7:19 p.m. UTC
Mention the trade-off between waiting for writeback and potentially
causing hot cache refaults in the code where we make this decision
and activate writeback pages.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/vmscan.c | 9 +++++++++
 1 file changed, 9 insertions(+)

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 70103f411247..ae3d982216b5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1056,6 +1056,15 @@  static unsigned long shrink_page_list(struct list_head *page_list,
 		 *    throttling so we could easily OOM just because too many
 		 *    pages are in writeback and there is nothing else to
 		 *    reclaim. Wait for the writeback to complete.
+		 *
+		 * In cases 1) and 2) we activate the pages to get them out of
+		 * the way while we continue scanning for clean pages on the
+		 * inactive list and refilling from the active list. The
+		 * observation here is that waiting for disk writes is more
+		 * expensive than potentially causing reloads down the line.
+		 * Since they're marked for immediate reclaim, they won't put
+		 * memory pressure on the cache working set any longer than it
+		 * takes to write them to disk.
 		 */
 		if (PageWriteback(page)) {
 			/* Case 1 above */
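For context, the writeback handling this comment documents boils down to a
three-way decision in shrink_page_list(). The sketch below is a stand-alone
user-space model of that logic, not kernel code: the helper name
classify_writeback_page() and its boolean parameters are hypothetical
stand-ins for current_is_kswapd(), the PGDAT_WRITEBACK node flag,
sane_reclaim() and may_enter_fs, and the exact conditions vary by kernel
version.

#include <stdbool.h>
#include <stdio.h>

/*
 * User-space model of the writeback decision the new comment documents.
 * Flag names mirror mm/vmscan.c, but this is an illustrative sketch.
 */
struct page_state {
	bool writeback;	/* PageWriteback(): I/O still in flight */
	bool reclaim;	/* PageReclaim(): tagged for immediate reclaim */
};

enum wb_action {
	ACTIVATE_AND_MOVE_ON,	/* cases 1 and 2: don't stall the scan */
	WAIT_FOR_WRITEBACK,	/* case 3: block to avoid premature OOM */
};

static enum wb_action classify_writeback_page(const struct page_state *page,
					      bool is_kswapd,
					      bool node_flagged_writeback,
					      bool sane_reclaim,
					      bool may_enter_fs)
{
	/* Case 1: kswapd already saw this page once (PageReclaim) and the
	 * node is flagged as congested with writeback; activating it gets
	 * it out of the way while the inactive-list scan continues. */
	if (is_kswapd && page->reclaim && node_flagged_writeback)
		return ACTIVATE_AND_MOVE_ON;

	/* Case 2: first encounter, or a context that cannot issue or wait
	 * on filesystem I/O; tag for immediate reclaim and activate,
	 * trading a possible refault later for not waiting on the disk
	 * now. */
	if (sane_reclaim || !page->reclaim || !may_enter_fs)
		return ACTIVATE_AND_MOVE_ON;

	/* Case 3: legacy memcg reclaim with no flusher guarantees; waiting
	 * here is the only throttling left before a spurious OOM. */
	return WAIT_FOR_WRITEBACK;
}

int main(void)
{
	struct page_state page = { .writeback = true, .reclaim = true };

	if (page.writeback) {
		enum wb_action act = classify_writeback_page(&page,
				true /* kswapd */, true, true, true);
		printf("%s\n", act == ACTIVATE_AND_MOVE_ON ?
		       "activate and keep scanning" : "wait for writeback");
	}
	return 0;
}

The design point the added comment makes is visible in cases 1 and 2:
reclaim prefers a possible refault down the line over stalling the scan on
disk I/O now, and only case 3 blocks, because that reclaim context has no
other throttling mechanism.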