From: Andrew Morton
Subject: + mm-remove-use-once-cache-bias-from-lru-balancing.patch added to -mm tree
Date: Wed, 20 May 2020 20:31:58 -0700
Message-ID: <20200521033158.kzeKQG1VS%akpm@linux-foundation.org>
References: <20200513175005.1f4839360c18c0238df292d1@linux-foundation.org>
In-Reply-To: <20200513175005.1f4839360c18c0238df292d1@linux-foundation.org>
Reply-To: linux-kernel@vger.kernel.org
Sender: mm-commits-owner@vger.kernel.org
List-Id: mm-commits@vger.kernel.org
To: hannes@cmpxchg.org, iamjoonsoo.kim@lge.com, mhocko@suse.com, minchan@kernel.org, mm-commits@vger.kernel.org, riel@redhat.com

The patch titled
     Subject: mm: remove use-once cache bias from LRU balancing
has been added to the -mm tree.  Its filename is
     mm-remove-use-once-cache-bias-from-lru-balancing.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-remove-use-once-cache-bias-from-lru-balancing.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-remove-use-once-cache-bias-from-lru-balancing.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Johannes Weiner
Subject: mm: remove use-once cache bias from LRU balancing

When the splitlru patches divided page cache and swap-backed pages into
separate LRU lists, the pressure balance between the lists was biased to
account for the fact that streaming IO can cause memory pressure with a
flood of pages that are used only once.  New page cache additions would
tip the balance toward the file LRU, and repeat access would neutralize
that bias again.  This ensured that page reclaim would always go for
used-once cache first.

Since e9868505987a ("mm,vmscan: only evict file pages when we have
plenty"), page reclaim generally skips over swap-backed memory entirely
as long as there is used-once cache present, and will apply the LRU
balancing when only repeatedly accessed cache pages are left - at which
point the previous use-once bias will have been neutralized.  This makes
the use-once cache balancing bias unnecessary.
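
To illustrate the mechanism being removed, here is a toy userspace model
(hypothetical demo code, not taken from the kernel: update_stat() mirrors
the removed update_page_reclaim_stat() logic, and the closing ratio
follows the old get_scan_count() arithmetic of this era):

/* Toy model of the use-once bias: new cache insertions count as
 * "scanned" on the file side, re-accesses (activations) count as
 * "rotated"; reclaim pressure then favors whichever list has the
 * higher scanned/rotated ratio. */
#include <stdio.h>

struct reclaim_stat {
	unsigned long recent_scanned[2];	/* [0] = anon, [1] = file */
	unsigned long recent_rotated[2];
};

/* Mirrors the removed update_page_reclaim_stat(): bump "scanned" for
 * every event, and "rotated" too when the page was actually re-used. */
static void update_stat(struct reclaim_stat *s, int file, int rotated,
			unsigned int nr_pages)
{
	s->recent_scanned[file] += nr_pages;
	if (rotated)
		s->recent_rotated[file] += nr_pages;
}

int main(void)
{
	struct reclaim_stat s = { { 100, 100 }, { 50, 50 } };
	unsigned long anon_prio = 100, file_prio = 100, ap, fp;
	int i;

	/* Streaming IO: 1000 used-once pages enter the file LRU and are
	 * never activated, so only the "scanned" side grows. */
	for (i = 0; i < 1000; i++)
		update_stat(&s, 1, 0, 1);

	/* Same ratio arithmetic as the old get_scan_count(). */
	ap = anon_prio * (s.recent_scanned[0] + 1) / (s.recent_rotated[0] + 1);
	fp = file_prio * (s.recent_scanned[1] + 1) / (s.recent_rotated[1] + 1);

	printf("anon pressure %lu, file pressure %lu\n", ap, fp);
	return 0;
}

A burst of used-once insertions inflates the file side's scanned/rotated
ratio and steers reclaim toward the file LRU until re-accesses (rotations)
neutralize it again - the bias described above.
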
Link: http://lkml.kernel.org/r/20200520232525.798933-7-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Acked-by: Michal Hocko
Acked-by: Minchan Kim
Cc: Joonsoo Kim
Cc: Rik van Riel
Signed-off-by: Andrew Morton
---

 mm/swap.c |    5 -----
 1 file changed, 5 deletions(-)

--- a/mm/swap.c~mm-remove-use-once-cache-bias-from-lru-balancing
+++ a/mm/swap.c
@@ -277,7 +277,6 @@ static void __activate_page(struct page
 			    void *arg)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
 
 		del_page_from_lru_list(page, lruvec, lru);
@@ -287,7 +286,6 @@ static void __activate_page(struct page
 		trace_mm_lru_activate(page);
 
 		__count_vm_event(PGACTIVATE);
-		update_page_reclaim_stat(lruvec, file, 1, hpage_nr_pages(page));
 	}
 }
 
@@ -936,9 +934,6 @@ static void __pagevec_lru_add_fn(struct
 
 	if (page_evictable(page)) {
 		lru = page_lru(page);
-		update_page_reclaim_stat(lruvec, is_file_lru(lru),
-					 PageActive(page),
-					 hpage_nr_pages(page));
 		if (was_unevictable)
 			count_vm_event(UNEVICTABLE_PGRESCUED);
 	} else {
_

Patches currently in -mm which might be from hannes@cmpxchg.org are

mm-fix-numa-node-file-count-error-in-replace_page_cache.patch
mm-memcontrol-fix-stat-corrupting-race-in-charge-moving.patch
mm-memcontrol-drop-compound-parameter-from-memcg-charging-api.patch
mm-shmem-remove-rare-optimization-when-swapin-races-with-hole-punching.patch
mm-memcontrol-move-out-cgroup-swaprate-throttling.patch
mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api.patch
mm-memcontrol-prepare-uncharging-for-removal-of-private-page-type-counters.patch
mm-memcontrol-prepare-move_account-for-removal-of-private-page-type-counters.patch
mm-memcontrol-prepare-cgroup-vmstat-infrastructure-for-native-anon-counters.patch
mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters.patch
mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch
mm-memcontrol-switch-to-native-nr_anon_thps-counter.patch
mm-memcontrol-switch-to-native-nr_anon_thps-counter-fix.patch
mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api.patch
mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api-fix.patch
mm-memcontrol-drop-unused-try-commit-cancel-charge-api.patch
mm-memcontrol-prepare-swap-controller-setup-for-integration.patch
mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control.patch
mm-memcontrol-charge-swapin-pages-on-instantiation.patch
mm-memcontrol-delete-unused-lrucare-handling.patch
mm-memcontrol-update-page-mem_cgroup-stability-rules.patch
mm-fix-lru-balancing-effect-of-new-transparent-huge-pages.patch
mm-keep-separate-anon-and-file-statistics-on-page-reclaim-activity.patch
mm-allow-swappiness-that-prefers-reclaiming-anon-over-the-file-workingset.patch
mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file.patch
mm-workingset-let-cache-workingset-challenge-anon.patch
mm-remove-use-once-cache-bias-from-lru-balancing.patch
mm-vmscan-drop-unnecessary-div0-avoidance-rounding-in-get_scan_count.patch
mm-base-lru-balancing-on-an-explicit-cost-model.patch
mm-deactivations-shouldnt-bias-the-lru-balance.patch
mm-only-count-actual-rotations-as-lru-reclaim-cost.patch
mm-balance-lru-lists-based-on-relative-thrashing.patch
mm-vmscan-determine-anon-file-pressure-balance-at-the-reclaim-root.patch
mm-vmscan-reclaim-writepage-is-io-cost.patch
mm-vmscan-limit-the-range-of-lru-type-balancing.patch