From: akpm@linux-foundation.org
Subject: [merged] mm-memcontrol-move-out-cgroup-swaprate-throttling.patch removed from -mm tree
Date: Thu, 04 Jun 2020 10:19:58 -0700
Message-ID: <20200604171958.FXIr9Zbp6%akpm@linux-foundation.org>
Reply-To: linux-kernel@vger.kernel.org
List-Id: mm-commits@vger.kernel.org
To: alex.shi@linux.alibaba.com, bsingharora@gmail.com, guro@fb.com,
    hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com,
    kirill@shutemov.name, mhocko@suse.com, mm-commits@vger.kernel.org,
    shakeelb@google.com

The patch titled
     Subject: mm: memcontrol: move out cgroup swaprate throttling
has been removed from the -mm tree.  Its filename was
     mm-memcontrol-move-out-cgroup-swaprate-throttling.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: move out cgroup swaprate throttling

The cgroup swaprate throttling is about matching new anon allocations to
the rate of available IO when that is being throttled.  It's the io
controller hooking into the VM, rather than a memory controller thing.

Rename mem_cgroup_throttle_swaprate() to cgroup_throttle_swaprate(), and
drop the @memcg argument, which is only used to check whether the
preceding page charge has succeeded and the fault is proceeding.

We could decouple the call from mem_cgroup_try_charge() here as well, but
that would cause unnecessary churn: the following patches convert all
callsites to a new charge API and we'll decouple as we go along.
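
As an illustration of the conversion at a callsite (a sketch distilled
from the mem_cgroup_try_charge_delay() hunk in the diff below, not new
code):

	/* before: @memcg doubled as the "charge succeeded" signal */
	mem_cgroup_throttle_swaprate(memcg, page_to_nid(page), gfp_mask);

	/*
	 * after: the node is derived from the page inside the function,
	 * and the caller tests the charge result explicitly
	 */
	if (*memcgp)
		cgroup_throttle_swaprate(page, gfp_mask);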
Shutemov" Cc: Michal Hocko Cc: Roman Gushchin Cc: Balbir Singh Signed-off-by: Andrew Morton --- include/linux/swap.h | 6 ++---- mm/memcontrol.c | 5 ++--- mm/swapfile.c | 14 +++++++------- 3 files changed, 11 insertions(+), 14 deletions(-) --- a/include/linux/swap.h~mm-memcontrol-move-out-cgroup-swaprate-throttling +++ a/include/linux/swap.h @@ -651,11 +651,9 @@ static inline int mem_cgroup_swappiness( #endif #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP) -extern void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node, - gfp_t gfp_mask); +extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask); #else -static inline void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, - int node, gfp_t gfp_mask) +static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask) { } #endif --- a/mm/memcontrol.c~mm-memcontrol-move-out-cgroup-swaprate-throttling +++ a/mm/memcontrol.c @@ -6553,12 +6553,11 @@ out: int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, struct mem_cgroup **memcgp) { - struct mem_cgroup *memcg; int ret; ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp); - memcg = *memcgp; - mem_cgroup_throttle_swaprate(memcg, page_to_nid(page), gfp_mask); + if (*memcgp) + cgroup_throttle_swaprate(page, gfp_mask); return ret; } --- a/mm/swapfile.c~mm-memcontrol-move-out-cgroup-swaprate-throttling +++ a/mm/swapfile.c @@ -3798,11 +3798,12 @@ static void free_swap_count_continuation } #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP) -void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node, - gfp_t gfp_mask) +void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask) { struct swap_info_struct *si, *next; - if (!(gfp_mask & __GFP_IO) || !memcg) + int nid = page_to_nid(page); + + if (!(gfp_mask & __GFP_IO)) return; if (!blk_cgroup_congested()) @@ -3816,11 +3817,10 @@ void mem_cgroup_throttle_swaprate(struct return; spin_lock(&swap_avail_lock); - plist_for_each_entry_safe(si, next, &swap_avail_heads[node], - avail_lists[node]) { + plist_for_each_entry_safe(si, next, &swap_avail_heads[nid], + avail_lists[nid]) { if (si->bdev) { - blkcg_schedule_throttle(bdev_get_queue(si->bdev), - true); + blkcg_schedule_throttle(bdev_get_queue(si->bdev), true); break; } } _ Patches currently in -mm which might be from hannes@cmpxchg.org are