Date: Wed, 03 Jun 2020 16:01:24 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com, bsingharora@gmail.com, guro@fb.com, hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com, kirill@shutemov.name, linux-mm@kvack.org, mhocko@suse.com, mm-commits@vger.kernel.org, shakeelb@google.com, torvalds@linux-foundation.org
Subject: [patch 083/131] mm: fix NUMA node file count error in replace_page_cache()
Message-ID: <20200603230124.OWzg88c1Y%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
From: Johannes Weiner
Subject: mm: fix NUMA node file count error in replace_page_cache()

Patch series "mm: memcontrol: charge swapin pages on instantiation", v2.

This patch series reworks memcg to charge swapin pages directly at
swapin time, rather than at fault time, which may be much later, or
not happen at all.

Changes in version 2:

- prevent double charges on pre-allocated hugepages in khugepaged
- leave shmem swapcache when charging fails to avoid double IO (Joonsoo)
- fix temporary accounting bug by switching rmap<->commit (Joonsoo)
- fix double swap charge bug in cgroup1/cgroup2 code gating
- simplify swapin error checking (Joonsoo)
- mm: memcontrol: document the new swap control behavior (Alex)
- review tags

The delayed swapin charging scheme we have right now causes problems:

- Alex's per-cgroup lru_lock patches rely on pages that have been
  isolated from the LRU to have a stable page->mem_cgroup; otherwise
  the lock may change underneath him.  Swapcache pages are charged only
  after they are added to the LRU, and charging doesn't follow the LRU
  isolation protocol.

- Joonsoo's anon workingset patches need a suitable LRU at the time the
  page enters the swap cache and displaces the non-resident info.  But
  the correct LRU is only available after charging.

- It's a containment hole / DoS vector.  Users can trigger arbitrarily
  large swap readahead using MADV_WILLNEED.  The memory is never
  charged unless somebody actually touches it.

- It complicates the page->mem_cgroup stabilization rules.

In order to charge pages directly at swapin time, the memcg code base
needs to be prepared, and several overdue cleanups become a necessity:

To charge pages at swapin time, we need to always have cgroup ownership
tracking of swap records.  We also cannot rely on page->mapping to tell
apart page types at charge time, because that's only set up during a
page fault.

To eliminate the page->mapping dependency, memcg needs to ditch its
private page type counters (MEMCG_CACHE, MEMCG_RSS, NR_SHMEM) in favor
of the generic vmstat counters and accounting sites, such as
NR_FILE_PAGES, NR_ANON_MAPPED etc.

To switch to generic vmstat counters, the charge sequence must be
adjusted such that page->mem_cgroup is set up by the time these
counters are modified.

The series is structured as follows:

1. Bug fixes
2. Decoupling charging from rmap
3. Swap controller integration into memcg
4. Direct swapin charging

This patch (of 19):

When replacing one page with another one in the cache, we have to
decrease the file count of the old page's NUMA node and increase the
one of the new NUMA node, otherwise the old node leaks the count and
the new node eventually underflows its counter.

Link: http://lkml.kernel.org/r/20200508183105.225460-1-hannes@cmpxchg.org
Link: http://lkml.kernel.org/r/20200508183105.225460-2-hannes@cmpxchg.org
Fixes: 74d609585d8b ("page cache: Add and replace pages using the XArray")
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Shakeel Butt
Reviewed-by: Joonsoo Kim
Reviewed-by: Balbir Singh
Cc: Hugh Dickins
Cc: Michal Hocko
Cc: "Kirill A. Shutemov"
Shutemov" Cc: Roman Gushchin Signed-off-by: Andrew Morton --- mm/filemap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/mm/filemap.c~mm-fix-numa-node-file-count-error-in-replace_page_cache +++ a/mm/filemap.c @@ -808,11 +808,11 @@ int replace_page_cache_page(struct page old->mapping = NULL; /* hugetlb pages do not participate in page cache accounting. */ if (!PageHuge(old)) - __dec_node_page_state(new, NR_FILE_PAGES); + __dec_node_page_state(old, NR_FILE_PAGES); if (!PageHuge(new)) __inc_node_page_state(new, NR_FILE_PAGES); if (PageSwapBacked(old)) - __dec_node_page_state(new, NR_SHMEM); + __dec_node_page_state(old, NR_SHMEM); if (PageSwapBacked(new)) __inc_node_page_state(new, NR_SHMEM); xas_unlock_irqrestore(&xas, flags); _