Date: Wed, 03 Jun 2020 16:01:41 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com,
 bsingharora@gmail.com, guro@fb.com, hannes@cmpxchg.org, hughd@google.com,
 iamjoonsoo.kim@lge.com, kirill@shutemov.name, linux-mm@kvack.org,
 mhocko@suse.com, mm-commits@vger.kernel.org, shakeelb@google.com,
 torvalds@linux-foundation.org
Subject: [patch 088/131] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API
Message-ID: <20200603230141.-CneAXq8N%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>

From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: convert page cache to a new mem_cgroup_charge() API

The try/commit/cancel protocol that memcg uses dates back to when pages
used to be uncharged upon removal from the page cache, and thus couldn't
be committed before the insertion had succeeded.  Nowadays, pages are
uncharged when they are physically freed; it doesn't matter whether the
insertion was successful or not.  For the page cache, the transaction
dance has become unnecessary.

Introduce a mem_cgroup_charge() function that simply charges a newly
allocated page to a cgroup and sets up page->mem_cgroup in one single
step.  If the insertion fails, the caller doesn't have to do anything
but free/put the page.

Then switch the page cache over to this new API.

Subsequent patches will also convert anon pages, but that needs a bit
more prep work.  Right now, memcg depends on page->mapping being already
set up at the time of charging, so that it can maintain its own
MEMCG_CACHE and MEMCG_RSS counters.  For anon, page->mapping is set
under the same pte lock under which the page is published, so a single
charge point that can block doesn't work there just yet.

The following prep patches will replace the private memcg counters with
the generic vmstat counters, thus removing the page->mapping dependency,
then complete the transition to the new single-point charge API and
delete the old transactional scheme.

v2: leave shmem swapcache when charging fails to avoid double IO (Joonsoo)
v3: rebase on preceding shmem simplification patch
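
To make the protocol change concrete, here is a condensed caller-side
sketch (illustrative only: insert_into_cache() is a stand-in for the
real insertion step, not a function in the tree):

	/* Old scheme: a three-step transaction bracketing the insertion. */
	error = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
	if (error)
		return error;
	error = insert_into_cache(page);
	if (error) {
		mem_cgroup_cancel_charge(page, memcg);
		return error;
	}
	mem_cgroup_commit_charge(page, memcg, false);

	/* New scheme: one call, made once page->mapping is set up.
	 * A failed insertion needs no unwinding beyond put_page();
	 * the charge is released when the page is physically freed.
	 */
	error = mem_cgroup_charge(page, mm, gfp_mask, false);
	if (error)
		return error;
	error = insert_into_cache(page);
	if (error) {
		put_page(page);
		return error;
	}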
Link: http://lkml.kernel.org/r/20200508183105.225460-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |   10 ++++
 mm/filemap.c               |   24 ++++-------
 mm/memcontrol.c            |   29 ++++++++++++-
 mm/shmem.c                 |   73 ++++++++++++++---------------------
 4 files changed, 77 insertions(+), 59 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/include/linux/memcontrol.h
@@ -365,6 +365,10 @@ int mem_cgroup_try_charge_delay(struct p
 void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 			      bool lrucare);
 void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
+
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare);
+
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
 
@@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_cha
 {
 }
 
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask, bool lrucare)
+{
+	return 0;
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
--- a/mm/filemap.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/mm/filemap.c
@@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(st
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
-	struct mem_cgroup *memcg;
 	int error;
 	void *old;
 
@@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(st
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
 	mapping_set_update(&xas, mapping);
 
-	if (!huge) {
-		error = mem_cgroup_try_charge(page, current->mm,
-					      gfp_mask, &memcg);
-		if (error)
-			return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
+	if (!huge) {
+		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		if (error)
+			goto error;
+	}
+
 	do {
 		xas_lock_irq(&xas);
 		old = xas_load(&xas);
@@ -874,20 +872,18 @@ unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
 
-	if (xas_error(&xas))
+	if (xas_error(&xas)) {
+		error = xas_error(&xas);
 		goto error;
+	}
 
-	if (!huge)
-		mem_cgroup_commit_charge(page, memcg, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
 error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg);
 	put_page(page);
-	return xas_error(&xas);
+	return error;
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
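
For readers following the diff, the resulting shape of
__add_to_page_cache_locked() condenses to roughly the following (an
illustrative sketch with the xarray insertion elided; the comments are
editorial, not from the tree):

	get_page(page);
	page->mapping = mapping;	/* must be set before charging: */
	page->index = offset;		/* memcg classifies by page->mapping */

	if (!huge) {
		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
		if (error)
			goto error;
	}

	/* ... xarray insertion; on failure, error = xas_error(&xas) ... */

	trace_mm_filemap_add_to_page_cache(page);
	return 0;
error:
	page->mapping = NULL;
	/* Leave page->index set: truncation relies upon it */
	put_page(page);		/* the eventual free uncharges, if charged */
	return error;

Both failure modes, the charge and the xarray insertion, now funnel
through a single error label, which is what lets the memcg-specific
unwinding disappear.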
--- a/mm/memcontrol.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/mm/memcontrol.c
@@ -6637,6 +6637,33 @@ void mem_cgroup_cancel_charge(struct pag
 	cancel_charge(memcg, nr_pages);
 }
 
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ * @lrucare: page might be on the LRU already
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	VM_BUG_ON_PAGE(!page->mapping, page);
+
+	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
+	if (ret)
+		return ret;
+	mem_cgroup_commit_charge(page, memcg, lrucare);
+	return 0;
+}
+
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long pgpgout;
@@ -6684,8 +6711,6 @@ static void uncharge_batch(const struct
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
 	VM_BUG_ON_PAGE(PageLRU(page), page);
-	VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
-			!PageHWPoison(page) , page);
 
 	if (!page->mem_cgroup)
 		return;
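
One subtlety before the shmem hunks below: the @lrucare argument tells
the commit step whether the page might already be on an LRU list, in
which case it is moved to the right lruvec under the LRU lock rather
than assumed to be invisible to reclaim.  Fresh page-cache pages pass
false; shmem passes PageSwapCache(page), because a page brought back
from the swap cache can already be on the LRU.  A minimal sketch
(charge_for_cache() is hypothetical, not tree code):

	static int charge_for_cache(struct page *page, struct mm_struct *mm,
				    gfp_t gfp)
	{
		/*
		 * Swap-cache pages may already sit on an LRU list, so
		 * ask for the lrucare commit path; a newly allocated
		 * page is not yet visible to reclaim and can skip it.
		 */
		return mem_cgroup_charge(page, mm, gfp, PageSwapCache(page));
	}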
--- a/mm/shmem.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/mm/shmem.c
@@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struc
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected, gfp_t gfp)
+				   pgoff_t index, void *expected, gfp_t gfp,
+				   struct mm_struct *charge_mm)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
 	unsigned long nr = compound_nr(page);
+	int error;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struc
 	page->mapping = mapping;
 	page->index = index;
 
+	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
+	if (error) {
+		if (!PageSwapCache(page) && PageTransHuge(page)) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto error;
+	}
+	cgroup_throttle_swaprate(page, gfp);
+
 	do {
 		void *entry;
 		xas_lock_irq(&xas);
@@ -648,12 +660,15 @@ unlock:
 	} while (xas_nomem(&xas, gfp));
 
 	if (xas_error(&xas)) {
-		page->mapping = NULL;
-		page_ref_sub(page, nr);
-		return xas_error(&xas);
+		error = xas_error(&xas);
+		goto error;
 	}
 
 	return 0;
+error:
+	page->mapping = NULL;
+	page_ref_sub(page, nr);
+	return error;
 }
 
 /*
@@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inod
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	swp_entry_t swap;
 	int error;
@@ -1664,18 +1678,11 @@ static int shmem_swapin_page(struct inod
 		goto failed;
 	}
 
-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (error)
-		goto failed;
-
 	error = shmem_add_to_page_cache(page, mapping, index,
-					swp_to_radix_entry(swap), gfp);
-	if (error) {
-		mem_cgroup_cancel_charge(page, memcg);
+					swp_to_radix_entry(swap), gfp,
+					charge_mm);
+	if (error)
 		goto failed;
-	}
-
-	mem_cgroup_commit_charge(page, memcg, true);
 
 	spin_lock_irq(&info->lock);
 	info->swapped--;
@@ -1722,7 +1729,6 @@ static int shmem_getpage_gfp(struct inod
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
 	struct mm_struct *charge_mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	enum sgp_type sgp_huge = sgp;
 	pgoff_t hindex = index;
@@ -1847,21 +1853,11 @@ alloc_nohuge:
 	if (sgp == SGP_WRITE)
 		__SetPageReferenced(page);
 
-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (error) {
-		if (PageTransHuge(page)) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
-		}
-		goto unacct;
-	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
-					NULL, gfp & GFP_RECLAIM_MASK);
-	if (error) {
-		mem_cgroup_cancel_charge(page, memcg);
+					NULL, gfp & GFP_RECLAIM_MASK,
+					charge_mm);
+	if (error)
 		goto unacct;
-	}
-	mem_cgroup_commit_charge(page, memcg, false);
 	lru_cache_add_anon(page);
 
 	spin_lock_irq(&info->lock);
@@ -2299,7 +2295,6 @@ static int shmem_mfill_atomic_pte(struct
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	struct mem_cgroup *memcg;
 	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
@@ -2349,16 +2344,10 @@ static int shmem_mfill_atomic_pte(struct
 	if (unlikely(offset >= max_off))
 		goto out_release;
 
-	ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg);
-	if (ret)
-		goto out_release;
-
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK);
+				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
-		goto out_release_uncharge;
-
-	mem_cgroup_commit_charge(page, memcg, false);
+		goto out_release;
 
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
@@ -2379,11 +2368,11 @@ static int shmem_mfill_atomic_pte(struct
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;
 
 	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;
 
 	lru_cache_add_anon(page);
 
@@ -2404,12 +2393,10 @@ static int shmem_mfill_atomic_pte(struct
 	ret = 0;
 out:
 	return ret;
-out_release_uncharge_unlock:
+out_release_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
 	ClearPageDirty(page);
 	delete_from_page_cache(page);
-out_release_uncharge:
-	mem_cgroup_cancel_charge(page, memcg);
 out_release:
 	unlock_page(page);
 	put_page(page);
_