Date: Mon, 11 May 2020 11:44:21 -0700 (PDT)
From: Hugh Dickins
To: Johannes Weiner
cc: Hugh Dickins, Joonsoo Kim, Alex Shi, Shakeel Butt, Michal Hocko,
 "Kirill A. Shutemov", Roman Gushchin, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
 Andrew Morton
Subject: Re: [PATCH 05/18] mm: memcontrol: convert page cache to a new
 mem_cgroup_charge() API
In-Reply-To: <20200511181056.GA339505@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>
 <20200420221126.341272-6-hannes@cmpxchg.org>
 <20200422064041.GE6780@js1304-desktop> <20200422120946.GA358439@cmpxchg.org>
 <20200423052450.GA12538@js1304-desktop> <20200508160122.GB181181@cmpxchg.org>
 <20200511150648.GA306292@cmpxchg.org> <20200511181056.GA339505@cmpxchg.org>
User-Agent: Alpine 2.11 (LSU 23 2013-08-11)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Mon, 11 May 2020, Johannes Weiner wrote:
> 
> Since commit b56a2d8af914 ("mm: rid swapoff of quadratic complexity"),
> shmem_unuse_inode() doesn't have its own copy anymore - it uses
> shmem_swapin_page().
> 
> However, that commit appears to have made shmem's private call to
> delete_from_swap_cache() obsolete as well. Whereas before this change
> we fully relied on shmem_unuse() to find and clear a shmem swap entry
> and its swapcache page, we now only need it to clean out shmem's
> private state in the inode, as it's followed by a loop over all
> remaining swap slots, calling try_to_free_swap() on stragglers.

Great, you've looked deeper into the current situation than I had.
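
(For anyone reading along: the loop in question is the tail of
try_to_unuse() since b56a2d8af914. Roughly this shape, lightly
paraphrased from mm/swapfile.c of that era - a sketch, not the
verbatim source:

	i = 0;
	while (READ_ONCE(si->inuse_pages) &&
	       !signal_pending(current) &&
	       (i = find_next_to_unuse(si, i, frontswap)) != 0) {

		entry = swp_entry(type, i);
		page = find_get_page(swap_address_space(entry), i);
		if (!page)
			continue;

		/*
		 * A straggler: the swap slot is otherwise unused but its
		 * page is still in swapcache.  Unlike free_swap_and_cache(),
		 * swapoff can afford a sleeping lock_page() here, so it
		 * frees what a trylock elsewhere may have skipped.
		 */
		lock_page(page);
		wait_on_page_writeback(page);
		try_to_free_swap(page);
		unlock_page(page);
		put_page(page);
	}

so a swapcache page left behind by the removed optimization still gets
cleaned up on the swapoff path.)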

> 
> Unless I missed something, it's still merely an optimization, and we
> can delete it for simplicity:

Yes, nice ---s, simpler code, and a good idea to separate it out
as a precursor: thanks, Hannes.

> 
> ---
> 
> From fc9dcaf68c8b54baf365cd670fb5780c7f0d243f Mon Sep 17 00:00:00 2001
> From: Johannes Weiner
> Date: Mon, 11 May 2020 12:59:08 -0400
> Subject: [PATCH] mm: shmem: remove rare optimization when swapin races with
>  hole punching
> 
> Commit 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON")
> recognized that hole punching can race with swapin and removed the
> BUG_ON() for a truncated entry from the swapin path.
> 
> The patch also added a swapcache deletion to optimize this rare case:
> Since swapin has the page locked, and free_swap_and_cache() merely
> trylocks, this situation can leave the page stranded in
> swapcache. Usually, page reclaim picks up stale swapcache pages, and
> the race can happen at any other time when the page is locked. (The
> same happens for non-shmem swapin racing with page table zapping.) The
> thinking here was: we already observed the race and we have the page
> locked, we may as well do the cleanup instead of waiting for reclaim.
> 
> However, this optimization complicates the next patch which moves the
> cgroup charging code around. As this is just a minor speedup for a
> race condition that is so rare that it required a fuzzer to trigger
> the original BUG_ON(), it's no longer worth the complications.
> 
> Suggested-by: Hugh Dickins
> Signed-off-by: Johannes Weiner

Acked-by: Hugh Dickins
(if one is allowed to suggest and to ack)

> ---
>  mm/shmem.c | 25 +++++++------------------
>  1 file changed, 7 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d505b6cce4ab..729bbb3513cd 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1665,27 +1665,16 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
>  	}
>  
>  	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
> -	if (!error) {
> -		error = shmem_add_to_page_cache(page, mapping, index,
> -						swp_to_radix_entry(swap), gfp);
> -		/*
> -		 * We already confirmed swap under page lock, and make
> -		 * no memory allocation here, so usually no possibility
> -		 * of error; but free_swap_and_cache() only trylocks a
> -		 * page, so it is just possible that the entry has been
> -		 * truncated or holepunched since swap was confirmed.
> -		 * shmem_undo_range() will have done some of the
> -		 * unaccounting, now delete_from_swap_cache() will do
> -		 * the rest.
> -		 */
> -		if (error) {
> -			mem_cgroup_cancel_charge(page, memcg);
> -			delete_from_swap_cache(page);
> -		}
> -	}
>  	if (error)
>  		goto failed;
>  
> +	error = shmem_add_to_page_cache(page, mapping, index,
> +					swp_to_radix_entry(swap), gfp);
> +	if (error) {
> +		mem_cgroup_cancel_charge(page, memcg);
> +		goto failed;
> +	}
> +
>  	mem_cgroup_commit_charge(page, memcg, true);
>  
>  	spin_lock_irq(&info->lock);
> -- 
> 2.26.2
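
(And for reference on the "merely trylocks" point in the changelog
above: free_swap_and_cache() reclaims the swapcache page via
__try_to_reclaim_swap() in mm/swapfile.c, which at this point in time
looks roughly like this - again a condensed sketch, not the verbatim
source:

	static int __try_to_reclaim_swap(struct swap_info_struct *si,
					 unsigned long offset, unsigned long flags)
	{
		swp_entry_t entry = swp_entry(si->type, offset);
		struct page *page;
		int ret = 0;

		page = find_get_page(swap_address_space(entry), offset);
		if (!page)
			return 0;
		/*
		 * Only a trylock: if swapin happens to hold the page lock
		 * when truncation gets here, freeing the swapcache page is
		 * simply skipped, leaving it for page reclaim (or swapoff)
		 * to pick up later.
		 */
		if (trylock_page(page)) {
			if ((flags & TTRS_ANYWAY) ||
			    ((flags & TTRS_UNMAPPED) && !page_mapped(page)) ||
			    ((flags & TTRS_FULL) && mem_cgroup_swap_full(page)))
				ret = try_to_free_swap(page);
			unlock_page(page);
		}
		put_page(page);
		return ret;
	}

which is why the delete_from_swap_cache() call being removed here was
only ever an optimization, never needed for correctness.)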