Date: Tue, 17 Aug 2021 01:10:31 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
Cc: Hugh Dickins, Shakeel Butt, "Kirill A. Shutemov", Yang Shi, Miaohe Lin,
    Mike Kravetz, Michal Hocko, Rik van Riel, Matthew Wilcox,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/9] huge tmpfs: remove shrinklist addition from shmem_setattr()

There's a block of code in shmem_setattr() to add the inode to
shmem_unused_huge_shrink()'s shrinklist when lowering i_size: it dates
from before 5.7 changed truncation to do split_huge_page() for itself,
and should have been removed at that time.

I am over-stating that: split_huge_page() can fail (notably if there's
an extra reference to the page at that time), so there might be value in
retrying. But there were already retries as truncation worked through
the tails, and this addition risks repeating unsuccessful retries
indefinitely: I'd rather remove it now, and work on reducing the chance
of split_huge_page() failures separately, if we need to.
Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Yang Shi
---
 mm/shmem.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 24c9da6b41c2..ce3ccaac54d6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1061,7 +1061,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 {
 	struct inode *inode = d_inode(dentry);
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	int error;
 
 	error = setattr_prepare(&init_user_ns, dentry, attr);
@@ -1097,24 +1096,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 			if (oldsize > holebegin)
 				unmap_mapping_range(inode->i_mapping,
 							holebegin, 0, 1);
-
-			/*
-			 * Part of the huge page can be beyond i_size: subject
-			 * to shrink under memory pressure.
-			 */
-			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-				spin_lock(&sbinfo->shrinklist_lock);
-				/*
-				 * _careful to defend against unlocked access to
-				 * ->shrink_list in shmem_unused_huge_shrink()
-				 */
-				if (list_empty_careful(&info->shrinklist)) {
-					list_add_tail(&info->shrinklist,
-							&sbinfo->shrinklist);
-					sbinfo->shrinklist_len++;
-				}
-				spin_unlock(&sbinfo->shrinklist_lock);
-			}
 		}
 	}
 
-- 
2.26.2