From: Yang Shi
Date: Fri, 30 Jul 2021 14:50:58 -0700
Subject: Re: [PATCH 03/16] huge tmpfs: remove shrinklist addition from shmem_setattr()
To: Hugh Dickins
Cc: Andrew Morton, Shakeel Butt, "Kirill A. Shutemov", Miaohe Lin,
	Mike Kravetz, Michal Hocko, Rik van Riel, Christoph Hellwig,
	Matthew Wilcox, "Eric W. Biederman", Alexey Gladkov, Chris Wilson,
	Matthew Auld, Linux FS-devel Mailing List, Linux Kernel Mailing List,
	linux-api@vger.kernel.org, Linux MM
In-Reply-To: <42353193-6896-aa85-9127-78881d5fef66@google.com>
References: <2862852d-badd-7486-3a8e-c5ea9666d6fb@google.com> <42353193-6896-aa85-9127-78881d5fef66@google.com>
Content-Type: text/plain; charset="UTF-8"
List-ID: linux-kernel@vger.kernel.org

On Fri, Jul 30, 2021 at 12:31 AM Hugh Dickins wrote:
>
> There's a block of code in shmem_setattr() to add the inode to
> shmem_unused_huge_shrink()'s shrinklist when lowering i_size: it dates
> from before 5.7 changed truncation to do split_huge_page() for itself,
> and should have been removed at that time.
>
> I am over-stating that: split_huge_page() can fail (notably if there's
> an extra reference to the page at that time), so there might be value in
> retrying. But there were already retries as truncation worked through
> the tails, and this addition risks repeating unsuccessful retries
> indefinitely: I'd rather remove it now, and work on reducing the
> chance of split_huge_page() failures separately, if we need to.

Yes, agreed.

Reviewed-by: Yang Shi

>
> Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")
> Signed-off-by: Hugh Dickins
> ---
>  mm/shmem.c | 19 -------------------
>  1 file changed, 19 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 24c9da6b41c2..ce3ccaac54d6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1061,7 +1061,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
>  {
>  	struct inode *inode = d_inode(dentry);
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> -	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>  	int error;
>
>  	error = setattr_prepare(&init_user_ns, dentry, attr);
> @@ -1097,24 +1096,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
>  			if (oldsize > holebegin)
>  				unmap_mapping_range(inode->i_mapping,
>  							holebegin, 0, 1);
> -
> -			/*
> -			 * Part of the huge page can be beyond i_size: subject
> -			 * to shrink under memory pressure.
> -			 */
> -			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> -				spin_lock(&sbinfo->shrinklist_lock);
> -				/*
> -				 * _careful to defend against unlocked access to
> -				 * ->shrink_list in shmem_unused_huge_shrink()
> -				 */
> -				if (list_empty_careful(&info->shrinklist)) {
> -					list_add_tail(&info->shrinklist,
> -							&sbinfo->shrinklist);
> -					sbinfo->shrinklist_len++;
> -				}
> -				spin_unlock(&sbinfo->shrinklist_lock);
> -			}
>  		}
>  	}
>
> --
> 2.26.2
>