Subject: Re: [PATCH v6 0/7] fs/dcache: Track & limit # of negative dentries
To: Michal Hocko
Cc: Alexander Viro, Jonathan Corbet, "Luis R. Rodriguez", Kees Cook,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-doc@vger.kernel.org, Linus Torvalds,
	Jan Kara, "Paul E. McKenney", Andrew Morton, Ingo Molnar,
	Miklos Szeredi, Matthew Wilcox, Larry Woodman, James Bottomley,
	"Wangkai (Kevin C)"
References: <1530905572-817-1-git-send-email-longman@redhat.com>
 <20180709081920.GD22049@dhcp22.suse.cz>
 <62275711-e01d-7dbe-06f1-bf094b618195@redhat.com>
 <20180710142740.GQ14284@dhcp22.suse.cz>
 <20180711102139.GG20050@dhcp22.suse.cz>
From: Waiman Long
Message-ID: <9f24c043-1fca-ee86-d609-873a7a8f7a64@redhat.com>
Date: Wed, 11 Jul 2018 11:13:58 -0400
In-Reply-To: <20180711102139.GG20050@dhcp22.suse.cz>
Sender: owner-linux-mm@kvack.org

On 07/11/2018 06:21 AM, Michal Hocko wrote:
> On Tue 10-07-18 12:09:17, Waiman Long wrote:
>> On 07/10/2018 10:27 AM, Michal Hocko wrote:
>>> On Mon 09-07-18 12:01:04, Waiman Long wrote:
>>>> On 07/09/2018 04:19 AM, Michal Hocko wrote:
> [...]
>>>>> percentage has turned out to be a really wrong unit for many tunables
>>>>> over time. Even 1% can be just too much on really large machines.
>>>> Yes, that is true. Do you have any suggestion of what kind of unit
>>>> should be used? I can scale down the unit to 0.1% of the system memory.
>>>> Alternatively, one unit can be 10k/cpu thread, so a 20-thread system
>>>> corresponds to 200k, etc.
>>> I simply think this is a strange user interface. How much is a
>>> reasonable number? How can any admin figure that out?
>> Without the optional enforcement, the limit is essentially just a
>> notification mechanism where the system signals that there is something
>> wrong going on and the system administrator needs to take a look. So it
>> is perfectly OK if the limit is sufficiently high that normally we won't
>> need to use that many negative dentries. The goal is to prevent negative
>> dentries from consuming a significant portion of the system memory.
> So again. How do you tell the right number?

I guess it will be more of a trial-and-error kind of adjustment, as the
right figure will depend on the kind of workloads being run on the
system. Unless the enforcement option is turned on, setting a limit that
is too small won't have much impact other than a slight performance drop
from the invocation of the slowpaths and the warning messages in the
console.

Whenever a non-zero value is written into "neg-dentry-limit", an
informational message will be printed about what the actual negative
dentry limits will be. It can be compared against the current negative
dentry count (the 5th number) from "dentry-state" to see if there is
enough safety margin to avoid false positive warnings.

>
>> I am going to reduce the granularity of each unit to 1/1000 of the total
>> system memory so that for large systems with TB of memory, a smaller
>> amount of memory can be specified.
> It is just a matter of time for this to be too coarse as well.

The goal is to not have too much memory consumed by negative dentries
while also making sure the limit won't be reached by regular daily
activities. So a limit of 1/1000 of the total system memory should be
good enough on large-memory systems even if the absolute number is
really big.

Cheers,
Longman
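
[Editor's note: a minimal shell sketch of the tuning procedure described
above. The "5th field is the negative dentry count" layout and the
1/1000-of-memory unit come from this patch series, not mainline, and the
sample numbers (the dentry-state line, the 8 GiB machine, the limit value
of 2) are made up for illustration.]

```shell
#!/bin/sh
# Parse a sample dentry-state line; with this patch series the 5th
# field would be the current number of negative dentries (assumed
# layout, not the mainline kernel's).
state="1234567 890123 45 0 67890 0"
neg=$(echo "$state" | awk '{ print $5 }')
echo "negative dentries in use: $neg"

# One unit of "neg-dentry-limit" = 1/1000 of total system memory.
# On a hypothetical 8 GiB machine, writing 2 would budget 2/1000 of RAM:
total_kb=8388608                       # 8 GiB in kB (made-up example)
units=2                                # hypothetical limit setting
budget_kb=$(( total_kb * units / 1000 ))
echo "negative dentry budget: ${budget_kb} kB"
```

Comparing the first number against the second shows whether the chosen
limit leaves a safety margin before the warnings start firing.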