Date: Tue, 16 Feb 2016 19:35:18 +1100
From: Dave Chinner <david@fromorbit.com>
To: Nag Avadhanam
Cc: "Theodore Ts'o", "Daniel Walker (danielwa)", Alexander Viro, "Khalid Mughal (khalidm)", xe-kernel@external.cisco.com, dave.hansen@intel.com, hannes@cmpxchg.org, riel@redhat.com, Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] kernel: fs: drop_caches: add dds drop_caches_count
Message-ID: <20160216083518.GZ19486@dastard>
References: <1455308080-27238-1-git-send-email-danielwa@cisco.com> <20160214211856.GT19486@dastard> <56C216CA.7000703@cisco.com> <20160215230511.GU19486@dastard> <56C264BF.3090100@cisco.com> <20160216004531.GA28260@thunk.org> <20160216053827.GX19486@dastard>

On Mon, Feb 15, 2016 at 11:14:13PM -0800, Nag Avadhanam wrote:
> On Mon, 15 Feb 2016, Dave Chinner wrote:
>
> >On Tue, Feb 16, 2016 at 02:58:04AM +0000, Nag Avadhanam (nag) wrote:
> >>It's the calculation of the number of bytes of non-reclaimable file
> >>system cache pages that has been troubling us.
> >>We do not want to count inactive file
> >>pages (of programs/binaries) that were once mapped by any process in
> >>the system as reclaimable, because that might lead to thrashing under
> >>memory pressure (we want to alert admins before the system starts
> >>dropping text pages).
> >
> >The code presented does not match your requirements. It only counts
> >pages that are currently mapped into PTEs. Hence it will tell you
> >that once-used and now unmapped binary pages are reclaimable, and
> >drop_caches will reclaim them. Hence they'll need to be fetched from
> >disk again if they are faulted in again after a drop_caches run.

> Will the inactive binary pages be automatically unmapped even if the
> process into whose address space they are mapped is still around? I
> thought they are left mapped until such time as there is memory
> pressure.

Right, page reclaim via memory pressure can unmap mapped pages in
order to reclaim them. Drop caches will skip them.

> We only care about binary pages (active and inactive) mapped into the
> address spaces of live processes. It's okay to aggressively reclaim
> inactive pages that were once mapped into processes that are no
> longer around.

Ok, if you're only concerned about live processes, then drop caches
should behave as you want.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
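[Editor's note: a rough sketch of the distinction discussed above. Since drop_caches skips pages mapped into live processes, one can approximate the droppable page cache from /proc/meminfo as Cached minus Mapped minus Shmem. This is a back-of-the-envelope estimate for monitoring, not the kernel's actual accounting, and the field names assume a reasonably modern kernel.]

```shell
#!/bin/sh
# Approximate the page cache that "echo 1 > /proc/sys/vm/drop_caches"
# could free. Mapped file pages (in use by live processes) and Shmem
# (tmpfs, not backed by a file on disk) are excluded because drop_caches
# will not reclaim them. All values from /proc/meminfo are in kB.
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
mapped=$(awk '/^Mapped:/ {print $2}' /proc/meminfo)
shmem=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
droppable=$(( cached - mapped - shmem ))
echo "approx droppable page cache: ${droppable} kB"
```

Because Mapped also counts mapped pages outside the regular page cache, the estimate can be conservative (even negative on shmem-heavy systems); it is only meant to flag the trend an admin would alert on.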