Date: Wed, 3 Jul 2019 16:37:01 +0200
From: Michal Hocko
To: Waiman Long
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Alexander Viro, Jonathan Corbet, Luis Chamberlain,
 Kees Cook, Johannes Weiner, Vladimir Davydov, linux-mm@kvack.org,
 linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 Roman Gushchin, Shakeel Butt, Andrea Arcangeli
Subject: Re: [PATCH] mm, slab: Extend slab/shrink to shrink all the memcg caches
Message-ID: <20190703143701.GR978@dhcp22.suse.cz>
In-Reply-To: <9ade5859-b937-c1ac-9881-2289d734441d@redhat.com>
References: <20190702183730.14461-1-longman@redhat.com>
 <20190703065628.GK978@dhcp22.suse.cz>
 <9ade5859-b937-c1ac-9881-2289d734441d@redhat.com>

On Wed 03-07-19 09:12:13, Waiman Long wrote:
> On 7/3/19 2:56 AM, Michal Hocko wrote:
> > On Tue 02-07-19 14:37:30, Waiman Long wrote:
> >> Currently, a value of '1' is written to the /sys/kernel/slab/<cache>/shrink
> >> file to shrink the slab by flushing all the per-cpu slabs and free
> >> slabs in partial lists. This applies only to the root caches, though.
> >>
> >> Extend this capability by shrinking all the child memcg caches and
> >> the root cache when a value of '2' is written to the shrink sysfs file.
> >
> > Why do we need a new value for this functionality? I would tend to think
> > that skipping memcg caches is a bug/incomplete implementation. Or is it
> > a deliberate decision to cover root caches only?
>
> It is just that I don't want to change the existing behavior of the
> current code. It will definitely take longer to shrink both the root
> cache and the memcg caches.

Does that matter? To whom and why? I do not expect this interface to be
used heavily.
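For reference, the proposal boils down to a dispatch on the written value
in the SLUB sysfs handler. The following is only a rough sketch of that
shape, not the actual patch: shrink_store() and kmem_cache_shrink() are
the existing functions, while kmem_cache_shrink_all() is a made-up
placeholder for whatever would walk the memcg children (a rough sketch of
such a helper follows further below).

static ssize_t shrink_store(struct kmem_cache *s,
			    const char *buf, size_t length)
{
	switch (buf[0]) {
	case '1':	/* existing semantic: shrink the root cache only */
		kmem_cache_shrink(s);
		break;
	case '2':	/* proposed: root cache plus all memcg child caches */
		kmem_cache_shrink_all(s);	/* hypothetical helper */
		break;
	default:
		return -EINVAL;
	}
	return length;
}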
> If we all agree that the only sensible operation is to shrink the root
> cache and the memcg caches together, I am fine just adding memcg shrink
> without changing the sysfs interface definition and be done with it.

The existing documentation is really modest on the actual semantics:

    Description:
	The shrink file is written when memory should be reclaimed from
	a cache. Empty partial slabs are freed and the partial list is
	sorted so the slabs with the fewest available objects are used
	first.

which to me sounds like all slabs are freed and nobody should really be
thinking of memcgs. This is simply a drop_caches kind of thing. We surely
do not want to drop caches only for the root memcg for
/proc/sys/vm/drop_caches, right?

-- 
Michal Hocko
SUSE Labs
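Following the drop_caches analogy, a single shrink that covers the root
cache and every memcg child cache could look roughly like the sketch
below. It leans on the mm/slab.h internals of that era (is_root_cache(),
for_each_memcg_cache(), __kmem_cache_shrink(), slab_mutex); the function
name, the hotplug locking and the lack of error handling are illustrative
rather than taken from the actual patch.

/*
 * Sketch only: shrink the root cache and then every memcg child cache.
 * Child caches have no children of their own, so fall back to a plain
 * shrink for them.
 */
static void kmem_cache_shrink_all(struct kmem_cache *s)
{
	struct kmem_cache *c;

	if (!is_root_cache(s)) {
		kmem_cache_shrink(s);
		return;
	}

	get_online_cpus();
	get_online_mems();
	__kmem_cache_shrink(s);			/* root cache first */

	mutex_lock(&slab_mutex);		/* protects the children list */
	for_each_memcg_cache(c, s)
		__kmem_cache_shrink(c);		/* then each memcg child */
	mutex_unlock(&slab_mutex);

	put_online_mems();
	put_online_cpus();
}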