From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 3 Aug 2020 15:56:36 +0200
From: Michal Hocko
To: Yafang Shao
Cc: Johannes Weiner, Andrew Morton, Linux MM
Subject: Re: [PATCH] mm, memcg: do full scan initially in force_empty
Message-ID: <20200803135636.GN5174@dhcp22.suse.cz>
References: <20200728074032.1555-1-laoar.shao@gmail.com>
 <20200730112620.GH18727@dhcp22.suse.cz>
 <20200803101226.GH5174@dhcp22.suse.cz>

On Mon 03-08-20 21:20:44, Yafang Shao wrote:
> On Mon, Aug 3, 2020 at 6:12 PM Michal Hocko wrote:
> >
> > On Fri 31-07-20 09:50:04, Yafang Shao wrote:
> > > On Thu, Jul 30, 2020 at 7:26 PM Michal Hocko wrote:
> > > >
> > > > On Tue 28-07-20 03:40:32, Yafang Shao wrote:
> > > > > Sometimes we use memory.force_empty to drop pages in a memcg to work
> > > > > around some memory pressure issues. When we use force_empty, we want
> > > > > the pages to be reclaimed ASAP, but force_empty reclaims pages as a
> > > > > regular reclaimer, which scans the page cache LRUs starting from
> > > > > DEF_PRIORITY and only drops to 0 at the end to do a full scan. That
> > > > > is a waste of time; we'd better do a full scan initially in
> > > > > force_empty.
> > > >
> > > > Do you have any numbers please?
> > > >
> > >
> > > Unfortunately the number doesn't improve obviously, while it is
> > > directly proportional to the number of total pages to be scanned.
> >
> > Your changelog claims an optimization and that should be backed by some
> > numbers. It is true that reclaim at a higher priority behaves slightly
> > and subtly differently but that urges for even more details in the
> > changelog.
>
> With the below additional change (nr_to_scan also changed), the elapsed
> time of force_empty can be reduced by 10%.
>
> @@ -3208,6 +3211,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
>  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
>  {
>  	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> +	unsigned long size;
>
>  	/* we call try-to-free pages for make this cgroup empty */
>  	lru_add_drain_all();
> @@ -3215,14 +3219,15 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
>  	drain_all_stock(memcg);
>  	/* try to free all pages in this cgroup */
> -	while (nr_retries && page_counter_read(&memcg->memory)) {
> +	while (nr_retries && (size = page_counter_read(&memcg->memory))) {
>  		int progress;
>
>  		if (signal_pending(current))
>  			return -EINTR;
> -		progress = try_to_free_mem_cgroup_pages(memcg, 1,
> -							GFP_KERNEL, true);
> +		progress = try_to_free_mem_cgroup_pages(memcg, size,
> +							GFP_KERNEL, true,
> +							0);

Have you tried this change without changing the reclaim priority?

> Below are the numbers for a 16G memcg with full clean pagecache.
>
> Without these changes:
> $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> real	0m2.247s
> user	0m0.000s
> sys	0m1.722s
>
> With these changes:
> $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> real	0m2.053s
> user	0m0.000s
> sys	0m1.529s
>
> But I'm not sure whether we should make this improvement, because
> force_empty is not a critical path.
Well, an isolated change to force_empty would be more acceptable, but it
is worth noting that a very large reclaim target might affect the
userspace triggering this path because it will potentially increase the
latency to process any signals. I do not expect this to be a huge
problem in practice because even reclaim for a smaller target can take
quite long if the memory is not really reclaimable and it has to take
the full world scan. Moreover, most userspace will simply do
	echo 1 > $MEMCG_PAGE/force_empty
and only care about killing that if it takes too long.

> > > But then I notice that force_empty will try to write dirty pages, that
> > > is not expected by us, because this behavior may be dangerous in the
> > > production environment.
> >
> > I do not understand your claim here. Direct reclaim doesn't write dirty
> > page cache pages directly.
>
> It will write dirty pages once the sc->priority drops to a very low number.
>
> 	if (sc->priority < DEF_PRIORITY - 2)
> 		sc->may_writepage = 1;

OK, I see what you mean now. Please have a look above that check:

	/*
	 * Only kswapd can writeback filesystem pages
	 * to avoid risk of stack overflow. But avoid
	 * injecting inefficient single-page IO into
	 * flusher writeback as much as possible: only
	 * write pages when we've encountered many
	 * dirty pages, and when we've already scanned
	 * the rest of the LRU for clean pages and see
	 * the same dirty pages again (PageReclaim).
	 */

> > And it is even less clear why that would be
> > dangerous if it did.
>
> It will generate many IOs, which may block the others.
>
> > > What do you think introducing per memcg drop_cache ?
> >
> > I do not like the global drop_cache and per memcg is not very much
> > different. This all shouldn't be really necessary because we do have
> > means to reclaim memory in a memcg.
> > --
>
> We used to find an issue that there are many negative dentries in some memcgs.
Yes, negative dentries can build up, but the memory reclaim should be
pretty effective at reclaiming them.

> These negative dentries were introduced by some specific workload in
> these memcgs, and we want to drop them as soon as possible.
> But unfortunately there is no good way to drop them except for
> force_empty or global drop_caches.

You can use memcg limits (e.g. memory.high) to pro-actively reclaim
excess memory. Have you tried that?

> The force_empty will also drop the pagecache pages, which is not
> expected by us.

force_empty is intended to reclaim _all_ pages.

> The global drop_caches can't work either because it will drop slabs in
> other memcgs.
> That is why I want to introduce per memcg drop_caches.

Problems with negative dentries have already been discussed in the
past. I believe there was no conclusion so far. Please try to dig into
the archives.
-- 
Michal Hocko
SUSE Labs
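The proactive-reclaim route suggested above looks roughly like this
against the cgroup v2 interface (the thread itself uses the v1
hierarchy; the path and sizes are illustrative only):

```shell
# Lower memory.high so the kernel reclaims the memcg down toward 1G,
# then lift the cap again once the excess has been reclaimed.
$ echo 1G  > /sys/fs/cgroup/foo/memory.high
$ echo max > /sys/fs/cgroup/foo/memory.high
```

Unlike force_empty, this reclaims only down to the chosen target rather
than trying to empty the group entirely.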