From: Yafang Shao
Date: Mon, 3 Aug 2020 22:26:10 +0800
Subject: Re: [PATCH] mm, memcg: do full scan initially in force_empty
To: Michal Hocko, longman@redhat.com
Cc: Johannes Weiner, Andrew Morton, Linux MM

On Mon, Aug 3, 2020 at 10:18 PM Yafang Shao wrote:
>
> On Mon, Aug 3, 2020 at 9:56 PM Michal Hocko wrote:
> >
> > On Mon 03-08-20 21:20:44, Yafang Shao wrote:
> > > On Mon, Aug 3, 2020 at 6:12 PM Michal Hocko wrote:
> > > >
> > > > On Fri 31-07-20 09:50:04, Yafang Shao wrote:
> > > > > On Thu, Jul 30, 2020 at 7:26 PM Michal Hocko wrote:
> > > > > >
> > > > > > On Tue 28-07-20 03:40:32, Yafang Shao wrote:
> > > > > > > Sometimes we use memory.force_empty to drop pages in a memcg to work
> > > > > > > around some memory pressure issues. When we use force_empty, we want
> > > > > > > the pages to be reclaimed ASAP; however, force_empty reclaims pages as
> > > > > > > a regular reclaimer, which scans the page cache LRUs starting from
> > > > > > > DEF_PRIORITY and only finally drops to priority 0 for a full scan.
> > > > > > > That is a waste of time; we'd better do a full scan initially in
> > > > > > > force_empty.
> > > > > >
> > > > > > Do you have any numbers please?
> > > > > >
> > > > >
> > > > > Unfortunately the numbers don't improve noticeably; the elapsed time is
> > > > > directly proportional to the total number of pages to be scanned.
> > > >
> > > > Your changelog claims an optimization and that should be backed by some
> > > > numbers. It is true that reclaim at a higher priority behaves slightly
> > > > and subtly differently, but that calls for even more detail in the
> > > > changelog.
> > > >
> > >
> > > With the additional change below (nr_to_scan also changed), the elapsed
> > > time of force_empty can be reduced by 10%.
> > >
> > > @@ -3208,6 +3211,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
> > >  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> > >  {
> > >         int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> > > +       unsigned long size;
> > >
> > >         /* we call try-to-free pages for make this cgroup empty */
> > >         lru_add_drain_all();
> > > @@ -3215,14 +3219,15 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> > >         drain_all_stock(memcg);
> > >         /* try to free all pages in this cgroup */
> > > -       while (nr_retries && page_counter_read(&memcg->memory)) {
> > > +       while (nr_retries && (size = page_counter_read(&memcg->memory))) {
> > >                 int progress;
> > >
> > >                 if (signal_pending(current))
> > >                         return -EINTR;
> > > -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> > > -                                                       GFP_KERNEL, true);
> > > +               progress = try_to_free_mem_cgroup_pages(memcg, size,
> > > +                                                       GFP_KERNEL, true,
> > > +                                                       0);
> >
> > Have you tried this change without changing the reclaim priority?
> >
>
> I tried it again. It seems the improvement is mostly due to the change of
> nr_to_reclaim rather than the reclaim priority:
>
> -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> +               progress = try_to_free_mem_cgroup_pages(memcg, size,
>
> > > Below are the numbers for a 16G memcg with fully clean page cache.
> > > Without this change,
> > > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > > real    0m2.247s
> > > user    0m0.000s
> > > sys     0m1.722s
> > >
> > > With this change,
> > > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > > real    0m2.053s
> > > user    0m0.000s
> > > sys     0m1.529s
> > >
> > > But I'm not sure whether we should make this improvement, because
> > > force_empty is not a critical path.
> >
> > Well, an isolated change to force_empty would be more acceptable, but it
> > is worth noting that a very large reclaim target might affect the
> > userspace triggering this path because it will potentially increase the
> > latency to process any signals. I do not expect this to be a huge
> > problem in practice because even reclaim for a smaller target can take
> > quite long if the memory is not really reclaimable and it has to take
> > the full world scan. Moreover, most userspace will simply do
> >         echo 1 > $MEMCG_PAGE/force_empty
> > and only care about killing that if it takes too long.
> >
>
> We may do it in a script to force empty many memcgs at the same time.
> Of course we can measure the time it takes to force empty, but that
> would be complicated.
>
> > > > > But then I noticed that force_empty will try to write dirty pages, which
> > > > > is not what we expect, because this behavior may be dangerous in a
> > > > > production environment.
> > > >
> > > > I do not understand your claim here. Direct reclaim doesn't write dirty
> > > > page cache pages directly.
> > >
> > > It will write dirty pages once sc->priority drops to a very low number:
> > >         if (sc->priority < DEF_PRIORITY - 2)
> > >                 sc->may_writepage = 1;
> >
> > OK, I see what you mean now. Please have a look above that check:
> >         /*
> >          * Only kswapd can writeback filesystem pages
> >          * to avoid risk of stack overflow. But avoid
> >          * injecting inefficient single-page IO into
> >          * flusher writeback as much as possible: only
> >          * write pages when we've encountered many
> >          * dirty pages, and when we've already scanned
> >          * the rest of the LRU for clean pages and see
> >          * the same dirty pages again (PageReclaim).
> >          */
> >
> > > > And it is even less clear why that would be
> > > > dangerous if it did.
> > > >
> > >
> > > It will generate many IOs, which may block the others.
> > >
> > > > > What do you think about introducing a per-memcg drop_cache?
> > > >
> > > > I do not like the global drop_cache, and per-memcg is not very much
> > > > different. This all shouldn't really be necessary because we do have
> > > > means to reclaim memory in a memcg.
> > > > --
> > >
> > > We once hit an issue where there were many negative dentries in some memcgs.
> >
> > Yes, negative dentries can build up, but memory reclaim should be
> > pretty effective at reclaiming them.
> >
> > > These negative dentries were introduced by some specific workload in
> > > these memcgs, and we want to drop them as soon as possible.
> > > But unfortunately there is no good way to drop them except
> > > force_empty or the global drop_caches.
> >
> > You can use memcg limits (e.g. memory high) to pro-actively reclaim
> > excess memory. Have you tried that?
> >
> > > force_empty will also drop the page cache pages, which is not
> > > what we expect.
> >
> > force_empty is intended to reclaim _all_ pages.
> >
> > > The global drop_caches can't work either because it will drop slabs in
> > > other memcgs.
> > > That is why I want to introduce a per-memcg drop_caches.
> >
> > Problems with negative dentries have already been discussed in the past.
> > I believe there was no conclusion so far. Please try to dig into the
> > archives.
>
> I have read Waiman's proposal, but it seems there isn't a conclusion yet.
> If the kernel can't fix this issue perfectly, then giving the user a
> chance to work around it would be a possible solution - drop_caches is
> that kind of workaround.
>
> [ adding Waiman to CC ]
>
>
> --

Forgot to reply to your suggestion about using the memcg limit; adding it below.

> You can use memcg limits (e.g. memory high) to pro-actively reclaim
> excess memory. Have you tried that?

The memcg limit not only reclaims the slabs but also reclaims the page
cache. Furthermore, there is no per-memcg vm.vfs_cache_pressure either.

--
Thanks
Yafang
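
For readers following the exchange above, here is a minimal shell sketch contrasting the two knobs discussed in this thread: the cgroup v1 force_empty file used for the timing numbers, and the memory.high limit Michal suggests for pro-active reclaim. The "foo" memcg name and the v1 mount point are taken from the examples above; the memory.high lines assume a cgroup v2 hierarchy mounted at /sys/fs/cgroup, and the 1G value is purely illustrative.

    # cgroup v1 force_empty: reclaim everything in the memcg, timed as in
    # the numbers quoted above ("foo" is the test memcg).
    time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty

    # Suggested alternative: pro-actively reclaim excess memory by lowering
    # memory.high (cgroup v2). Writing a value below the current usage makes
    # the kernel try to reclaim the memcg down to that value; writing "max"
    # afterwards removes the pressure again.
    echo 1G  > /sys/fs/cgroup/foo/memory.high
    echo max > /sys/fs/cgroup/foo/memory.high

Unlike force_empty, this reclaims only the excess above the chosen boundary rather than all pages, which is the trade-off being debated in the thread.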