From: Yang Shi
Date: Thu, 28 Apr 2022 09:54:39 -0700
Subject: Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
To: Shakeel Butt
Cc: Johannes Weiner, Minchan Kim, Andrew Morton, Michal Hocko,
 Roman Gushchin, Seth Jennings, Dan Streetman, Linux MM, Cgroups,
 LKML, Kernel Team
References: <20220427160016.144237-1-hannes@cmpxchg.org>
 <20220427160016.144237-5-hannes@cmpxchg.org>
List-ID: linux-kernel@vger.kernel.org

On Thu, Apr 28, 2022 at 7:49 AM Shakeel Butt wrote:
>
> On Thu, Apr 28, 2022 at 7:36 AM Johannes Weiner wrote:
> >
> > On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> > > On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner wrote:
> > > >
> > > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > Hi Johannes,
> > > > > >
> > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > Currently it requires poking at debugfs to figure out the
> > > > > > > size and population of the zswap cache on a host. There are
> > > > > > > no counters for reads and writes against the cache. As a
> > > > > > > result, it's difficult to understand zswap behavior on
> > > > > > > production systems.
> > > > > > >
> > > > > > > Print zswap memory consumption and how many pages are
> > > > > > > zswapped out in /proc/meminfo. Count zswapouts and zswapins
> > > > > > > in /proc/vmstat.
> > > > > > >
> > > > > > > Signed-off-by: Johannes Weiner
> > > > > > > ---
> > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > >
> > > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > > +#endif
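The "<< (PAGE_SHIFT - 10)" in the hunk above is the usual kernel
shortcut for turning a page count into kilobytes: shifting left by
PAGE_SHIFT gives bytes, shifting right by 10 gives kB, and the two
collapse into one shift. A minimal userspace sketch of the same
arithmetic, assuming 4 KiB pages and a made-up page count
(zswap_stored_pages is an atomic_t in the kernel; a plain unsigned
long stands in here):

/* Sketch: why "pages << (PAGE_SHIFT - 10)" yields kilobytes.
 * Userspace analogue only; PAGE_SHIFT = 12 (4 KiB pages) is an
 * assumption, as is the example page count. */
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed: 4 KiB pages */

int main(void)
{
	unsigned long stored_pages = 25600;	/* hypothetical count */

	/* pages -> bytes is "<< PAGE_SHIFT"; bytes -> kB is ">> 10";
	 * combined: "<< (PAGE_SHIFT - 10)". */
	unsigned long kb = stored_pages << (PAGE_SHIFT - 10);

	printf("Zswapped: %8lu kB\n", kb);	/* prints 102400 kB */
	return 0;
}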
> > > > > >
> > > > > > I agree it would be very handy to have the memory consumption
> > > > > > in meminfo:
> > > > > >
> > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > >
> > > > > > If we really go with this Zswap-only metric instead of the
> > > > > > general term "Compressed", I'd like to post a "Zram:" entry in
> > > > > > this patchset, for the same reason. Do you think that's a
> > > > > > better idea than introducing a general term like "Compressed:"
> > > > > > or something else?
> > > > >
> > > > > I'm fine with changing it to Compressed. If somebody cares about
> > > > > a more detailed breakdown, we can add Zswap and Zram subsets as
> > > > > needed.
> > > >
> > > > It does raise the question of what to do about cgroups, though.
> > > > Should the control files (memory.zswap.current & memory.zswap.max)
> > > > apply to zram in the future? If so, we should rename them, too.
> > > >
> > > > I'm not too familiar with zram; maybe you can provide some
> > > > background. AFAIU, Google uses zram quite widely, which makes it
> > > > all the more confusing why there is no container support for it
> > > > yet.
> > > >
> > > > Could you shed some light?
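A sketch of how the per-cgroup knobs named above could be driven from
userspace once they land. The cgroup v2 mount point and the group
name "job0" are assumptions for illustration, not part of the patch:

/* Sketch: query memory.zswap.current and cap memory.zswap.max for one
 * cgroup. Paths assume cgroup v2 mounted at /sys/fs/cgroup and a
 * hypothetical group "job0". */
#include <stdio.h>

int main(void)
{
	unsigned long long cur;
	FILE *f;

	/* How many bytes of this group's memory sit compressed in zswap? */
	f = fopen("/sys/fs/cgroup/job0/memory.zswap.current", "r");
	if (f) {
		if (fscanf(f, "%llu", &cur) == 1)
			printf("zswap usage: %llu bytes\n", cur);
		fclose(f);
	}

	/* Cap this group's compressed pool at 64M (written in bytes). */
	f = fopen("/sys/fs/cgroup/job0/memory.zswap.max", "w");
	if (f) {
		fputs("67108864\n", f);
		fclose(f);
	}
	return 0;
}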
> > >
> > > I can shed light on the datacenter workloads. We use cgroup (still
> > > on v1) and zswap. For the workloads/applications, the swap (or
> > > zswap) is transparent in the sense that they are charged exactly
> > > the same irrespective of how much of their memory is zswapped out.
> > > Basically, the applications see the same usage, which is actually
> > > v1's memsw.usage_in_bytes. We dynamically increase the swap size
> > > if it is low, so we are not really worried about one job hogging
> > > the swap space.
> > >
> > > Regarding stats, we actually do have them internally, representing
> > > the compressed size and the number of pages in zswap. The
> > > compressed size is actually used for OOM victim selection. The
> > > memsw or v2's swap usage in the presence of compression-based swap
> > > does not actually tell how much memory can potentially be released
> > > by evicting a job. For example, take two jobs 'A' and 'B'. Both
> > > have 100 pages compressed, but A's 100 pages compress down to,
> > > say, 10 pages, while B's 100 pages compress to 70 pages. It is
> > > preferable to kill B, as that will release 70 pages. (This is a
> > > very simplified explanation of what we actually do.)
> >
> > Ah, so zram is really only used by the mobile stuff after all.
> >
> > In the DC, I guess you don't use disk swap in conjunction with
> > zswap, so those writeback cache controls are less interesting to
> > you?
>
> Yes, we have some modifications to zswap to make it work without any
> backing real swap. Though there is a future plan to move to zram
> eventually.

Interesting. If so, why not simply use zram?

> > But it sounds like you would benefit from the zswap(ped) counters
> > in memory.stat at least.
>
> Yes, and I think if we need zram-specific counters/stats in the
> future, those can be added then.
>
> > Thanks, that is enlightening!
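Shakeel's A/B example boils down to this: when choosing an OOM victim
among jobs using compressed swap, the compressed size, not the logical
swap usage, approximates how much memory a kill actually frees. A toy
sketch of that comparison; the job names and page counts come from the
example above, and the scoring is deliberately simplified:

/* Toy victim selection: prefer the job whose compressed pages occupy
 * the most real memory, since that is what a kill releases. Numbers
 * are the hypothetical 'A'/'B' example from the thread, not real
 * kernel logic. */
#include <stdio.h>

struct job {
	const char *name;
	unsigned long stored_pages;	/* logical (uncompressed) pages */
	unsigned long compressed_pages;	/* real memory they occupy */
};

int main(void)
{
	struct job jobs[] = {
		{ "A", 100, 10 },	/* compresses well */
		{ "B", 100, 70 },	/* compresses poorly */
	};
	struct job *victim = &jobs[0];

	for (size_t i = 1; i < sizeof(jobs) / sizeof(jobs[0]); i++) {
		/* Equal logical usage (100 pages each), so memsw-style
		 * accounting cannot tell them apart; compressed size can. */
		if (jobs[i].compressed_pages > victim->compressed_pages)
			victim = &jobs[i];
	}

	printf("kill %s, releasing %lu pages\n",
	       victim->name, victim->compressed_pages);	/* B, 70 */
	return 0;
}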