linux-kernel.vger.kernel.org archive mirror
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Nitin Gupta <ngupta@vflare.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCHv4 0/9] zsmalloc/zram: configurable zspage size
Date: Fri, 11 Nov 2022 09:56:36 +0900	[thread overview]
Message-ID: <Y22dxEcs2g5mjuQ7@google.com> (raw)
In-Reply-To: <Y21+xp52OQYi/qjQ@google.com>

Hi,

On (22/11/10 14:44), Minchan Kim wrote:
> On Mon, Oct 31, 2022 at 02:40:59PM +0900, Sergey Senozhatsky wrote:
> > 	Hello,
> > 
> > 	Some use-cases and/or data patterns may benefit from
> > larger zspages. Currently the limit on the number of physical
> > pages that can be linked into a zspage is hardcoded to 4. A higher
> > limit changes key characteristics of a number of the size
> > classes, improving compactness of the pool and reducing the
> > amount of memory the zsmalloc pool uses. More on this in the
> > 0002 commit message.
> 
> Hi Sergey,
> 
> I think the idea of breaking away from a fixed number of
> subpages per zspage is a really good starting point for further
> optimization. However, I worry about introducing a per-pool
> config at this stage. How about introducing just one golden
> value for the zspage size? Say, order-3 or order-4 in Kconfig,
> while keeping the default of 2?
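
[ A minimal sketch of what such a Kconfig entry could look like; the
symbol name and range below are illustrative assumptions, not part of
the series: ]

```kconfig
# Hypothetical Kconfig entry illustrating the "one golden value" idea.
# The symbol name ZSMALLOC_ZSPAGE_ORDER and the range are assumptions.
config ZSMALLOC_ZSPAGE_ORDER
	int "Maximum order of physical pages linked into a zspage"
	depends on ZSMALLOC
	range 2 4
	default 2
	help
	  Higher values allow larger zspages, which can improve pool
	  compactness for some data patterns at the cost of larger
	  allocation units.
```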

Sorry, not sure I'm following. So you want a .config value
for the zspage limit? I really like the sysfs knob, because it
lets one set values on a per-device basis (for systems with
multiple zram devices holding different data patterns):

	zram0, used as a swap device, uses, say, 4
	zram1, a vfat block device, uses, say, 6
	zram2, an ext4 block device, uses, say, 8
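
[ With the pages_per_pool_page device attribute added in patch 7/9,
such per-device tuning could look like the sketch below; the exact
sysfs path and the chosen values are assumptions for illustration: ]

```shell
#!/bin/sh
# Hypothetical per-device tuning via the pages_per_pool_page attribute
# proposed in patch 7/9; the sysfs path and values are illustrative.
for dev in zram0:4 zram1:6 zram2:8; do
	name=${dev%:*}      # device name, e.g. zram0
	pages=${dev#*:}     # pages-per-zspage limit, e.g. 4
	attr="/sys/block/$name/pages_per_pool_page"
	# Write only if the device and attribute exist (e.g. on a live system).
	if [ -w "$attr" ]; then
		echo "$pages" > "$attr"
	fi
	echo "$name: $pages pages per zspage"
done
```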

The whole point of the series is that one single value does
not fit all purposes. There is no silver bullet.

> And then we can put more effort into auto-tuning, on the fly,
> based on the wasted memory and the number of size classes. A
> good thing is that we have an indirection table
> (handle <-> zspage), so we can move objects at any time; I
> think we could end up with a better approach that way.

It still needs to be per zram device (per zs_pool). A sysfs knob
doesn't stop us from having auto-tuned values in the future.


Thread overview: 34+ messages
2022-10-31  5:40 [PATCHv4 0/9] zsmalloc/zram: configurable zspage size Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 1/9] zram: add size class equals check into recompression Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 2/9] zsmalloc: turn zspage order into runtime variable Sergey Senozhatsky
2022-11-10 21:59   ` Minchan Kim
2022-11-11 10:38     ` Sergey Senozhatsky
2022-11-11 17:09       ` Minchan Kim
2022-11-14  3:55         ` Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 3/9] zsmalloc: move away from page order defines Sergey Senozhatsky
2022-11-10 22:02   ` Minchan Kim
2022-10-31  5:41 ` [PATCHv4 4/9] zsmalloc: make huge class watermark zs_pool member Sergey Senozhatsky
2022-11-10 22:25   ` Minchan Kim
2022-11-11  1:07     ` Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 5/9] zram: huge size watermark cannot be global Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 6/9] zsmalloc: pass limit on pages per-zspage to zs_create_pool() Sergey Senozhatsky
2022-11-09  6:24   ` Sergey Senozhatsky
2022-11-11 17:14     ` Minchan Kim
2022-11-11  2:10   ` Minchan Kim
2022-11-11 10:32     ` Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 7/9] zram: add pages_per_pool_page device attribute Sergey Senozhatsky
2022-11-09  4:34   ` Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 8/9] Documentation: document zram pages_per_pool_page attribute Sergey Senozhatsky
2022-11-11  2:20   ` Minchan Kim
2022-11-11 10:34     ` Sergey Senozhatsky
2022-10-31  5:41 ` [PATCHv4 9/9] zsmalloc: break out of loop when found perfect zspage order Sergey Senozhatsky
2022-11-10 22:44 ` [PATCHv4 0/9] zsmalloc/zram: configurable zspage size Minchan Kim
2022-11-11  0:56   ` Sergey Senozhatsky [this message]
2022-11-11 17:03     ` Minchan Kim
2022-11-14  3:53       ` Sergey Senozhatsky
2022-11-14  7:55       ` Sergey Senozhatsky
2022-11-14  8:37       ` Sergey Senozhatsky
2022-11-15  6:01       ` Sergey Senozhatsky
2022-11-15  7:59         ` Sergey Senozhatsky
2022-11-15 23:23           ` Minchan Kim
2022-11-16  0:52             ` Sergey Senozhatsky
