linux-kernel.vger.kernel.org archive mirror
From: Vitaly Wool <vitaly.wool@konsulko.com>
To: Nhat Pham <nphamcs@gmail.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	minchan@kernel.org, ngupta@vflare.org, senozhatsky@chromium.org,
	sjenning@redhat.com, ddstreet@ieee.org
Subject: Re: [PATCH v7 4/6] zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order
Date: Tue, 29 Nov 2022 12:53:30 +0100	[thread overview]
Message-ID: <CAM4kBB+7boz+PZfPODbS-KMGOPZpa2QO5xZMoP2q_ZfGyqmQTA@mail.gmail.com> (raw)
In-Reply-To: <20221128191616.1261026-5-nphamcs@gmail.com>

On Mon, Nov 28, 2022 at 8:16 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> This helps determine the coldest zspages as candidates for writeback.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  mm/zsmalloc.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 5427a00a0518..b1bc231d94a3 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -239,6 +239,11 @@ struct zs_pool {
>         /* Compact classes */
>         struct shrinker shrinker;
>
> +#ifdef CONFIG_ZPOOL
> +       /* List tracking the zspages in LRU order by most recently added object */
> +       struct list_head lru;
> +#endif
> +
>  #ifdef CONFIG_ZSMALLOC_STAT
>         struct dentry *stat_dentry;
>  #endif
> @@ -260,6 +265,12 @@ struct zspage {
>         unsigned int freeobj;
>         struct page *first_page;
>         struct list_head list; /* fullness list */
> +
> +#ifdef CONFIG_ZPOOL
> +       /* links the zspage to the lru list in the pool */
> +       struct list_head lru;
> +#endif
> +
>         struct zs_pool *pool;
>  #ifdef CONFIG_COMPACTION
>         rwlock_t lock;
> @@ -953,6 +964,9 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
>         }
>
>         remove_zspage(class, zspage, ZS_EMPTY);
> +#ifdef CONFIG_ZPOOL
> +       list_del(&zspage->lru);
> +#endif
>         __free_zspage(pool, class, zspage);
>  }
>
> @@ -998,6 +1012,10 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
>                 off %= PAGE_SIZE;
>         }
>
> +#ifdef CONFIG_ZPOOL
> +       INIT_LIST_HEAD(&zspage->lru);
> +#endif
> +
>         set_freeobj(zspage, 0);
>  }
>
> @@ -1270,6 +1288,31 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>         obj_to_location(obj, &page, &obj_idx);
>         zspage = get_zspage(page);
>
> +#ifdef CONFIG_ZPOOL
> +       /*
> +        * Move the zspage to front of pool's LRU.
> +        *
> +        * Note that this is swap-specific, so by definition there are no ongoing
> +        * accesses to the memory while the page is swapped out that would make
> +        * it "hot". A new entry is hot, then ages to the tail until it either
> +        * gets written back or is swapped back in.
> +        *
> +        * Furthermore, map is also called during writeback. We must not put an
> +        * isolated page on the LRU mid-reclaim.
> +        *
> +        * As a result, only update the LRU when the page is mapped for
> +        * writing, i.e. when it's first instantiated.
> +        *
> +        * This is a deviation from the other backends, which perform this update
> +        * in the allocation function (zbud_alloc, z3fold_alloc).
> +        */
> +       if (mm == ZS_MM_WO) {
> +               if (!list_empty(&zspage->lru))
> +                       list_del(&zspage->lru);
> +               list_add(&zspage->lru, &pool->lru);
> +       }
> +#endif
> +
>         /*
>          * migration cannot move any zpages in this zspage. Here, pool->lock
>          * is too heavy since callers would take some time until they calls
> @@ -1988,6 +2031,9 @@ static void async_free_zspage(struct work_struct *work)
>                 VM_BUG_ON(fullness != ZS_EMPTY);
>                 class = pool->size_class[class_idx];
>                 spin_lock(&pool->lock);
> +#ifdef CONFIG_ZPOOL
> +               list_del(&zspage->lru);
> +#endif
>                 __free_zspage(pool, class, zspage);
>                 spin_unlock(&pool->lock);
>         }
> @@ -2299,6 +2345,10 @@ struct zs_pool *zs_create_pool(const char *name)
>          */
>         zs_register_shrinker(pool);
>
> +#ifdef CONFIG_ZPOOL
> +       INIT_LIST_HEAD(&pool->lru);
> +#endif

I think the number of #ifdefs here is becoming absolutely overwhelming.
Not that the zsmalloc code was very readable before, but now it is
starting to look like a plain disaster.
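
Something along these lines (just a sketch, completely untested, helper
names made up by me) would at least keep the #ifdefs confined to one
place:

#ifdef CONFIG_ZPOOL
static void zs_pool_lru_add(struct zs_pool *pool, struct zspage *zspage)
{
        /* move the zspage to (or add it at) the front of the pool's LRU */
        if (!list_empty(&zspage->lru))
                list_del(&zspage->lru);
        list_add(&zspage->lru, &pool->lru);
}

static void zs_pool_lru_del(struct zspage *zspage)
{
        list_del(&zspage->lru);
}
#else
static void zs_pool_lru_add(struct zs_pool *pool, struct zspage *zspage) {}
static void zs_pool_lru_del(struct zspage *zspage) {}
#endif

The struct fields would still need their #ifdefs, but then the call
sites in free_zspage(), zs_map_object() and async_free_zspage() could
call these helpers unconditionally.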

Thanks,
Vitaly

> +
>         return pool;
>
>  err:
> --
> 2.30.2

Thread overview: 21+ messages
2022-11-28 19:16 [PATCH v7 0/6] Implement writeback for zsmalloc Nhat Pham
2022-11-28 19:16 ` [PATCH v7 1/6] zswap: fix writeback lock ordering " Nhat Pham
2022-11-29  3:47   ` Sergey Senozhatsky
2022-11-28 19:16 ` [PATCH v7 2/6] zpool: clean out dead code Nhat Pham
2022-11-28 19:16 ` [PATCH v7 3/6] zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks Nhat Pham
2022-11-29  4:01   ` Sergey Senozhatsky
2022-11-28 19:16 ` [PATCH v7 4/6] zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order Nhat Pham
2022-11-28 19:25   ` Johannes Weiner
2022-11-29  3:50   ` Sergey Senozhatsky
2022-11-29 11:53   ` Vitaly Wool [this message]
2022-11-29 14:03     ` Sergey Senozhatsky
2022-11-29 15:54       ` Johannes Weiner
2022-11-30 15:23         ` Vitaly Wool
2022-11-28 19:16 ` [PATCH v7 5/6] zsmalloc: Add zpool_ops field to zs_pool to store evict handlers Nhat Pham
2022-11-28 19:27   ` Johannes Weiner
2022-11-29  3:53   ` Sergey Senozhatsky
2022-11-28 19:16 ` [PATCH v7 6/6] zsmalloc: Implement writeback mechanism for zsmalloc Nhat Pham
2022-11-28 19:28   ` Johannes Weiner
2022-11-29  3:58   ` Sergey Senozhatsky
2023-01-03  4:57 ` [PATCH v7 0/6] Implement writeback " Thomas Weißschuh
2023-01-06 20:32   ` Nhat Pham
