From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	 Matthew Wilcox <willy@infradead.org>,
	Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
	 Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
	Michal Hocko <mhocko@suse.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	Chris Li <chrisl@kernel.org>,  Lance Yang <ioworker0@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 Barry Song <v-songbaohua@oppo.com>
Subject: Re: [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list()
Date: Thu, 28 Mar 2024 21:18:41 +1300	[thread overview]
Message-ID: <CAGsJ_4x40DxoukgRuEt3OKP7dESj3w+HXz=dHYR+PH8LjtCnEA@mail.gmail.com> (raw)
In-Reply-To: <20240327144537.4165578-6-ryan.roberts@arm.com>

On Thu, Mar 28, 2024 at 3:45 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Now that swap supports storing all mTHP sizes, avoid splitting large
> folios before swap-out. This benefits performance of the swap-out path
> by eliding split_folio_to_list(), which is expensive, and also sets us
> up for swapping in large folios in a future series.
>
> If the folio is partially mapped, we continue to split it since we want
> to avoid the extra IO overhead and storage of writing out pages
> unnecessarily.
>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  mm/vmscan.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 00adaf1cb2c3..293120fe54f3 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1223,11 +1223,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>                                         if (!can_split_folio(folio, NULL))
>                                                 goto activate_locked;
>                                         /*
> -                                        * Split folios without a PMD map right
> -                                        * away. Chances are some or all of the
> -                                        * tail pages can be freed without IO.
> +                                        * Split partially mapped folios right
> +                                        * away. We can free the unmapped pages
> +                                        * without IO.
>                                          */
> -                                       if (!folio_entire_mapcount(folio) &&
> +                                       if (data_race(!list_empty(
> +                                               &folio->_deferred_list)) &&
>                                             split_folio_to_list(folio,
>                                                                 folio_list))
>                                                 goto activate_locked;

Hi Ryan,

Sorry for bringing up another minor issue at this late stage.

While debugging v2 of the THP counter patch, I noticed a discrepancy
between THP_SWPOUT_FALLBACK and THP_SWPOUT: with this change, the
fallback counter is bumped for large folios of any order, while
THP_SWPOUT still only counts PMD-sized folios.

Should we adjust the counter as follows?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 293120fe54f3..d7856603f689 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1241,8 +1241,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 								folio_list))
 						goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
-					count_vm_event(THP_SWPOUT_FALLBACK);
+					if (folio_test_pmd_mappable(folio)) {
+						count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
+						count_vm_event(THP_SWPOUT_FALLBACK);
+					}
 #endif
 					if (!add_to_swap(folio))
 						goto activate_locked_split;


This is because THP_SWPOUT is only counted for PMD-mappable folios:

static inline void count_swpout_vm_event(struct folio *folio)
{
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
        if (unlikely(folio_test_pmd_mappable(folio))) {
                count_memcg_folio_events(folio, THP_SWPOUT, 1);
                count_vm_event(THP_SWPOUT);
        }
#endif
        count_vm_events(PSWPOUT, folio_nr_pages(folio));
}

I can provide per-order counters for this in my THP counter patch.

> --
> 2.25.1
>

Thanks
Barry


Thread overview: 35+ messages
2024-03-27 14:45 [PATCH v5 0/6] Swap-out mTHP without splitting Ryan Roberts
2024-03-27 14:45 ` [PATCH v5 1/6] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags Ryan Roberts
2024-03-29  1:56   ` Huang, Ying
2024-04-05  9:22   ` David Hildenbrand
2024-03-27 14:45 ` [PATCH v5 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache() Ryan Roberts
2024-04-01  5:52   ` Huang, Ying
2024-04-02 11:15     ` Ryan Roberts
2024-04-03  3:57       ` Huang, Ying
2024-04-03  7:16         ` Ryan Roberts
2024-04-03  0:30   ` Zi Yan
2024-04-03  0:47     ` Lance Yang
2024-04-03  7:21     ` Ryan Roberts
2024-04-05  9:24       ` David Hildenbrand
2024-03-27 14:45 ` [PATCH v5 3/6] mm: swap: Simplify struct percpu_cluster Ryan Roberts
2024-03-27 14:45 ` [PATCH v5 4/6] mm: swap: Allow storage of all mTHP orders Ryan Roberts
2024-04-01  3:15   ` Huang, Ying
2024-04-02 11:18     ` Ryan Roberts
2024-04-03  3:07       ` Huang, Ying
2024-04-03  7:48         ` Ryan Roberts
2024-03-27 14:45 ` [PATCH v5 5/6] mm: vmscan: Avoid split during shrink_folio_list() Ryan Roberts
2024-03-28  8:18   ` Barry Song [this message]
2024-03-28  8:48     ` Ryan Roberts
2024-04-02 13:10     ` Ryan Roberts
2024-04-02 13:22       ` Lance Yang
2024-04-02 13:22       ` Ryan Roberts
2024-04-02 22:54         ` Barry Song
2024-04-05  4:06       ` Barry Song
2024-04-05  7:28         ` Ryan Roberts
2024-03-27 14:45 ` [PATCH v5 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD Ryan Roberts
2024-04-01 12:25   ` Lance Yang
2024-04-02 11:20     ` Ryan Roberts
2024-04-02 11:30       ` Lance Yang
2024-04-02 10:16   ` Barry Song
2024-04-02 10:56     ` Ryan Roberts
2024-04-02 11:01       ` Ryan Roberts
