From: haoxin <xhao@linux.alibaba.com>
To: Huang Ying <ying.huang@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Zi Yan <ziy@nvidia.com>, Yang Shi <shy828301@gmail.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>,
	Matthew Wilcox <willy@infradead.org>,
	Bharata B Rao <bharata@amd.com>,
	Alistair Popple <apopple@nvidia.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: Re: [PATCH -v4 8/9] migrate_pages: batch flushing TLB
Date: Wed, 8 Feb 2023 01:44:04 +0800
Message-ID: <8464725c-40ce-3e3f-baaf-5c2c136cc870@linux.alibaba.com>
In-Reply-To: <20230206063313.635011-9-ying.huang@intel.com>


On 2023/2/6 at 2:33 PM, Huang Ying wrote:
> TLB flushing can cost quite a few CPU cycles during folio migration in
> some situations, for example, when migrating a folio of a process with
> multiple active threads that run on multiple CPUs.  After batching the
> _unmap and _move stages in migrate_pages(), the TLB flushing can easily
> be batched with the existing TLB flush batching mechanism.  This patch
> implements that.
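
To make the idea concrete for readers who have not looked at the batched
TLB flush code before, here is a minimal user-space sketch of the pattern
(not kernel code; flush(), unmap_one() and NPAGES are made-up names used
only for this illustration, standing in for the IPI-based TLB flush, the
per-PTE unmap step, and the batch size):

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 512

static int nr_flushes;
static bool flush_pending;

static void flush(void)
{
	/* stands in for the expensive IPI-based TLB flush */
	nr_flushes++;
}

static void unmap_one(int page, bool batch)
{
	/* clear the "PTE" for this page ... */
	if (batch)
		flush_pending = true;	/* analogous to set_tlb_ubc_flush_pending() */
	else
		flush();		/* analogous to ptep_clear_flush() */
}

int main(void)
{
	int i;

	/* old behaviour: one flush per page */
	for (i = 0; i < NPAGES; i++)
		unmap_one(i, false);
	printf("per-page flushing: %d flushes\n", nr_flushes);

	/* new behaviour: defer, then flush once before moving the batch */
	nr_flushes = 0;
	for (i = 0; i < NPAGES; i++)
		unmap_one(i, true);
	if (flush_pending)
		flush();		/* analogous to try_to_unmap_flush() at move: */
	printf("batched flushing:  %d flushes\n", nr_flushes);

	return 0;
}

The batched variant issues one flush for the whole set of folios instead
of one per PTE cleared, which is, as I understand it, where the IPI
reduction reported below comes from.
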
>
> We use the following test case to test the patch.
>
> On a 2-socket Intel server,
>
> - Run pmbench memory accessing benchmark
>
> - Run `migratepages` to migrate pages of pmbench between node 0 and
>    node 1 back and forth.
>
> With the patch, the number of TLB flushing IPIs is reduced by 99.1%
> during the test, and the number of pages migrated successfully per
> second increases by 291.7%.
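
(The exact `migratepages` invocation for the step above is not shown; for
reference, migratepages(8) from the numactl tools takes a pid plus source
and destination node lists, so the back-and-forth migration could look
something like `migratepages $(pidof pmbench) 0 1` followed by
`migratepages $(pidof pmbench) 1 0` in a loop.  The pmbench parameters
used for these numbers are likewise not given, so treat this only as an
illustration.)
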
>
> NOTE: TLB flushing is batched only for normal folios, not for THP
> folios, because the overhead of TLB flushing for THP folios is much
> lower than that for normal folios (about 1/512 on the x86 platform).
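
(The 1/512 figure presumably reflects the page-size ratio: on x86 a 2MB
THP spans 2MB / 4KB = 512 base pages but is mapped by a single PMD entry,
so migrating it needs only one TLB flush, whereas migrating the same
amount of memory as normal folios needs up to 512 separate flushes.)
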
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Bharata B Rao <bharata@amd.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: haoxin <xhao@linux.alibaba.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>   mm/migrate.c |  4 +++-
>   mm/rmap.c    | 20 +++++++++++++++++---
>   2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9378fa2ad4a5..ca6e2ff02a09 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1230,7 +1230,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
>   		/* Establish migration ptes */
>   		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
>   			       !folio_test_ksm(src) && !anon_vma, src);
> -		try_to_migrate(src, 0);
> +		try_to_migrate(src, TTU_BATCH_FLUSH);
>   		page_was_mapped = 1;
>   	}
>   
> @@ -1781,6 +1781,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>   	stats->nr_thp_failed += thp_retry;
>   	stats->nr_failed_pages += nr_retry_pages;
>   move:
> +	try_to_unmap_flush();
> +
>   	retry = 1;
>   	for (pass = 0;
>   	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b616870a09be..2e125f3e462e 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1976,7 +1976,21 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>   		} else {
>   			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
>   			/* Nuke the page table entry. */
> -			pteval = ptep_clear_flush(vma, address, pvmw.pte);
> +			if (should_defer_flush(mm, flags)) {
> +				/*
> +				 * We clear the PTE but do not flush so potentially
> +				 * a remote CPU could still be writing to the folio.
> +				 * If the entry was previously clean then the
> +				 * architecture must guarantee that a clear->dirty
> +				 * transition on a cached TLB entry is written through
> +				 * and traps if the PTE is unmapped.
> +				 */
> +				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
Nice work.

Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
> +
> +				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> +			} else {
> +				pteval = ptep_clear_flush(vma, address, pvmw.pte);
> +			}
>   		}
>   
>   		/* Set the dirty flag on the folio now the pte is gone. */
> @@ -2148,10 +2162,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
>   
>   	/*
>   	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
> -	 * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
> +	 * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
>   	 */
>   	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> -					TTU_SYNC)))
> +					TTU_SYNC | TTU_BATCH_FLUSH)))
>   		return;
>   
>   	if (folio_is_zone_device(folio) &&


Thread overview: 33+ messages
2023-02-06  6:33 [PATCH -v4 0/9] migrate_pages(): batch TLB flushing Huang Ying
2023-02-06  6:33 ` [PATCH -v4 1/9] migrate_pages: organize stats with struct migrate_pages_stats Huang Ying
2023-02-07 16:28   ` haoxin
2023-02-06  6:33 ` [PATCH -v4 2/9] migrate_pages: separate hugetlb folios migration Huang Ying
2023-02-07 16:42   ` haoxin
2023-02-08 11:35     ` Huang, Ying
2023-02-06  6:33 ` [PATCH -v4 3/9] migrate_pages: restrict number of pages to migrate in batch Huang Ying
2023-02-07 17:01   ` haoxin
2023-02-06  6:33 ` [PATCH -v4 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2023-02-07 17:11   ` haoxin
2023-02-07 17:27     ` haoxin
2023-02-06  6:33 ` [PATCH -v4 5/9] migrate_pages: batch _unmap and _move Huang Ying
2023-02-06 16:10   ` Zi Yan
2023-02-07  5:58     ` Huang, Ying
2023-02-13  6:55     ` Huang, Ying
2023-02-07 17:33   ` haoxin
2023-02-06  6:33 ` [PATCH -v4 6/9] migrate_pages: move migrate_folio_unmap() Huang Ying
2023-02-07 14:40   ` Zi Yan
2023-02-06  6:33 ` [PATCH -v4 7/9] migrate_pages: share more code between _unmap and _move Huang Ying
2023-02-07 14:50   ` Zi Yan
2023-02-08 12:02     ` Huang, Ying
2023-02-08 19:47       ` Zi Yan
2023-02-10  7:09         ` Huang, Ying
2023-02-06  6:33 ` [PATCH -v4 8/9] migrate_pages: batch flushing TLB Huang Ying
2023-02-07 14:52   ` Zi Yan
2023-02-08 11:27     ` Huang, Ying
2023-02-07 17:44   ` haoxin [this message]
2023-02-06  6:33 ` [PATCH -v4 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code Huang Ying
2023-02-07 14:53   ` Zi Yan
2023-02-08  6:21 ` [PATCH -v4 0/9] migrate_pages(): batch TLB flushing haoxin
2023-02-08  6:27   ` haoxin
2023-02-08 11:04     ` Jonathan Cameron
2023-02-08 11:25   ` Huang, Ying
