From: Andrew Morton <akpm@linux-foundation.org>
To: Huang Ying <ying.huang@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Zi Yan <ziy@nvidia.com>, Yang Shi <shy828301@gmail.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>,
	Matthew Wilcox <willy@infradead.org>,
	Bharata B Rao <bharata@amd.com>,
	Alistair Popple <apopple@nvidia.com>,
	haoxin <xhao@linux.alibaba.com>
Subject: Re: [PATCH 5/8] migrate_pages: batch _unmap and _move
Date: Wed, 28 Dec 2022 15:22:06 -0800
Message-ID: <20221228152206.52435aef4072c4cad9087541@linux-foundation.org>
In-Reply-To: <20221227002859.27740-6-ying.huang@intel.com>

On Tue, 27 Dec 2022 08:28:56 +0800 Huang Ying <ying.huang@intel.com> wrote:

> In this patch, the _unmap and _move stages of the folio migration are
> batched.  Previously, the flow was,
> 
>   for each folio
>     _unmap()
>     _move()
> 
> Now, it is,
> 
>   for each folio
>     _unmap()
>   for each folio
>     _move()
> 
> Based on this, we can batch the TLB flushing and use a hardware
> accelerator to copy folios between the batched _unmap and batched
> _move stages.
> 
> ...
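
[Editorial note: a minimal sketch of the batched control flow described in
the changelog above.  The list handling and the per-folio helper names with
a "_one" suffix are illustrative assumptions, not code from the patch.]

	/* Sketch only: two passes over the same batch of folios. */
	struct list_head *from;	/* caller-supplied list of folios to migrate (assumed) */
	LIST_HEAD(unmapped);
	struct folio *folio, *next;

	/* Pass 1: unmap each folio; keep the successfully unmapped ones. */
	list_for_each_entry_safe(folio, next, from, lru) {
		if (!migrate_folio_unmap_one(folio))	/* hypothetical helper */
			list_move_tail(&folio->lru, &unmapped);
	}

	/*
	 * A single TLB flush (and, potentially, a hardware-accelerated
	 * batch copy of folio contents) can be issued here.
	 */

	/* Pass 2: move (copy and remap) each unmapped folio. */
	list_for_each_entry_safe(folio, next, &unmapped, lru)
		migrate_folio_move_one(folio);		/* hypothetical helper */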
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1027,8 +1027,32 @@ static void __migrate_folio_extract(struct folio *dst,
>  	dst->private = NULL;
>  }
>  
> +static void migrate_folio_undo_src(struct folio *src,
> +				   int page_was_mapped,
> +				   struct anon_vma *anon_vma,
> +				   struct list_head *ret)
> +{
> +	if (page_was_mapped)
> +		remove_migration_ptes(src, src, false);
> +	if (anon_vma)
> +		put_anon_vma(anon_vma);
> +	folio_unlock(src);
> +	list_move_tail(&src->lru, ret);
> +}
> +
> +static void migrate_folio_undo_dst(struct folio *dst,
> +				   free_page_t put_new_page,
> +				   unsigned long private)
> +{
> +	folio_unlock(dst);
> +	if (put_new_page)
> +		put_new_page(&dst->page, private);
> +	else
> +		folio_put(dst);
> +}

What do the above do?  Are they so obvious that no comments are needed?
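
[Editorial reading of the two helpers above, offered as interpretation of
the hunk rather than as documentation from the patch:]

	/*
	 * migrate_folio_undo_src() appears to roll back the source folio
	 * after a failed or aborted unmap: restore the migration PTEs if
	 * the folio had been mapped, drop the anon_vma reference, unlock
	 * the folio, and move it to the "ret" list returned to the caller.
	 *
	 * migrate_folio_undo_dst() appears to release the destination
	 * folio: unlock it, then either hand it back through the
	 * caller-supplied put_new_page() callback or drop the reference
	 * with folio_put().
	 */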


>  static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
> -				int force, enum migrate_mode mode)
> +				 int force, bool force_lock, enum migrate_mode mode)
>  {
>  	int rc = -EAGAIN;
>  	int page_was_mapped = 0;
> @@ -1055,6 +1079,11 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
>  		if (current->flags & PF_MEMALLOC)
>  			goto out;
>  
> +		if (!force_lock) {
> +			rc = -EDEADLOCK;
> +			goto out;
> +		}

Please document the use of EDEADLOCK in this code.  What does it signify?
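
[Editorial note, stated as an assumption about the series rather than as
its documentation: when migrating a batch, several folios may be locked at
once, so sleeping in folio_lock() for the next folio risks an ABBA-style
deadlock against another task that holds that lock and waits on one of
ours.  -EDEADLOCK appears to tell the caller to back out instead of
sleeping.  A hypothetical caller-side pattern:]

	rc = __migrate_folio_unmap(src, dst, force, false /* force_lock */, mode);
	if (rc == -EDEADLOCK) {
		/*
		 * The folio lock could not be taken without sleeping while
		 * other folios in the batch are already locked.  Undo the
		 * folios unmapped so far and retry this folio later with
		 * force_lock == true, i.e. as the first folio of a batch.
		 */
	}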

>  		folio_lock(src);
>  	}
>  



Thread overview: 39+ messages
2022-12-27  0:28 [PATCH 0/8] migrate_pages(): batch TLB flushing Huang Ying
2022-12-27  0:28 ` [PATCH 1/8] migrate_pages: organize stats with struct migrate_pages_stats Huang Ying
2023-01-03 18:06   ` Zi Yan
2023-01-05  3:02   ` Alistair Popple
2023-01-05  5:53     ` Huang, Ying
2023-01-05  6:50       ` Alistair Popple
2023-01-05  7:06         ` Huang, Ying
2022-12-27  0:28 ` [PATCH 2/8] migrate_pages: separate hugetlb folios migration Huang Ying
2022-12-28 23:17   ` Andrew Morton
2023-01-02 23:53     ` Huang, Ying
2023-01-05  4:13   ` Alistair Popple
2023-01-05  5:51     ` Huang, Ying
2023-01-05  6:43       ` Alistair Popple
2023-01-05  7:31         ` Huang, Ying
2023-01-05  7:39           ` Alistair Popple
2023-01-09  7:23             ` Huang, Ying
2023-01-10  1:37               ` Alistair Popple
2022-12-27  0:28 ` [PATCH 3/8] migrate_pages: restrict number of pages to migrate in batch Huang Ying
2023-01-03 18:40   ` Zi Yan
2023-01-04  0:24     ` Huang, Ying
2022-12-27  0:28 ` [PATCH 4/8] migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2023-01-03 18:55   ` Zi Yan
2023-01-05 18:26   ` Nathan Chancellor
2023-01-05 18:57     ` Kees Cook
2023-01-08 23:33       ` Huang, Ying
2022-12-27  0:28 ` [PATCH 5/8] migrate_pages: batch _unmap and _move Huang Ying
2022-12-28 23:22   ` Andrew Morton [this message]
2023-01-02 23:29     ` Huang, Ying
2023-01-03 19:01   ` Zi Yan
2023-01-04  0:34     ` Huang, Ying
2022-12-27  0:28 ` [PATCH 6/8] migrate_pages: move migrate_folio_done() and migrate_folio_unmap() Huang Ying
2023-01-03 19:02   ` Zi Yan
2023-01-04  1:26     ` Huang, Ying
2022-12-27  0:28 ` [PATCH 7/8] migrate_pages: share more code between _unmap and _move Huang Ying
2023-01-04  7:12   ` Alistair Popple
2023-01-06  4:15     ` Huang, Ying
2022-12-27  0:28 ` [PATCH 8/8] migrate_pages: batch flushing TLB Huang Ying
2023-01-03 19:19   ` Zi Yan
2023-01-04  1:41     ` Huang, Ying
