From: Ryan Roberts <ryan.roberts@arm.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: Re: [RFC PATCH 01/14] mm: Make folios_put() the basis of release_pages()
Date: Thu, 31 Aug 2023 15:21:53 +0100
Message-ID: <3321aaff-1822-4421-8280-73a9d890a1f6@arm.com>
In-Reply-To: <20230825135918.4164671-2-willy@infradead.org>

On 25/08/2023 14:59, Matthew Wilcox (Oracle) wrote:
> By making release_pages() call folios_put(), we can get rid of the calls
> to compound_head() for the callers that already know they have folios.
> We can also get rid of the lock_batch tracking as we know the size of
> the batch is limited by folio_batch.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/mm.h | 19 ++---------
>  mm/mlock.c         |  3 +-
>  mm/swap.c          | 84 +++++++++++++++++++++++++++-------------------
>  3 files changed, 52 insertions(+), 54 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2c6b54b5506a..7d1d96b75d11 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -36,6 +36,7 @@ struct anon_vma;
>  struct anon_vma_chain;
>  struct user_struct;
>  struct pt_regs;
> +struct folio_batch;
>  
>  extern int sysctl_page_lock_unfairness;
>  
> @@ -1521,23 +1522,7 @@ typedef union {
>  } release_pages_arg __attribute__ ((__transparent_union__));
>  
>  void release_pages(release_pages_arg, int nr);
> -
> -/**
> - * folios_put - Decrement the reference count on an array of folios.
> - * @folios: The folios.
> - * @nr: How many folios there are.
> - *
> - * Like folio_put(), but for an array of folios.  This is more efficient
> - * than writing the loop yourself as it will optimise the locks which
> - * need to be taken if the folios are freed.
> - *
> - * Context: May be called in process or interrupt context, but not in NMI
> - * context.  May be called while holding a spinlock.
> - */
> -static inline void folios_put(struct folio **folios, unsigned int nr)
> -{
> -	release_pages(folios, nr);
> -}
> +void folios_put(struct folio_batch *folios);
>  
>  static inline void put_page(struct page *page)
>  {
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 06bdfab83b58..67bd74a6268a 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -206,8 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
>  
>  	if (lruvec)
>  		unlock_page_lruvec_irq(lruvec);
> -	folios_put(fbatch->folios, folio_batch_count(fbatch));
> -	folio_batch_reinit(fbatch);
> +	folios_put(fbatch);
>  }
>  
>  void mlock_drain_local(void)
> diff --git a/mm/swap.c b/mm/swap.c
> index cd8f0150ba3a..7bdc63b56859 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -89,7 +89,7 @@ static void __page_cache_release(struct folio *folio)
>  		__folio_clear_lru_flags(folio);
>  		unlock_page_lruvec_irqrestore(lruvec, flags);
>  	}
> -	/* See comment on folio_test_mlocked in release_pages() */
> +	/* See comment on folio_test_mlocked in folios_put() */
>  	if (unlikely(folio_test_mlocked(folio))) {
>  		long nr_pages = folio_nr_pages(folio);
>  
> @@ -175,7 +175,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
>  	 * while the LRU lock is held.
>  	 *
>  	 * (That is not true of __page_cache_release(), and not necessarily
> -	 * true of release_pages(): but those only clear the mlocked flag after
> +	 * true of folios_put(): but those only clear the mlocked flag after
>  	 * folio_put_testzero() has excluded any other users of the folio.)
>  	 */
>  	if (folio_evictable(folio)) {
> @@ -221,8 +221,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>  
>  	if (lruvec)
>  		unlock_page_lruvec_irqrestore(lruvec, flags);
> -	folios_put(fbatch->folios, folio_batch_count(fbatch));
> -	folio_batch_reinit(fbatch);
> +	folios_put(fbatch);
>  }
>  
>  static void folio_batch_add_and_move(struct folio_batch *fbatch,
> @@ -946,41 +945,27 @@ void lru_cache_disable(void)
>  }
>  
>  /**
> - * release_pages - batched put_page()
> - * @arg: array of pages to release
> - * @nr: number of pages
> + * folios_put - Decrement the reference count on a batch of folios.
> + * @folios: The folios.
>   *
> - * Decrement the reference count on all the pages in @arg.  If it
> - * fell to zero, remove the page from the LRU and free it.
> + * Like folio_put(), but for a batch of folios.  This is more efficient
> + * than writing the loop yourself as it will optimise the locks which need
> + * to be taken if the folios are freed.  The folios batch is returned
> + * empty and ready to be reused for another batch; there is no need to
> + * reinitialise it.
>   *
> - * Note that the argument can be an array of pages, encoded pages,
> - * or folio pointers. We ignore any encoded bits, and turn any of
> - * them into just a folio that gets free'd.
> + * Context: May be called in process or interrupt context, but not in NMI
> + * context.  May be called while holding a spinlock.
>   */
> -void release_pages(release_pages_arg arg, int nr)
> +void folios_put(struct folio_batch *folios)
>  {
>  	int i;
> -	struct encoded_page **encoded = arg.encoded_pages;
>  	LIST_HEAD(pages_to_free);
>  	struct lruvec *lruvec = NULL;
>  	unsigned long flags = 0;
> -	unsigned int lock_batch;
>  
> -	for (i = 0; i < nr; i++) {
> -		struct folio *folio;
> -
> -		/* Turn any of the argument types into a folio */
> -		folio = page_folio(encoded_page_ptr(encoded[i]));
> -
> -		/*
> -		 * Make sure the IRQ-safe lock-holding time does not get
> -		 * excessive with a continuous string of pages from the
> -		 * same lruvec. The lock is held only if lruvec != NULL.
> -		 */
> -		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {

SWAP_CLUSTER_MAX is 32. By using the folio_batch, I think you are limited to 15
in your batch, so I guess you could be taking/releasing the lock twice as
often? Is there any perf implication?
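To put rough numbers on that (a standalone sketch, not kernel code; the
constants are my reading of include/linux/swap.h and include/linux/pagevec.h
at the time):

#include <stdio.h>

/* Assumed values: SWAP_CLUSTER_MAX from include/linux/swap.h,
 * PAGEVEC_SIZE (the folio_batch capacity) from include/linux/pagevec.h.
 */
#define SWAP_CLUSTER_MAX	32
#define PAGEVEC_SIZE		15

int main(void)
{
	int nr = 512;	/* hypothetical run of folios on a single lruvec */

	/* Old scheme: lock dropped every SWAP_CLUSTER_MAX folios. */
	printf("old: %d lock acquisitions\n",
	       (nr + SWAP_CLUSTER_MAX - 1) / SWAP_CLUSTER_MAX);

	/* New scheme: lock dropped at the end of each 15-folio batch. */
	printf("new: %d lock acquisitions\n",
	       (nr + PAGEVEC_SIZE - 1) / PAGEVEC_SIZE);
	return 0;
}

which works out to 16 vs 35 acquisitions for the same 512 folios, i.e.
roughly the 2x I was worried about.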

> -			unlock_page_lruvec_irqrestore(lruvec, flags);
> -			lruvec = NULL;
> -		}
> +	for (i = 0; i < folios->nr; i++) {
> +		struct folio *folio = folios->folios[i];
>  
>  		if (is_huge_zero_page(&folio->page))
>  			continue;
> @@ -1010,13 +995,8 @@ void release_pages(release_pages_arg arg, int nr)
>  		}
>  
>  		if (folio_test_lru(folio)) {
> -			struct lruvec *prev_lruvec = lruvec;
> -
>  			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
>  									&flags);
> -			if (prev_lruvec != lruvec)
> -				lock_batch = 0;
> -
>  			lruvec_del_folio(lruvec, folio);
>  			__folio_clear_lru_flags(folio);
>  		}
> @@ -1040,6 +1020,40 @@ void release_pages(release_pages_arg arg, int nr)
>  
>  	mem_cgroup_uncharge_list(&pages_to_free);
>  	free_unref_page_list(&pages_to_free);
> +	folios->nr = 0;

folio_batch_reinit(folios) ?
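(For reference, as far as I can see folio_batch_reinit() is just the
accessor for exactly this; paraphrasing include/linux/pagevec.h:

static inline void folio_batch_reinit(struct folio_batch *fbatch)
{
	fbatch->nr = 0;
}

so this is purely an encapsulation/readability point, not a behaviour
change.)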

> +}
> +EXPORT_SYMBOL(folios_put);
> +
> +/**
> + * release_pages - batched put_page()
> + * @arg: array of pages to release
> + * @nr: number of pages
> + *
> + * Decrement the reference count on all the pages in @arg.  If it
> + * fell to zero, remove the page from the LRU and free it.
> + *
> + * Note that the argument can be an array of pages, encoded pages,
> + * or folio pointers. We ignore any encoded bits, and turn any of
> + * them into just a folio that gets free'd.
> + */
> +void release_pages(release_pages_arg arg, int nr)
> +{
> +	struct folio_batch fbatch;
> +	struct encoded_page **encoded = arg.encoded_pages;
> +	int i;
> +
> +	folio_batch_init(&fbatch);
> +	for (i = 0; i < nr; i++) {
> +		/* Turn any of the argument types into a folio */
> +		struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
> +
> +		if (folio_batch_add(&fbatch, folio) > 0)
> +			continue;
> +		folios_put(&fbatch);
> +	}
> +
> +	if (fbatch.nr)

if (folio_batch_count(&fbatch)) ?
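(Same point as above: folio_batch_count() should be equivalent, something
like:

static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
{
	return fbatch->nr;
}

i.e. just keeping the field access behind the helper.)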

> +		folios_put(&fbatch);
>  }
>  EXPORT_SYMBOL(release_pages);
>  

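As an aside, for anyone following along, the consumer-side pattern with the
new folios_put() then looks like this (my sketch, mirroring what
release_pages() does above; for_each_folio_somehow() is a hypothetical
stand-in for whatever iteration the caller has):

	struct folio_batch fbatch;

	folio_batch_init(&fbatch);
	for_each_folio_somehow(folio) {
		if (folio_batch_add(&fbatch, folio) > 0)
			continue;	/* space remains; keep batching */
		folios_put(&fbatch);	/* full: put the batch, which empties it */
	}
	if (folio_batch_count(&fbatch))
		folios_put(&fbatch);	/* put any leftover partial batch */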

