From: Yin Fengwei <fengwei.yin@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Yu Zhao <yuzhao@google.com>
Cc: <linux-mm@kvack.org>, <linux-arm-kernel@lists.infradead.org>
Subject: Re: [RFC PATCH 4/6] mm: Implement folio_add_new_anon_rmap_range()
Date: Wed, 22 Mar 2023 15:10:42 +0800	[thread overview]
Message-ID: <2cab90ef-96e1-98ed-51c4-ce744d2e0ca0@intel.com> (raw)
In-Reply-To: <20230317105802.2634004-5-ryan.roberts@arm.com>

On 3/17/23 18:58, Ryan Roberts wrote:
> Like folio_add_new_anon_rmap() but batch-rmaps all the pages belonging
> to a folio, for efficiency savings.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>   include/linux/rmap.h |  2 ++
>   mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 45 insertions(+)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b87d01660412..d1d731650ce8 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>   		unsigned long address);
>   void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>   		unsigned long address);
> +void folio_add_new_anon_rmap_range(struct folio *folio,
> +		struct vm_area_struct *vma, unsigned long address);
>   void page_add_file_rmap(struct page *, struct vm_area_struct *,
>   		bool compound);
>   void page_remove_rmap(struct page *, struct vm_area_struct *,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 8632e02661ac..05a0c0a700e7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1302,6 +1302,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>   	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>   }
> 
> +/**
> + * folio_add_new_anon_rmap_range - Add mapping to a new anonymous potentially
> + * large but definitely non-THP folio.
> + * @folio:      The folio to add the mapping to.
> + * @vma:        the vm area in which the mapping is added
> + * @address:    the user virtual address of the first page in the folio
> + *
> + * Like folio_add_new_anon_rmap() but must only be called for new *non-THP*
> + * folios. Like folio_add_new_anon_rmap(), the inc-and-test is bypassed and the
> + * folio does not have to be locked. All pages in the folio are individually
> + * accounted.
> + *
> + * As the folio is new, it's assumed to be mapped exclusively by a single
> + * process.
> + */
> +void folio_add_new_anon_rmap_range(struct folio *folio,
> +			struct vm_area_struct *vma, unsigned long address)
> +{
> +	int i;
> +	int nr = folio_nr_pages(folio);
> +	struct page *page = &folio->page;
> +
> +	VM_BUG_ON_VMA(address < vma->vm_start ||
> +		      address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
> +	__folio_set_swapbacked(folio);
> +
> +	if (folio_test_large(folio)) {
> +		/* increment count (starts at 0) */
> +		atomic_set(&folio->_nr_pages_mapped, nr);
> +	}
> +
> +	for (i = 0; i < nr; i++) {
> +		/* increment count (starts at -1) */
> +		atomic_set(&page->_mapcount, 0);
> +		__page_set_anon_rmap(folio, page, vma, address, 1);
My bad. You do call __page_set_anon_rmap() here.

Regards
Yin, Fengwei

> +		page++;
> +		address += PAGE_SIZE;
> +	}
> +
> +	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
> +
> +}
> +
>   /**
>    * page_add_file_rmap - add pte mapping to a file page
>    * @page:	the page to add the mapping to
> --
> 2.25.1
> 
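
FWIW, here is a minimal caller sketch of how I read the intended usage
(illustrative only, not code from this series; the helper name is made up).
It assumes the folio is freshly allocated, zeroed and charged, "addr" is the
folio-aligned user address, and "pte" points at the first of
folio_nr_pages() consecutive PTEs with the page table lock held. The rmap,
LRU and counter updates happen once per folio, then each PTE is written
individually:

static void sketch_map_new_anon_folio(struct vm_area_struct *vma,
		struct folio *folio, unsigned long addr, pte_t *pte)
{
	int i;
	int nr = folio_nr_pages(folio);

	/* Batched rmap setup from this patch; the folio need not be locked. */
	folio_add_new_anon_rmap_range(folio, vma, addr);
	folio_add_lru_vma(folio, vma);
	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);

	/* Write one PTE per subpage of the folio. */
	for (i = 0; i < nr; i++) {
		pte_t entry = mk_pte(folio_page(folio, i), vma->vm_page_prot);

		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
		set_pte_at(vma->vm_mm, addr, pte, entry);

		addr += PAGE_SIZE;
		pte++;
	}
}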




