linux-mm.kvack.org archive mirror
From: "Yin, Fengwei" <fengwei.yin@intel.com>
To: Yu Zhao <yuzhao@google.com>, Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Yang Shi <shy828301@gmail.com>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH v2 1/5] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Date: Tue, 4 Jul 2023 10:13:34 +0800
Message-ID: <9e2fe34e-7615-119c-43b3-31d0b8be3af0@intel.com>
In-Reply-To: <CAOUHufYtW+6Svaq7pcyBiModTSKn1VU-LKxB_Xwnja=f83X2YA@mail.gmail.com>



On 7/4/2023 3:05 AM, Yu Zhao wrote:
> On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> In preparation for FLEXIBLE_THP support, improve
>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>> passed to it. In this case, all contained pages are accounted using the
>> "small" pages scheme.
> 
> Nit: In this case, all *subpages* are accounted using the *order-0
> folio* (or base page) scheme.
Matthew suggested not using "subpage" with folio; use "page" with folio instead:
https://lore.kernel.org/linux-mm/Y9qiS%2FIxZOMx62t6@casper.infradead.org/

> 
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> 
> Reviewed-by: Yu Zhao <yuzhao@google.com>
> 
>>  mm/rmap.c | 26 +++++++++++++++++++-------
>>  1 file changed, 19 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1d8369549424..82ef5ba363d1 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>   * This means the inc-and-test can be bypassed.
>>   * The folio does not have to be locked.
>>   *
>> - * If the folio is large, it is accounted as a THP.  As the folio
>> + * If the folio is pmd-mappable, it is accounted as a THP.  As the folio
>>   * is new, it's assumed to be mapped exclusively by a single process.
>>   */
>>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>                 unsigned long address)
>>  {
>> -       int nr;
>> +       int nr = folio_nr_pages(folio);
>> +       int i;
>> +       struct page *page;
>>
>> -       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>> +                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>         __folio_set_swapbacked(folio);
>>
>> -       if (likely(!folio_test_pmd_mappable(folio))) {
>> +       if (!folio_test_large(folio)) {
>>                 /* increment count (starts at -1) */
>>                 atomic_set(&folio->_mapcount, 0);
>> -               nr = 1;
>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>> +       } else if (!folio_test_pmd_mappable(folio)) {
>> +               /* increment count (starts at 0) */
>> +               atomic_set(&folio->_nr_pages_mapped, nr);
>> +
>> +               page = &folio->page;
>> +               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
>> +                       /* increment count (starts at -1) */
>> +                       atomic_set(&page->_mapcount, 0);
>> +                       __page_set_anon_rmap(folio, page, vma, address, 1);
>> +               }
> 
> Nit: use folio_page(), e.g.,
> 
>   } else if (!folio_test_pmd_mappable(folio)) {
>     int i;
> 
>     for (i = 0; i < nr; i++) {
>       struct page *page = folio_page(folio, i);
> 
>       /* increment count (starts at -1) */
>       atomic_set(&page->_mapcount, 0);
>       __page_set_anon_rmap(folio, page, vma, address + PAGE_SIZE * i, 1);
>     }
>     /* increment count (starts at 0) */
>     atomic_set(&folio->_nr_pages_mapped, nr);
>   } else {
> 
>>         } else {
>>                 /* increment count (starts at -1) */
>>                 atomic_set(&folio->_entire_mapcount, 0);
>>                 atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>> -               nr = folio_nr_pages(folio);
>>                 __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>         }
>>
>>         __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>> -       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>  }
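
A brief aside on the accounting schemes and on Yu's folio_page() suggestion.
This is a paraphrased sketch based on the patch above and on the kernel
headers, not part of the patch itself; the exact macro forms are
config-dependent:

	/*
	 * Sketch only (paraphrased, not a drop-in implementation).
	 *
	 * Accounting cases in the new folio_add_new_anon_rmap():
	 *   1. order-0 folio:         set folio->_mapcount from -1 to 0.
	 *   2. large, !pmd-mappable:  set each page's _mapcount from -1 to 0
	 *                             and _nr_pages_mapped to nr
	 *                             (the "small" pages scheme).
	 *   3. pmd-mappable (THP):    set _entire_mapcount from -1 to 0 and
	 *                             _nr_pages_mapped to COMPOUND_MAPPED.
	 *
	 * Why folio_page(folio, i) rather than "page = &folio->page; page++":
	 * with CONFIG_SPARSEMEM and without CONFIG_SPARSEMEM_VMEMMAP the
	 * struct pages backing a folio are not guaranteed to be virtually
	 * contiguous, so the helper goes through the PFN, roughly:
	 *
	 *   folio_page(folio, n) -> nth_page(&(folio)->page, n)
	 *   nth_page(page, n)    -> pfn_to_page(page_to_pfn(page) + (n))
	 *                           (plain pointer arithmetic with VMEMMAP)
	 */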



Thread overview: 84+ messages
2023-07-03 13:53 [PATCH v2 0/5] variable-order, large folios for anonymous memory Ryan Roberts
2023-07-03 13:53 ` [PATCH v2 1/5] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap() Ryan Roberts
2023-07-03 19:05   ` Yu Zhao
2023-07-04  2:13     ` Yin, Fengwei [this message]
2023-07-04 11:19       ` Ryan Roberts
2023-07-04  2:14   ` Yin, Fengwei
2023-07-03 13:53 ` [PATCH v2 2/5] mm: Allow deferred splitting of arbitrary large anon folios Ryan Roberts
2023-07-07  8:21   ` Huang, Ying
2023-07-07  9:42     ` Ryan Roberts
2023-07-10  5:37       ` Huang, Ying
2023-07-10  8:29         ` Ryan Roberts
2023-07-10  9:01           ` Huang, Ying
2023-07-10  9:39             ` Ryan Roberts
2023-07-11  1:56               ` Huang, Ying
2023-07-03 13:53 ` [PATCH v2 3/5] mm: Default implementation of arch_wants_pte_order() Ryan Roberts
2023-07-03 19:50   ` Yu Zhao
2023-07-04 13:20     ` Ryan Roberts
2023-07-05  2:07       ` Yu Zhao
2023-07-05  9:11         ` Ryan Roberts
2023-07-05 17:24           ` Yu Zhao
2023-07-05 18:01             ` Ryan Roberts
2023-07-06 19:33         ` Matthew Wilcox
2023-07-07 10:00           ` Ryan Roberts
2023-07-04  2:22   ` Yin, Fengwei
2023-07-04  3:02     ` Yu Zhao
2023-07-04  3:59       ` Yu Zhao
2023-07-04  5:22         ` Yin, Fengwei
2023-07-04  5:42           ` Yu Zhao
2023-07-04 12:36         ` Ryan Roberts
2023-07-04 13:23           ` Ryan Roberts
2023-07-05  1:40             ` Yu Zhao
2023-07-05  1:23           ` Yu Zhao
2023-07-05  2:18             ` Yin Fengwei
2023-07-03 13:53 ` [PATCH v2 4/5] mm: FLEXIBLE_THP for improved performance Ryan Roberts
2023-07-03 15:51   ` kernel test robot
2023-07-03 16:01   ` kernel test robot
2023-07-04  1:35   ` Yu Zhao
2023-07-04 14:08     ` Ryan Roberts
2023-07-04 23:47       ` Yu Zhao
2023-07-04  3:45   ` Yin, Fengwei
2023-07-04 14:20     ` Ryan Roberts
2023-07-04 23:35       ` Yin Fengwei
2023-07-04 23:57       ` Matthew Wilcox
2023-07-05  9:54         ` Ryan Roberts
2023-07-05 12:08           ` Matthew Wilcox
2023-07-07  8:01   ` Huang, Ying
2023-07-07  9:52     ` Ryan Roberts
2023-07-07 11:29       ` David Hildenbrand
2023-07-07 13:57         ` Matthew Wilcox
2023-07-07 14:07           ` David Hildenbrand
2023-07-07 15:13             ` Ryan Roberts
2023-07-07 16:06               ` David Hildenbrand
2023-07-07 16:22                 ` Ryan Roberts
2023-07-07 19:06                   ` David Hildenbrand
2023-07-10  8:41                     ` Ryan Roberts
2023-07-10  3:03               ` Huang, Ying
2023-07-10  8:55                 ` Ryan Roberts
2023-07-10  9:18                   ` Huang, Ying
2023-07-10  9:25                     ` Ryan Roberts
2023-07-11  0:48                       ` Huang, Ying
2023-07-10  2:49           ` Huang, Ying
2023-07-03 13:53 ` [PATCH v2 5/5] arm64: mm: Override arch_wants_pte_order() Ryan Roberts
2023-07-03 20:02   ` Yu Zhao
2023-07-04  2:18 ` [PATCH v2 0/5] variable-order, large folios for anonymous memory Yu Zhao
2023-07-04  6:22   ` Yin, Fengwei
2023-07-04  7:11     ` Yu Zhao
2023-07-04 15:36       ` Ryan Roberts
2023-07-04 23:52         ` Yin Fengwei
2023-07-05  0:21           ` Yu Zhao
2023-07-05 10:16             ` Ryan Roberts
2023-07-05 19:00               ` Yu Zhao
2023-07-05 19:38 ` David Hildenbrand
2023-07-06  8:02   ` Ryan Roberts
2023-07-07 11:40     ` David Hildenbrand
2023-07-07 13:12       ` Matthew Wilcox
2023-07-07 13:24         ` David Hildenbrand
2023-07-10 10:07           ` Ryan Roberts
2023-07-10 16:57             ` Matthew Wilcox
2023-07-10 16:53           ` Zi Yan
2023-07-19 15:49             ` Ryan Roberts
2023-07-19 16:05               ` Zi Yan
2023-07-19 18:37                 ` Ryan Roberts
2023-07-11 21:11         ` Luis Chamberlain
2023-07-11 21:59           ` Matthew Wilcox
