From: Zi Yan <ziy@nvidia.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: <linux-mm@kvack.org>, Roman Gushchin <guro@fb.com>,
	Rik van Riel <riel@surriel.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Matthew Wilcox <willy@infradead.org>,
	Shakeel Butt <shakeelb@google.com>,
	Yang Shi <yang.shi@linux.alibaba.com>,
	David Nellans <dnellans@nvidia.com>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 01/16] mm: add pagechain container for storing multiple pages.
Date: Wed, 9 Sep 2020 10:15:05 -0400
Message-ID: <1970690D-DA0B-4C46-BE0C-66F33D881653@nvidia.com>
In-Reply-To: <20200909134622.xkuzx5nf4xq2vudh@box>

On 9 Sep 2020, at 9:46, Kirill A. Shutemov wrote:

> On Mon, Sep 07, 2020 at 11:11:05AM -0400, Zi Yan wrote:
>> On 7 Sep 2020, at 8:22, Kirill A. Shutemov wrote:
>>
>>> On Wed, Sep 02, 2020 at 02:06:13PM -0400, Zi Yan wrote:
>>>> From: Zi Yan <ziy@nvidia.com>
>>>>
>>>> When depositing page table pages for 1GB THPs, we need 512 PTE pages +
>>>> 1 PMD page. Instead of counting and depositing 513 pages, we can use the
>>>> PMD page as a leader page and chain the remaining 512 PTE pages via ->lru.
>>>> This, however, prevents us from depositing PMD pages via ->lru, which is
>>>> currently how PTE pages are deposited for 2MB THPs. So add a new
>>>> pagechain container for PMD pages.
>>>>
>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>
>>> Just deposit it to a linked list in the mm_struct, as we do for PMD page
>>> tables when split ptl is disabled.
>>>
>>
>> Thank you for checking the patches. Since we don’t have a PUD split lock
>> yet, I store the PMD page table pages in a newly added linked list head
>> in mm_struct, as you suggested above.
>>
>> I was too vague about my pagechain design for depositing page table pages
>> for PUD THPs. Sorry about the confusion. Let me clarify why I am using
>> this pagechain here. I am sure there are other possible designs, and
>> I am happy to change my code.
>>
>> In my design, I did not store all page table pages in a single list.
>> I first deposit the 512 PTE pages in one PMD page table page’s pmd_huge_pte
>> using pgtable_trans_huge_deposit(), then deposit that PMD page onto
>> a newly added linked list in mm_struct. Since pmd_huge_pte shares space
>> with half of ->lru in struct page, we cannot use ->lru to link all PMD
>> pages together; hence the pagechain. This way we also avoid two things
>> (a rough sketch of the deposit path follows the two points below):
>>
>> 1. When we withdraw the PMD page during a PUD THP split, we do not need
>> to withdraw 513 pages, set up one PMD page, and then deposit 512 PTE pages
>> into that PMD page.
>>
>> 2. We do not mix PMD page table pages and PTE page table pages in a single
>> list, since they are initialized in different ways. Otherwise, we would need
>> to maintain a subtle invariant in a single page table page list: in every
>> 513 pages, the first is a PMD page table page and the rest are PTE page
>> table pages.
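>>
>> To make this concrete, here is a rough, untested sketch of the two-level
>> deposit path; deposit_pud_pgtables() and pagechain_deposit() are
>> stand-in names for illustration, not the code in this patch:
>>
>> 	static void deposit_pud_pgtables(struct mm_struct *mm, pmd_t *pmdp,
>> 					 pgtable_t pte_pages[], int nr)
>> 	{
>> 		int i;
>>
>> 		/* Chain the 512 PTE pages off the PMD page's pmd_huge_pte. */
>> 		for (i = 0; i < nr; i++)
>> 			pgtable_trans_huge_deposit(mm, pmdp, pte_pages[i]);
>>
>> 		/*
>> 		 * Park the PMD page itself in an mm-level container.
>> 		 * page->lru overlaps pmd_huge_pte in struct page, so a
>> 		 * list_head based list cannot be used for the PMD page;
>> 		 * pagechain_deposit() stands in for the RFC's container API.
>> 		 */
>> 		pagechain_deposit(mm, virt_to_page(pmdp));
>> 	}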
>>
>> As I am typing, I also realize that my current design does not work
>> when the PMD split lock is disabled, so I will fix it by storing PMD
>> pages and PTE pages in two separate lists in mm_struct.
>>
>>
>> Any comments?
>
> Okay, fair enough.
>
> Although, I think you can get away without a new data structure. We don't
> need a doubly linked list to deposit page tables. You can rework the PTE
> table deposit code to use a singly linked list built on one pointer of
> ->lru (with a proper name) and make the PMD table deposit use the other
> pointer. This way you avoid the conflict over ->lru.
>
> Does it make sense?
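>
> A rough sketch of that idea (the union member names here are made up
> for illustration, not existing kernel fields):
>
> 	struct page {
> 		union {
> 			struct list_head lru;	/* two pointer-sized words */
> 			struct {
> 				struct page *pte_deposit_next;
> 				struct page *pmd_deposit_next;
> 			};
> 		};
> 		/* ... */
> 	};
>
> PTE tables would be chained through pte_deposit_next and PMD tables
> through pmd_deposit_next, so the two deposit lists never touch the
> same word in struct page.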

Yes. Thanks. Will do this in the next version. I think the singly linked
list from llist.h can be used.
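
For instance, something along these lines could work (untested; the
deposit_node field on struct page and mm->deposited_pmd_tables are
assumed here for illustration, not existing fields):

	#include <linux/llist.h>

	/*
	 * Deposit a PMD page table page. llist_del_first() needs
	 * serialization against other consumers, so the caller would
	 * hold mm->page_table_lock around both helpers.
	 */
	static void deposit_pmd_table(struct mm_struct *mm, struct page *pmd_page)
	{
		llist_add(&pmd_page->deposit_node, &mm->deposited_pmd_tables);
	}

	/* Withdraw the most recently deposited PMD page table page. */
	static struct page *withdraw_pmd_table(struct mm_struct *mm)
	{
		struct llist_node *node;

		node = llist_del_first(&mm->deposited_pmd_tables);
		return node ? llist_entry(node, struct page, deposit_node) : NULL;
	}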

—
Best Regards,
Yan Zi


Thread overview: 82+ messages
2020-09-02 18:06 [RFC PATCH 00/16] 1GB THP support on x86_64 Zi Yan
2020-09-02 18:06 ` [RFC PATCH 01/16] mm: add pagechain container for storing multiple pages Zi Yan
2020-09-02 20:29   ` Randy Dunlap
2020-09-02 20:48     ` Zi Yan
2020-09-03  3:15   ` Matthew Wilcox
2020-09-07 12:22   ` Kirill A. Shutemov
2020-09-07 15:11     ` Zi Yan
2020-09-09 13:46       ` Kirill A. Shutemov
2020-09-09 14:15         ` Zi Yan [this message]
2020-09-02 18:06 ` [RFC PATCH 02/16] mm: thp: 1GB anonymous page implementation Zi Yan
2020-09-02 18:06 ` [RFC PATCH 03/16] mm: proc: add 1GB THP kpageflag Zi Yan
2020-09-09 13:46   ` Kirill A. Shutemov
2020-09-02 18:06 ` [RFC PATCH 04/16] mm: thp: 1GB THP copy on write implementation Zi Yan
2020-09-02 18:06 ` [RFC PATCH 05/16] mm: thp: handling 1GB THP reference bit Zi Yan
2020-09-09 14:09   ` Kirill A. Shutemov
2020-09-09 14:36     ` Zi Yan
2020-09-02 18:06 ` [RFC PATCH 06/16] mm: thp: add 1GB THP split_huge_pud_page() function Zi Yan
2020-09-09 14:18   ` Kirill A. Shutemov
2020-09-09 14:19     ` Zi Yan
2020-09-02 18:06 ` [RFC PATCH 07/16] mm: stats: make smap stats understand PUD THPs Zi Yan
2020-09-02 18:06 ` [RFC PATCH 08/16] mm: page_vma_walk: teach it about PMD-mapped PUD THP Zi Yan
2020-09-02 18:06 ` [RFC PATCH 09/16] mm: thp: 1GB THP support in try_to_unmap() Zi Yan
2020-09-02 18:06 ` [RFC PATCH 10/16] mm: thp: split 1GB THPs at page reclaim Zi Yan
2020-09-02 18:06 ` [RFC PATCH 11/16] mm: thp: 1GB THP follow_p*d_page() support Zi Yan
2020-09-02 18:06 ` [RFC PATCH 12/16] mm: add 1GB THP pagemap support Zi Yan
2020-09-02 18:06 ` [RFC PATCH 13/16] mm: thp: add a knob to enable/disable 1GB THPs Zi Yan
2020-09-02 18:06 ` [RFC PATCH 14/16] mm: page_alloc: >=MAX_ORDER pages allocation and deallocation Zi Yan
2020-09-02 18:06 ` [RFC PATCH 15/16] hugetlb: cma: move cma reserve function to cma.c Zi Yan
2020-09-02 18:06 ` [RFC PATCH 16/16] mm: thp: use cma reservation for pud thp allocation Zi Yan
2020-09-02 18:40 ` [RFC PATCH 00/16] 1GB THP support on x86_64 Jason Gunthorpe
2020-09-02 18:45   ` Zi Yan
2020-09-02 18:48     ` Jason Gunthorpe
2020-09-02 19:05       ` Zi Yan
2020-09-02 19:57         ` Jason Gunthorpe
2020-09-02 20:29           ` Zi Yan
2020-09-03 16:40             ` Jason Gunthorpe
2020-09-03 16:55               ` Matthew Wilcox
2020-09-03 17:08                 ` Jason Gunthorpe
2020-09-03  7:32 ` Michal Hocko
2020-09-03 16:25   ` Roman Gushchin
2020-09-03 16:50     ` Jason Gunthorpe
2020-09-03 17:01       ` Matthew Wilcox
2020-09-03 17:18         ` Jason Gunthorpe
2020-09-03 20:57     ` Mike Kravetz
2020-09-03 21:06       ` Roman Gushchin
2020-09-04  7:42     ` Michal Hocko
2020-09-04 21:10       ` Roman Gushchin
2020-09-07  7:20         ` Michal Hocko
2020-09-08 15:09           ` Zi Yan
2020-09-08 19:58             ` Roman Gushchin
2020-09-09  4:01               ` John Hubbard
2020-09-09  7:15               ` Michal Hocko
2020-09-03 14:23 ` Kirill A. Shutemov
2020-09-03 16:30   ` Roman Gushchin
2020-09-08 11:57     ` David Hildenbrand
2020-09-08 14:05       ` Zi Yan
2020-09-08 14:22         ` David Hildenbrand
2020-09-08 15:36           ` Zi Yan
2020-09-08 14:27         ` Matthew Wilcox
2020-09-08 15:50           ` Zi Yan
2020-09-09 12:11           ` Jason Gunthorpe
2020-09-09 12:32             ` Matthew Wilcox
2020-09-09 13:14               ` Jason Gunthorpe
2020-09-09 13:27                 ` David Hildenbrand
2020-09-10 10:02                   ` William Kucharski
2020-09-08 14:35         ` Michal Hocko
2020-09-08 14:41           ` Rik van Riel
2020-09-08 15:02             ` David Hildenbrand
2020-09-09  7:04             ` Michal Hocko
2020-09-09 13:19               ` Rik van Riel
2020-09-09 13:43                 ` David Hildenbrand
2020-09-09 13:49                   ` Rik van Riel
2020-09-09 13:54                     ` David Hildenbrand
2020-09-10  7:32                   ` Michal Hocko
2020-09-10  8:27                     ` David Hildenbrand
2020-09-10 14:21                       ` Zi Yan
2020-09-10 14:34                         ` David Hildenbrand
2020-09-10 14:41                           ` Zi Yan
2020-09-10 15:15                             ` David Hildenbrand
2020-09-10 13:32                     ` Rik van Riel
2020-09-10 14:30                       ` Zi Yan
2020-09-09 13:59                 ` Michal Hocko
