* [PATCH v2 0/2] fix several contiguous memmap assumptions
From: Chen Wandun
Date: 2022-03-30 10:25 UTC
To: linux-mm, linux-kernel, akpm, willy

v1 ==> v2: remove page_nth

For SPARSEMEM configs without VMEMMAP, a compound page is not
guaranteed to have virtually contiguous page structs, so use nth_page
to iterate over each page.

Inspired by:
https://lore.kernel.org/linux-mm/20220204195852.1751729-8-willy@infradead.org/

Chen Wandun (2):
  mm: fix contiguous memmap assumptions about split page
  mm: fix contiguous memmap assumptions about alloc/free pages

 mm/compaction.c  |  6 +++---
 mm/huge_memory.c |  2 +-
 mm/page_alloc.c  | 18 ++++++++++--------
 3 files changed, 14 insertions(+), 12 deletions(-)

-- 
2.18.0.huawei.25
* [PATCH v2 1/2] mm: fix contiguous memmap assumptions about split page
From: Chen Wandun
Date: 2022-03-30 10:25 UTC
To: linux-mm, linux-kernel, akpm, willy

For SPARSEMEM configs without VMEMMAP, a compound page is not
guaranteed to have virtually contiguous page structs, so use nth_page
to iterate over each page.

Inspired by:
https://lore.kernel.org/linux-mm/20220204195852.1751729-8-willy@infradead.org/

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/compaction.c  | 6 +++---
 mm/huge_memory.c | 2 +-
 mm/page_alloc.c  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index c3e37aa9ff9e..ddff13b968a2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -87,7 +87,7 @@ static unsigned long release_freepages(struct list_head *freelist)
 static void split_map_pages(struct list_head *list)
 {
 	unsigned int i, order, nr_pages;
-	struct page *page, *next;
+	struct page *page, *next, *tmp;
 	LIST_HEAD(tmp_list);
 
 	list_for_each_entry_safe(page, next, list, lru) {
@@ -101,8 +101,8 @@ static void split_map_pages(struct list_head *list)
 		split_page(page, order);
 
 		for (i = 0; i < nr_pages; i++) {
-			list_add(&page->lru, &tmp_list);
-			page++;
+			tmp = nth_page(page, i);
+			list_add(&tmp->lru, &tmp_list);
 		}
 	}
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..d77fc2ad581d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2297,7 +2297,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 static void __split_huge_page_tail(struct page *head, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
-	struct page *page_tail = head + tail;
+	struct page *page_tail = nth_page(head, tail);
 
 	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f648decfe39d..855211dea13e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3513,7 +3513,7 @@ void split_page(struct page *page, unsigned int order)
 	VM_BUG_ON_PAGE(!page_count(page), page);
 
 	for (i = 1; i < (1 << order); i++)
-		set_page_refcounted(page + i);
+		set_page_refcounted(nth_page(page, i));
 	split_page_owner(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }
-- 
2.18.0.huawei.25
* Re: [PATCH v2 1/2] mm: fix contiguous memmap assumptions about split page
From: David Hildenbrand
Date: 2022-03-30 13:14 UTC
To: Chen Wandun, linux-mm, linux-kernel, akpm, willy

On 30.03.22 12:25, Chen Wandun wrote:
> For SPARSEMEM configs without VMEMMAP, a compound page is not
> guaranteed to have virtually contiguous page structs, so use nth_page
> to iterate over each page.

Is this actually a "fix" or rather a preparation for having very large
compound pages (>= MAX_ORDER) that we'd be able to split? Naive me would
think that we'd currently only have order < MAX_ORDER, and consequently
would always fall into a single memory section, where the memmap is
contiguous.

> 
> Inspired by:
> https://lore.kernel.org/linux-mm/20220204195852.1751729-8-willy@infradead.org/
> 
> Signed-off-by: Chen Wandun <chenwandun@huawei.com>
> [snip]

-- 
Thanks,

David / dhildenb
* [PATCH v2 2/2] mm: fix contiguous memmap assumptions about alloc/free pages
From: Chen Wandun
Date: 2022-03-30 10:25 UTC
To: linux-mm, linux-kernel, akpm, willy

For SPARSEMEM configs without VMEMMAP, a compound page is not
guaranteed to have virtually contiguous page structs, so use nth_page
to iterate over each page.

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/page_alloc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 855211dea13e..758d8f069b32 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -721,7 +721,7 @@ static void prep_compound_head(struct page *page, unsigned int order)
 
 static void prep_compound_tail(struct page *head, int tail_idx)
 {
-	struct page *p = head + tail_idx;
+	struct page *p = nth_page(head, tail_idx);
 
 	p->mapping = TAIL_MAPPING;
 	set_compound_head(p, head);
@@ -1199,10 +1199,10 @@ static inline int check_free_page(struct page *page)
 	return 1;
 }
 
-static int free_tail_pages_check(struct page *head_page, struct page *page)
+static int free_tail_pages_check(struct page *head_page, int index)
 {
+	struct page *page = nth_page(head_page, index);
 	int ret = 1;
-
 	/*
 	 * We rely page->lru.next never has bit 0 set, unless the page
 	 * is PageTail(). Let's make sure that's true even for poisoned ->lru.
@@ -1213,7 +1213,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 		ret = 0;
 		goto out;
 	}
-	switch (page - head_page) {
+	switch (index) {
 	case 1:
 		/* the first tail page: ->mapping may be compound_mapcount() */
 		if (unlikely(compound_mapcount(page))) {
@@ -1322,6 +1322,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		bool compound = PageCompound(page);
 		int i;
+		struct page *tail_page;
 
 		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
 
@@ -1330,13 +1331,14 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			ClearPageHasHWPoisoned(page);
 		}
 		for (i = 1; i < (1 << order); i++) {
+			tail_page = nth_page(page, i);
 			if (compound)
-				bad += free_tail_pages_check(page, page + i);
-			if (unlikely(check_free_page(page + i))) {
+				bad += free_tail_pages_check(page, i);
+			if (unlikely(check_free_page(tail_page))) {
 				bad++;
 				continue;
 			}
-			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+			tail_page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
 	if (PageMappingFlags(page))
-- 
2.18.0.huawei.25
* Re: [PATCH v2 2/2] mm: fix contiguous memmap assumptions about alloc/free pages
From: David Hildenbrand
Date: 2022-03-30 13:16 UTC
To: Chen Wandun, linux-mm, linux-kernel, akpm, willy

On 30.03.22 12:25, Chen Wandun wrote:
> For SPARSEMEM configs without VMEMMAP, a compound page is not
> guaranteed to have virtually contiguous page structs, so use nth_page
> to iterate over each page.

I really don't see how that is currently the case. The buddy deals with
order < MAX_ORDER, and we know that these always fall into a single
memory section. IOW, this patch would add overhead where none is
required.

What am I missing, and which scenario are we fixing?

> 
> Signed-off-by: Chen Wandun <chenwandun@huawei.com>
> [snip]

-- 
Thanks,

David / dhildenb