* Re: [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range()
@ 2023-02-02 14:19 kernel test robot
0 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2023-02-02 14:19 UTC (permalink / raw)
To: oe-kbuild; +Cc: lkp, Julia Lawall
BCC: lkp@intel.com
CC: oe-kbuild-all@lists.linux.dev
In-Reply-To: <20230201081737.2330141-4-fengwei.yin@intel.com>
References: <20230201081737.2330141-4-fengwei.yin@intel.com>
TO: Yin Fengwei <fengwei.yin@intel.com>
Hi Yin,
[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on next-20230201]
[cannot apply to linus/master v6.2-rc6 v6.2-rc5 v6.2-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Yin-Fengwei/mm-Enable-fault-around-for-shared-file-page-fault/20230201-161810
patch link: https://lore.kernel.org/r/20230201081737.2330141-4-fengwei.yin%40intel.com
patch subject: [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range()
:::::: branch date: 30 hours ago
:::::: commit date: 30 hours ago
config: arc-randconfig-c44-20230129 (https://download.01.org/0day-ci/archive/20230202/202302022240.ptsY32y6-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 12.1.0
If you fix the issue, kindly add the following tags where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Julia Lawall <julia.lawall@lip6.fr>
cocci warnings: (new ones prefixed by >>)
>> mm/rmap.c:1354:17-19: WARNING: Unsigned expression compared with zero: nr < 0
vim +1354 mm/rmap.c
9617d95e6e9ffd Nicholas Piggin 2006-01-06 1304
^1da177e4c3f41 Linus Torvalds 2005-04-16 1305 /**
aa15ac22bbf926 Yin Fengwei 2023-02-01 1306 * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
aa15ac22bbf926 Yin Fengwei 2023-02-01 1307 * @folio: The filio to add the mapping to
aa15ac22bbf926 Yin Fengwei 2023-02-01 1308 * @start: The first sub page index in folio
aa15ac22bbf926 Yin Fengwei 2023-02-01 1309 * @nr_pages: The number of sub pages from the first page
cea86fe246b694 Hugh Dickins 2022-02-14 1310 * @vma: the vm area in which the mapping is added
e8b098fc5747a7 Mike Rapoport 2018-04-05 1311 * @compound: charge the page as compound or small page
^1da177e4c3f41 Linus Torvalds 2005-04-16 1312 *
aa15ac22bbf926 Yin Fengwei 2023-02-01 1313 * The sub page range of folio is defined by
aa15ac22bbf926 Yin Fengwei 2023-02-01 1314 * [first_sub_page, first_sub_page + nr_pages)
aa15ac22bbf926 Yin Fengwei 2023-02-01 1315 *
b8072f099b7829 Hugh Dickins 2005-10-29 1316 * The caller needs to hold the pte lock.
^1da177e4c3f41 Linus Torvalds 2005-04-16 1317 */
aa15ac22bbf926 Yin Fengwei 2023-02-01 1318 void page_add_file_rmap_range(struct folio *folio, unsigned long start,
aa15ac22bbf926 Yin Fengwei 2023-02-01 1319 unsigned int nr_pages, struct vm_area_struct *vma,
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1320) bool compound)
^1da177e4c3f41 Linus Torvalds 2005-04-16 1321 {
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1322) atomic_t *mapped = &folio->_nr_pages_mapped;
aa15ac22bbf926 Yin Fengwei 2023-02-01 1323 unsigned int nr = 0, nr_pmdmapped = 0, first;
dd78fedde4b99b Kirill A. Shutemov 2016-07-26 1324
aa15ac22bbf926 Yin Fengwei 2023-02-01 1325 VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
9bd3155ed83b72 Hugh Dickins 2022-11-02 1326
be5ef2d9b006bb Hugh Dickins 2022-11-22 1327 /* Is page being mapped by PTE? Is this its first map to be added? */
be5ef2d9b006bb Hugh Dickins 2022-11-22 1328 if (likely(!compound)) {
aa15ac22bbf926 Yin Fengwei 2023-02-01 1329 struct page *page = folio_page(folio, start);
aa15ac22bbf926 Yin Fengwei 2023-02-01 1330
aa15ac22bbf926 Yin Fengwei 2023-02-01 1331 nr_pages = min_t(unsigned int, nr_pages,
aa15ac22bbf926 Yin Fengwei 2023-02-01 1332 folio_nr_pages(folio) - start);
aa15ac22bbf926 Yin Fengwei 2023-02-01 1333
aa15ac22bbf926 Yin Fengwei 2023-02-01 1334 do {
d8dd5e979d09c7 Hugh Dickins 2022-11-09 1335 first = atomic_inc_and_test(&page->_mapcount);
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1336) if (first && folio_test_large(folio)) {
aa15ac22bbf926 Yin Fengwei 2023-02-01 1337 first = atomic_inc_return_relaxed(mapped);
aa15ac22bbf926 Yin Fengwei 2023-02-01 1338 first = (nr < COMPOUND_MAPPED);
be5ef2d9b006bb Hugh Dickins 2022-11-22 1339 }
aa15ac22bbf926 Yin Fengwei 2023-02-01 1340
aa15ac22bbf926 Yin Fengwei 2023-02-01 1341 if (first)
aa15ac22bbf926 Yin Fengwei 2023-02-01 1342 nr++;
aa15ac22bbf926 Yin Fengwei 2023-02-01 1343 } while (page++, --nr_pages > 0);
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1344) } else if (folio_test_pmd_mappable(folio)) {
be5ef2d9b006bb Hugh Dickins 2022-11-22 1345 /* That test is redundant: it's for safety or to optimize out */
d8dd5e979d09c7 Hugh Dickins 2022-11-09 1346
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1347) first = atomic_inc_and_test(&folio->_entire_mapcount);
9bd3155ed83b72 Hugh Dickins 2022-11-02 1348 if (first) {
4b51634cd16a01 Hugh Dickins 2022-11-22 1349 nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
6287b7dae80944 Hugh Dickins 2022-12-04 1350 if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1351) nr_pmdmapped = folio_nr_pages(folio);
23e4d1d73d0155 Matthew Wilcox (Oracle 2023-01-11 1352) nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
6287b7dae80944 Hugh Dickins 2022-12-04 1353 /* Raced ahead of a remove and another add? */
6287b7dae80944 Hugh Dickins 2022-12-04 @1354 if (unlikely(nr < 0))
6287b7dae80944 Hugh Dickins 2022-12-04 1355 nr = 0;
6287b7dae80944 Hugh Dickins 2022-12-04 1356 } else {
6287b7dae80944 Hugh Dickins 2022-12-04 1357 /* Raced ahead of a remove of COMPOUND_MAPPED */
6287b7dae80944 Hugh Dickins 2022-12-04 1358 nr = 0;
6287b7dae80944 Hugh Dickins 2022-12-04 1359 }
9bd3155ed83b72 Hugh Dickins 2022-11-02 1360 }
dd78fedde4b99b Kirill A. Shutemov 2016-07-26 1361 }
9bd3155ed83b72 Hugh Dickins 2022-11-02 1362
9bd3155ed83b72 Hugh Dickins 2022-11-02 1363 if (nr_pmdmapped)
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1364) __lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
9bd3155ed83b72 Hugh Dickins 2022-11-02 1365 NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
5d543f13e2f558 Hugh Dickins 2022-03-24 1366 if (nr)
f8328c0f2aa1fd Matthew Wilcox (Oracle 2023-01-11 1367) __lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
cea86fe246b694 Hugh Dickins 2022-02-14 1368
18b8b3a3769ea1 Matthew Wilcox (Oracle 2023-01-16 1369) mlock_vma_folio(folio, vma, compound);
^1da177e4c3f41 Linus Torvalds 2005-04-16 1370 }
^1da177e4c3f41 Linus Torvalds 2005-04-16 1371
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range()
2023-02-01 17:32 ` Matthew Wilcox
@ 2023-02-02 2:00 ` Yin, Fengwei
0 siblings, 0 replies; 4+ messages in thread
From: Yin, Fengwei @ 2023-02-02 2:00 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: david, linux-mm, dave.hansen, tim.c.chen, ying.huang
On 2/2/2023 1:32 AM, Matthew Wilcox wrote:
> On Wed, Feb 01, 2023 at 04:17:35PM +0800, Yin Fengwei wrote:
>> /**
>> - * page_add_file_rmap - add pte mapping to a file page
>> - * @page: the page to add the mapping to
>> + * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
>> + * @folio: The filio to add the mapping to
>> + * @start: The first sub page index in folio
>> + * @nr_pages: The number of sub pages from the first page
>> * @vma: the vm area in which the mapping is added
>> * @compound: charge the page as compound or small page
>> *
>> + * The sub page range of folio is defined by
>> + * [first_sub_page, first_sub_page + nr_pages)
>
> Lose the "sub" from all of this. That's legacy thinking; pages are
> pages and folios are folios. "subpages" was from when we were trying
> to use the word "page" for both "the allocation" and "the PAGE_SIZE
> range of bytes".
OK. Will remove 'sub' in the next version.
>
>> + *
>> * The caller needs to hold the pte lock.
>> */
>> -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>> - bool compound)
>> +void page_add_file_rmap_range(struct folio *folio, unsigned long start,
>> + unsigned int nr_pages, struct vm_area_struct *vma,
>> + bool compound)
>
> I think this function needs to be called folio_add_file_rmap()
Yes. Maybe as a follow-up patch after this series? Let me know if you want
this change in this series.
>
> I'd like to lose the 'compound' parameter, and base it on nr_pages ==
> folio_nr_pages(), but that may be a step too far just now.
Yes. I had a local change to remove the if (folio_test_pmd_mappable(folio))
test (it's very close to removing 'compound'). I didn't include it in
this series; I'd prefer a follow-up patch. Let me know if you want the
change in this series. Thanks.
Regards
Yin, Fengwei
>
* Re: [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range()
2023-02-01 8:17 ` [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range() Yin Fengwei
@ 2023-02-01 17:32 ` Matthew Wilcox
2023-02-02 2:00 ` Yin, Fengwei
0 siblings, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2023-02-01 17:32 UTC (permalink / raw)
To: Yin Fengwei; +Cc: david, linux-mm, dave.hansen, tim.c.chen, ying.huang
On Wed, Feb 01, 2023 at 04:17:35PM +0800, Yin Fengwei wrote:
> /**
> - * page_add_file_rmap - add pte mapping to a file page
> - * @page: the page to add the mapping to
> + * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
> + * @folio: The filio to add the mapping to
> + * @start: The first sub page index in folio
> + * @nr_pages: The number of sub pages from the first page
> * @vma: the vm area in which the mapping is added
> * @compound: charge the page as compound or small page
> *
> + * The sub page range of folio is defined by
> + * [first_sub_page, first_sub_page + nr_pages)
Lose the "sub" from all of this. That's legacy thinking; pages are
pages and folios are folios. "subpages" was from when we were trying
to use the word "page" for both "the allocation" and "the PAGE_SIZE
range of bytes".
> + *
> * The caller needs to hold the pte lock.
> */
> -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
> - bool compound)
> +void page_add_file_rmap_range(struct folio *folio, unsigned long start,
> + unsigned int nr_pages, struct vm_area_struct *vma,
> + bool compound)
I think this function needs to be called folio_add_file_rmap()
I'd like to lose the 'compound' parameter, and base it on nr_pages ==
folio_nr_pages(), but that may be a step too far just now.
* [RFC PATCH v2 3/5] rmap: add page_add_file_rmap_range()
2023-02-01 8:17 [RFC PATCH v2 0/5] folio based filemap_map_pages() Yin Fengwei
@ 2023-02-01 8:17 ` Yin Fengwei
2023-02-01 17:32 ` Matthew Wilcox
0 siblings, 1 reply; 4+ messages in thread
From: Yin Fengwei @ 2023-02-01 8:17 UTC (permalink / raw)
To: willy, david, linux-mm; +Cc: dave.hansen, tim.c.chen, ying.huang, fengwei.yin
page_add_file_rmap_range() allows adding a pte mapping to a specific
range of a file folio. Compared to the original page_add_file_rmap(),
it batches the __lruvec_stat updates for large folios.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
include/linux/rmap.h | 2 ++
mm/rmap.c | 66 ++++++++++++++++++++++++++++++++++----------
2 files changed, 54 insertions(+), 14 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a4570da03e58..9631a3701504 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
unsigned long address);
void page_add_file_rmap(struct page *, struct vm_area_struct *,
bool compound);
+void page_add_file_rmap_range(struct folio *, unsigned long start,
+ unsigned int nr_pages, struct vm_area_struct *, bool compound);
void page_remove_rmap(struct page *, struct vm_area_struct *,
bool compound);
diff --git a/mm/rmap.c b/mm/rmap.c
index 15ae24585fc4..090de52e1a9a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1303,31 +1303,44 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
}
/**
- * page_add_file_rmap - add pte mapping to a file page
- * @page: the page to add the mapping to
+ * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
+ * @folio: The filio to add the mapping to
+ * @start: The first sub page index in folio
+ * @nr_pages: The number of sub pages from the first page
* @vma: the vm area in which the mapping is added
* @compound: charge the page as compound or small page
*
+ * The sub page range of folio is defined by
+ * [first_sub_page, first_sub_page + nr_pages)
+ *
* The caller needs to hold the pte lock.
*/
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
- bool compound)
+void page_add_file_rmap_range(struct folio *folio, unsigned long start,
+ unsigned int nr_pages, struct vm_area_struct *vma,
+ bool compound)
{
- struct folio *folio = page_folio(page);
atomic_t *mapped = &folio->_nr_pages_mapped;
- int nr = 0, nr_pmdmapped = 0;
- bool first;
+ unsigned int nr = 0, nr_pmdmapped = 0, first;
- VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+ VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
/* Is page being mapped by PTE? Is this its first map to be added? */
if (likely(!compound)) {
- first = atomic_inc_and_test(&page->_mapcount);
- nr = first;
- if (first && folio_test_large(folio)) {
- nr = atomic_inc_return_relaxed(mapped);
- nr = (nr < COMPOUND_MAPPED);
- }
+ struct page *page = folio_page(folio, start);
+
+ nr_pages = min_t(unsigned int, nr_pages,
+ folio_nr_pages(folio) - start);
+
+ do {
+ first = atomic_inc_and_test(&page->_mapcount);
+ if (first && folio_test_large(folio)) {
+ first = atomic_inc_return_relaxed(mapped);
+ first = (nr < COMPOUND_MAPPED);
+ }
+
+ if (first)
+ nr++;
+ } while (page++, --nr_pages > 0);
} else if (folio_test_pmd_mappable(folio)) {
/* That test is redundant: it's for safety or to optimize out */
@@ -1356,6 +1369,31 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
mlock_vma_folio(folio, vma, compound);
}
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page: the page to add the mapping to
+ * @vma: the vm area in which the mapping is added
+ * @compound: charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+ bool compound)
+{
+ struct folio *folio = page_folio(page);
+ unsigned int nr_pages;
+
+ VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+ if (likely(!compound))
+ nr_pages = 1;
+ else
+ nr_pages = folio_nr_pages(folio);
+
+ page_add_file_rmap_range(folio, folio_page_idx(folio, page),
+ nr_pages, vma, compound);
+}
+
/**
* page_remove_rmap - take down pte mapping from a page
* @page: page to remove mapping from
--
2.30.2