From: Zi Yan <ziy@nvidia.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 52/75] mm/rmap: Convert try_to_migrate() to folios
Date: Wed, 09 Feb 2022 10:27:56 -0500
Message-ID: <4B0C10F2-97FA-4CDE-A013-0DA377750A96@nvidia.com>
In-Reply-To: <20220204195852.1751729-53-willy@infradead.org>
On 4 Feb 2022, at 14:58, Matthew Wilcox (Oracle) wrote:
> Convert the callers to pass a folio and the try_to_migrate_one()
> worker to use a folio throughout. Fixes an assumption that a
> folio must be <= PMD size.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> include/linux/rmap.h |  2 +-
> mm/huge_memory.c     |  4 ++--
> mm/migrate.c         | 12 ++++++----
> mm/rmap.c            | 57 +++++++++++++++++++++++---------------------
> 4 files changed, 41 insertions(+), 34 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 66407434c3b5..502439f20d88 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -192,7 +192,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
> int folio_referenced(struct folio *, int is_locked,
> struct mem_cgroup *memcg, unsigned long *vm_flags);
>
> -void try_to_migrate(struct page *page, enum ttu_flags flags);
> +void try_to_migrate(struct folio *folio, enum ttu_flags flags);
> void try_to_unmap(struct folio *, enum ttu_flags flags);
>
> int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4ea22b7319fd..21676a4afd07 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2294,8 +2294,8 @@ static void unmap_page(struct page *page)
> * pages can simply be left unmapped, then faulted back on demand.
> * If that is ever changed (perhaps for mlock), update remap_page().
> */
> - if (PageAnon(page))
> - try_to_migrate(page, ttu_flags);
> + if (folio_test_anon(folio))
> + try_to_migrate(folio, ttu_flags);
> else
> try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 766dc67874a1..5dcdd43d983d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -927,6 +927,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
> static int __unmap_and_move(struct page *page, struct page *newpage,
> int force, enum migrate_mode mode)
> {
> + struct folio *folio = page_folio(page);
> int rc = -EAGAIN;
> bool page_was_mapped = false;
> struct anon_vma *anon_vma = NULL;
> @@ -1030,7 +1031,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
> /* Establish migration ptes */
> VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
> page);
> - try_to_migrate(page, 0);
> + try_to_migrate(folio, 0);
> page_was_mapped = true;
> }
>
> @@ -1173,6 +1174,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
> enum migrate_mode mode, int reason,
> struct list_head *ret)
> {
> + struct folio *src = page_folio(hpage);
> int rc = -EAGAIN;
> int page_was_mapped = 0;
> struct page *new_hpage;
> @@ -1249,7 +1251,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
> ttu |= TTU_RMAP_LOCKED;
> }
>
> - try_to_migrate(hpage, ttu);
> + try_to_migrate(src, ttu);
> page_was_mapped = 1;
>
> if (mapping_locked)
> @@ -2449,6 +2451,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
>
> for (i = 0; i < npages; i++) {
> struct page *page = migrate_pfn_to_page(migrate->src[i]);
> + struct folio *folio;
>
> if (!page)
> continue;
> @@ -2472,8 +2475,9 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
> put_page(page);
> }
>
> - if (page_mapped(page))
> - try_to_migrate(page, 0);
> + folio = page_folio(page);
> + if (folio_mapped(folio))
> + try_to_migrate(folio, 0);
>
> if (page_mapped(page) || !migrate_vma_check_page(page)) {
> if (!is_zone_device_page(page)) {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index c598fd667948..4cfac67e328c 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1767,7 +1767,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> range.end = vma_address_end(&pvmw);
> mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
> address, range.end);
> - if (PageHuge(page)) {
> + if (folio_test_hugetlb(folio)) {
> /*
> * If sharing is possible, start and end will be adjusted
> * accordingly.
> @@ -1781,21 +1781,24 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> /* PMD-mapped THP migration entry */
> if (!pvmw.pte) {
> - VM_BUG_ON_PAGE(PageHuge(page) ||
> - !PageTransCompound(page), page);
> + subpage = folio_page(folio,
> + pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
Here you removed the assumption that a folio is always <= PMD size, right?
If so, maybe the wording below is better for the commit message?

    In THP migration code, fixes an assumption that a folio must be <= PMD size.
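
Just to confirm my reading, here is a minimal sketch (purely illustrative,
the helper name is made up) of what the new arithmetic allows once a folio
can span more than one PMD:

/*
 * Illustrative only: find the subpage that backs the PMD being
 * unmapped.  A folio may now be larger than a PMD, so the mapped
 * PMD does not have to start at the folio's head page; the offset
 * is computed from pfns instead of assuming subpage 0.
 */
static struct page *pmd_mapped_subpage(struct folio *folio, pmd_t pmd)
{
	unsigned long offset = pmd_pfn(pmd) - folio_pfn(folio);

	return folio_page(folio, offset);
}

With the old code, the page itself was passed to set_pmd_migration_entry(),
which only works as long as a folio never exceeds PMD size.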
> + VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
> + !folio_test_pmd_mappable(folio), folio);
>
> - set_pmd_migration_entry(&pvmw, page);
> + set_pmd_migration_entry(&pvmw, subpage);
> continue;
> }
> #endif
>
> /* Unexpected PMD-mapped THP? */
> - VM_BUG_ON_PAGE(!pvmw.pte, page);
> + VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>
> - subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> + subpage = folio_page(folio,
> + pte_pfn(*pvmw.pte) - folio_pfn(folio));
> address = pvmw.address;
>
> - if (PageHuge(page) && !PageAnon(page)) {
> + if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
> /*
> * To call huge_pmd_unshare, i_mmap_rwsem must be
> * held in write mode. Caller needs to explicitly
> @@ -1833,15 +1836,15 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> pteval = ptep_clear_flush(vma, address, pvmw.pte);
>
> - /* Move the dirty bit to the page. Now the pte is gone. */
> + /* Set the dirty flag on the folio now the pte is gone. */
> if (pte_dirty(pteval))
> - set_page_dirty(page);
> + folio_mark_dirty(folio);
>
> /* Update high watermark before we lower rss */
> update_hiwater_rss(mm);
>
> - if (is_zone_device_page(page)) {
> - unsigned long pfn = page_to_pfn(page);
> + if (folio_is_zone_device(folio)) {
> + unsigned long pfn = folio_pfn(folio);
> swp_entry_t entry;
> pte_t swp_pte;
>
> @@ -1877,16 +1880,16 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> * changed when hugepage migrations to device private
> * memory are supported.
> */
> - subpage = page;
> - } else if (PageHWPoison(page)) {
> + subpage = &folio->page;
> + } else if (PageHWPoison(subpage)) {
> pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
> - if (PageHuge(page)) {
> - hugetlb_count_sub(compound_nr(page), mm);
> + if (folio_test_hugetlb(folio)) {
> + hugetlb_count_sub(folio_nr_pages(folio), mm);
> set_huge_swap_pte_at(mm, address,
> pvmw.pte, pteval,
> vma_mmu_pagesize(vma));
> } else {
> - dec_mm_counter(mm, mm_counter(page));
> + dec_mm_counter(mm, mm_counter(&folio->page));
> set_pte_at(mm, address, pvmw.pte, pteval);
> }
>
> @@ -1901,7 +1904,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> * migration) will not expect userfaults on already
> * copied pages.
> */
> - dec_mm_counter(mm, mm_counter(page));
> + dec_mm_counter(mm, mm_counter(&folio->page));
> /* We have to invalidate as we cleared the pte */
> mmu_notifier_invalidate_range(mm, address,
> address + PAGE_SIZE);
> @@ -1947,8 +1950,8 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> *
> * See Documentation/vm/mmu_notifier.rst
> */
> - page_remove_rmap(subpage, PageHuge(page));
> - put_page(page);
> + page_remove_rmap(subpage, folio_test_hugetlb(folio));
> + folio_put(folio);
> }
>
> mmu_notifier_invalidate_range_end(&range);
> @@ -1958,13 +1961,13 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>
> /**
> * try_to_migrate - try to replace all page table mappings with swap entries
> - * @page: the page to replace page table entries for
> + * @folio: the folio to replace page table entries for
> * @flags: action and flags
> *
> - * Tries to remove all the page table entries which are mapping this page and
> - * replace them with special swap entries. Caller must hold the page lock.
> + * Tries to remove all the page table entries which are mapping this folio and
> + * replace them with special swap entries. Caller must hold the folio lock.
> */
> -void try_to_migrate(struct page *page, enum ttu_flags flags)
> +void try_to_migrate(struct folio *folio, enum ttu_flags flags)
> {
> struct rmap_walk_control rwc = {
> .rmap_one = try_to_migrate_one,
> @@ -1981,7 +1984,7 @@ void try_to_migrate(struct page *page, enum ttu_flags flags)
> TTU_SYNC)))
> return;
>
> - if (is_zone_device_page(page) && !is_device_private_page(page))
> + if (folio_is_zone_device(folio) && !folio_is_device_private(folio))
> return;
>
> /*
> @@ -1992,13 +1995,13 @@ void try_to_migrate(struct page *page, enum ttu_flags flags)
> * locking requirements of exec(), migration skips
> * temporary VMAs until after exec() completes.
> */
> - if (!PageKsm(page) && PageAnon(page))
> + if (!folio_test_ksm(folio) && folio_test_anon(folio))
> rwc.invalid_vma = invalid_migration_vma;
>
> if (flags & TTU_RMAP_LOCKED)
> - rmap_walk_locked(page, &rwc);
> + rmap_walk_locked(&folio->page, &rwc);
> else
> - rmap_walk(page, &rwc);
> + rmap_walk(&folio->page, &rwc);
> }
>
> /*
> --
> 2.34.1
Otherwise, LGTM. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi