linux-kernel.vger.kernel.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	David Hildenbrand <david@redhat.com>,
	Alistair Popple <apopple@nvidia.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@surriel.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Yu Zhao <yuzhao@google.com>, Greg Thelen <gthelen@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 00/13] mm/munlock: rework of mlock+munlock page handling
Date: Tue, 15 Feb 2022 19:35:57 +0000
Message-ID: <YgwAnYVfg+38leFs@casper.infradead.org>
In-Reply-To: <55a49083-37f9-3766-1de9-9feea7428ac@google.com>

On Mon, Feb 14, 2022 at 06:18:34PM -0800, Hugh Dickins wrote:
> Andrew, many thanks for including v1 and fixes in mmotm: please now replace
> 
> mm-munlock-delete-page_mlock-and-all-its-works.patch
> mm-munlock-delete-foll_mlock-and-foll_populate.patch
> mm-munlock-delete-munlock_vma_pages_all-allow-oomreap.patch
> mm-munlock-rmap-call-mlock_vma_page-munlock_vma_page.patch
> mm-munlock-replace-clear_page_mlock-by-final-clearance.patch
> mm-munlock-maintain-page-mlock_count-while-unevictable.patch
> mm-munlock-mlock_pte_range-when-mlocking-or-munlocking.patch
> mm-migrate-__unmap_and_move-push-good-newpage-to-lru.patch
> mm-munlock-delete-smp_mb-from-__pagevec_lru_add_fn.patch
> mm-munlock-mlock_page-munlock_page-batch-by-pagevec.patch
> mm-munlock-mlock_page-munlock_page-batch-by-pagevec-fix.patch
> mm-munlock-mlock_page-munlock_page-batch-by-pagevec-fix-2.patch
> mm-munlock-page-migration-needs-mlock-pagevec-drained.patch
> mm-thp-collapse_file-do-try_to_unmapttu_batch_flush.patch
> mm-thp-shrink_page_list-avoid-splitting-vm_locked-thp.patch
> 
> by the following thirteen of v2. As before, some easy fixups will be
> needed in mm/huge_memory.c, but the rest is expected to apply cleanly.
> 
> Many thanks to Vlastimil Babka for his review of 01 through 11, and to
> Matthew Wilcox for graciously volunteering to rebase his folio series
> on top of these.

I have now pushed these 13 patches to my for-next branch:

git://git.infradead.org/users/willy/pagecache.git for-next

and rebased my folio patches on top.  Mostly that involved dropping
my mlock-related patches, although a few other adjustments were also
needed.  That should make Stephen Rothwell's linux-next merge
resolution much easier once Andrew drops v1 of these patches from
his tree.
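
For anyone who wants to build on the same base, here's a minimal
sketch of fetching that branch and rebasing a local series onto it
(the remote name "willy" and the branch "my-folio-work" are only
illustrative, not anything published):

  # add the published tree as a remote and fetch its branches
  git remote add willy git://git.infradead.org/users/willy/pagecache.git
  git fetch willy
  # replay a local branch on top of the rebased for-next branch
  git rebase willy/for-next my-folio-work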



Thread overview: 25+ messages
2022-02-15  2:18 [PATCH v2 00/13] mm/munlock: rework of mlock+munlock page handling Hugh Dickins
2022-02-15  2:20 ` [PATCH v2 01/13] mm/munlock: delete page_mlock() and all its works Hugh Dickins
2022-02-15  2:21 ` [PATCH v2 02/13] mm/munlock: delete FOLL_MLOCK and FOLL_POPULATE Hugh Dickins
2022-02-15  2:23 ` [PATCH v2 03/13] mm/munlock: delete munlock_vma_pages_all(), allow oomreap Hugh Dickins
2022-02-15  2:26 ` [PATCH v2 04/13] mm/munlock: rmap call mlock_vma_page() munlock_vma_page() Hugh Dickins
2022-02-15 15:22   ` Matthew Wilcox
2022-02-15 21:38     ` Hugh Dickins
2022-02-15 23:21       ` Matthew Wilcox
2022-02-15  2:28 ` [PATCH v2 05/13] mm/munlock: replace clear_page_mlock() by final clearance Hugh Dickins
2022-02-15  2:29 ` [PATCH v2 06/13] mm/munlock: maintain page->mlock_count while unevictable Hugh Dickins
2022-02-15  2:31 ` [PATCH v2 07/13] mm/munlock: mlock_pte_range() when mlocking or munlocking Hugh Dickins
2022-02-18  6:35   ` [mm/munlock] 237b445401: stress-ng.remap.ops_per_sec -62.6% regression kernel test robot
2022-02-18  8:49     ` Hugh Dickins
2022-02-21  6:32       ` Hugh Dickins
2022-02-24  8:37         ` Oliver Sang
2022-02-15  2:33 ` [PATCH v2 08/13] mm/migrate: __unmap_and_move() push good newpage to LRU Hugh Dickins
2022-02-15  2:34 ` [PATCH v2 09/13] mm/munlock: delete smp_mb() from __pagevec_lru_add_fn() Hugh Dickins
2022-02-15  2:37 ` [PATCH v2 10/13] mm/munlock: mlock_page() munlock_page() batch by pagevec Hugh Dickins
2022-02-15 16:40   ` Matthew Wilcox
2022-02-15 21:02     ` Hugh Dickins
2022-02-15 22:56       ` Matthew Wilcox
2022-02-15  2:38 ` [PATCH v2 11/13] mm/munlock: page migration needs mlock pagevec drained Hugh Dickins
2022-02-15  2:40 ` [PATCH v2 12/13] mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH) Hugh Dickins
2022-02-15  2:42 ` [PATCH v2 13/13] mm/thp: shrink_page_list() avoid splitting VM_LOCKED THP Hugh Dickins
2022-02-15 19:35 ` Matthew Wilcox [this message]
