linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 Jan Kara <jack@suse.cz>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,  Zi Yan <ziy@nvidia.com>,
	 Yang Shi <shy828301@gmail.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>,
	 Matthew Wilcox <willy@infradead.org>,
	 Bharata B Rao <bharata@amd.com>,
	 Alistair Popple <apopple@nvidia.com>,
	 Xin Hao <xhao@linux.alibaba.com>,
	 Minchan Kim <minchan@kernel.org>,
	 Mike Kravetz <mike.kravetz@oracle.com>,
	 Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	 "Xu, Pengfei" <pengfei.xu@intel.com>,
	Christoph Hellwig <hch@lst.de>,  Stefan Roesch <shr@devkernel.io>,
	 Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH -v5 0/9] migrate_pages(): batch TLB flushing
Date: Tue, 21 Feb 2023 22:04:03 +0800
Message-ID: <871qmjdsj0.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <2ab4b33e-f570-a6ff-6315-7d5a4614a7bd@google.com> (Hugh Dickins's message of "Mon, 20 Feb 2023 18:48:38 -0800 (PST)")

Hugh Dickins <hughd@google.com> writes:

> On Mon, 20 Feb 2023, Huang, Ying wrote:
>
>> Hi, Hugh,
>> 
>> Hugh Dickins <hughd@google.com> writes:
>> 
>> > On Mon, 13 Feb 2023, Huang Ying wrote:
>> >
>> >> From: "Huang, Ying" <ying.huang@intel.com>
>> >> 
>> >> Currently, migrate_pages() migrates folios one by one, as in the
>> >> pseudo-code below:
>> >> 
>> >>   for each folio
>> >>     unmap
>> >>     flush TLB
>> >>     copy
>> >>     restore map
>> >> 
>> >> If multiple folios are passed to migrate_pages(), there are
>> >> opportunities to batch the TLB flushing and copying.  That is, we can
>> >> change the code to something like the following:
>> >> 
>> >>   for each folio
>> >>     unmap
>> >>   for each folio
>> >>     flush TLB
>> >>   for each folio
>> >>     copy
>> >>   for each folio
>> >>     restore map
>> >> 
>> >> The total number of TLB flushing IPIs can be reduced considerably,
>> >> and a hardware accelerator such as DSA may be used to accelerate the
>> >> folio copying.
>> >> 
>> >> So in this patchset, we refactor the migrate_pages() implementation
>> >> and batch the TLB flushing.  Based on this, hardware-accelerated
>> >> folio copying can be implemented.
>> >> 
>> >> In a naive batched implementation, if too many folios are passed to
>> >> migrate_pages(), we may unmap too many folios at the same time.  This
>> >> increases the likelihood that a task must wait for the migrated
>> >> folios to be mapped again, hurting latency.  To deal with this issue,
>> >> the maximum number of folios unmapped in a batch is restricted to no
>> >> more than HPAGE_PMD_NR pages.  That is, the impact is at the same
>> >> level as that of THP migration.
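>> >> 
>> >> (Illustratively, the batching loop caps itself roughly like this,
>> >> with NR_MAX_BATCHED_MIGRATION defined as HPAGE_PMD_NR:
>> >> 
>> >> 	nr_pages += folio_nr_pages(folio);
>> >> 	if (nr_pages > NR_MAX_BATCHED_MIGRATION)
>> >> 		break;	/* unmap at most one THP's worth of pages */
>> >> )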
>> >> 
>> >> We used the following test to measure the performance impact of the
>> >> patchset:
>> >> 
>> >> On a 2-socket Intel server,
>> >> 
>> >>  - Run pmbench memory accessing benchmark
>> >> 
>> >>  - Run `migratepages` to migrate pages of pmbench between node 0 and
>> >>    node 1 back and forth.
>> >> 
>> >> With the patchset, the number of TLB flushing IPIs is reduced by
>> >> 99.1% during the test, and the number of pages migrated successfully
>> >> per second increases by 291.7%.
>> >> 
>> >> Xin Hao helped test the patchset on an ARM64 server with 128 cores
>> >> and 2 NUMA nodes.  Test results show that page migration performance
>> >> increases by up to 78%.
>> >> 
>> >> This patchset is based on mm-unstable 2023-02-10.
>> >
>> > And back in linux-next this week: I tried next-20230217 overnight.
>> >
>> > There is a deadlock in this patchset (and in previous versions: sorry
>> > it's taken me so long to report), but I think one that's easily solved.
>> >
>> > I've not bisected to precisely which patch (load can take several hours
>> > to hit the deadlock), but it doesn't really matter, and I expect that
>> > you can guess.
>> >
>> > My root and home filesystems are ext4 (4kB blocks with 4kB PAGE_SIZE),
>> > and so is the filesystem I'm testing, ext4 on /dev/loop0 on tmpfs.
>> > So, plenty of ext4 page cache and buffer_heads.
>> >
>> > Again and again, the deadlock is seen with buffer_migrate_folio_norefs(),
>> > either in kcompactd0 or in khugepaged trying to compact, or in both:
>> > it ends up calling __lock_buffer(), and that schedules away, waiting
>> > forever to get BH_lock.  I have not identified who is holding BH_lock,
>> > but I imagine a jbd2 journalling thread, and presume that it wants one
>> > of the folio locks which migrate_pages_batch() is already holding; or
>> > maybe it's all more convoluted than that.  Other tasks then back up
>> > waiting on those folio locks held in the batch.
>> >
>> > Never a problem with buffer_migrate_folio(), always with the "more
>> > careful" buffer_migrate_folio_norefs().  And the patch below fixes
>> > it for me: I've had enough hours with it now, on enough occasions,
>> > to be confident of that.
>> >
>> > Cc'ing Jan Kara, who knows buffer_migrate_folio_norefs() and jbd2
>> > very well, and I hope can assure us that there is an understandable
>> > deadlock here, from holding several random folio locks, then trying
>> > to lock buffers.  Cc'ing fsdevel, because there's a risk that mm
>> > folk think something is safe, when it's not sufficient to cope with
>> > the diversity of filesystems.  I hope nothing more than the below is
>> > needed (and I've had no other problems with the patchset: good job),
>> > but cannot be sure.
>> >
>> > [PATCH next] migrate_pages: fix deadlock on buffer heads
>> >
>> > When __buffer_migrate_folio() is called from buffer_migrate_folio_norefs(),
>> > force MIGRATE_ASYNC mode so that buffer_migrate_lock_buffers() will only
>> > trylock_buffer(), failing with -EAGAIN as usual if that does not succeed.
>> >
>> > Signed-off-by: Hugh Dickins <hughd@google.com>
>> >
>> > --- next-20230217/mm/migrate.c
>> > +++ fixed/mm/migrate.c
>> > @@ -748,7 +748,8 @@ static int __buffer_migrate_folio(struct
>> >  	if (folio_ref_count(src) != expected_count)
>> >  		return -EAGAIN;
>> >  
>> > -	if (!buffer_migrate_lock_buffers(head, mode))
>> > +	if (!buffer_migrate_lock_buffers(head,
>> > +			check_refs ? MIGRATE_ASYNC : mode))
>> >  		return -EAGAIN;
>> >  
>> >  	if (check_refs) {
>> 
>> Thank you very much for pointing this out and for the fix patch.
>> Today, my colleague Pengfei reported a deadlock bug to me.  It seems
>> that we cannot wait for writeback to complete while we hold locks on
>> some folios.  The patch below fixes that deadlock.  I don't know
>> whether it is related to the deadlock you ran into.  It appears that we
>> should avoid locking or waiting synchronously once we have locked more
>> than one folio.
>
> Thanks, I've checked now, on next-20230217 without my patch but
> with your patch below: it took a few hours, but still deadlocks
> as I described above, so it's not the same issue.
>
> Yes, that's a good principle, that we should avoid locking/waiting
> synchronously once we have locked one folio (hmm, above you say
> "more than one": I think we mean the same thing, we're just
> stating it differently, given how the code runs at present).
>
> I'm not a great fan of migrate_folio_unmap()'s arguments,
> "force" followed by "oh, but don't force" (but applaud the recent
> "avoid_force_lock" as much better than the original "force_lock").
> I haven't tried, but I wonder if you can avoid both those arguments,
> and both of these patches, by passing down an adjusted mode (perhaps
> MIGRATE_ASYNC, or perhaps a new mode) to all callees, once the first
> folio of a batch has been acquired (then restore to the original mode
> when starting a new batch).
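>
> Untested sketch of the idea (argument lists elided):
>
> 	enum migrate_mode batch_mode = mode;
>
> 	list_for_each_entry_safe(folio, folio2, from, lru) {
> 		rc = migrate_folio_unmap(..., batch_mode, ...);
> 		/* first folio of this batch acquired: stop blocking */
> 		batch_mode = MIGRATE_ASYNC;
> 	}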
>
> (My patch is weak in that it trylocks for buffer_head even on the
> first folio of a MIGRATE_SYNC norefs batch, although that has never
> given a problem historically: adjusting the mode after acquiring
> the first folio would correct that weakness.)

On second thought, I think it may be better to provide a fix that is as
simple as possible first, and then work on the more complex fix we
discussed above.  The simple fix is easy to review now, and we will have
more time to test and review the complex fix.

In the following fix, I disable migration batching except in
MIGRATE_ASYNC mode or for the split folios of a THP folio.  After that,
I will work on the complex fix to enable migration batching for all
modes.

What do you think about that?

Best Regards,
Huang, Ying

-------------------------------8<---------------------------------
From 8e475812eacd9f2eeac76776c2b1a17af3e59b89 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@intel.com>
Date: Tue, 21 Feb 2023 16:37:50 +0800
Subject: [PATCH] migrate_pages: fix deadlock in batched migration

Two deadlock bugs were reported for the migrate_pages() batching
series.  Thanks Hugh and Pengfei!  For example, consider the following
deadlock trace snippet:

 INFO: task kworker/u4:0:9 blocked for more than 147 seconds.
       Not tainted 6.2.0-rc4-kvm+ #1314
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 task:kworker/u4:0    state:D stack:0     pid:9     ppid:2      flags:0x00004000
 Workqueue: loop4 loop_rootcg_workfn
 Call Trace:
  <TASK>
  __schedule+0x43b/0xd00
  schedule+0x6a/0xf0
  io_schedule+0x4a/0x80
  folio_wait_bit_common+0x1b5/0x4e0
  ? __pfx_wake_page_function+0x10/0x10
  __filemap_get_folio+0x73d/0x770
  shmem_get_folio_gfp+0x1fd/0xc80
  shmem_write_begin+0x91/0x220
  generic_perform_write+0x10e/0x2e0
  __generic_file_write_iter+0x17e/0x290
  ? generic_write_checks+0x12b/0x1a0
  generic_file_write_iter+0x97/0x180
  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
  do_iter_readv_writev+0x13c/0x210
  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
  do_iter_write+0xf6/0x330
  vfs_iter_write+0x46/0x70
  loop_process_work+0x723/0xfe0
  loop_rootcg_workfn+0x28/0x40
  process_one_work+0x3cc/0x8d0
  worker_thread+0x66/0x630
  ? __pfx_worker_thread+0x10/0x10
  kthread+0x153/0x190
  ? __pfx_kthread+0x10/0x10
  ret_from_fork+0x29/0x50
  </TASK>

 INFO: task repro:1023 blocked for more than 147 seconds.
       Not tainted 6.2.0-rc4-kvm+ #1314
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 task:repro           state:D stack:0     pid:1023  ppid:360    flags:0x00004004
 Call Trace:
  <TASK>
  __schedule+0x43b/0xd00
  schedule+0x6a/0xf0
  io_schedule+0x4a/0x80
  folio_wait_bit_common+0x1b5/0x4e0
  ? compaction_alloc+0x77/0x1150
  ? __pfx_wake_page_function+0x10/0x10
  folio_wait_bit+0x30/0x40
  folio_wait_writeback+0x2e/0x1e0
  migrate_pages_batch+0x555/0x1ac0
  ? __pfx_compaction_alloc+0x10/0x10
  ? __pfx_compaction_free+0x10/0x10
  ? __this_cpu_preempt_check+0x17/0x20
  ? lock_is_held_type+0xe6/0x140
  migrate_pages+0x100e/0x1180
  ? __pfx_compaction_free+0x10/0x10
  ? __pfx_compaction_alloc+0x10/0x10
  compact_zone+0xe10/0x1b50
  ? lock_is_held_type+0xe6/0x140
  ? check_preemption_disabled+0x80/0xf0
  compact_node+0xa3/0x100
  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
  ? _find_first_bit+0x7b/0x90
  sysctl_compaction_handler+0x5d/0xb0
  proc_sys_call_handler+0x29d/0x420
  proc_sys_write+0x2b/0x40
  vfs_write+0x3a3/0x780
  ksys_write+0xb7/0x180
  __x64_sys_write+0x26/0x30
  do_syscall_64+0x3b/0x90
  entry_SYSCALL_64_after_hwframe+0x72/0xdc
 RIP: 0033:0x7f3a2471f59d
 RSP: 002b:00007ffe567f7288 EFLAGS: 00000217 ORIG_RAX: 0000000000000001
 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f3a2471f59d
 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000005
 RBP: 00007ffe567f72a0 R08: 0000000000000010 R09: 0000000000000010
 R10: 0000000000000010 R11: 0000000000000217 R12: 00000000004012e0
 R13: 00007ffe567f73e0 R14: 0000000000000000 R15: 0000000000000000
  </TASK>

The page migration task holds the lock of shmem folio A and is waiting
for the writeback of folio B, which belongs to the file system on the
loop block device, to complete.  Meanwhile, the loop worker task that
writes back folio B is waiting to lock shmem folio A, because folio A
backs folio B in the loop device.  Thus a deadlock is triggered.

In general, if we hold locks on folios other than the one we are
migrating, it is not safe to wait synchronously, for example, for
writeback to complete or to lock a buffer head.
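
Schematically (an illustration of the lock cycle, not the exact call
chain):

	/* page migration task (migrate_pages_batch): */
	folio_lock(folio_a);		/* shmem folio A, kept locked in batch */
	folio_wait_writeback(folio_b);	/* folio B on the loop device: sleeps ... */

	/* loop worker task, writing folio B back through folio A: */
	folio_lock(folio_a);		/* ... while this blocks forever */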

To fix the deadlock, in this patch we avoid batching page migration
except in MIGRATE_ASYNC mode or for the split folios of a THP folio.
In MIGRATE_ASYNC mode, synchronous waiting is avoided, and there is no
dependency among the split folios of a THP folio.

The fix can be improved by converting the migration mode from
synchronous to asynchronous once we hold locks on folios other than the
one we are migrating.  We will do that in the near future.
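
(A possible shape for that follow-up, using a hypothetical
locked_other_folios flag to mark that state:

	if (locked_other_folios)
		mode = MIGRATE_ASYNC;	/* no further synchronous waiting */
)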

Link: https://lore.kernel.org/linux-mm/87a6c8c-c5c1-67dc-1e32-eb30831d6e3d@google.com/
Link: https://lore.kernel.org/linux-mm/874jrg7kke.fsf@yhuang6-desk2.ccr.corp.intel.com/
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reported-by: Hugh Dickins <hughd@google.com>
Reported-by: "Xu, Pengfei" <pengfei.xu@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Stefan Roesch <shr@devkernel.io>
Cc: Tejun Heo <tj@kernel.org>
Cc: Xin Hao <xhao@linux.alibaba.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/migrate.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index ef68a1aff35c..bc04c34543f3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1937,7 +1937,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
 {
 	int rc, rc_gather;
-	int nr_pages;
+	int nr_pages, batch;
 	struct folio *folio, *folio2;
 	LIST_HEAD(folios);
 	LIST_HEAD(ret_folios);
@@ -1951,6 +1951,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				     mode, reason, &stats, &ret_folios);
 	if (rc_gather < 0)
 		goto out;
+
+	if (mode == MIGRATE_ASYNC)
+		batch = NR_MAX_BATCHED_MIGRATION;
+	else
+		batch = 1;
 again:
 	nr_pages = 0;
 	list_for_each_entry_safe(folio, folio2, from, lru) {
@@ -1961,11 +1966,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		}
 
 		nr_pages += folio_nr_pages(folio);
-		if (nr_pages > NR_MAX_BATCHED_MIGRATION)
+		if (nr_pages >= batch)
 			break;
 	}
-	if (nr_pages > NR_MAX_BATCHED_MIGRATION)
-		list_cut_before(&folios, from, &folio->lru);
+	if (nr_pages >= batch)
+		list_cut_before(&folios, from, &folio2->lru);
 	else
 		list_splice_init(from, &folios);
 	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
-- 
2.39.1



Thread overview: 23+ messages
2023-02-13 12:34 [PATCH -v5 0/9] migrate_pages(): batch TLB flushing Huang Ying
2023-02-13 12:34 ` [PATCH -v5 1/9] migrate_pages: organize stats with struct migrate_pages_stats Huang Ying
2023-02-13 12:34 ` [PATCH -v5 2/9] migrate_pages: separate hugetlb folios migration Huang Ying
2023-02-13 12:34 ` [PATCH -v5 3/9] migrate_pages: restrict number of pages to migrate in batch Huang Ying
2023-02-13 12:34 ` [PATCH -v5 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2023-02-13 12:34 ` [PATCH -v5 5/9] migrate_pages: batch _unmap and _move Huang Ying
2023-02-13 12:34 ` [PATCH -v5 6/9] migrate_pages: move migrate_folio_unmap() Huang Ying
2023-02-13 12:34 ` [PATCH -v5 7/9] migrate_pages: share more code between _unmap and _move Huang Ying
2023-02-13 12:34 ` [PATCH -v5 8/9] migrate_pages: batch flushing TLB Huang Ying
2023-02-13 12:34 ` [PATCH -v5 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code Huang Ying
2023-02-17 21:47 ` [PATCH -v5 0/9] migrate_pages(): batch TLB flushing Hugh Dickins
2023-02-20  9:28   ` Huang, Ying
2023-02-21  2:48     ` Hugh Dickins
2023-02-21  3:34       ` Huang, Ying
2023-02-21 22:15         ` Hugh Dickins
2023-02-21  4:30       ` Matthew Wilcox
2023-02-21  4:38         ` Huang, Ying
2023-02-21 14:04       ` Huang, Ying [this message]
2023-02-21 22:25         ` Hugh Dickins
2023-02-22  1:02           ` Huang, Ying
2023-02-27 11:06   ` Jan Kara
2023-02-28  1:13     ` Huang, Ying
2023-02-28  5:59       ` Hugh Dickins
