From: "Huang, Ying" <ying.huang@intel.com>
To: Douglas Anderson <dianders@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Christian Brauner <brauner@kernel.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Yu Zhao <yuzhao@google.com>,
linux-fsdevel@vger.kernel.org,
Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v2 3/4] migrate_pages: Don't wait forever locking pages in MIGRATE_SYNC_LIGHT
Date: Sun, 23 Apr 2023 15:59:14 +0800
Message-ID: <87h6t7kp0t.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <20230421151135.v2.3.Ia86ccac02a303154a0b8bc60567e7a95d34c96d3@changeid> (Douglas Anderson's message of "Fri, 21 Apr 2023 15:12:47 -0700")
Douglas Anderson <dianders@chromium.org> writes:
> The MIGRATE_SYNC_LIGHT mode is intended to block for things that will
> finish quickly but not for things that will take a long time. Exactly
> how long is too long is not well defined, but waits of tens of
> milliseconds are likely non-ideal.
>
> Waiting on the folio lock in isolate_movable_page() is something that
> usually is pretty quick, but is not officially bounded. Nothing stops
> another process from holding a folio lock while doing an expensive
> operation. Having an unbounded wait like this is not within the design
> goals of MIGRATE_SYNC_LIGHT.
>
> When putting a Chromebook under memory pressure (opening over 90 tabs
> on a 4GB machine) it was fairly easy to see delays waiting for the
> lock of > 100 ms. While the laptop wasn't amazingly usable in this
> state, it was still limping along and this state isn't something
> artificial. Sometimes we simply end up with a lot of memory pressure.
>
> Putting the same Chromebook under memory pressure while it was running
> Android apps (though not stressing them) showed a much worse result
> (NOTE: this was on an older kernel but the codepaths here are
> similar). Android apps on ChromeOS currently run from a 128K-block,
> zlib-compressed, loopback-mounted squashfs disk. If we get a page
> fault from something backed by the squashfs filesystem we could end up
> holding a folio lock while reading enough from disk to decompress 128K
> (and then decompressing it using the somewhat slow zlib algorithms).
> That reading goes through the ext4 subsystem (because it's a loopback
> mount) before eventually ending up in the block subsystem. This extra
> jaunt adds extra overhead. Without much work I could see cases where
> we ended up blocked on a folio lock for over a second. With more
> extreme memory pressure I could see up to 25 seconds.
>
> Let's bound the amount of time we can wait for the folio lock. The
> SYNC_LIGHT migration mode can already handle failure for things that
> are slow, so adding this timeout in is fairly straightforward.
>
> With this timeout, it can be seen that kcompactd can move on to more
> productive tasks if it's taking a long time to acquire a lock.
What is the maximum wait time of folio_lock_timeout()?
> NOTE: The reason I started digging into this isn't because some
> benchmark had gone awry, but because we've received in-the-field crash
> reports where we have a hung task waiting on the page lock (which is
> the equivalent code path on old kernels). While the root cause of
> those crashes is likely unrelated and won't be fixed by this patch,
> analyzing those crash reports did point out this unbounded wait and it
> seemed like something good to fix.
>
> ALSO NOTE: the timeout mechanism used here uses "jiffies" and we also
> will retry up to 7 times. That doesn't give us much accuracy in
> specifying the timeout. On 1000 Hz machines we'll end up timing out in
> 7-14 ms. On 100 Hz machines we'll end up in 70-140 ms. Given that we
> don't have a strong definition of how long "too long" is, this is
> probably OK.
You can use HZ to cope with different configurations. It doesn't help
much if your target is 1 ms, but I think it may be possible to set the
timeout longer than that in the future, so a more general definition
looks better.
Best Regards,
Huang, Ying
> Suggested-by: Mel Gorman <mgorman@techsingularity.net>
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
>
> Changes in v2:
> - Keep unbounded delay in "SYNC", delay with a timeout in "SYNC_LIGHT"
>
> mm/migrate.c | 20 +++++++++++++++++++-
> 1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index db3f154446af..60982df71a93 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -58,6 +58,23 @@
>
> #include "internal.h"
>
> +/* Returns the schedule timeout for a non-async mode */
> +static long timeout_for_mode(enum migrate_mode mode)
> +{
> + /*
> + * We'll always return 1 jiffy as the timeout. Since all places using
> + * this timeout are in a retry loop this means that the maximum time
> + * we might block is actually NR_MAX_MIGRATE_SYNC_RETRY jiffies.
> + * If a jiffy is 1 ms that's 7 ms, though with the accuracy of the
> + * timeouts it often ends up more like 14 ms; if a jiffy is 10 ms
> + * that's 70-140 ms.
> + */
> + if (mode == MIGRATE_SYNC_LIGHT)
> + return 1;
> +
> + return MAX_SCHEDULE_TIMEOUT;
> +}
> +
> bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> {
> struct folio *folio = folio_get_nontail_page(page);
> @@ -1162,7 +1179,8 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
> if (current->flags & PF_MEMALLOC)
> goto out;
>
> - folio_lock(src);
> + if (folio_lock_timeout(src, timeout_for_mode(mode)))
> + goto out;
> }
> locked = true;