From: "Huang, Ying"
To: Zi Yan
Cc: , , Andrew Morton , Yang Shi , Baolin Wang , Oscar Salvador , "Matthew Wilcox"
Subject: Re: [RFC 3/6] mm/migrate_pages: restrict number of pages to migrate in batch
References: <20220921060616.73086-1-ying.huang@intel.com> <20220921060616.73086-4-ying.huang@intel.com> <46D92605-FED0-4473-9CBD-C3CB7DD46655@nvidia.com>
Date: Thu, 22 Sep 2022 09:15:44 +0800
In-Reply-To: <46D92605-FED0-4473-9CBD-C3CB7DD46655@nvidia.com> (Zi Yan's message of "Wed, 21 Sep 2022 12:10:37 -0400")
Message-ID: <87edw4ky4f.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Zi Yan writes:

> On 21 Sep 2022, at 2:06, Huang Ying wrote:
>
>> This is a preparation patch to batch the page unmapping and moving
>> for the normal pages and THP.
>>
>> If we had batched the page unmapping, all pages to be migrated would
>> be unmapped before copying the contents and flags of the pages. If
>> the number of pages that were passed to migrate_pages() was too large,
>> too many pages would be unmapped. Then, the execution of their
>> processes would be stopped for too long time. For example,
>> migrate_pages() syscall will call migrate_pages() with all pages of a
>> process. To avoid this possible issue, in this patch, we restrict the
>> number of pages to be migrated to be no more than HPAGE_PMD_NR. That
>> is, the influence is at the same level of THP migration.
>>
>> Signed-off-by: "Huang, Ying"
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Oscar Salvador
>> Cc: Matthew Wilcox
>> ---
>>  mm/migrate.c | 93 +++++++++++++++++++++++++++++++++++++---------------
>>  1 file changed, 67 insertions(+), 26 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 4a81e0bfdbcd..1077af858e36 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1439,32 +1439,7 @@ static inline int try_split_thp(struct page *page, struct list_head *split_pages
>>  	return rc;
>>  }
>>
>> -/*
>> - * migrate_pages - migrate the pages specified in a list, to the free pages
>> - * supplied as the target for the page migration
>> - *
>> - * @from:		The list of pages to be migrated.
>> - * @get_new_page:	The function used to allocate free pages to be used
>> - *			as the target of the page migration.
>> - * @put_new_page:	The function used to free target pages if migration
>> - *			fails, or NULL if no special handling is necessary.
>> - * @private:		Private data to be passed on to get_new_page()
>> - * @mode:		The migration mode that specifies the constraints for
>> - *			page migration, if any.
>> - * @reason:		The reason for page migration.
>> - * @ret_succeeded:	Set to the number of normal pages migrated successfully if
>> - *			the caller passes a non-NULL pointer.
>> - *
>> - * The function returns after 10 attempts or if no pages are movable any more
>> - * because the list has become empty or no retryable pages exist any more.
>> - * It is caller's responsibility to call putback_movable_pages() to return pages
>> - * to the LRU or free list only if ret != 0.
>> - *
>> - * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
>> - * an error code. The number of THP splits will be considered as the number of
>> - * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
>> - */
>> -int migrate_pages(struct list_head *from, new_page_t get_new_page,
>> +static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  		free_page_t put_new_page, unsigned long private,
>>  		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>>  {
>> @@ -1709,6 +1684,72 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  	return rc;
>>  }
>>
>> +/*
>> + * migrate_pages - migrate the pages specified in a list, to the free pages
>> + * supplied as the target for the page migration
>> + *
>> + * @from:		The list of pages to be migrated.
>> + * @get_new_page:	The function used to allocate free pages to be used
>> + *			as the target of the page migration.
>> + * @put_new_page:	The function used to free target pages if migration
>> + *			fails, or NULL if no special handling is necessary.
>> + * @private:		Private data to be passed on to get_new_page()
>> + * @mode:		The migration mode that specifies the constraints for
>> + *			page migration, if any.
>> + * @reason:		The reason for page migration.
>> + * @ret_succeeded:	Set to the number of normal pages migrated successfully if
>> + *			the caller passes a non-NULL pointer.
>> + *
>> + * The function returns after 10 attempts or if no pages are movable any more
>> + * because the list has become empty or no retryable pages exist any more.
>> + * It is caller's responsibility to call putback_movable_pages() to return pages
>> + * to the LRU or free list only if ret != 0.
>> + *
>> + * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
>> + * an error code. The number of THP splits will be considered as the number of
>> + * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
>> + */
>> +int migrate_pages(struct list_head *from, new_page_t get_new_page,
>> +		free_page_t put_new_page, unsigned long private,
>> +		enum migrate_mode mode, int reason, unsigned int *pret_succeeded)
>> +{
>> +	int rc, rc_gether = 0;
>> +	int ret_succeeded, ret_succeeded_gether = 0;
>> +	int nr_pages;
>> +	struct page *page;
>> +	LIST_HEAD(pagelist);
>> +	LIST_HEAD(ret_pages);
>> +
>> +again:
>> +	nr_pages = 0;
>> +	list_for_each_entry(page, from, lru) {
>> +		nr_pages += compound_nr(page);
>> +		if (nr_pages > HPAGE_PMD_NR)
>
> It is better to define a new MACRO like NR_MAX_BATCHED_MIGRATION to be
> HPAGE_PMD_NR. It makes code easier to understand and change.

OK. Will do that.
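
Just to show what I have in mind, an untested sketch on top of this patch
(the macro name simply follows your suggestion, nothing is final):

/*
 * Maximum number of base pages to unmap and move in one batch.  Keeping
 * this at HPAGE_PMD_NR keeps the impact on the mapping processes at the
 * same level as a single THP migration.
 */
#define NR_MAX_BATCHED_MIGRATION	HPAGE_PMD_NR

	nr_pages = 0;
	list_for_each_entry(page, from, lru) {
		/* compound_nr() counts the base pages of a compound page */
		nr_pages += compound_nr(page);
		if (nr_pages > NR_MAX_BATCHED_MIGRATION)
			break;
	}
	if (nr_pages > NR_MAX_BATCHED_MIGRATION)
		list_cut_before(&pagelist, from, &page->lru);
	else
		list_splice_init(from, &pagelist);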
Best Regards,
Huang, Ying

>> +			break;
>> +	}
>> +	if (nr_pages > HPAGE_PMD_NR)
>> +		list_cut_before(&pagelist, from, &page->lru);
>> +	else
>> +		list_splice_init(from, &pagelist);
>> +	rc = migrate_pages_batch(&pagelist, get_new_page, put_new_page, private,
>> +				 mode, reason, &ret_succeeded);
>> +	ret_succeeded_gether += ret_succeeded;
>> +	list_splice_tail_init(&pagelist, &ret_pages);
>> +	if (rc == -ENOMEM) {
>> +		rc_gether = rc;
>> +		goto out;
>> +	}
>> +	rc_gether += rc;
>> +	if (!list_empty(from))
>> +		goto again;
>> +out:
>> +	if (pret_succeeded)
>> +		*pret_succeeded = ret_succeeded_gether;
>> +	list_splice(&ret_pages, from);
>> +
>> +	return rc_gether;
>> +}
>> +
>>  struct page *alloc_migration_target(struct page *page, unsigned long private)
>>  {
>>  	struct folio *folio = page_folio(page);
>> --
>> 2.35.1
>
>
> --
> Best Regards,
> Yan, Zi