Message-ID: <8a4a0426c81f13d70d2c82b7adbc957e3e953bf3.camel@intel.com>
Subject: Re: [PATCH v2 2/9] mm/vmscan: remove unneeded can_split_huge_page check
From: "ying.huang@intel.com"
To: Miaohe Lin, akpm@linux-foundation.org
Cc: songmuchun@bytedance.com, hch@infradead.org, willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Tue, 12 Apr 2022 08:52:30 +0800
In-Reply-To: <20220409093500.10329-3-linmiaohe@huawei.com>
References: <20220409093500.10329-1-linmiaohe@huawei.com> <20220409093500.10329-3-linmiaohe@huawei.com>

On Sat, 2022-04-09 at 17:34 +0800, Miaohe Lin wrote:
> We don't need to check can_split_folio() because folio_maybe_dma_pinned()
> is checked before, which already prevents long-term pinned pages from
> being swapped out, and we can live with short-term pinned pages. Without
> the can_split_folio() check we can simplify the code. Also, activate_locked
> can be changed to keep_locked, as this is only short-term pinning.
>
> Suggested-by: Huang, Ying
> Signed-off-by: Miaohe Lin

Looks good to me. Thanks!

Reviewed-by: Huang, Ying

Best Regards,
Huang, Ying

> ---
>  mm/vmscan.c | 22 ++++++++--------------
>  1 file changed, 8 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 4a76be47bed1..01f5db75a507 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1711,20 +1711,14 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>  			goto keep_locked;
>  		if (folio_maybe_dma_pinned(folio))
>  			goto keep_locked;
> -		if (PageTransHuge(page)) {
> -			/* cannot split THP, skip it */
> -			if (!can_split_folio(folio, NULL))
> -				goto activate_locked;
> -			/*
> -			 * Split pages without a PMD map right
> -			 * away. Chances are some or all of the
> -			 * tail pages can be freed without IO.
> -			 */
> -			if (!folio_entire_mapcount(folio) &&
> -			    split_folio_to_list(folio,
> -						page_list))
> -				goto activate_locked;
> -		}
> +		/*
> +		 * Split pages without a PMD map right
> +		 * away. Chances are some or all of the
> +		 * tail pages can be freed without IO.
> +		 */
> +		if (PageTransHuge(page) && !folio_entire_mapcount(folio) &&
> +		    split_folio_to_list(folio, page_list))
> +			goto keep_locked;
>  		if (!add_to_swap(page)) {
>  			if (!PageTransHuge(page))
>  				goto activate_locked_split;