From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mel Gorman
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka,
	Michal Hocko, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue
Date: Thu, 12 May 2022 09:50:41 +0100
Message-Id: <20220512085043.5234-5-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220512085043.5234-1-mgorman@techsingularity.net>
References: <20220512085043.5234-1-mgorman@techsingularity.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

The VM_BUG_ON check for a valid page can be avoided with a simple change
in the flow. The ZONE_BOOSTED_WATERMARK is unlikely in general and even
more unlikely if the page allocation failed so mark the branch unlikely.

Signed-off-by: Mel Gorman
Tested-by: Minchan Kim
Acked-by: Minchan Kim
---
 mm/page_alloc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1c4c54503a5d..b543333dce8f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3765,17 +3765,18 @@ struct page *rmqueue(struct zone *preferred_zone,
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
 							migratetype);
-	if (unlikely(!page))
-		return NULL;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
-	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
+	if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
 		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
 		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
 	}
 
-	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
+	if (unlikely(!page))
+		return NULL;
+
+	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 	return page;
 }
-- 
2.34.1
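
[Editor's illustration, not part of the patch.] For readers outside mm/, the
following minimal user-space sketch shows the "separate test+clear to avoid
unnecessary atomics" pattern the reordered code relies on, together with the
unlikely() hint this patch adds: the flag is read with a cheap relaxed load
first, and the atomic read-modify-write is issued only when the bit is
actually set. The names zone_flags, ZONE_BOOSTED_WATERMARK_BIT, wake_reclaim()
and maybe_wake_reclaim() are made up for illustration; C11 atomics stand in
for the kernel's test_bit()/clear_bit() and __builtin_expect() stands in for
unlikely().

/* Illustrative sketch only; builds with gcc/clang as C11. */
#include <stdatomic.h>
#include <stdio.h>

#define ZONE_BOOSTED_WATERMARK_BIT	0UL

static atomic_ulong zone_flags;

static void wake_reclaim(void)
{
	puts("kswapd-style wakeup");
}

static void maybe_wake_reclaim(void)
{
	unsigned long mask = 1UL << ZONE_BOOSTED_WATERMARK_BIT;

	/* Cheap test first: a relaxed load, no atomic RMW issued. */
	if (__builtin_expect(atomic_load_explicit(&zone_flags,
				memory_order_relaxed) & mask, 0)) {
		/* Pay for the atomic clear only when the bit was set. */
		atomic_fetch_and(&zone_flags, ~mask);
		wake_reclaim();
	}
}

int main(void)
{
	atomic_fetch_or(&zone_flags, 1UL << ZONE_BOOSTED_WATERMARK_BIT);
	maybe_wake_reclaim();	/* bit set: clears it and wakes */
	maybe_wake_reclaim();	/* bit already clear: no atomic RMW */
	return 0;
}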