From: gregkh@linuxfoundation.org
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, kernel test robot,
 Jann Horn, David Rientjes, Joonsoo Kim, Christoph Lameter,
 Linus Torvalds
Subject: [PATCH 5.10 033/290] Revert "mm, slub: consider rest of partial list if acquire_slab() fails"
Date: Mon, 15 Mar 2021 14:52:06 +0100
Message-Id: <20210315135543.045652624@linuxfoundation.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210315135541.921894249@linuxfoundation.org>
References: <20210315135541.921894249@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Greg Kroah-Hartman

From: Linus Torvalds

commit 9b1ea29bc0d7b94d420f96a0f4121403efc3dd85 upstream.

This reverts commit 8ff60eb052eeba95cfb3efe16b08c9199f8121cf.

The kernel test robot reports a huge performance regression due to the
commit, and the reason seems fairly straightforward: when there is
contention on the page list (which is what causes acquire_slab() to
fail), we do _not_ want to just loop and try again, because that will
transfer the contention to the 'n->list_lock' spinlock we hold, and
just make things even worse.

This is admittedly likely a problem only on big machines - the kernel
test robot report comes from a 96-thread dual socket Intel Xeon Gold
6252 setup, but the regression there really is quite noticeable:

   -47.9% regression of stress-ng.rawpkt.ops_per_sec

and the commit that was marked as being fixed (7ced37197196: "slub:
Acquire_slab() avoid loop") actually did the loop exit early very
intentionally (the hint being that "avoid loop" part of that commit
message), exactly to avoid this issue.
The correct thing to do may be to pick some kind of reasonable middle
ground: instead of breaking out of the loop on the very first sign of
contention, or trying over and over and over again, the right thing may
be to re-try _once_, and then give up on the second failure (or pick
your favorite value for "once"..).  (A standalone sketch of this idea
follows the diff below.)

Reported-by: kernel test robot
Link: https://lore.kernel.org/lkml/20210301080404.GF12822@xsang-OptiPlex-9020/
Cc: Jann Horn
Cc: David Rientjes
Cc: Joonsoo Kim
Acked-by: Christoph Lameter
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/slub.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1971,7 +1971,7 @@ static void *get_partial_node(struct kme
 
 		t = acquire_slab(s, n, page, object == NULL, &objects);
 		if (!t)
-			continue; /* cmpxchg raced */
+			break;
 
 		available += objects;
 		if (!object) {
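
For illustration only, here is a minimal standalone sketch of the
"re-try _once_" middle ground suggested above. It is not part of this
revert: the names try_acquire(), grab_with_bounded_retry() and
MAX_RETRIES are invented for the sketch, and pthread_mutex_trylock()
in userspace merely stands in for the cmpxchg race that makes
acquire_slab() fail under contention.

#include <pthread.h>
#include <stdio.h>

#define MAX_RETRIES 1	/* "pick your favorite value for 'once'" */

static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for acquire_slab(): fails if someone else holds the slot. */
static int try_acquire(void)
{
	return pthread_mutex_trylock(&slot_lock) == 0;
}

/*
 * Retry a bounded number of times on contention, then give up: a
 * compromise between bailing on the first failure (the 'break' this
 * patch restores) and spinning indefinitely (the reverted 'continue',
 * which keeps the contention alive while a lock is held).
 */
static int grab_with_bounded_retry(void)
{
	int retries = MAX_RETRIES;

	for (;;) {
		if (try_acquire())
			return 1;	/* acquired */
		if (retries-- == 0)
			return 0;	/* still contended: give up */
	}
}

int main(void)
{
	if (grab_with_bounded_retry()) {
		puts("acquired");
		pthread_mutex_unlock(&slot_lock);
	} else {
		puts("contended, gave up");
	}
	return 0;
}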