From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Aug 2022 00:17:37 +0000
In-Reply-To: <20220822001737.4120417-1-shakeelb@google.com>
Message-Id: <20220822001737.4120417-4-shakeelb@google.com>
Mime-Version: 1.0
References: <20220822001737.4120417-1-shakeelb@google.com>
X-Mailer: git-send-email 2.37.1.595.g718a3a8f04-goog
Subject: [PATCH 3/3] memcg: increase MEMCG_CHARGE_BATCH to 64
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Cc: Michal Koutný, Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang,
 Oliver Sang, Andrew Morton, lkp@lists.01.org, cgroups@vger.kernel.org,
 linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Shakeel Butt
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

For several years, MEMCG_CHARGE_BATCH was kept at 32, but with bigger
machines and network-intensive workloads requiring throughput in Gbps,
32 is too small and makes the memcg charging path a bottleneck. For now,
increase it to 64 for easy acceptance into 6.0. We will need to revisit
this in the future as the demand for higher performance keeps growing.
Please note that the memcg charge path drains the per-cpu memcg charge
stock, so there should not be any oom behavior change. (A simplified
model of this batching scheme is sketched after the benchmark
description below.)

To evaluate the impact of this optimization, we ran the following
workload on a 72 CPU machine, in a three-level cgroup hierarchy with
the top level having memory.min and memory.low set up appropriately:
more specifically, memory.min equal to the size of the netperf binary
and memory.low double of that.
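
First, the promised sketch of the batching scheme. This is a simplified
userspace model of the per-cpu charge stock, not the mm/memcontrol.c
code: the consume_stock()/try_charge() names echo the kernel's, but
locking, draining under limit pressure, and real per-CPU placement are
all omitted.

  /*
   * Illustrative userspace model of the memcg per-cpu charge stock.
   * The real implementation lives in mm/memcontrol.c.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define MEMCG_CHARGE_BATCH 64U	/* was 32U before this patch */

  struct mem_cgroup {
  	unsigned long usage;	/* stands in for the page counters */
  };

  /* The kernel keeps one stock per CPU; one suffices for the model. */
  static struct {
  	struct mem_cgroup *cached;
  	unsigned int nr_pages;	/* pre-charged pages held locally */
  } stock;

  /* Fast path: serve small charges from the stock, no shared state. */
  static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
  {
  	if (nr_pages > MEMCG_CHARGE_BATCH)
  		return false;
  	if (stock.cached == memcg && stock.nr_pages >= nr_pages) {
  		stock.nr_pages -= nr_pages;
  		return true;
  	}
  	return false;
  }

  /*
   * Slow path: charge a whole batch against the shared counter (the
   * contended operation in the real kernel) and park the surplus in
   * the stock so subsequent small charges stay CPU-local.
   */
  static void try_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
  {
  	unsigned int batch = nr_pages > MEMCG_CHARGE_BATCH ?
  			     nr_pages : MEMCG_CHARGE_BATCH;

  	if (consume_stock(memcg, nr_pages))
  		return;
  	memcg->usage += batch;
  	if (stock.cached != memcg) {
  		/* the kernel drains the previous memcg's stock here */
  		stock.cached = memcg;
  		stock.nr_pages = 0;
  	}
  	stock.nr_pages += batch - nr_pages;
  }

  int main(void)
  {
  	struct mem_cgroup memcg = { 0 };
  	unsigned int i;

  	/* 128 single-page charges hit the shared counter only twice */
  	for (i = 0; i < 128; i++)
  		try_charge(&memcg, 1);
  	printf("usage=%lu stock=%u\n", memcg.usage, stock.nr_pages);
  	return 0;
  }

Doubling the batch from 32 to 64 halves how often a stream of small
charges falls through to the contended slow path, which is the effect
the workload below measures.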
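
The hierarchy for the run can be created along the following lines
under a cgroup2 mount. The paths, the 1M estimate for the netperf
binary size, and the leaf placement are illustrative, not taken from
the actual test setup:

  # three-level hierarchy, memory controller enabled at each level
  $ mkdir -p /sys/fs/cgroup/a/b/c
  $ echo +memory > /sys/fs/cgroup/cgroup.subtree_control
  $ echo +memory > /sys/fs/cgroup/a/cgroup.subtree_control
  $ echo +memory > /sys/fs/cgroup/a/b/cgroup.subtree_control
  # top level: memory.min ~ size of the netperf binary, memory.low 2x that
  $ echo 1M > /sys/fs/cgroup/a/memory.min
  $ echo 2M > /sys/fs/cgroup/a/memory.low
  # run the workload from the leaf
  $ echo $$ > /sys/fs/cgroup/a/b/c/cgroup.procs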

 $ netserver -6
 # 36 instances of netperf with following params
 $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Results (average throughput of netperf):
Without (6.0-rc1)	10482.7 Mbps
With patch		17064.7 Mbps (62.7% improvement)

With the patch, the throughput improved by 62.7%.

Signed-off-by: Shakeel Butt
Reported-by: kernel test robot
---
 include/linux/memcontrol.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4d31ce55b1c0..70ae91188e16 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -354,10 +354,11 @@ struct mem_cgroup {
 };
 
 /*
- * size of first charge trial. "32" comes from vmscan.c's magic value.
- * TODO: maybe necessary to use big numbers in big irons.
+ * size of first charge trial.
+ * TODO: maybe necessary to use big numbers in big irons or dynamic based on
+ * the workload.
  */
-#define MEMCG_CHARGE_BATCH 32U
+#define MEMCG_CHARGE_BATCH 64U
 
 extern struct mem_cgroup *root_mem_cgroup;
 
-- 
2.37.1.595.g718a3a8f04-goog