From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Subject: [PATCH v2 0/3] memcg: optimize charge codepath
Date: Thu, 25 Aug 2022 00:05:03 +0000
Message-Id: <20220825000506.239406-1-shakeelb@google.com>
Cc: Michal Koutný, Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang,
 Oliver Sang, Andrew Morton, lkp@lists.01.org, cgroups@vger.kernel.org,
 linux-mm@kvack.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shakeel Butt

The Linux networking stack recently moved from a very old per-socket
pre-charge cache to a per-cpu cache, to avoid pre-charge fragmentation
and unwarranted OOMs. One impact of this change is that the memcg
charging codepath can become a bottleneck for network traffic
workloads. The kernel test robot has reported this regression as
well [1]. This patch series improves memcg charging for such workloads.

The series implements three optimizations (rough sketches of each are
included at the end of this letter):

 (A) Reduce atomic ops in the page counter update path.
 (B) Rearrange the fields of struct page_counter to eliminate false
     sharing between usage and high.
 (C) Increase the memcg charge batch to 64.

To evaluate the impact of these optimizations, we ran the following
workload on a 72-CPU machine, first in the root memcg and then in a
three-level cgroup hierarchy with memory.min and memory.low configured
appropriately at the top level (see the setup sketch below):

 $ netserver -6
 # 36 instances of netperf with the following params
 $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Results (average throughput of netperf):

 1. root memcg          21694.8 Mbps
 2. 6.0-rc1             10482.7 Mbps (-51.6%)
 3. 6.0-rc1 + (A)       14542.5 Mbps (-32.9%)
 4. 6.0-rc1 + (B)       12413.7 Mbps (-42.7%)
 5. 6.0-rc1 + (C)       17063.7 Mbps (-21.3%)
 6. 6.0-rc1 + (A+B+C)   20120.3 Mbps (-7.2%)

With all three optimizations, the memcg overhead of this workload
drops from 51.6% to just 7.2%.
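For reference, the cgroup side of that setup looks roughly like the
following (cgroup v2; the names a/b/c and the min/low values are
illustrative, not the exact ones used for the numbers above):

 $ cd /sys/fs/cgroup
 $ echo +memory > cgroup.subtree_control
 $ mkdir -p a/b/c
 $ echo +memory > a/cgroup.subtree_control
 $ echo +memory > a/b/cgroup.subtree_control
 $ echo 10G > a/memory.min    # protection at the top level only
 $ echo 10G > a/memory.low
 $ echo $$ > a/b/c/cgroup.procs   # then run netserver/netperf here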
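To make (A) concrete: every page counter update propagates the
effective min/low protection up the hierarchy, paying atomic
read-modify-write ops even when no protection is configured at all.
The change has roughly the following shape in
propagate_protected_usage() in mm/page_counter.c (a simplified sketch,
not the literal patch; the low side is handled identically):

  static void propagate_protected_usage(struct page_counter *c,
					unsigned long usage)
  {
	unsigned long protected, old_protected;
	long delta;

	if (!c->parent)
		return;

	/* Only pay for the atomic xchg/add when the effective
	 * protection actually changed; when no memory.min is set
	 * anywhere, this is a plain read and compare. */
	protected = min(usage, READ_ONCE(c->min));
	old_protected = atomic_long_read(&c->min_usage);
	if (protected != old_protected) {
		old_protected = atomic_long_xchg(&c->min_usage, protected);
		delta = protected - old_protected;
		if (delta)
			atomic_long_add(delta, &c->parent->children_min_usage);
	}
  }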
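For (B), note that usage is written on every charge and uncharge while
high is only read on that path; if the two share a cache line, every
charge on one CPU invalidates the line that the high check reads on
all the others. A sketch of the new grouping (abridged struct; the
alignment annotation here is illustrative, the actual patch uses a
compiler alignment option on the struct, see the changelog below):

  struct page_counter {
	/* Contended: written on every charge and uncharge. */
	atomic_long_t usage;

	/* Read-mostly on the charge path; keep on a different
	 * cache line than usage. */
	unsigned long high ____cacheline_aligned_in_smp;
	unsigned long max;
	unsigned long min;
	unsigned long low;

	struct page_counter *parent;
  };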
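And (C) is a one-liner in include/linux/memcontrol.h. Charges are
served from a per-cpu stock, and the shared, hierarchy-wide atomic
counters are only touched when refilling it; with 4KiB pages, going
from a batch of 32 to 64 means each CPU does that once per 256KiB
charged instead of once per 128KiB:

  #define MEMCG_CHARGE_BATCH 64U	/* was 32U */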
[1] https://lore.kernel.org/linux-mm/20220619150456.GB34471@xsang-OptiPlex-9020/

Changes since v1:
- Commit message updates
- Instead of explicit padding, use a compiler alignment option on the
  struct

Shakeel Butt (3):
  mm: page_counter: remove unneeded atomic ops for low/min
  mm: page_counter: rearrange struct page_counter fields
  memcg: increase MEMCG_CHARGE_BATCH to 64

 include/linux/memcontrol.h   |  7 ++++---
 include/linux/page_counter.h | 34 +++++++++++++++++++++++-----------
 mm/page_counter.c            | 13 ++++++-------
 3 files changed, 33 insertions(+), 21 deletions(-)

--
2.37.1.595.g718a3a8f04-goog