From: Eric Dumazet
Date: Fri, 24 Jun 2022 06:13:51 +0200
Subject: Re: [net] 4890b686f4: netperf.Throughput_Mbps -69.4% regression
To: Jakub Kicinski
Cc: Xin Long, Marcelo Ricardo Leitner, kernel test robot, Shakeel Butt,
    Soheil Hassas Yeganeh, LKML, Linux Memory Management List, network dev,
    linux-s390@vger.kernel.org, MPTCP Upstream,
    "linux-sctp @ vger . kernel . org", lkp@lists.01.org, kbuild test robot,
    Huang Ying, "Tang, Feng", zhengjun.xing@linux.intel.com,
    fengwei.yin@intel.com, Ying Xu
In-Reply-To: <20220623185730.25b88096@kernel.org>

On Fri, Jun 24, 2022 at 3:57 AM Jakub Kicinski wrote:
>
> On Thu, 23 Jun 2022 18:50:07 -0400 Xin Long wrote:
> > From the perf data, we can see __sk_mem_reduce_allocated() is the
> > function using the most CPU, more than before, and mem_cgroup APIs are
> > also called in this function. It means the mem cgroup must be enabled
> > in the test env, which may explain why I couldn't reproduce it.
> >
> > Commit 4890b686f4 ("net: keep sk->sk_forward_alloc as small as
> > possible") uses sk_mem_reclaim() (checking reclaimable >= PAGE_SIZE) to
> > reclaim the memory, which calls __sk_mem_reduce_allocated() *more
> > frequently* than before (checking reclaimable >= SK_RECLAIM_THRESHOLD).
> > It might be cheap when mem_cgroup_sockets_enabled is false, but I'm
> > not sure if it's still cheap when mem_cgroup_sockets_enabled is true.
> >
> > I think SCTP netperf could trigger this, as the CPU is the bottleneck
> > for SCTP netperf testing, which is more sensitive to the extra
> > function calls than TCP.
> >
> > Can we re-run this testing without mem cgroup enabled?
>
> FWIW I defer to Eric, thanks a lot for double checking the report
> and digging in!

I did tests with TCP + memcg and noticed a very small additional cost
in memcg functions, because of a suboptimal field layout.

Extract of an internal Google bug, update from June 9th:

--------------------------------
I have noticed a minor false sharing when fetching
(struct mem_cgroup)->css.parent, at offset 0xc0, because it shares the
cache line containing struct mem_cgroup.memory, at offset 0xd0.

Ideally, memcg->socket_pressure and memcg->parent should sit in a
read-mostly cache line.
-----------------------

But nothing there could explain a "-69.4% regression".

memcg has a very similar strategy of per-cpu reserves, with
MEMCG_CHARGE_BATCH being 32 pages per cpu.

It is not clear why SCTP with 10K writes would overflow this reserve
constantly.

Presumably memcg experts will have to rework structure alignments so
they can cope better with more charge/uncharge operations, because we
are not going back to gigantic per-socket reserves; that simply does
not scale.
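
To make the before/after concrete, here is a rough paraphrase of what
sk_mem_reclaim() in include/net/sock.h looks like around that commit;
the old constants and the exact reclaim chunk are from memory and may
be slightly off, so treat this as a sketch rather than the real diff:

    /* Before 4890b686f4: only reclaim once a large amount (on the
     * order of megabytes, SK_RECLAIM_THRESHOLD) is reclaimable, and
     * only in big chunks.
     */
    static inline void sk_mem_reclaim(struct sock *sk)
    {
            int reclaimable;

            if (!sk_has_account(sk))
                    return;

            reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);

            if (reclaimable >= SK_RECLAIM_THRESHOLD)
                    __sk_mem_reclaim(sk, SK_RECLAIM_CHUNK);
    }

    /* After 4890b686f4: reclaim as soon as at least one page is
     * reclaimable, so __sk_mem_reduce_allocated() (and, with memcg
     * enabled, the uncharge path) runs on nearly every release.
     */
    static inline void sk_mem_reclaim(struct sock *sk)
    {
            int reclaimable;

            if (!sk_has_account(sk))
                    return;

            reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);

            if (reclaimable >= (int)PAGE_SIZE)
                    __sk_mem_reclaim(sk, reclaimable);
    }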
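
On the layout point, this is the pattern from the bug extract above in
generic form; the field names are purely illustrative, not the actual
struct mem_cgroup layout:

    /* False sharing: a read-mostly pointer (think css.parent) lands on
     * the same 64-byte cache line as a counter (think memcg->memory)
     * that is dirtied on every charge/uncharge, so every reader of the
     * pointer pays for the writers' cache-line ping-pong.
     */
    struct memcg_like_bad {
            void *parent;                   /* read on every charge */
            unsigned long socket_pressure;  /* read on every charge */
            long usage;                     /* written on every charge/uncharge */
    };

    /* Keeping read-mostly fields together and pushing the hot counter
     * onto its own cache line avoids that.
     */
    struct memcg_like_better {
            void *parent;
            unsigned long socket_pressure;

            long usage __attribute__((aligned(64)));
    };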
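
And a back-of-the-envelope on the per-cpu reserve, assuming 4KB pages,
the 32-page MEMCG_CHARGE_BATCH mentioned above, and (pessimistically)
that none of the uncharges refill the local stock:

    one 10K write  -> ceil(10240 / 4096) = 3 pages to charge
    per-cpu stock  -> 32 pages
    32 / 3         -> the stock absorbs roughly 10 writes per refill

So even in that worst case only about one write in ten per cpu should
have to touch the shared page_counter, which is why it is surprising
that the reserve overflows constantly.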