From: Shakeel Butt
Date: Mon, 22 Jun 2020 15:05:30 -0700
Subject: Re: [PATCH v6 17/19] mm: memcg/slab: use a single set of kmem_caches for all allocations
To: Roman Gushchin
Cc: Andrew Morton, Christoph Lameter, Johannes Weiner, Michal Hocko, Linux MM, Vlastimil Babka, Kernel Team, LKML
In-Reply-To: <20200622215800.GA326762@carbon.DHCP.thefacebook.com>
References: <20200608230654.828134-1-guro@fb.com> <20200608230654.828134-18-guro@fb.com> <20200622203739.GD301338@carbon.dhcp.thefacebook.com> <20200622211356.GF301338@carbon.dhcp.thefacebook.com> <20200622215800.GA326762@carbon.DHCP.thefacebook.com>

On Mon, Jun 22, 2020 at 2:58 PM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 02:28:54PM -0700, Shakeel Butt wrote:
> > On Mon, Jun 22, 2020 at 2:15 PM Roman Gushchin wrote:
> > >
> > > On Mon, Jun 22, 2020 at 02:04:29PM -0700, Shakeel Butt wrote:
> > > > On Mon, Jun 22, 2020 at 1:37 PM Roman Gushchin wrote:
> > > > >
> > > > > On Mon, Jun 22, 2020 at 12:21:28PM -0700, Shakeel Butt wrote:
> > > > > > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > > > > > >
> > > > > > > Instead of having two sets of kmem_caches: one for system-wide and
> > > > > > > non-accounted allocations and the second one shared by all accounted
> > > > > > > allocations, we can use just one.
> > > > > > >
> > > > > > > The idea is simple: space for obj_cgroup metadata can be allocated
> > > > > > > on demand and filled only for accounted allocations.
> > > > > > >
> > > > > > > It allows us to remove a bunch of code which is required to handle
> > > > > > > kmem_cache clones for accounted allocations.
> > > > > > > There is no more need
> > > > > > > to create them, accumulate statistics, propagate attributes, etc.
> > > > > > > It's quite a significant simplification.
> > > > > > >
> > > > > > > Also, because the total number of slab_caches is reduced almost twice
> > > > > > > (not all kmem_caches have a memcg clone), some additional memory
> > > > > > > savings are expected. On my devvm it additionally saves about 3.5%
> > > > > > > of slab memory.
> > > > > > >
> > > > > > > Suggested-by: Johannes Weiner
> > > > > > > Signed-off-by: Roman Gushchin
> > > > > > > Reviewed-by: Vlastimil Babka
> > > > > > > ---
> > > > > >
> > > > > > [snip]
> > > > > >
> > > > > > > static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> > > > > > >                                               struct obj_cgroup *objcg,
> > > > > > > -                                             size_t size, void **p)
> > > > > > > +                                             gfp_t flags, size_t size,
> > > > > > > +                                             void **p)
> > > > > > > {
> > > > > > >         struct page *page;
> > > > > > >         unsigned long off;
> > > > > > >         size_t i;
> > > > > > >
> > > > > > > +       if (!objcg)
> > > > > > > +               return;
> > > > > > > +
> > > > > > > +       flags &= ~__GFP_ACCOUNT;
> > > > > > >         for (i = 0; i < size; i++) {
> > > > > > >                 if (likely(p[i])) {
> > > > > > >                         page = virt_to_head_page(p[i]);
> > > > > > > +
> > > > > > > +                       if (!page_has_obj_cgroups(page) &&
> > > > > >
> > > > > > The page is already linked into the kmem_cache, so don't you need
> > > > > > synchronization for memcg_alloc_page_obj_cgroups()?
> > > > >
> > > > > Hm, yes, in theory we need it. I guess the reason why I've never seen
> > > > > any issues here is the SLUB percpu partial list.
> > > > >
> > > > > So in theory we need something like:
> > > > >
> > > > > diff --git a/mm/slab.h b/mm/slab.h
> > > > > index 0a31600a0f5c..44bf57815816 100644
> > > > > --- a/mm/slab.h
> > > > > +++ b/mm/slab.h
> > > > > @@ -237,7 +237,10 @@ static inline int memcg_alloc_page_obj_cgroups(struct page *page,
> > > > >         if (!vec)
> > > > >                 return -ENOMEM;
> > > > >
> > > > > -       page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
> > > > > +       if (cmpxchg(&page->obj_cgroups, 0,
> > > > > +                   (struct obj_cgroup **) ((unsigned long)vec | 0x1UL)))
> > > > > +               kfree(vec);
> > > > > +
> > > > >         return 0;
> > > > > }
> > > > >
> > > > > But I wonder if we might put it under #ifdef CONFIG_SLAB?
> > > > > Or any other ideas how to make it less expensive?
> > > > >
> > > > > > What's the reason to remove this from charge_slab_page()?
> > > > >
> > > > > Because at charge_slab_page() we don't know if we'll ever need
> > > > > page->obj_cgroups. Some caches might have only a few or even zero
> > > > > accounted objects.
> > > >
> > > > If slab_pre_alloc_hook() returns a non-NULL objcg then we definitely
> > > > need page->obj_cgroups. The charge_slab_page() happens between
> > > > slab_pre_alloc_hook() & slab_post_alloc_hook(), so we should be able
> > > > to tell if page->obj_cgroups is needed.
> > >
> > > Yes, but the opposite is not always true: we can reuse the existing page
> > > without allocated page->obj_cgroups. In this case charge_slab_page() is
> > > not involved at all.
> >
> > Hmm yeah, you are right. I missed that.
> >
> > > Or do you mean that we can minimize the amount of required synchronization
> > > by allocating some obj_cgroups vectors from charge_slab_page()?
> >
> > One optimization would be to always pre-allocate page->obj_cgroups for
> > kmem_caches with SLAB_ACCOUNT.
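As a userspace illustration of the cmpxchg()-based once-only initialization in the diff above: whichever thread's compare-and-swap succeeds installs its vector, and a racing loser frees its own copy and uses the winner's. This is a sketch with C11 atomics and made-up names (`install_once`, `slot`), not the kernel code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* One shared slot, populated on demand by whichever thread gets there
 * first -- the analogue of page->obj_cgroups in the diff above. */
static _Atomic(void *) slot = NULL;

/* Return the installed vector, allocating it if nobody has yet. */
void *install_once(size_t bytes)
{
    void *cur = atomic_load(&slot);
    if (cur)
        return cur;             /* already populated, fast path */

    void *vec = calloc(1, bytes);
    if (!vec)
        return NULL;

    void *expected = NULL;
    if (atomic_compare_exchange_strong(&slot, &expected, vec))
        return vec;             /* we won the race, our vector is live */

    free(vec);                  /* somebody beat us; drop our copy */
    return expected;            /* the failed CAS wrote the winner here */
}
```

The key property, same as in the kernel diff, is that losing the race is not an error: the loser's allocation is simply discarded and both callers see the same vector afterwards.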
>
> Even this is not completely memory overhead-free, because processes belonging
> to the root cgroup and kthreads might allocate from such a cache.
>

Yes, it is not completely memory overhead-free, but please note that in the
containerized world, running in the root container is discouraged, and for
SLAB_ACCOUNT kmem_caches, allocations from root-container processes and
kthreads should be very rare.

> Anyway, I think I'll go with cmpxchg() for now and will think about possible
> optimizations later.

I agree with thinking about optimizations (particularly such heuristic-based
optimizations) later.
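For illustration, the SLAB_ACCOUNT pre-allocation heuristic discussed above could be sketched in userspace roughly like this (`toy_cache`, `toy_page`, and `toy_page_alloc` are hypothetical names invented here; the real kernel structures and flags differ):

```c
#include <assert.h>
#include <stdlib.h>

#define TOY_SLAB_ACCOUNT 0x1u   /* hypothetical per-cache flag bit */

struct toy_cache {
    unsigned int flags;
    size_t objs_per_page;
};

struct toy_page {
    void **obj_cgroups;         /* NULL until first accounted allocation */
};

/* Allocate a fresh "slab page". For caches flagged as mostly-accounted,
 * eagerly attach the per-object metadata vector; for everything else,
 * defer it until an accounted allocation actually needs it. */
struct toy_page *toy_page_alloc(const struct toy_cache *c)
{
    struct toy_page *page = calloc(1, sizeof(*page));
    if (!page)
        return NULL;

    if (c->flags & TOY_SLAB_ACCOUNT) {
        page->obj_cgroups = calloc(c->objs_per_page, sizeof(void *));
        if (!page->obj_cgroups) {
            free(page);
            return NULL;
        }
    }
    return page;
}
```

The trade-off is the one raised in the thread: eager allocation avoids the cmpxchg() on the hot path for caches where most objects are accounted, at the cost of wasted vectors when root-cgroup processes or kthreads allocate from such a cache.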