From: Shakeel Butt
Date: Mon, 22 Jun 2020 14:28:54 -0700
Subject: Re: [PATCH v6 17/19] mm: memcg/slab: use a single set of kmem_caches for all allocations
To: Roman Gushchin
Cc: Andrew Morton, Christoph Lameter, Johannes Weiner, Michal Hocko, Linux MM, Vlastimil Babka, Kernel Team, LKML
References: <20200608230654.828134-1-guro@fb.com> <20200608230654.828134-18-guro@fb.com> <20200622203739.GD301338@carbon.dhcp.thefacebook.com> <20200622211356.GF301338@carbon.dhcp.thefacebook.com>
In-Reply-To: <20200622211356.GF301338@carbon.dhcp.thefacebook.com>

On Mon, Jun 22, 2020 at 2:15 PM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 02:04:29PM -0700, Shakeel Butt wrote:
> > On Mon, Jun 22, 2020 at 1:37 PM Roman Gushchin wrote:
> > >
> > > On Mon, Jun 22, 2020 at 12:21:28PM -0700, Shakeel Butt wrote:
> > > > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > > > >
> > > > > Instead of having two sets of kmem_caches: one for system-wide and
> > > > > non-accounted allocations and the second one shared by all accounted
> > > > > allocations, we can use just one.
> > > > >
> > > > > The idea is simple: space for obj_cgroup metadata can be allocated
> > > > > on demand and filled only for accounted allocations.
> > > > >
> > > > > It allows us to remove a bunch of code which is required to handle
> > > > > kmem_cache clones for accounted allocations. There is no more need
> > > > > to create them, accumulate statistics, propagate attributes, etc.
> > > > > It's quite a significant simplification.
> > > > >
> > > > > Also, because the total number of slab_caches is roughly halved
> > > > > (not all kmem_caches have a memcg clone), some additional memory
> > > > > savings are expected. On my devvm it additionally saves about 3.5%
> > > > > of slab memory.
> > > > >
> > > > > Suggested-by: Johannes Weiner
> > > > > Signed-off-by: Roman Gushchin
> > > > > Reviewed-by: Vlastimil Babka
> > > > > ---
> > > >
> > > > [snip]
> > > >
> > > > >  static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> > > > >                                                struct obj_cgroup *objcg,
> > > > > -                                              size_t size, void **p)
> > > > > +                                              gfp_t flags, size_t size,
> > > > > +                                              void **p)
> > > > >  {
> > > > >          struct page *page;
> > > > >          unsigned long off;
> > > > >          size_t i;
> > > > >
> > > > > +        if (!objcg)
> > > > > +                return;
> > > > > +
> > > > > +        flags &= ~__GFP_ACCOUNT;
> > > > >          for (i = 0; i < size; i++) {
> > > > >                  if (likely(p[i])) {
> > > > >                          page = virt_to_head_page(p[i]);
> > > > > +
> > > > > +                        if (!page_has_obj_cgroups(page) &&
> > > >
> > > > The page is already linked into the kmem_cache; don't you need
> > > > synchronization for memcg_alloc_page_obj_cgroups()?
> > >
> > > Hm, yes, in theory we need it. I guess the reason why I've never seen
> > > any issues here is the SLUB percpu partial list.
> > >
> > > So in theory we need something like:
> > >
> > > diff --git a/mm/slab.h b/mm/slab.h
> > > index 0a31600a0f5c..44bf57815816 100644
> > > --- a/mm/slab.h
> > > +++ b/mm/slab.h
> > > @@ -237,7 +237,10 @@ static inline int memcg_alloc_page_obj_cgroups(struct page *page,
> > >          if (!vec)
> > >                  return -ENOMEM;
> > >
> > > -        page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
> > > +        if (cmpxchg(&page->obj_cgroups, 0,
> > > +                    (struct obj_cgroup **) ((unsigned long)vec | 0x1UL)))
> > > +                kfree(vec);
> > > +
> > >          return 0;
> > >  }
> > >
> > > But I wonder if we might put it under #ifdef CONFIG_SLAB?
> > > Or any other ideas how to make it less expensive?
> > >
> > > > What's the reason to remove this from charge_slab_page()?
> > >
> > > Because at charge_slab_page() we don't know if we'll ever need
> > > page->obj_cgroups. Some caches might have only a few or even zero
> > > accounted objects.
> >
> > If slab_pre_alloc_hook() returns a non-NULL objcg then we definitely
> > need page->obj_cgroups. The charge_slab_page() happens between
> > slab_pre_alloc_hook() & slab_post_alloc_hook(), so we should be able
> > to tell if page->obj_cgroups is needed.
>
> Yes, but the opposite is not always true: we can reuse an existing page
> without an allocated page->obj_cgroups. In this case charge_slab_page()
> is not involved at all.

Hmm yeah, you are right. I missed that.

> Or do you mean that we can minimize the amount of required synchronization
> by allocating some obj_cgroups vectors from charge_slab_page()?

One optimization would be to always pre-allocate page->obj_cgroups for
kmem_caches with SLAB_ACCOUNT.