Date: Thu, 19 Aug 2021 16:55:55 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Chen Huang
Cc: Roman Gushchin, Muchun Song, Wang Hai, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, stable@vger.kernel.org, Andrew Morton,
	Alexei Starovoitov
Subject: Re: [PATCH 5.10.y 01/11] mm: memcontrol: Use helpers to read page's memcg data
In-Reply-To: <9e946879-8a6e-6b86-9d8b-54a17976c6be@huawei.com>
References: <20210816072147.3481782-1-chenhuang5@huawei.com>
	<20210816072147.3481782-2-chenhuang5@huawei.com>
	<0d3c6aa4-be05-3c93-bdcd-ac30788d82bd@huawei.com>
	<9e946879-8a6e-6b86-9d8b-54a17976c6be@huawei.com>
On Thu, Aug 19, 2021 at 07:43:37PM +0800, Chen Huang wrote:
>
>
> On 2021/8/17 14:14, Greg Kroah-Hartman wrote:
> > On Tue, Aug 17, 2021 at 09:45:00AM +0800, Chen Huang wrote:
> >>
> >>
> >> On 2021/8/16 21:35, Greg Kroah-Hartman wrote:
> >>> On Mon, Aug 16, 2021 at 09:21:11PM +0800, Chen Huang wrote:
> >>>>
> >>>>
> >>>> On 2021/8/16 16:34, Greg Kroah-Hartman wrote:
> >>>>> On Mon, Aug 16, 2021 at 07:21:37AM +0000, Chen Huang wrote:
> >>>>>> From: Roman Gushchin
> >>>>>
> >>>>> What is the git commit id of this patch in Linus's tree?
> >>>>>
> >>>>>>
> >>>>>> Patch series "mm: allow mapping accounted kernel pages to userspace", v6.
> >>>>>>
> >>>>>> Currently a non-slab kernel page which has been charged to a memory cgroup
> >>>>>> can't be mapped to userspace.  The underlying reason is simple: PageKmemcg
> >>>>>> flag is defined as a page type (like buddy, offline, etc), so it takes a
> >>>>>> bit from a page->mapped counter.  Pages with a type set can't be mapped to
> >>>>>> userspace.
> >>>>>>
> >>>>>> But in general the kmemcg flag has nothing to do with mapping to
> >>>>>> userspace.  It only means that the page has been accounted by the page
> >>>>>> allocator, so it has to be properly uncharged on release.
> >>>>>>
> >>>>>> Some bpf maps are mapping the vmalloc-based memory to userspace, and their
> >>>>>> memory can't be accounted because of this implementation detail.
> >>>>>>
> >>>>>> This patchset removes this limitation by moving the PageKmemcg flag into
> >>>>>> one of the free bits of the page->mem_cgroup pointer.  Also it formalizes
> >>>>>> accesses to the page->mem_cgroup and page->obj_cgroups using new helpers,
> >>>>>> adds several checks and removes a couple of obsolete functions.  As the
> >>>>>> result the code became more robust with fewer open-coded bit tricks.
> >>>>>>
> >>>>>> This patch (of 4):
> >>>>>>
> >>>>>> Currently there are many open-coded reads of the page->mem_cgroup pointer,
> >>>>>> as well as a couple of read helpers, which are barely used.
> >>>>>>
> >>>>>> It creates an obstacle on a way to reuse some bits of the pointer for
> >>>>>> storing additional bits of information.  In fact, we already do this for
> >>>>>> slab pages, where the last bit indicates that a pointer has an attached
> >>>>>> vector of objcg pointers instead of a regular memcg pointer.
> >>>>>>
> >>>>>> This commit uses 2 existing helpers and introduces a new helper to
> >>>>>> convert all read sides to calls of these helpers:
> >>>>>>   struct mem_cgroup *page_memcg(struct page *page);
> >>>>>>   struct mem_cgroup *page_memcg_rcu(struct page *page);
> >>>>>>   struct mem_cgroup *page_memcg_check(struct page *page);
> >>>>>>
> >>>>>> page_memcg_check() is intended to be used in cases when the page can be a
> >>>>>> slab page and have a memcg pointer pointing at objcg vector.  It does
> >>>>>> check the lowest bit, and if set, returns NULL.  page_memcg() contains a
> >>>>>> VM_BUG_ON_PAGE() check for the page not being a slab page.
> >>>>>>
> >>>>>> To make sure nobody uses a direct access, struct page's
> >>>>>> mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.
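
For reference, the tagging scheme the quoted changelog describes can be
sketched roughly as follows.  This is a simplified illustration in kernel-style
C, not the exact upstream implementation: the low bit of page->memcg_data
marks a slab page whose word points at an obj_cgroup vector rather than a
mem_cgroup.

/*
 * Simplified sketch only.  Bit 0 of page->memcg_data tags a slab page
 * whose word holds a vector of obj_cgroup pointers instead of a memcg.
 */
#define MEMCG_DATA_OBJCGS	(1UL << 0)

static inline struct mem_cgroup *page_memcg(struct page *page)
{
	/* Callers must not pass slab pages here. */
	VM_BUG_ON_PAGE(PageSlab(page), page);
	return (struct mem_cgroup *)page->memcg_data;
}

static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
	unsigned long memcg_data = READ_ONCE(page->memcg_data);

	/* Slab page: the word holds an obj_cgroup vector, not a memcg. */
	if (memcg_data & MEMCG_DATA_OBJCGS)
		return NULL;

	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_OBJCGS);
}
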
> >>>>>> Signed-off-by: Roman Gushchin
> >>>>>> Signed-off-by: Andrew Morton
> >>>>>> Signed-off-by: Alexei Starovoitov
> >>>>>> Reviewed-by: Shakeel Butt
> >>>>>> Acked-by: Johannes Weiner
> >>>>>> Acked-by: Michal Hocko
> >>>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-1-guro@fb.com
> >>>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-2-guro@fb.com
> >>>>>> Link: https://lore.kernel.org/bpf/20201201215900.3569844-2-guro@fb.com
> >>>>>>
> >>>>>> Conflicts:
> >>>>>> 	mm/memcontrol.c
> >>>>>
> >>>>> The "Conflicts:" lines should be removed.
> >>>>>
> >>>>> Please fix up the patch series and resubmit.  But note, this seems
> >>>>> really intrusive, are you sure these are all needed?
> >>>>>
> >>>>
> >>>> OK, I will resend the patchset.
> >>>> Roman Gushchin's patchset formalizes accesses to the page->mem_cgroup and
> >>>> page->obj_cgroups fields.  But for LRU pages and most other raw memcg
> >>>> users, they may pin a memcg pointer which should instead point to an
> >>>> object cgroup pointer.  That's the problem I met, and Muchun Song's
> >>>> patchset fixes it.  So I think these are all needed.
> >>>
> >>> What in-tree driver causes this to happen and under what workload?
> >>>
> >>>>> What UIO driver are you using that is showing problems like this?
> >>>>>
> >>>>
> >>>> The UIO driver is my own driver, and its creation looks like this:
> >>>> First, we register a device:
> >>>> 	pdev = platform_device_register_simple("uio_driver", 0, NULL, 0);
> >>>> and use uio_info to describe the UIO driver; the page is allocated and
> >>>> used for uio_vma_fault:
> >>>> 	info->mem[0].addr = (phys_addr_t)kzalloc(PAGE_SIZE, GFP_ATOMIC);
> >>>
> >>> That is not a physical address, and is not what the uio api is for at
> >>> all.  Please do not abuse it that way.
> >>>
> >>>> then we register the UIO driver:
> >>>> 	uio_register_device(&pdev->dev, info);
> >>>
> >>> So no in-tree drivers are having problems with the existing code, only
> >>> fake ones?
> >>
> >> Yes, but the nullptr problem may not just be about the uio driver.  For
> >> now, struct page has a union:
> >> 	union {
> >> 		struct mem_cgroup *mem_cgroup;
> >> 		struct obj_cgroup **obj_cgroups;
> >> 	};
> >> For slab pages, the union should hold obj_cgroups, and for user pages it
> >> should hold mem_cgroup.  When a slab page changes its obj_cgroups, another
> >> user page in the same compound page as that slab page gets the wrong
> >> mem_cgroup in __mod_lruvec_page_state(), and that triggers a nullptr
> >> dereference in mem_cgroup_lruvec().  Correct me if I'm wrong.  Thanks!
> >
> > And how can that be triggered by a user in the 5.10.y kernel tree at the
> > moment?
> >
> > I'm all for fixing problems, but this one does not seem like it is an
> > actual issue for the 5.10 tree right now.  Am I missing something?
> >
> > thanks,
> >
> Sorry, it may just be a problem with my own driver.

What driver is it?  Please submit it to be included in the tree so it
can be reviewed properly and bugs like this can be fixed :)

thanks,

greg k-h
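
A side note on the UIO usage discussed in the thread: for memory the driver
allocates itself, the uio core expects a logical mapping (UIO_MEM_LOGICAL) so
it can resolve the backing page on fault, rather than a kernel virtual address
cast to phys_addr_t and presented as physical memory.  A rough sketch of that
pattern is below; the "my_uio" name and overall shape are hypothetical, an
illustration rather than a reviewed driver.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/uio_driver.h>

static struct platform_device *pdev;
static struct uio_info info;
static void *buf;

static int __init my_uio_init(void)
{
	int ret;

	pdev = platform_device_register_simple("my_uio", -1, NULL, 0);
	if (IS_ERR(pdev))
		return PTR_ERR(pdev);

	/* One page of kernel memory to expose to userspace. */
	buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf) {
		ret = -ENOMEM;
		goto err_pdev;
	}

	info.name = "my_uio";
	info.version = "0.1";
	info.irq = UIO_IRQ_NONE;

	/*
	 * Kernel logical memory: UIO_MEM_LOGICAL tells the uio core to
	 * translate the stored address with virt_to_page() on fault,
	 * instead of treating it as a physical address.
	 */
	info.mem[0].memtype = UIO_MEM_LOGICAL;
	info.mem[0].addr = (phys_addr_t)(unsigned long)buf;
	info.mem[0].size = PAGE_SIZE;

	ret = uio_register_device(&pdev->dev, &info);
	if (ret)
		goto err_buf;
	return 0;

err_buf:
	kfree(buf);
err_pdev:
	platform_device_unregister(pdev);
	return ret;
}

static void __exit my_uio_exit(void)
{
	uio_unregister_device(&info);
	kfree(buf);
	platform_device_unregister(pdev);
}

module_init(my_uio_init);
module_exit(my_uio_exit);
MODULE_LICENSE("GPL");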