From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 5.10.y 01/11] mm: memcontrol: Use helpers to read page's memcg data
From: Chen Huang <chenhuang5@huawei.com>
To: Greg Kroah-Hartman
CC: Roman Gushchin, Muchun Song, Wang Hai, Andrew Morton, Alexei Starovoitov
Date: Tue, 17 Aug 2021 09:45:00 +0800
Message-ID: <0d3c6aa4-be05-3c93-bdcd-ac30788d82bd@huawei.com>
References: <20210816072147.3481782-1-chenhuang5@huawei.com> <20210816072147.3481782-2-chenhuang5@huawei.com>
On 2021/8/16 21:35, Greg Kroah-Hartman wrote:
> On Mon, Aug 16, 2021 at 09:21:11PM +0800, Chen Huang wrote:
>>
>>
>> On 2021/8/16 16:34, Greg Kroah-Hartman wrote:
>>> On Mon, Aug 16, 2021 at 07:21:37AM +0000, Chen Huang wrote:
>>>> From: Roman Gushchin
>>>
>>> What is the git commit id of this patch in Linus's tree?
>>>
>>>>
>>>> Patch series "mm: allow mapping accounted kernel pages to userspace", v6.
>>>>
>>>> Currently a non-slab kernel page which has been charged to a memory cgroup
>>>> can't be mapped to userspace.  The underlying reason is simple: PageKmemcg
>>>> flag is defined as a page type (like buddy, offline, etc), so it takes a
>>>> bit from a page->mapped counter.  Pages with a type set can't be mapped to
>>>> userspace.
>>>>
>>>> But in general the kmemcg flag has nothing to do with mapping to
>>>> userspace.  It only means that the page has been accounted by the page
>>>> allocator, so it has to be properly uncharged on release.
>>>>
>>>> Some bpf maps are mapping the vmalloc-based memory to userspace, and their
>>>> memory can't be accounted because of this implementation detail.
>>>>
>>>> This patchset removes this limitation by moving the PageKmemcg flag into
>>>> one of the free bits of the page->mem_cgroup pointer.  Also it formalizes
>>>> accesses to the page->mem_cgroup and page->obj_cgroups using new helpers,
>>>> adds several checks and removes a couple of obsolete functions.  As the
>>>> result the code became more robust with fewer open-coded bit tricks.
>>>>
>>>> This patch (of 4):
>>>>
>>>> Currently there are many open-coded reads of the page->mem_cgroup pointer,
>>>> as well as a couple of read helpers, which are barely used.
>>>>
>>>> It creates an obstacle on a way to reuse some bits of the pointer for
>>>> storing additional bits of information.  In fact, we already do this for
>>>> slab pages, where the last bit indicates that a pointer has an attached
>>>> vector of objcg pointers instead of a regular memcg pointer.
>>>>
>>>> This commit uses 2 existing helpers and introduces a new helper to
>>>> convert all read sides to calls of these helpers:
>>>>   struct mem_cgroup *page_memcg(struct page *page);
>>>>   struct mem_cgroup *page_memcg_rcu(struct page *page);
>>>>   struct mem_cgroup *page_memcg_check(struct page *page);
>>>>
>>>> page_memcg_check() is intended to be used in cases when the page can be a
>>>> slab page and have a memcg pointer pointing at an objcg vector.  It does
>>>> check the lowest bit, and if set, returns NULL.  page_memcg() contains a
>>>> VM_BUG_ON_PAGE() check for the page not being a slab page.
>>>>
>>>> To make sure nobody uses a direct access, struct page's
>>>> mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.
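
For reference, a minimal sketch of what the new page_memcg_check() helper
described above boils down to, assuming the lowest bit of memcg_data flags
an attached objcg vector; the flag name MEMCG_DATA_OBJCGS below is my
shorthand, and the exact definition in include/linux/memcontrol.h may
differ:

#define MEMCG_DATA_OBJCGS	(1UL << 0)	/* lowest bit: objcg vector attached */

static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
	/*
	 * page->memcg_data either holds a mem_cgroup pointer or, for
	 * slab pages, an obj_cgroup vector with the lowest bit set.
	 */
	unsigned long memcg_data = READ_ONCE(page->memcg_data);

	if (memcg_data & MEMCG_DATA_OBJCGS)
		return NULL;	/* slab page: no single memcg to return */

	return (struct mem_cgroup *)memcg_data;
}
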
>>>>
>>>> Signed-off-by: Roman Gushchin
>>>> Signed-off-by: Andrew Morton
>>>> Signed-off-by: Alexei Starovoitov
>>>> Reviewed-by: Shakeel Butt
>>>> Acked-by: Johannes Weiner
>>>> Acked-by: Michal Hocko
>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-1-guro@fb.com
>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-2-guro@fb.com
>>>> Link: https://lore.kernel.org/bpf/20201201215900.3569844-2-guro@fb.com
>>>>
>>>> Conflicts:
>>>> 	mm/memcontrol.c
>>>
>>> The "Conflicts:" lines should be removed.
>>>
>>> Please fix up the patch series and resubmit.  But note, this seems
>>> really intrusive, are you sure these are all needed?
>>>
>>
>> OK, I will resend the patchset.
>> Roman Gushchin's patchset formalizes accesses to page->mem_cgroup and
>> page->obj_cgroups.  But LRU pages and most other raw memcg users may
>> still pin a memcg pointer where it should always point to an object
>> cgroup pointer.  That's the problem I met, and Muchun Song's patchset
>> fixes it.  So I think these are all needed.
>
> What in-tree driver causes this to happen and under what workload?
>
>>> What UIO driver are you using that is showing problems like this?
>>>
>>
>> The UIO driver is my own driver, and its creation looks like this:
>> First, we register a device:
>> 	pdev = platform_device_register_simple("uio_driver", 0, NULL, 0);
>> and use uio_info to describe the UIO driver; the page is allocated and
>> used for uio_vma_fault:
>> 	info->mem[0].addr = (phys_addr_t) kzalloc(PAGE_SIZE, GFP_ATOMIC);
>
> That is not a physical address, and is not what the uio api is for at
> all.  Please do not abuse it that way.
>
>> then we register the UIO driver:
>> 	uio_register_device(&pdev->dev, info);
>
> So no in-tree drivers are having problems with the existing code, only
> fake ones?

Yes, but the null-pointer problem may not be limited to the uio driver.
For now, struct page has a union:

	union {
		struct mem_cgroup *mem_cgroup;
		struct obj_cgroup **obj_cgroups;
	};

For slab pages, the union should be interpreted as obj_cgroups, and for
user pages as mem_cgroup.  When a slab page changes its obj_cgroups, a
user page that shares the same compound page with that slab page gets
the wrong mem_cgroup in __mod_lruvec_page_state(), and that triggers a
null-pointer dereference in mem_cgroup_lruvec().  Correct me if I'm
wrong.  Thanks!

static inline void __mod_lruvec_page_state(struct page *page,
					   enum node_stat_item idx, int val)
{
	struct page *head = compound_head(page);	/* rmap on tail pages */
	pg_data_t *pgdat = page_pgdat(page);
	struct lruvec *lruvec;

	/* Untracked pages have no memcg, no lruvec. Update only the node */
	if (!head->mem_cgroup) {
		__mod_node_page_state(pgdat, idx, val);
		return;
	}

	lruvec = mem_cgroup_lruvec(head->mem_cgroup, pgdat);
	__mod_lruvec_state(lruvec, idx, val);
}

(A sketch of how this function reads after the helper conversion follows
the quoted reply below.)

>
> thanks,
>
> greg k-h
> .
>
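
P.S. A rough sketch of how __mod_lruvec_page_state() reads once the direct
head->mem_cgroup access is converted to the page_memcg() helper, following
the pattern described in the quoted commit message; this is only an
illustration of the conversion, not a claim about which exact upstream
commit resolves the null-pointer dereference described above:

static inline void __mod_lruvec_page_state(struct page *page,
					   enum node_stat_item idx, int val)
{
	struct page *head = compound_head(page);	/* rmap on tail pages */
	struct mem_cgroup *memcg = page_memcg(head);	/* was head->mem_cgroup */
	pg_data_t *pgdat = page_pgdat(page);
	struct lruvec *lruvec;

	/* Untracked pages have no memcg, no lruvec. Update only the node */
	if (!memcg) {
		__mod_node_page_state(pgdat, idx, val);
		return;
	}

	lruvec = mem_cgroup_lruvec(memcg, pgdat);
	__mod_lruvec_state(lruvec, idx, val);
}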