From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <9ee06b52-4844-7996-fa34-34fc7d4fdc10@bytedance.com>
Date: Thu, 11 Nov 2021 19:08:35 +0800
Subject: Re: [PATCH v3 00/15] Free user PTE page table pages
To: David Hildenbrand, Jason Gunthorpe
Cc: akpm@linux-foundation.org, tglx@linutronix.de,
 kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, songmuchun@bytedance.com, zhouchengming@bytedance.com
References: <20211110105428.32458-1-zhengqi.arch@bytedance.com>
 <20211110125601.GQ1740502@nvidia.com>
 <8d0bc258-58ba-52c5-2e0d-a588489f2572@redhat.com>
 <20211110143859.GS1740502@nvidia.com>
 <6ac9cc0d-7dea-0e19-51b3-625ec6561ac7@redhat.com>
 <20211110163925.GX1740502@nvidia.com>
 <7c97d86f-57f4-f764-3e92-1660690a0f24@redhat.com>
 <60515562-5f93-11cd-6c6a-c7cc92ff3bf8@bytedance.com>
From: Qi Zheng <zhengqi.arch@bytedance.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 11/11/21 5:22 PM, David Hildenbrand wrote:
> On 11.11.21 04:58, Qi Zheng wrote:
>>
>> On 11/11/21 1:37 AM, David Hildenbrand wrote:
>>>>> It would still be fairly coarse-grained locking, and I am not sure
>>>>> that is a step in the right direction. If you want to modify *some*
>>>>> page table in your process, you have to exclude each and every page
>>>>> table walker. Or did I misinterpret what you were saying?
>>>>
>>>> That is one possible design; it favours fast walking and penalizes
>>>> mutation. We could also stick a lock in the PMD (instead of a
>>>> refcount) and still logically be using a lock instead of a refcount
>>>> scheme.
>>>> Remember, "modify" here means "want to change a table pointer into a
>>>> leaf pointer", so it isn't an everyday activity.
>>>
>>> It will be if we reclaim an empty PTE page table fairly frequently,
>>> as soon as it turns empty. This not only happens when zapping, but
>>> also during writeback/swapping. So while writing back / swapping you
>>> might be left with empty page tables to reclaim.
>>>
>>> Of course, this is the current approach. Another approach that doesn't
>>> require additional refcounts is scanning page tables for empty ones
>>> and reclaiming them. This scanning can either be triggered manually
>>> from user space or automatically from the kernel.
>>
>> Whether we introduce a special rwsem or scan for empty page tables,
>> there are two problems:
>>
>> #1. When should the scanning or releasing be triggered?
>
> For example when reclaiming memory, when scanning page tables in
> khugepaged, or triggered by user space (note that this is the approach
> I originally looked into). But it certainly requires more locking
> thought to avoid stopping essentially any page table walker.
>
>> #2. Every time a 4K page table page is released, 512 page table
>> entries need to be scanned.
>
> It would happen only when reclaim of page tables is actually triggered
> (again, someone has to trigger it), so it's barely an issue.
>
> For example, khugepaged already scans the page tables either way.
>
>> For #1, if the scanning is triggered manually from user space, the
>> kernel is relatively passive, and the user does not fully know the
>> best timing to scan. If the scanning is triggered automatically from
>> the kernel, that is great, but the timing is not easy to choose:
>> should we scan and reclaim on every zap or try_to_unmap?
>>
>> For #2, a refcount has clear advantages.
>>>> There is some advantage to this thinking because it harmonizes well
>>>> with the other stuff that wants to convert tables into leaves, but
>>>> has to deal with complicated locking.
>>>>
>>>> On the other hand, refcounts are a degenerate kind of rwsem and only
>>>> help with freeing pages. They also put more atomics in normal fast
>>>> paths, since we are refcounting each PTE rather than read-locking
>>>> the PMD.
>>>>
>>>> Perhaps the ideal thing would be to stick an rwsem in the PMD: read
>>>> means a table cannot become a leaf. I don't know if there is space
>>>> for another atomic at the PMD level, and we'd have to use a hitching
>>>> post/hashed waitq scheme too, since there surely isn't room for a
>>>> waitq.
>>>>
>>>> I wouldn't be so quick to say one is better than the other, but at
>>>> least let's have thought about a locking solution before merging
>>>> refcounts :)
>>>
>>> Yes, absolutely. I can see the beauty in the current approach, because
>>> it just reclaims "automatically" once possible -- page table empty and
>>> nobody is walking it. The downside is that it doesn't always make
>>> sense to reclaim an empty page table immediately once it turns empty.
>>>
>>> Also, it adds complexity for something that is only a problem in some
>>> corner cases -- sparse memory mappings, especially relevant for some
>>> memory allocators after freeing a lot of memory, or running VMs with
>>> memory ballooning after inflating the balloon. Some of these use
>>> cases might be fine with just triggering page table reclaim manually
>>> from user space.
>>
>> Yes, this is indeed a problem. Perhaps some flags can be introduced so
>> that the release of page table pages can be delayed in some cases,
>> similar to the lazyfree mechanism in MADV_FREE?
>
> The issue AFAIU is that once your refcount hits 0 (no more references,
> no more entries), the longer you wait with reclaim, the longer others
> have to wait to populate a fresh page table, because the "page table
> to be reclaimed" is still stuck around. You'd have to keep the
> refcount increased for a while, and only drop it after a while. But
> when? And how? IMHO it's not trivial, but maybe there is an easy way
> to achieve it.

For running VMs with memory ballooning after inflating the balloon, is
this a hot path? Even if it is, it already has to release and
re-allocate the physical pages themselves. The extra overhead after
introducing pte_refcount is that we also need to release and
re-allocate the page table page. But 2MB of physical pages corresponds
to only 4KiB of PTE page table page, so the overhead may not be large.

In fact, the performance test shown in the cover letter covers this
case:

test program: https://lore.kernel.org/lkml/20100106160614.ff756f82.kamezawa.hiroyu@jp.fujitsu.com/2-multi-fault-all.c

Thanks,
Qi