From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yu Zhao <yuzhao@google.com>
Date: Thu, 14 Apr 2022 10:57:15 -0600
Subject: Re: [PATCH v7 00/70] Introducing the Maple Tree
To: Liam Howlett
Cc: Andrew Morton, maple-tree@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <20220414135706.rcn7zr36s2hcd5re@revolver>
References: <20220404143501.2016403-1-Liam.Howlett@oracle.com>
 <20220413235051.3a4eb7c86d31656c7aea250c@linux-foundation.org>
 <20220414135706.rcn7zr36s2hcd5re@revolver>
Content-Type: text/plain; charset="UTF-8"

On Thu, Apr 14, 2022 at 7:57 AM Liam Howlett wrote:
>
> * Andrew Morton [220414 02:51]:
> > On Mon, 4 Apr 2022 14:35:26 +0000 Liam Howlett wrote:
> >
> > > Please add this patch set to your branch. They are based on v5.18-rc1.
> >
> > Do we get a nice [0/n] cover letter telling us all about this?
> >
> > I have that all merged up and it compiles.
> >
> > https://lkml.kernel.org/r/20220402094550.129-1-lipeifeng@oppo.com and
> > https://lkml.kernel.org/r/20220412081014.399-1-lipeifeng@oppo.com are
> > disabled for now.
> >
> >
> > Several patches were marked
> >
> > From: Liam
> > Signed-off-by: Matthew
> > Signed-off-by: Liam
> >
> > Which makes me wonder whether the attribution was correct. Please
> > double-check.
>
> I'll have a look, thanks.
>
> >
> >
> > I made a lame attempt to fix up mglru's get_next_vma(), and it probably
> > wants a revisit in the maple-tree world anyway. Please check this and
> > send me a better version ;)
>
> What you have below will function, but there is probably a more maple
> way of doing things. Happy to help get the sap flowing - it is that
> time of the year after all ;)

Thanks. Please let me know when the more maple way is ready. I'll test
with it.

Also I noticed that, for the end address passed to walk_page_range(),
Matthew used -1 and you used ULONG_MAX in the maple branch, while Andrew
used TASK_SIZE below. Having a single value throughout would be great.

> > --- a/mm/vmscan.c~mglru-vs-maple-tree
> > +++ a/mm/vmscan.c
> > @@ -3704,7 +3704,7 @@ static bool get_next_vma(struct mm_walk
> >
> >  	while (walk->vma) {
> >  		if (next >= walk->vma->vm_end) {
> > -			walk->vma = walk->vma->vm_next;
> > +			walk->vma = find_vma(walk->mm, walk->vma->vm_end);
> >  			continue;
> >  		}
> >
> > @@ -3712,7 +3712,7 @@ static bool get_next_vma(struct mm_walk
> >  			return false;
> >
> >  		if (should_skip_vma(walk->vma->vm_start, walk->vma->vm_end, walk)) {
> > -			walk->vma = walk->vma->vm_next;
> > +			walk->vma = find_vma(walk->mm, walk->vma->vm_end);
> >  			continue;
> >  		}
> >
> > @@ -4062,7 +4062,7 @@ static void walk_mm(struct lruve
> >  		/* the caller might be holding the lock for write */
> >  		if (mmap_read_trylock(mm)) {
> >  			unsigned long start = walk->next_addr;
> > -			unsigned long end = mm->highest_vm_end;
> > +			unsigned long end = TASK_SIZE;
> >
> >  			err = walk_page_range(mm, start, end, &mm_walk_ops, walk);
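
For illustration only, a rough sketch of what a "more maple"
get_next_vma() loop might look like, assuming the VMA_ITERATOR() /
for_each_vma() helpers from the series. The function signature, the
'end' bound and everything outside the quoted hunks are guesses, not
the posted code:

	/*
	 * Sketch only: walk VMAs with the maple-tree VMA iterator instead
	 * of chasing ->vm_next or calling find_vma() on every skip. The
	 * iterator returns VMAs with vm_end > next, in address order.
	 */
	static bool get_next_vma_sketch(struct mm_walk *walk, unsigned long next,
					unsigned long end)
	{
		VMA_ITERATOR(vmi, walk->mm, next);

		for_each_vma(vmi, walk->vma) {
			/* past the range this walk cares about */
			if (walk->vma->vm_start >= end)
				return false;

			if (should_skip_vma(walk->vma->vm_start,
					    walk->vma->vm_end, walk))
				continue;

			return true;
		}

		return false;
	}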
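
Side note on the end address: (unsigned long)-1 and ULONG_MAX are the
same value once converted to the unsigned long parameter, so those two
only differ in spelling, whereas TASK_SIZE is the architecture-defined
top of the user address space and is a smaller bound. Whichever spelling
wins, the call site from the last hunk would then read something like
the line below (illustration only, reusing names from the hunk):

	err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);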