Date: Wed, 6 Jul 2022 16:26:27 -0600
From: Yu Zhao
To: Andrew Morton, Liam Howlett
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Peter Zijlstra, Tejun Heo, Vlastimil Babka, Will Deacon,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    page-reclaim@google.com, Brian Geffon, Jan Alexander Steffens,
    Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
    Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
    Sofia Trinh, Vaibhav Jain
Subject: Re: [PATCH v13 08/14] mm: multi-gen LRU: support page table walks
References: <20220706220022.968789-1-yuzhao@google.com>
 <20220706220022.968789-9-yuzhao@google.com>
In-Reply-To: <20220706220022.968789-9-yuzhao@google.com>

On Wed, Jul 06, 2022 at 04:00:17PM -0600, Yu Zhao wrote:

...

> +/*
> + * Some userspace memory allocators map many single-page VMAs. Instead of
> + * returning to the PGD table for each such VMA, finish an entire PMD table
> + * to reduce zigzags and improve cache performance.
> + */
> +static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk *args,
> +			 unsigned long *vm_start, unsigned long *vm_end)
> +{
> +	unsigned long start = round_up(*vm_end, size);
> +	unsigned long end = (start | ~mask) + 1;
> +
> +	VM_WARN_ON_ONCE(mask & size);
> +	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
> +
> +	while (args->vma) {
> +		if (start >= args->vma->vm_end) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		if (end && end <= args->vma->vm_start)
> +			return false;
> +
> +		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		*vm_start = max(start, args->vma->vm_start);
> +		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
> +
> +		return true;
> +	}
> +
> +	return false;
> +}
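For concreteness, here is the start/end arithmetic from the quoted helper as a small standalone userspace sketch. The constants are stand-ins assuming x86-64 with 2 MB PMD entries and 1 GB of coverage per PMD table, and the mask = PUD_MASK, size = PMD_SIZE pairing is one plausible way the helper gets called; treat this as an illustration, not kernel code:

#include <stdio.h>

/* Assumed x86-64 values, standing in for the kernel's constants */
#define PMD_SIZE	(2UL << 20)		/* 2 MB mapped per PMD entry */
#define PUD_SIZE	(1UL << 30)		/* 1 GB covered by one PMD table */
#define PUD_MASK	(~(PUD_SIZE - 1))

/* Simplified round_up() for power-of-two sizes */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	unsigned long mask = PUD_MASK, size = PMD_SIZE;
	unsigned long vm_end = 0x7f0000201234UL;	/* end of the previous range */

	/* start: *vm_end rounded up to the next PMD boundary */
	unsigned long start = round_up(vm_end, size);

	/* end: one past the last byte of the mask-aligned region holding
	 * start, i.e. the end of the current PMD table's coverage
	 */
	unsigned long end = (start | ~mask) + 1;

	/* prints start=0x7f0000400000 end=0x7f0040000000 */
	printf("start=%#lx end=%#lx\n", start, end);
	return 0;
}

Note that end wraps to 0 when start falls in the topmost mask-aligned region of the address space, which is why the loop tests "if (end && ...)" and clamps with min(end - 1, args->vma->vm_end - 1) + 1 rather than min(end, args->vma->vm_end).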
To make the quoted get_next_vma() work on top of the Maple Tree:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7096ff7836db..c0c1195da803 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3689,23 +3689,14 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 {
 	unsigned long start = round_up(*vm_end, size);
 	unsigned long end = (start | ~mask) + 1;
+	VMA_ITERATOR(vmi, args->mm, start);
 
 	VM_WARN_ON_ONCE(mask & size);
 	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
 
-	while (args->vma) {
-		if (start >= args->vma->vm_end) {
-			args->vma = args->vma->vm_next;
+	for_each_vma_range(vmi, args->vma, end) {
+		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
 			continue;
-		}
-
-		if (end && end <= args->vma->vm_start)
-			return false;
-
-		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
-			args->vma = args->vma->vm_next;
-			continue;
-		}
 
 		*vm_start = max(start, args->vma->vm_start);
 		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
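For reference, VMA_ITERATOR() declares a maple-tree iterator positioned at a starting address, and for_each_vma_range() yields each VMA intersecting [start, end) in address order; that is what lets the diff drop both explicit range checks and the manual vm_next advancement. A minimal sketch of the pattern, where walk_vmas() is a hypothetical helper rather than anything from the patch:

#include <linux/mm.h>
#include <linux/printk.h>

/* Hypothetical helper: visit every VMA in 'mm' that intersects
 * [start, end), without touching the old vm_next chain.
 */
static void walk_vmas(struct mm_struct *mm, unsigned long start,
		      unsigned long end)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, start);	/* iterator positioned at 'start' */

	for_each_vma_range(vmi, vma, end) {
		/* Only intersecting VMAs are returned, so the
		 * "start >= vm_end" and "end <= vm_start" checks from
		 * the vm_next-based loop are no longer needed here.
		 */
		pr_info("vma [%#lx, %#lx)\n", vma->vm_start, vma->vm_end);
	}
}

After the loop the iteration variable is NULL, so in the diff above args->vma ends up either pointing at the VMA that produced the early "return true" or NULL once the range is exhausted, preserving the original function's contract.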