From: Yu Zhao
Date: Sun, 18 Sep 2022 02:17:16 -0600
Subject: Re: [PATCH mm-unstable v15 08/14] mm: multi-gen LRU: support page table walks
To: Andrew Morton, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Tejun Heo, Vlastimil Babka, Will Deacon, Linux ARM,
    "open list:DOCUMENTATION", linux-kernel, Linux-MM,
    "the arch/x86 maintainers", Kernel Page Reclaim v2, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain
In-Reply-To: <20220918080010.2920238-9-yuzhao@google.com>

On Sun, Sep 18, 2022 at 2:01 AM Yu Zhao wrote:

...

> This patch uses the following optimizations when walking page tables:
> 1. It tracks the usage of mm_struct's between context switches so that
>    page table walkers can skip processes that have been sleeping since
>    the last iteration.

...

> @@ -672,6 +672,22 @@ struct mm_struct {
>  	 */
>  	unsigned long ksm_merging_pages;
>  #endif
> +#ifdef CONFIG_LRU_GEN
> +	struct {
> +		/* this mm_struct is on lru_gen_mm_list */
> +		struct list_head list;
> +		/*
> +		 * Set when switching to this mm_struct, as a hint of
> +		 * whether it has been used since the last time per-node
> +		 * page table walkers cleared the corresponding bits.
> +		 */
> +		unsigned long bitmap;

...

> +static inline void lru_gen_use_mm(struct mm_struct *mm)
> +{
> +	/*
> +	 * When the bitmap is set, page reclaim knows this mm_struct has been
> +	 * used since the last time it cleared the bitmap. So it might be worth
> +	 * walking the page tables of this mm_struct to clear the accessed bit.
> +	 */
> +	WRITE_ONCE(mm->lru_gen.bitmap, -1);
> +}

...

> @@ -5180,6 +5180,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
>  		 * finish_task_switch()'s mmdrop().
>  		 */
>  		switch_mm_irqs_off(prev->active_mm, next->mm, next);
> +		lru_gen_use_mm(next->mm);
>
>  		if (!prev->mm) {                                // from kernel
>  			/* will mmdrop() in finish_task_switch(). */

Adding Ingo, Peter, Juri and Vincent for the bit above, per previous
discussion here:
https://lore.kernel.org/r/CAOUHufY91Eju-g1+xbUsGkGZ-cwBm78v+S_Air7Cp8mAnYJVYA@mail.gmail.com/

I trimmed 99% of this patch to save your time. In case you want to hear
the whole story:
https://lore.kernel.org/r/20220918080010.2920238-9-yuzhao@google.com/
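
[Editor's note] For readers skimming the trimmed hunks, the quoted code
implements a simple hint protocol: the scheduler marks an mm_struct as
"used" every time it switches to it, and the reclaim-side walker only
descends into address spaces whose mark is set, clearing the mark as it
goes, so tasks that have been asleep since the last iteration are skipped.
Below is a minimal, single-threaded userspace sketch of that idea, not the
kernel implementation; every name in it (model_mm, model_context_switch,
model_walk_mm_list) is made up for illustration, and the real code uses
mm->lru_gen.bitmap with WRITE_ONCE() and per-node clearing instead of a
plain flag.

/* toy model of the "used since last walk" hint; illustration only */
#include <stdio.h>

struct model_mm {
	const char *comm;	/* owner name, for the demo printout */
	unsigned long bitmap;	/* "used since last walk" hint */
};

/* scheduler side: mark the address space as recently used */
static void model_context_switch(struct model_mm *next)
{
	next->bitmap = -1UL;	/* stands in for lru_gen_use_mm() */
}

/* reclaim side: walk only marked address spaces, then clear the mark */
static void model_walk_mm_list(struct model_mm **mms, int nr)
{
	for (int i = 0; i < nr; i++) {
		struct model_mm *mm = mms[i];

		if (!mm->bitmap) {
			printf("skip %s: asleep since last walk\n", mm->comm);
			continue;
		}
		mm->bitmap = 0;	/* rearm the hint for the next iteration */
		printf("walk page tables of %s\n", mm->comm);
	}
}

int main(void)
{
	struct model_mm a = { .comm = "active-task" };
	struct model_mm b = { .comm = "sleeping-task" };
	struct model_mm *mms[] = { &a, &b };

	model_context_switch(&a);	/* only "active-task" ran recently */
	model_walk_mm_list(mms, 2);	/* walks a, skips b, clears a's hint */
	model_walk_mm_list(mms, 2);	/* skips both: nothing ran since */
	return 0;
}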