From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 9 Jun 2021 23:46:30 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, "Kirill A. Shutemov", Yang Shi, Wang Yugui,
    Matthew Wilcox, Alistair Popple, Ralph Campbell, Zi Yan, Peter Xu,
    Will Deacon, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/11] mm: page_vma_mapped_walk(): add a level of indentation
In-Reply-To: <589b358c-febc-c88e-d4c2-7834b37fa7bf@google.com>
References: <589b358c-febc-c88e-d4c2-7834b37fa7bf@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

page_vma_mapped_walk() cleanup: add a level of indentation to much of
the body, making no functional change in this commit, but reducing the
later diff when this is all converted to a loop.
Signed-off-by: Hugh Dickins
Cc: 
---
 mm/page_vma_mapped.c | 109 +++++++++++++++++++++++----------------------
 1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 0fe6e558d336..0840079ef7d2 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -173,65 +173,68 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pte)
 		goto next_pte;
 restart:
-	pgd = pgd_offset(mm, pvmw->address);
-	if (!pgd_present(*pgd))
-		return false;
-	p4d = p4d_offset(pgd, pvmw->address);
-	if (!p4d_present(*p4d))
-		return false;
-	pud = pud_offset(p4d, pvmw->address);
-	if (!pud_present(*pud))
-		return false;
-
-	pvmw->pmd = pmd_offset(pud, pvmw->address);
-	/*
-	 * Make sure the pmd value isn't cached in a register by the
-	 * compiler and used as a stale value after we've observed a
-	 * subsequent update.
-	 */
-	pmde = pmd_read_atomic(pvmw->pmd);
-	barrier();
-
-	if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
-		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
-		pmde = *pvmw->pmd;
-		if (likely(pmd_trans_huge(pmde))) {
-			if (pvmw->flags & PVMW_MIGRATION)
-				return not_found(pvmw);
-			if (pmd_page(pmde) != page)
-				return not_found(pvmw);
-			return true;
-		}
-		if (!pmd_present(pmde)) {
-			swp_entry_t entry;
+	{
+		pgd = pgd_offset(mm, pvmw->address);
+		if (!pgd_present(*pgd))
+			return false;
+		p4d = p4d_offset(pgd, pvmw->address);
+		if (!p4d_present(*p4d))
+			return false;
+		pud = pud_offset(p4d, pvmw->address);
+		if (!pud_present(*pud))
+			return false;
 
-			if (!thp_migration_supported() ||
-			    !(pvmw->flags & PVMW_MIGRATION))
-				return not_found(pvmw);
-			entry = pmd_to_swp_entry(pmde);
-			if (!is_migration_entry(entry) ||
-			    migration_entry_to_page(entry) != page)
-				return not_found(pvmw);
-			return true;
-		}
-		/* THP pmd was split under us: handle on pte level */
-		spin_unlock(pvmw->ptl);
-		pvmw->ptl = NULL;
-	} else if (!pmd_present(pmde)) {
+		pvmw->pmd = pmd_offset(pud, pvmw->address);
 		/*
-		 * If PVMW_SYNC, take and drop THP pmd lock so that we
-		 * cannot return prematurely, while zap_huge_pmd() has
-		 * cleared *pmd but not decremented compound_mapcount().
+		 * Make sure the pmd value isn't cached in a register by the
+		 * compiler and used as a stale value after we've observed a
+		 * subsequent update.
 		 */
-		if ((pvmw->flags & PVMW_SYNC) && PageTransCompound(page)) {
-			spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
+		pmde = pmd_read_atomic(pvmw->pmd);
+		barrier();
+
+		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
+			pmde = *pvmw->pmd;
+			if (likely(pmd_trans_huge(pmde))) {
+				if (pvmw->flags & PVMW_MIGRATION)
+					return not_found(pvmw);
+				if (pmd_page(pmde) != page)
+					return not_found(pvmw);
+				return true;
+			}
+			if (!pmd_present(pmde)) {
+				swp_entry_t entry;
 
-			spin_unlock(ptl);
+				if (!thp_migration_supported() ||
+				    !(pvmw->flags & PVMW_MIGRATION))
+					return not_found(pvmw);
+				entry = pmd_to_swp_entry(pmde);
+				if (!is_migration_entry(entry) ||
+				    migration_entry_to_page(entry) != page)
+					return not_found(pvmw);
+				return true;
+			}
+			/* THP pmd was split under us: handle on pte level */
+			spin_unlock(pvmw->ptl);
+			pvmw->ptl = NULL;
+		} else if (!pmd_present(pmde)) {
+			/*
+			 * If PVMW_SYNC, take and drop THP pmd lock so that we
+			 * cannot return prematurely, while zap_huge_pmd() has
+			 * cleared *pmd but not decremented compound_mapcount().
+			 */
+			if ((pvmw->flags & PVMW_SYNC) &&
+			    PageTransCompound(page)) {
+				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
+
+				spin_unlock(ptl);
+			}
+			return false;
 		}
-		return false;
+		if (!map_pte(pvmw))
+			goto next_pte;
 	}
-	if (!map_pte(pvmw))
-		goto next_pte;
 	while (1) {
 		unsigned long end;
-- 
2.26.2