Date: Thu, 06 Aug 2020 23:26:25 -0700
From: Andrew Morton
To: aarcange@redhat.com, akpm@linux-foundation.org, hughd@google.com,
    kirill.shutemov@linux.intel.com, linux-mm@kvack.org,
    mike.kravetz@oracle.com, mm-commits@vger.kernel.org,
    songliubraving@fb.com, stable@vger.kernel.org,
    torvalds@linux-foundation.org
Subject: [patch 161/163] khugepaged: khugepaged_test_exit() check mmget_still_valid()
Message-ID: <20200807062625.sPpz4GT5t%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>
User-Agent: s-nail v14.8.16
X-Mailing-List: mm-commits@vger.kernel.org

From: Hugh Dickins
Subject: khugepaged: khugepaged_test_exit() check mmget_still_valid()

Move collapse_huge_page()'s mmget_still_valid() check into
khugepaged_test_exit() itself.  collapse_huge_page() is used for anon THP
only, and earned its mmget_still_valid() check because it inserts a huge
pmd entry in place of the page table's pmd entry; whereas collapse_file()'s
retract_page_tables() or collapse_pte_mapped_thp() merely clears the page
table's pmd entry.  But core dumping without mmap lock must have been as
open to mistaking a racily cleared pmd entry for a page table at physical
page 0, as exit_mmap() was.  And we certainly have no interest in mapping
as a THP once dumping core.
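
For context only (not part of the patch): a minimal sketch of how the
check reads after this change.  The mmget_still_valid() body shown is the
approximate helper from kernels of this era, which returns false once core
dumping has begun (mm->core_state set); consult the tree for the exact
definition.

	/* Approximate helper (include/linux/sched/mm.h at this time):
	 * returns false once core dumping has set mm->core_state.
	 */
	static inline bool mmget_still_valid(struct mm_struct *mm)
	{
		return likely(!mm->core_state);
	}

	/* mm/khugepaged.c after this patch: khugepaged backs off both
	 * when the mm is exiting and when it is being core dumped.
	 */
	static inline int khugepaged_test_exit(struct mm_struct *mm)
	{
		return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
	}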
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021217020.27773@eggly.anvils
Fixes: 59ea6d06cfa9 ("coredump: fix race condition between collapse_huge_page() and core dumping")
Signed-off-by: Hugh Dickins
Cc: Andrea Arcangeli
Cc: Song Liu
Cc: Mike Kravetz
Cc: Kirill A. Shutemov
Cc: [4.8+]
Signed-off-by: Andrew Morton
---

 mm/khugepaged.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/mm/khugepaged.c~khugepaged-khugepaged_test_exit-check-mmget_still_valid
+++ a/mm/khugepaged.c
@@ -431,7 +431,7 @@ static void insert_to_mm_slots_hash(stru
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
 {
-	return atomic_read(&mm->mm_users) == 0;
+	return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
 }
 
 static bool hugepage_vma_check(struct vm_area_struct *vma,
@@ -1100,9 +1100,6 @@ static void collapse_huge_page(struct mm
 	 * handled by the anon_vma lock + PG_lock.
 	 */
 	mmap_write_lock(mm);
-	result = SCAN_ANY_PROCESS;
-	if (!mmget_still_valid(mm))
-		goto out;
 	result = hugepage_vma_revalidate(mm, address, &vma);
 	if (result)
 		goto out;
_