Date: Tue, 24 Sep 2019 17:24:57 -0600
In-Reply-To: <20190924232459.214097-1-yuzhao@google.com>
Message-Id: <20190924232459.214097-2-yuzhao@google.com>
References: <20190914070518.112954-1-yuzhao@google.com> <20190924232459.214097-1-yuzhao@google.com>
Subject: [PATCH v3 2/4] mm: don't expose hugetlb page to fast gup prematurely
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Michal Hocko, "Kirill A . Shutemov"
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Vlastimil Babka, Hugh Dickins, Jérôme Glisse,
	Andrea Arcangeli, "Aneesh Kumar K . V", David Rientjes, Matthew Wilcox,
	Lance Roy, Ralph Campbell, Jason Gunthorpe, Dave Airlie,
	Thomas Hellstrom, Souptick Joarder, Mel Gorman, Jan Kara, Mike Kravetz,
	Huang Ying, Aaron Lu, Omar Sandoval, Thomas Gleixner,
	Vineeth Remanan Pillai, Daniel Jordan, Mike Rapoport, Joel Fernandes,
	Mark Rutland, Alexander Duyck, Pavel Tatashin, David Hildenbrand,
	Juergen Gross, Anthony Yznaga, Johannes Weiner, "Darrick J . Wong",
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yu Zhao

We don't want to expose a hugetlb page to the fast gup running on a
remote CPU before the local non-atomic op __SetPageUptodate() becomes
visible there.

For a hugetlb page, there is no memory barrier between the non-atomic
op and set_huge_pte_at(). Therefore, the page can appear to the fast
gup before the flag does. There is no evidence this would cause any
problem, but there is no point risking the race either.

This patch simply replaces three uses of the non-atomic op with its
atomic version throughout mm/hugetlb.c. The only one left, in
hugetlbfs_fallocate(), is safe because huge_add_to_page_cache() serves
as a valid write barrier.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/hugetlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d7296dd11b8..0be5b7937085 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3693,7 +3693,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
-	__SetPageUptodate(new_page);
+	SetPageUptodate(new_page);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, haddr,
 				haddr + huge_page_size(h));
@@ -3879,7 +3879,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			goto out;
 		}
 		clear_huge_page(page, address, pages_per_huge_page(h));
-		__SetPageUptodate(page);
+		SetPageUptodate(page);
 		new_page = true;
 
 		if (vma->vm_flags & VM_MAYSHARE) {
@@ -4180,11 +4180,11 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	}
 
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside SetPageUptodate makes sure that
 	 * preceding stores to the page contents become visible before
 	 * the set_pte_at() write.
	 */
-	__SetPageUptodate(page);
+	SetPageUptodate(page);
 
 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
-- 
2.23.0.351.gc4317032e6-goog
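
Not part of the patch, just for illustration: a minimal user-space C11
sketch of the ordering the commit message is worried about. All names
are hypothetical stand-ins (writer() for the hugetlb fault path,
fast_gup() for the lockless walker, pte for the huge PTE slot); this is
a sketch under those assumptions, not kernel code. With the relaxed
publish below, modelling the missing barrier before set_huge_pte_at(),
the reader may observe the new "pte" while the uptodate flag still
reads 0; changing that store to memory_order_release, the analogue of
the ordering the commit message attributes to SetPageUptodate(), closes
the window.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int uptodate;		/* stands in for PG_uptodate */
static int page_contents;		/* stands in for the huge page itself */
static _Atomic(int *) pte;		/* stands in for the huge PTE slot */

static void *writer(void *arg)		/* models the hugetlb fault path */
{
	(void)arg;
	page_contents = 42;
	/* flag store with no ordering, like the non-atomic __SetPageUptodate() */
	atomic_store_explicit(&uptodate, 1, memory_order_relaxed);
	/*
	 * Relaxed publish models "no memory barrier before set_huge_pte_at()".
	 * memory_order_release here would guarantee that a reader which sees
	 * the new pte also sees uptodate == 1.
	 */
	atomic_store_explicit(&pte, &page_contents, memory_order_relaxed);
	return NULL;
}

static void *fast_gup(void *arg)	/* models the lockless page-table walk */
{
	int *p;

	(void)arg;
	p = atomic_load_explicit(&pte, memory_order_acquire);
	if (p && !atomic_load_explicit(&uptodate, memory_order_relaxed))
		puts("saw the page before the uptodate flag");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writer, NULL);
	pthread_create(&b, NULL, fast_gup, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Build with something like "gcc -O2 -pthread sketch.c". On x86 the
reordering is unlikely to be observable (stores are not reordered with
other stores there), which fits the commit message's note that there is
no evidence of a real-world problem; the window only opens on more
weakly ordered architectures.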