From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 1 Oct 2019 18:00:46 -0600
From: Yu Zhao
To: John Hubbard, Mark Rutland
Cc: "Kirill A. Shutemov", Peter Zijlstra, Andrew Morton, Michal Hocko,
	Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Vlastimil Babka, Hugh Dickins,
	Jérôme Glisse, Andrea Arcangeli, "Aneesh Kumar K. V",
	David Rientjes, Matthew Wilcox, Lance Roy, Ralph Campbell,
	Jason Gunthorpe, Dave Airlie, Thomas Hellstrom, Souptick Joarder,
	Mel Gorman, Jan Kara, Mike Kravetz, Huang Ying, Aaron Lu,
	Omar Sandoval, Thomas Gleixner, Vineeth Remanan Pillai,
	Daniel Jordan, Mike Rapoport, Joel Fernandes, Alexander Duyck,
	Pavel Tatashin, David Hildenbrand, Juergen Gross, Anthony Yznaga,
	Johannes Weiner, "Darrick J. Wong",
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 3/4] mm: don't expose non-hugetlb page to fast gup prematurely
Message-ID: <20191002000046.GA60764@google.com>
References: <20190914070518.112954-1-yuzhao@google.com>
	<20190924232459.214097-1-yuzhao@google.com>
	<20190924232459.214097-3-yuzhao@google.com>
	<20190925082530.GD4536@hirez.programming.kicks-ass.net>
	<20190925222654.GA180125@google.com>
	<20190926102036.od2wamdx2s7uznvq@box>
	<9465df76-0229-1b44-5646-5cced1bc1718@nvidia.com>
	<20190927050648.GA92494@google.com>
	<712513fe-f064-c965-d165-80d43cfc606f@nvidia.com>
In-Reply-To: <712513fe-f064-c965-d165-80d43cfc606f@nvidia.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Oct 01, 2019 at 03:31:51PM -0700, John Hubbard wrote:
> On 9/26/19 10:06 PM, Yu Zhao wrote:
> > On Thu, Sep 26, 2019 at 08:26:46PM -0700, John Hubbard wrote:
> >> On 9/26/19 3:20 AM, Kirill A. Shutemov wrote:
> >>> On Wed, Sep 25, 2019 at 04:26:54PM -0600, Yu Zhao wrote:
> >>>> On Wed, Sep 25, 2019 at 10:25:30AM +0200, Peter Zijlstra wrote:
> >>>>> On Tue, Sep 24, 2019 at 05:24:58PM -0600, Yu Zhao wrote:
> >> ...
> >>>>> I'm thinking this patch makes stuff rather fragile... Should we
> >>>>> instead stick the barrier in set_p*d_at()? Or rather, make that
> >>>>> store a store-release?
> >>>>
> >>>> I prefer it this way too, but I suspected the majority would be
> >>>> concerned with the performance implications, especially those
> >>>> looping set_pte_at()s in mm/huge_memory.c.
> >>>
> >>> We can rename the current set_pte_at() to __set_pte_at() or
> >>> something and leave it in places where the barrier is not needed.
> >>> The new set_pte_at() will be used in the rest of the places, with
> >>> the barrier inside.
> >>
> >> +1, sounds nice. I was unhappy about the wide-ranging changes that
> >> would have to be maintained. So this seems much better.
> >
> > Just to be clear: doing so will add unnecessary barriers to one of
> > the two paths that share set_pte_at().
>
> Good point, maybe there's a better place to do it...
>
> >>> BTW, have you looked at other levels of the page table hierarchy?
> >>> Do we have the same issue for PMD/PUD/... pages?
> >>
> >> Along the lines of "what other memory barriers might be missing for
> >> get_user_pages_fast()", I'm also concerned that the synchronization
> >> between get_user_pages_fast() and freeing the page tables might be
> >> technically broken, due to missing memory barriers on the
> >> get_user_pages_fast() side. Details:
> >>
> >> gup_fast() disables interrupts, but I think it also needs some sort
> >> of memory barrier(s), in order to prevent reads of the page table
> >> (gup_pgd_range, etc.) from speculatively happening before the
> >> interrupts are disabled.
> >
> > I was under the impression that switching back from interrupt
> > context is a full barrier (otherwise wouldn't we be vulnerable to
> > some side channel attacks?), so the reader side wouldn't need an
> > explicit rmb.
>
> Documentation/memory-barriers.txt points out:
>
> INTERRUPT DISABLING FUNCTIONS
> -----------------------------
>
> Functions that disable interrupts (ACQUIRE equivalent) and enable
> interrupts (RELEASE equivalent) will act as compiler barriers only.
> So if memory or I/O barriers are required in such a situation, they
> must be provided by some other means.
>
> btw, I'm really sorry I missed your responses over the last 3 or 4
> days. I just tracked down something in our email system that was
> sometimes moving some emails to spam (just few enough to escape
> immediate attention, argghh!). I think I killed it off for good now.
> I wasn't ignoring you. :)

Thanks, John. I agree with all you said, including the irq disabling
functions not being a sufficient smp_rmb(). I was hoping somebody could
clarify whether the ipi handlers used by tlb flush are sufficient to
prevent CPU 1 from seeing any stale data from freed page tables on all
supported archs:

	CPU 1                         CPU 2

	                              flush remote tlb by ipi
	                              wait for the ipi handler
	                              free page table
	disable irq
	walk page table
	enable irq

I think they should be, because otherwise tlb flush wouldn't work if
CPU 1 still saw stale data from the freed page table -- unless there is
a really strange CPU cache design I'm not aware of.

Quoting the comment from the x86 ipi handler flush_tlb_func_common():

	 * read active_mm's tlb_gen. We don't need any explicit barriers
	 * because all x86 flush operations are serializing and the
	 * atomic64_read operation won't be reordered by the compiler.
For the ppc64 ipi handler radix__flush_tlb_range(), there is an "eieio"
instruction:

https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/assembler/idalangref_eieio_instrs.html

I'm not sure why it's not "sync" -- I'd guess something already works
implicitly as "sync" (or it's a bug).

I didn't find an ipi handler for tlb flush on arm64. There should be
one, otherwise fast gup on arm64 would be broken. Mark?