Date: Tue, 22 Sep 2020 20:27:35 -0400
From: Peter Xu
To: Jason Gunthorpe
Cc: John Hubbard, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Andrew Morton, Jan Kara, Michal Hocko, Kirill Tkhai, Kirill Shutemov,
 Hugh Dickins, Christoph Hellwig, Andrea Arcangeli, Oleg Nesterov,
 Leon Romanovsky, Linus Torvalds, Jann Horn
Subject: Re: [PATCH 1/5] mm: Introduce mm_struct.has_pinned
Message-ID: <20200923002735.GN19098@xz-x1>
References: <20200921211744.24758-1-peterx@redhat.com>
 <20200921211744.24758-2-peterx@redhat.com>
 <224908c1-5d0f-8e01-baa9-94ec2374971f@nvidia.com>
 <20200922151736.GD19098@xz-x1>
 <20200922161046.GB731578@ziepe.ca>
 <20200922175415.GI19098@xz-x1>
 <20200922191116.GK8409@ziepe.ca>
In-Reply-To: <20200922191116.GK8409@ziepe.ca>

On Tue, Sep 22, 2020 at 04:11:16PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 22, 2020 at 01:54:15PM -0400, Peter Xu wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 8f3521be80ca..6591f3f33299 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -888,8 +888,8 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> >  	 * Because we'll need to release the locks before doing cow,
> >  	 * pass this work to upper layer.
> >  	 */
> > -	if (READ_ONCE(src_mm->has_pinned) && wp &&
> > -	    page_maybe_dma_pinned(page)) {
> > +	if (wp && page_maybe_dma_pinned(page) &&
> > +	    READ_ONCE(src_mm->has_pinned)) {
> >  		/* We've got the page already; we're safe */
> >  		data->cow_old_page = page;
> >  		data->cow_oldpte = *src_pte;
> >
> > I can also add some more comment to emphasize this.
>
> It is not just that, but the ptep_set_wrprotect() has to be done
> earlier.

Now I understand your point, I think.  So I guess it's not only about
has_pinned: it's a race between fast-gup and the fork() code, even if
has_pinned is always set.

> Otherwise it races like:
>
>    pin_user_pages_fast()                   fork()
>     atomic_set(has_pinned, 1);
>     [..]
>                                            atomic_read(page->_refcount) //false
>                                            // skipped atomic_read(has_pinned)
>     atomic_add(page->_refcount)
>     ordered check write protect()
>                                            ordered set write protect()
>
> And now we have a write protect on a DMA pinned page, which violates
> the invariant we are trying to create.
>
> The best algorithm I've thought of is something like:
>
>  pte_map_lock()
>   if (page) {
>       if (wp) {
>           ptep_set_wrprotect()
>           /* Order with try_grab_compound_head(), either we see
>            * page_maybe_dma_pinned(), or they see the wrprotect */
>           get_page();

Does this get_page() have to come after the ptep_set_wrprotect()
explicitly?  IIUC what we need is to order ptep_set_wrprotect() against
page_maybe_dma_pinned() here.  E.g., would an explicit "mb()" work?

Another thing: do we need something similar in e.g. gup_pte_range() too,
to guarantee the ordering between try_grab_compound_head() and the pte
change check?
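To be concrete about the pairing I have in mind (a rough sketch only;
the explicit smp_mb()s stand in for whatever ordering get_page() /
try_grab_compound_head() would give us implicitly, and error paths are
elided):

        /* fork() side (copy_one_pte()), with the pte lock held */
        ptep_set_wrprotect(src_mm, addr, src_pte);
        smp_mb();  /* A: order the wrprotect before the _refcount read;
                    * pairs with B on the gup-fast side */
        if (page_maybe_dma_pinned(page) &&
            READ_ONCE(src_mm->has_pinned)) {
                /* Pinned: undo the wrprotect and copy the page now */
        }

        /* gup-fast side (gup_pte_range()), lockless */
        head = try_grab_compound_head(page, 1, flags);  /* _refcount add */
        smp_mb();  /* B: order the _refcount add before re-reading the
                    * pte; pairs with A above */
        if (unlikely(pte_val(pte) != pte_val(*ptep))) {
                /* fork() wrprotected the pte first; back off */
                put_compound_head(head, 1, flags);
                goto pte_unmap;
        }

With a full barrier on both sides, either fork() observes the raised
_refcount, or gup-fast observes the wrprotected pte; they can't both
miss each other's update.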
>           if (page_maybe_dma_pinned() && READ_ONCE(src_mm->has_pinned)) {
>                put_page();
>                ptep_clear_wrprotect()
>
>                // do copy
>                return
>           }
>       } else {
>           get_page();
>       }
>       page_dup_rmap()
>   }
>  pte_unmap_lock()
>
> Then the do_wp_page() path would have to detect that the page is not
> write protected under the pte lock inside the fault handler and just
> do nothing.

Yes, IIUC do_wp_page() should already be able to handle spurious write
page faults like this, as below (in handle_pte_fault()):

        vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
        spin_lock(vmf->ptl);
        ...
        if (vmf->flags & FAULT_FLAG_WRITE) {
                if (!pte_write(entry))
                        return do_wp_page(vmf);
                entry = pte_mkdirty(entry);
        }

So when spin_lock() returns:

  - If it's a real cow (the page is not pinned: we write-protected it
    and it stays write-protected), we do the cow here as usual.

  - If it's a spurious cow (the page is pinned), the write bit should
    have been restored before the page table lock was released, so we
    skip do_wp_page() and simply retry the page fault.

> Ie the set/clear could be visible to the CPU and trigger a
> spurious fault, but never trigger a COW.
>
> Thus 'wp' becomes a 'lock' that prevents GUP from returning this page.

Another question: what about read fast-gup for pinning?  We can't use
the write-protect mechanism to block a read gup.  I remember we've
discussed similar things before, and IIRC your point was "pinned pages
should always be pinned with WRITE".  However I still doubt that,
because read gup seems legal to me too (as I mentioned previously:
when the device only writes to the page and the processor only reads
from it).

> Very tricky, deserves a huge comment near the ptep_clear_wrprotect()
>
> Consider the above algorithm beside the gup_fast() algorithm:
>
> 	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
> 		goto pte_unmap;
> 	[..]
> 	head = try_grab_compound_head(page, 1, flags);
> 	if (!head)
> 		goto pte_unmap;
> 	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> 		put_compound_head(head, 1, flags);
> 		goto pte_unmap;
>
> That last *ptep check verifies that the WP is not set after making
> page_maybe_dma_pinned() true.
>
> It still looks reasonable; the extra work is just the additional
> atomic in page_maybe_dma_pinned(), but everything else has to be very
> carefully sequenced due to the unlocked page table accessors.

Tricky!  I'm still thinking about an easier way, but no real clue so
far.  Hopefully we'll figure out something solid soon.

Thanks,

-- 
Peter Xu