Subject: Re: [RFCv2 08/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
To: Matthew Wilcox
CC: "Kirill A. Shutemov", Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry, "Edgecombe, Rick P", "Kleen, Andi", "Liran Alon", Mike Rapoport, "Kirill A.
Shutemov"
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> <20201020061859.18385-9-kirill.shutemov@linux.intel.com> <20201022114946.GR20115@casper.infradead.org> <30ce6691-fd70-76a2-8b61-86d207c88713@nvidia.com> <20201026042158.GN20115@casper.infradead.org>
From: John Hubbard
Date: Sun, 25 Oct 2020 21:44:07 -0700
In-Reply-To: <20201026042158.GN20115@casper.infradead.org>

On 10/25/20 9:21 PM, Matthew Wilcox wrote:
> On Thu, Oct 22, 2020 at 12:58:14PM -0700, John Hubbard wrote:
>> On 10/22/20 4:49 AM, Matthew Wilcox wrote:
>>> On Tue, Oct 20, 2020 at 01:25:59AM -0700, John Hubbard wrote:
>>>> Should copy_to_guest() use pin_user_pages_unlocked() instead of gup_unlocked?
>>>> We wrote a "Case 5" in Documentation/core-api/pin_user_pages.rst, just for this
>>>> situation, I think:
>>>>
>>>>
>>>> CASE 5: Pinning in order to write to the data within the page
>>>> -------------------------------------------------------------
>>>> Even though neither DMA nor Direct IO is involved, just a simple case of "pin,
>>>> write to a page's data, unpin" can cause a problem. Case 5 may be considered a
>>>> superset of Case 1, plus Case 2, plus anything that invokes that pattern. In
>>>> other words, if the code is neither Case 1 nor Case 2, it may still require
>>>> FOLL_PIN, for patterns like this:
>>>>
>>>> Correct (uses FOLL_PIN calls):
>>>>     pin_user_pages()
>>>>     write to the data within the pages
>>>>     unpin_user_pages()
>>>
>>> Case 5 is crap though. That bug should have been fixed by getting
>>> the locking right. ie:
>>>
>>>     get_user_pages_fast();
>>>     lock_page();
>>>     kmap();
>>>     set_bit();
>>>     kunmap();
>>>     set_page_dirty()
>>>     unlock_page();
>>>
>>> I should have vetoed that patch at the time, but I was busy with other things.
>>
>> It does seem like lock_page() is better, for now at least, because it
>> forces the kind of synchronization with file system writeback that is
>> still yet to be implemented for pin_user_pages().
>>
>> Long term though, Case 5 provides an alternative way to do this
>> pattern--without using lock_page(). Also, note that Case 5, *in
>> general*, need not be done page-at-a-time, unlike the lock_page()
>> approach. Therefore, Case 5 might potentially help at some call sites,
>> either for deadlock avoidance or performance improvements.
>>
>> In other words, once the other half of the pin_user_pages() plan is
>> implemented, either of these approaches should work.
>>
>> Or, are you thinking that there is never a situation in which Case 5 is
>> valid?
>
> I don't think the page pinning approach is ever valid. For file

Could you qualify that?
Surely you don't mean that the entire pin_user_pages story is a waste of
time--I would have expected you to make more noise earlier if you
thought that, yes?

> mappings, the infiniband registration should take a lease on the inode,
> blocking truncation. DAX shouldn't be using struct pages at all, so
> there shouldn't be anything there to pin.
>
> It's been five years since DAX was merged, and page pinning still
> doesn't work. How much longer before the people who are pushing it
> realise that it's fundamentally flawed?

Is this a separate rant about *only* DAX, or is general RDMA in your
sights too? :)

thanks,
--
John Hubbard
NVIDIA