From: David Hildenbrand <david@redhat.com> (Red Hat)
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Arnd Bergmann, Oscar Salvador, Matthew Wilcox, Andrea Arcangeli, Minchan Kim, Jann Horn, Jason Gunthorpe, Dave Hansen, Hugh Dickins, Rik van Riel, "Michael S. Tsirkin", "Kirill A. Shutemov", Vlastimil Babka, Richard Henderson, Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller, Chris Zankel, Max Filippov, Mike Kravetz, Peter Xu, Rolf Eike Beer, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org, Linux API
Subject: Re: [PATCH resend v2 2/5] mm/madvise: introduce MADV_POPULATE_(READ|WRITE) to prefault page tables
Date: Fri, 21 May 2021 10:48:57 +0200
Message-ID: <2e41144c-27f4-f341-d855-f966dabc2c21@redhat.com>
References: <20210511081534.3507-1-david@redhat.com> <20210511081534.3507-3-david@redhat.com>
> [...]
>> Anyhow, please suggest a way to handle it via a single flag in the
>> kernel -- which would be some kind of heuristic as we know from
>> MAP_POPULATE. Having an alternative at hand would make it easier to
>> discuss this topic further. I certainly *don't* want MAP_POPULATE
>> semantics when it comes to MADV_POPULATE, especially when it comes to
>> shared mappings. Not useful in QEMU now and in the future.
>
> OK, this point is still not entirely clear to me. Elsewhere you are
> saying that QEMU cannot use MAP_POPULATE because it ignores errors
> and also it doesn't support sparse mappings because they apply to the
> whole mmap. These are all clear, but it is less clear to me why the same
> semantic is not applicable for QEMU when used through the madvise
> interface, which can handle both of those.

It's a combination of things:

a) MAP_POPULATE never was an option, simply because of the deferred
"prealloc=on" handling in QEMU, which happens way after we created the
memmap. Further, it doesn't report whether there was an error, which is
another reason why it's basically useless for QEMU's use cases.

b) QEMU uses manual read-write prefaulting for "preallocation", for
example, to avoid SIGBUS on hugetlbfs or shmem at runtime. There are
cases where we absolutely want to avoid crashing the VM later just
because of a user error. MAP_POPULATE does *not* do what we want for
shared mappings, because it triggers a read fault.

c) QEMU uses the same mechanism for prefaulting in RT environments,
where we want to avoid any kind of page fault, using mlock() etc.
d) MAP_POPULATE does not apply to sparse memory mappings that I'll be
using more heavily in QEMU, also for the purpose of preallocation with
virtio-mem.

See the current QEMU code along with a comment in
https://github.com/qemu/qemu/blob/972e848b53970d12cb2ca64687ef8ff797fb6236/util/oslib-posix.c#L496

It's especially bad for PMEM ("wear on the storage backing"), which is
why we have to trust users not to trigger preallocation/prefaulting on
PMEM; otherwise (as already expressed via bug reports) we waste a lot of
time when backing VMs on PMEM or forwarding NVDIMMs, unnecessarily
reading/writing (slow) DAX.

> Do I get it right that you really want to emulate the full-fledged write
> fault to a) limit another write fault when the content is actually
> modified and b) prevent potential errors during the write fault
> (e.g. mkwrite failing on the fs data)?

Yes, for the use case of "preallocation" in QEMU. See the QEMU link.

But again, the thing that makes it more complicated is that I can come
up with some use cases that want to handle "shared mappings of ordinary
files" a little better. Or the userfaultfd-wp example I gave, where
prefaulting via MADV_POPULATE_READ can roughly halve the population
time.

>> We could make MADV_POPULATE act depending on the readability/writability
>> of a mapping. Use MADV_POPULATE_WRITE on writable mappings, use
>> MADV_POPULATE_READ on readable mappings. Certainly not perfect for use
>> cases where you have writable mappings that are mostly read-only (as in
>> the example with fake-NVDIMMs I gave ...), but if it makes people happy,
>> fine with me. I mostly care about MADV_POPULATE_WRITE.
>
> Yes, this is where my thinking was going as well. Essentially define
> MADV_POPULATE as "Populate the mapping with the memory based on the
> mapping access." This looks like a straightforward semantic to me, and it
> doesn't really require any deep knowledge of internals.
> Now, I was trying to compare which of those would be more tricky to
> understand and use, and TBH I am not really convinced either of the two
> is much better. Separate READ/WRITE modes are explicit, which can be
> good, but they will require quite advanced knowledge of the #PF behavior.
> On the other hand, MADV_POPULATE would require some tricks like mmap,
> madvise and mprotect (to change to writable) when the data is really
> written to. I am not sure how much of a deal this would be for QEMU,
> for example.

IIRC, at the time we enable background snapshotting, the VM is running
and we cannot temporarily mprotect(PROT_READ) without making the guest
crash. But again, uffd-wp handling is somewhat of a special case because
the implementation in the kernel is really suboptimal.

The reason I chose MADV_POPULATE_READ + MADV_POPULATE_WRITE is that they
really mimic what user space currently does to get the job done.

I guess the important part to document is "be careful when using
MADV_POPULATE_READ because it might just populate the shared zeropage"
and "be careful with MADV_POPULATE_WRITE because it will do the same as
writing to every page: dirty the pages such that they will have to be
written back when backed by actual files".

The current man page entry for MADV_POPULATE_READ reads:

"
Populate (prefault) page tables readable for the whole range without
actually reading. Depending on the underlying mapping, map the shared
zeropage, preallocate memory or read the underlying file. Do not
generate SIGBUS when populating fails; return an error instead.

If MADV_POPULATE_READ succeeds, all page tables have been populated
(prefaulted) readable once. If MADV_POPULATE_READ fails, some page
tables might have been populated.

MADV_POPULATE_READ cannot be applied to mappings without read
permissions and special mappings marked with the kernel-internal
VM_PFNMAP and VM_IO.
Note that with MADV_POPULATE_READ, the process can still be killed at
any moment when the system runs out of memory.
"

> So, all that being said, I am not really sure. I am not really happy
> about the READ/WRITE split, but if a simpler interface is going to be a
> bad fit for existing use cases then I believe the proper way to go is to
> document the more complex interface thoroughly.

I think with the split we are better off long-term, without requiring
workarounds (mprotect()) to make some use cases work.

But again, if there is a good justification why a single MADV_POPULATE
makes sense, I'm happy to change it. Again, for me, the most important
thing long-term is MADV_POPULATE_WRITE, because that's really what QEMU
mainly uses right now for preallocation. But I can see use cases for
MADV_POPULATE_READ as well.

Thanks for your input!

-- 
Thanks,

David / dhildenb