From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH v1 2/5] mm/madvise: introduce MADV_POPULATE_(READ|WRITE) to
 prefault/prealloc memory
From: David Hildenbrand
To: Jann Horn
Cc: kernel list, Linux-MM, Andrew Morton, Arnd Bergmann, Michal Hocko,
 Oscar Salvador, Matthew Wilcox, Andrea Arcangeli, Minchan Kim,
 Jason Gunthorpe, Dave Hansen, Hugh Dickins, Rik van Riel,
 "Michael S. Tsirkin", "Kirill A. Shutemov", Vlastimil Babka,
 Richard Henderson, Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer,
 "James E.J. Bottomley", Helge Deller, Chris Zankel, Max Filippov,
 Mike Kravetz, Peter Xu, Rolf Eike Beer, linux-alpha@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 linux-xtensa@linux-xtensa.org, linux-arch, Linux API
References: <20210317110644.25343-1-david@redhat.com>
 <20210317110644.25343-3-david@redhat.com>
 <2bab28c7-08c0-7ff0-c70e-9bf94da05ce1@redhat.com>
 <26227fc6-3e7b-4e69-f69d-4dc2a67ecfe8@redhat.com>
In-Reply-To: <26227fc6-3e7b-4e69-f69d-4dc2a67ecfe8@redhat.com>
Organization: Red Hat GmbH
Message-ID: <54165ffe-dbf7-377a-a710-d15be4701f20@redhat.com>
Date: Tue, 30 Mar 2021 18:31:41 +0200
X-Mailing-List: linux-kernel@vger.kernel.org

On 30.03.21 18:30, David Hildenbrand wrote:
> On 30.03.21 18:21, Jann Horn wrote:
>> On Tue, Mar 30, 2021
>> at 5:01 PM David Hildenbrand wrote:
>>>>> +long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
>>>>> +			    unsigned long end, bool write, int *locked)
>>>>> +{
>>>>> +	struct mm_struct *mm = vma->vm_mm;
>>>>> +	unsigned long nr_pages = (end - start) / PAGE_SIZE;
>>>>> +	int gup_flags;
>>>>> +
>>>>> +	VM_BUG_ON(!PAGE_ALIGNED(start));
>>>>> +	VM_BUG_ON(!PAGE_ALIGNED(end));
>>>>> +	VM_BUG_ON_VMA(start < vma->vm_start, vma);
>>>>> +	VM_BUG_ON_VMA(end > vma->vm_end, vma);
>>>>> +	mmap_assert_locked(mm);
>>>>> +
>>>>> +	/*
>>>>> +	 * FOLL_HWPOISON: Return -EHWPOISON instead of -EFAULT when we hit
>>>>> +	 * a poisoned page.
>>>>> +	 * FOLL_POPULATE: Always populate memory with VM_LOCKONFAULT.
>>>>> +	 * !FOLL_FORCE: Require proper access permissions.
>>>>> +	 */
>>>>> +	gup_flags = FOLL_TOUCH | FOLL_POPULATE | FOLL_MLOCK | FOLL_HWPOISON;
>>>>> +	if (write)
>>>>> +		gup_flags |= FOLL_WRITE;
>>>>> +
>>>>> +	/*
>>>>> +	 * See check_vma_flags(): Will return -EFAULT on incompatible mappings
>>>>> +	 * or with insufficient permissions.
>>>>> +	 */
>>>>> +	return __get_user_pages(mm, start, nr_pages, gup_flags,
>>>>> +				NULL, NULL, locked);
>>>>
>>>> You mentioned in the commit message that you don't want to actually
>>>> dirty all the file pages and force writeback; but doesn't
>>>> POPULATE_WRITE still do exactly that? In follow_page_pte(), if
>>>> FOLL_TOUCH and FOLL_WRITE are set, we mark the page as dirty:
>>>
>>> Well, I mention that POPULATE_READ explicitly doesn't do that. I
>>> primarily set it because populate_vma_page_range() also sets it.
>>>
>>> Is it safe to *not* set it? IOW, fault something writable into a page
>>> table (where the CPU could dirty it without additional page faults)
>>> without marking it accessed? For me, this made sense logically. Thus I
>>> also understood why populate_vma_page_range() set it.
>>
>> FOLL_TOUCH doesn't have anything to do with installing the PTE - it
>> essentially means "the caller of get_user_pages wants to read/write
>> the contents of the returned page, so please do the same things you
>> would do if userspace was accessing the page". So in particular, if
>> you look up a page via get_user_pages() with FOLL_WRITE|FOLL_TOUCH,
>> that tells the MM subsystem "I will be writing into this page directly
>> from the kernel, bypassing the userspace page tables, so please mark
>> it as dirty now so that it will be properly written back later". Part
>> of that is that it marks the page as recently used, which has an
>> effect on LRU pageout behavior, I think - as far as I understand, that
>> is why populate_vma_page_range() uses FOLL_TOUCH.
>>
>> If you look at __get_user_pages(), you can see that it is split up
>> into two major parts: faultin_page() for creating PTEs, and
>> follow_page_mask() for grabbing pages from PTEs. faultin_page()
>> ignores FOLL_TOUCH completely; only follow_page_mask() uses it.
>>
>> In a way I guess maybe you do want the "mark as recently accessed"
>> part that FOLL_TOUCH would give you without FOLL_WRITE? But I think
>> you very much don't want the dirtying that FOLL_TOUCH|FOLL_WRITE leads
>> to. Maybe the ideal approach would be to add a new FOLL flag to say "I
>> only want to mark as recently used, I don't want to dirty". Or maybe
>> it's enough to just leave out FOLL_TOUCH entirely, I don't know.
>
> Any thoughts why populate_vma_page_range() does it?

Sorry, I missed the explanation above - thanks!

-- 
Thanks, David / dhildenb