To: Mike Rapoport, Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
    Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
    Elena Reshetova, "H. Peter Anvin", Hagen Paul Pfeifer, Ingo Molnar,
    James Bottomley, Kees Cook, "Kirill A. Shutemov", Matthew Wilcox,
    Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
    Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
    "Rafael J. Wysocki", Rick Edgecombe, Roman Gushchin, Shakeel Butt,
    Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon, Yury Norov,
    linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
References: <20210513184734.29317-1-rppt@kernel.org> <20210513184734.29317-4-rppt@kernel.org>
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v19 3/8] set_memory: allow set_direct_map_*_noflush() for multiple pages
Message-ID: <858e5561-bc7d-4ce1-5cb8-3c333199d52a@redhat.com>
Date: Fri, 14 May 2021 10:43:29 +0200
In-Reply-To: <20210513184734.29317-4-rppt@kernel.org>

On 13.05.21 20:47, Mike Rapoport wrote:
> From: Mike Rapoport
> 
> The underlying implementations of set_direct_map_invalid_noflush() and
> set_direct_map_default_noflush() allow updating multiple contiguous pages
> at once.
> 
> Add numpages parameter to set_direct_map_*_noflush() to expose this
> ability with these APIs.
> 

[...]

Finally doing some in-depth review, sorry for not having a detailed look
earlier.

> 
> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(struct page *page, int numpages)
>  {
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(0),
>  		.clear_mask = __pgprot(PTE_VALID),
>  	};
> +	unsigned long size = PAGE_SIZE * numpages;
>  

Nit: I'd have made this const and added an early exit for !numpages (rough
sketch below). But whatever you prefer.

>  	if (!debug_pagealloc_enabled() && !rodata_full)
>  		return 0;
>  
>  	return apply_to_page_range(&init_mm,
>  				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   size, change_page_range, &data);
>  }
>  
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(struct page *page, int numpages)
>  {
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>  		.clear_mask = __pgprot(PTE_RDONLY),
>  	};
> +	unsigned long size = PAGE_SIZE * numpages;
>  

Nit: ditto.

>  	if (!debug_pagealloc_enabled() && !rodata_full)
>  		return 0;
>  
>  	return apply_to_page_range(&init_mm,
>  				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   size, change_page_range, &data);
>  }
> 

[...]
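Just to illustrate the nit, here is a rough, untested sketch of what I
have in mind for the invalid variant; treating numpages == 0 as a nop is
my assumption, nothing the patch itself defines:

int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
	struct page_change_data data = {
		.set_mask = __pgprot(0),
		.clear_mask = __pgprot(PTE_VALID),
	};
	/* const: the range size never changes after this point */
	const unsigned long size = PAGE_SIZE * numpages;

	/*
	 * Early exit: assuming an empty range is a legitimate nop for
	 * callers (my assumption, not spelled out by the patch).
	 */
	if (!numpages)
		return 0;

	if (!debug_pagealloc_enabled() && !rodata_full)
		return 0;

	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   size, change_page_range, &data);
}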
> extern int kernel_set_to_readonly;
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 156cd235659f..15a55d6e9cec 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2192,14 +2192,14 @@ static int __set_pages_np(struct page *page, int numpages)
>  	return __change_page_attr_set_clr(&cpa, 0);
>  }
>  
> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(struct page *page, int numpages)
>  {
> -	return __set_pages_np(page, 1);
> +	return __set_pages_np(page, numpages);
>  }
>  
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(struct page *page, int numpages)
>  {
> -	return __set_pages_p(page, 1);
> +	return __set_pages_p(page, numpages);
>  }
> 

So, what happens if we succeed in setting set_direct_map_invalid_noflush()
for some pages but then fail when having to split a large mapping? Did I
miss something, or would the current code not undo what it partially did?
Or do we simply not care?

I guess to handle this cleanly we would either have to catch all error
cases first (especially splitting large mappings) before actually
performing the switch to invalid, or have some recovery code in place if
possible.

AFAIKS, your patch #5 right now only calls this with a single page, so do
we need this change at all? It feels like a leftover from older versions
to me, where we could have had more than a single page.

-- 
Thanks,

David / dhildenb
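P.S.: purely as an illustration of the "recovery code" idea above, a
hypothetical caller-side helper (not part of this series; names are made
up, and it assumes restoring the default mapping is safe to attempt even
for pages that were never switched):

/*
 * Hypothetical helper, not from this series: invalidate a range in the
 * direct map and try to restore it if the operation fails half-way.
 */
static int set_direct_map_invalid_or_restore(struct page *page, int numpages)
{
	int err = set_direct_map_invalid_noflush(page, numpages);

	if (err) {
		/*
		 * We do not know how many pages were actually switched
		 * before the failure (e.g. a failed large-mapping split),
		 * so restore the whole range to be on the safe side.
		 */
		set_direct_map_default_noflush(page, numpages);
	}

	return err;
}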