Date: Tue, 02 Jun 2020 13:17:32 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, arnd@arndb.de, dave.hansen@linux.intel.com,
 hch@lst.de, hpa@zytor.com, jroedel@suse.de, linux-mm@kvack.org,
 luto@kernel.org, mhocko@kernel.org, mingo@elte.hu, mm-commits@vger.kernel.org,
 peterz@infradead.org, rjw@rjwysocki.net, rostedt@goodmis.org,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz,
 willy@infradead.org
Subject: [patch 122/128] x86/mm/32: implement arch_sync_kernel_mappings()
Message-ID: <20200602201732.nguP-AKBD%akpm@linux-foundation.org>
In-Reply-To: <20200602130930.8e8f10fa6f19e3766e70921f@linux-foundation.org>

From: Joerg Roedel
Subject: x86/mm/32: implement arch_sync_kernel_mappings()

Implement the function to sync changes in vmalloc and ioremap ranges
to all page-tables.

Link: http://lkml.kernel.org/r/20200515140023.25469-6-joro@8bytes.org
Signed-off-by: Joerg Roedel
Acked-by: Andy Lutomirski
Acked-by: Peter Zijlstra (Intel)
Cc: Arnd Bergmann
Cc: Christoph Hellwig
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: "Rafael J. Wysocki"
Cc: Steven Rostedt (VMware)
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

 arch/x86/include/asm/pgtable-2level_types.h |    2 +
 arch/x86/include/asm/pgtable-3level_types.h |    2 +
 arch/x86/mm/fault.c                         |   25 +++++++++++-------
 3 files changed, 20 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/pgtable-2level_types.h~x86-mm-32-implement-arch_sync_kernel_mappings
+++ a/arch/x86/include/asm/pgtable-2level_types.h
@@ -20,6 +20,8 @@ typedef union {
 
 #define SHARED_KERNEL_PMD	0
 
+#define ARCH_PAGE_TABLE_SYNC_MASK	PGTBL_PMD_MODIFIED
+
 /*
  * traditional i386 two-level paging structure:
  */
--- a/arch/x86/include/asm/pgtable-3level_types.h~x86-mm-32-implement-arch_sync_kernel_mappings
+++ a/arch/x86/include/asm/pgtable-3level_types.h
@@ -27,6 +27,8 @@ typedef union {
 #define SHARED_KERNEL_PMD	(!static_cpu_has(X86_FEATURE_PTI))
 #endif
 
+#define ARCH_PAGE_TABLE_SYNC_MASK	(SHARED_KERNEL_PMD ? 0 : PGTBL_PMD_MODIFIED)
+
 /*
  * PGDIR_SHIFT determines what a top-level page table entry can map
  */
--- a/arch/x86/mm/fault.c~x86-mm-32-implement-arch_sync_kernel_mappings
+++ a/arch/x86/mm/fault.c
@@ -190,16 +190,13 @@ static inline pmd_t *vmalloc_sync_one(pg
 	return pmd_k;
 }
 
-static void vmalloc_sync(void)
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
-	unsigned long address;
+	unsigned long addr;
 
-	if (SHARED_KERNEL_PMD)
-		return;
-
-	for (address = VMALLOC_START & PMD_MASK;
-	     address >= TASK_SIZE_MAX && address < VMALLOC_END;
-	     address += PMD_SIZE) {
+	for (addr = start & PMD_MASK;
+	     addr >= TASK_SIZE_MAX && addr < VMALLOC_END;
+	     addr += PMD_SIZE) {
 		struct page *page;
 
 		spin_lock(&pgd_lock);
@@ -210,13 +207,23 @@ static void vmalloc_sync(void)
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 
 			spin_lock(pgt_lock);
-			vmalloc_sync_one(page_address(page), address);
+			vmalloc_sync_one(page_address(page), addr);
 			spin_unlock(pgt_lock);
 		}
 		spin_unlock(&pgd_lock);
 	}
 }
 
+static void vmalloc_sync(void)
+{
+	unsigned long address;
+
+	if (SHARED_KERNEL_PMD)
+		return;
+
+	arch_sync_kernel_mappings(VMALLOC_START, VMALLOC_END);
+}
+
 void vmalloc_sync_mappings(void)
 {
 	vmalloc_sync();
_
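
For readers following the series: the hook implemented here only fires because
the generic vmalloc/ioremap page-table code, changed earlier in this series,
tracks which page-table levels it modified and calls back into the
architecture. Below is a minimal sketch of that caller side; the helper name
sync_if_needed() is hypothetical and the real call sites live in the generic
mm code, so treat this as an illustration of the contract, not the exact
upstream code. It assumes the pgtbl_mod_mask type and PGTBL_*_MODIFIED bits
introduced by the earlier patches in this series.

/*
 * Illustrative fragment only: the generic code accumulates
 * PGTBL_*_MODIFIED bits (pgtbl_mod_mask) while populating kernel
 * page-tables.  If any of those bits are listed in the architecture's
 * ARCH_PAGE_TABLE_SYNC_MASK -- PGTBL_PMD_MODIFIED on x86-32, per the
 * pgtable-*level_types.h hunks above -- it invokes
 * arch_sync_kernel_mappings() so the new PMD entries are propagated to
 * every process's page-table, as done by the fault.c code in this patch.
 */
static void sync_if_needed(unsigned long start, unsigned long end,
			   pgtbl_mod_mask mask)
{
	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
		arch_sync_kernel_mappings(start, end);
}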