From: Pavel Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com
Subject: [PATCH v12 14/17] arm64: kexec: install a copy of the linear-map
Date: Tue, 2 Mar 2021 19:22:27 -0500
Message-Id: <20210303002230.1083176-15-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210303002230.1083176-1-pasha.tatashin@soleen.com>
References: <20210303002230.1083176-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0

To perform the kexec relocations with the MMU enabled, we need a copy of
the linear map.

Create one, and install it from the relocation code. This has to be done
from the assembly code as it will be idmapped with TTBR0. The kernel
runs in TTBR1, so it can't use the break-before-make sequence on the
mapping it is executing from.
This makes no difference yet, as the relocation code runs with the MMU
disabled.

Co-developed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/include/asm/assembler.h  | 19 +++++++++++++++++++
 arch/arm64/include/asm/kexec.h      |  2 ++
 arch/arm64/kernel/asm-offsets.c     |  2 ++
 arch/arm64/kernel/hibernate-asm.S   | 20 --------------------
 arch/arm64/kernel/machine_kexec.c   | 16 ++++++++++++++--
 arch/arm64/kernel/relocate_kernel.S |  3 +++
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 29061b76aab6..3ce8131ad660 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -425,6 +425,25 @@ USER(\label, ic	ivau, \tmp2)	// invalidate I line PoU
 	isb
 	.endm

+/*
+ * To prevent the possibility of old and new partial table walks being visible
+ * in the tlb, switch the ttbr to a zero page when we invalidate the old
+ * records. D4.7.1 'General TLB maintenance requirements' in ARM DDI 0487A.i
+ * Even switching to our copied tables will cause a changed output address at
+ * each stage of the walk.
+ */
+	.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
+	phys_to_ttbr \tmp, \zero_page
+	msr	ttbr1_el1, \tmp
+	isb
+	tlbi	vmalle1
+	dsb	nsh
+	phys_to_ttbr \tmp, \page_table
+	offset_ttbr1 \tmp, \tmp2
+	msr	ttbr1_el1, \tmp
+	isb
+	.endm
+
 /*
  * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
  */
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 305cf0840ed3..59ac166daf53 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -97,6 +97,8 @@ struct kimage_arch {
 	phys_addr_t dtb_mem;
 	phys_addr_t kern_reloc;
 	phys_addr_t el2_vectors;
+	phys_addr_t ttbr1;
+	phys_addr_t zero_page;
 	/* Core ELF header buffer */
 	void *elf_headers;
 	unsigned long elf_headers_mem;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 2e3278df1fc3..609362b5aa76 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -158,6 +158,8 @@ int main(void)
 #ifdef CONFIG_KEXEC_CORE
   DEFINE(KIMAGE_ARCH_DTB_MEM,		offsetof(struct kimage, arch.dtb_mem));
   DEFINE(KIMAGE_ARCH_EL2_VECTORS,	offsetof(struct kimage, arch.el2_vectors));
+  DEFINE(KIMAGE_ARCH_ZERO_PAGE,		offsetof(struct kimage, arch.zero_page));
+  DEFINE(KIMAGE_ARCH_TTBR1,		offsetof(struct kimage, arch.ttbr1));
   DEFINE(KIMAGE_HEAD,			offsetof(struct kimage, head));
   DEFINE(KIMAGE_START,			offsetof(struct kimage, start));
   BLANK();
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 8ccca660034e..a31e621ba867 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -15,26 +15,6 @@
 #include
 #include

-/*
- * To prevent the possibility of old and new partial table walks being visible
- * in the tlb, switch the ttbr to a zero page when we invalidate the old
- * records. D4.7.1 'General TLB maintenance requirements' in ARM DDI 0487A.i
- * Even switching to our copied tables will cause a changed output address at
- * each stage of the walk.
- */
-.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
-	phys_to_ttbr \tmp, \zero_page
-	msr	ttbr1_el1, \tmp
-	isb
-	tlbi	vmalle1
-	dsb	nsh
-	phys_to_ttbr \tmp, \page_table
-	offset_ttbr1 \tmp, \tmp2
-	msr	ttbr1_el1, \tmp
-	isb
-.endm
-
-
 /*
  * Resume from hibernate
  *
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index f1451d807708..c875ef522e53 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -153,6 +153,8 @@ static void *kexec_page_alloc(void *arg)

 int machine_kexec_post_load(struct kimage *kimage)
 {
+	int rc;
+	pgd_t *trans_pgd;
 	void *reloc_code = page_to_virt(kimage->control_code_page);
 	long reloc_size;
 	struct trans_pgd_info info = {
@@ -169,12 +171,22 @@ int machine_kexec_post_load(struct kimage *kimage)

 	kimage->arch.el2_vectors = 0;
 	if (is_hyp_callable()) {
-		int rc = trans_pgd_copy_el2_vectors(&info,
-						    &kimage->arch.el2_vectors);
+		rc = trans_pgd_copy_el2_vectors(&info,
+						&kimage->arch.el2_vectors);
 		if (rc)
 			return rc;
 	}

+	/* Create a copy of the linear map */
+	trans_pgd = kexec_page_alloc(kimage);
+	if (!trans_pgd)
+		return -ENOMEM;
+	rc = trans_pgd_create_copy(&info, &trans_pgd, PAGE_OFFSET, PAGE_END);
+	if (rc)
+		return rc;
+	kimage->arch.ttbr1 = __pa(trans_pgd);
+	kimage->arch.zero_page = __pa(empty_zero_page);
+
 	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
 	kimage->arch.kern_reloc = __pa(reloc_code);
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 7a600ba33ae1..e83b6380907d 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -29,10 +29,13 @@
  */
 SYM_CODE_START(arm64_relocate_new_kernel)
 	/* Setup the list loop variables. */
+	ldr	x18, [x0, #KIMAGE_ARCH_ZERO_PAGE] /* x18 = zero page for BBM */
+	ldr	x17, [x0, #KIMAGE_ARCH_TTBR1]	/* x17 = linear map copy */
 	ldr	x16, [x0, #KIMAGE_HEAD]		/* x16 = kimage_head */
 	mov	x14, xzr			/* x14 = entry ptr */
 	mov	x13, xzr			/* x13 = copy dest */
 	raw_dcache_line_size x15, x1		/* x15 = dcache line size */
+	break_before_make_ttbr_switch	x18, x17, x1, x2 /* set linear map */
 .Lloop:
 	and	x12, x16, PAGE_MASK		/* x12 = addr */

-- 
2.25.1
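For reference, the sequence the moved break_before_make_ttbr_switch macro carries out is the standard break-before-make TTBR switch required by the Arm ARM (TLB maintenance requirements). A commented sketch of the same steps, with the concrete register names (x18, x17, x1, x2) taken from how the relocation code above invokes the macro:

```asm
	/* Step 1 ("break"): point TTBR1_EL1 at a zero page so that no
	 * walk through the old tables can still be in progress. */
	phys_to_ttbr x1, x18		// x18 = phys addr of empty_zero_page
	msr	ttbr1_el1, x1
	isb				// synchronize the TTBR change

	/* Step 2: invalidate any stale TLB entries for the old tables. */
	tlbi	vmalle1
	dsb	nsh

	/* Step 3 ("make"): only now install the copied linear-map tables. */
	phys_to_ttbr x1, x17		// x17 = phys addr of the copied pgd
	offset_ttbr1 x1, x2		// apply the 52-bit-VA TTBR offset if needed
	msr	ttbr1_el1, x1
	isb
```

Going through the zero page means there is never a window in which old and new translations for the same virtual address could both be visible to the TLB.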