From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ard Biesheuvel,
    Mark Rutland, Catalin Marinas
Subject: [PATCH 4.7 069/143] arm64: kernel: avoid literal load of virtual address with MMU off
Date: Mon, 5 Sep 2016 18:44:05 +0200
Message-Id: <20160905164433.531295471@linuxfoundation.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160905164430.593075551@linuxfoundation.org>
References: <20160905164430.593075551@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.7-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ard Biesheuvel

commit bc9f3d7788a88d080a30599bde68f383daf8f8a5 upstream.

Literal loads of virtual addresses are subject to runtime relocation when
CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
MMU and caches enabled, literal loads of relocated values performed with
the MMU off are not guaranteed to return the latest value unless the
memory covering the literal is cleaned to the PoC explicitly.

So defer the literal load until after the MMU has been enabled, just like
we do for primary_switch() and secondary_switch() in head.S.

Fixes: 1e48ef7fcc37 ("arm64: add support for building vmlinux as a relocatable PIE binary")
Signed-off-by: Ard Biesheuvel
Acked-by: Mark Rutland
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm64/kernel/sleep.S |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -101,12 +101,20 @@ ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adr_l	lr, __enable_mmu	/* __cpu_setup will return here */
-	ldr	x27, =_cpu_resume	/* __enable_mmu will branch here */
+	adr_l	x27, _resume_switched	/* __enable_mmu will branch here */
 	adrp	x25, idmap_pg_dir
 	adrp	x26, swapper_pg_dir
 	b	__cpu_setup
 ENDPROC(cpu_resume)
 
+	.pushsection	".idmap.text", "ax"
+_resume_switched:
+	ldr	x8, =_cpu_resume
+	br	x8
+ENDPROC(_resume_switched)
+	.ltorg
+	.popsection
+
 ENTRY(_cpu_resume)
 	mrs	x1, mpidr_el1
 	adrp	x8, mpidr_hash
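
For illustration only (not part of the patch), here is a minimal sketch of
the two addressing forms the fix chooses between, written against a
hypothetical symbol some_va_symbol; the adr_l macro referenced by the patch
lives in arch/arm64/include/asm/assembler.h and expands to the adrp/add
pair shown below:

	// Literal-pool load: the assembler emits the absolute virtual
	// address of some_va_symbol into a nearby literal word. With
	// CONFIG_RELOCATABLE=y that word is patched at boot by the
	// relocation code, which runs with the MMU and caches on, so a
	// CPU executing this with the MMU off may read a stale,
	// pre-relocation value unless the literal has been cleaned to
	// the PoC.
	ldr	x0, =some_va_symbol

	// PC-relative form (what adr_l expands to): the address is
	// computed from the current PC, so nothing is loaded from memory
	// and no relocation fixup is involved, making it safe with the
	// MMU off. It yields the address of the code actually running,
	// which is why the patch keeps the final literal-based branch to
	// _cpu_resume in .idmap.text and only takes it after __enable_mmu.
	adrp	x0, some_va_symbol
	add	x0, x0, :lo12:some_va_symbol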