From: Linus Walleij
To: Russell King
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Nishanth Menon
Subject: [PATCH] ARM: Fix Keystone 2 kernel mapping regression
Date: Wed, 4 Aug 2021 15:57:47 +0200
Message-Id: <20210804135747.737233-1-linus.walleij@linaro.org>

This fixes a Keystone 2 regression discovered as a side effect of
defining and passing the physical start/end sections of the kernel
to the MMU remapping code.

As Keystone 2 applies an offset to all physical addresses, including
those identified and patched by phys2virt, we fail to account for this
offset in the kernel_sec_start and kernel_sec_end variables. Further,
these offsets can extend into the 64bit range on LPAE systems such as
Keystone 2.

Fix it like this:

- Extend kernel_sec_start and kernel_sec_end to be 64bit.
- Add the offset to kernel_sec_start and kernel_sec_end as well.

As passing kernel_sec_start and kernel_sec_end as 64bit invariably
incurs endianness issues, I have attempted to dry-code around these.
Please review.

Tested on the Vexpress QEMU model both with and without LPAE enabled.
Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon
Suggested-by: Russell King
Signed-off-by: Linus Walleij
---
Nishanth: Please test!
Other smart folks: please have a look at my endianness
compensation assembly.
---
 arch/arm/include/asm/memory.h |  7 ++++---
 arch/arm/kernel/head.S        | 19 ++++++++++++++++---
 arch/arm/mm/mmu.c             |  9 ++++++++-
 arch/arm/mm/pv-fixup-asm.S    |  2 +-
 4 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index cfc9dfd70aad..f673e13e0f94 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -160,10 +160,11 @@ extern unsigned long vectors_base;
 
 /*
  * Physical start and end address of the kernel sections. These addresses are
- * 2MB-aligned to match the section mappings placed over the kernel.
+ * 2MB-aligned to match the section mappings placed over the kernel. We use
+ * u64 so that LPAE mappings beyond the 32bit limit will work out as well.
  */
-extern u32 kernel_sec_start;
-extern u32 kernel_sec_end;
+extern u64 kernel_sec_start;
+extern u64 kernel_sec_end;
 
 /*
  * Physical vs virtual RAM address space conversion. These are
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 9eb0b4dbcc12..49f5e04cf7e0 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -49,7 +49,8 @@
 
 /*
  * This needs to be assigned at runtime when the linker symbols are
- * resolved.
+ * resolved. These are really unsigned 64bit, but in this assembly code
+ * we store each of them as two 32bit words.
  */
	.pushsection .data
	.align	2
@@ -57,7 +58,9 @@
	.globl	kernel_sec_end
 kernel_sec_start:
	.long	0
+	.long	0
 kernel_sec_end:
+	.long	0
	.long	0
	.popsection
 
@@ -250,7 +253,12 @@ __create_page_tables:
	add	r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ORDER)
	ldr	r6, =(_end - 1)
	adr_l	r5, kernel_sec_start	@ _pa(kernel_sec_start)
-	str	r8, [r5]		@ Save physical start of kernel
+#ifdef CONFIG_CPU_ENDIAN_BE8
+	add	r5, r5, #4		@ Move to the upper 32bit word
+	str	r8, [r5]		@ Save physical start of kernel (BE)
+#else
+	str	r8, [r5]		@ Save physical start of kernel (LE)
+#endif
	orr	r3, r8, r7		@ Add the MMU flags
	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ORDER)
 1:	str	r3, [r0], #1 << PMD_ORDER
@@ -259,7 +267,12 @@
	bls	1b
	eor	r3, r3, r7		@ Remove the MMU flags
	adr_l	r5, kernel_sec_end	@ _pa(kernel_sec_end)
-	str	r3, [r5]		@ Save physical end of kernel
+#ifdef CONFIG_CPU_ENDIAN_BE8
+	add	r5, r5, #4		@ Move to the upper 32bit word
+	str	r3, [r5]		@ Save physical end of kernel (BE)
+#else
+	str	r3, [r5]		@ Save physical end of kernel (LE)
+#endif
 
 #ifdef CONFIG_XIP_KERNEL
	/*
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 7583bda5ea7d..a4e006005107 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1608,6 +1608,13 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
	if (offset == 0)
		return;
 
+	/*
+	 * Offset the kernel section physical addresses so that the kernel
+	 * mapping will work out later on.
+	 */
+	kernel_sec_start += offset;
+	kernel_sec_end += offset;
+
	/*
	 * Get the address of the remap function in the 1:1 identity
	 * mapping setup by the early page table assembly code.  We
@@ -1716,7 +1723,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 {
	void *zero_page;
 
-	pr_debug("physical kernel sections: 0x%08x-0x%08x\n",
+	pr_debug("physical kernel sections: 0x%08llx-0x%08llx\n",
		 kernel_sec_start, kernel_sec_end);
 
	prepare_page_table();
diff --git a/arch/arm/mm/pv-fixup-asm.S b/arch/arm/mm/pv-fixup-asm.S
index 5c5e1952000a..f8e11f7c7880 100644
--- a/arch/arm/mm/pv-fixup-asm.S
+++ b/arch/arm/mm/pv-fixup-asm.S
@@ -29,7 +29,7 @@ ENTRY(lpae_pgtables_remap_asm)
	ldr	r6, =(_end - 1)
	add	r7, r2, #0x1000
	add	r6, r7, r6, lsr #SECTION_SHIFT - L2_ORDER
-	add	r7, r7, #PAGE_OFFSET >> (SECTION_SHIFT - L2_ORDER)
+	add	r7, r7, #KERNEL_OFFSET >> (SECTION_SHIFT - L2_ORDER)
 1:	ldrd	r4, r5, [r7]
	adds	r4, r4, r0
	adc	r5, r5, r1
-- 
2.31.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel