From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 0021CC433F5
	for ; Mon, 31 Jan 2022 11:25:03 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1358998AbiAaLZB (ORCPT );
	Mon, 31 Jan 2022 06:25:01 -0500
Received: from ams.source.kernel.org ([145.40.68.75]:35162 "EHLO
	ams.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1376714AbiAaLPz (ORCPT );
	Mon, 31 Jan 2022 06:15:55 -0500
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by ams.source.kernel.org (Postfix) with ESMTPS id 9F80EB82A61;
	Mon, 31 Jan 2022 11:15:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9C953C340E8;
	Mon, 31 Jan 2022 11:15:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1643627751;
	bh=nSDn0/dlALnDYL/nwBq7guoFwDOI7dGG6cuhXkb1QZo=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=MUvX4awDoqifdacIn/9va2bTbrNZn27QZeTk8B+R79Rrv/3OP6jvc0jZctStaW/9U
	 tPsBExYTEIi/jAqOYXPO3TERhXzeKYcvEJyHROEHZlXbTEMKKE7Is7yroXxQQVGRbQ
	 5NZ7Xc4i8fTK/RTU2JudrDi7nhoyRzIZf74WjpNQ=
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
	stable@vger.kernel.org,
	Ard Biesheuvel,
	"Russell King (Oracle)"
Subject: [PATCH 5.16 014/200] ARM: 9180/1: Thumb2: align ALT_UP() sections in modules sufficiently
Date: Mon, 31 Jan 2022 11:54:37 +0100
Message-Id: <20220131105234.040375029@linuxfoundation.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220131105233.561926043@linuxfoundation.org>
References: <20220131105233.561926043@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ard Biesheuvel

commit 9f80ccda53b9417236945bc7ece4b519037df74d upstream.

When building for Thumb2, the .alt.smp.init sections that are emitted
by the ALT_UP() patching code may not be 32-bit aligned, even though
the fixup_smp_on_up() routine expects that. This results in alignment
faults at module load time, which need to be fixed up by the fault
handler.

So let's align those sections explicitly, and prevent this from
occurring.

Cc:
Signed-off-by: Ard Biesheuvel
Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm/include/asm/assembler.h |    2 ++
 arch/arm/include/asm/processor.h |    1 +
 2 files changed, 3 insertions(+)

--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -288,6 +288,7 @@
  */
 #define ALT_UP(instr...)					\
 	.pushsection ".alt.smp.init", "a"			;\
+	.align	2						;\
 	.long	9998b - .					;\
 9997:	instr							;\
 	.if . - 9997b == 2					;\
@@ -299,6 +300,7 @@
 	.popsection
 #define ALT_UP_B(label)						\
 	.pushsection ".alt.smp.init", "a"			;\
+	.align	2						;\
 	.long	9998b - .					;\
 	W(b)	. + (label - 9998b)				;\
 	.popsection
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -96,6 +96,7 @@ unsigned long __get_wchan(struct task_st
 #define __ALT_SMP_ASM(smp, up)					\
 	"9998: " smp "\n"					\
 	"	.pushsection \".alt.smp.init\", \"a\"\n"	\
+	"	.align	2\n"					\
 	"	.long	9998b - .\n"				\
 	"	" up "\n"					\
 	"	.popsection\n"
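
As an aside, here is a minimal standalone sketch (a hypothetical demo file,
not taken from the kernel tree) of what the added ".align 2" achieves. On
ARM, ".align 2" is a power-of-two directive, i.e. 2^2 = 4-byte alignment, so
it raises the required alignment of ".alt.smp.init" to 32 bits and keeps each
".long" offset record, which fixup_smp_on_up() reads as a 32-bit word, on a
4-byte boundary even when the surrounding code assembles to 2-byte Thumb-2
instructions. The nop instructions below are stand-ins for the real SMP/UP
variants.

	.syntax	unified
	.thumb
	.text
9998:	nop				@ stand-in for the SMP instruction patched at runtime

	.pushsection ".alt.smp.init", "a"
	.align	2			@ the fix: force 2^2 = 4-byte alignment for this entry
	.long	9998b - .		@ offset read as a 32-bit word by fixup_smp_on_up()
9997:	nop				@ stand-in for the UP replacement (2 bytes in Thumb-2)
	.if . - 9997b == 2
	nop				@ pad the entry to 4 bytes, as ALT_UP() does
	.endif
	.popsection

Assembling this with a Thumb-2 capable toolchain (e.g. arm-linux-gnueabihf-as)
and inspecting the object with "readelf -S" should show ".alt.smp.init" with
an alignment of 4 once the ".align 2" is present; without it, the section can
end up with a smaller alignment, and the module loader may then place the
".long" records at a misaligned address.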