From mboxrd@z Thu Jan 1 00:00:00 1970
From: Viresh Kumar
To: stable@vger.kernel.org, Julien Thierry
Cc: Viresh Kumar, linux-arm-kernel@lists.infradead.org,
	Catalin Marinas, Marc Zyngier, Mark Rutland, Will Deacon,
	Russell King, Vincent Guittot, mark.brown@arm.com
Subject: [PATCH v4.4 V2 25/43] arm64: Move BP hardening to check_and_switch_context
Date: Fri, 12 Jul 2019 10:58:13 +0530
X-Mailer: git-send-email 2.21.0.rc0.269.g1a574e7a288b
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Marc Zyngier

commit a8e4c0a919ae310944ed2c9ace11cf3ccd8a609b upstream.

We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.

This is a bit suboptimal, and it would be more logical to only
invalidate the branch predictor when we actually switch to a
different mm.

In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick a
different mm context.

Acked-by: Will Deacon
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
Signed-off-by: Viresh Kumar
---
 arch/arm64/mm/context.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index be42bd3dca5c..de5afc27b4e6 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -183,6 +183,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
+	arm64_apply_bp_hardening();
+
 	cpu_switch_mm(mm->pgd, mm);
 }
 
@@ -193,8 +195,6 @@ asmlinkage void post_ttbr_update_workaround(void)
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
-
-	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)
-- 
2.21.0.rc0.269.g1a574e7a288b
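
For anyone reviewing the backport, here is a minimal userspace model of the
control flow this patch produces. It is only a sketch: struct mm_struct, the
active_mm variable, and the switch_mm() wrapper below are simplified stand-ins
chosen for illustration, not the kernel's actual implementation.

/* Minimal userspace sketch of the patched control flow; builds with
 * `cc sketch.c`. All names are simplified stand-ins, not kernel code. */
#include <stdio.h>

struct mm_struct { int pgd; };		/* stand-in for the kernel's mm_struct */

static struct mm_struct *active_mm;	/* models the CPU's current mm */

static void arm64_apply_bp_hardening(void)
{
	puts("branch predictor invalidated");
}

/* After this patch: hardening runs here, and callers only reach this
 * function when a genuinely different mm is being picked. */
static void check_and_switch_context(struct mm_struct *mm)
{
	/* ASID allocation elided */
	arm64_apply_bp_hardening();	/* the call moved by this patch */
	active_mm = mm;			/* models cpu_switch_mm() */
}

static void switch_mm(struct mm_struct *next)
{
	if (next == active_mm)		/* no switch, no hardening */
		return;
	check_and_switch_context(next);
}

/* Before this patch, BP hardening lived here, so with ARM64_SW_TTBR0_PAN
 * it ran on every exception return to userspace; now only the erratum
 * workaround remains. */
static void post_ttbr_update_workaround(void)
{
}

int main(void)
{
	struct mm_struct a = { 1 }, b = { 2 };

	switch_mm(&a);			/* real switch: hardening fires */
	switch_mm(&a);			/* same mm: nothing happens */
	switch_mm(&b);			/* real switch: hardening fires */
	post_ttbr_update_workaround();	/* BP untouched on exception return */
	return 0;
}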