From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Christopher M. Riedl" <cmr@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: tglx@linutronix.de, x86@kernel.org, linux-hardening@vger.kernel.org,
	keescook@chromium.org
Subject: [RESEND PATCH v4 06/11] powerpc: Introduce temporary mm
Date: Wed, 5 May 2021 23:34:47 -0500
Message-Id: <20210506043452.9674-7-cmr@linux.ibm.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20210506043452.9674-1-cmr@linux.ibm.com>
References: <20210506043452.9674-1-cmr@linux.ibm.com>
MIME-Version: 1.0
X-Mailing-List: linux-hardening@vger.kernel.org

x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requiring temporary mappings without exposing
said mappings to other CPUs. A side benefit is that other CPU TLBs do
not need to be flushed when the temporary mm is torn down.

Mappings in the temporary mm can be set in the userspace portion of the
address-space.

Interrupts must be disabled while the temporary mm is in use. HW
breakpoints, which may have been set by userspace as watchpoints on
addresses now within the temporary mm, are saved and disabled when
loading the temporary mm. The HW breakpoints are restored when
unloading the temporary mm.
All HW breakpoints are indiscriminately disabled while the temporary mm
is in use.

With the Book3s64 Hash MMU the SLB is preloaded with entries from the
current thread_info struct during switch_slb(). This could cause a
Machine Check (MCE) due to an SLB Multihit when creating arbitrary
userspace mappings in the temporary mm later. Disable SLB preload from
the thread_info struct for any temporary mm to avoid this.

Based on x86 implementation:

commit cefa929c034e ("x86/mm: Introduce temporary mm structs")

Signed-off-by: Christopher M. Riedl <cmr@linux.ibm.com>
---

v4:  * Pass the prev mm instead of NULL to switch_mm_irqs_off() when
       using/unusing the temp mm as suggested by Jann Horn to keep the
       context.active counter in-sync on mm/nohash.
     * Disable SLB preload in the temporary mm when initializing the
       temp_mm struct.
     * Include asm/debug.h header to fix build issue with
       ppc44x_defconfig.

---
 arch/powerpc/include/asm/debug.h |  1 +
 arch/powerpc/kernel/process.c    |  5 +++
 arch/powerpc/lib/code-patching.c | 67 ++++++++++++++++++++++++++++++++
 3 files changed, 73 insertions(+)

diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index 86a14736c76c3..dfd82635ea8b3 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -46,6 +46,7 @@ static inline int debugger_fault_handler(struct pt_regs *regs) { return 0; }
 #endif
 
 void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk);
+void __get_breakpoint(int nr, struct arch_hw_breakpoint *brk);
 bool ppc_breakpoint_available(void);
 #ifdef CONFIG_PPC_ADV_DEBUG_REGS
 extern void do_send_trap(struct pt_regs *regs, unsigned long address,
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 89e34aa273e21..8e94cabaea3c3 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -864,6 +864,11 @@ static inline int set_breakpoint_8xx(struct arch_hw_breakpoint *brk)
 	return 0;
 }
 
+void __get_breakpoint(int nr, struct arch_hw_breakpoint *brk)
+{
+	memcpy(brk, this_cpu_ptr(&current_brk[nr]), sizeof(*brk));
+}
+
 void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk)
 {
 	memcpy(this_cpu_ptr(&current_brk[nr]), brk, sizeof(*brk));
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 2b1b3e9043ade..cbdfba8a39360 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -17,6 +17,8 @@
 #include
 #include
 #include
+#include
+#include <asm/debug.h>
 
 static int __patch_instruction(struct ppc_inst *exec_addr, struct ppc_inst instr,
			       struct ppc_inst *patch_addr)
@@ -46,6 +48,71 @@ int raw_patch_instruction(struct ppc_inst *addr, struct ppc_inst instr)
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
+
+struct temp_mm {
+	struct mm_struct *temp;
+	struct mm_struct *prev;
+	struct arch_hw_breakpoint brk[HBP_NUM_MAX];
+};
+
+static inline void init_temp_mm(struct temp_mm *temp_mm, struct mm_struct *mm)
+{
+	/* Do not preload SLB entries from the thread_info struct */
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
+		skip_slb_preload_mm(mm);
+
+	temp_mm->temp = mm;
+	temp_mm->prev = NULL;
+	memset(&temp_mm->brk, 0, sizeof(temp_mm->brk));
+}
+
+static inline void use_temporary_mm(struct temp_mm *temp_mm)
+{
+	lockdep_assert_irqs_disabled();
+
+	temp_mm->prev = current->active_mm;
+	switch_mm_irqs_off(temp_mm->prev, temp_mm->temp, current);
+
+	WARN_ON(!mm_is_thread_local(temp_mm->temp));
+
+	if (ppc_breakpoint_available()) {
+		struct arch_hw_breakpoint null_brk = {0};
+		int i = 0;
+
+		for (; i < nr_wp_slots(); ++i) {
+			__get_breakpoint(i, &temp_mm->brk[i]);
+			if (temp_mm->brk[i].type != 0)
+				__set_breakpoint(i, &null_brk);
+		}
+	}
+}
+
+static inline void unuse_temporary_mm(struct temp_mm *temp_mm)
+{
+	lockdep_assert_irqs_disabled();
+
+	switch_mm_irqs_off(temp_mm->temp, temp_mm->prev, current);
+
+	/*
+	 * On book3s64 the active_cpus counter increments in
+	 * switch_mm_irqs_off(). With the Hash MMU this counter affects if
+	 * TLB flushes are local. We have to manually decrement that counter
+	 * here along with removing our current CPU from the mm's cpumask so
+	 * that in the future a different CPU can reuse the temporary mm and
+	 * still rely on local TLB flushes.
+	 */
+	dec_mm_active_cpus(temp_mm->temp);
+	cpumask_clear_cpu(smp_processor_id(), mm_cpumask(temp_mm->temp));
+
+	if (ppc_breakpoint_available()) {
+		int i = 0;
+
+		for (; i < nr_wp_slots(); ++i)
+			if (temp_mm->brk[i].type != 0)
+				__set_breakpoint(i, &temp_mm->brk[i]);
+	}
+}
+
 static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
 
 #if IS_BUILTIN(CONFIG_LKDTM)
-- 
2.26.1