From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Nicholas Piggin, Peter Zijlstra, Michael Ellerman, Sasha Levin,
    linux-fsdevel@vger.kernel.org
Subject: [PATCH AUTOSEL 5.8 005/132] mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
Date: Mon, 26 Oct 2020 19:49:57 -0400
Message-Id: <20201026235205.1023962-5-sashal@kernel.org>
In-Reply-To: <20201026235205.1023962-1-sashal@kernel.org>
References: <20201026235205.1023962-1-sashal@kernel.org>

From: Nicholas Piggin

[ Upstream commit d53c3dfb23c45f7d4f910c3a3ca84bf0a99c6143 ]

Reading and modifying current->mm and current->active_mm and switching
mm should be done with irqs off, to prevent races from seeing an
intermediate state.

This is similar to commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs
TLB invalidate").

At exec-time when the new mm is activated, the old one should usually
be single-threaded and no longer used, unless something else is holding
an mm_users reference (which may be possible). Absent other mm_users,
there is also a race with preemption and lazy tlb switching.
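As background for the trace below, the scheduler's lazy tlb handover at
context switch works roughly as follows. This is a simplified sketch
paraphrasing context_switch() in kernel/sched/core.c of this era;
enter_lazy_tlb(), membarrier, and most runqueue bookkeeping are omitted:

	if (!next->mm) {			/* switching to a kernel thread */
		next->active_mm = prev->active_mm;	/* borrow the mm lazily */
		if (prev->mm)			/* coming from a user thread: */
			mmgrab(prev->active_mm);	/* take a lazy tlb reference */
		else				/* kernel -> kernel: hand over */
			prev->active_mm = NULL;	/* the existing reference */
	} else {				/* switching to a user thread */
		switch_mm_irqs_off(prev->active_mm, next->mm, next);
		if (!prev->mm) {		/* coming from a kernel thread: */
			/* reference is mmdrop()ed in finish_task_switch() */
			rq->prev_mm = prev->active_mm;
			prev->active_mm = NULL;
		}
	}

The reference counting balances only if ->mm and ->active_mm are
consistent at every point where schedule() can run; exec updates both
with preemption enabled.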
Consider the kernel_execve case where the current thread is using a
lazy tlb active mm:

  call_usermodehelper()
    kernel_execve()
      old_mm = current->mm;
      active_mm = current->active_mm;

      *** preempt *** -------------------->  schedule()
                                               prev->active_mm = NULL;
                                               mmdrop(prev active_mm);
                                             ...
                      <--------------------  schedule()

      current->mm = mm;
      current->active_mm = mm;
      if (!old_mm)
              mmdrop(active_mm);

If we switch back to the kernel thread from a different mm, there is a
double free of the old active_mm, and a missing free of the new one.

Closing this race only requires interrupts to be disabled while ->mm
and ->active_mm are being switched, but the TLB problem requires also
holding interrupts off over activate_mm. Unfortunately not all archs
can do that yet, e.g., arm defers the switch if irqs are disabled and
expects finish_arch_post_lock_switch() to be called to complete the
flush; um takes a blocking lock in activate_mm().

So as a first step, disable interrupts across the mm/active_mm updates
to close the lazy tlb preempt race, and provide an arch option to
extend that to activate_mm, which allows architectures doing IPI-based
TLB shootdowns to close the second race.

This is a bit ugly, but in the interest of fixing the bug and
backporting before all architectures are converted, this is a
compromise.

Signed-off-by: Nicholas Piggin
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20200914045219.3736466-2-npiggin@gmail.com
Signed-off-by: Sasha Levin
---
 arch/Kconfig |  7 +++++++
 fs/exec.c    | 17 +++++++++++++++--
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8cc35dc556c72..2fb97d79e7693 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -411,6 +411,13 @@ config MMU_GATHER_NO_GATHER
 	bool
 	depends on MMU_GATHER_TABLE_FREE
 
+config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
+	bool
+	help
+	  Temporary select until all architectures can be converted to have
+	  irqs disabled over activate_mm. Architectures that do IPI based TLB
+	  shootdowns should enable this.
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
diff --git a/fs/exec.c b/fs/exec.c
index e6e8a9a703278..791384bb02b05 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1103,11 +1103,24 @@ static int exec_mmap(struct mm_struct *mm)
 	}
 
 	task_lock(tsk);
-	active_mm = tsk->active_mm;
 	membarrier_exec_mmap(mm);
-	tsk->mm = mm;
+
+	local_irq_disable();
+	active_mm = tsk->active_mm;
 	tsk->active_mm = mm;
+	tsk->mm = mm;
+	/*
+	 * This prevents preemption while active_mm is being loaded and
+	 * it and mm are being updated, which could cause problems for
+	 * lazy tlb mm refcounting when these are updated by context
+	 * switches. Not all architectures can handle irqs off over
+	 * activate_mm yet.
+	 */
+	if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
+		local_irq_enable();
 	activate_mm(active_mm, mm);
+	if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
+		local_irq_enable();
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
-- 
2.25.1
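An architecture whose TLB shootdowns are IPI based opts in to the new
behaviour by selecting the symbol from its Kconfig, once its
activate_mm() has been audited to be safe with irqs disabled. As an
illustration only (powerpc makes this change in a follow-up patch of
the same upstream series), the select looks like:

	config PPC
		...
		select ARCH_WANT_IRQS_OFF_ACTIVATE_MM

Architectures that cannot yet run activate_mm() with irqs off, such as
arm and um as noted above, simply leave the option unset and keep the
enable-irqs-before-activate_mm behaviour.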