From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Mathieu Desnoyers, Oleg Nesterov, Peter Zijlstra, Chris Metcalf,
    Christoph Lameter, "Eric W. Biederman", Kirill Tkhai, Linus Torvalds,
    Mike Galbraith, "Paul E. McKenney", Russell King - ARM Linux admin,
    Thomas Gleixner, Ingo Molnar, Sasha Levin
Subject: [PATCH AUTOSEL 5.2 41/63] sched/membarrier: Call sync_core only before usermode for same mm
Date: Tue, 1 Oct 2019 12:41:03 -0400
Message-Id: <20191001164125.15398-41-sashal@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191001164125.15398-1-sashal@kernel.org>
References: <20191001164125.15398-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Mathieu Desnoyers

[ Upstream commit 2840cf02fae627860156737e83326df354ee4ec6 ]

When the prev and next tasks' mm differ, switch_mm() provides the core
serializing guarantees before returning to usermode. An explicit core
serialization is therefore needed only when the scheduler keeps the same
mm for prev and next.

Suggested-by: Oleg Nesterov
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Peter Zijlstra (Intel)
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Eric W. Biederman
Cc: Kirill Tkhai
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Russell King - ARM Linux admin
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190919173705.2181-4-mathieu.desnoyers@efficios.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 include/linux/sched/mm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 4a7944078cc35..8557ec6642130 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -362,6 +362,8 @@ enum {
 
 static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
 {
+	if (current->mm != mm)
+		return;
 	if (likely(!(atomic_read(&mm->membarrier_state) &
 		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
 		return;
-- 
2.20.1
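
For readers following the change, here is a minimal sketch of how the
helper reads with this patch applied. The trailing
sync_core_before_usermode() call falls below the hunk's context, so its
presence here is an assumption based on the surrounding 5.2 source, not
something shown in the diff above.

/*
 * Sketch of membarrier_mm_sync_core_before_usermode() from
 * include/linux/sched/mm.h with this patch applied. The final
 * sync_core_before_usermode() call is outside the hunk's context
 * and is assumed from the surrounding 5.2 source.
 */
static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
	/*
	 * New early return: if the scheduler switched to a different mm,
	 * switch_mm() already core-serialized before the return to
	 * usermode, so the explicit sync_core below would be redundant.
	 */
	if (current->mm != mm)
		return;
	/* Nobody registered PRIVATE_EXPEDITED_SYNC_CORE for this mm. */
	if (likely(!(atomic_read(&mm->membarrier_state) &
		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
		return;
	sync_core_before_usermode();
}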