From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Joerg Roedel, Thomas Gleixner, Dave Hansen, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.2 36/76] mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()
Date: Fri, 2 Aug 2019 09:19:10 -0400
Message-Id: <20190802131951.11600-36-sashal@kernel.org>
In-Reply-To: <20190802131951.11600-1-sashal@kernel.org>
References: <20190802131951.11600-1-sashal@kernel.org>

From: Joerg Roedel

[ Upstream commit 3f8fd02b1bf1d7ba964485a56f2f4b53ae88c167 ]

On x86-32 with PTI enabled, parts of the kernel page-tables are not
shared between processes. This can cause mappings in the vmalloc/ioremap
area to persist in some page-tables after the region is unmapped and
released.

When the region is re-used, the processes with the old mappings do not
fault in the new mappings but still access the old ones. This causes
undefined behavior, in practice often data corruption, kernel oopses
and panics, and even spontaneous reboots.

Fix this problem by actively syncing unmaps in the vmalloc/ioremap area
to all page-tables in the system before the regions can be re-used.
References: https://bugzilla.suse.com/show_bug.cgi?id=1118689
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel
Signed-off-by: Thomas Gleixner
Reviewed-by: Dave Hansen
Link: https://lkml.kernel.org/r/20190719184652.11391-4-joro@8bytes.org
Signed-off-by: Sasha Levin
---
 mm/vmalloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0f76cca32a1ce..080d30408ce30 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1213,6 +1213,12 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	if (unlikely(valist == NULL))
 		return false;
 
+	/*
+	 * First make sure the mappings are removed from all page-tables
+	 * before they are freed.
+	 */
+	vmalloc_sync_all();
+
 	/*
 	 * TODO: to calculate a flush range without looping.
 	 * The list can be up to lazy_max_pages() elements.
@@ -3001,6 +3007,9 @@ EXPORT_SYMBOL(remap_vmalloc_range);
 /*
  * Implement a stub for vmalloc_sync_all() if the architecture chose not to
  * have one.
+ *
+ * The purpose of this function is to make sure the vmalloc area
+ * mappings are identical in all page-tables in the system.
  */
 void __weak vmalloc_sync_all(void)
 {
-- 
2.20.1
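
A note for readers less familiar with the mechanism the second hunk relies
on: __weak is the kernel's wrapper around the compiler's weak-symbol
attribute, so the no-op stub in mm/vmalloc.c is only linked in when an
architecture does not provide its own vmalloc_sync_all(). The userspace
sketch below (hypothetical file name weak_demo.c, not part of the patch)
only illustrates that linker behaviour under GCC/Clang; it does not model
any actual page-table syncing.

/* weak_demo.c - minimal illustration of the __weak override pattern.
 * Build with: gcc weak_demo.c -o weak_demo
 */
#include <stdio.h>

/* Default (weak) definition, analogous to the stub in mm/vmalloc.c. */
__attribute__((weak)) void vmalloc_sync_all(void)
{
	printf("weak stub: nothing to sync in this configuration\n");
}

/*
 * An architecture that needs syncing would supply a strong definition,
 * which the linker then picks over the weak stub, e.g.:
 *
 * void vmalloc_sync_all(void)
 * {
 *	printf("arch override: propagate vmalloc page-table entries\n");
 * }
 */

int main(void)
{
	/* Callers such as __purge_vmap_area_lazy() do not care which
	 * definition ends up being linked in. */
	vmalloc_sync_all();
	return 0;
}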