From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Joerg Roedel, Thomas Gleixner, Dave Hansen
Subject: [PATCH 4.14 16/69] x86/mm: Sync also unmappings in vmalloc_sync_all()
Date: Wed, 14 Aug 2019 19:01:14 +0200
Message-Id: <20190814165746.624744275@linuxfoundation.org>
In-Reply-To: <20190814165744.822314328@linuxfoundation.org>
References: <20190814165744.822314328@linuxfoundation.org>

From: Joerg Roedel

commit 8e998fc24de47c55b47a887f6c95ab91acd4a720 upstream.

With huge-page ioremap areas, the unmappings also need to be synced
between all page tables. Otherwise, a stale mapping can cause data
corruption when the region is unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync unmappings, and
make sure vmalloc_sync_all() keeps iterating over all page tables even
when an unmapped PMD is found. (A small userspace sketch illustrating
the combined effect of the two hunks follows the diff below.)
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel
Signed-off-by: Thomas Gleixner
Reviewed-by: Dave Hansen
Link: https://lkml.kernel.org/r/20190719184652.11391-3-joro@8bytes.org
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/mm/fault.c |   13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -260,11 +260,12 @@ static inline pmd_t *vmalloc_sync_one(pg
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) != pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
@@ -286,17 +287,13 @@ void vmalloc_sync_all(void)
 		spin_lock(&pgd_lock);
 		list_for_each_entry(page, &pgd_list, lru) {
 			spinlock_t *pgt_lock;
-			pmd_t *ret;
 
 			/* the pgt_lock only for Xen */
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 
 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			vmalloc_sync_one(page_address(page), address);
 			spin_unlock(pgt_lock);
-
-			if (!ret)
-				break;
 		}
 		spin_unlock(&pgd_lock);
 	}
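
For readers following along, here is a minimal userspace sketch of the
combined effect of the two hunks. It is not the kernel code: the names
(tables, ref, sync_one) and the one-entry-per-table model are invented
for illustration only. The point it demonstrates is that sync_one()
now copies the reference entry whenever the two tables disagree about
presence (so unmappings propagate too), and that the caller no longer
stops at the first table whose entry is not present.

#include <stdio.h>
#include <stdbool.h>

#define NTABLES 3			/* stand-ins for the pgd_list entries */

static unsigned long tables[NTABLES];	/* one PMD-like entry per page table */
static unsigned long ref;		/* same entry in the reference
					   (init_mm) table; 0 = not present */

/* After the fix: copy the reference entry whenever the two tables
   disagree about presence, so unmappings are propagated as well. */
static bool sync_one(unsigned long *entry)
{
	if ((*entry != 0) != (ref != 0))
		*entry = ref;

	return ref != 0;	/* "is anything mapped here?" */
}

int main(void)
{
	for (int i = 0; i < NTABLES; i++)
		tables[i] = 0xabc;	/* entry mapped everywhere ...  */
	ref = 0;			/* ... then unmapped in init_mm */

	/* After the fix: visit every table; the removed "if (!ret)
	   break;" would have stopped at the first table and left the
	   stale 0xabc entry in all the others. */
	for (int i = 0; i < NTABLES; i++)
		sync_one(&tables[i]);

	for (int i = 0; i < NTABLES; i++)
		printf("table %d entry: %#lx\n", i, tables[i]);	/* all 0 */

	return 0;
}

In the pre-fix model, sync_one() would return "not present" before
clearing the local entry, and the caller's early break left every
later table untouched; both together are what allowed a stale
huge-page mapping to survive an unmap and corrupt a re-used region.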