From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shijie Luo, Miaohe Lin,
 Andrew Morton, Oscar Salvador, Michal Hocko, Feilong Lin,
 Linus Torvalds
Subject: [PATCH 5.4 29/85] mm: mempolicy: fix potential pte_unmap_unlock pte error
Date: Mon, 9 Nov 2020 13:55:26 +0100
Message-Id: <20201109125023.985251501@linuxfoundation.org>
In-Reply-To: <20201109125022.614792961@linuxfoundation.org>
References: <20201109125022.614792961@linuxfoundation.org>

From: Shijie Luo

commit 3f08842098e842c51e3b97d0dcdebf810b32558e upstream.

When the flags passed to queue_pages_pte_range() have neither the
MPOL_MF_MOVE nor the MPOL_MF_MOVE_ALL bit set, the code breaks out of
the pte loop early, and passing pte - 1 (one entry below the originally
mapped pte) to pte_unmap_unlock() is not a good idea.

queue_pages_pte_range() can run in MPOL_MF_MOVE_ALL mode, which does
not migrate misplaced pages but returns with -EIO when encountering
such a page. Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind()
return -EIO when MPOL_MF_STRICT is specified"), an early break on the
first pte in the range results in pte_unmap_unlock() being called on an
underflowed pte. This can lead to lockups later on when somebody tries
to take the pte lock (resp. the page_table_lock) again.
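To make the failure mode concrete, below is a minimal user-space C
sketch of the pattern (hypothetical names, not kernel code; it models
the page table as an array and pte_unmap_unlock() as a print-out). The
buggy variant unlocks at index i - 1, which underflows to -1 when the
loop breaks on the very first entry; the fixed variant remembers the
originally mapped entry, mirroring what the mapped_pte variable does in
the patch below:

#include <stdio.h>

#define NENTRIES 8

/* Stand-in for pte_unmap_unlock(); a negative index models the
 * underflowed pte. */
static void unlock_at(long idx)
{
	printf("unlock entry %ld%s\n", idx, idx < 0 ? "  <-- underflow!" : "");
}

int main(void)
{
	/* Entry 0 models a misplaced page found on the first pte. */
	int misplaced[NENTRIES] = { 1, 0, 0, 0, 0, 0, 0, 0 };
	long i, mapped;

	/* Buggy pattern: unlock the "last visited" entry as i - 1.
	 * An early break on the first entry never increments i,
	 * so i - 1 == -1. */
	for (i = 0; i < NENTRIES; i++)
		if (misplaced[i])
			break;
	unlock_at(i - 1);

	/* Fixed pattern (what the patch does): remember which entry
	 * was originally mapped and locked, and unlock exactly that. */
	mapped = i = 0;
	for (; i < NENTRIES; i++)
		if (misplaced[i])
			break;
	unlock_at(mapped);
	return 0;
}

The fix costs one extra local variable but makes the unlock address
independent of how far the loop advanced before breaking.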
Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
Signed-off-by: Shijie Luo
Signed-off-by: Miaohe Lin
Signed-off-by: Andrew Morton
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
Cc: Miaohe Lin
Cc: Feilong Lin
Cc: Shijie Luo
Cc:
Link: https://lkml.kernel.org/r/20201019074853.50856-1-luoshijie1@huawei.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/mempolicy.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -496,7 +496,7 @@ static int queue_pages_pte_range(pmd_t *
 	unsigned long flags = qp->flags;
 	int ret;
 	bool has_unmovable = false;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
@@ -510,7 +510,7 @@ static int queue_pages_pte_range(pmd_t *
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -542,7 +542,7 @@ static int queue_pages_pte_range(pmd_t *
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 
 	if (has_unmovable)