From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Ralph Campbell, Jérôme Glisse,
Shutemov" , Mike Kravetz , Christoph Hellwig , Jason Gunthorpe , John Hubbard , Andrea Arcangeli , Andrey Ryabinin , Christoph Lameter , Dan Williams , Dave Hansen , Ira Weiny , Jan Kara , Lai Jiangshan , Logan Gunthorpe , Martin Schwidefsky , Matthew Wilcox , Mel Gorman , Michal Hocko , Pekka Enberg , Randy Dunlap , Vlastimil Babka , Andrew Morton , Linus Torvalds , Greg Kroah-Hartman Subject: [PATCH 5.2 004/135] mm/hmm: fix bad subpage pointer in try_to_unmap_one Date: Thu, 22 Aug 2019 13:06:00 -0400 Message-Id: <20190822170811.13303-5-sashal@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190822170811.13303-1-sashal@kernel.org> References: <20190822170811.13303-1-sashal@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 X-KernelTest-Patch: http://kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.10-rc1.gz X-KernelTest-Tree: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git X-KernelTest-Branch: linux-5.2.y X-KernelTest-Patches: git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git X-KernelTest-Version: 5.2.10-rc1 X-KernelTest-Deadline: 2019-08-24T17:07+00:00 X-stable: review X-Patchwork-Hint: Ignore Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Ralph Campbell commit 1de13ee59225dfc98d483f8cce7d83f97c0b31de upstream. When migrating an anonymous private page to a ZONE_DEVICE private page, the source page->mapping and page->index fields are copied to the destination ZONE_DEVICE struct page and the page_mapcount() is increased. This is so rmap_walk() can be used to unmap and migrate the page back to system memory. However, try_to_unmap_one() computes the subpage pointer from a swap pte which computes an invalid page pointer and a kernel panic results such as: BUG: unable to handle page fault for address: ffffea1fffffffc8 Currently, only single pages can be migrated to device private memory so no subpage computation is needed and it can be set to "page". [rcampbell@nvidia.com: add comment] Link: http://lkml.kernel.org/r/20190724232700.23327-4-rcampbell@nvidia.com Link: http://lkml.kernel.org/r/20190719192955.30462-4-rcampbell@nvidia.com Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration") Signed-off-by: Ralph Campbell Cc: "Jérôme Glisse" Cc: "Kirill A. Shutemov" Cc: Mike Kravetz Cc: Christoph Hellwig Cc: Jason Gunthorpe Cc: John Hubbard Cc: Andrea Arcangeli Cc: Andrey Ryabinin Cc: Christoph Lameter Cc: Dan Williams Cc: Dave Hansen Cc: Ira Weiny Cc: Jan Kara Cc: Lai Jiangshan Cc: Logan Gunthorpe Cc: Martin Schwidefsky Cc: Matthew Wilcox Cc: Mel Gorman Cc: Michal Hocko Cc: Pekka Enberg Cc: Randy Dunlap Cc: Vlastimil Babka Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- mm/rmap.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/mm/rmap.c b/mm/rmap.c index e5dfe2ae6b0d5..003377e242323 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1475,7 +1475,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, /* * No need to invalidate here it will synchronize on * against the special swap migration pte. + * + * The assignment to subpage above was computed from a + * swap PTE which results in an invalid pointer. + * Since only PAGE_SIZE pages can currently be + * migrated, just set it to page. This will need to be + * changed when hugepage migrations to device private + * memory are supported. 
[rcampbell@nvidia.com: add comment]
Link: http://lkml.kernel.org/r/20190724232700.23327-4-rcampbell@nvidia.com
Link: http://lkml.kernel.org/r/20190719192955.30462-4-rcampbell@nvidia.com
Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: "Kirill A. Shutemov"
Cc: Mike Kravetz
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Andrea Arcangeli
Cc: Andrey Ryabinin
Cc: Christoph Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: Ira Weiny
Cc: Jan Kara
Cc: Lai Jiangshan
Cc: Logan Gunthorpe
Cc: Martin Schwidefsky
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Pekka Enberg
Cc: Randy Dunlap
Cc: Vlastimil Babka
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/rmap.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d5..003377e242323 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1475,7 +1475,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			/*
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
+			 *
+			 * The assignment to subpage above was computed from a
+			 * swap PTE which results in an invalid pointer.
+			 * Since only PAGE_SIZE pages can currently be
+			 * migrated, just set it to page. This will need to be
+			 * changed when hugepage migrations to device private
+			 * memory are supported.
 			 */
+			subpage = page;
 			goto discard;
 		}
 
-- 
2.20.1