From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ralph Campbell
To: linux-mm@kvack.org
Cc: Jerome Glisse, John Hubbard, Alistair Popple, Christoph Hellwig,
 Jason Gunthorpe, Dan Williams, Matthew Wilcox, Andrew Morton,
 Ralph Campbell
Subject: [PATCH] mm: handle zone device pages in release_pages()
Date: Wed, 21 Oct 2020 12:47:33 -0700
Message-ID: <20201021194733.11530-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain
release_pages() is an optimized, inlined version of __put_page() except
that zone device struct pages that are not page_is_devmap_managed()
(i.e., the memory types MEMORY_DEVICE_GENERIC and
MEMORY_DEVICE_PCI_P2PDMA) fall through to the code that could return
the zone device page to the page allocator instead of adjusting the
pgmap reference count. Clearly these types of pages are not having
their reference counts decremented to zero via release_pages() or page
allocation problems would be seen. Just to be safe, handle the one to
zero transition in release_pages() the same way __put_page() does.

Signed-off-by: Ralph Campbell
---

I found this by code inspection while working on converting ZONE_DEVICE
struct pages to have zero-based reference counts. I don't think there
is an actual problem that this fixes; it's more to future-proof new
uses of release_pages(). This is for Andrew Morton's mm tree after the
merge window.

 mm/swap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/swap.c b/mm/swap.c
index 0eb057141a04..106f519c45ac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -907,6 +907,9 @@ void release_pages(struct page **pages, int nr)
 				put_devmap_managed_page(page);
 				continue;
 			}
+			if (put_page_testzero(page))
+				put_dev_pagemap(page->pgmap);
+			continue;
 		}
 
 		if (!put_page_testzero(page))
-- 
2.20.1