From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
Cc: Christoph Hellwig, Dan Williams, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jérôme Glisse, LKML <linux-kernel@vger.kernel.org>,
    John Hubbard, Christoph Hellwig
Subject: [PATCH 2/2] mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages
Date: Thu, 14 Nov 2019 16:11:34 -0800
Message-ID: <20191115001134.2489505-3-jhubbard@nvidia.com>
In-Reply-To: <20191115001134.2489505-1-jhubbard@nvidia.com>
References: <20191115001134.2489505-1-jhubbard@nvidia.com>

An upcoming patch changes and complicates the refcounting and
especially the "put page" aspects of it. In order to keep everything
clean, refactor the devmap page release routines:

* Rename put_devmap_managed_page() to page_is_devmap_managed(), and
  limit the functionality to "read only": return a bool, with no side
  effects.

* Add a new routine, put_devmap_managed_page(), to handle checking
  what kind of page it is, and what kind of refcount handling it
  requires.

* Rename __put_devmap_managed_page() to free_devmap_managed_page(),
  and limit the functionality to unconditionally freeing a devmap
  page.

This is originally based on a separate patch by Ira Weiny, which
applied to an early version of the put_user_page() experiments. Since
then, Jérôme Glisse suggested the refactoring described above.

Cc: Jan Kara
Cc: Jérôme Glisse
Cc: Christoph Hellwig
Cc: Dan Williams
Suggested-by: Jérôme Glisse
Signed-off-by: Ira Weiny
Signed-off-by: John Hubbard
---
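Note: for context, here is a sketch of the caller side. It closely
follows put_page() in include/linux/mm.h at the time of this series,
and is illustrative only (not part of the patch):

static inline void put_page(struct page *page)
{
	page = compound_head(page);

	/*
	 * Devmap page refcounts are 1-based rather than 0-based, so the
	 * transition of interest is from refcount 2 to 1. When
	 * put_devmap_managed_page() returns true it has taken over the
	 * put, and the normal 0-based path below must be skipped.
	 */
	if (put_devmap_managed_page(page))
		return;

	if (put_page_testzero(page))
		__put_page(page);
}

Because page_is_devmap_managed() starts with a
static_branch_unlikely() check, the extra test is a patched jump that
costs ordinary pages essentially nothing when no devmap pages are in
use.
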
 include/linux/mm.h | 27 ++++++++++++++++++++++++---
 mm/memremap.c      | 16 ++--------------
 2 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a2adf95b3f9c..96228376139c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -967,9 +967,10 @@ static inline bool is_zone_device_page(const struct page *page)
 #endif
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void __put_devmap_managed_page(struct page *page);
+void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
-static inline bool put_devmap_managed_page(struct page *page)
+
+static inline bool page_is_devmap_managed(struct page *page)
 {
 	if (!static_branch_unlikely(&devmap_managed_key))
 		return false;
@@ -978,7 +979,6 @@ static inline bool put_devmap_managed_page(struct page *page)
 	switch (page->pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
 	case MEMORY_DEVICE_FS_DAX:
-		__put_devmap_managed_page(page);
 		return true;
 	default:
 		break;
@@ -986,6 +986,27 @@ static inline bool put_devmap_managed_page(struct page *page)
 	return false;
 }
 
+static inline bool put_devmap_managed_page(struct page *page)
+{
+	bool is_devmap = page_is_devmap_managed(page);
+
+	if (is_devmap) {
+		int count = page_ref_dec_return(page);
+
+		/*
+		 * devmap page refcounts are 1-based, rather than 0-based: if
+		 * refcount is 1, then the page is free and the refcount is
+		 * stable because nobody holds a reference on the page.
+		 */
+		if (count == 1)
+			free_devmap_managed_page(page);
+		else if (!count)
+			__put_page(page);
+	}
+
+	return is_devmap;
+}
+
 #else /* CONFIG_DEV_PAGEMAP_OPS */
 static inline bool put_devmap_managed_page(struct page *page)
 {
diff --git a/mm/memremap.c b/mm/memremap.c
index e899fa876a62..2ba773859031 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -411,20 +411,8 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void __put_devmap_managed_page(struct page *page)
+void free_devmap_managed_page(struct page *page)
 {
-	int count = page_ref_dec_return(page);
-
-	/* still busy */
-	if (count > 1)
-		return;
-
-	/* only triggered by the dev_pagemap shutdown path */
-	if (count == 0) {
-		__put_page(page);
-		return;
-	}
-
 	/* notify page idle for dax */
 	if (!is_device_private_page(page)) {
 		wake_up_var(&page->_refcount);
@@ -461,5 +449,5 @@ void __put_devmap_managed_page(struct page *page)
 	page->mapping = NULL;
 	page->pgmap->ops->page_free(page);
 }
-EXPORT_SYMBOL(__put_devmap_managed_page);
+EXPORT_SYMBOL(free_devmap_managed_page);
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
-- 
2.24.0