From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Thomas Gleixner, "Peter Zijlstra (Intel)", Mike Rapoport, Oscar Salvador, Michal Hocko, Wei Yang, linux-api@vger.kernel.org
Subject: [PATCH v2] mm/page_alloc: count CMA pages per zone and print them in /proc/zoneinfo
Date: Thu, 28 Jan 2021 17:45:33 +0100
Message-Id: <20210128164533.18566-1-david@redhat.com>
In-Reply-To: <20210127101813.6370-3-david@redhat.com>
References:
<20210127101813.6370-3-david@redhat.com>
MIME-Version: 1.0

Let's count the number of CMA pages per zone and print them in
/proc/zoneinfo.

Having access to the total number of CMA pages per zone is helpful for
debugging purposes to know where exactly the CMA pages ended up, and to
figure out how many pages of a zone might behave differently, even after
some of these pages might already have been allocated. As one example, CMA
pages that are part of a kernel zone cannot be used for ordinary kernel
allocations but instead behave more like ZONE_MOVABLE.

For now, we are only able to get the global nr+free cma pages from
/proc/meminfo and the free cma pages per zone from /proc/zoneinfo.

Example after this patch when booting a 6 GiB QEMU VM with
"hugetlb_cma=2G":
  # cat /proc/zoneinfo | grep cma
          cma      0
        nr_free_cma  0
          cma      0
        nr_free_cma  0
          cma      524288
        nr_free_cma  493016
          cma      0
          cma      0

  # cat /proc/meminfo | grep Cma
  CmaTotal:        2097152 kB
  CmaFree:         1972064 kB

Note: We track/print only with CONFIG_CMA; "nr_free_cma" in /proc/zoneinfo
is currently also printed without CONFIG_CMA.
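As an illustration only (not part of the patch): once the per-zone "cma"
counter is exported, a userspace consumer can sum it across zones and
cross-check against CmaTotal in /proc/meminfo. The sketch below parses
sample /proc/zoneinfo text mirroring the example above; the sample string,
function name, and the 4 KiB page-size assumption are all hypothetical.

```python
# Hypothetical sketch: sum per-zone "cma" / "nr_free_cma" counters from
# /proc/zoneinfo-style text (sample mirrors the example output above).

sample = """\
Node 0, zone      DMA
        cma      0
      nr_free_cma  0
Node 0, zone    DMA32
        cma      0
      nr_free_cma  0
Node 0, zone   Normal
        cma      524288
      nr_free_cma  493016
"""

def sum_counter(text: str, name: str) -> int:
    """Sum a per-zone counter; matching lines look like '  <name> <value>'."""
    total = 0
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == name:
            total += int(fields[1])
    return total

total_cma = sum_counter(sample, "cma")    # pages
free_cma = sum_counter(sample, "nr_free_cma")
# Assuming 4 KiB pages: 524288 pages * 4 = 2097152 kB, matching CmaTotal.
print(total_cma, free_cma)
```

On a real system, the same logic would read /proc/zoneinfo directly; with
CONFIG_CMA disabled the "cma" lines are simply absent and the sums stay 0.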
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: "Peter Zijlstra (Intel)"
Cc: Mike Rapoport
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Wei Yang
Cc: linux-api@vger.kernel.org
Signed-off-by: David Hildenbrand
---

v1 -> v2:
- Print/track only with CONFIG_CMA
- Extend patch description

---
 include/linux/mmzone.h | 6 ++++++
 mm/page_alloc.c        | 1 +
 mm/vmstat.c            | 5 +++++
 3 files changed, 12 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ae588b2f87ef..27d22fb22e05 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -503,6 +503,9 @@ struct zone {
	 * bootmem allocator):
	 * managed_pages = present_pages - reserved_pages;
	 *
+	 * cma pages is present pages that are assigned for CMA use
+	 * (MIGRATE_CMA).
+	 *
	 * So present_pages may be used by memory hotplug or memory power
	 * management logic to figure out unmanaged pages by checking
	 * (present_pages - managed_pages). And managed_pages should be used
@@ -527,6 +530,9 @@ struct zone {
	atomic_long_t managed_pages;
	unsigned long spanned_pages;
	unsigned long present_pages;
+#ifdef CONFIG_CMA
+	unsigned long cma_pages;
+#endif
 
	const char *name;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..9a82375bbcb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2168,6 +2168,7 @@ void __init init_cma_reserved_pageblock(struct page *page)
	}
 
	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
 #endif
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7758486097f9..957680db41fa 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1650,6 +1650,11 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
		   zone->spanned_pages,
		   zone->present_pages,
		   zone_managed_pages(zone));
+#ifdef CONFIG_CMA
+	seq_printf(m,
+		   "\n        cma      %lu",
+		   zone->cma_pages);
+#endif
 
	seq_printf(m,
		   "\n        protection: (%ld",
-- 
2.29.2