Linux-kselftest Archive on lore.kernel.org
From: John Hubbard <jhubbard@nvidia.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
	"Al Viro" <viro@zeniv.linux.org.uk>,
	"Christoph Hellwig" <hch@infradead.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Dave Chinner" <david@fromorbit.com>,
	"Ira Weiny" <ira.weiny@intel.com>, "Jan Kara" <jack@suse.cz>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Michal Hocko" <mhocko@suse.com>,
	"Mike Kravetz" <mike.kravetz@oracle.com>,
	"Shuah Khan" <shuah@kernel.org>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Matthew Wilcox" <willy@infradead.org>,
	linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4 10/12] mm/gup: /proc/vmstat: pin_user_pages (FOLL_PIN) reporting
Date: Wed, 5 Feb 2020 15:58:53 -0800
Message-ID: <2b0656a2-3402-f376-7440-481124485bde@nvidia.com> (raw)
In-Reply-To: <20200205114325.4e2f5aghsusihpap@box>

On 2/5/20 3:43 AM, Kirill A. Shutemov wrote:
> On Tue, Feb 04, 2020 at 03:41:15PM -0800, John Hubbard wrote:
>> Now that pages are "DMA-pinned" via pin_user_page*(), and unpinned via
>> unpin_user_pages*(), we need some visibility into whether all of this is
>> working correctly.
>>
>> Add two new fields to /proc/vmstat:
>>
>>     nr_foll_pin_acquired
>>     nr_foll_pin_released
>>
>> These are documented in Documentation/core-api/pin_user_pages.rst.
>> They represent the number of pages (since boot time) that have been
>> pinned ("nr_foll_pin_acquired") and unpinned ("nr_foll_pin_released"),
>> via pin_user_pages*() and unpin_user_pages*().
>>
>> In the absence of long-running DMA or RDMA operations that hold pages
>> pinned, the above two fields will normally be equal to each other.
>>
>> Also: update Documentation/core-api/pin_user_pages.rst, to remove an
>> earlier (now confirmed untrue) claim about a performance problem with
>> /proc/vmstat.
>>
>> Also: updated Documentation/core-api/pin_user_pages.rst to rename the
>> new /proc/vmstat entries, to the names listed here.
>>
>> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> 
> Please clarify the semantics for huge pages. A user may want to know
> whether we count a huge page as one pin-acquired, or by the number of pages.


OK, I've added this for the next version:


diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 5776ad1ed5e4..2e939ff10b86 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -211,6 +211,33 @@ since the system was booted, via two new /proc/vmstat entries: ::
     /proc/vmstat/nr_foll_pin_acquired
     /proc/vmstat/nr_foll_pin_released
 
+Under normal conditions, these two values will be equal unless there are any
+long-term [R]DMA pins in place, or during pin/unpin transitions.
+
+* nr_foll_pin_acquired: This is the number of logical pins that have been
+  acquired since the system was powered on. For huge pages, the head page is
+  pinned once for each page (head page and each tail page) within the huge page.
+  This follows the same sort of behavior that get_user_pages() uses for huge
+  pages: the head page is refcounted once for each tail or head page in the huge
+  page, when get_user_pages() is applied to a huge page.
+
+* nr_foll_pin_released: The number of logical pins that have been released since
+  the system was powered on. Note that pages are released (unpinned) on a
+  PAGE_SIZE granularity, even if the original pin was applied to a huge page.
+  Because of the pin count behavior described above in "nr_foll_pin_acquired",
+  the accounting balances out, so that after doing this::
+
+    pin_user_pages(huge_page);
+    for (each page in huge_page)
+        unpin_user_page(page);
+
+...the following is expected::
+
+    nr_foll_pin_released == nr_foll_pin_acquired
+
+(...unless it was already out of balance due to a long-term RDMA pin being in
+place.)
+
 Other diagnostics
 =================
 

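To double-check that the huge page rules above really do balance out, here is
a small userspace model of the accounting (plain Python, purely illustrative;
the constant and helper names below are made up for this sketch, not kernel
symbols):

```python
# Illustrative model (NOT kernel code) of the vmstat accounting described
# above: pinning a huge page bumps nr_foll_pin_acquired once per constituent
# small page, and each unpin_user_page() call bumps nr_foll_pin_released by
# exactly one, so the two counters end up equal.

HPAGE_NR_PAGES = 512  # subpages in a 2 MB huge page, assuming 4 KB PAGE_SIZE

nr_foll_pin_acquired = 0
nr_foll_pin_released = 0

def pin_huge_page():
    """Model pin_user_pages() on a huge page: one logical pin per subpage."""
    global nr_foll_pin_acquired
    nr_foll_pin_acquired += HPAGE_NR_PAGES

def unpin_page():
    """Model unpin_user_page(): releases exactly one PAGE_SIZE-granular pin."""
    global nr_foll_pin_released
    nr_foll_pin_released += 1

# Mirror the pin/unpin pattern from the documentation snippet above:
pin_huge_page()
for _ in range(HPAGE_NR_PAGES):
    unpin_page()

assert nr_foll_pin_acquired == nr_foll_pin_released == 512
```

On a real system running this kernel, the live counters can be read with
`grep foll_pin /proc/vmstat`; the two values should match whenever no
long-term pins are outstanding.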

thanks,
-- 
John Hubbard
NVIDIA

> 
> Otherwise looks good (given Jan's concern is addressed).
> 

Thread overview: 21+ messages
2020-02-04 23:41 [PATCH v4 00/12] mm/gup: track FOLL_PIN pages John Hubbard
2020-02-04 23:41 ` [PATCH v4 01/12] mm: dump_page(): better diagnostics for compound pages John Hubbard
2020-02-04 23:41 ` [PATCH v4 02/12] mm/gup: split get_user_pages_remote() into two routines John Hubbard
2020-02-04 23:41 ` [PATCH v4 03/12] mm/gup: pass a flags arg to __gup_device_* functions John Hubbard
2020-02-04 23:41 ` [PATCH v4 04/12] mm: introduce page_ref_sub_return() John Hubbard
2020-02-05  9:23   ` Jan Kara
2020-02-05 11:23   ` Kirill A. Shutemov
2020-02-04 23:41 ` [PATCH v4 05/12] mm/gup: pass gup flags to two more routines John Hubbard
2020-02-04 23:41 ` [PATCH v4 06/12] mm/gup: require FOLL_GET for get_user_pages_fast() John Hubbard
2020-02-04 23:41 ` [PATCH v4 07/12] mm/gup: track FOLL_PIN pages John Hubbard
2020-02-05 11:35   ` Kirill A. Shutemov
2020-02-04 23:41 ` [PATCH v4 08/12] mm/gup: page->hpage_pinned_refcount: exact pin counts for huge pages John Hubbard
2020-02-04 23:41 ` [PATCH v4 09/12] mm: dump_page(): better diagnostics for huge pinned pages John Hubbard
2020-02-04 23:41 ` [PATCH v4 10/12] mm/gup: /proc/vmstat: pin_user_pages (FOLL_PIN) reporting John Hubbard
2020-02-05  9:37   ` Jan Kara
2020-02-05 23:13     ` John Hubbard
2020-02-05 11:43   ` Kirill A. Shutemov
2020-02-05 23:58     ` John Hubbard [this message]
2020-02-04 23:41 ` [PATCH v4 11/12] mm/gup_benchmark: support pin_user_pages() and related calls John Hubbard
2020-02-05 11:46   ` Kirill A. Shutemov
2020-02-04 23:41 ` [PATCH v4 12/12] selftests/vm: run_vmtests: invoke gup_benchmark with basic FOLL_PIN coverage John Hubbard
