From: Yin Fengwei <fengwei.yin@intel.com>
To: willy@infradead.org, david@redhat.com, linux-mm@kvack.org
Cc: dave.hansen@intel.com, tim.c.chen@intel.com,
	ying.huang@intel.com, fengwei.yin@intel.com
Subject: [RFC PATCH v4 4/4] filemap: batched update of mm counter, rmap when mapping file folio
Date: Mon,  6 Feb 2023 22:06:39 +0800	[thread overview]
Message-ID: <20230206140639.538867-5-fengwei.yin@intel.com> (raw)
In-Reply-To: <20230206140639.538867-1-fengwei.yin@intel.com>

Use do_set_pte_range() in filemap_map_folio_range(), which batches the
mm counter and rmap updates for a large folio.
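
For reference, do_set_pte_range() is introduced in patch 3/4 and
folio_add_file_rmap_range() in patch 2/4 of this series. The sketch
below only illustrates the batching idea, reconstructed from the call
sites in this patch; the function body, the argument list of the rmap
helper, and the omission of do_set_pte()'s write/dirty/prefault
handling are assumptions, not the code from those patches:

/*
 * Hypothetical sketch, not the real implementation from patch 3/4:
 * one rmap update and one mm counter update cover all 'nr' pages,
 * followed by a plain loop installing the PTEs.
 */
void do_set_pte_range(struct vm_fault *vmf, struct folio *folio,
		unsigned long addr, pte_t *pte, unsigned long start,
		unsigned int nr)
{
	struct vm_area_struct *vma = vmf->vma;
	struct page *page = folio_page(folio, start);
	unsigned int i;

	/* the argument list of the patch 2/4 helper is a guess here */
	folio_add_file_rmap_range(folio, start, nr, vma, false);
	add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);

	for (i = 0; i < nr; i++, page++, pte++, addr += PAGE_SIZE) {
		pte_t entry = mk_pte(page, vma->vm_page_prot);

		/* write/dirty/old-prefault handling of do_set_pte() omitted */
		set_pte_at(vma->vm_mm, addr, pte, entry);
		update_mmu_cache(vma, addr, pte);
	}
}

The existing per-page path calls do_set_pte() and page_add_file_rmap()
once per page; that per-page rmap accounting is where the
__mod_lruvec_page_state() overhead in the first profile below comes from.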

With a will-it-scale.page_fault3-like app (the file write fault
testing changed to read fault testing; an attempt to upstream it to
will-it-scale is at [1]), this change gives a 15% performance gain on
a 48C/96T Cascade Lake test box with 96 processes running against xfs.
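
For readers unfamiliar with the benchmark, the hot loop of such a
read-fault testcase is roughly the following. This is a hypothetical
sketch with made-up sizes and file path, not the actual will-it-scale
code from [1]: each worker repeatedly read-faults every page of a
shared file mapping, so the fault-around path touched by this patch
runs constantly.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN	(128UL << 20)	/* assumed mapping size */
#define PAGE_SZ	4096UL		/* assumed page size */

int main(void)
{
	char path[] = "/tmp/readfault-XXXXXX";	/* hypothetical temp file */
	int fd = mkstemp(path);
	volatile unsigned char sink = 0;

	if (fd < 0 || ftruncate(fd, MAP_LEN) < 0) {
		perror("setup");
		return 1;
	}

	/* the real harness loops forever and reports iterations/second */
	for (int iter = 0; iter < 100; iter++) {
		unsigned char *p = mmap(NULL, MAP_LEN, PROT_READ,
					MAP_SHARED, fd, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* read fault on every page; no write faults are taken */
		for (unsigned long off = 0; off < MAP_LEN; off += PAGE_SZ)
			sink += p[off];
		munmap(p, MAP_LEN);	/* drop the PTEs and fault again */
	}

	unlink(path);
	close(fd);
	return 0;
}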

Perf data collected before and after the change (the first profile
shows the per-page page_add_file_rmap() path, the second the new
page_add_file_rmap_range() path):
  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |          |
                     |           --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state

  9.93%--page_add_file_rmap_range
         |
          --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |          |
                    |           --1.57%--cgroup_rstat_updated
                    |
                     --0.61%--__mod_lruvec_state
                               |
                                --0.54%--__mod_node_page_state

The time spent in __mod_lruvec_page_state() is reduced by about 9
percentage points (11.60% -> 2.67% in the profiles above).

[1]: https://github.com/antonblanchard/will-it-scale/pull/37
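
The new loop in filemap_map_folio_range() accumulates consecutive
mappable pages and flushes them in one go when a page has to be skipped
(HWPoison or an already-populated PTE) or when the range ends. A
minimal standalone illustration of that accumulate-and-flush pattern
(plain userspace C with made-up data, not the kernel code in the diff
below):

#include <stdio.h>

/* stands in for the do_set_pte_range() + folio_ref_add() pair */
static void flush_batch(int start, int n)
{
	printf("batched update: entries %d..%d (%d entries)\n",
	       start, start + n - 1, n);
}

int main(void)
{
	/* 1 = mappable, 0 = must be skipped (HWPoison or !pte_none) */
	int mappable[] = { 1, 1, 1, 0, 1, 1, 0, 1 };
	int nr = sizeof(mappable) / sizeof(mappable[0]);
	int start = 0, batched = 0;

	for (int i = 0; i < nr; i++) {
		if (mappable[i]) {
			batched++;
			continue;
		}
		if (batched)
			flush_batch(start, batched);
		/* jump over the skipped entry and restart the batch */
		start = i + 1;
		batched = 0;
	}
	if (batched)
		flush_batch(start, batched);

	return 0;
}

In the common case of a fully mappable large folio this means exactly
one rmap/counter/refcount update per folio instead of one per page.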

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/filemap.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6f110b9e5d27..4452361e8858 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3354,11 +3354,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int mapped = 0;
+	pte_t *pte = vmf->pte;
 
 	do {
 		if (PageHWPoison(page))
-			continue;
+			goto map;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3368,20 +3369,34 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
+		if (!pte_none(pte[mapped]))
+			goto map;
 
 		if (vmf->address == addr)
 			ret = VM_FAULT_NOPAGE;
 
-		ref_count++;
-		do_set_pte(vmf, page, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		mapped++;
+		continue;
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+map:
+		if (mapped) {
+			do_set_pte_range(vmf, folio, addr, pte, start, mapped);
+			folio_ref_add(folio, mapped);
+		}
+
+		/* advance 1 to jump over the HWPoison or !pte_none entry */
+		mapped++;
+		start += mapped;
+		pte += mapped;
+		addr += mapped * PAGE_SIZE;
+		mapped = 0;
+	} while (page++, --nr_pages > 0);
+
+	if (mapped) {
+		do_set_pte_range(vmf, folio, addr, pte, start, mapped);
+		folio_ref_add(folio, mapped);
+	}
 
-	folio_ref_add(folio, ref_count);
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
-- 
2.30.2




Thread overview: 17+ messages
2023-02-06 14:06 [RFC PATCH v4 0/4] folio based filemap_map_pages() Yin Fengwei
2023-02-06 14:06 ` [RFC PATCH v4 1/4] filemap: add function filemap_map_folio_range() Yin Fengwei
2023-02-06 14:06 ` [RFC PATCH v4 2/4] rmap: add folio_add_file_rmap_range() Yin Fengwei
2023-02-06 14:06 ` [RFC PATCH v4 3/4] mm: add do_set_pte_range() Yin Fengwei
2023-02-06 14:44   ` Matthew Wilcox
2023-02-06 14:58     ` Yin, Fengwei
2023-02-06 15:13       ` David Hildenbrand
2023-02-06 16:33         ` Matthew Wilcox
2023-02-06 16:35           ` David Hildenbrand
2023-02-06 16:43             ` Matthew Wilcox
2023-02-06 16:49               ` David Hildenbrand
2023-02-06 17:10                 ` Matthew Wilcox
2023-02-06 17:35                   ` David Hildenbrand
2023-02-07  6:05                     ` Yin, Fengwei
2023-02-06 14:06 ` Yin Fengwei [this message]
2023-02-06 14:34   ` [RFC PATCH v4 4/4] filemap: batched update of mm counter, rmap when mapping file folio Matthew Wilcox
2023-02-06 15:03     ` Yin, Fengwei
