* [PATCH] mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
From: Huang Ying @ 2023-04-10  7:52 UTC
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, kernel test robot,
	Nadav Amit, Mel Gorman, Hugh Dickins, Matthew Wilcox,
	David Hildenbrand

0Day/LKP reported a performance regression for commit
7e12beb8ca2a ("migrate_pages: batch flushing TLB").  That commit
batches the TLB flushing during page migration, so in
try_to_migrate_one(), ptep_clear_flush() is replaced with
set_tlb_ubc_flush_pending().  Further investigation found that
ptep_clear_flush() only flushes the TLB when the PTE is accessible: an
inaccessible PTE cannot be cached in the TLB, so no flush is needed
for it.  The batched TLB flushing can be optimized in the same way to
recover the lost performance.
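
For reference, the "is a flush needed" decision is encoded by
pte_accessible().  A simplified sketch of the x86 version (paraphrased
from arch/x86/include/asm/pgtable.h; the exact code differs across
kernel versions):

	static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
	{
		/* A present PTE may be cached in the TLB. */
		if (pte_flags(a) & _PAGE_PRESENT)
			return true;

		/*
		 * A PROT_NONE PTE (e.g. from NUMA balancing) may still
		 * be cached while a deferred TLB flush is pending.
		 */
		if ((pte_flags(a) & _PAGE_PROTNONE) &&
		    mm_tlb_flush_pending(mm))
			return true;

		/* Otherwise the PTE cannot be in the TLB. */
		return false;
	}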

So in this patch, pte_accessible() is checked before calling
set_tlb_ubc_flush_pending() in try_to_unmap_one() and
try_to_migrate_one().  Tests show that with the patch the benchmark
score of the anon-cow-rand-mt test case of the vm-scalability test
suite improves by up to 2.1% on an Intel server, and the number of
TLB flushing IPIs drops by up to 44.3%.

Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@intel.com
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@intel.com/
Fixes: 7e12beb8ca2a ("migrate_pages: batch flushing TLB")
Reported-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 8632e02661ac..3c7c43642d7c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1582,7 +1582,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+				if (pte_accessible(mm, pteval))
+					set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
@@ -1963,7 +1964,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+				if (pte_accessible(mm, pteval))
+					set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
-- 
2.39.2


