From: Barry Song <21cnbao@gmail.com>
To: Yu Zhao <yuzhao@google.com>, Will Deacon <will@kernel.org>
Cc: "Andrew Morton" <akpm@linux-foundation.org>, Linux-MM <linux-mm@kvack.org>,
	"Andi Kleen" <ak@linux.intel.com>,
	"Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Matthew Wilcox" <willy@infradead.org>, "Mel Gorman" <mgorman@suse.de>,
	"Michael Larabel" <Michael@michaellarabel.com>,
	"Michal Hocko" <mhocko@kernel.org>, "Mike Rapoport" <rppt@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>, "Tejun Heo" <tj@kernel.org>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	LAK <linux-arm-kernel@lists.infradead.org>,
	"Linux Doc Mailing List" <linux-doc@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, x86 <x86@kernel.org>,
	"Kernel Page Reclaim v2" <page-reclaim@google.com>,
	"Brian Geffon" <bgeffon@google.com>,
	"Jan Alexander Steffens" <heftig@archlinux.org>,
	"Oleksandr Natalenko" <oleksandr@natalenko.name>,
	"Steven Barrett" <steven@liquorix.net>,
	"Suleiman Souhlal" <suleiman@google.com>,
	"Daniel Byrne" <djbyrne@mtu.edu>, "Donald Carr" <d@chaos-reins.com>,
	"Holger Hoffstätte" <holger@applied-asynchrony.com>,
	"Konstantin Kharlamov" <Hi-Angel@yandex.ru>,
	"Shuang Zhai" <szhai2@cs.rochester.edu>,
	"Sofia Trinh" <sofia.trinh@edi.works>,
	"Vaibhav Jain" <vaibhav@linux.ibm.com>, huzhanyuan@oppo.com
Subject: Re: [PATCH v11 07/14] mm: multi-gen LRU: exploit locality in rmap
Date: Tue, 7 Jun 2022 19:37:10 +1200	[thread overview]
Message-ID: <CAGsJ_4w3S_8Kaw2GyB3hg7b4N_D+6yBO7D6qmgxD9Fqz3_dhAg@mail.gmail.com> (raw)
In-Reply-To: <CAGsJ_4yboZEY9OfyujPxBa_AEuGM3OAq5y_L9gvzSMUv70BxeQ@mail.gmail.com>

On Mon, Jun 6, 2022 at 9:25 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Wed, May 18, 2022 at 4:49 PM Yu Zhao <yuzhao@google.com> wrote:
> >
> > Searching the rmap for PTEs mapping each page on an LRU list (to test
> > and clear the accessed bit) can be expensive because pages from
> > different VMAs (PA space) are not cache friendly to the rmap (VA
> > space). For workloads mostly using mapped pages, the rmap has a high
> > CPU cost in the reclaim path.
> >
> > This patch exploits spatial locality to reduce the trips into the
> > rmap. When shrink_page_list() walks the rmap and finds a young PTE, a
> > new function lru_gen_look_around() scans at most BITS_PER_LONG-1
> > adjacent PTEs. On finding another young PTE, it clears the accessed
> > bit and updates the gen counter of the page mapped by this PTE to
> > (max_seq%MAX_NR_GENS)+1.
> >
> > Server benchmark results:
> >   Single workload:
> >     fio (buffered I/O): no change
> >
> >   Single workload:
> >     memcached (anon): +[5.5, 7.5]%
> >                 Ops/sec      KB/sec
> >       patch1-6: 1120643.70   43588.06
> >       patch1-7: 1193918.93   46438.15
> >
> >   Configurations:
> >     no change
> >
> > Client benchmark results:
> >   kswapd profiles:
> >     patch1-6
> >       35.99%  lzo1x_1_do_compress (real work)
> >       19.40%  page_vma_mapped_walk
> >        6.31%  _raw_spin_unlock_irq
> >        3.95%  do_raw_spin_lock
> >        2.39%  anon_vma_interval_tree_iter_first
> >        2.25%  ptep_clear_flush
> >        1.92%  __anon_vma_interval_tree_subtree_search
> >        1.70%  folio_referenced_one
> >        1.68%  __zram_bvec_write
> >        1.43%  anon_vma_interval_tree_iter_next
> >
> >     patch1-7
> >       45.90%  lzo1x_1_do_compress (real work)
> >        9.14%  page_vma_mapped_walk
> >        6.81%  _raw_spin_unlock_irq
> >        2.80%  ptep_clear_flush
> >        2.34%  __zram_bvec_write
> >        2.29%  do_raw_spin_lock
> >        1.84%  lru_gen_look_around
> >        1.78%  memmove
> >        1.74%  obj_malloc
> >        1.50%  free_unref_page_list
> >
> >   Configurations:
> >     no change
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > Acked-by: Brian Geffon <bgeffon@google.com>
> > Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
> > Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> > Acked-by: Steven Barrett <steven@liquorix.net>
> > Acked-by: Suleiman Souhlal <suleiman@google.com>
> > Tested-by: Daniel Byrne <djbyrne@mtu.edu>
> > Tested-by: Donald Carr <d@chaos-reins.com>
> > Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
> > Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
> > Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
> > Tested-by: Sofia Trinh <sofia.trinh@edi.works>
> > Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
> > ---
> >  include/linux/memcontrol.h |  31 ++++++++
> >  include/linux/mm.h         |   5 ++
> >  include/linux/mmzone.h     |   6 ++
> >  mm/internal.h              |   1 +
> >  mm/memcontrol.c            |   1 +
> >  mm/rmap.c                  |   7 ++
> >  mm/swap.c                  |   4 +-
> >  mm/vmscan.c                | 157 +++++++++++++++++++++++++++++++++++++
> >  8 files changed, 210 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 89b14729d59f..2bfdcc77648a 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -438,6 +438,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
> >   * - LRU isolation
> >   * - lock_page_memcg()
> >   * - exclusive reference
> > + * - mem_cgroup_trylock_pages()
> >   *
> >   * For a kmem folio a caller should hold an rcu read lock to protect memcg
> >   * associated with a kmem folio from being released.
> > @@ -499,6 +500,7 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
> >   * - LRU isolation
> >   * - lock_page_memcg()
> >   * - exclusive reference
> > + * - mem_cgroup_trylock_pages()
> >   *
> >   * For a kmem page a caller should hold an rcu read lock to protect memcg
> >   * associated with a kmem page from being released.
> > @@ -948,6 +950,23 @@ void unlock_page_memcg(struct page *page);
> >
> >  void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
> >
> > +/* try to stablize folio_memcg() for all the pages in a memcg */
> > +static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> > +{
> > +	rcu_read_lock();
> > +
> > +	if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
> > +		return true;
> > +
> > +	rcu_read_unlock();
> > +	return false;
> > +}
> > +
> > +static inline void mem_cgroup_unlock_pages(void)
> > +{
> > +	rcu_read_unlock();
> > +}
> > +
> >  /* idx can be of type enum memcg_stat_item or node_stat_item */
> >  static inline void mod_memcg_state(struct mem_cgroup *memcg,
> >  				   int idx, int val)
> > @@ -1386,6 +1405,18 @@ static inline void folio_memcg_unlock(struct folio *folio)
> >  {
> >  }
> >
> > +static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> > +{
> > +	/* to match folio_memcg_rcu() */
> > +	rcu_read_lock();
> > +	return true;
> > +}
> > +
> > +static inline void mem_cgroup_unlock_pages(void)
> > +{
> > +	rcu_read_unlock();
> > +}
> > +
> >  static inline void mem_cgroup_handle_over_high(void)
> >  {
> >  }
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 894c289c2c06..4e8ab4ad4473 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1523,6 +1523,11 @@ static inline unsigned long folio_pfn(struct folio *folio)
> >  	return page_to_pfn(&folio->page);
> >  }
> >
> > +static inline struct folio *pfn_folio(unsigned long pfn)
> > +{
> > +	return page_folio(pfn_to_page(pfn));
> > +}
> > +
> >  static inline atomic_t *folio_pincount_ptr(struct folio *folio)
> >  {
> >  	return &folio_page(folio, 1)->compound_pincount;
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 2d023d243e73..f0b980362186 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -374,6 +374,7 @@ enum lruvec_flags {
> >  #ifndef __GENERATING_BOUNDS_H
> >
> >  struct lruvec;
> > +struct page_vma_mapped_walk;
> >
> >  #define LRU_GEN_MASK		((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
> >  #define LRU_REFS_MASK		((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF)
> > @@ -429,6 +430,7 @@ struct lru_gen_struct {
> >  };
> >
> >  void lru_gen_init_lruvec(struct lruvec *lruvec);
> > +void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
> >
> >  #ifdef CONFIG_MEMCG
> >  void lru_gen_init_memcg(struct mem_cgroup *memcg);
> > @@ -441,6 +443,10 @@ static inline void lru_gen_init_lruvec(struct lruvec *lruvec)
> >  {
> >  }
> >
> > +static inline void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
> > +{
> > +}
> > +
> >  #ifdef CONFIG_MEMCG
> >  static inline void lru_gen_init_memcg(struct mem_cgroup *memcg)
> >  {
> > diff --git a/mm/internal.h b/mm/internal.h
> > index cf16280ce132..59d2422b647d 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -68,6 +68,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf);
> >  void folio_rotate_reclaimable(struct folio *folio);
> >  bool __folio_end_writeback(struct folio *folio);
> >  void deactivate_file_folio(struct folio *folio);
> > +void folio_activate(struct folio *folio);
> >
> >  void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
> >  		   unsigned long floor, unsigned long ceiling);
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 2ee074f80e72..98aa720ac639 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2769,6 +2769,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
> >  	 * - LRU isolation
> >  	 * - lock_page_memcg()
> >  	 * - exclusive reference
> > +	 * - mem_cgroup_trylock_pages()
> >  	 */
> >  	folio->memcg_data = (unsigned long)memcg;
> >  }
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index fedb82371efe..7cb7ef29088a 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -73,6 +73,7 @@
> >  #include <linux/page_idle.h>
> >  #include <linux/memremap.h>
> >  #include <linux/userfaultfd_k.h>
> > +#include <linux/mm_inline.h>
> >
> >  #include <asm/tlbflush.h>
> >
> > @@ -821,6 +822,12 @@ static bool folio_referenced_one(struct folio *folio,
> >  		}
> >
> >  		if (pvmw.pte) {
> > +			if (lru_gen_enabled() && pte_young(*pvmw.pte) &&
> > +			    !(vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))) {
> > +				lru_gen_look_around(&pvmw);
> > +				referenced++;
> > +			}
> > +
> >  			if (ptep_clear_flush_young_notify(vma, address,
>
> Hello, Yu.
>
> look_around() calls ptep_test_and_clear_young(pvmw->vma, addr, pte + i)
> only, without flush and notify. For the flush, arm64 has a TLB operation:
>
> static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> 					 unsigned long address, pte_t *ptep)
> {
> 	int young = ptep_test_and_clear_young(vma, address, ptep);
>
> 	if (young) {
> 		/*
> 		 * We can elide the trailing DSB here since the worst that can
> 		 * happen is that a CPU continues to use the young entry in its
> 		 * TLB and we mistakenly reclaim the associated page. The
> 		 * window for such an event is bounded by the next
> 		 * context-switch, which provides a DSB to complete the TLB
> 		 * invalidation.
> 		 */
> 		flush_tlb_page_nosync(vma, address);
> 	}
>
> 	return young;
> }
>
> Does this mean the current kernel is overly cautious? Is it
> safe to call ptep_test_and_clear_young() only?

I can't really explain why we are getting random app/Java VM crashes in
monkey tests when using ptep_test_and_clear_young() only in
lru_gen_look_around() on an ARMv8-A machine without hardware PTE young
support. Moving to ptep_clear_flush_young() in look_around() makes the
random hang disappear, according to zhanyuan (Cc'ed).

On x86, ptep_clear_flush_young() is exactly ptep_test_and_clear_young()
since commit b13b1d2d8692 ("x86/mm: In the PTE swapout page reclaim case
clear the accessed bit instead of flushing the TLB").

But on arm64 they are different, according to Will's comments in this
thread, which tried to make arm64 the same as x86:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1793881.html

"
This is blindly copied from x86 and isn't true for us: we don't invalidate
the TLB on context switch. That means our window for keeping the stale
entries around is potentially much bigger and might not be a great idea.

If we roll a TLB invalidation routine without the trailing DSB, what sort
of performance does that get you?
"

So shouldn't we conclude that ptep_clear_flush_young() is not safe enough
in the LRU path to clear the PTE young bit? Any comments from Will?

> btw, lru_gen_look_around() has already included 'address', so are we doing
> the pte check for 'address' twice here?

Thanks
Barry