From: Hugh Dickins <hughd@google.com>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com,
    Alexander Duyck <alexander.h.duyck@linux.intel.com>,
    Thomas Gleixner <tglx@linutronix.de>,
    Andrey Ryabinin <aryabinin@virtuozzo.com>
Subject: Re: [PATCH v18 21/32] mm/lru: introduce the relock_page_lruvec function
Date: Mon, 21 Sep 2020 22:40:10 -0700 (PDT)
Message-ID: <alpine.LSU.2.11.2009212229270.6434@eggly.anvils>
In-Reply-To: <1598273705-69124-22-git-send-email-alex.shi@linux.alibaba.com>

On Mon, 24 Aug 2020, Alex Shi wrote:

> From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> 
> Use this new function to replace the same code repeated in several
> places; no functional change.
> 
> When testing for relock we can avoid the need for RCU locking if we
> simply compare the page's pgdat and memcg pointers with those that
> the lruvec is holding. By doing this we can avoid the extra pointer
> walks and accesses of the memory cgroup.
> 
> In addition we can avoid the checks entirely if lruvec is currently
> NULL.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>

Again, I'll wait to see __munlock_pagevec() fixed before Acking this
one, but that's the only issue. And I've suggested that you use
lruvec_holds_page_lru_lock() in mm/vmscan.c move_pages_to_lru(), to
replace the uglier and less efficient VM_BUG_ON_PAGE there.
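As an aside on why lruvec_holds_page_lru_lock() makes the cheaper debug
check: the VM_BUG_ON_PAGE style has to recompute the page's lruvec (in
the kernel, a walk through the page's memcg) just to compare it, while
the helper only compares fields the caller already has in hand. A toy
userspace model of that difference - every type and function name below
is a hypothetical stand-in modelled on the kernel's, not kernel code,
and it covers only the memcg-disabled shape of the real helper:

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel structures (not the real ones). */
struct lruvec;
struct pglist_data { struct lruvec *node_lruvec; };
struct lruvec { struct pglist_data *pgdat; };
struct page { struct pglist_data *pgdat; };

static struct pglist_data *page_pgdat(struct page *page)
{
	return page->pgdat;
}

/* Old-style debug check: look the page's lruvec up again, then compare.
 * In the kernel, this lookup walks through the page's memcg. */
static struct lruvec *page_lruvec(struct page *page)
{
	return page_pgdat(page)->node_lruvec;
}

/* New-style check: compare a field already in hand - no lookup at all.
 * (Models only the memcg-disabled branch of the real helper.) */
static int lruvec_holds_page_lru_lock(struct page *page, struct lruvec *lruvec)
{
	return lruvec->pgdat == page_pgdat(page);
}
```

Both checks agree on the same inputs; the second just skips the lookup,
which is the efficiency point above.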
Oh, there is one other issue: 0day robot did report (2020-06-19) that
sparse doesn't understand relock_page_lruvec*(): I've never got around
to working out how to write what it needs - conditional __release plus
__acquire in some form, I imagine. I've never got into sparse
annotations before, I'll give it a try, but if anyone beats me that
will be welcome: and there are higher priorities - I do not think you
should wait for the sparse warning to be fixed before reposting.

> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: linux-kernel@vger.kernel.org
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/memcontrol.h | 52 ++++++++++++++++++++++++++++++++++++++++++++++
>  mm/mlock.c                 |  9 +-------
>  mm/swap.c                  | 33 +++++++----------------------
>  mm/vmscan.c                |  8 +------
>  4 files changed, 61 insertions(+), 41 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 7b170e9028b5..ee6ef2d8ad52 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -488,6 +488,22 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
>  
>  struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>  
> +static inline bool lruvec_holds_page_lru_lock(struct page *page,
> +					      struct lruvec *lruvec)
> +{
> +	pg_data_t *pgdat = page_pgdat(page);
> +	const struct mem_cgroup *memcg;
> +	struct mem_cgroup_per_node *mz;
> +
> +	if (mem_cgroup_disabled())
> +		return lruvec == &pgdat->__lruvec;
> +
> +	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
> +	memcg = page->mem_cgroup ? : root_mem_cgroup;
> +
> +	return lruvec->pgdat == pgdat && mz->memcg == memcg;
> +}
> +
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>  
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> @@ -1023,6 +1039,14 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
>  	return &pgdat->__lruvec;
>  }
>  
> +static inline bool lruvec_holds_page_lru_lock(struct page *page,
> +					      struct lruvec *lruvec)
> +{
> +	pg_data_t *pgdat = page_pgdat(page);
> +
> +	return lruvec == &pgdat->__lruvec;
> +}
> +
>  static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
>  {
>  	return NULL;
> @@ -1469,6 +1493,34 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
>  	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
>  }
>  
> +/* Don't lock again iff page's lruvec locked */
> +static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
> +		struct lruvec *locked_lruvec)
> +{
> +	if (locked_lruvec) {
> +		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
> +			return locked_lruvec;
> +
> +		unlock_page_lruvec_irq(locked_lruvec);
> +	}
> +
> +	return lock_page_lruvec_irq(page);
> +}
> +
> +/* Don't lock again iff page's lruvec locked */
> +static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
> +		struct lruvec *locked_lruvec, unsigned long *flags)
> +{
> +	if (locked_lruvec) {
> +		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
> +			return locked_lruvec;
> +
> +		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
> +	}
> +
> +	return lock_page_lruvec_irqsave(page, flags);
> +}
> +
>  #ifdef CONFIG_CGROUP_WRITEBACK
>  
>  struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 177d2588e863..0448409184e3 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -302,17 +302,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>  	/* Phase 1: page isolation */
>  	for (i = 0; i < nr; i++) {
>  		struct page *page = pvec->pages[i];
> -		struct lruvec *new_lruvec;
>  
>  		/* block memcg change in mem_cgroup_move_account */
>  		lock_page_memcg(page);
> -		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -		if (new_lruvec != lruvec) {
> -			if (lruvec)
> -				unlock_page_lruvec_irq(lruvec);
> -			lruvec = lock_page_lruvec_irq(page);
> -		}
> -
> +		lruvec = relock_page_lruvec_irq(page, lruvec);
>  		if (TestClearPageMlocked(page)) {
>  			/*
>  			 * We already have pin from follow_page_mask()
> diff --git a/mm/swap.c b/mm/swap.c
> index b67959b701c0..2ac78e8fab71 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -209,19 +209,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
>  
>  	for (i = 0; i < pagevec_count(pvec); i++) {
>  		struct page *page = pvec->pages[i];
> -		struct lruvec *new_lruvec;
>  
>  		/* block memcg migration during page moving between lru */
>  		if (!TestClearPageLRU(page))
>  			continue;
>  
> -		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -		if (lruvec != new_lruvec) {
> -			if (lruvec)
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -			lruvec = lock_page_lruvec_irqsave(page, &flags);
> -		}
> -
> +		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>  		(*move_fn)(page, lruvec);
>  
>  		SetPageLRU(page);
> @@ -865,17 +858,12 @@ void release_pages(struct page **pages, int nr)
>  		}
>  
>  		if (PageLRU(page)) {
> -			struct lruvec *new_lruvec;
> -
> -			new_lruvec = mem_cgroup_page_lruvec(page,
> -							page_pgdat(page));
> -			if (new_lruvec != lruvec) {
> -				if (lruvec)
> -					unlock_page_lruvec_irqrestore(lruvec,
> -									flags);
> +			struct lruvec *prev_lruvec = lruvec;
> +
> +			lruvec = relock_page_lruvec_irqsave(page, lruvec,
> +									&flags);
> +			if (prev_lruvec != lruvec)
>  				lock_batch = 0;
> -				lruvec = lock_page_lruvec_irqsave(page, &flags);
> -			}
>  
>  			VM_BUG_ON_PAGE(!PageLRU(page), page);
>  			__ClearPageLRU(page);
> @@ -982,15 +970,8 @@ void __pagevec_lru_add(struct pagevec *pvec)
>  
>  	for (i = 0; i < pagevec_count(pvec); i++) {
>  		struct page *page = pvec->pages[i];
> -		struct lruvec *new_lruvec;
> -
> -		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -		if (lruvec != new_lruvec) {
> -			if (lruvec)
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -			lruvec = lock_page_lruvec_irqsave(page, &flags);
> -		}
>  
> +		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>  		__pagevec_lru_add_fn(page, lruvec);
>  	}
>  	if (lruvec)
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 789444ae4c88..2c94790d4cb1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4280,15 +4280,9 @@ void check_move_unevictable_pages(struct pagevec *pvec)
>  
>  	for (i = 0; i < pvec->nr; i++) {
>  		struct page *page = pvec->pages[i];
> -		struct lruvec *new_lruvec;
>  
>  		pgscanned++;
> -		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -		if (lruvec != new_lruvec) {
> -			if (lruvec)
> -				unlock_page_lruvec_irq(lruvec);
> -			lruvec = lock_page_lruvec_irq(page);
> -		}
> +		lruvec = relock_page_lruvec_irq(page, lruvec);
>  
>  		if (!PageLRU(page) || !PageUnevictable(page))
>  			continue;
> -- 
> 1.8.3.1
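On the sparse point above, for anyone picking it up: the difficulty is
that relock_page_lruvec*() conditionally releases one lock and acquires
another, and sparse's per-function __acquires()/__releases() annotations
describe only an unconditional transition. The userspace sketch below
models the relock pattern itself - every type and function name is a
hypothetical stand-in, not the kernel's, and the annotation macros only
mirror the shape of the kernel's compiler_types.h ones, reducing to
no-ops when sparse isn't running:

```c
#include <assert.h>

/* Under sparse (__CHECKER__) these become context annotations, roughly
 * as in the kernel's compiler_types.h; under a normal compiler they
 * vanish, so this file builds either way. */
#ifdef __CHECKER__
# define __acquires(x)	__attribute__((context(x, 0, 1)))
# define __releases(x)	__attribute__((context(x, 1, 0)))
#else
# define __acquires(x)
# define __releases(x)
#endif

/* Toy model: one "lruvec" per node, its lru_lock reduced to a flag. */
struct lruvec { int node; int locked; };
struct page   { int node; };

static struct lruvec lruvecs[2] = { { .node = 0 }, { .node = 1 } };

static struct lruvec *page_lruvec(struct page *page)
{
	return &lruvecs[page->node];
}

/* Plain lock/unlock are easy for sparse: one annotation each. */
static void lock_lruvec(struct lruvec *l) __acquires(l->locked);
static void unlock_lruvec(struct lruvec *l) __releases(l->locked);

static void lock_lruvec(struct lruvec *l)   { assert(!l->locked); l->locked = 1; }
static void unlock_lruvec(struct lruvec *l) { assert(l->locked);  l->locked = 0; }

static int lruvec_holds_page_lru_lock(struct page *page, struct lruvec *lruvec)
{
	return lruvec == page_lruvec(page);
}

/*
 * The relock pattern: keep the held lock if it already covers the page,
 * otherwise hand over - release the old lock, acquire the right one.
 * Sparse has no single annotation for "maybe release A, then acquire
 * B", which is why relock_page_lruvec*() draws a warning.
 */
static struct lruvec *relock_lruvec(struct page *page, struct lruvec *locked)
{
	if (locked) {
		if (lruvec_holds_page_lru_lock(page, locked))
			return locked;
		unlock_lruvec(locked);
	}
	locked = page_lruvec(page);
	lock_lruvec(locked);
	return locked;
}
```

Walking a batch of pages through relock_lruvec() keeps the lock held
across same-node runs and only pays the unlock/lock cost on a node
change, which is the whole point of the kernel helper.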