From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name
Cc: Michal Hocko <mhocko@kernel.org>, Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: [PATCH v16 13/22] mm/lru: introduce TestClearPageLRU
Date: Sat, 11 Jul 2020 08:58:47 +0800
Message-ID: <1594429136-20002-14-git-send-email-alex.shi@linux.alibaba.com> (raw)
In-Reply-To: <1594429136-20002-1-git-send-email-alex.shi@linux.alibaba.com>

Combine the PageLRU check and ClearPageLRU into one function, the newly introduced TestClearPageLRU. This function will be used as the page isolation precondition, to prevent racing isolations elsewhere. Since non-PageLRU pages may then appear on an lru list, the corresponding BUG checks need to be removed.

Hugh Dickins pointed out that __page_cache_release and release_pages have no need for an atomic clear-bit, since no other user can hold the page at that moment; and that there is no need to get_page() before clearing the lru bit in isolate_lru_page, since it "(1) Must be called with an elevated refcount on the page".

As Andrew Morton mentioned, this change dirties the cacheline even for a page that isn't on the LRU.
But the cost should be acceptable, according to the report from Rong Chen <rong.a.chen@intel.com>: https://lkml.org/lkml/2020/3/4/173

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/page-flags.h |  1 +
 mm/mlock.c                 |  3 +--
 mm/swap.c                  |  6 ++----
 mm/vmscan.c                | 26 +++++++++++---------------
 4 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6be1aa559b1e..9554ed1387dc 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -326,6 +326,7 @@ static inline void page_init_poison(struct page *page, size_t size)
 PAGEFLAG(Dirty, dirty, PF_HEAD) TESTSCFLAG(Dirty, dirty, PF_HEAD)
 	__CLEARPAGEFLAG(Dirty, dirty, PF_HEAD)
 PAGEFLAG(LRU, lru, PF_HEAD) __CLEARPAGEFLAG(LRU, lru, PF_HEAD)
+	TESTCLEARFLAG(LRU, lru, PF_HEAD)
 PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 	TESTCLEARFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
diff --git a/mm/mlock.c b/mm/mlock.c
index f8736136fad7..228ba5a8e0a5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -108,13 +108,12 @@ void mlock_vma_page(struct page *page)
  */
 static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
 {
-	if (PageLRU(page)) {
+	if (TestClearPageLRU(page)) {
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 		if (getpage)
 			get_page(page);
-		ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		return true;
 	}
diff --git a/mm/swap.c b/mm/swap.c
index f645965fde0e..5092fe9c8c47 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -83,10 +83,9 @@ static void __page_cache_release(struct page *page)
 		struct lruvec *lruvec;
 		unsigned long flags;
 
+		__ClearPageLRU(page);
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
@@ -878,9 +877,8 @@ void release_pages(struct page **pages, int nr)
 				spin_lock_irqsave(&locked_pgdat->lru_lock, flags);
 			}
 
-			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
+			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c1c4259b4de5..18986fefd49b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1548,16 +1548,16 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
 	int ret = -EINVAL;
 
-	/* Only take pages on the LRU. */
-	if (!PageLRU(page))
-		return ret;
-
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
 		return ret;
 
 	ret = -EBUSY;
 
+	/* Only take pages on the LRU. */
+	if (!PageLRU(page))
+		return ret;
+
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
 	 * wants to isolate pages it will be able to operate on without
@@ -1671,8 +1671,6 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		page = lru_to_page(src);
 		prefetchw_prev_lru_page(page, src, flags);
 
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
-
 		nr_pages = compound_nr(page);
 		total_scan += nr_pages;
@@ -1769,21 +1767,19 @@ int isolate_lru_page(struct page *page)
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
 
-	if (PageLRU(page)) {
+	if (TestClearPageLRU(page)) {
 		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
+		int lru = page_lru(page);
 
-		spin_lock_irq(&pgdat->lru_lock);
+		get_page(page);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		if (PageLRU(page)) {
-			int lru = page_lru(page);
-			get_page(page);
-			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
-			ret = 0;
-		}
+		spin_lock_irq(&pgdat->lru_lock);
+		del_page_from_lru_list(page, lruvec, lru);
 		spin_unlock_irq(&pgdat->lru_lock);
+		ret = 0;
 	}
+
 	return ret;
 }
-- 
1.8.3.1