From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
    willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
    ldufour@linux.ibm.com, paulmck@kernel.org, mingo@redhat.com,
    will@kernel.org, luto@kernel.org, songliubraving@fb.com,
    peterx@redhat.com, david@redhat.com, dhowells@redhat.com,
    hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev,
    punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com,
    rientjes@google.com, chriscli@google.com, axelrasmussen@google.com,
    joelaf@google.com, minchan@google.com, rppt@kernel.org, jannh@google.com,
    shakeelb@google.com, tatashin@google.com, edumazet@google.com,
    gthelen@google.com, gurua@google.com, arjunroy@google.com,
    soheil@google.com, leewalsh@google.com, posk@google.com,
    michalechner92@googlemail.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
    Suren Baghdasaryan <surenb@google.com>
Subject: [PATCH v4 15/33] mm/khugepaged: write-lock VMA while collapsing a huge page
Date: Mon, 27 Feb 2023 09:36:14 -0800
Message-ID: <20230227173632.3292573-16-surenb@google.com> (raw)
In-Reply-To: <20230227173632.3292573-1-surenb@google.com>

Protect VMA from concurrent page fault handler while collapsing a huge
page. Page fault handler needs a stable PMD to use PTL and relies on
per-VMA lock to prevent concurrent PMD changes. pmdp_collapse_flush(),
set_huge_pmd() and collapse_and_free_pmd() can modify a PMD, which will
not be detected by a page fault handler without proper locking.
Before this patch, page tables can be walked under any one of the
mmap_lock, the mapping lock, and the anon_vma lock; so when khugepaged
unlinks and frees page tables, it must ensure that all of those either
are locked or don't exist. This patch adds a fourth lock under which
page tables can be traversed, and so khugepaged must also lock out that
one.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/khugepaged.c |  5 +++++
 mm/rmap.c       | 31 ++++++++++++++++---------------
 2 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 941d1c7ea910..c64e01f03f27 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1147,6 +1147,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 
+	vma_start_write(vma);
 	anon_vma_lock_write(vma->anon_vma);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
@@ -1614,6 +1615,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/* Lock the vma before taking i_mmap and page table locks */
+	vma_start_write(vma);
+
 	/*
 	 * We need to lock the mapping so that from here on, only GUP-fast and
 	 * hardware page walks can access the parts of the page tables that
@@ -1819,6 +1823,7 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 			result = SCAN_PTE_UFFD_WP;
 			goto unlock_next;
 		}
+		vma_start_write(vma);
 		collapse_and_free_pmd(mm, vma, addr, pmd);
 		if (!cc->is_khugepaged && is_target)
 			result = set_huge_pmd(vma, addr, pmd, hpage);
diff --git a/mm/rmap.c b/mm/rmap.c
index 8632e02661ac..cfdaa56cad3e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -25,21 +25,22 @@
 *     mapping->invalidate_lock (in filemap_fault)
 *       page->flags PG_locked (lock_page)
 *         hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share, see hugetlbfs below)
- *           mapping->i_mmap_rwsem
- *             anon_vma->rwsem
- *               mm->page_table_lock or pte_lock
- *                 swap_lock (in swap_duplicate, swap_info_get)
- *                   mmlist_lock (in mmput, drain_mmlist and others)
- *                 mapping->private_lock (in block_dirty_folio)
- *                   folio_lock_memcg move_lock (in block_dirty_folio)
- *                     i_pages lock (widely used)
- *                       lruvec->lru_lock (in folio_lruvec_lock_irq)
- *                 inode->i_lock (in set_page_dirty's __mark_inode_dirty)
- *                 bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
- *                   sb_lock (within inode_lock in fs/fs-writeback.c)
- *                   i_pages lock (widely used, in set_page_dirty,
- *                             in arch-dependent flush_dcache_mmap_lock,
- *                             within bdi.wb->list_lock in __sync_single_inode)
+ *           vma_start_write
+ *             mapping->i_mmap_rwsem
+ *               anon_vma->rwsem
+ *                 mm->page_table_lock or pte_lock
+ *                   swap_lock (in swap_duplicate, swap_info_get)
+ *                     mmlist_lock (in mmput, drain_mmlist and others)
+ *                   mapping->private_lock (in block_dirty_folio)
+ *                     folio_lock_memcg move_lock (in block_dirty_folio)
+ *                       i_pages lock (widely used)
+ *                         lruvec->lru_lock (in folio_lruvec_lock_irq)
+ *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+ *                   bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
+ *                     sb_lock (within inode_lock in fs/fs-writeback.c)
+ *                     i_pages lock (widely used, in set_page_dirty,
+ *                               in arch-dependent flush_dcache_mmap_lock,
+ *                               within bdi.wb->list_lock in __sync_single_inode)
 *
 * anon_vma->rwsem,mapping->i_mmap_rwsem (memory_failure, collect_procs_anon)
 *   ->tasklist_lock
-- 
2.39.2.722.g9855ee24e9-goog