From: Reza Arbab <arbab@linux.vnet.ibm.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
Andrea Arcangeli <aarcange@redhat.com>,
Yasuaki Ishimatsu <yasu.isimatu@gmail.com>,
Tang Chen <tangchen@cn.fujitsu.com>,
qiuxishi@huawei.com, Kani Toshimitsu <toshi.kani@hpe.com>,
slaoub@gmail.com, Joonsoo Kim <js1304@gmail.com>,
Andi Kleen <ak@linux.intel.com>,
Zhang Zhen <zhenzhang.zhang@huawei.com>,
David Rientjes <rientjes@google.com>,
Daniel Kiper <daniel.kiper@oracle.com>,
Igor Mammedov <imammedo@redhat.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
LKML <linux-kernel@vger.kernel.org>,
Chris Metcalf <cmetcalf@mellanox.com>,
Dan Williams <dan.j.williams@gmail.com>,
Heiko Carstens <heiko.carstens@de.ibm.com>,
Lai Jiangshan <laijs@cn.fujitsu.com>,
Martin Schwidefsky <schwidefsky@de.ibm.com>
Subject: Re: [PATCH 0/6] mm: make movable onlining suck less
Date: Wed, 5 Apr 2017 09:53:05 -0500
Message-ID: <20170405145304.wxzfavqxnyqtrlru@arbab-laptop>
In-Reply-To: <20170405092427.GG6035@dhcp22.suse.cz>
On Wed, Apr 05, 2017 at 11:24:27AM +0200, Michal Hocko wrote:
>On Wed 05-04-17 08:42:39, Michal Hocko wrote:
>> On Tue 04-04-17 16:43:39, Reza Arbab wrote:
>> > It's new. Without this patchset, I can repeatedly
>> > add_memory()->online_movable->offline->remove_memory() all of a node's
>> > memory.
>>
>> This is quite unexpected because the code obviously cannot handle the
>> first memory section. Could you paste /proc/zoneinfo and
>> grep . -r /sys/devices/system/memory/auto_online_blocks/memory*, after
>> onlining for both patched and unpatched kernels?
>
>Btw. how do you test this? I am really surprised you managed to
>hotremove such a low pfn range.
When I boot, I have node 0 (4GB) and node 1 (empty):
Early memory node ranges
node 0: [mem 0x0000000000000000-0x00000000ffffffff]
Initmem setup node 0 [mem 0x0000000000000000-0x00000000ffffffff]
On node 0 totalpages: 65536
DMA zone: 64 pages used for memmap
DMA zone: 0 pages reserved
DMA zone: 65536 pages, LIFO batch:1
Could not find start_pfn for node 1
Initmem setup node 1 [mem 0x0000000000000000-0x0000000000000000]
On node 1 totalpages: 0
My steps from there:
1. add_memory(1, 0x100000000, 0x100000000)
2. echo online_movable > /sys/devices/system/node/node1/memory[511..256]
3. echo offline > /sys/devices/system/node/node1/memory[256..511]
4. remove_memory(1, 0x100000000, 0x100000000)
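In script form, step 2 looks roughly like the following. This is a sketch, not what I actually ran: the block range 256-511 and the sysfs paths are assumptions taken from the layout above, and it is written as a dry run that prints the writes instead of performing them (drop the outer echo to really online the blocks).

```shell
# Online node 1's memory blocks as movable, highest block first,
# mirroring step 2 above. Dry run: prints each sysfs write rather
# than executing it. Block range and paths assumed from this mail.
online_movable_blocks() {
    local first=$1 last=$2 block
    for block in $(seq "$last" -1 "$first"); do
        echo "echo online_movable > /sys/devices/system/node/node1/memory${block}/state"
    done
}

online_movable_blocks 256 511
```

Onlining from the highest block down matters here, since the movable zone has to grow downward toward the normal zone.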
After step 2, regardless of kernel:
$ cat /proc/zoneinfo
Node 0, zone DMA
per-node stats
nr_inactive_anon 418
nr_active_anon 2710
nr_inactive_file 4895
nr_active_file 1945
nr_unevictable 0
nr_isolated_anon 0
nr_isolated_file 0
nr_pages_scanned 0
workingset_refault 0
workingset_activate 0
workingset_nodereclaim 0
nr_anon_pages 2654
nr_mapped 739
nr_file_pages 7314
nr_dirty 1
nr_writeback 0
nr_writeback_temp 0
nr_shmem 474
nr_shmem_hugepages 0
nr_shmem_pmdmapped 0
nr_anon_transparent_hugepages 0
nr_unstable 0
nr_vmscan_write 0
nr_vmscan_immediate_reclaim 0
nr_dirtied 3259
nr_written 460
pages free 53520
min 63
low 128
high 193
node_scanned 0
spanned 65536
present 65536
managed 65218
nr_free_pages 53520
nr_zone_inactive_anon 418
nr_zone_active_anon 2710
nr_zone_inactive_file 4895
nr_zone_active_file 1945
nr_zone_unevictable 0
nr_zone_write_pending 1
nr_mlock 0
nr_slab_reclaimable 438
nr_slab_unreclaimable 808
nr_page_table_pages 32
nr_kernel_stack 2080
nr_bounce 0
numa_hit 313226
numa_miss 0
numa_foreign 0
numa_interleave 3071
numa_local 313226
numa_other 0
nr_free_cma 0
protection: (0, 0, 0, 0)
pagesets
cpu: 0
count: 2
high: 6
batch: 1
vm stats threshold: 12
node_unreclaimable: 0
start_pfn: 0
node_inactive_ratio: 0
Node 1, zone Movable
per-node stats
nr_inactive_anon 0
nr_active_anon 0
nr_inactive_file 0
nr_active_file 0
nr_unevictable 0
nr_isolated_anon 0
nr_isolated_file 0
nr_pages_scanned 0
workingset_refault 0
workingset_activate 0
workingset_nodereclaim 0
nr_anon_pages 0
nr_mapped 0
nr_file_pages 0
nr_dirty 0
nr_writeback 0
nr_writeback_temp 0
nr_shmem 0
nr_shmem_hugepages 0
nr_shmem_pmdmapped 0
nr_anon_transparent_hugepages 0
nr_unstable 0
nr_vmscan_write 0
nr_vmscan_immediate_reclaim 0
nr_dirtied 0
nr_written 0
pages free 65536
min 63
low 128
high 193
node_scanned 0
spanned 65536
present 65536
managed 65536
nr_free_pages 65536
nr_zone_inactive_anon 0
nr_zone_active_anon 0
nr_zone_inactive_file 0
nr_zone_active_file 0
nr_zone_unevictable 0
nr_zone_write_pending 0
nr_mlock 0
nr_slab_reclaimable 0
nr_slab_unreclaimable 0
nr_page_table_pages 0
nr_kernel_stack 0
nr_bounce 0
numa_hit 0
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 0
numa_other 0
nr_free_cma 0
protection: (0, 0, 0, 0)
pagesets
cpu: 0
count: 0
high: 6
batch: 1
vm stats threshold: 14
node_unreclaimable: 1
start_pfn: 65536
node_inactive_ratio: 0
After step 2, on v4.11-rc5:
$ grep . /sys/devices/system/memory/memory*/valid_zones
/sys/devices/system/memory/memory[0..254]/valid_zones:DMA
/sys/devices/system/memory/memory255/valid_zones:DMA Normal Movable
/sys/devices/system/memory/memory256/valid_zones:Movable Normal
/sys/devices/system/memory/memory[257..511]/valid_zones:Movable
After step 2, on v4.11-rc5 + all the patches from this thread:
$ grep . /sys/devices/system/memory/memory*/valid_zones
/sys/devices/system/memory/memory[0..255]/valid_zones:DMA
/sys/devices/system/memory/memory[256..511]/valid_zones:Movable
On v4.11-rc5, I can do steps 1-4 ad nauseam.
On v4.11-rc5 + all the patches from this thread, I can also repeat the
steps, but starting with the second iteration, the
/sys/devices/system/node/node1/memory*
symlinks are no longer created. I can still proceed using the actual files,
/sys/devices/system/memory/memory[256..511]
instead. I think this is because step 4 does node_set_offline(1). That
is, the node is not only emptied of memory, it is offlined completely.
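The fallback just described can be sketched as a tiny helper (the helper name is purely illustrative, and block 256 and the paths are assumed from the configuration in this mail): prefer the per-node symlink when it exists, otherwise use the memory block device directly.

```shell
# Resolve a memory block's sysfs path, falling back to the block
# device itself when the per-node symlink was not recreated after a
# re-add. Hypothetical helper; paths assumed from this report.
block_path() {
    local block=$1
    local via_node="/sys/devices/system/node/node1/memory${block}"
    if [ -e "$via_node" ]; then
        echo "$via_node"
    else
        echo "/sys/devices/system/memory/memory${block}"
    fi
}

block_path 256
```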
I hope this made sense. :/
--
Reza Arbab