From: James Morse <james.morse@arm.com>
To: David Hildenbrand <david@redhat.com>
Cc: kexec@lists.infradead.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org,
	Eric Biederman <ebiederm@xmission.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Bhupesh Sharma <bhsharma@redhat.com>
Subject: Re: [PATCH 1/3] kexec: Prevent removal of memory in use by a loaded kexec image
Date: Fri, 27 Mar 2020 18:07:44 +0000	[thread overview]
Message-ID: <b0443908-e36f-9bc4-4a8a-4206cb782d4b@arm.com> (raw)
In-Reply-To: <9cb4ea0d-34c3-de42-4b3f-ee25a59c4835@redhat.com>

Hi David,

On 3/27/20 5:06 PM, David Hildenbrand wrote:
> On 27.03.20 17:56, James Morse wrote:
>> On 3/27/20 9:30 AM, David Hildenbrand wrote:
>>> On 26.03.20 19:07, James Morse wrote:
>>>> An image loaded for kexec is not stored in place, instead its segments
>>>> are scattered through memory, and are re-assembled when needed. In the
>>>> meantime, the target memory may have been removed.
>>>>
>>>> Because mm is not aware that this memory is still in use, it allows it
>>>> to be removed.
>>>>
>>>> Add a memory notifier to prevent the removal of memory regions that
>>>> overlap with a loaded kexec image segment. e.g., when triggered from the
>>>> Qemu console:
>>>> | kexec_core: memory region in use
>>>> | memory memory32: Offline failed.

>>>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>>>> index c19c0dad1ebe..ba1d91e868ca 100644
>>>> --- a/kernel/kexec_core.c
>>>> +++ b/kernel/kexec_core.c

>>> E.g., in kernel/kexec_core.c:kimage_alloc_pages()
>>>
>>> "SetPageReserved(pages + i);"
>>>
>>> Pages that are reserved cannot get offlined. How are you able to trigger
>>> that before this patch? (where is the allocation path for kexec, which
>>> will not set the pages reserved?)
>>
>> This sets page reserved on the memory it gets back from alloc_pages()
>> in kimage_alloc_pages(). This is when you load the image[0].
>>
>> The problem I see is for the target or destination memory once you
>> execute the image. Once machine_kexec() runs, it tries to write to
>> this, assuming it is still present...

> Let's recap
>
> 1. You load the image. You allocate memory for e.g., the kexec kernel.
> The pages will be marked PG_reserved, so they cannot be offlined.
>
> 2. You do the kexec. The kexec kernel will only operate on a reserved
> memory region (reserved via e.g., kernel cmdline crashkernel=128M).

I think you are merging the kexec and kdump behaviours. (Wrong
terminology? The things behind 'kexec -l Image' and 'kexec -p Image')

For kdump, yes, the new kernel is loaded into the crashkernel
reservation, and confined to it.

For regular kexec, the new kernel can be loaded anywhere in memory.
There might be a difference with how this works on arm64...

The regular kexec kernel isn't stored in its final location when it's
loaded; it's relocated there when the image is executed. The
target/destination memory may have been removed in the meantime. (an
example recipe below should clarify this)

> Is it that in 2., the reserved memory region (for the crashkernel) could
> have been offlined in the meantime?

No, for kdump: the crashkernel reservation is PG_reserved, and it's not
something mm knows how to move, so that region can't be taken offline.

(On arm64 we additionally prevent the boot-memory from being removed as
it is all described as present by UEFI. The crashkernel reservation
would always be from this type of memory)

This is about a regular kexec; any crashdump reservation is irrelevant.
This kexec kernel is temporarily stored out of line, then relocated when
executed.

A recipe so that we're at least on the same terminal!
This is on a TX2 running arm64's for-next/core using Qemu-TCG to emulate
x86. (Sorry for the bizarre config; it's because Qemu supports hotremove
on x86, but not yet on arm64).
Insert the memory:
(qemu) object_add memory-backend-ram,id=mem1,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1

| root@vm:~# free -m
|            total     used     free   shared ...
| Mem:         918       52      814        0 ...
| Swap:          0        0        0

Bring it online:
| root@vm:~# cd /sys/devices/system/memory/
| root@vm:/sys/devices/system/memory# for F in memory3*; do echo \
|   online_movable > $F/state; done
| Built 1 zonelists, mobility grouping on.  Total pages: 251049
| Policy zone: DMA32
| -bash: echo: write error: Invalid argument
| root@vm:/sys/devices/system/memory# free -m
|            total     used     free   shared ...
| Mem:        1942       53     1836        0 ...
| Swap:          0        0        0

Load kexec:
| root@vm:/sys/devices/system/memory# kexec -l /root/bzImage --reuse-cmdline

Press the Attention button to request removal:
(qemu) device_del dimm1

| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Built 1 zonelists, mobility grouping on.  Total pages: 233728
| Policy zone: DMA32

The memory is gone:
| root@vm:/sys/devices/system/memory# free -m
|            total     used     free   shared ...
| Mem:         918       89      769        0 ...
| Swap:          0        0        0

Trigger kexec:
| root@vm:/sys/devices/system/memory# kexec -e
[...]
| sd 0:0:0:0: [sda] Synchronizing SCSI cache
| kexec_core: Starting new kernel

... and Qemu restarts the platform firmware instead of proceeding with
kexec. (I assume this is a triple fault)

You can use mem-min and mem-max to control where kexec's user space will
place the memory.

If you apply this patch, the above sequence will fail at the device
remove step, as the physical addresses match the loaded kexec image:
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| Offlined Pages 32768
| kexec_core: Memory region in use
| kexec_core: Memory region in use
| memory memory39: Offline failed.
| Built 1 zonelists, mobility grouping on.  Total pages: 299212
| Policy zone: Normal
| root@vm:/sys/devices/system/memory# free -m
|            total     used     free   shared ...
| Mem:        1942       90     1793        0 ...
| Swap:          0        0        0

I can't remove the DIMM, because we failed to offline it:
(qemu) object_del mem1
object 'mem1' is in use, can not be deleted

and I can trigger kexec and boot the new kernel.

kexec user-space here comes from debian bullseye. It picked the
removable memory all by itself without any additional arguments.

(a different issue that can be ignored for now: x86 additionally fails
to reboot if I remove memory, even if it's not in use by the kexec
image. This doesn't cause qemu to reboot via firmware; I think it dies
before the console. It doesn't happen on arm64. I suspect the memory map
is snapshotted and assumed to still be correct when the image is
executed.)


Thanks,

James