From: David Hildenbrand <david@redhat.com>
To: Mike Rapoport <rppt@kernel.org>, linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Mike Rapoport <rppt@linux.ibm.com>, Will Deacon <will@kernel.org>,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v2 2/4] memblock: update initialization of reserved pages
Date: Wed, 21 Apr 2021 09:49:24 +0200	[thread overview]
Message-ID: <752fd822-6479-53f1-81fb-24b55500e963@redhat.com> (raw)
In-Reply-To: <20210421065108.1987-3-rppt@kernel.org>

On 21.04.21 08:51, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> The struct pages representing a reserved memory region are initialized
> using the reserve_bootmem_region() function. This function is called for
> each reserved region just before the memory is freed from memblock to the
> buddy page allocator.
> 
> The struct pages for MEMBLOCK_NOMAP regions are kept with the default
> values set by the memory map initialization which makes it necessary to
> have a special treatment for such pages in pfn_valid() and
> pfn_valid_within().
> 
> Split out initialization of the reserved pages to a function with a
> meaningful name and treat the MEMBLOCK_NOMAP regions the same way as the
> reserved regions and mark struct pages for the NOMAP regions as
> PageReserved.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>   include/linux/memblock.h |  4 +++-
>   mm/memblock.c            | 28 ++++++++++++++++++++++++++--
>   2 files changed, 29 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 5984fff3f175..634c1a578db8 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -30,7 +30,9 @@ extern unsigned long long max_possible_pfn;
>    * @MEMBLOCK_NONE: no special request
>    * @MEMBLOCK_HOTPLUG: hotpluggable region
>    * @MEMBLOCK_MIRROR: mirrored region
> - * @MEMBLOCK_NOMAP: don't add to kernel direct mapping
> + * @MEMBLOCK_NOMAP: don't add to kernel direct mapping and treat as
> + * reserved in the memory map; refer to memblock_mark_nomap() description
> + * for further details
>    */
>   enum memblock_flags {
>   	MEMBLOCK_NONE		= 0x0,	/* No special request */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index afaefa8fc6ab..3abf2c3fea7f 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -906,6 +906,11 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
>    * @base: the base phys addr of the region
>    * @size: the size of the region
>    *
> + * The memory regions marked with %MEMBLOCK_NOMAP will not be added to the
> + * direct mapping of the physical memory. These regions will still be
> + * covered by the memory map. The struct page representing NOMAP memory
> + * frames in the memory map will be PageReserved()
> + *
>    * Return: 0 on success, -errno on failure.
>    */
>   int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
> @@ -2002,6 +2007,26 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
>   	return end_pfn - start_pfn;
>   }
>   
> +static void __init memmap_init_reserved_pages(void)
> +{
> +	struct memblock_region *region;
> +	phys_addr_t start, end;
> +	u64 i;
> +
> +	/* initialize struct pages for the reserved regions */
> +	for_each_reserved_mem_range(i, &start, &end)
> +		reserve_bootmem_region(start, end);
> +
> +	/* and also treat struct pages for the NOMAP regions as PageReserved */
> +	for_each_mem_region(region) {
> +		if (memblock_is_nomap(region)) {
> +			start = region->base;
> +			end = start + region->size;
> +			reserve_bootmem_region(start, end);
> +		}
> +	}
> +}
> +
>   static unsigned long __init free_low_memory_core_early(void)
>   {
>   	unsigned long count = 0;
> @@ -2010,8 +2035,7 @@ static unsigned long __init free_low_memory_core_early(void)
>   
>   	memblock_clear_hotplug(0, -1);
>   
> -	for_each_reserved_mem_range(i, &start, &end)
> -		reserve_bootmem_region(start, end);
> +	memmap_init_reserved_pages();
>   
>   	/*
>   	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


Thread overview: 143+ messages

2021-04-21  6:51 [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid() Mike Rapoport
2021-04-21  6:51 ` [PATCH v2 1/4] include/linux/mmzone.h: add documentation for pfn_valid() Mike Rapoport
2021-04-21 10:49   ` Anshuman Khandual
2021-04-21  6:51 ` [PATCH v2 2/4] memblock: update initialization of reserved pages Mike Rapoport
2021-04-21  7:49   ` David Hildenbrand [this message]
2021-04-21 10:51   ` Anshuman Khandual
2021-04-21  6:51 ` [PATCH v2 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid() Mike Rapoport
2021-04-21 10:59   ` Anshuman Khandual
2021-04-21 12:19     ` Mike Rapoport
2021-04-21 13:13       ` Anshuman Khandual
2021-04-21  6:51 ` [PATCH v2 4/4] arm64: drop pfn_valid_within() and simplify pfn_valid() Mike Rapoport
2021-04-21  7:49   ` David Hildenbrand
2021-04-21 11:06   ` Anshuman Khandual
2021-04-21 12:24     ` Mike Rapoport
2021-04-21 13:15       ` Anshuman Khandual
2021-04-22  7:00 ` [PATCH v2 0/4] " Kefeng Wang
2021-04-22  7:29   ` Mike Rapoport
2021-04-22 15:28     ` Kefeng Wang
2021-04-23  8:11       ` Kefeng Wang
2021-04-25  7:19         ` arm32: panic in move_freepages (Was [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid()) Mike Rapoport
2021-04-25  7:51           ` Kefeng Wang
2021-04-26  5:20             ` Mike Rapoport
2021-04-26 15:26               ` Kefeng Wang
2021-04-27  6:23                 ` Mike Rapoport
2021-04-27 11:08                   ` Kefeng Wang
2021-04-28  5:59                     ` Mike Rapoport
2021-04-29  0:48                       ` Kefeng Wang
2021-04-29  6:57                         ` Mike Rapoport
2021-04-29 10:22                           ` Kefeng Wang
2021-04-30  9:51                             ` Mike Rapoport
2021-04-30 11:24                               ` Kefeng Wang
2021-05-03  6:26                                 ` Mike Rapoport
2021-05-03  8:07                                   ` David Hildenbrand
2021-05-03  8:44                                     ` Mike Rapoport
2021-05-06 12:47                                       ` Kefeng Wang
2021-05-07  7:17                                         ` Kefeng Wang
2021-05-07 10:30                                           ` Mike Rapoport
2021-05-07 12:34                                             ` Kefeng Wang
2021-05-09  5:59                                               ` Mike Rapoport
2021-05-10  3:10                                                 ` Kefeng Wang
2021-05-11  8:48                                                   ` Mike Rapoport
2021-05-12  3:08                                                     ` Kefeng Wang
2021-05-12  8:26                                                       ` Mike Rapoport
2021-05-13  3:44                                                         ` Kefeng Wang
2021-05-13 10:55                                                           ` Mike Rapoport
2021-05-14  2:18                                                             ` Kefeng Wang
2021-05-12  3:50             ` Matthew Wilcox
2021-04-25  6:59       ` [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid() Mike Rapoport
