From: Will Deacon <will.deacon@arm.com>
To: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	x86@kernel.org, kasan-dev@googlegroups.com,
	borntraeger@de.ibm.com, heiko.carstens@de.ibm.com,
	davem@davemloft.net, willy@infradead.org, mhocko@kernel.org,
	ard.biesheuvel@linaro.org, catalin.marinas@arm.com,
	sam@ravnborg.org
Subject: Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory
Date: Tue, 8 Aug 2017 10:07:44 +0100	[thread overview]
Message-ID: <20170808090743.GA12887@arm.com> (raw)
In-Reply-To: <1502138329-123460-12-git-send-email-pasha.tatashin@oracle.com>

On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
> To optimize the performance of struct page initialization,
> vmemmap_populate() will no longer zero memory.
> 
> We must explicitly zero the memory that is allocated by vmemmap_populate()
> for kasan, as this memory does not go through the struct page initialization
> path.
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Reviewed-by: Bob Picco <bob.picco@oracle.com>
> ---
>  arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..e78a9ecbb687 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>  		set_pgd(pgd_offset_k(start), __pgd(0));
>  }
>  
> +/*
> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
> + * zero it here explicitly.
> + */
> +static void
> +zero_vmemmap_populated_memory(void)
> +{
> +	struct memblock_region *reg;
> +	u64 start, end;
> +
> +	for_each_memblock(memory, reg) {
> +		start = __phys_to_virt(reg->base);
> +		end = __phys_to_virt(reg->base + reg->size);
> +
> +		if (start >= end)
> +			break;
> +
> +		start = (u64)kasan_mem_to_shadow((void *)start);
> +		end = (u64)kasan_mem_to_shadow((void *)end);
> +
> +		/* Round to the start and end of the mapped pages */
> +		start = round_down(start, SWAPPER_BLOCK_SIZE);
> +		end = round_up(end, SWAPPER_BLOCK_SIZE);
> +		memset((void *)start, 0, end - start);
> +	}
> +
> +	start = (u64)kasan_mem_to_shadow(_text);
> +	end = (u64)kasan_mem_to_shadow(_end);
> +
> +	/* Round to the start and end of the mapped pages */
> +	start = round_down(start, SWAPPER_BLOCK_SIZE);
> +	end = round_up(end, SWAPPER_BLOCK_SIZE);
> +	memset((void *)start, 0, end - start);
> +}

I can't help but think this would be an awful lot nicer if you made
vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
implement a version of vmemmap_populate that does the zeroing when we need
it, without having to duplicate a bunch of the code like this. I think it
would also be less error-prone, because you wouldn't have to do the
allocation and the zeroing in two separate steps.
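
Something like the following, purely as an illustrative and untested sketch (the
extra gfp_t parameter and the four-argument vmemmap_populate() call mentioned
afterwards are hypothetical, not existing interfaces), is the shape I have in
mind:

void * __meminit vmemmap_alloc_block(unsigned long size, int node,
				     gfp_t gfp_mask)
{
	/* Buddy allocator available: honour the caller's gfp flags directly. */
	if (slab_is_available()) {
		struct page *page = alloc_pages_node(node, gfp_mask,
						     get_order(size));

		return page ? page_address(page) : NULL;
	}

	/*
	 * The early boot path is memblock-backed and cannot honour gfp flags,
	 * so a zeroing variant would still need to memset() here when
	 * __GFP_ZERO was requested.
	 */
	return __earlyonly_bootmem_alloc(node, size, size,
					 __pa(MAX_DMA_ADDRESS));
}

With that in place, the kasan shadow could be populated already zeroed, e.g.
vmemmap_populate(shadow_start, shadow_end, node, GFP_KERNEL | __GFP_ZERO),
and the separate memset() walk over memblock goes away.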

Will

Thread overview: 282+ messages

2017-08-07 20:38 [v6 00/15] complete deferred page initialization Pavel Tatashin
2017-08-07 20:38 ` [v6 01/15] x86/mm: reserve only exiting low pages Pavel Tatashin
2017-08-11  8:07   ` Michal Hocko
2017-08-11 15:24     ` Pasha Tatashin
2017-08-14 11:40       ` Michal Hocko
2017-08-14 13:30         ` Pasha Tatashin
2017-08-14 13:55   ` Michal Hocko
2017-08-17 15:37     ` Pasha Tatashin
2017-08-07 20:38 ` [v6 02/15] x86/mm: setting fields in deferred pages Pavel Tatashin
2017-08-11  9:02   ` Michal Hocko
2017-08-11 15:39     ` Pasha Tatashin
2017-08-14 11:43       ` Michal Hocko
2017-08-14 13:32         ` Pasha Tatashin
2017-08-07 20:38 ` [v6 03/15] sparc64/mm: " Pavel Tatashin
2017-08-07 20:38 ` [v6 04/15] mm: discard memblock data later Pavel Tatashin
2017-08-11  9:32   ` Michal Hocko
2017-08-11  9:50     ` Mel Gorman
2017-08-11 15:49     ` Pasha Tatashin
2017-08-11 16:04       ` Michal Hocko
2017-08-11 16:22         ` Pasha Tatashin
2017-08-14 11:36           ` Michal Hocko
2017-08-14 13:35             ` Pasha Tatashin
2017-08-11 19:00     ` Pasha Tatashin
2017-08-14 11:34       ` Michal Hocko
2017-08-14 13:39         ` Pasha Tatashin
2017-08-14 13:42           ` Michal Hocko
2017-08-07 20:38 ` [v6 05/15] mm: don't accessed uninitialized struct pages Pavel Tatashin
2017-08-11  9:37   ` Michal Hocko
2017-08-11 15:55     ` Pasha Tatashin
2017-08-14 11:47       ` Michal Hocko
2017-08-14 13:51         ` Pasha Tatashin
2017-08-17 15:28           ` Pasha Tatashin
2017-08-17 15:43             ` Michal Hocko
2017-08-15  9:33   ` Michal Hocko
2017-08-07 20:38 ` [v6 06/15] sparc64: simplify vmemmap_populate Pavel Tatashin
2017-08-07 20:38 ` [v6 07/15] mm: defining memblock_virt_alloc_try_nid_raw Pavel Tatashin
2017-08-11 12:39   ` Michal Hocko
2017-08-11 15:58     ` Pasha Tatashin
2017-08-11 16:06       ` Michal Hocko
2017-08-11 16:24         ` Pasha Tatashin
2017-08-07 20:38 ` [v6 08/15] mm: zero struct pages during initialization Pavel Tatashin
2017-08-11 12:50   ` Michal Hocko
2017-08-11 16:03     ` Pasha Tatashin
2017-08-07 20:38 ` [v6 09/15] sparc64: optimized struct page zeroing Pavel Tatashin
2017-08-11 12:53   ` Michal Hocko
2017-08-11 16:04     ` Pasha Tatashin
2017-08-07 20:38 ` [v6 10/15] x86/kasan: explicitly zero kasan shadow memory Pavel Tatashin
2017-08-07 20:38 ` [v6 11/15] arm64/kasan: " Pavel Tatashin
2017-08-08  9:07   ` Will Deacon [this message]
2017-08-08 11:49     ` Pasha Tatashin
2017-08-08 12:30       ` Will Deacon
2017-08-08 12:49         ` Pasha Tatashin
2017-08-08 13:15       ` David Laight
2017-08-08 13:30         ` Pasha Tatashin
2017-08-07 20:38 ` [v6 12/15] mm: explicitly zero pagetable memory Pavel Tatashin
2017-08-07 20:38 ` [v6 13/15] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
2017-08-11 13:04   ` Michal Hocko
2017-08-11 16:11     ` Pasha Tatashin
2017-08-07 20:38 ` [v6 14/15] mm: optimize early system hash allocations Pavel Tatashin
2017-08-11 13:05   ` Michal Hocko
2017-08-11 16:13     ` Pasha Tatashin
2017-08-07 20:38 ` [v6 15/15] mm: debug for raw alloctor Pavel Tatashin
2017-08-11 13:08   ` Michal Hocko
2017-08-11 16:18     ` Pasha Tatashin
2017-08-14 11:50       ` Michal Hocko
2017-08-14 14:01         ` Pasha Tatashin
2017-08-15  9:36           ` Michal Hocko
2017-08-11  7:58 ` [v6 00/15] complete deferred page initialization Michal Hocko
2017-08-11 15:13   ` Pasha Tatashin
2017-08-11 15:22     ` Michal Hocko
