* [5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush
@ 2019-06-17 20:46 Yang Shi
  2019-06-17 20:52 ` Yang Shi
  0 siblings, 1 reply; 2+ messages in thread
From: Yang Shi @ 2019-06-17 20:46 UTC (permalink / raw)
  To: gregkh, akpm, aneesh.kumar, jstancek, mgorman, minchan, namit,
	npiggin, peterz, will.deacon
  Cc: yang.shi, stable, linux-mm, linux-kernel

A few new fields were added to mmu_gather to make TLB flushing smarter
for huge pages by tracking which level of the page table has changed.
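
For reference, the new tracking state in struct mmu_gather looks roughly
like the sketch below (abridged; the exact 5.1 field layout in
include/asm-generic/tlb.h may differ slightly):

struct mmu_gather {
	...
	unsigned int	fullmm : 1;		/* flushing the whole mm */
	unsigned int	freed_tables : 1;	/* page-table pages were freed */
	unsigned int	cleared_ptes : 1;	/* which levels were modified */
	unsigned int	cleared_pmds : 1;
	unsigned int	cleared_puds : 1;
	unsigned int	cleared_p4ds : 1;
	...
};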

__tlb_reset_range() is used to reset all of this page-table state to
"unchanged"; it is called by the TLB flush code when there are parallel
mapping changes to the same range under a non-exclusive lock (i.e. read
mmap_sem).  Before commit dd2283f2605e ("mm: mmap: zap pages with read
mmap_sem in munmap"), the syscalls (e.g. MADV_DONTNEED, MADV_FREE) which
may update PTEs in parallel did not remove page tables.  But the
aforementioned commit may do munmap() under read mmap_sem and free page
tables.  This may result in a program hang on aarch64, as reported by
Jan Stancek.  The problem can be reproduced with his test program,
slightly modified below.

---8<---

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/mman.h>

/* mmap hint area size; value chosen for illustration here, the
   original reproducer defines its own */
#define DISTANT_MMAP_SIZE (64UL * 1024 * 1024)

static int map_size = 4096;
static int num_iter = 500;
static long threads_total;

static void *distant_area;

void *map_write_unmap(void *ptr)
{
	int *fd = ptr;
	unsigned char *map_address;
	int i, j = 0;

	for (i = 0; i < num_iter; i++) {
		map_address = mmap(distant_area, (size_t) map_size, PROT_WRITE | PROT_READ,
			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (map_address == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}

		for (j = 0; j < map_size; j++)
			map_address[j] = 'b';

		if (munmap(map_address, map_size) == -1) {
			perror("munmap");
			exit(1);
		}
	}

	return NULL;
}

void *dummy(void *ptr)
{
	return NULL;
}

int main(void)
{
	pthread_t thid[2];

	/* hint for mmap in map_write_unmap() */
	distant_area = mmap(0, DISTANT_MMAP_SIZE, PROT_WRITE | PROT_READ,
			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	munmap(distant_area, (size_t)DISTANT_MMAP_SIZE);
	distant_area += DISTANT_MMAP_SIZE / 2;

	while (1) {
		pthread_create(&thid[0], NULL, map_write_unmap, NULL);
		pthread_create(&thid[1], NULL, dummy, NULL);

		pthread_join(thid[0], NULL);
		pthread_join(thid[1], NULL);
	}
}
---8<---
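
The reproducer builds with pthread support (file name is arbitrary):

	gcc -O2 -pthread -o repro repro.c
	./repro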

The program can trigger parallel execution like the following:

        t1                                        t2
munmap(map_address)
  downgrade_write(&mm->mmap_sem);
  unmap_region()
  tlb_gather_mmu()
    inc_tlb_flush_pending(tlb->mm);
  free_pgtables()
    tlb->freed_tables = 1
    tlb->cleared_pmds = 1

                                        pthread_exit()
                                        madvise(thread_stack, 8M, MADV_DONTNEED)
                                          zap_page_range()
                                            tlb_gather_mmu()
                                              inc_tlb_flush_pending(tlb->mm);

  tlb_finish_mmu()
    if (mm_tlb_flush_nested(tlb->mm))
      __tlb_reset_range()

__tlb_reset_range() would reset the freed_tables and cleared_* bits,
which is inconsistent for munmap(), since it does free page tables.  As
a result, architectures that flush the TLB selectively by page-table
level, e.g. aarch64, may not flush the TLB completely, leaving stale
TLB entries behind.
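
For context, the force path is taken in the caller; the 5.1-era
tlb_finish_mmu() looks roughly like this (simplified sketch):

void tlb_finish_mmu(struct mmu_gather *tlb,
		unsigned long start, unsigned long end)
{
	/*
	 * mm_tlb_flush_nested() returns true here because the madvise()
	 * in t2 has also called inc_tlb_flush_pending(), so the munmap()
	 * side takes the force path and, before this fix, dropped its
	 * freed_tables/cleared_* state via __tlb_reset_range().
	 */
	bool force = mm_tlb_flush_nested(tlb->mm);

	arch_tlb_finish_mmu(tlb, start, end, force);
	dec_tlb_flush_pending(tlb->mm);
}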

Use a fullmm flush, since it yields much better performance on aarch64
and a non-fullmm flush doesn't make a significant difference on x86.
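
The difference comes from how the invalidation is issued: a range flush
on aarch64 broadcasts one TLBI per page, while a fullmm flush can
invalidate the whole ASID at once.  A purely illustrative sketch
(hypothetical helpers, not the arm64 implementation):

/* hypothetical stand-ins for the broadcast TLBI operations */
static inline void tlbi_page_broadcast(unsigned long addr) { /* e.g. TLBI VAE1IS */ }
static inline void tlbi_asid_broadcast(unsigned long asid) { /* e.g. TLBI ASIDE1IS */ }

static void flush_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += 4096)	/* O(pages) broadcasts */
		tlbi_page_broadcast(addr);
}

static void flush_fullmm_sketch(unsigned long asid)
{
	tlbi_asid_broadcast(asid);			/* a single broadcast */
}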

The originally proposed fix came from Jan Stancek, who did most of the
debugging of this issue; I just wrapped everything up together.

Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Suggested-by: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: stable@vger.kernel.org # 4.20+
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Jan Stancek <jstancek@redhat.com>
---
 mm/mmu_gather.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index f2f03c6..3543b82 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -92,9 +92,30 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;
 
+	/*
+	 * If parallel threads are doing PTE changes on the same range
+	 * under a non-exclusive lock (e.g., mmap_sem read-side) but defer
+	 * the TLB flush by batching, one thread may end up seeing
+	 * inconsistent PTEs and stale TLB entries.  So flush the TLB
+	 * forcefully if we detect parallel PTE batching threads.
+	 *
+	 * However, some syscalls, e.g. munmap(), may free page tables;
+	 * these need to force flush everything in the given range.
+	 * Otherwise stale TLB entries may remain on architectures,
+	 * e.g. aarch64, that can flush only the affected page-table level.
+	 */
 	if (force) {
+		/*
+		 * aarch64 yields better performance with fullmm by
+		 * avoiding multiple CPUs spamming TLBI messages at
+		 * the same time.
+		 *
+		 * On x86, non-fullmm doesn't yield a significant
+		 * difference compared to fullmm.
+		 */
+		tlb->fullmm = 1;
 		__tlb_reset_range(tlb);
-		__tlb_adjust_range(tlb, start, end - start);
+		tlb->freed_tables = 1;
 	}
 
 	tlb_flush_mmu(tlb);
-- 
1.8.3.1



* Re: [5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush
  2019-06-17 20:46 [5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush Yang Shi
@ 2019-06-17 20:52 ` Yang Shi
  0 siblings, 0 replies; 2+ messages in thread
From: Yang Shi @ 2019-06-17 20:52 UTC (permalink / raw)
  To: gregkh, akpm, aneesh.kumar, jstancek, mgorman, minchan, namit,
	npiggin, peterz, will.deacon
  Cc: stable, linux-mm, linux-kernel

This patch is wrong, please disregard this one. The corrected one will 
be posted soon. Sorry for the inconvenience.


Yang



