From: Max Filippov <jcmvbkbc@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	Ralf Baechle <ralf@linux-mips.org>,
	Chris Zankel <chris@zankel.net>,
	Marc Gauthier <Marc.Gauthier@tensilica.com>,
	linux-xtensa@linux-xtensa.org, Hugh Dickins <hughd@google.com>
Subject: Re: TLB and PTE coherency during munmap
Date: Fri, 31 May 2013 08:09:17 +0400
Message-ID: <CAMo8BfJtkEtf9RKsGRnOnZ5zbJQz5tW4HeDfydFq_ZnrFr8opw@mail.gmail.com>
In-Reply-To: <20130529175125.GJ12193@twins.programming.kicks-ass.net>

Hi Peter,

On Wed, May 29, 2013 at 9:51 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> What about something like this?

With that patch I still get mtest05 firing my TLB/PTE incoherency check
in the UP PREEMPT_VOLUNTARY configuration. It fires after zap_pte_range
has completed, at the end of unmap_region, because a reschedule is
reached through the following call chain (see the sketch after the
chain for why that is a problem):

unmap_region
  free_pgtables
    unlink_anon_vmas
      lock_anon_vma_root
        down_write
          might_sleep
            might_resched
              _cond_resched
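
In fast mode this chain is reached with the pages already freed but the
TLB flush still pending. A trimmed sketch of the generic 3.x
__tlb_remove_page() (quoted from memory, so details may be slightly
off) shows why:

    int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
    {
            VM_BUG_ON(!tlb->need_flush);

            if (tlb_fast_mode(tlb)) {
                    /* UP: the page is freed right here... */
                    free_page_and_swap_cache(page);
                    /* ...while the TLB flush stays deferred */
                    return 1;
            }

            /* SMP batching path omitted */
            return 0;
    }

On UP tlb_fast_mode() still returns 1 with your patch applied, since
PREEMPT_VOLUNTARY does not define CONFIG_PREEMPT. So zap_pte_range
frees every page immediately, the flush only happens in tlb_finish_mmu
after free_pgtables has returned, and whatever gets scheduled from
_cond_resched above may still hit stale TLB entries pointing at the
freed pages. cond_resched_tlb never runs on this path, because the
reschedule happens outside zap_pmd_range.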


>
> ---
>  include/asm-generic/tlb.h | 11 ++++++++++-
>  mm/memory.c               | 17 ++++++++++++++++-
>  2 files changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index b1b1fa6..651b1cf 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -116,6 +116,7 @@ struct mmu_gather {
>
>  static inline int tlb_fast_mode(struct mmu_gather *tlb)
>  {
> +#ifndef CONFIG_PREEMPT
>  #ifdef CONFIG_SMP
>         return tlb->fast_mode;
>  #else
> @@ -124,7 +125,15 @@ static inline int tlb_fast_mode(struct mmu_gather *tlb)
>          * and page free order so much..
>          */
>         return 1;
> -#endif
> +#endif /* CONFIG_SMP */
> +#else  /* CONFIG_PREEMPT */
> +       /*
> +        * Since mmu_gather is preemptible, preemptible kernels are like SMP
> +        * kernels, we must batch to make sure we invalidate TLBs before we
> +        * free the pages.
> +        */
> +       return 0;
> +#endif /* CONFIG_PREEMPT */
>  }
>
>  void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm);
> diff --git a/mm/memory.c b/mm/memory.c
> index 6dc1882..e915af2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -384,6 +384,21 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
>
>  #endif /* CONFIG_HAVE_RCU_TABLE_FREE */
>
> +static inline void cond_resched_tlb(struct mmu_gather *tlb)
> +{
> +#ifndef CONFIG_PREEMPT
> +       /*
> +        * For full preempt kernels we must do regular batching like
> +        * SMP, see tlb_fast_mode(). For !PREEMPT we can 'cheat' and
> +        * do a flush before our voluntary 'yield'.
> +        */
> +       if (need_resched()) {
> +               tlb_flush_mmu(tlb);
> +               cond_resched();
> +       }
> +#endif
> +}
> +
>  /*
>   * If a p?d_bad entry is found while walking page tables, report
>   * the error, before resetting entry to p?d_none.  Usually (but
> @@ -1264,7 +1279,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
>                         goto next;
>                 next = zap_pte_range(tlb, vma, pmd, addr, next, details);
>  next:
> -               cond_resched();
> +               cond_resched_tlb(tlb);
>         } while (pmd++, addr = next, addr != end);
>
>         return addr;
>
>
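
For reference, the incoherency check that fires is conceptually like
the following (a generic sketch: tlb_probe() and tlb_entry_pfn() are
hypothetical stand-ins for the architecture's TLB inspection
primitives, this is not the actual xtensa code):

    static void check_tlb_pte_coherency(struct mm_struct *mm,
                                        unsigned long addr)
    {
            pgd_t *pgd = pgd_offset(mm, addr);
            pud_t *pud;
            pmd_t *pmd;
            pte_t *pte;
            int way;

            if (pgd_none(*pgd))
                    return;
            pud = pud_offset(pgd, addr);
            if (pud_none(*pud))
                    return;
            pmd = pmd_offset(pud, addr);
            if (pmd_none(*pmd))
                    return;
            pte = pte_offset_map(pmd, addr);

            /* hypothetical: find a TLB entry covering addr, -1 if none */
            way = tlb_probe(addr);
            if (way >= 0 &&
                (!pte_present(*pte) || tlb_entry_pfn(way) != pte_pfn(*pte)))
                    pr_err("stale TLB entry for %08lx\n", addr);

            pte_unmap(pte);
    }

It fires exactly in the window above: the TLB still maps the page while
the PTE is already cleared.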



-- 
Thanks.
-- Max

Thread overview:

2013-05-26  2:42 TLB and PTE coherency during munmap Max Filippov
2013-05-26  2:50 ` Max Filippov
2013-05-28  7:10   ` Max Filippov
2013-05-29 12:27     ` Peter Zijlstra
2013-05-29 12:42       ` Vineet Gupta
2013-05-29 12:47         ` Peter Zijlstra
2013-05-29 17:51         ` Peter Zijlstra
2013-05-29 22:04           ` Catalin Marinas
2013-05-30  6:48             ` Peter Zijlstra
2013-05-30  5:04           ` Vineet Gupta
2013-05-30  6:56             ` Peter Zijlstra
2013-05-30  7:00               ` Vineet Gupta
2013-05-30 11:03                 ` Peter Zijlstra
2013-05-31  4:09           ` Max Filippov [this message]
2013-05-31  7:55             ` Peter Zijlstra
2013-06-03  9:05             ` Peter Zijlstra
2013-06-03  9:16               ` Ingo Molnar
2013-06-03 10:01                 ` Catalin Marinas
2013-06-03 10:04                   ` Peter Zijlstra
2013-06-03 10:09                     ` Catalin Marinas
2013-06-04  9:52               ` Peter Zijlstra
2013-06-05  0:05                 ` Linus Torvalds
2013-06-05 10:26                   ` [PATCH] arch, mm: Remove tlb_fast_mode() Peter Zijlstra
2013-05-31  1:40       ` TLB and PTE coherency during munmap Max Filippov
2013-05-28 14:34   ` Konrad Rzeszutek Wilk
2013-05-29  3:23     ` Max Filippov
2013-05-28 15:16   ` Michal Hocko
2013-05-28 15:23     ` Catalin Marinas
2013-05-28 14:35 ` Catalin Marinas
2013-05-29  4:15   ` Max Filippov
2013-05-29 10:15     ` Catalin Marinas
2013-05-31  1:26       ` Max Filippov
2013-05-31  9:06         ` Catalin Marinas
2013-06-03  9:16         ` Max Filippov
2013-05-29 11:53   ` Vineet Gupta
2013-05-29 12:00   ` Vineet Gupta
2013-06-07  2:21 George Spelvin
