From: Yang Shi <yang.shi@linux.alibaba.com>
To: Will Deacon <will.deacon@arm.com>
Cc: jstancek@redhat.com, peterz@infradead.org, namit@vmware.com,
minchan@kernel.org, mgorman@suse.de, stable@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush
Date: Mon, 13 May 2019 16:01:09 -0700 [thread overview]
Message-ID: <360170d7-b16f-f130-f930-bfe54be9747a@linux.alibaba.com> (raw)
In-Reply-To: <20190513163804.GB10754@fuggles.cambridge.arm.com>
On 5/13/19 9:38 AM, Will Deacon wrote:
> On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index 99740e1..469492d 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>> {
>> /*
>> * If there are parallel threads are doing PTE changes on same range
>> - * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
>> - * flush by batching, a thread has stable TLB entry can fail to flush
>> - * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
>> - * forcefully if we detect parallel PTE batching threads.
>> + * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
>> + * flush by batching, one thread may end up seeing inconsistent PTEs
>> + * and result in having stale TLB entries. So flush TLB forcefully
>> + * if we detect parallel PTE batching threads.
>> + *
>> + * However, some syscalls, e.g. munmap(), may free page tables; this
>> + * needs a forced flush of everything in the given range. Otherwise
>> + * it may leave stale TLB entries on architectures, e.g. aarch64,
>> + * that can specify which level of TLB to flush.
>> */
>> - if (mm_tlb_flush_nested(tlb->mm)) {
>> - __tlb_reset_range(tlb);
>> - __tlb_adjust_range(tlb, start, end - start);
>> + if (mm_tlb_flush_nested(tlb->mm) && !tlb->fullmm) {
>> + /*
>> + * Since we can't tell what we actually should have
>> + * flushed, flush everything in the given range.
>> + */
>> + tlb->freed_tables = 1;
>> + tlb->cleared_ptes = 1;
>> + tlb->cleared_pmds = 1;
>> + tlb->cleared_puds = 1;
>> + tlb->cleared_p4ds = 1;
>> +
>> + /*
>> + * Some architectures, e.g. ARM, that have range invalidation
>> + * and care about VM_EXEC for I-Cache invalidation, need
>> + * vma_exec force-set.
>> + */
>> + tlb->vma_exec = 1;
>> +
>> + /* Force vma_huge clear to guarantee safer flush */
>> + tlb->vma_huge = 0;
>> +
>> + tlb->start = start;
>> + tlb->end = end;
>> }
> Whilst I think this is correct, it would be interesting to see whether
> or not it's actually faster than just nuking the whole mm, as I mentioned
> before.
>
> At least in terms of getting a short-term fix, I'd prefer the diff below
> if it's not measurably worse.
I did a quick test with ebizzy (96 threads, 5 iterations) on my x86
VM. It shows a slight slowdown in records/s but noticeably more sys time
spent with the fullmm flush; the data is below.
                  nofullmm    fullmm
ops (records/s)     225606    225119
sys (s)               0.69      1.14
It looks like the slight reduction in records/s is caused by the
increase in sys time.
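
As an aside for anyone reading along, below is a minimal standalone
sketch, not the actual kernel code (flush_mm() and flush_range() are
made-up stand-ins for the arm64 flush_tlb_mm() / __flush_tlb_range()
primitives), of the arch-side decision the two fixes feed into. It is
where the extra sys time of the fullmm variant comes from:

	struct tlb_sketch {
		unsigned long start, end;
		unsigned int fullmm:1;
		unsigned int freed_tables:1;
	};

	/* Made-up stand-ins for the real arch flush primitives. */
	static void flush_mm(void)
	{
		/* invalidate every TLB entry belonging to the mm */
	}

	static void flush_range(unsigned long start, unsigned long end,
				int last_level)
	{
		/* invalidate only [start, end); last_level skips walk-cache entries */
	}

	static void tlb_flush_sketch(struct tlb_sketch *tlb)
	{
		if (tlb->fullmm) {
			/*
			 * The fullmm fallback lands here: correct, but it also
			 * drops every other mapping of the mm, so later accesses
			 * must refill the TLB -- the extra sys time seen above.
			 */
			flush_mm();
			return;
		}

		/*
		 * The v2 patch keeps this path: only the recorded range is
		 * invalidated, and freed_tables forces a non-last-level flush
		 * so freed page-table (walk cache) entries go away too.
		 */
		flush_range(tlb->start, tlb->end, !tlb->freed_tables);
	}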
>
> Will
>
> --->8
>
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 99740e1dd273..cc251422d307 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -251,8 +251,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
> * forcefully if we detect parallel PTE batching threads.
> */
> if (mm_tlb_flush_nested(tlb->mm)) {
> + tlb->fullmm = 1;
> __tlb_reset_range(tlb);
> - __tlb_adjust_range(tlb, start, end - start);
> + tlb->freed_tables = 1;
> }
>
> tlb_flush_mmu(tlb);