From: Peter Zijlstra <peterz@infradead.org>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>, Peter Xu <peterx@redhat.com>,
Nadav Amit <namit@vmware.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Andy Lutomirski <luto@kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Will Deacon <will@kernel.org>, Yu Zhao <yuzhao@google.com>,
Nick Piggin <npiggin@gmail.com>,
x86@kernel.org
Subject: Re: [PATCH 1/2] mm/mprotect: use mmu_gather
Date: Sun, 3 Oct 2021 14:10:19 +0200
Message-ID: <20211003121019.GF4323@worktop.programming.kicks-ass.net>
In-Reply-To: <20210925205423.168858-2-namit@vmware.com>
On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:
> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
> struct mm_struct *mm = vma->vm_mm;
> pgd_t *pgd;
> unsigned long next;
> - unsigned long start = addr;
> unsigned long pages = 0;
> + struct mmu_gather tlb;
>
> BUG_ON(addr >= end);
> pgd = pgd_offset(mm, addr);
> flush_cache_range(vma, addr, end);
> inc_tlb_flush_pending(mm);
That seems unbalanced...
> + tlb_gather_mmu(&tlb, mm);
> + tlb_start_vma(&tlb, vma);
> do {
> next = pgd_addr_end(addr, end);
> if (pgd_none_or_clear_bad(pgd))
> continue;
> - pages += change_p4d_range(vma, pgd, addr, next, newprot,
> + pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
> cp_flags);
> } while (pgd++, addr = next, addr != end);
>
> - /* Only flush the TLB if we actually modified any entries: */
> - if (pages)
> - flush_tlb_range(vma, start, end);
> - dec_tlb_flush_pending(mm);
... seeing you do remove the extra decrement.
> + tlb_end_vma(&tlb, vma);
> + tlb_finish_mmu(&tlb);
>
> return pages;
> }
> --
> 2.25.1
>
Thread overview: 17+ messages
2021-09-25 20:54 [PATCH 0/2] mm/mprotect: avoid unnecessary TLB flushes Nadav Amit
2021-09-25 20:54 ` [PATCH 1/2] mm/mprotect: use mmu_gather Nadav Amit
2021-10-03 12:10 ` Peter Zijlstra [this message]
2021-10-04 19:24 ` Nadav Amit
2021-10-05 6:53 ` Peter Zijlstra
2021-10-05 16:34 ` Nadav Amit
2021-10-11 3:45 ` Nadav Amit
2021-10-12 10:16 ` Peter Xu
2021-10-12 17:31 ` Nadav Amit
2021-10-12 23:20 ` Peter Xu
2021-10-13 15:59 ` Nadav Amit
2021-09-25 20:54 ` [PATCH 2/2] mm/mprotect: do not flush on permission promotion Nadav Amit
2021-10-07 12:13 ` David Hildenbrand
2021-10-07 16:16 ` Nadav Amit
2021-10-07 17:07 ` David Hildenbrand
2021-10-08 6:06 ` Nadav Amit
2021-10-08 7:35 ` David Hildenbrand