linux-kernel.vger.kernel.org archive mirror
From: "Andy Lutomirski" <luto@kernel.org>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: "Dave Hansen" <dave.hansen@linux.intel.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	"kcc@google.com" <kcc@google.com>,
	"ryabinin.a.a@gmail.com" <ryabinin.a.a@gmail.com>,
	"andreyknvl@gmail.com" <andreyknvl@gmail.com>,
	"glider@google.com" <glider@google.com>,
	"dvyukov@google.com" <dvyukov@google.com>,
	"H.J. Lu" <hjl.tools@gmail.com>,
	"Andi Kleen" <ak@linux.intel.com>,
	"Rick P Edgecombe" <rick.p.edgecombe@intel.com>,
	linux-mm@kvack.org,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCHv3 4/8] x86/mm: Handle LAM on context switch
Date: Wed, 29 Jun 2022 18:51:44 -0700	[thread overview]
Message-ID: <c12b8785-1fc2-434a-83ae-f28532e6823a@www.fastmail.com> (raw)
In-Reply-To: <20220629003452.37yojljbcl7jjgu5@black.fi.intel.com>



On Tue, Jun 28, 2022, at 5:34 PM, Kirill A. Shutemov wrote:
> On Tue, Jun 28, 2022 at 04:33:21PM -0700, Andy Lutomirski wrote:
>> On 6/10/22 07:35, Kirill A. Shutemov wrote:
>> > Linear Address Masking (LAM) mode for userspace pointers is encoded in
>> > CR3 bits. The mode is selected per-thread. Add new thread features to
>> > indicate that the thread has Linear Address Masking enabled.
>> > 
>> > switch_mm_irqs_off() now respects these flags and constructs CR3
>> > accordingly.
>> > 
>> > The active LAM mode gets recorded in the tlb_state.
>> > 
>> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> > ---
>> >   arch/x86/include/asm/mmu.h         |  1 +
>> >   arch/x86/include/asm/mmu_context.h | 24 ++++++++++++
>> >   arch/x86/include/asm/tlbflush.h    |  3 ++
>> >   arch/x86/mm/tlb.c                  | 62 ++++++++++++++++++++++--------
>> >   4 files changed, 75 insertions(+), 15 deletions(-)
>> > 
>> > diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
>> > index 5d7494631ea9..d150e92163b6 100644
>> > --- a/arch/x86/include/asm/mmu.h
>> > +++ b/arch/x86/include/asm/mmu.h
>> > @@ -40,6 +40,7 @@ typedef struct {
>> >   #ifdef CONFIG_X86_64
>> >   	unsigned short flags;
>> > +	u64 lam_cr3_mask;
>> >   #endif
>> >   	struct mutex lock;
>> > diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
>> > index b8d40ddeab00..e6eac047c728 100644
>> > --- a/arch/x86/include/asm/mmu_context.h
>> > +++ b/arch/x86/include/asm/mmu_context.h
>> > @@ -91,6 +91,29 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
>> >   }
>> >   #endif
>> > +#ifdef CONFIG_X86_64
>> > +static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
>> > +{
>> > +	return mm->context.lam_cr3_mask;
>> > +}
>> > +
>> > +static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
>> > +{
>> > +	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
>> > +}
>> > +
>> > +#else
>> > +
>> > +static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
>> > +{
>> > +	return 0;
>> > +}
>> > +
>> > +static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
>> > +{
>> > +}
>> > +#endif
>> 
>> Do we really need the ifdeffery here?  I see no real harm in having the
>> field exist on 32-bit -- we don't care much about performance for 32-bit
>> kernels.
>
> The waste doesn't feel right to me. I would rather keep it.
>
> But sure I can do this if needed.

I could go either way here.

>
>> > -	if (real_prev == next) {
>> > +	if (real_prev == next && prev_lam == new_lam) {
>> >   		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>> >   			   next->context.ctx_id);
>> 
>> This looks wrong to me.  If we change threads within the same mm but lam
>> changes (which is certainly possible by a race if nothing else) then this
>> will go down the "we really are changing mms" path, not the "we're not
>> changing but we might need to flush something" path.
>
> If LAM gets enabled, we must write CR3 with the new LAM mode. Without this
> change, the real_prev == next case will not do so in the !was_lazy case.

You could fix that.  Or you could determine that this isn’t actually needed, just like updating the LDT in that path isn’t needed, if you change the way LAM is updated.

>
> Note that currently enabling LAM is done by setting LAM mode in the mmu
> context and doing switch_mm(current->mm, current->mm, current), so this is
> a very important case.
>

Well, I did separately ask why this is the case.

> -- 
>  Kirill A. Shutemov


Thread overview: 67+ messages
2022-06-10 14:35 [PATCHv3 0/8] Linear Address Masking enabling Kirill A. Shutemov
2022-06-10 14:35 ` [PATCHv3 1/8] x86/mm: Fix CR3_ADDR_MASK Kirill A. Shutemov
2022-06-10 23:32   ` Edgecombe, Rick P
2022-06-10 14:35 ` [PATCHv3 2/8] x86: CPUID and CR3/CR4 flags for Linear Address Masking Kirill A. Shutemov
2022-06-10 14:35 ` [PATCHv3 3/8] mm: Pass down mm_struct to untagged_addr() Kirill A. Shutemov
2022-06-10 23:33   ` Edgecombe, Rick P
2022-06-17 15:27   ` Alexander Potapenko
2022-06-17 22:38     ` Kirill A. Shutemov
2022-06-10 14:35 ` [PATCHv3 4/8] x86/mm: Handle LAM on context switch Kirill A. Shutemov
2022-06-10 23:55   ` Edgecombe, Rick P
2022-06-15 15:54     ` Kirill A. Shutemov
2022-06-16  9:08   ` Peter Zijlstra
2022-06-16 16:40     ` Kirill A. Shutemov
2022-06-17 15:35   ` Alexander Potapenko
2022-06-17 22:39     ` Kirill A. Shutemov
2022-06-28 23:33   ` Andy Lutomirski
2022-06-29  0:34     ` Kirill A. Shutemov
2022-06-30  1:51       ` Andy Lutomirski [this message]
2022-06-10 14:35 ` [PATCHv3 5/8] x86/uaccess: Provide untagged_addr() and remove tags before address check Kirill A. Shutemov
2022-06-13 17:36   ` Edgecombe, Rick P
2022-06-15 16:58     ` Kirill A. Shutemov
2022-06-15 19:06       ` Edgecombe, Rick P
2022-06-16  9:30     ` Peter Zijlstra
2022-06-16 16:44       ` Kirill A. Shutemov
2022-06-17 11:36         ` Peter Zijlstra
2022-06-17 14:22           ` H.J. Lu
2022-06-17 14:28             ` Peter Zijlstra
2022-06-16  9:34     ` Peter Zijlstra
2022-06-16 10:02   ` Peter Zijlstra
2022-06-16 16:48     ` Kirill A. Shutemov
2022-06-28 23:40   ` Andy Lutomirski
2022-06-29  0:42     ` Kirill A. Shutemov
2022-06-30  2:38       ` Andy Lutomirski
2022-07-05  0:13         ` Kirill A. Shutemov
2022-06-10 14:35 ` [PATCHv3 6/8] x86/mm: Provide ARCH_GET_UNTAG_MASK and ARCH_ENABLE_TAGGED_ADDR Kirill A. Shutemov
2022-06-10 15:25   ` Edgecombe, Rick P
2022-06-10 18:04     ` Kirill A. Shutemov
2022-06-10 16:16   ` Edgecombe, Rick P
2022-06-10 18:06     ` Kirill A. Shutemov
2022-06-10 18:08       ` Edgecombe, Rick P
2022-06-10 22:18         ` Edgecombe, Rick P
2022-06-11  1:12           ` Kirill A. Shutemov
2022-06-11  2:36             ` Edgecombe, Rick P
2022-06-12 21:03           ` Andy Lutomirski
2022-06-16  9:44             ` Peter Zijlstra
2022-06-16 16:54               ` Kirill A. Shutemov
2022-06-30  2:04                 ` Andy Lutomirski
2022-06-13 14:42   ` Michal Hocko
2022-06-16 17:05     ` Kirill A. Shutemov
2022-06-19 23:40       ` Kirill A. Shutemov
2022-06-16  9:39   ` Peter Zijlstra
2022-06-28 23:42   ` Andy Lutomirski
2022-06-29  0:53     ` Kirill A. Shutemov
2022-06-30  2:29       ` Andy Lutomirski
2022-07-01 15:38         ` Kirill A. Shutemov
2022-07-02 23:55           ` Andy Lutomirski
2022-07-04 13:43             ` Kirill A. Shutemov
2022-06-10 14:35 ` [PATCHv3 7/8] x86: Expose untagging mask in /proc/$PID/arch_status Kirill A. Shutemov
2022-06-10 15:24   ` Dave Hansen
2022-06-11  1:28     ` Kirill A. Shutemov
2022-06-27 12:00       ` Catalin Marinas
2022-06-10 14:35 ` [PATCHv3 OPTIONAL 8/8] x86/mm: Extend LAM to support to LAM_U48 Kirill A. Shutemov
2022-06-16 10:00   ` Peter Zijlstra
2022-06-10 20:22 ` [PATCHv3 0/8] Linear Address Masking enabling Kostya Serebryany
2022-06-16 22:52 ` Edgecombe, Rick P
2022-06-16 23:43   ` Kirill A. Shutemov
2022-06-16 23:48     ` Edgecombe, Rick P
