From: Nadav Amit
To: Peter Zijlstra, Andy Lutomirski
Cc: Borislav Petkov, Dave Hansen, Ingo Molnar, Thomas Gleixner,
 x86@kernel.org, linux-kernel@vger.kernel.org, Nadav Amit, Dave Hansen
Subject: [RFC PATCH v2 08/12] x86/tlb: Privatize cpu_tlbstate
Date: Thu, 30 May 2019 23:36:41 -0700
Message-Id: <20190531063645.4697-9-namit@vmware.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190531063645.4697-1-namit@vmware.com>
References: <20190531063645.4697-1-namit@vmware.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

cpu_tlbstate is mostly private and only the variable is_lazy is
shared. This causes some false sharing when TLB flushes are performed.

Break cpu_tlbstate into cpu_tlbstate and cpu_tlbstate_shared, and mark
each one accordingly.
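[ Illustration, not part of the patch: the split helps because each
  per-CPU structure is cache-line aligned, so a remote CPU that polls
  is_lazy before sending a flush IPI stops contending with the owning
  CPU's frequent writes to the private fields. The stand-alone
  user-space sketch below shows the idea; all names in it (tlb_private,
  tlb_shared, NCPUS, CACHELINE) are hypothetical and do not exist in
  the kernel. ]

/*
 * Illustration only -- a user-space sketch of why splitting the
 * structure avoids false sharing. All identifiers are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

#define CACHELINE 64
#define NCPUS 4

/* Mostly written by its owning CPU -- analogous to cpu_tlbstate. */
struct tlb_private {
	unsigned long loaded_mm;
	unsigned short loaded_mm_asid;
	unsigned short next_asid;
} __attribute__((aligned(CACHELINE)));

/*
 * Read by remote CPUs before sending a flush IPI -- analogous to
 * cpu_tlbstate_shared.
 */
struct tlb_shared {
	bool is_lazy;
} __attribute__((aligned(CACHELINE)));

static struct tlb_private priv[NCPUS];
static struct tlb_shared shared[NCPUS];

int main(void)
{
	/*
	 * Each array element occupies at least a full cache line, so a
	 * CPU updating its own priv[] entry no longer invalidates the
	 * line a remote CPU has cached in order to read its
	 * shared[].is_lazy flag -- the false sharing that occurs when
	 * is_lazy lives inside struct tlb_state itself.
	 */
	printf("sizeof(priv[0]) = %zu, sizeof(shared[0]) = %zu\n",
	       sizeof(priv[0]), sizeof(shared[0]));
	return 0;
}

[ Both sizes print as multiples of 64 -- the padding that keeps the
  remotely-read flag and the locally-written state on separate cache
  lines. ]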
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/tlbflush.h | 40 ++++++++++++++++++---------------
 arch/x86/mm/init.c              |  2 +-
 arch/x86/mm/tlb.c               | 15 ++++++++-----
 3 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 79272938cf79..a1fea36d5292 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -178,23 +178,6 @@ struct tlb_state {
 	u16 loaded_mm_asid;
 	u16 next_asid;
 
-	/*
-	 * We can be in one of several states:
-	 *
-	 *  - Actively using an mm.  Our CPU's bit will be set in
-	 *    mm_cpumask(loaded_mm) and is_lazy == false;
-	 *
-	 *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
-	 *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
-	 *
-	 *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
-	 *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
-	 *    We're heuristically guessing that the CR3 load we
-	 *    skipped more than makes up for the overhead added by
-	 *    lazy mode.
-	 */
-	bool is_lazy;
-
 	/*
 	 * If set we changed the page tables in such a way that we
 	 * needed an invalidation of all contexts (aka. PCIDs / ASIDs).
@@ -240,7 +223,27 @@ struct tlb_state {
 	 */
 	struct tlb_context ctxs[TLB_NR_DYN_ASIDS];
 };
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
+DECLARE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate);
+
+struct tlb_state_shared {
+	/*
+	 * We can be in one of several states:
+	 *
+	 *  - Actively using an mm.  Our CPU's bit will be set in
+	 *    mm_cpumask(loaded_mm) and is_lazy == false;
+	 *
+	 *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
+	 *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
+	 *
+	 *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
+	 *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
+	 *    We're heuristically guessing that the CR3 load we
+	 *    skipped more than makes up for the overhead added by
+	 *    lazy mode.
+	 */
+	bool is_lazy;
+};
+DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
 
 /*
  * Blindly accessing user memory from NMI context can be dangerous
@@ -439,6 +442,7 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
 {
 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
 
+	//invpcid_flush_one(kern_pcid(loaded_mm_asid), addr);
 	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
 
 	if (!static_cpu_has(X86_FEATURE_PTI))
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index fd10d91a6115..34027f36a944 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -951,7 +951,7 @@ void __init zone_sizes_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {
+__visible DEFINE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate) = {
 	.loaded_mm = &init_mm,
 	.next_asid = 1,
 	.cr4 = ~0UL,	/* fail hard if we screw up cr4 shadow initialization */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b0c3065aad5d..755b2bb3e5b6 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -144,7 +144,7 @@ void leave_mm(int cpu)
 		return;
 
 	/* Warn if we're not lazy. */
-	WARN_ON(!this_cpu_read(cpu_tlbstate.is_lazy));
+	WARN_ON(!this_cpu_read(cpu_tlbstate_shared.is_lazy));
 
 	switch_mm(NULL, &init_mm, NULL);
 }
@@ -276,7 +276,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-	bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
+	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
 	unsigned cpu = smp_processor_id();
 	u64 next_tlb_gen;
 	bool need_flush;
@@ -321,7 +321,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		__flush_tlb_all();
 	}
 #endif
-	this_cpu_write(cpu_tlbstate.is_lazy, false);
+	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
 	/*
 	 * The membarrier system call requires a full memory barrier and
@@ -462,7 +462,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	this_cpu_write(cpu_tlbstate.is_lazy, true);
+	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
 }
 
 /*
@@ -543,7 +543,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].ctx_id) !=
 		   loaded_mm->context.ctx_id);
 
-	if (this_cpu_read(cpu_tlbstate.is_lazy)) {
+	if (this_cpu_read(cpu_tlbstate_shared.is_lazy)) {
 		/*
 		 * We're in lazy mode. We need to at least flush our
 		 * paging-structure cache to avoid speculatively reading
@@ -659,11 +659,14 @@ static void flush_tlb_func_remote(void *info)
 
 static inline bool tlb_is_not_lazy(int cpu)
 {
-	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
+	return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
 }
 
 static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
 
+DEFINE_PER_CPU_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+EXPORT_PER_CPU_SYMBOL(cpu_tlbstate_shared);
+
 void native_flush_tlb_multi(const struct cpumask *cpumask,
 			    const struct flush_tlb_info *info)
 {
-- 
2.20.1