From: Nicholas Piggin <npiggin@gmail.com>
To: Andrew Morton
Cc: Nicholas Piggin, Randy Dunlap, linux-kernel@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-mm@kvack.org, Anton Blanchard, Andy Lutomirski
Subject: [PATCH v3 2/4] lazy tlb: allow lazy tlb mm switching to be configurable
Date: Tue, 1 Jun 2021 16:23:01 +1000
Message-Id: <20210601062303.3932513-3-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210601062303.3932513-1-npiggin@gmail.com>
References: <20210601062303.3932513-1-npiggin@gmail.com>

Add CONFIG_MMU_LAZY_TLB, which can be configured out to disable the lazy
tlb mechanism entirely; with it disabled, the kernel switches to init_mm
when switching to a kernel thread.

NOMMU systems could easily go without this and save a bit of code and the
refcount atomics, because their mm switch is a no-op. They have not been
switched over by default because the arch code needs to be audited and
tested for lazy tlb mm refcounting, and converted to _lazy_tlb refcounting
where necessary.

CONFIG_MMU_LAZY_TLB_REFCOUNT is also added, but for now it must always be
enabled if CONFIG_MMU_LAZY_TLB is enabled, until the next patch, which
provides an alternate scheme.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/Kconfig             | 26 ++++++++++++++
 include/linux/sched/mm.h | 13 +++++--
 kernel/sched/core.c      | 75 ++++++++++++++++++++++++++++++----------
 kernel/sched/sched.h     |  4 ++-
 4 files changed, 96 insertions(+), 22 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index c45b770d3579..276e1c1c0219 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -418,6 +418,32 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	  irqs disabled over activate_mm. Architectures that do IPI based TLB
 	  shootdowns should enable this.
 
+# Enable "lazy TLB", which means a user->kernel thread context switch does not
+# switch the mm to init_mm and the kernel thread takes a reference to the user
+# mm to provide its kernel mapping. This is how Linux has traditionally worked
+# (see Documentation/vm/active_mm.rst), for performance. Switching to and from
+# the idle thread is a performance-critical case.
+#
+# If mm context switches are inexpensive or free (in the case of NOMMU) then
+# this could be disabled.
+#
+# It would make sense to have this depend on MMU, but need to audit and test
+# the NOMMU architectures for lazy mm refcounting first.
+config MMU_LAZY_TLB
+	def_bool y
+	depends on !NO_MMU_LAZY_TLB
+
+# This allows archs to disable MMU_LAZY_TLB. mmgrab/mmdrop in arch/ code has
+# to be audited and switched to the _lazy_tlb postfix as necessary.
+config NO_MMU_LAZY_TLB
+	def_bool n
+
+# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
+# For now, this must be enabled if MMU_LAZY_TLB is enabled.
+config MMU_LAZY_TLB_REFCOUNT
+	def_bool y
+	depends on MMU_LAZY_TLB
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index bfd1baca5266..29e4638ad124 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -52,12 +52,21 @@ static inline void mmdrop(struct mm_struct *mm)
 /* Helpers for lazy TLB mm refcounting */
 static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
 {
-	mmgrab(mm);
+	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
+		mmgrab(mm);
 }
 
 static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
 {
-	mmdrop(mm);
+	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
+		mmdrop(mm);
+	} else {
+		/*
+		 * mmdrop_lazy_tlb must provide a full memory barrier, see the
+		 * membarrier comment in finish_task_switch() which relies on this.
+		 */
+		smp_mb();
+	}
 }
 
 /**
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e359c76ea2e2..299c3eb12b2b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4171,7 +4171,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	__releases(rq->lock)
 {
 	struct rq *rq = this_rq();
-	struct mm_struct *mm = rq->prev_mm;
+	struct mm_struct *mm = NULL;
 	long prev_state;
 
 	/*
@@ -4190,7 +4190,10 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 		      current->comm, current->pid, preempt_count()))
 		preempt_count_set(FORK_PREEMPT_COUNT);
 
-	rq->prev_mm = NULL;
+#ifdef CONFIG_MMU_LAZY_TLB_REFCOUNT
+	mm = rq->prev_lazy_mm;
+	rq->prev_lazy_mm = NULL;
+#endif
 
 	/*
 	 * A task struct has one reference for the use as "current".
@@ -4282,22 +4285,10 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 		calculate_sigpending();
 }
 
-/*
- * context_switch - switch to the new MM and the new thread's register state.
- */
-static __always_inline struct rq *
-context_switch(struct rq *rq, struct task_struct *prev,
-	       struct task_struct *next, struct rq_flags *rf)
+static __always_inline void
+context_switch_mm(struct rq *rq, struct task_struct *prev,
+	       struct task_struct *next)
 {
-	prepare_task_switch(rq, prev, next);
-
-	/*
-	 * For paravirt, this is coupled with an exit in switch_to to
-	 * combine the page table reload and the switch backend into
-	 * one hypercall.
-	 */
-	arch_start_context_switch(prev);
-
 	/*
 	 * kernel -> kernel	lazy + transfer active
 	 *   user -> kernel	lazy + mmgrab_lazy_tlb() active
@@ -4326,11 +4317,57 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		switch_mm_irqs_off(prev->active_mm, next->mm, next);
 
 		if (!prev->mm) {                        // from kernel
-			/* will mmdrop_lazy_tlb() in finish_task_switch(). */
-			rq->prev_mm = prev->active_mm;
+#ifdef CONFIG_MMU_LAZY_TLB_REFCOUNT
+			/* Will mmdrop_lazy_tlb() in finish_task_switch(). */
+			rq->prev_lazy_mm = prev->active_mm;
 			prev->active_mm = NULL;
+#else
+			/*
+			 * Without MMU_LAZY_TLB_REFCOUNT there is no lazy
+			 * tracking (because no rq->prev_lazy_mm) in
+			 * finish_task_switch(), so no mmdrop_lazy_tlb(),
+			 * so no memory barrier for membarrier (see the
+			 * membarrier comment in finish_task_switch()).
+			 * Do it here.
+			 */
+			smp_mb();
+#endif
 		}
 	}
+}
+
+static __always_inline void
+context_switch_mm_nolazy(struct rq *rq, struct task_struct *prev,
+			 struct task_struct *next)
+{
+	if (!next->mm)
+		next->active_mm = &init_mm;
+	membarrier_switch_mm(rq, prev->active_mm, next->active_mm);
+	switch_mm_irqs_off(prev->active_mm, next->active_mm, next);
+	if (!prev->mm)
+		prev->active_mm = NULL;
+}
+
+/*
+ * context_switch - switch to the new MM and the new thread's register state.
+ */
+static __always_inline struct rq *
+context_switch(struct rq *rq, struct task_struct *prev,
+	       struct task_struct *next, struct rq_flags *rf)
+{
+	prepare_task_switch(rq, prev, next);
+
+	/*
+	 * For paravirt, this is coupled with an exit in switch_to to
+	 * combine the page table reload and the switch backend into
+	 * one hypercall.
+	 */
+	arch_start_context_switch(prev);
+
+	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+		context_switch_mm(rq, prev, next);
+	else
+		context_switch_mm_nolazy(rq, prev, next);
 
 	rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a189bec13729..0729cf19a987 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -961,7 +961,9 @@ struct rq {
 	struct task_struct	*idle;
 	struct task_struct	*stop;
 	unsigned long		next_balance;
-	struct mm_struct	*prev_mm;
+#ifdef CONFIG_MMU_LAZY_TLB_REFCOUNT
+	struct mm_struct	*prev_lazy_mm;
+#endif
 
 	unsigned int		clock_update_flags;
 	u64			clock;
-- 
2.23.0
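
For reference, opting out of lazy tlb handling would then be a matter of an
architecture selecting NO_MMU_LAZY_TLB once its mmgrab/mmdrop usage on lazy
mms has been audited. A minimal sketch of what that could look like for a
hypothetical NOMMU architecture "foo" (not part of this patch, and not taken
from any in-tree arch):

    # arch/foo/Kconfig (hypothetical)
    config FOO
    	def_bool y
    	# mm switches are no-ops without an MMU, so skip lazy tlb handling
    	# (and the lazy mm refcount atomics) entirely.
    	select NO_MMU_LAZY_TLB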