From: Nicholas Piggin <npiggin@gmail.com>
To: Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, Randy Dunlap,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org,
	Anton Blanchard, Andy Lutomirski
Subject: [PATCH v4 1/4] lazy tlb: introduce lazy mm refcount helper functions
Date: Sat, 5 Jun 2021 11:42:13 +1000
Message-Id: <20210605014216.446867-2-npiggin@gmail.com>
In-Reply-To: <20210605014216.446867-1-npiggin@gmail.com>
References: <20210605014216.446867-1-npiggin@gmail.com>

Add explicit _lazy_tlb annotated functions for lazy mm refcounting.
This makes lazy mm references more obvious, and allows explicit
refcounting to be removed if it is not used.
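For illustration only (not part of the change, and the function below is
hypothetical): a minimal sketch of the calling convention the helpers
establish, mirroring the context_switch() and finish_task_switch() hunks
in the diff. A _lazy_tlb reference is taken when a CPU starts using an
mm it does not own (a kernel thread inheriting the previous task's mm),
and dropped with the matching helper when that lazy mm is switched away:

/*
 * Hypothetical sketch, not part of this patch: switching to a kernel
 * thread keeps the previous task's mm alive as the lazy TLB mm via
 * mmgrab_lazy_tlb(); the reference is later dropped with
 * mmdrop_lazy_tlb() when the CPU switches to a real user mm.
 */
static void example_lazy_tlb_handoff(struct task_struct *prev,
				     struct task_struct *next)
{
	if (!next->mm) {			/* to kernel thread */
		next->active_mm = prev->active_mm;
		if (prev->mm)			/* from user task */
			mmgrab_lazy_tlb(prev->active_mm);
	} else if (!prev->mm) {			/* from kernel thread */
		/* pairs with the grab above, once the mm is unused */
		mmdrop_lazy_tlb(prev->active_mm);
	}
}
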
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/arm/mach-rpc/ecard.c            |  2 +-
 arch/powerpc/kernel/smp.c            |  2 +-
 arch/powerpc/mm/book3s64/radix_tlb.c |  4 ++--
 fs/exec.c                            |  4 ++--
 include/linux/sched/mm.h             | 11 +++++++++++
 kernel/cpu.c                         |  2 +-
 kernel/exit.c                        |  2 +-
 kernel/kthread.c                     | 11 +++++++----
 kernel/sched/core.c                  | 15 ++++++++-------
 9 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mach-rpc/ecard.c b/arch/arm/mach-rpc/ecard.c
index 827b50f1c73e..1b4a41aad793 100644
--- a/arch/arm/mach-rpc/ecard.c
+++ b/arch/arm/mach-rpc/ecard.c
@@ -253,7 +253,7 @@ static int ecard_init_mm(void)
 	current->mm = mm;
 	current->active_mm = mm;
 	activate_mm(active_mm, mm);
-	mmdrop(active_mm);
+	mmdrop_lazy_tlb(active_mm);
 	ecard_init_pgtables(mm);
 	return 0;
 }
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2e05c783440a..fb0bdfc67366 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1541,7 +1541,7 @@ void start_secondary(void *unused)
 {
 	unsigned int cpu = raw_smp_processor_id();
 
-	mmgrab(&init_mm);
+	mmgrab_lazy_tlb(&init_mm);
 	current->active_mm = &init_mm;
 
 	smp_store_cpu_info(cpu);
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 409e61210789..2962082787c0 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -663,10 +663,10 @@ void exit_lazy_flush_tlb(struct mm_struct *mm, bool always_flush)
 	if (current->active_mm == mm) {
 		WARN_ON_ONCE(current->mm != NULL);
 		/* Is a kernel thread and is using mm as the lazy tlb */
-		mmgrab(&init_mm);
+		mmgrab_lazy_tlb(&init_mm);
 		current->active_mm = &init_mm;
 		switch_mm_irqs_off(mm, &init_mm, current);
-		mmdrop(mm);
+		mmdrop_lazy_tlb(mm);
 	}
 
 	/*
diff --git a/fs/exec.c b/fs/exec.c
index 18594f11c31f..ca0f8b1af23a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1029,9 +1029,9 @@ static int exec_mmap(struct mm_struct *mm)
 		setmax_mm_hiwater_rss(&tsk->signal->maxrss, old_mm);
 		mm_update_next_owner(old_mm);
 		mmput(old_mm);
-		return 0;
+	} else {
+		mmdrop_lazy_tlb(active_mm);
 	}
-	mmdrop(active_mm);
 	return 0;
 }
 
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index e24b1fe348e3..bfd1baca5266 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -49,6 +49,17 @@ static inline void mmdrop(struct mm_struct *mm)
 		__mmdrop(mm);
 }
 
+/* Helpers for lazy TLB mm refcounting */
+static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
+{
+	mmgrab(mm);
+}
+
+static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
+{
+	mmdrop(mm);
+}
+
 /**
  * mmget() - Pin the address space associated with a &struct mm_struct.
  * @mm: The address space to pin.
diff --git a/kernel/cpu.c b/kernel/cpu.c
index e538518556f4..e87a89824e6c 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -602,7 +602,7 @@ static int finish_cpu(unsigned int cpu)
 	 */
 	if (mm != &init_mm)
 		idle->active_mm = &init_mm;
-	mmdrop(mm);
+	mmdrop_lazy_tlb(mm);
 	return 0;
 }
 
diff --git a/kernel/exit.c b/kernel/exit.c
index fd1c04193e18..8e87ec5f6be2 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -476,7 +476,7 @@ static void exit_mm(void)
 		__set_current_state(TASK_RUNNING);
 		mmap_read_lock(mm);
 	}
-	mmgrab(mm);
+	mmgrab_lazy_tlb(mm);
 	BUG_ON(mm != current->active_mm);
 	/* more a memory barrier than a real lock */
 	task_lock(current);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index fe3f2a40d61e..b70e28431a01 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1314,14 +1314,14 @@ void kthread_use_mm(struct mm_struct *mm)
 	WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
 	WARN_ON_ONCE(tsk->mm);
 
+	mmgrab(mm);
+
 	task_lock(tsk);
 	/* Hold off tlb flush IPIs while switching mm's */
 	local_irq_disable();
 	active_mm = tsk->active_mm;
-	if (active_mm != mm) {
-		mmgrab(mm);
+	if (active_mm != mm)
 		tsk->active_mm = mm;
-	}
 	tsk->mm = mm;
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
@@ -1341,7 +1341,7 @@ void kthread_use_mm(struct mm_struct *mm)
 	 * mmdrop(), or explicitly with smp_mb().
 	 */
 	if (active_mm != mm)
-		mmdrop(active_mm);
+		mmdrop_lazy_tlb(active_mm);
 	else
 		smp_mb();
 
@@ -1375,10 +1375,13 @@ void kthread_unuse_mm(struct mm_struct *mm)
 	local_irq_disable();
 	tsk->mm = NULL;
 	membarrier_update_current_mm(NULL);
+	mmgrab_lazy_tlb(mm);
 	/* active_mm is still 'mm' */
 	enter_lazy_tlb(mm, tsk);
 	local_irq_enable();
 	task_unlock(tsk);
+
+	mmdrop(mm);
 }
 EXPORT_SYMBOL_GPL(kthread_unuse_mm);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5226cc26a095..e359c76ea2e2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4229,13 +4229,14 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	 * rq->curr, before returning to userspace, so provide them here:
 	 *
 	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-	 *   provided by mmdrop(),
+	 *   provided by mmdrop_lazy_tlb(),
 	 * - a sync_core for SYNC_CORE.
 	 */
 	if (mm) {
 		membarrier_mm_sync_core_before_usermode(mm);
-		mmdrop(mm);
+		mmdrop_lazy_tlb(mm);
 	}
+
 	if (unlikely(prev_state == TASK_DEAD)) {
 		if (prev->sched_class->task_dead)
 			prev->sched_class->task_dead(prev);
@@ -4299,9 +4300,9 @@ context_switch(struct rq *rq, struct task_struct *prev,
 
 	/*
 	 * kernel -> kernel   lazy + transfer active
-	 *   user -> kernel   lazy + mmgrab() active
+	 *   user -> kernel   lazy + mmgrab_lazy_tlb() active
 	 *
-	 * kernel ->   user   switch + mmdrop() active
+	 * kernel ->   user   switch + mmdrop_lazy_tlb() active
 	 *   user ->   user   switch
 	 */
 	if (!next->mm) {                                // to kernel
@@ -4309,7 +4310,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 
 		next->active_mm = prev->active_mm;
 		if (prev->mm)                           // from user
-			mmgrab(prev->active_mm);
+			mmgrab_lazy_tlb(prev->active_mm);
 		else
 			prev->active_mm = NULL;
 	} else {                                        // to user
@@ -4325,7 +4326,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		switch_mm_irqs_off(prev->active_mm, next->mm, next);
 
 		if (!prev->mm) {                        // from kernel
-			/* will mmdrop() in finish_task_switch(). */
+			/* will mmdrop_lazy_tlb() in finish_task_switch(). */
 			rq->prev_mm = prev->active_mm;
 			prev->active_mm = NULL;
 		}
@@ -8239,7 +8240,7 @@ void __init sched_init(void)
 	/*
 	 * The boot idle thread does lazy MMU switching as well:
 	 */
-	mmgrab(&init_mm);
+	mmgrab_lazy_tlb(&init_mm);
 	enter_lazy_tlb(&init_mm, current);
 
 	/*
-- 
2.23.0