From: Anup Patel <anup@brainfault.org>
To: Gary Guo <gary@garyguo.net>
Cc: Anup Patel <Anup.Patel@wdc.com>,
Palmer Dabbelt <palmer@sifive.com>,
Christoph Hellwig <hch@infradead.org>,
Atish Patra <atish.patra@wdc.com>,
Albert Ou <aou@eecs.berkeley.edu>,
"linux-riscv@lists.infradead.org"
<linux-riscv@lists.infradead.org>
Subject: Re: [PATCH v4 1/5] riscv: move flush_icache_{all,mm} to cacheflush.c
Date: Thu, 28 Mar 2019 12:15:36 +0530 [thread overview]
Message-ID: <CAAhSdy1FQN2VDkZ65Rp+xnE16WqN6vYevOoJ3LDkdddj7MH1jw@mail.gmail.com> (raw)
In-Reply-To: <d141829022c075172b1410e67299fc29fa95c6cd.1553647082.git.gary@garyguo.net>

On Wed, Mar 27, 2019 at 6:11 AM Gary Guo <gary@garyguo.net> wrote:
>
> From: Gary Guo <gary@garyguo.net>
>
> Currently, flush_icache_all is macro-expanded into an SBI call, yet
> asm/sbi.h is not included in asm/cacheflush.h. It can be moved to
> mm/cacheflush.c instead (the SBI call dominates performance-wise, so
> there is no concern about losing the inlining).
>
> Currently, flush_icache_mm lives in kernel/smp.c, which looks like a
> hack to prevent it from being compiled when CONFIG_SMP=n. It should
> also be in mm/cacheflush.c.
>
> Signed-off-by: Gary Guo <gary@garyguo.net>
> ---
> arch/riscv/include/asm/cacheflush.h | 2 +-
> arch/riscv/kernel/smp.c | 49 -----------------------
> arch/riscv/mm/cacheflush.c | 61 +++++++++++++++++++++++++++++
> 3 files changed, 62 insertions(+), 50 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 8f13074413a7..1f4ba68ab9aa 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -47,7 +47,7 @@ static inline void flush_dcache_page(struct page *page)
>
> #else /* CONFIG_SMP */
>
> -#define flush_icache_all() sbi_remote_fence_i(NULL)
> +void flush_icache_all(void);
> void flush_icache_mm(struct mm_struct *mm, bool local);
>
> #endif /* CONFIG_SMP */
> diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
> index 0c41d07ec281..17f491e8ed0a 100644
> --- a/arch/riscv/kernel/smp.c
> +++ b/arch/riscv/kernel/smp.c
> @@ -199,52 +199,3 @@ void smp_send_reschedule(int cpu)
> send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
> }
>
> -/*
> - * Performs an icache flush for the given MM context. RISC-V has no direct
> - * mechanism for instruction cache shoot downs, so instead we send an IPI that
> - * informs the remote harts they need to flush their local instruction caches.
> - * To avoid pathologically slow behavior in a common case (a bunch of
> - * single-hart processes on a many-hart machine, ie 'make -j') we avoid the
> - * IPIs for harts that are not currently executing a MM context and instead
> - * schedule a deferred local instruction cache flush to be performed before
> - * execution resumes on each hart.
> - */
> -void flush_icache_mm(struct mm_struct *mm, bool local)
> -{
> - unsigned int cpu;
> - cpumask_t others, hmask, *mask;
> -
> - preempt_disable();
> -
> - /* Mark every hart's icache as needing a flush for this MM. */
> - mask = &mm->context.icache_stale_mask;
> - cpumask_setall(mask);
> - /* Flush this hart's I$ now, and mark it as flushed. */
> - cpu = smp_processor_id();
> - cpumask_clear_cpu(cpu, mask);
> - local_flush_icache_all();
> -
> - /*
> - * Flush the I$ of other harts concurrently executing, and mark them as
> - * flushed.
> - */
> - cpumask_andnot(&others, mm_cpumask(mm), cpumask_of(cpu));
> - local |= cpumask_empty(&others);
> - if (mm != current->active_mm || !local) {
> - cpumask_clear(&hmask);
> - riscv_cpuid_to_hartid_mask(&others, &hmask);
> - sbi_remote_fence_i(hmask.bits);
> - } else {
> - /*
> - * It's assumed that at least one strongly ordered operation is
> - * performed on this hart between setting a hart's cpumask bit
> - * and scheduling this MM context on that hart. Sending an SBI
> - * remote message will do this, but in the case where no
> - * messages are sent we still need to order this hart's writes
> - * with flush_icache_deferred().
> - */
> - smp_mb();
> - }
> -
> - preempt_enable();
> -}
> diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
> index 498c0a0814fe..497b7d07af0c 100644
> --- a/arch/riscv/mm/cacheflush.c
> +++ b/arch/riscv/mm/cacheflush.c
> @@ -14,6 +14,67 @@
> #include <asm/pgtable.h>
> #include <asm/cacheflush.h>
>
> +#ifdef CONFIG_SMP
> +
> +#include <asm/sbi.h>
> +
> +void flush_icache_all(void)
> +{
> + sbi_remote_fence_i(NULL);
> +}
> +
> +/*
> + * Performs an icache flush for the given MM context. RISC-V has no direct
> + * mechanism for instruction cache shoot downs, so instead we send an IPI that
> + * informs the remote harts they need to flush their local instruction caches.
> + * To avoid pathologically slow behavior in a common case (a bunch of
> + * single-hart processes on a many-hart machine, ie 'make -j') we avoid the
> + * IPIs for harts that are not currently executing a MM context and instead
> + * schedule a deferred local instruction cache flush to be performed before
> + * execution resumes on each hart.
> + */
> +void flush_icache_mm(struct mm_struct *mm, bool local)
> +{
> + unsigned int cpu;
> + cpumask_t others, hmask, *mask;
> +
> + preempt_disable();
> +
> + /* Mark every hart's icache as needing a flush for this MM. */
> + mask = &mm->context.icache_stale_mask;
> + cpumask_setall(mask);
> + /* Flush this hart's I$ now, and mark it as flushed. */
> + cpu = smp_processor_id();
> + cpumask_clear_cpu(cpu, mask);
> + local_flush_icache_all();
> +
> + /*
> + * Flush the I$ of other harts concurrently executing, and mark them as
> + * flushed.
> + */
> + cpumask_andnot(&others, mm_cpumask(mm), cpumask_of(cpu));
> + local |= cpumask_empty(&others);
> + if (mm != current->active_mm || !local) {
> + cpumask_clear(&hmask);
> + riscv_cpuid_to_hartid_mask(&others, &hmask);
> + sbi_remote_fence_i(hmask.bits);
> + } else {
> + /*
> + * It's assumed that at least one strongly ordered operation is
> + * performed on this hart between setting a hart's cpumask bit
> + * and scheduling this MM context on that hart. Sending an SBI
> + * remote message will do this, but in the case where no
> + * messages are sent we still need to order this hart's writes
> + * with flush_icache_deferred().
> + */
> + smp_mb();
> + }
> +
> + preempt_enable();
> +}
> +
> +#endif /* CONFIG_SMP */
> +
> void flush_icache_pte(pte_t pte)
> {
> struct page *page = pte_page(pte);
> --
> 2.17.1
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
LGTM.

Reviewed-by: Anup Patel <anup@brainfault.org>

Regards,
Anup