linux-riscv.lists.infradead.org archive mirror
* [RFC] riscv: restore the irq save/restore logic for nosync code patching
@ 2022-06-11  2:31 Wu Zhangjin
  2022-06-11  4:56 ` Guo Ren
  2022-06-11  5:14 ` Guo Ren
  0 siblings, 2 replies; 5+ messages in thread
From: Wu Zhangjin @ 2022-06-11  2:31 UTC (permalink / raw)
  To: linux-riscv, Paul Walmsley, Palmer Dabbelt, Peter Zijlstra
  Cc: Zong Li, Steven Rostedt, Guo Ren, Andy Chiu, Wu Zhangjin

Commit 0ff7c3b33127 ("riscv: Use text_mutex instead of patch_lock")
removed the old patch_lock together with the irq save/restore logic.

However, the patching interface patch_text_nosync() has gained new
users such as jump labels, which do not run under stop_machine().
Restore the irq save/restore logic for such users, just as the other
architectures do.
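
For reference, the jump label path patches with only text_mutex held,
roughly like this (simplified from arch/riscv/kernel/jump_label.c,
with the jal encoding elided):

        void arch_jump_label_transform(struct jump_entry *entry,
                                       enum jump_label_type type)
        {
                void *addr = (void *)jump_entry_code(entry);
                u32 insn = RISCV_INSN_NOP;      /* or an encoded jal */

                mutex_lock(&text_mutex);
                patch_text_nosync(addr, &insn, sizeof(insn));
                mutex_unlock(&text_mutex);
        }

There is no stop_machine() and no irq disabling anywhere in that path.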

Also move the lockdep assert to the path that requires it: the current
stop_machine()-based ftrace implementation should use the newly added
patch_text_noirq(), which omits the assert, since that path is
serialized by its own mutex, just not one named text_mutex.

As the latest mailing list discussion shows, a new ftrace
implementation without stop_machine() is under review. It will need
either the nosync version with this irq save/restore logic, or the
newly added noirq version (which also ends up calling
patch_insn_write()) with explicit text_mutex locking and irq
save/restore in the caller, as sketched below.
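
For illustration, such a stop_machine()-free caller of the noirq
version could look like this (a hypothetical sketch, not the pending
ftrace patch itself; the helper name is made up):

        int hypothetical_text_update(void *addr, u32 insn)
        {
                unsigned long flags;
                int ret;

                mutex_lock(&text_mutex);
                local_irq_save(flags);
                ret = patch_text_noirq(addr, &insn, GET_INSN_LENGTH(insn));
                local_irq_restore(flags);
                mutex_unlock(&text_mutex);

                return ret;
        }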

Signed-off-by: Wu Zhangjin <falcon@tinylab.org>
---
 arch/riscv/kernel/patch.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 765004b60513..03f03832e077 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -55,13 +55,6 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
 	int ret;
 
-	/*
-	 * Before reaching here, it was expected to lock the text_mutex
-	 * already, so we don't need to give another lock here and could
-	 * ensure that it was safe between each cores.
-	 */
-	lockdep_assert_held(&text_mutex);
-
 	if (across_pages)
 		patch_map(addr + len, FIX_TEXT_POKE1);
 
@@ -85,7 +78,8 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 NOKPROBE_SYMBOL(patch_insn_write);
 #endif /* CONFIG_MMU */
 
-int patch_text_nosync(void *addr, const void *insns, size_t len)
+/* For stop_machine() or other callers that already run with irqs disabled and are serialized across cores */
+int patch_text_noirq(void *addr, const void *insns, size_t len)
 {
 	u32 *tp = addr;
 	int ret;
@@ -97,6 +91,26 @@ int patch_text_nosync(void *addr, const void *insns, size_t len)
 
 	return ret;
 }
+NOKPROBE_SYMBOL(patch_text_noirq);
+
+int patch_text_nosync(void *addr, const void *insns, size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	/*
+	 * Before reaching here, it was expected to lock the text_mutex
+	 * already, so we don't need to give another lock here and could
+	 * ensure that it was safe between each cores.
+	 */
+	lockdep_assert_held(&text_mutex);
+
+	local_irq_save(flags);
+	ret = patch_text_noirq(addr, insns, len);
+	local_irq_restore(flags);
+
+	return ret;
+}
 NOKPROBE_SYMBOL(patch_text_nosync);
 
 static int patch_text_cb(void *data)
@@ -106,7 +120,7 @@ static int patch_text_cb(void *data)
 
 	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
 		ret =
-		    patch_text_nosync(patch->addr, &patch->insn,
+		    patch_text_noirq(patch->addr, &patch->insn,
 					    GET_INSN_LENGTH(patch->insn));
 		atomic_inc(&patch->cpu_count);
 	} else {
-- 
2.35.1


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [RFC] riscv: restore the irq save/restore logic for nosync code patching
  2022-06-11  2:31 [RFC] riscv: restore the irq save/restore logic for nosync code patching Wu Zhangjin
@ 2022-06-11  4:56 ` Guo Ren
  2022-06-11  5:14 ` Guo Ren
  1 sibling, 0 replies; 5+ messages in thread
From: Guo Ren @ 2022-06-11  4:56 UTC (permalink / raw)
  To: Wu Zhangjin
  Cc: linux-riscv, Paul Walmsley, Palmer Dabbelt, Peter Zijlstra,
	Zong Li, Steven Rostedt, Andy Chiu

On Sat, Jun 11, 2022 at 10:31 AM Wu Zhangjin <falcon@tinylab.org> wrote:
>
> [...]
>
> @@ -106,7 +120,7 @@ static int patch_text_cb(void *data)
>
>         if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
>                 ret =
> -                   patch_text_nosync(patch->addr, &patch->insn,
> +                   patch_text_noirq(patch->addr, &patch->insn,
>                                             GET_INSN_LENGTH(patch->insn));
patch_text_cb is only called from stop_machine_cpuslocked(), so there
is no need to disable irqs in its callback.
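
For context, the caller in the same file already runs it under
stop_machine, roughly:

        int patch_text(void *addr, u32 insn)
        {
                struct patch_insn patch = {
                        .addr = addr,
                        .insn = insn,
                        .cpu_count = ATOMIC_INIT(0),
                };

                return stop_machine_cpuslocked(patch_text_cb, &patch,
                                               cpu_online_mask);
        }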

>                 atomic_inc(&patch->cpu_count);
>         } else {
> --
> 2.35.1
>


-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [RFC] riscv: restore the irq save/restore logic for nosync code patching
  2022-06-11  2:31 [RFC] riscv: restore the irq save/restore logic for nosync code patching Wu Zhangjin
  2022-06-11  4:56 ` Guo Ren
@ 2022-06-11  5:14 ` Guo Ren
  2022-06-11  9:26   ` Wu Zhangjin
  1 sibling, 1 reply; 5+ messages in thread
From: Guo Ren @ 2022-06-11  5:14 UTC (permalink / raw)
  To: Wu Zhangjin
  Cc: linux-riscv, Paul Walmsley, Palmer Dabbelt, Peter Zijlstra,
	Zong Li, Steven Rostedt, Andy Chiu

Sorry, I misunderstood your patch; here is my updated review feedback.

On Sat, Jun 11, 2022 at 10:31 AM Wu Zhangjin <falcon@tinylab.org> wrote:
>
> Commit 0ff7c3b33127 ("riscv: Use text_mutex instead of patch_lock")
> removed the old patch_lock together with the irq save/restore logic.
>
> However, the patching interface patch_text_nosync() has gained new
> users such as jump labels, which do not run under stop_machine().
> Restore the irq save/restore logic for such users, just as the other
> architectures do.
Let's keep the protection outside of patch_text_nosync():

        mutex_lock(&text_mutex);
        patch_text_nosync(addr, &insn, sizeof(insn));
        mutex_unlock(&text_mutex);

Why isn't the mutex enough for patch_text?

>
> Also move the lockdep assert to the path that requires it: the current
> stop_machine()-based ftrace implementation should use the newly added
> patch_text_noirq(), which omits the assert, since that path is
> serialized by its own mutex, just not one named text_mutex.
Do you mean we should use irq save/restore instead of text_mutex?




-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Re: [RFC] riscv: restore the irq save/restore logic for nosync code patching
  2022-06-11  5:14 ` Guo Ren
@ 2022-06-11  9:26   ` Wu Zhangjin
  2022-06-11 12:46     ` Guo Ren
  0 siblings, 1 reply; 5+ messages in thread
From: Wu Zhangjin @ 2022-06-11  9:26 UTC (permalink / raw)
  To: Guo Ren
  Cc: linux-riscv, Paul Walmsley, Palmer Dabbelt, Zong Li,
	Steven Rostedt, Peter Zijlstra, Andy Chiu, Wu Zhangjin

Hi, Ren

Thanks very much for your review.

>
>
> Sorry, I misunderstood your patch; here is my updated review feedback.
>
> On Sat, Jun 11, 2022 at 10:31 AM Wu Zhangjin <falcon@tinylab.org> wrote:
> >
> > Commit 0ff7c3b33127 ("riscv: Use text_mutex instead of patch_lock")
> > removed the old patch_lock together with the irq save/restore logic.
> >
> > However, the patching interface patch_text_nosync() has gained new
> > users such as jump labels, which do not run under stop_machine().
> > Restore the irq save/restore logic for such users, just as the other
> > architectures do.
> Let's keep the protection outside of patch_text_nosync():
>
>         mutex_lock(&text_mutex);
>         patch_text_nosync(addr, &insn, sizeof(insn));
>         mutex_unlock(&text_mutex);
>
> Why isn't the mutex enough for patch_text?

An interrupt may arrive during the code modification itself, or between
the code modification and the icache flush.

I just rechecked the jump label code in arch/riscv/kernel/jump_label.c:
both the nop and jal instructions are 4 bytes, so an interrupt cannot
land in the middle of the write itself, and a partially written
instruction can never be executed. It may, however, arrive after the
code modification and before the icache flush. Is that a risk? If not,
this irq save/restore part can simply be dropped.
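
To make that window concrete, here is an annotated sketch of what
patch_text_nosync() does without the irq save/restore (the helper name
is hypothetical; the callees are the real ones in patch.c):

        static int patch_one_insn_unsafe(void *addr, u32 insn)
        {
                int ret;

                /* caller (e.g. jump label) already holds text_mutex */
                ret = patch_insn_write(addr, &insn, sizeof(insn));
                /*
                 * <-- an irq taken here runs after the memory write but
                 * before the flush below, so the handler may still fetch
                 * the stale icache copy of addr
                 */
                flush_icache_range((uintptr_t)addr,
                                   (uintptr_t)addr + sizeof(insn));
                return ret;
        }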

>
> > Also move the lockdep assert to the path that requires it: the current
> > stop_machine()-based ftrace implementation should use the newly added
> > patch_text_noirq(), which omits the assert, since that path is
> > serialized by its own mutex, just not one named text_mutex.
> Do you mean we should use irq save/restore instead of text_mutex?
>

Sorry (and apologies to Zong Li too): 'grep text_mutex
kernel/trace/ftrace.c' returns nothing, and my kernel image built with
lockdep enabled failed to boot on qemu, so I missed that the
arch-specific functions ftrace_arch_code_modify_prepare() and
ftrace_arch_code_modify_post_process() already take text_mutex,
overriding the weak ones defined in kernel/trace/ftrace.c. So this code
update path (even with the new implementation without stop_machine())
is already protected by text_mutex; the lockdep assert is therefore
perfectly fine and we cannot move it away.

BR,
Falcon



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Re: [RFC] riscv: restore the irq save/restore logic for nosync code patching
  2022-06-11  9:26   ` Wu Zhangjin
@ 2022-06-11 12:46     ` Guo Ren
  0 siblings, 0 replies; 5+ messages in thread
From: Guo Ren @ 2022-06-11 12:46 UTC (permalink / raw)
  To: Wu Zhangjin
  Cc: linux-riscv, Paul Walmsley, Palmer Dabbelt, Zong Li,
	Steven Rostedt, Peter Zijlstra, Andy Chiu

On Sat, Jun 11, 2022 at 5:27 PM Wu Zhangjin <falcon@tinylab.org> wrote:
>
> Hi, Ren
>
> Thanks very much for your review.
>
> >
> >
> > Sorry, I misunderstood your patch; here is my updated review feedback.
> >
> > On Sat, Jun 11, 2022 at 10:31 AM Wu Zhangjin <falcon@tinylab.org> wrote:
> > >
> > > Commit 0ff7c3b33127 ("riscv: Use text_mutex instead of patch_lock")
> > > removed the old patch_lock together with the irq save/restore logic.
> > >
> > > However, the patching interface patch_text_nosync() has gained new
> > > users such as jump labels, which do not run under stop_machine().
> > > Restore the irq save/restore logic for such users, just as the other
> > > architectures do.
> > Let's keep the protection outside of patch_text_nosync():
> >
> >         mutex_lock(&text_mutex);
> >         patch_text_nosync(addr, &insn, sizeof(insn));
> >         mutex_unlock(&text_mutex);
> >
> > Why isn't the mutex enough for patch_text?
>
> An interrupt may arrive during the code modification itself, or between
> the code modification and the icache flush.
>
> I just rechecked the jump label code in arch/riscv/kernel/jump_label.c:
> both the nop and jal instructions are 4 bytes, so an interrupt cannot
> land in the middle of the write itself, and a partially written
> instruction can never be executed. It may, however, arrive after the
> code modification and before the icache flush. Is that a risk? If not,
> this irq save/restore part can simply be dropped.
Irq save/restore can't affect other harts; software should use
text_mutex to prevent races between them.

>
> >
> > > Also move the lockdep assert to the path that requires it: the current
> > > stop_machine()-based ftrace implementation should use the newly added
> > > patch_text_noirq(), which omits the assert, since that path is
> > > serialized by its own mutex, just not one named text_mutex.
> > Do you mean we should use irq save/restore instead of text_mutex?
> >
>
> Sorry (and apologies to Zong Li too): 'grep text_mutex
> kernel/trace/ftrace.c' returns nothing, and my kernel image built with
> lockdep enabled failed to boot on qemu, so I missed that the
> arch-specific functions ftrace_arch_code_modify_prepare() and
> ftrace_arch_code_modify_post_process() already take text_mutex,
> overriding the weak ones defined in kernel/trace/ftrace.c. So this code
> update path (even with the new
#ifdef CONFIG_DYNAMIC_FTRACE
void ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
{
        mutex_lock(&text_mutex);
}

void ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
{
        mutex_unlock(&text_mutex);
}
arch/riscv/kernel/ftrace.c: * text_mutex, which triggers a lockdep failure.  SMP isn't running so we could

When DYNAMIC_FTRACE=n, do you see the warning during boot?

Actually, it seems nobody has tested DYNAMIC_FTRACE=n for a long time, right?

> implementation without stop_machine()) is already protected by
> text_mutex; the lockdep assert is therefore perfectly fine and we
> cannot move it away.
>
> BR,
> Falcon
>



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2022-06-11 12:46 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-11  2:31 [RFC] riscv: restore the irq save/restore logic for nosync code patching Wu Zhangjin
2022-06-11  4:56 ` Guo Ren
2022-06-11  5:14 ` Guo Ren
2022-06-11  9:26   ` Wu Zhangjin
2022-06-11 12:46     ` Guo Ren
