* [RFC PATCH] arch: patch_text: Fixup last cpu should be master
@ 2022-03-12 15:56 ` guoren
0 siblings, 0 replies; 15+ messages in thread
From: guoren @ 2022-03-12 15:56 UTC (permalink / raw)
To: guoren
Cc: linux-arm-kernel, linux-kernel, linux-csky, linux-riscv,
linux-xtensa, Guo Ren, Will Deacon, Catalin Marinas,
Palmer Dabbelt, Masami Hiramatsu, Chris Zankel, Max Filippov,
Arnd Bergmann
From: Guo Ren <guoren@linux.alibaba.com>
These patch_text implementations are using stop_machine_cpuslocked
infrastructure with atomic cpu_count. The origin idea is that when
the master CPU patch_text, others should wait for it. But current
implementation is using the first CPU as master, which couldn't
guarantee continue CPUs are waiting. This patch changes the last
CPU as the master to solve the potaintial risk.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
---
arch/arm64/kernel/patching.c | 4 ++--
arch/csky/kernel/probes/kprobes.c | 2 +-
arch/riscv/kernel/patch.c | 2 +-
arch/xtensa/kernel/jump_label.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..6cfea9650e65 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
- /* The first CPU becomes master */
- if (atomic_inc_return(&pp->cpu_count) == 1) {
+ /* The last CPU becomes master */
+ if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 42920f25e73c..19821a06a991 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
struct csky_insn_patch *param = priv;
unsigned int addr = (unsigned int)param->addr;
- if (atomic_inc_return(&param->cpu_count) == 1) {
+ if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
*(u16 *) addr = cpu_to_le16(param->opcode);
dcache_wb_range(addr, addr + 2);
atomic_inc(&param->cpu_count);
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..cca72a9388e3 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
struct patch_insn *patch = data;
int ret = 0;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
ret =
patch_text_nosync(patch->addr, &patch->insn,
GET_INSN_LENGTH(patch->insn));
diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
index 61cf6497a646..7e1d3f952eb3 100644
--- a/arch/xtensa/kernel/jump_label.c
+++ b/arch/xtensa/kernel/jump_label.c
@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
{
struct patch *patch = data;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
local_patch_text(patch->addr, patch->data, patch->sz);
atomic_inc(&patch->cpu_count);
} else {
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
2022-03-12 15:56 ` guoren
(?)
@ 2022-03-12 23:50 ` Max Filippov
-1 siblings, 0 replies; 15+ messages in thread
From: Max Filippov @ 2022-03-12 23:50 UTC (permalink / raw)
To: Guo Ren
Cc: linux-arm-kernel, LKML, linux-csky, linux-riscv,
open list:TENSILICA XTENSA PORT (xtensa),
Guo Ren, Will Deacon, Catalin Marinas, Palmer Dabbelt,
Masami Hiramatsu, Chris Zankel, Arnd Bergmann
Hi Guo Ren,
On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> These patch_text implementations are using stop_machine_cpuslocked
> infrastructure with atomic cpu_count. The origin idea is that when
The original
> the master CPU patch_text, others should wait for it. But current
> implementation is using the first CPU as master, which couldn't
> guarantee continue CPUs are waiting. This patch changes the last
guarantee that remaining CPUs are waiting.
> CPU as the master to solve the potaintial risk.
potential
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Peter Zijlstra <peterz@infradead.org
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Chris Zankel <chris@zankel.net>
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---
> arch/arm64/kernel/patching.c | 4 ++--
> arch/csky/kernel/probes/kprobes.c | 2 +-
> arch/riscv/kernel/patch.c | 2 +-
> arch/xtensa/kernel/jump_label.c | 2 +-
> 4 files changed, 5 insertions(+), 5 deletions(-)
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
I'm curious, is there a specific issue that prompted this patch?
--
Thanks.
-- Max
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
2022-03-12 15:56 ` guoren
(?)
@ 2022-03-12 23:56 ` Max Filippov
-1 siblings, 0 replies; 15+ messages in thread
From: Max Filippov @ 2022-03-12 23:56 UTC (permalink / raw)
To: Guo Ren
Cc: linux-arm-kernel, LKML, linux-csky, linux-riscv,
open list:TENSILICA XTENSA PORT (xtensa),
Guo Ren, Will Deacon, Catalin Marinas, Palmer Dabbelt,
Masami Hiramatsu, Chris Zankel, Arnd Bergmann
On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> These patch_text implementations are using stop_machine_cpuslocked
> infrastructure with atomic cpu_count. The origin idea is that when
> the master CPU patch_text, others should wait for it. But current
> implementation is using the first CPU as master, which couldn't
> guarantee continue CPUs are waiting. This patch changes the last
> CPU as the master to solve the potaintial risk.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Peter Zijlstra <peterz@infradead.org
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Chris Zankel <chris@zankel.net>
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---
> arch/arm64/kernel/patching.c | 4 ++--
> arch/csky/kernel/probes/kprobes.c | 2 +-
> arch/riscv/kernel/patch.c | 2 +-
> arch/xtensa/kernel/jump_label.c | 2 +-
> 4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> index 771f543464e0..6cfea9650e65 100644
> --- a/arch/arm64/kernel/patching.c
> +++ b/arch/arm64/kernel/patching.c
> @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> int i, ret = 0;
> struct aarch64_insn_patch *pp = arg;
>
> - /* The first CPU becomes master */
> - if (atomic_inc_return(&pp->cpu_count) == 1) {
> + /* The last CPU becomes master */
> + if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
atomic_inc_return returns the incremented value, so the last CPU gets
num_online_cpus(), not (num_online_cpus() - 1).
> for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> pp->new_insns[i]);
> diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> index 42920f25e73c..19821a06a991 100644
> --- a/arch/csky/kernel/probes/kprobes.c
> +++ b/arch/csky/kernel/probes/kprobes.c
> @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> struct csky_insn_patch *param = priv;
> unsigned int addr = (unsigned int)param->addr;
>
> - if (atomic_inc_return(&param->cpu_count) == 1) {
> + if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
Ditto.
> *(u16 *) addr = cpu_to_le16(param->opcode);
> dcache_wb_range(addr, addr + 2);
> atomic_inc(&param->cpu_count);
> diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> index 0b552873a577..cca72a9388e3 100644
> --- a/arch/riscv/kernel/patch.c
> +++ b/arch/riscv/kernel/patch.c
> @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> struct patch_insn *patch = data;
> int ret = 0;
>
> - if (atomic_inc_return(&patch->cpu_count) == 1) {
> + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
Ditto.
> ret =
> patch_text_nosync(patch->addr, &patch->insn,
> GET_INSN_LENGTH(patch->insn));
> diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> index 61cf6497a646..7e1d3f952eb3 100644
> --- a/arch/xtensa/kernel/jump_label.c
> +++ b/arch/xtensa/kernel/jump_label.c
> @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> {
> struct patch *patch = data;
>
> - if (atomic_inc_return(&patch->cpu_count) == 1) {
> + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
Ditto.
> local_patch_text(patch->addr, patch->data, patch->sz);
> atomic_inc(&patch->cpu_count);
> } else {
> --
> 2.25.1
>
--
Thanks.
-- Max
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
2022-03-12 23:56 ` Max Filippov
(?)
@ 2022-03-13 1:04 ` Guo Ren
-1 siblings, 0 replies; 15+ messages in thread
From: Guo Ren @ 2022-03-13 1:04 UTC (permalink / raw)
To: Max Filippov
Cc: Linux ARM, LKML, linux-csky, linux-riscv,
open list:TENSILICA XTENSA PORT (xtensa),
Guo Ren, Will Deacon, Catalin Marinas, Palmer Dabbelt,
Masami Hiramatsu, Chris Zankel, Arnd Bergmann
On Sun, Mar 13, 2022 at 7:57 AM Max Filippov <jcmvbkbc@gmail.com> wrote:
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > These patch_text implementations are using stop_machine_cpuslocked
> > infrastructure with atomic cpu_count. The origin idea is that when
> > the master CPU patch_text, others should wait for it. But current
> > implementation is using the first CPU as master, which couldn't
> > guarantee continue CPUs are waiting. This patch changes the last
> > CPU as the master to solve the potaintial risk.
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Peter Zijlstra <peterz@infradead.org
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Chris Zankel <chris@zankel.net>
> > Cc: Max Filippov <jcmvbkbc@gmail.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> > arch/arm64/kernel/patching.c | 4 ++--
> > arch/csky/kernel/probes/kprobes.c | 2 +-
> > arch/riscv/kernel/patch.c | 2 +-
> > arch/xtensa/kernel/jump_label.c | 2 +-
> > 4 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> > index 771f543464e0..6cfea9650e65 100644
> > --- a/arch/arm64/kernel/patching.c
> > +++ b/arch/arm64/kernel/patching.c
> > @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> > int i, ret = 0;
> > struct aarch64_insn_patch *pp = arg;
> >
> > - /* The first CPU becomes master */
> > - if (atomic_inc_return(&pp->cpu_count) == 1) {
> > + /* The last CPU becomes master */
> > + if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
>
> atomic_inc_return returns the incremented value, so the last CPU gets
> num_online_cpus(), not (num_online_cpus() - 1).
Oops! You are right, thx.
>
> > for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> > ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> > pp->new_insns[i]);
> > diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> > index 42920f25e73c..19821a06a991 100644
> > --- a/arch/csky/kernel/probes/kprobes.c
> > +++ b/arch/csky/kernel/probes/kprobes.c
> > @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> > struct csky_insn_patch *param = priv;
> > unsigned int addr = (unsigned int)param->addr;
> >
> > - if (atomic_inc_return(&param->cpu_count) == 1) {
> > + if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > *(u16 *) addr = cpu_to_le16(param->opcode);
> > dcache_wb_range(addr, addr + 2);
> > atomic_inc(&param->cpu_count);
> > diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> > index 0b552873a577..cca72a9388e3 100644
> > --- a/arch/riscv/kernel/patch.c
> > +++ b/arch/riscv/kernel/patch.c
> > @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> > struct patch_insn *patch = data;
> > int ret = 0;
> >
> > - if (atomic_inc_return(&patch->cpu_count) == 1) {
> > + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > ret =
> > patch_text_nosync(patch->addr, &patch->insn,
> > GET_INSN_LENGTH(patch->insn));
> > diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> > index 61cf6497a646..7e1d3f952eb3 100644
> > --- a/arch/xtensa/kernel/jump_label.c
> > +++ b/arch/xtensa/kernel/jump_label.c
> > @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> > {
> > struct patch *patch = data;
> >
> > - if (atomic_inc_return(&patch->cpu_count) == 1) {
> > + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > local_patch_text(patch->addr, patch->data, patch->sz);
> > atomic_inc(&patch->cpu_count);
> > } else {
> > --
> > 2.25.1
> >
>
>
> --
> Thanks.
> -- Max
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
2022-03-12 23:50 ` Max Filippov
@ 2022-03-13  1:10 ` Guo Ren
0 siblings, 0 replies; 15+ messages in thread
From: Guo Ren @ 2022-03-13 1:10 UTC (permalink / raw)
To: Max Filippov
Cc: Linux ARM, LKML, linux-csky, linux-riscv,
open list:TENSILICA XTENSA PORT (xtensa),
Guo Ren, Will Deacon, Catalin Marinas, Palmer Dabbelt,
Masami Hiramatsu, Chris Zankel, Arnd Bergmann
On Sun, Mar 13, 2022 at 7:50 AM Max Filippov <jcmvbkbc@gmail.com> wrote:
>
> Hi Guo Ren,
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > These patch_text implementations are using stop_machine_cpuslocked
> > infrastructure with atomic cpu_count. The origin idea is that when
>
> The original
>
> > the master CPU patch_text, others should wait for it. But current
> > implementation is using the first CPU as master, which couldn't
> > guarantee continue CPUs are waiting. This patch changes the last
>
> guarantee that remaining CPUs are waiting.
>
> > CPU as the master to solve the potaintial risk.
>
> potential
>
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Peter Zijlstra <peterz@infradead.org
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Chris Zankel <chris@zankel.net>
> > Cc: Max Filippov <jcmvbkbc@gmail.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> > arch/arm64/kernel/patching.c | 4 ++--
> > arch/csky/kernel/probes/kprobes.c | 2 +-
> > arch/riscv/kernel/patch.c | 2 +-
> > arch/xtensa/kernel/jump_label.c | 2 +-
> > 4 files changed, 5 insertions(+), 5 deletions(-)
>
> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
>
> I'm curious, is there a specific issue that prompted this patch?
No, theoretical risk.
>
> --
> Thanks.
> -- Max
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/
end of thread, other threads:[~2022-03-13  1:12 UTC | newest]
Thread overview: 15+ messages
2022-03-12 15:56 [RFC PATCH] arch: patch_text: Fixup last cpu should be master guoren
2022-03-12 23:50 ` Max Filippov
2022-03-13  1:10   ` Guo Ren
2022-03-12 23:56 ` Max Filippov
2022-03-13  1:04   ` Guo Ren