* [PATCH 1/4] x86/alternatives: free smp_alt_modules when enable smp
@ 2017-10-28 12:50 Zhou Chengming
From: Zhou Chengming @ 2017-10-28 12:50 UTC (permalink / raw)
To: mhiramat, bp, peterz, ananth, anil.s.keshavamurthy, davem, tglx,
jkosina, rostedt, mjurczyk
Cc: x86, linux-kernel, zhouchengming1
Currently we don't free the smp_alt_modules entries when SMP is
enabled, so we have to wait for module unload to call
alternatives_smp_module_del() to free each smp_alt_module. This
strategy has shortcomings.

First, some modules may never be unloaded after SMP is enabled, so
their smp_alt_modules entries are never freed even though they are
useless. Second, every module still has to call
alternatives_smp_module_del() on unload to check whether it is on the
list, even after SMP is enabled, which is completely useless work.

More importantly, alternatives_smp_module_del() no longer needs to
traverse the list after SMP is enabled, so we don't need a mutex to
protect the list; preempt_disable() is enough.

Since all smp_alt_modules entries become useless once SMP is enabled,
free them all at that point. alternatives_smp_module_del() can then
return immediately when !uniproc_patched, avoiding the list traversal.
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
arch/x86/kernel/alternative.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 3344d33..8549269 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -534,6 +534,9 @@ void __init_or_module alternatives_smp_module_del(struct module *mod)
struct smp_alt_module *item;
mutex_lock(&smp_alt);
+ if (!uniproc_patched)
+ goto unlock;
+
list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod)
continue;
@@ -541,12 +544,13 @@ void __init_or_module alternatives_smp_module_del(struct module *mod)
kfree(item);
break;
}
+unlock:
mutex_unlock(&smp_alt);
}
void alternatives_enable_smp(void)
{
- struct smp_alt_module *mod;
+ struct smp_alt_module *mod, *tmp;
/* Why bother if there are no other CPUs? */
BUG_ON(num_possible_cpus() == 1);
@@ -558,9 +562,12 @@ void alternatives_enable_smp(void)
BUG_ON(num_online_cpus() != 1);
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
clear_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
- list_for_each_entry(mod, &smp_alt_modules, next)
+ list_for_each_entry_safe(mod, tmp, &smp_alt_modules, next) {
alternatives_smp_lock(mod->locks, mod->locks_end,
mod->text, mod->text_end);
+ list_del(&mod->next);
+ kfree(mod);
+ }
uniproc_patched = false;
}
mutex_unlock(&smp_alt);
--
1.8.3.1
* [PATCH 2/4] x86/alternatives: Don't need text_mutex when text_poke() on UP
From: Zhou Chengming @ 2017-10-28 12:50 UTC (permalink / raw)
To: mhiramat, bp, peterz, ananth, anil.s.keshavamurthy, davem, tglx,
jkosina, rostedt, mjurczyk
Cc: x86, linux-kernel, zhouchengming1
alternatives_smp_lock()/unlock() are only used on UP, so we don't
need to hold text_mutex across the text_poke() calls. The next patch
can then remove the outer smp_alt mutex as well.
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
arch/x86/kernel/alternative.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 8549269..5c3f593 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -442,7 +442,6 @@ static void alternatives_smp_lock(const s32 *start, const s32 *end,
{
const s32 *poff;
- mutex_lock(&text_mutex);
for (poff = start; poff < end; poff++) {
u8 *ptr = (u8 *)poff + *poff;
@@ -452,7 +451,6 @@ static void alternatives_smp_lock(const s32 *start, const s32 *end,
if (*ptr == 0x3e)
text_poke(ptr, ((unsigned char []){0xf0}), 1);
}
- mutex_unlock(&text_mutex);
}
static void alternatives_smp_unlock(const s32 *start, const s32 *end,
@@ -460,7 +458,6 @@ static void alternatives_smp_unlock(const s32 *start, const s32 *end,
{
const s32 *poff;
- mutex_lock(&text_mutex);
for (poff = start; poff < end; poff++) {
u8 *ptr = (u8 *)poff + *poff;
@@ -470,7 +467,6 @@ static void alternatives_smp_unlock(const s32 *start, const s32 *end,
if (*ptr == 0xf0)
text_poke(ptr, ((unsigned char []){0x3E}), 1);
}
- mutex_unlock(&text_mutex);
}
struct smp_alt_module {
--
1.8.3.1
* [PATCH 3/4] x86/alternatives: get rid of the smp_alt mutex
From: Zhou Chengming @ 2017-10-28 12:50 UTC (permalink / raw)
To: mhiramat, bp, peterz, ananth, anil.s.keshavamurthy, davem, tglx,
jkosina, rostedt, mjurczyk
Cc: x86, linux-kernel, zhouchengming1
The previous two patches ensure that smp_alt_modules is only used
while on UP, so we don't need a mutex to protect the list;
preempt_disable() around the list traversals is enough.
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
arch/x86/kernel/alternative.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 5c3f593..7eab6f6 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -485,8 +485,7 @@ struct smp_alt_module {
struct list_head next;
};
static LIST_HEAD(smp_alt_modules);
-static DEFINE_MUTEX(smp_alt);
-static bool uniproc_patched = false; /* protected by smp_alt */
+static bool uniproc_patched = false;
void __init_or_module alternatives_smp_module_add(struct module *mod,
char *name,
@@ -495,18 +494,18 @@ void __init_or_module alternatives_smp_module_add(struct module *mod,
{
struct smp_alt_module *smp;
- mutex_lock(&smp_alt);
+ preempt_disable();
if (!uniproc_patched)
- goto unlock;
+ goto out;
if (num_possible_cpus() == 1)
/* Don't bother remembering, we'll never have to undo it. */
- goto smp_unlock;
+ goto smp_out;
- smp = kzalloc(sizeof(*smp), GFP_KERNEL);
+ smp = kzalloc(sizeof(*smp), GFP_ATOMIC);
if (NULL == smp)
/* we'll run the (safe but slow) SMP code then ... */
- goto unlock;
+ goto out;
smp->mod = mod;
smp->name = name;
@@ -519,19 +518,19 @@ void __init_or_module alternatives_smp_module_add(struct module *mod,
smp->text, smp->text_end, smp->name);
list_add_tail(&smp->next, &smp_alt_modules);
-smp_unlock:
+smp_out:
alternatives_smp_unlock(locks, locks_end, text, text_end);
-unlock:
- mutex_unlock(&smp_alt);
+out:
+ preempt_enable();
}
void __init_or_module alternatives_smp_module_del(struct module *mod)
{
struct smp_alt_module *item;
- mutex_lock(&smp_alt);
+ preempt_disable();
if (!uniproc_patched)
- goto unlock;
+ goto out;
list_for_each_entry(item, &smp_alt_modules, next) {
if (mod != item->mod)
@@ -540,8 +539,8 @@ void __init_or_module alternatives_smp_module_del(struct module *mod)
kfree(item);
break;
}
-unlock:
- mutex_unlock(&smp_alt);
+out:
+ preempt_enable();
}
void alternatives_enable_smp(void)
@@ -551,7 +550,7 @@ void alternatives_enable_smp(void)
/* Why bother if there are no other CPUs? */
BUG_ON(num_possible_cpus() == 1);
- mutex_lock(&smp_alt);
+ preempt_disable();
if (uniproc_patched) {
pr_info("switching to SMP code\n");
@@ -566,7 +565,7 @@ void alternatives_enable_smp(void)
}
uniproc_patched = false;
}
- mutex_unlock(&smp_alt);
+ preempt_enable();
}
/* Return 1 if the address range is reserved for smp-alternatives */
--
1.8.3.1
* [PATCH 4/4] kprobes, x86/alternatives: preempt_disable() when check smp_alt_modules
From: Zhou Chengming @ 2017-10-28 12:50 UTC (permalink / raw)
To: mhiramat, bp, peterz, ananth, anil.s.keshavamurthy, davem, tglx,
jkosina, rostedt, mjurczyk
Cc: x86, linux-kernel, zhouchengming1
Fixes: 2cfa197 ("ftrace/alternatives: Introducing *_text_reserved functions")
We use alternatives_text_reserved() to check whether an address lies
in one of the fixed pieces reserved by alternatives, but we don't
hold the smp_alt mutex when calling it. So the list traversal may
encounter a deleted list_head if another path is concurrently running
alternatives_smp_module_del().

One solution would be to take the smp_alt mutex before calling this
function, but the difficulty is that its callers,
arch_prepare_kprobe() and arch_prepare_optimized_kprobe(), are
invoked under text_mutex. So we would have to take the smp_alt mutex
before entering that arch-dependent code, which we can't do today:
the smp_alt mutex is itself arch-dependent, and only x86 has it.
Perhaps another arch-dependent callback could solve this.

Another, simpler solution is to reuse text_mutex to protect the
smp_alt_modules list; all the arch-dependent kprobes checks run under
text_mutex, so that would be safe. But then a single text_mutex would
protect two resources, which is not a good idea. The original patch
and related discussion are here:

https://patchwork.kernel.org/patch/10029465/

This patchset takes a different approach: remove the smp_alt mutex
and protect the smp_alt_modules list with preempt_disable() alone,
since the list is now only used on UP. alternatives_text_reserved()
then likewise only needs preempt_disable() around its check of the
smp_alt_modules list, which makes it safe.
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
arch/x86/kernel/alternative.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 7eab6f6..b278cad 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -575,6 +575,9 @@ int alternatives_text_reserved(void *start, void *end)
const s32 *poff;
u8 *text_start = start;
u8 *text_end = end;
+ int ret = 0;
+
+ preempt_disable();
list_for_each_entry(mod, &smp_alt_modules, next) {
if (mod->text > text_end || mod->text_end < text_start)
@@ -582,12 +585,16 @@ int alternatives_text_reserved(void *start, void *end)
for (poff = mod->locks; poff < mod->locks_end; poff++) {
const u8 *ptr = (const u8 *)poff + *poff;
- if (text_start <= ptr && text_end > ptr)
- return 1;
+ if (text_start <= ptr && text_end > ptr) {
+ ret = 1;
+ goto out;
+ }
}
}
- return 0;
+out:
+ preempt_enable();
+ return ret;
}
#endif /* CONFIG_SMP */
--
1.8.3.1
* Re: [PATCH 2/4] x86/alternatives: Don't need text_mutex when text_poke() on UP
From: zhouchengming @ 2017-10-29 1:20 UTC (permalink / raw)
To: Zhou Chengming
Cc: mhiramat, bp, peterz, ananth, anil.s.keshavamurthy, davem, tglx,
jkosina, rostedt, mjurczyk, x86, linux-kernel
Oops, this is very wrong. Please ignore this patchset. Sorry for the noise...
Thanks!