* [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
@ 2024-03-01 22:52 Tetsuo Handa
From: Tetsuo Handa @ 2024-03-01 22:52 UTC (permalink / raw)
  To: LKML, the arch/x86 maintainers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin

dump_emit_page() triggered a false-positive KMSAN warning, because
memcpy_from_iter_mc() is called via iterate_bvec() when "struct
iov_iter"->copy_mc is set to true.

Fall back to memcpy()/copy_user_generic() when KMSAN is enabled.

Reported-by: syzbot <syzbot+d7521c1e3841ed075a42@syzkaller.appspotmail.com>
Closes: https://syzkaller.appspot.com/bug?extid=d7521c1e3841ed075a42
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Tested-by: syzbot <syzbot+d7521c1e3841ed075a42@syzkaller.appspotmail.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
---
Changes in v2:
  Update description.

 arch/x86/lib/copy_mc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
index 6e8b7e600def..c6a0b8dbf58d 100644
--- a/arch/x86/lib/copy_mc.c
+++ b/arch/x86/lib/copy_mc.c
@@ -61,9 +61,9 @@ unsigned long copy_mc_enhanced_fast_string(void *dst, const void *src, unsigned
  */
 unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
 {
-	if (copy_mc_fragile_enabled)
+	if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled)
 		return copy_mc_fragile(dst, src, len);
-	if (static_cpu_has(X86_FEATURE_ERMS))
+	if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS))
 		return copy_mc_enhanced_fast_string(dst, src, len);
 	memcpy(dst, src, len);
 	return 0;
@@ -74,14 +74,14 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
 {
 	unsigned long ret;
 
-	if (copy_mc_fragile_enabled) {
+	if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled) {
 		__uaccess_begin();
 		ret = copy_mc_fragile((__force void *)dst, src, len);
 		__uaccess_end();
 		return ret;
 	}
 
-	if (static_cpu_has(X86_FEATURE_ERMS)) {
+	if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS)) {
 		__uaccess_begin();
 		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
 		__uaccess_end();
-- 
2.34.1


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Tetsuo Handa @ 2024-03-05 11:31 UTC (permalink / raw)
  To: LKML, the arch/x86 maintainers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin, Linus Torvalds

Ping?

This is the current top crasher.
I hope this patch can be applied before the merge window opens.

On 2024/03/02 7:52, Tetsuo Handa wrote:
> dump_emit_page() triggered a false-positive KMSAN warning, because
> memcpy_from_iter_mc() is called via iterate_bvec() when "struct
> iov_iter"->copy_mc is set to true.
> 
> Fall back to memcpy()/copy_user_generic() when KMSAN is enabled.
> 
> Reported-by: syzbot <syzbot+d7521c1e3841ed075a42@syzkaller.appspotmail.com>
> Closes: https://syzkaller.appspot.com/bug?extid=d7521c1e3841ed075a42
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Tested-by: syzbot <syzbot+d7521c1e3841ed075a42@syzkaller.appspotmail.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>
> ---
> Changes in v2:
>   Update description.
> 
>  arch/x86/lib/copy_mc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)



* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Dave Hansen @ 2024-03-05 15:21 UTC (permalink / raw)
  To: Tetsuo Handa, LKML, the arch/x86 maintainers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin

On 3/1/24 14:52, Tetsuo Handa wrote:
>  unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
>  {
> -	if (copy_mc_fragile_enabled)
> +	if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled)
>  		return copy_mc_fragile(dst, src, len);
> -	if (static_cpu_has(X86_FEATURE_ERMS))
> +	if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS))
>  		return copy_mc_enhanced_fast_string(dst, src, len);
>  	memcpy(dst, src, len);
>  	return 0;
> @@ -74,14 +74,14 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
>  {
>  	unsigned long ret;
>  
> -	if (copy_mc_fragile_enabled) {
> +	if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled) {
>  		__uaccess_begin();
>  		ret = copy_mc_fragile((__force void *)dst, src, len);
>  		__uaccess_end();
>  		return ret;
>  	}
>  
> -	if (static_cpu_has(X86_FEATURE_ERMS)) {
> +	if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS)) {
>  		__uaccess_begin();
>  		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
>  		__uaccess_end();

Where does the false positive _come_ from?  Can we fix copy_mc_fragile()
and copy_mc_enhanced_fast_string() instead of just not using them?

The three enable_copy_mc_fragile() callers are presumably doing so for a
reason.  What is this patch's impact on _those_?

Third, instead of sprinkling IS_ENABLED(CONFIG_KMSAN) checks in random
spots, can we do this in a central spot?


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Thomas Gleixner @ 2024-03-05 16:22 UTC (permalink / raw)
  To: Tetsuo Handa, LKML, the arch/x86 maintainers
  Cc: Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Linus Torvalds

On Tue, Mar 05 2024 at 20:31, Tetsuo Handa wrote:
> This is the current top crasher.
> I hope this patch can be applied before the merge window opens.

Maintainers have weekends too. The world is not going to end if this is
not applied within minutes.

Thanks,

        tglx


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Thomas Gleixner @ 2024-03-05 16:50 UTC (permalink / raw)
  To: Dave Hansen, Tetsuo Handa, LKML, the arch/x86 maintainers
  Cc: Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin

On Tue, Mar 05 2024 at 07:21, Dave Hansen wrote:
> On 3/1/24 14:52, Tetsuo Handa wrote:
>> -	if (static_cpu_has(X86_FEATURE_ERMS)) {
>> +	if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS)) {
>>  		__uaccess_begin();
>>  		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
>>  		__uaccess_end();
>
> Where does the false positive _come_ from?  Can we fix copy_mc_fragile()
> and copy_mc_enhanced_fast_string() instead of just not using them?

All it takes is a variant of __msan_memcpy() which uses a variant of
copy_mc_to_kernel() instead of __memcpy(). It's not rocket science.
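
A minimal sketch of that shape, borrowing the kmsan_* runtime helpers from
mm/kmsan/ (the wrapper name is hypothetical and this is untested):

        unsigned long __msan_copy_mc_to_kernel(void *dst, const void *src,
                                               unsigned len)
        {
                /* Uninstrumented, machine-check aware copy. */
                unsigned long ret = copy_mc_to_kernel(dst, src, len);

                /* Copy shadow/origin only for the bytes actually copied. */
                if (kmsan_enabled && !kmsan_in_runtime()) {
                        kmsan_enter_runtime();
                        kmsan_internal_memmove_metadata(dst, (void *)src,
                                                        len - ret);
                        kmsan_leave_runtime();
                }
                return ret;
        }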

Aside of that, this:

@@ -74,14 +74,14 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
 {
	unsigned long ret;

-	if (copy_mc_fragile_enabled) {
+	if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled) {
		__uaccess_begin();

is completely bogus. copy_user_generic() is not at all covered by
KMSAN. So why fiddle with it in the first place? Just because it has
the same pattern as copy_mc_to_kernel()?

> The three enable_copy_mc_fragile() callers are presumably doing so for a
> reason.

Very much so. It's for MCE recovery purposes.

And yes, the changelog and the non-existent comments should explain why
this is "correct" when KMSAN is enabled. Hint: It is NOT.

Thanks,

        tglx


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Linus Torvalds @ 2024-03-05 17:57 UTC (permalink / raw)
  To: Tetsuo Handa, Alexander Potapenko, Marco Elver, Dmitry Vyukov
  Cc: LKML, the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin

[ For the KMSAN people I brought in: this is the patch I'm NAK'ing:

    https://lore.kernel.org/all/3b7dbd88-0861-4638-b2d2-911c97a4cadf@I-love.SAKURA.ne.jp/

  and it looks like you were already cc'd on earlier versions (which
were even more broken) ]

On Tue, 5 Mar 2024 at 03:31, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
>
> Ping?

Please don't add new people and 'ping' without context. Very annoying.

That said, after having to search for it, that whole patch is
disgusting. Why make duplicated complex conditionals when you could
have just had the tests inside one #ifndef?

Also, that patch means that a KMSAN kernel potentially simply no
longer works on admittedly crappy hardware that almost doesn't exist.

So now a debug feature changes actual semantics in a big way. Not ok.

So I think this patch is ugly but also doubly incorrect.

I think the KMSAN people need to tell us how to tell kmsan that it's a
memcpy (and about the "I'm going to touch this part of memory", needed
for the "copy_mv_to_user" side).

So somebody needs to abstract out that

        depot_stack_handle_t origin;

        if (!kmsan_enabled || kmsan_in_runtime())
                return;

        kmsan_enter_runtime();
        /* Using memmove instead of memcpy doesn't affect correctness. */
        kmsan_internal_memmove_metadata(dst, (void *)src, n);
        kmsan_leave_runtime();

        set_retval_metadata(shadow, origin);

kind of thing, and expose it as a helper function for "I did something
that looks like a memory copy", the same way that we currently have
kmsan_copy_page_meta()

Because NO, IT IS NEVER CORRECT TO USE __msan_memcpy FOR THE MC COPIES.

So no. NAK on that patch. It's completely and utterly wrong.

The onus is firmly on the KMSAN people to give kernel people a way to
tell KMSAN to shut the f&%^ up about that.

End result: don't bother the x86 people until KMSAN has the required support.

               Linus


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Ingo Molnar @ 2024-03-06  9:16 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: LKML, the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Linus Torvalds


* Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> wrote:

> Ping?
> 
> This is the current top crasher.
> I hope this patch can be applied before the merge window opens.

1) A false positive is not a 'crasher', it's a bug in instrumentation.

2) A false positive in intrusive instrumentation that top distributions do 
   not enable in their kernels has no immediate relevance to the timing of 
   the merge window.

Thanks,

	Ingo


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Tetsuo Handa @ 2024-03-06 10:12 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, Linus Torvalds

On 2024/03/06 18:16, Ingo Molnar wrote:
> 
> * Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> wrote:
> 
>> Ping?
>>
>> This is the current top crasher.
>> I hope this patch can be applied before the merge window opens.

I posted this patch on Sun, 25 Feb 2024 13:40:59 +0900 but did not get a
response. Therefore, I reposted it on Sat, 2 Mar 2024 07:52:23 +0900, and
Linus responded that this patch is wrong.

> 
> 1) A false positive is not a 'crasher', it's a bug in instrumentation.
> 
> 2) A false positive in intrusive instrumentation that top distributions do 
>    not enable in their kernels has no immediate relevance to the timing of 
>    the merge window.

Not fixing a bug prevents us from finding and fixing other bugs. A refcount bug at
an unknown location has been preventing linux-next.git from finding other bugs for
20 days ( https://syzkaller.appspot.com/bug?id=8e4e66dfe299a2a00204ad220c641daaf1486a00 ).
Failing to fix as many bugs as possible in linux-next results in more bug reports
when bugs that were not found in linux-next.git arrive at linux.git. I want to make
it possible to bisect linux.git cleanly before that refcount bug reaches linux.git,
by fixing the bugs which we can fix now.



* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Tetsuo Handa @ 2024-03-06 22:08 UTC (permalink / raw)
  To: Linus Torvalds, Alexander Potapenko, Marco Elver, Dmitry Vyukov,
	kasan-dev
  Cc: LKML, the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin

Thank you for the explanation.

On 2024/03/06 2:57, Linus Torvalds wrote:
> I think the KMSAN people need to tell us how to tell kmsan that it's a
> memcpy (and about the "I'm going to touch this part of memory", needed
> for the "copy_mv_to_user" side).
> 
> So somebody needs to abstract out that
> 
>         depot_stack_handle_t origin;
> 
>         if (!kmsan_enabled || kmsan_in_runtime())
>                 return;
> 
>         kmsan_enter_runtime();
>         /* Using memmove instead of memcpy doesn't affect correctness. */
>         kmsan_internal_memmove_metadata(dst, (void *)src, n);
>         kmsan_leave_runtime();
> 
>         set_retval_metadata(shadow, origin);
> 
> kind of thing, and expose it as a helper function for "I did something
> that looks like a memory copy", the same way that we currently have
> kmsan_copy_page_meta()

Something like the one below? Can we assume that 0 <= ret <= len is always true?

diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
index 6e8b7e600def..6858f80fc9a2 100644
--- a/arch/x86/lib/copy_mc.c
+++ b/arch/x86/lib/copy_mc.c
@@ -61,12 +61,18 @@ unsigned long copy_mc_enhanced_fast_string(void *dst, const void *src, unsigned
  */
 unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
 {
-	if (copy_mc_fragile_enabled)
-		return copy_mc_fragile(dst, src, len);
-	if (static_cpu_has(X86_FEATURE_ERMS))
-		return copy_mc_enhanced_fast_string(dst, src, len);
-	memcpy(dst, src, len);
-	return 0;
+	unsigned long ret;
+
+	if (copy_mc_fragile_enabled) {
+		ret = copy_mc_fragile(dst, src, len);
+	} else if (static_cpu_has(X86_FEATURE_ERMS)) {
+		ret = copy_mc_enhanced_fast_string(dst, src, len);
+	} else {
+		memcpy(dst, src, len);
+		ret = 0;
+	}
+	kmsan_memmove(dst, src, len - ret);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(copy_mc_to_kernel);
 
@@ -78,15 +84,13 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
 		__uaccess_begin();
 		ret = copy_mc_fragile((__force void *)dst, src, len);
 		__uaccess_end();
-		return ret;
-	}
-
-	if (static_cpu_has(X86_FEATURE_ERMS)) {
+	} else if (static_cpu_has(X86_FEATURE_ERMS)) {
 		__uaccess_begin();
 		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
 		__uaccess_end();
-		return ret;
+	} else {
+		ret = copy_user_generic((__force void *)dst, src, len);
 	}
-
-	return copy_user_generic((__force void *)dst, src, len);
+	kmsan_copy_to_user(dst, src, len, ret);
+	return ret;
 }
diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h
index c4cae333deec..4c2a614dab2d 100644
--- a/include/linux/kmsan-checks.h
+++ b/include/linux/kmsan-checks.h
@@ -61,6 +61,17 @@ void kmsan_check_memory(const void *address, size_t size);
 void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
 			size_t left);
 
+/**
+ * kmsan_memmove() - Notify KMSAN about a data copy within kernel.
+ * @to:   destination address in the kernel.
+ * @from: source address in the kernel.
+ * @size: number of bytes to copy.
+ *
+ * Invoked after non-instrumented version (e.g. implemented using assembly
+ * code) of memmove()/memcpy() is called, in order to copy KMSAN's metadata.
+ */
+void kmsan_memmove(void *to, const void *from, size_t size);
+
 #else
 
 static inline void kmsan_poison_memory(const void *address, size_t size,
@@ -77,6 +88,9 @@ static inline void kmsan_copy_to_user(void __user *to, const void *from,
 				      size_t to_copy, size_t left)
 {
 }
+static inline void kmsan_memmove(void *to, const void *from, size_t size)
+{
+}
 
 #endif
 
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 5d6e2dee5692..364f778ee226 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -285,6 +285,17 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
 }
 EXPORT_SYMBOL(kmsan_copy_to_user);
 
+void kmsan_memmove(void *to, const void *from, size_t size)
+{
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+
+	kmsan_enter_runtime();
+	kmsan_internal_memmove_metadata(to, (void *)from, size);
+	kmsan_leave_runtime();
+}
+EXPORT_SYMBOL(kmsan_memmove);
+
 /* Helper function to check an URB. */
 void kmsan_handle_urb(const struct urb *urb, bool is_out)
 {



* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Linus Torvalds @ 2024-03-07  0:09 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, kasan-dev, LKML,
	the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin

On Wed, 6 Mar 2024 at 14:08, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
>
> Something like the one below?

I'd rather leave the regular fallbacks (to memcpy and copy_to_user())
alone, and I'd just put the

        kmsan_memmove(dst, src, len - ret);

etc in the places that currently just call the MC copy functions.

The copy_mc_to_user() logic is already set up for that, since it has
to do the __uaccess_begin/end().

Changing copy_mc_to_kernel() to look visually the same would only
improve on this horror-show, I feel.
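
Something with this shape (a sketch, untested, using the kmsan_memmove()
from your patch):

        unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
        {
                unsigned long ret;

                if (copy_mc_fragile_enabled) {
                        ret = copy_mc_fragile(dst, src, len);
                        kmsan_memmove(dst, src, len - ret);
                        return ret;
                }

                if (static_cpu_has(X86_FEATURE_ERMS)) {
                        ret = copy_mc_enhanced_fast_string(dst, src, len);
                        kmsan_memmove(dst, src, len - ret);
                        return ret;
                }

                /* The plain memcpy() fallback is already instrumented. */
                memcpy(dst, src, len);
                return 0;
        }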

Obviously some kmsan person needs to validate your kmsan_memmove() thing, but

> Can we assume that 0 <= ret <= len is always true?

Yes. It had better be for other reasons.

                  Linus


* Re: [PATCH v2] x86: disable non-instrumented version of copy_mc when KMSAN is enabled
From: Alexander Potapenko @ 2024-03-19 12:38 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Tetsuo Handa, Marco Elver, Dmitry Vyukov, kasan-dev, LKML,
	the arch/x86 maintainers, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin

On Thu, Mar 7, 2024 at 1:09 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Wed, 6 Mar 2024 at 14:08, Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
> >
> > Something like the one below?
>
> I'd rather leave the regular fallbacks (to memcpy and copy_to_user())
> alone, and I'd just put the
>
>         kmsan_memmove(dst, src, len - ret);
>
> etc in the places that currently just call the MC copy functions.

(sorry for being late to the party)

We should probably use <linux/instrumented.h> here, as other tools
(KASAN and KCSAN) do not instrument copy_mc_to_kernel() either, and
might benefit from the same checks.

Something along the lines of:

================================
static __always_inline void
instrument_memcpy_before(const void *to, const void *from, unsigned long n)
{
        kasan_check_write(to, n);
        kasan_check_read(from, n);
        kcsan_check_write(to, n);
        kcsan_check_read(from, n);
}

static __always_inline void instrument_memcpy_after(const void *to,
                                                    const void *from,
                                                    unsigned long n,
                                                    unsigned long left)
{
        kmsan_memcpy(to, from, n - left);
}
================================

(with kmsan_memcpy() working as Tetsuo described).
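
For copy_mc_to_kernel() the result could look something like this (a
sketch, untested):

================================
unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
{
        unsigned long ret;

        if (copy_mc_fragile_enabled) {
                instrument_memcpy_before(dst, src, len);
                ret = copy_mc_fragile(dst, src, len);
                instrument_memcpy_after(dst, src, len, ret);
                return ret;
        }

        if (static_cpu_has(X86_FEATURE_ERMS)) {
                instrument_memcpy_before(dst, src, len);
                ret = copy_mc_enhanced_fast_string(dst, src, len);
                instrument_memcpy_after(dst, src, len, ret);
                return ret;
        }

        /* The memcpy() fallback is instrumented by the tools already. */
        memcpy(dst, src, len);
        return 0;
}
================================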

We can also update copy_mc_fragile_handle_tail() and copy_mc_to_user()
to call the instrumentation hooks.
Let me send the patches.

