All of lore.kernel.org
* [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
@ 2021-09-10 21:13 ` Peter Collingbourne
From: Peter Collingbourne @ 2021-09-10 21:13 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon, Catalin Marinas, Andrey Konovalov,
	Marco Elver
  Cc: Peter Collingbourne, Mark Rutland, Evgenii Stepanov,
	Alexander Potapenko, Linux ARM, linux-mm

With HW tag-based KASAN, error checks are performed implicitly by the
load and store instructions in the memcpy implementation. A failed check
results in tag checks being disabled and execution continuing. As a
result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
would end up corrupting memory until it hit an inaccessible page and
caused a kernel panic.

This is a pre-existing issue that was revealed by commit 285133040e6c
("arm64: Import latest memcpy()/memmove() implementation"), which changed
the memcpy implementation from using signed comparisons (incorrectly,
resulting in the memcpy terminating early for negative sizes) to using
unsigned comparisons.

It is unclear how this could be handled by memcpy itself in a reasonable
way. One possibility would be to add an exception handler that forces
memcpy to return if a tag check fault is detected -- this would make the
behavior roughly similar to generic and SW tag-based KASAN. However,
it wouldn't solve the problem for asynchronous mode and would also make
memcpy behavior inconsistent with manually copying data.

This test was added as part of a series that taught KASAN to detect
negative sizes in memory operations; see commit 8cceeff48f23 ("kasan:
detect negative size in memory operation function"). Therefore we
should keep testing for negative sizes with generic and SW tag-based
KASAN. But there is some value in testing small memcpy overflows, so
let's add another test with memcpy that does not destabilize the kernel
by performing out-of-bounds writes, and run it in all modes.

Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
Signed-off-by: Peter Collingbourne <pcc@google.com>
---
 lib/test_kasan.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 8835e0784578..aa8e42250219 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
 	kfree(ptr);
 }
 
-static void kmalloc_memmove_invalid_size(struct kunit *test)
+static void kmalloc_memmove_negative_size(struct kunit *test)
 {
 	char *ptr;
 	size_t size = 64;
@@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
 	kfree(ptr);
 }
 
+static void kmalloc_memmove_invalid_size(struct kunit *test)
+{
+	char *ptr;
+	size_t size = 64;
+	volatile size_t invalid_size = size;
+
+	ptr = kmalloc(size, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+	memset((char *)ptr, 0, 64);
+	KUNIT_EXPECT_KASAN_FAIL(test,
+		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+	kfree(ptr);
+}
+
 static void kmalloc_uaf(struct kunit *test)
 {
 	char *ptr;
@@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
 	KUNIT_CASE(kmalloc_oob_memset_4),
 	KUNIT_CASE(kmalloc_oob_memset_8),
 	KUNIT_CASE(kmalloc_oob_memset_16),
+	KUNIT_CASE(kmalloc_memmove_negative_size),
 	KUNIT_CASE(kmalloc_memmove_invalid_size),
 	KUNIT_CASE(kmalloc_uaf),
 	KUNIT_CASE(kmalloc_uaf_memset),
-- 
2.33.0.309.g3052b89438-goog



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

* Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
  2021-09-10 21:13 ` Peter Collingbourne
@ 2021-09-10 21:17   ` Andrey Konovalov
From: Andrey Konovalov @ 2021-09-10 21:17 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Robin Murphy, Will Deacon, Catalin Marinas, Marco Elver,
	Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM,
	Linux Memory Management List

On Fri, Sep 10, 2021 at 11:14 PM Peter Collingbourne <pcc@google.com> wrote:
>
> With HW tag-based KASAN, error checks are performed implicitly by the
> load and store instructions in the memcpy implementation.  A failed check
> results in tag checks being disabled and execution will keep going. As a
> result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> would end up corrupting memory until it hits an inaccessible page and
> causes a kernel panic.
>
> This is a pre-existing issue that was revealed by commit 285133040e6c
> ("arm64: Import latest memcpy()/memmove() implementation") which changed
> the memcpy implementation from using signed comparisons (incorrectly,
> resulting in the memcpy being terminated early for negative sizes)
> to using unsigned comparisons.
>
> It is unclear how this could be handled by memcpy itself in a reasonable
> way. One possibility would be to add an exception handler that would force
> memcpy to return if a tag check fault is detected -- this would make the
> behavior roughly similar to generic and SW tag-based KASAN. However,
> this wouldn't solve the problem for asynchronous mode and also makes
> memcpy behavior inconsistent with manually copying data.
>
> This test was added as a part of a series that taught KASAN to detect
> negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> detect negative size in memory operation function"). Therefore we
> should keep testing for negative sizes with generic and SW tag-based
> KASAN. But there is some value in testing small memcpy overflows, so
> let's add another test with memcpy that does not destabilize the kernel
> by performing out-of-bounds writes, and run it in all modes.
>
> Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> ---
>  lib/test_kasan.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 8835e0784578..aa8e42250219 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
>         kfree(ptr);
>  }
>
> -static void kmalloc_memmove_invalid_size(struct kunit *test)
> +static void kmalloc_memmove_negative_size(struct kunit *test)
>  {
>         char *ptr;
>         size_t size = 64;
> @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
>         kfree(ptr);
>  }
>
> +static void kmalloc_memmove_invalid_size(struct kunit *test)
> +{
> +       char *ptr;
> +       size_t size = 64;
> +       volatile size_t invalid_size = size;
> +
> +       ptr = kmalloc(size, GFP_KERNEL);
> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
> +       memset((char *)ptr, 0, 64);
> +       KUNIT_EXPECT_KASAN_FAIL(test,
> +               memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +       kfree(ptr);
> +}
> +
>  static void kmalloc_uaf(struct kunit *test)
>  {
>         char *ptr;
> @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>         KUNIT_CASE(kmalloc_oob_memset_4),
>         KUNIT_CASE(kmalloc_oob_memset_8),
>         KUNIT_CASE(kmalloc_oob_memset_16),
> +       KUNIT_CASE(kmalloc_memmove_negative_size),
>         KUNIT_CASE(kmalloc_memmove_invalid_size),
>         KUNIT_CASE(kmalloc_uaf),
>         KUNIT_CASE(kmalloc_uaf_memset),
> --
> 2.33.0.309.g3052b89438-goog
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Thanks!



* Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
  2021-09-10 21:17   ` Andrey Konovalov
@ 2021-09-13  6:00     ` Marco Elver
From: Marco Elver @ 2021-09-13  6:00 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Peter Collingbourne, Robin Murphy, Will Deacon, Catalin Marinas,
	Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM,
	Linux Memory Management List

On Fri, 10 Sept 2021 at 23:17, Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Fri, Sep 10, 2021 at 11:14 PM Peter Collingbourne <pcc@google.com> wrote:
> >
> > With HW tag-based KASAN, error checks are performed implicitly by the
> > load and store instructions in the memcpy implementation.  A failed check
> > results in tag checks being disabled and execution will keep going. As a
> > result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> > test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> > would end up corrupting memory until it hits an inaccessible page and
> > causes a kernel panic.
> >
> > This is a pre-existing issue that was revealed by commit 285133040e6c
> > ("arm64: Import latest memcpy()/memmove() implementation") which changed
> > the memcpy implementation from using signed comparisons (incorrectly,
> > resulting in the memcpy being terminated early for negative sizes)
> > to using unsigned comparisons.
> >
> > It is unclear how this could be handled by memcpy itself in a reasonable
> > way. One possibility would be to add an exception handler that would force
> > memcpy to return if a tag check fault is detected -- this would make the
> > behavior roughly similar to generic and SW tag-based KASAN. However,
> > this wouldn't solve the problem for asynchronous mode and also makes
> > memcpy behavior inconsistent with manually copying data.
> >
> > This test was added as a part of a series that taught KASAN to detect
> > negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> > detect negative size in memory operation function"). Therefore we
> > should keep testing for negative sizes with generic and SW tag-based
> > KASAN. But there is some value in testing small memcpy overflows, so
> > let's add another test with memcpy that does not destabilize the kernel
> > by performing out-of-bounds writes, and run it in all modes.
> >
> > Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > ---
> >  lib/test_kasan.c | 18 +++++++++++++++++-
> >  1 file changed, 17 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> > index 8835e0784578..aa8e42250219 100644
> > --- a/lib/test_kasan.c
> > +++ b/lib/test_kasan.c
> > @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
> >         kfree(ptr);
> >  }
> >
> > -static void kmalloc_memmove_invalid_size(struct kunit *test)
> > +static void kmalloc_memmove_negative_size(struct kunit *test)
> >  {
> >         char *ptr;
> >         size_t size = 64;
> > @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
> >         kfree(ptr);
> >  }
> >
> > +static void kmalloc_memmove_invalid_size(struct kunit *test)
> > +{
> > +       char *ptr;
> > +       size_t size = 64;
> > +       volatile size_t invalid_size = size;
> > +
> > +       ptr = kmalloc(size, GFP_KERNEL);
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > +
> > +       memset((char *)ptr, 0, 64);
> > +       KUNIT_EXPECT_KASAN_FAIL(test,
> > +               memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> > +       kfree(ptr);
> > +}
> > +
> >  static void kmalloc_uaf(struct kunit *test)
> >  {
> >         char *ptr;
> > @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> >         KUNIT_CASE(kmalloc_oob_memset_4),
> >         KUNIT_CASE(kmalloc_oob_memset_8),
> >         KUNIT_CASE(kmalloc_oob_memset_16),
> > +       KUNIT_CASE(kmalloc_memmove_negative_size),
> >         KUNIT_CASE(kmalloc_memmove_invalid_size),
> >         KUNIT_CASE(kmalloc_uaf),
> >         KUNIT_CASE(kmalloc_uaf_memset),
> > --
> > 2.33.0.309.g3052b89438-goog
> >
>
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Acked-by: Marco Elver <elver@google.com>

Do you intend this patch to go through the arm64 or mm tree?

> Thanks!



* Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
  2021-09-10 21:13 ` Peter Collingbourne
@ 2021-09-13  9:42   ` Robin Murphy
From: Robin Murphy @ 2021-09-13  9:42 UTC (permalink / raw)
  To: Peter Collingbourne, Will Deacon, Catalin Marinas,
	Andrey Konovalov, Marco Elver
  Cc: Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM, linux-mm

On 2021-09-10 22:13, Peter Collingbourne wrote:
> With HW tag-based KASAN, error checks are performed implicitly by the
> load and store instructions in the memcpy implementation.  A failed check
> results in tag checks being disabled and execution will keep going. As a
> result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> would end up corrupting memory until it hits an inaccessible page and
> causes a kernel panic.
> 
> This is a pre-existing issue that was revealed by commit 285133040e6c
> ("arm64: Import latest memcpy()/memmove() implementation") which changed
> the memcpy implementation from using signed comparisons (incorrectly,
> resulting in the memcpy being terminated early for negative sizes)
> to using unsigned comparisons.
> 
> It is unclear how this could be handled by memcpy itself in a reasonable
> way. One possibility would be to add an exception handler that would force
> memcpy to return if a tag check fault is detected -- this would make the
> behavior roughly similar to generic and SW tag-based KASAN. However,
> this wouldn't solve the problem for asynchronous mode and also makes
> memcpy behavior inconsistent with manually copying data.
> 
> This test was added as a part of a series that taught KASAN to detect
> negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> detect negative size in memory operation function"). Therefore we
> should keep testing for negative sizes with generic and SW tag-based
> KASAN. But there is some value in testing small memcpy overflows, so
> let's add another test with memcpy that does not destabilize the kernel
> by performing out-of-bounds writes, and run it in all modes.

The only thing is, that's nonsense. You can't pass a negative size to 
memmove()/memcpy(), any more than you could pass a negative address. You 
can use the usual integer conversions to pass a very large size, but 
that's no different from just passing a very large size, and the 
language does not make any restrictions on the validity of very large 
sizes. Indeed in general a 32-bit program could legitimately memcpy() 
exactly half its address space to the other half, or memmove() a 3GB 
buffer a small distance.

I'm not sure what we're trying to enforce there, other than arbitrary 
restrictions on how we think it makes sense to call library functions. 
The only way to say that a size is actually invalid is if it leads to an 
out-of-bounds access relative to the source or destination buffer, but 
to provoke that the given size only ever needs to be at least 1 byte 
larger than the object - making it excessively large only generates 
excessively large numbers of invalid accesses, and I fail to see what 
use that has. By all means introduce KAROHWTIMSTCLFSAN, but I'm not 
convinced it's meaningfully within the scope of *address* sanitisation.

Thanks,
Robin.

> Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> ---
>   lib/test_kasan.c | 18 +++++++++++++++++-
>   1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 8835e0784578..aa8e42250219 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
>   	kfree(ptr);
>   }
>   
> -static void kmalloc_memmove_invalid_size(struct kunit *test)
> +static void kmalloc_memmove_negative_size(struct kunit *test)
>   {
>   	char *ptr;
>   	size_t size = 64;
> @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
>   	kfree(ptr);
>   }
>   
> +static void kmalloc_memmove_invalid_size(struct kunit *test)
> +{
> +	char *ptr;
> +	size_t size = 64;
> +	volatile size_t invalid_size = size;
> +
> +	ptr = kmalloc(size, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
> +	memset((char *)ptr, 0, 64);
> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +	kfree(ptr);
> +}
> +
>   static void kmalloc_uaf(struct kunit *test)
>   {
>   	char *ptr;
> @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>   	KUNIT_CASE(kmalloc_oob_memset_4),
>   	KUNIT_CASE(kmalloc_oob_memset_8),
>   	KUNIT_CASE(kmalloc_oob_memset_16),
> +	KUNIT_CASE(kmalloc_memmove_negative_size),
>   	KUNIT_CASE(kmalloc_memmove_invalid_size),
>   	KUNIT_CASE(kmalloc_uaf),
>   	KUNIT_CASE(kmalloc_uaf_memset),
> 


>   	kfree(ptr);
>   }
>   
> +static void kmalloc_memmove_invalid_size(struct kunit *test)
> +{
> +	char *ptr;
> +	size_t size = 64;
> +	volatile size_t invalid_size = size;
> +
> +	ptr = kmalloc(size, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
> +	memset((char *)ptr, 0, 64);
> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +	kfree(ptr);
> +}
> +
>   static void kmalloc_uaf(struct kunit *test)
>   {
>   	char *ptr;
> @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>   	KUNIT_CASE(kmalloc_oob_memset_4),
>   	KUNIT_CASE(kmalloc_oob_memset_8),
>   	KUNIT_CASE(kmalloc_oob_memset_16),
> +	KUNIT_CASE(kmalloc_memmove_negative_size),
>   	KUNIT_CASE(kmalloc_memmove_invalid_size),
>   	KUNIT_CASE(kmalloc_uaf),
>   	KUNIT_CASE(kmalloc_uaf_memset),
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

* Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
  2021-09-13  9:42   ` Robin Murphy
@ 2021-09-13 18:18     ` Peter Collingbourne
  -1 siblings, 0 replies; 12+ messages in thread
From: Peter Collingbourne @ 2021-09-13 18:18 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, Catalin Marinas, Andrey Konovalov, Marco Elver,
	Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM,
	Linux Memory Management List, Walter Wu

On Mon, Sep 13, 2021 at 2:42 AM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-09-10 22:13, Peter Collingbourne wrote:
> > With HW tag-based KASAN, error checks are performed implicitly by the
> > load and store instructions in the memcpy implementation.  A failed check
> > results in tag checks being disabled and execution will keep going. As a
> > result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> > test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> > would end up corrupting memory until it hits an inaccessible page and
> > causes a kernel panic.
> >
> > This is a pre-existing issue that was revealed by commit 285133040e6c
> > ("arm64: Import latest memcpy()/memmove() implementation") which changed
> > the memcpy implementation from using signed comparisons (incorrectly,
> > resulting in the memcpy being terminated early for negative sizes)
> > to using unsigned comparisons.
> >
> > It is unclear how this could be handled by memcpy itself in a reasonable
> > way. One possibility would be to add an exception handler that would force
> > memcpy to return if a tag check fault is detected -- this would make the
> > behavior roughly similar to generic and SW tag-based KASAN. However,
> > this wouldn't solve the problem for asynchronous mode and also makes
> > memcpy behavior inconsistent with manually copying data.
> >
> > This test was added as a part of a series that taught KASAN to detect
> > negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> > detect negative size in memory operation function"). Therefore we
> > should keep testing for negative sizes with generic and SW tag-based
> > KASAN. But there is some value in testing small memcpy overflows, so
> > let's add another test with memcpy that does not destabilize the kernel
> > by performing out-of-bounds writes, and run it in all modes.
>
> The only thing is, that's nonsense. You can't pass a negative size to
> memmove()/memcpy(), any more than you could pass a negative address. You
> can use the usual integer conversions to pass a very large size, but
> that's no different from just passing a very large size, and the
> language does not make any restrictions on the validity of very large
> sizes. Indeed in general a 32-bit program could legitimately memcpy()
> exactly half its address space to the other half, or memmove() a 3GB
> buffer a small distance.
>
> I'm not sure what we're trying to enforce there, other than arbitrary
> restrictions on how we think it makes sense to call library functions.
> The only way to say that a size is actually invalid is if it leads to an
> out-of-bounds access relative to the source or destination buffer, but
> to provoke that the given size only ever needs to be at least 1 byte
> larger than the object - making it excessively large only generates
> excessively large numbers of invalid accesses, and I fail to see what
> use that has. By all means introduce KAROHWTIMSTCLFSAN, but I'm not
> convinced it's meaningfully within the scope of *address* sanitisation.

This is an orthogonal issue, isn't it? It may make sense to make the
memmove()/memcpy() behavior controllable separately, but that can be
done independently of this change.

Peter


* Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
  2021-09-13  6:00     ` Marco Elver
@ 2021-09-13 18:19       ` Peter Collingbourne
  -1 siblings, 0 replies; 12+ messages in thread
From: Peter Collingbourne @ 2021-09-13 18:19 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Robin Murphy, Will Deacon, Catalin Marinas,
	Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM,
	Linux Memory Management List, Andrew Morton

On Sun, Sep 12, 2021 at 11:00 PM Marco Elver <elver@google.com> wrote:
>
> On Fri, 10 Sept 2021 at 23:17, Andrey Konovalov <andreyknvl@gmail.com> wrote:
> >
> > On Fri, Sep 10, 2021 at 11:14 PM Peter Collingbourne <pcc@google.com> wrote:
> > >
> > > With HW tag-based KASAN, error checks are performed implicitly by the
> > > load and store instructions in the memcpy implementation.  A failed check
> > > results in tag checks being disabled and execution will keep going. As a
> > > result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> > > test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> > > would end up corrupting memory until it hits an inaccessible page and
> > > causes a kernel panic.
> > >
> > > This is a pre-existing issue that was revealed by commit 285133040e6c
> > > ("arm64: Import latest memcpy()/memmove() implementation") which changed
> > > the memcpy implementation from using signed comparisons (incorrectly,
> > > resulting in the memcpy being terminated early for negative sizes)
> > > to using unsigned comparisons.
> > >
> > > It is unclear how this could be handled by memcpy itself in a reasonable
> > > way. One possibility would be to add an exception handler that would force
> > > memcpy to return if a tag check fault is detected -- this would make the
> > > behavior roughly similar to generic and SW tag-based KASAN. However,
> > > this wouldn't solve the problem for asynchronous mode and also makes
> > > memcpy behavior inconsistent with manually copying data.
> > >
> > > This test was added as a part of a series that taught KASAN to detect
> > > negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> > > detect negative size in memory operation function"). Therefore we
> > > should keep testing for negative sizes with generic and SW tag-based
> > > KASAN. But there is some value in testing small memcpy overflows, so
> > > let's add another test with memcpy that does not destabilize the kernel
> > > by performing out-of-bounds writes, and run it in all modes.
> > >
> > > Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> > > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > > ---
> > >  lib/test_kasan.c | 18 +++++++++++++++++-
> > >  1 file changed, 17 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> > > index 8835e0784578..aa8e42250219 100644
> > > --- a/lib/test_kasan.c
> > > +++ b/lib/test_kasan.c
> > > @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
> > >         kfree(ptr);
> > >  }
> > >
> > > -static void kmalloc_memmove_invalid_size(struct kunit *test)
> > > +static void kmalloc_memmove_negative_size(struct kunit *test)
> > >  {
> > >         char *ptr;
> > >         size_t size = 64;
> > > @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
> > >         kfree(ptr);
> > >  }
> > >
> > > +static void kmalloc_memmove_invalid_size(struct kunit *test)
> > > +{
> > > +       char *ptr;
> > > +       size_t size = 64;
> > > +       volatile size_t invalid_size = size;
> > > +
> > > +       ptr = kmalloc(size, GFP_KERNEL);
> > > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > > +
> > > +       memset((char *)ptr, 0, 64);
> > > +       KUNIT_EXPECT_KASAN_FAIL(test,
> > > +               memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> > > +       kfree(ptr);
> > > +}
> > > +
> > >  static void kmalloc_uaf(struct kunit *test)
> > >  {
> > >         char *ptr;
> > > @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> > >         KUNIT_CASE(kmalloc_oob_memset_4),
> > >         KUNIT_CASE(kmalloc_oob_memset_8),
> > >         KUNIT_CASE(kmalloc_oob_memset_16),
> > > +       KUNIT_CASE(kmalloc_memmove_negative_size),
> > >         KUNIT_CASE(kmalloc_memmove_invalid_size),
> > >         KUNIT_CASE(kmalloc_uaf),
> > >         KUNIT_CASE(kmalloc_uaf_memset),
> > > --
> > > 2.33.0.309.g3052b89438-goog
> > >
> >
> > Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>
> Acked-by: Marco Elver <elver@google.com>
>
> Do you intend this patch to go through the arm64 or mm tree?

Let's take it through the mm tree.

Peter


end of thread, other threads:[~2021-09-13 18:21 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
2021-09-10 21:13 [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write Peter Collingbourne
2021-09-10 21:13 ` Peter Collingbourne
2021-09-10 21:17 ` Andrey Konovalov
2021-09-10 21:17   ` Andrey Konovalov
2021-09-13  6:00   ` Marco Elver
2021-09-13  6:00     ` Marco Elver
2021-09-13 18:19     ` Peter Collingbourne
2021-09-13 18:19       ` Peter Collingbourne
2021-09-13  9:42 ` Robin Murphy
2021-09-13  9:42   ` Robin Murphy
2021-09-13 18:18   ` Peter Collingbourne
2021-09-13 18:18     ` Peter Collingbourne
