From: Mark Rutland <mark.rutland@arm.com>
To: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>, <linux-kernel@vger.kernel.org>,
	<kasan-dev@googlegroups.com>, <linux-mm@kvack.org>,
	Alexander Potapenko <glider@google.com>,
	<linux-arm-kernel@lists.infradead.org>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
Date: Thu, 1 Jun 2017 17:34:43 +0100	[thread overview]
Message-ID: <20170601163442.GC17711@leverpostej> (raw)
In-Reply-To: <20170601162338.23540-3-aryabinin@virtuozzo.com>

On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
> We used to read several bytes of the shadow memory in advance.
> Therefore, additional shadow memory was mapped to prevent a crash if a
> speculative load happened near the end of the mapped shadow memory.
>
> Now we don't have such speculative loads, so we no longer need to map
> additional shadow memory.

I see that patch 1 fixed up the Linux helpers for outline
instrumentation.
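For reference, the advance read in question came from the generic
shadow-check helpers. A condensed sketch of the pre-series fast path
(names follow mm/kasan, but this is simplified, not the exact source):

  /* Each KASAN_SHADOW_SCALE_SIZE (8) bytes of memory map to one shadow byte. */
  static inline void *kasan_mem_to_shadow(const void *addr)
  {
  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
  		+ KASAN_SHADOW_OFFSET;
  }

  /*
   * Old fast path for a 2-byte access: load two shadow bytes at once so
   * the granule containing addr and the following granule are tested
   * with a single compare (the real code falls back to a byte-by-byte
   * check when the u16 is non-zero).  For an access in the last granule
   * of mapped memory, this u16 load touches one shadow byte past the
   * shadow of the mapped range, which is what the extra mapping covered.
   */
  static __always_inline bool memory_is_poisoned_2(unsigned long addr)
  {
  	u16 *shadow = (u16 *)kasan_mem_to_shadow((void *)addr);

  	return unlikely(*shadow);
  }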

Just to check, is it also true that the inline instrumentation never
performs unaligned accesses to the shadow memory?
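(For an aligned 8-byte access, the check the compiler emits inline is
roughly equivalent to the single shadow-byte load below; the question is
whether any of the other emitted variants still loads a wider value from
the shadow and could run off the end of the mapped region. This is a
sketch of the generated logic, not actual compiler output:)

  /* Compiler-emitted (CONFIG_KASAN_INLINE) check before an 8-byte load. */
  s8 shadow = *(s8 *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
  		    + KASAN_SHADOW_OFFSET);
  if (unlikely(shadow))
  	__asan_report_load8_noabort((unsigned long)addr);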

If so, this looks good to me; it also avoids a potential fencepost issue
when memory exists right at the end of the linear map. Assuming that
holds:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/mm/kasan_init.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 687a358a3733..81f03959a4ab 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -191,14 +191,8 @@ void __init kasan_init(void)
>               if (start >= end)
>                       break;
>
> -             /*
> -              * end + 1 here is intentional. We check several shadow bytes in
> -              * advance to slightly speed up fastpath. In some rare cases
> -              * we could cross boundary of mapped shadow, so we just map
> -              * some more here.
> -              */
>               vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
> -                             (unsigned long)kasan_mem_to_shadow(end) + 1,
> +                             (unsigned long)kasan_mem_to_shadow(end),
>                               pfn_to_nid(virt_to_pfn(start)));
>       }
>
> --
> 2.13.0

Thread overview:
2017-06-01 16:23 [PATCH 1/4] mm/kasan: get rid of speculative shadow checks Andrey Ryabinin
2017-06-01 16:23 ` [PATCH 2/4] x86/kasan: don't allocate extra shadow memory Andrey Ryabinin
2017-06-01 16:23 ` [PATCH 3/4] arm64/kasan: " Andrey Ryabinin
2017-06-01 16:34   ` Mark Rutland [this message]
2017-06-01 16:45     ` Dmitry Vyukov
2017-06-01 16:52       ` Mark Rutland
2017-06-01 16:59         ` Andrey Ryabinin
2017-06-01 17:00           ` Andrey Ryabinin
2017-06-01 17:05             ` Dmitry Vyukov
2017-06-01 17:38               ` Dmitry Vyukov
2017-06-01 16:23 ` [PATCH 4/4] mm/kasan: Add support for memory hotplug Andrey Ryabinin
2017-06-01 17:45 ` [PATCH 1/4] mm/kasan: get rid of speculative shadow checks Dmitry Vyukov
