From: Robin Murphy <robin.murphy@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>,
	Al Viro <viro@zeniv.linux.org.uk>
Cc: Matthew Wilcox <willy@infradead.org>,
	Christoph Hellwig <hch@infradead.org>,
	Chen Huang <chenhuang5@huawei.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	Randy Dunlap <rdunlap@infradead.org>,
	Will Deacon <will@kernel.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-mm <linux-mm@kvack.org>,
	open list <linux-kernel@vger.kernel.org>
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
Date: Thu, 24 Jun 2021 21:36:54 +0100	[thread overview]
Message-ID: <e8e87aba-22f7-d039-ceaa-a93591b04b1e@arm.com> (raw)
In-Reply-To: <20210624185554.GC25097@arm.com>

On 2021-06-24 19:55, Catalin Marinas wrote:
> On Thu, Jun 24, 2021 at 04:27:17PM +0000, Al Viro wrote:
>> On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
>>> FWIW I think the only way to make the kernel behaviour any more robust here
>>> would be to make the whole uaccess API more expressive, such that rather
>>> than simply saying "I only got this far" it could actually differentiate
>>> between stopping due to a fault which may be recoverable and worth retrying,
>>> and one which definitely isn't.
>>
>> ... and propagate that "more expressive" information through what, 3 or 4
>> levels in the call chain?
>>
>> From include/linux/uaccess.h:
>>
>>   * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes starting
>>   * at to must become equal to the bytes fetched from the corresponding area
>>   * starting at from.  All data past to + size - N must be left unmodified.
>>   *
>>   * If copying succeeds, the return value must be 0.  If some data cannot be
>>   * fetched, it is permitted to copy less than had been fetched; the only
>>   * hard requirement is that not storing anything at all (i.e. returning size)
>>   * should happen only when nothing could be copied.  In other words, you don't
>>   * have to squeeze as much as possible - it is allowed, but not necessary.
>>
>> arm64 instances violate the aforementioned hard requirement.
> 
> After reading the above a few more times, I think I get it. The key
> sentence is: not storing anything at all should happen only when nothing
> could be copied. In the MTE case, something can still be copied.
> 
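Right - to make the failure mode concrete, here's a minimal userspace
sketch (hypothetical names, not the mm/filemap.c code itself) of the
retry logic in generic_perform_write(): if the copy routine keeps
claiming zero progress for a buffer whose first byte *is* readable,
the fault-in check keeps succeeding and the loop can never advance.

	/*
	 * stub_copy() models a raw_copy_from_user() that faults partway
	 * through the buffer but reports no progress at all (returns the
	 * full size), violating the "hard requirement" quoted above.
	 */
	static unsigned long stub_copy(char *to, const char *from,
				       unsigned long n)
	{
		return n;	/* "nothing copied", though byte 0 was fine */
	}

	static int model_perform_write(char *dst, const char *src,
				       unsigned long len)
	{
		while (len) {
			/* the fault-in probe of byte 0 succeeds, so no -EFAULT */
			unsigned long copied = len - stub_copy(dst, src, len);

			if (copied == 0)
				continue;	/* retry - spins forever */
			dst += copied;
			src += copied;
			len -= copied;
		}
		return 0;
	}
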
>> Please, fix
>> it there; it's not hard.  All you need is an exception handler in .Ltiny15
>> that would fall back to (short) byte-by-byte copy if the faulting address
>> happened to be unaligned.  Or just do one-byte copy, not that it had been
>> considerably cheaper than a loop.  Will be cheaper than propagating that extra
>> information up the call chain, let alone paying for extra ->write_begin()
>> and ->write_end() for single byte in generic_perform_write().
> 
> Yeah, it's definitely fixable in the arch code. I misread the above
> requirements and thought it could be fixed in the core code.
> 
> Quick hack, though I think doing it in the actual exception handling
> path in .S makes more sense (and it needs a copy_to_user counterpart
> for symmetry):

Hmm, if anything the asm version might be even more straightforward; I 
think it's pretty much just this (untested):

diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 043da90f5dd7..632bf1f9540d 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -62,6 +62,9 @@ EXPORT_SYMBOL(__arch_copy_to_user)

         .section .fixup,"ax"
         .align  2
-9998:  sub     x0, end, dst                    // bytes not copied
+9998:  ldrb    w7, [x1]
+USER(9997f,    sttrb   w7, [x0])
+       add     x0, x0, #1
+9997:  sub     x0, end, dst                    // bytes not copied
         ret
         .previous

If we can get away without trying to finish the whole copy bytewise
(i.e. without causing any faults of our own by knowingly over-reading
in the routine itself), I'm more than happy with that.
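
For reference, "finishing bytewise" would amount to something like this
sketch (hypothetical helper, not a proposed patch) - walking the
unreached remainder with __get_user() so the count reflects the byte
that genuinely faults, rather than the alignment-rounded estimate:

	/*
	 * Byte-by-byte fallback over the tail of a failed copy_from_user().
	 * __get_user() faults precisely, so the return value is exactly the
	 * number of bytes that really could not be copied.
	 */
	static unsigned long bytewise_tail(char *to, const char __user *from,
					   unsigned long n)
	{
		char c;

		while (n && !__get_user(c, from)) {
			*to++ = c;
			from++;
			n--;
		}
		return n;	/* bytes not copied */
	}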

Robin.

> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index b5f08621fa29..903f8a2a457b 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -415,6 +415,15 @@ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __u
>   	uaccess_ttbr0_enable();						\
>   	__acfu_ret = __arch_copy_from_user((to),			\
>   				      __uaccess_mask_ptr(from), (n));	\
> +	if (__acfu_ret == n) {						\
> +		int __cfu_err = 0;					\
> +		char __cfu_val;						\
> +		__raw_get_mem("ldtr", __cfu_val, (char *)from, __cfu_err);\
> +		if (!__cfu_err) {					\
> +			*(char *)to = __cfu_val;			\
> +			__acfu_ret--;					\
> +		}							\
> +	}								\
>   	uaccess_ttbr0_disable();					\
>   	__acfu_ret;							\
>   })
> 
> Of course, it only fixes the MTE problem; I'll ignore the MMIO case
> (though it may work in certain configurations like synchronous faults).
> 
