Date: Wed, 15 Apr 2020 18:28:14 +0100
From: Mark Rutland
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	kernel-team@android.com, Michael Ellerman, Peter Zijlstra,
	Linus Torvalds, Segher Boessenkool, Christian Borntraeger,
	Luc Van Oostenryck, Arnd Bergmann, Peter Oberparleiter,
	Masahiro Yamada, Nick Desaulniers, Robin Murphy
Subject: Re: [PATCH v3 05/12] arm64: csum: Disable KASAN for do_csum()
Message-ID: <20200415172813.GA2272@lakrids.cambridge.arm.com>
References: <20200415165218.20251-1-will@kernel.org>
 <20200415165218.20251-6-will@kernel.org>
In-Reply-To: <20200415165218.20251-6-will@kernel.org>

Hi Will,

On Wed, Apr 15, 2020 at 05:52:11PM +0100, Will Deacon wrote:
> do_csum() over-reads the source buffer and therefore abuses
> READ_ONCE_NOCHECK() to avoid tripping up KASAN. In preparation for
> READ_ONCE_NOCHECK() becoming a macro, and therefore losing its
> '__no_sanitize_address' annotation, just annotate do_csum() explicitly
> and fall back to normal loads.

I'm confused by this. The whole point of READ_ONCE_NOCHECK() is that it
isn't checked by KASAN, so if that semantic is removed it has no reason
to exist.

Changing that will break the unwind/stacktrace code across multiple
architectures. IIRC they use READ_ONCE_NOCHECK() for two reasons:

1. Races with concurrent modification, as might happen when a thread's
   stack is corrupted. Allowing the unwinder to bail out after a sanity
   check means the resulting report is more useful than a KASAN splat
   in the unwinder. I made the arm64 unwinder robust to this case.

2. I believe that the frame record itself /might/ be poisoned by KASAN,
   since it's not meant to be an accessible object at the C language
   level. I could be wrong about this, and would have to check.
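
For reference, the core of the arm64 unwinder's frame walk looks
roughly like the below (heavily trimmed from
arch/arm64/kernel/stacktrace.c, so treat this as a paraphrase rather
than the exact code):

int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
{
	unsigned long fp = frame->fp;

	/* Bail out on an implausible (e.g. corrupted) frame pointer. */
	if (fp & 0xf)
		return -EINVAL;

	/* ... on_accessible_stack() and stack-type checks elided ... */

	/*
	 * Read the next frame record without tripping KASAN, so that a
	 * racing or corrupted stack is caught by the sanity checks here
	 * and on the next iteration, rather than by a KASAN splat.
	 */
	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));

	/* ... graph-tracer fixup and termination checks elided ... */

	return 0;
}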
I would like to keep the unwinding robust in the first case, even if
the second case doesn't apply, and I'd prefer not to mark the entirety
of the unwinding code as unchecked, as that's sufficiently large and
subtle that it could have nasty bugs.

Is there any way we can keep something like READ_ONCE_NOCHECK() around,
even if we have to give it reduced functionality relative to
READ_ONCE()? I've put a rough sketch of what I mean below the quoted
patch.

I'm not entirely sure why READ_ONCE_NOCHECK() had to go, so if there's
a particular pain point I'm happy to take a look.

Thanks,
Mark.

> 
> Cc: Mark Rutland
> Cc: Robin Murphy
> Signed-off-by: Will Deacon
> ---
>  arch/arm64/lib/csum.c | 20 ++++++++++++--------
>  1 file changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/lib/csum.c b/arch/arm64/lib/csum.c
> index 60eccae2abad..78b87a64ca0a 100644
> --- a/arch/arm64/lib/csum.c
> +++ b/arch/arm64/lib/csum.c
> @@ -14,7 +14,11 @@ static u64 accumulate(u64 sum, u64 data)
>  	return tmp + (tmp >> 64);
>  }
>  
> -unsigned int do_csum(const unsigned char *buff, int len)
> +/*
> + * We over-read the buffer and this makes KASAN unhappy. Instead, disable
> + * instrumentation and call kasan explicitly.
> + */
> +unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
>  {
>  	unsigned int offset, shift, sum;
>  	const u64 *ptr;
> @@ -42,7 +46,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
>  	 * odd/even alignment, and means we can ignore it until the very end.
>  	 */
>  	shift = offset * 8;
> -	data = READ_ONCE_NOCHECK(*ptr++);
> +	data = *ptr++;
>  #ifdef __LITTLE_ENDIAN
>  	data = (data >> shift) << shift;
>  #else
> @@ -58,10 +62,10 @@ unsigned int do_csum(const unsigned char *buff, int len)
>  	while (unlikely(len > 64)) {
>  		__uint128_t tmp1, tmp2, tmp3, tmp4;
>  
> -		tmp1 = READ_ONCE_NOCHECK(*(__uint128_t *)ptr);
> -		tmp2 = READ_ONCE_NOCHECK(*(__uint128_t *)(ptr + 2));
> -		tmp3 = READ_ONCE_NOCHECK(*(__uint128_t *)(ptr + 4));
> -		tmp4 = READ_ONCE_NOCHECK(*(__uint128_t *)(ptr + 6));
> +		tmp1 = *(__uint128_t *)ptr;
> +		tmp2 = *(__uint128_t *)(ptr + 2);
> +		tmp3 = *(__uint128_t *)(ptr + 4);
> +		tmp4 = *(__uint128_t *)(ptr + 6);
>  
>  		len -= 64;
>  		ptr += 8;
> @@ -85,7 +89,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
>  		__uint128_t tmp;
>  
>  		sum64 = accumulate(sum64, data);
> -		tmp = READ_ONCE_NOCHECK(*(__uint128_t *)ptr);
> +		tmp = *(__uint128_t *)ptr;
>  
>  		len -= 16;
>  		ptr += 2;
> @@ -100,7 +104,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
>  	}
>  	if (len > 0) {
>  		sum64 = accumulate(sum64, data);
> -		data = READ_ONCE_NOCHECK(*ptr);
> +		data = *ptr;
>  		len -= 8;
>  	}
>  	/*
> -- 
> 2.26.0.110.g2183baf09c-goog
> 
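
To make the "reduced functionality" question above a little more
concrete, below is the sort of thing I have in mind. It is only a
rough, untested sketch, assuming the pain point is carrying the
'__no_sanitize_address' attribute through the new macro: instead of
relying on that attribute, suppress reporting around the access with
kasan_disable_current()/kasan_enable_current() from <linux/kasan.h>
(which, at least for generic software KASAN, skips reports while
current->kasan_depth is raised):

#define READ_ONCE_NOCHECK(x)						\
({									\
	typeof(x) __val;						\
	/* Suppress KASAN reports for this one access... */		\
	kasan_disable_current();					\
	__val = READ_ONCE(x);						\
	/* ...and re-enable them straight after. */			\
	kasan_enable_current();						\
	__val;								\
})

For !CONFIG_KASAN builds both helpers are empty static inlines, so this
should collapse back to a plain READ_ONCE(), and it would keep the
unwinder usable without marking whole functions as unchecked.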