From: Fuad Tabba
Date: Wed, 19 May 2021 17:25:37 +0100
Subject: Re: [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range
To: Mark Rutland
Cc: "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)", Will Deacon,
 Catalin Marinas, Marc Zyngier, Ard Biesheuvel, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Robin Murphy
In-Reply-To: <20210518153351.GJ82842@C02TD0UTHF1T.local>
References: <20210517075124.152151-1-tabba@google.com>
 <20210517075124.152151-3-tabba@google.com>
 <20210518153351.GJ82842@C02TD0UTHF1T.local>

Hi Mark,

On Tue, May 18, 2021 at 4:33 PM Mark Rutland wrote:
>
> Hi Fuad,
>
> This is great! I had a play with the series locally, and I have a few
> suggestions below for how to make this a bit clearer.
>
> On Mon, May 17, 2021 at 08:51:10AM +0100, Fuad Tabba wrote:
> > __flush_icache_range works on the kernel linear map, and doesn't
> > need uaccess. The existing uaccess handling is a side effect of the
> > current implementation, which falls through into
> > __flush_cache_user_range.
> >
> > Instead of falling through to share the code, use a common macro for
> > the two where the caller can specify whether user-space access is
> > needed.
> >
> > No functional change intended. There may be a performance impact
> > due to the reduced number of instructions.
>
> This looks correct, but I'm not too keen on all the duplication we have
> to do w.r.t. `needs_uaccess`, and I think it would be much clearer to
> put the TTBR maintenance directly in `__flush_cache_user_range`
> immediately, rather than doing that later in the series.
>
> > Reported-by: Catalin Marinas
> > Reported-by: Will Deacon
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba
> > ---
> >  arch/arm64/include/asm/assembler.h | 13 ++++--
> >  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
> >  2 files changed, 54 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 8418c1bd8f04..6ff7a3a3b238 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -426,16 +426,21 @@ alternative_endif
> >   * Macro to perform an instruction cache maintenance for the interval
> >   * [start, end)
> >   *
> > - *   start, end:    virtual addresses describing the region
> > - *   label:         A label to branch to on user fault.
> > - *   Corrupts:      tmp1, tmp2
> > + *   start, end:    virtual addresses describing the region
> > + *   needs_uaccess: might access user space memory
> > + *   label:         label to branch to on user fault (if needs_uaccess)
> > + *   Corrupts:      tmp1, tmp2
> >   */
>
> I'm not too keen on the separate `needs_uaccess` and `label` arguments.
> We should be able to collapse those into a single argument by checking
> with .ifnc, e.g.
>
>         .macro op arg, fixup
>         .ifnc \fixup,
>         do_thing_with \fixup
>         .endif
>         .endm
>
> ... which I think would make things clearer overall.
>
> > -        .macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> > +        .macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
> >          icache_line_size \tmp1, \tmp2
> >          sub \tmp2, \tmp1, #1
> >          bic \tmp2, \start, \tmp2
> >  9997:
> > +        .if \needs_uaccess
> >  USER(\label, ic ivau, \tmp2)            // invalidate I line PoU
> > +        .else
> > +        ic ivau, \tmp2
> > +        .endif
> >          add \tmp2, \tmp2, \tmp1
> >          cmp \tmp2, \end
> >          b.lo 9997b
>
> I'm also not keen on duplicating the instruction here. I reckon what we
> should do is add a conditional extable macro:
>
>         .macro _cond_extable insn, fixup
>         .ifnc \fixup,
>         _asm_extable \insn, \fixup
>         .endif
>         .endm
>
> ... which'd allow us to do:
>
>         .macro invalidate_icache_by_line start, end, tmp1, tmp2, fixup
>         icache_line_size \tmp1, \tmp2
>         sub \tmp2, \tmp1, #1
>         bic \tmp2, \start, \tmp2
> .Licache_op\@:
>         ic ivau, \tmp2                  // invalidate I line PoU
>         add \tmp2, \tmp2, \tmp1
>         cmp \tmp2, \end
>         b.lo .Licache_op\@
>         dsb ish
>         isb
>
>         _cond_extable .Licache_op\@, \fixup
>         .endm
>
> ... which I think is clearer.
>
> We could do likewise in dcache_by_line_op, and with some refactoring we
> could remove the logic that we currently have to duplicate.
>
> I pushed a couple of preparatory patches for that to:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/cleanups/cache
> git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/cleanups/cache
>
> ... in case you felt like taking those as-is.

Thanks for this, and for the other comments and suggestions. I'll take
your patches, as well as all the fixes you suggested, in the next round.
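Just to check my understanding of the single-argument scheme, I'd
expect the call sites to end up looking roughly like this (a quick
sketch against your suggested invalidate_icache_by_line/_cond_extable
macros, untested; the 2f fixup label is only illustrative):

        // kernel-only caller: leave the fixup argument empty, so
        // _cond_extable emits no exception table entry
        invalidate_icache_by_line x0, x1, x2, x3

        // user-memory caller: pass a fixup label, so a fault on the
        // "ic ivau" is redirected to the handler at 2f
        invalidate_icache_by_line x0, x1, x2, x3, 2f

That looks like a nice simplification at the call sites too.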
Cheers,
/fuad

> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 2d881f34dd9d..092f73acdf9a 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,30 +15,20 @@
> >  #include
> >
> >  /*
> > - * flush_icache_range(start,end)
> > + * __flush_cache_range(start,end) [needs_uaccess]
> >   *
> >   * Ensure that the I and D caches are coherent within specified region.
> >   * This is typically used when code has been written to a memory region,
> >   * and will be executed.
> >   *
> > - * - start         - virtual start address of region
> > - * - end           - virtual end address of region
> > + * - start         - virtual start address of region
> > + * - end           - virtual end address of region
> > + * - needs_uaccess - (macro parameter) might access user space memory
> >   */
> > -SYM_FUNC_START(__flush_icache_range)
> > -        /* FALLTHROUGH */
> > -
> > -/*
> > - * __flush_cache_user_range(start,end)
> > - *
> > - * Ensure that the I and D caches are coherent within specified region.
> > - * This is typically used when code has been written to a memory region,
> > - * and will be executed.
> > - *
> > - * - start         - virtual start address of region
> > - * - end           - virtual end address of region
> > - */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > +.macro __flush_cache_range, needs_uaccess
> > +        .if \needs_uaccess
> >          uaccess_ttbr0_enable x2, x3, x4
> > +        .endif
> >  alternative_if ARM64_HAS_CACHE_IDC
> >          dsb ishst
> >          b 7f
> > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> >          sub x3, x2, #1
> >          bic x4, x0, x3
> >  1:
> > +        .if \needs_uaccess
> >  user_alt 9f, "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
> > +        .else
> > +alternative_insn "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
> > +        .endif
> >          add x4, x4, x2
> >          cmp x4, x1
> >          b.lo 1b
> > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> >          isb
> >          b 8f
> >  alternative_else_nop_endif
> > -        invalidate_icache_by_line x0, x1, x2, x3, 9f
> > +        invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> >  8:      mov x0, #0
> >  1:
> > +        .if \needs_uaccess
> >          uaccess_ttbr0_disable x1, x2
> > +        .endif
> >          ret
> > +
> > +        .if \needs_uaccess
> >  9:
> >          mov x0, #-EFAULT
> >          b 1b
> > +        .endif
> > +.endm
>
> As above, I think we should reduce this to the core logic, moving the
> ttbr manipulation and fixup handler inline in __flush_cache_user_range.
>
> For clarity, I'd also like to leave the RETs out of the macro, since
> that's required for the fixup handling anyway, and it generally makes
> the control flow clearer at the function definition.
>
> > +/*
> > + * flush_icache_range(start,end)
> > + *
> > + * Ensure that the I and D caches are coherent within specified region.
> > + * This is typically used when code has been written to a memory region,
> > + * and will be executed.
> > + *
> > + * - start         - virtual start address of region
> > + * - end           - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_icache_range)
> > +        __flush_cache_range needs_uaccess=0
> >  SYM_FUNC_END(__flush_icache_range)
>
> ... so with the suggestions above, this could be:
>
> SYM_FUNC_START(__flush_icache_range)
>         __flush_cache_range
>         ret
> SYM_FUNC_END(__flush_icache_range)
>
> > +/*
> > + * __flush_cache_user_range(start,end)
> > + *
> > + * Ensure that the I and D caches are coherent within specified region.
> > + * This is typically used when code has been written to a memory region,
> > + * and will be executed.
> > + *
> > + * - start         - virtual start address of region
> > + * - end           - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_cache_user_range)
> > +        __flush_cache_range needs_uaccess=1
> >  SYM_FUNC_END(__flush_cache_user_range)
>
> ... this could be:
>
> SYM_FUNC_START(__flush_cache_user_range)
>         uaccess_ttbr0_enable x2, x3, x4
>         __flush_cache_range 2f
> 1:
>         uaccess_ttbr0_disable x1, x2
>         ret
> 2:
>         mov x0, #-EFAULT
>         b 1b
> SYM_FUNC_END(__flush_cache_user_range)
>
> >  /*
> > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> >
> >          uaccess_ttbr0_enable x2, x3, x4
> >
> > -        invalidate_icache_by_line x0, x1, x2, x3, 2f
> > +        invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
>
> ... and this wouldn't need to change.
>
> Thanks,
> Mark.
>
> >          mov x0, xzr
> >  1:
> >          uaccess_ttbr0_disable x1, x2
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel