From: Oliver Swede <oli.swede@arm.com>
To: Will Deacon, Catalin Marinas
Cc: Robin Murphy, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 14/14] arm64: Improve accuracy of fixup for UAO cases
Date: Tue, 30 Jun 2020 19:48:22 +0000
Message-Id: <20200630194822.1082-15-oli.swede@arm.com>
In-Reply-To: <20200630194822.1082-1-oli.swede@arm.com>
References: <20200630194822.1082-1-oli.swede@arm.com>

This
accounts for variations in the number of bytes copied to the
destination buffer that could result from the substitution of STP
instructions with 2x unprivileged STTR variants if UAO is supported
and enabled.

Rather than duplicating the store fixups with the modifications, the
relevant alternatives are inserted in-line.

Signed-off-by: Oliver Swede <oli.swede@arm.com>
---
 arch/arm64/lib/copy_user_fixup.S | 47 ++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64/lib/copy_user_fixup.S
index 37ca3d99a02a..2d413f9ba5d3 100644
--- a/arch/arm64/lib/copy_user_fixup.S
+++ b/arch/arm64/lib/copy_user_fixup.S
@@ -205,7 +205,12 @@ addr .req x15
 	/* 32 < count < 128 -> count - ((addr-dst)&15) */
 	cmp	count, 128
 	sub	x0, addr, dst // relative fault offset
+	/* fault offset within dest. buffer */
+alternative_if ARM64_HAS_UAO
+	bic	x0, x0, 7 // stp subst. for 2x sttr
+alternative_else
 	bic	x0, x0, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	sub	x0, count, x0 // bytes yet to copy
 	b.le	L(end_fixup)
 	/* 128 < count -> count */
@@ -265,7 +270,12 @@ addr .req x15
 	sub	tmp1, count, tmp1 // remaining bytes after non-overlapping section
 	sub	x0, dstend, 64
 	sub	x0, addr, x0
-	bic	x0, x0, 15 // fault offset within dest. buffer
+	/* fault offset within dest. buffer */
+alternative_if ARM64_HAS_UAO
+	bic	x0, x0, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	x0, x0, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	add	x0, dstend, x0
 	sub	x0, x0, 64
 	sub	x0, dstend, x0 // remaining bytes in final (overlapping) 64B
@@ -295,7 +305,12 @@ addr .req x15
 	 */
 	sub	tmp1, dstend, 32
 	sub	tmp1, addr, tmp1
-	bic	tmp1, tmp1, 15
+	/* fault offset */
+alternative_if ARM64_HAS_UAO
+	bic	tmp1, tmp1, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	tmp1, tmp1, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	mov	x0, 32
 	sub	tmp1, x0, tmp1
 	sub	x0, count, 32
@@ -309,7 +324,12 @@ addr .req x15
 	 */
 	sub	tmp1, dstend, 32
 	sub	tmp1, addr, tmp1
-	bic	tmp1, tmp1, 15
+	/* fault offset */
+alternative_if ARM64_HAS_UAO
+	bic	tmp1, tmp1, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	tmp1, tmp1, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	mov	x0, 32
 	sub	tmp1, x0, tmp1
 	sub	x0, count, 64
@@ -324,7 +344,12 @@ addr .req x15
 	 */
 	sub	tmp1, dstend, 64
 	sub	tmp1, addr, tmp1
-	bic	tmp1, tmp1, 15
+	/* fault offset */
+alternative_if ARM64_HAS_UAO
+	bic	tmp1, tmp1, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	tmp1, tmp1, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	mov	x0, 64
 	sub	tmp1, x0, tmp1
 	cmp	count, 128
@@ -378,10 +403,20 @@ addr .req x15
 	/* Take the min from {16,(fault_addr&15)-(dst&15)}
 	 * and subtract from count to obtain the return value */
 	bic	tmp1, dst, 15 // aligned dst
-	bic	x0, addr, 15
+	/* fault offset */
+alternative_if ARM64_HAS_UAO
+	bic	x0, addr, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	x0, addr, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	sub	x0, x0, tmp1 // relative fault offset
 	cmp	x0, 16
-	bic	x0, addr, 15
+	/* fault offset */
+alternative_if ARM64_HAS_UAO
+	bic	x0, addr, 7 // stp subst. for 2x sttr
+alternative_else
+	bic	x0, addr, 15 // bytes already copied (steps of 16B stores)
+alternative_endif
 	sub	x0, x0, dst
 	sub	x0, count, x0
 	b.gt	L(end_fixup)
--
2.17.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
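[Editor's note] The fixup arithmetic the patch adjusts can be modelled in a few lines of Python. This is an illustration only, not part of the patch: the function name `bytes_yet_to_copy` and its parameters are mine, and it models just the simple "32 < count < 128" case. The key point is the mask width: STP writes 16 bytes at a time, so on a fault the offset already copied is only known to a 16-byte granule; when UAO substitutes each STP with 2x STTR, stores happen in 8-byte granules, so `bic ..., 7` gives a tighter bound than `bic ..., 15`.

```python
def bytes_yet_to_copy(fault_addr: int, dst: int, count: int, uao: bool) -> int:
    """Model of: sub x0, addr, dst; bic x0, x0, {7|15}; sub x0, count, x0.

    Returns the copy_to_user fixup value (bytes NOT copied) for the
    32 < count < 128 path, under the stated store-granule assumption.
    """
    # STTR pairs store 8B each; STP stores 16B per instruction.
    granule_mask = 7 if uao else 15
    # 'bic' clears the low bits: round the relative fault offset down
    # to the last store boundary known to have completed.
    fault_offset = (fault_addr - dst) & ~granule_mask
    return count - fault_offset
```

For a fault 40 bytes into a 100-byte copy, the non-UAO path rounds the offset down to 32 (reporting 68 bytes uncopied), while the UAO path keeps the full 40 (reporting 60), showing why the coarser mask over-reports remaining bytes.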