Date: Tue, 14 Jul 2020 11:36:30 +0100
From: Catalin Marinas
To: Zhenyu Ye
Subject: Re: [PATCH v2 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
Message-ID: <20200714103629.GA18793@gaia>
References: <20200710094420.517-1-yezhenyu2@huawei.com> <20200710094420.517-3-yezhenyu2@huawei.com>
In-Reply-To: <20200710094420.517-3-yezhenyu2@huawei.com>
Cc: linux-arch@vger.kernel.org, suzuki.poulose@arm.com, maz@kernel.org,
	linux-kernel@vger.kernel.org, xiexiangyou@huawei.com,
	steven.price@arm.com, zhangshaokun@hisilicon.com, linux-mm@kvack.org,
	arm@kernel.org, prime.zeng@hisilicon.com, guohanjun@huawei.com,
	olof@lixom.net, kuhn.chenqun@huawei.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org

On Fri, Jul 10, 2020 at 05:44:20PM +0800, Zhenyu Ye wrote:
> +#define __TLBI_RANGE_PAGES(num, scale)	(((num) + 1) << (5 * (scale) + 1))
> +#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)
> +
> +#define TLBI_RANGE_MASK			GENMASK_ULL(4, 0)
> +#define __TLBI_RANGE_NUM(range, scale)	\
> +	(((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)
[...]
> +	int num = 0;
> +	int scale = 0;
[...]
> +			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
[...]

Since num is an int, __TLBI_RANGE_PAGES is still an int. Shifting it by
PAGE_SHIFT can overflow as the maximum would be 8GB for 4K pages (or
128GB for 64K pages). I think we probably get away with this because of
some implicit type conversion but I'd rather make __TLBI_RANGE_PAGES an
explicit unsigned long:

#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
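For reference, the worst-case arithmetic can be checked with a
throw-away user-space snippet (not part of the original mail or test;
it assumes 4K pages, i.e. PAGE_SHIFT of 12, and a 32-bit int):

#include <limits.h>
#include <stdio.h>

#define PAGE_SHIFT	12

/* unsigned long variant suggested above */
#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)

int main(void)
{
	unsigned long max_bytes = MAX_TLBI_RANGE_PAGES << PAGE_SHIFT;

	/* 2097152 pages -> 8589934592 bytes (8GB), well above INT_MAX */
	printf("max pages: %lu\n", MAX_TLBI_RANGE_PAGES);
	printf("max bytes: %lu (INT_MAX: %d)\n", max_bytes, INT_MAX);
	return 0;
}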
Without this change, the CBMC check fails (see below for the test). In
the kernel, we don't have this problem as we encode the address via
__TLBI_VADDR_RANGE and it doesn't overflow.

The good part is that CBMC reckons the algorithm is correct ;).

---------------8<------tlbinval.c---------------------------
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Check with:
 *   cbmc --unwind 6 tlbinval.c
 */

#define PAGE_SHIFT	(12)
#define PAGE_SIZE	(1 << PAGE_SHIFT)
#define VA_RANGE	(1UL << 48)

#define __round_mask(x, y)	((__typeof__(x))((y)-1))
#define round_up(x, y)		((((x)-1) | __round_mask(x, y))+1)
#define round_down(x, y)	((x) & ~__round_mask(x, y))

#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)

#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM(pages, scale)	\
	((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)

static unsigned long inval_start;
static unsigned long inval_end;

static void tlbi(unsigned long start, unsigned long size)
{
	unsigned long end = start + size;

	if (inval_end == 0) {
		inval_start = start;
		inval_end = end;
		return;
	}

	/* contiguous ranges in ascending order only */
	__CPROVER_assert(start == inval_end, "Contiguous TLBI ranges");
	inval_end = end;
}

static void __flush_tlb_range(unsigned long start, unsigned long end,
			      unsigned long stride)
{
	int num = 0;
	int scale = 0;
	unsigned long pages;

	start = round_down(start, stride);
	end = round_up(end, stride);
	pages = (end - start) >> PAGE_SHIFT;

	if (pages >= MAX_TLBI_RANGE_PAGES) {
		tlbi(0, VA_RANGE);
		return;
	}

	while (pages > 0) {
		__CPROVER_assert(scale <= 3, "Scale in range");

		if (pages % 2 == 1) {
			tlbi(start, stride);
			start += stride;
			pages -= stride >> PAGE_SHIFT;
			continue;
		}

		num = __TLBI_RANGE_NUM(pages, scale);
		__CPROVER_assert(num <= 30, "Num in range");
		if (num >= 0) {
			tlbi(start, __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT);
			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}
		scale++;
	}
}

static unsigned long nondet_ulong(void);

int main(void)
{
	unsigned long stride = nondet_ulong();
	unsigned long start = round_down(nondet_ulong(), stride);
	unsigned long end = round_up(nondet_ulong(), stride);

	__CPROVER_assume(stride == PAGE_SIZE ||
			 stride == PAGE_SIZE << (PAGE_SHIFT - 3) ||
			 stride == PAGE_SIZE << (2 * (PAGE_SHIFT - 3)));
	__CPROVER_assume(start < end);
	__CPROVER_assume(end <= VA_RANGE);

	__flush_tlb_range(start, end, stride);

	__CPROVER_assert((inval_start == 0 && inval_end == VA_RANGE) ||
			 (inval_start == start && inval_end == end),
			 "Correct invalidation");

	return 0;
}
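As an illustration of the splitting loop above (again not part of the
original test), a plain-assert replay of a 7-page flush with a 4K
stride: one single-page TLBI for the odd page, then a single range
TLBI covering the remaining 6 pages at scale 0:

#include <assert.h>

#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define __TLBI_RANGE_NUM(pages, scale)	\
	((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)

int main(void)
{
	unsigned long pages = 7;

	/* pages is odd: the loop issues one single-page TLBI first */
	assert(pages % 2 == 1);
	pages -= 1;

	/* the remaining 6 pages are covered by one range TLBI at scale 0 */
	assert(__TLBI_RANGE_NUM(pages, 0) == 2);
	assert(__TLBI_RANGE_PAGES(2, 0) == 6);
	pages -= __TLBI_RANGE_PAGES(2, 0);
	assert(pages == 0);

	return 0;
}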