From: Andre Przywara
To: Alexandru Elisei
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, drjones@redhat.com, maz@kernel.org, vladimir.murzin@arm.com, mark.rutland@arm.com
Subject: Re: [kvm-unit-tests PATCH v3 15/18] arm/arm64: Perform dcache clean + invalidate after turning MMU off
Date: Fri, 3 Jan 2020 16:49:03 +0000
Message-ID: <20200103164903.07cf0c56@donnerap.cambridge.arm.com>
In-Reply-To: <1577808589-31892-16-git-send-email-alexandru.elisei@arm.com>
References: <1577808589-31892-1-git-send-email-alexandru.elisei@arm.com>
	<1577808589-31892-16-git-send-email-alexandru.elisei@arm.com>

On Tue, 31 Dec 2019 16:09:46 +0000
Alexandru Elisei wrote:

Hi,

> When the MMU is off, data accesses are to Device nGnRnE memory on arm64 [1]
> or to Strongly-Ordered memory on arm [2]. This means that the accesses are
> non-cacheable.
>
> Perform a dcache clean to PoC so that, after we turn the MMU off, we read
> the newer values that were held in the cache instead of stale values from
> memory.

Wow, did we really not do this before?

> Perform an invalidation so we can access the data written to memory after
> we turn the MMU back on. This prevents reading back the stale values we
> cleaned from the cache when we turned the MMU off.
>
> Data caches are PIPT and the VAs are translated using the current
> translation tables, or an identity mapping (what Arm calls a "flat
> mapping") when the MMU is off [1, 2]. Do the clean + invalidate when the
> MMU is off so we don't depend on the current translation tables and we can
> make sure that the operation applies to the entire physical memory.

The intention of the patch is very much valid, I am just wondering if there
is any reason why you do the cache line size determination in (quite some
lines of) C? Given that you only use that in asm, wouldn't it be much easier
to read the CTR register there, just before you actually use it? The actual
CTR read is (inline) assembly anyway, so you just need the mask/shift/add in
asm as well. You could draw inspiration from here, for instance:
https://gitlab.denx.de/u-boot/u-boot/blob/master/arch/arm/cpu/armv8/cache.S#L132
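
Completely untested, but roughly what I have in mind for the arm64 macro
(just a sketch, reusing the macro arguments from your patch; DminLine in
CTR_EL0[19:16] is the log2 of the number of words in the smallest D-cache
line, as in the u-boot code above):

.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2
	mrs	\tmp1, ctr_el0
	ubfx	\tmp1, \tmp1, #16, #4	/* DminLine: log2(words per line) */
	mov	\tmp2, #4
	lsl	\tmp1, \tmp2, \tmp1	/* D-cache line size in bytes */
	sub	\tmp2, \tmp1, #1
	bic	\start, \start, \tmp2
9998:
	dc	\op, \start
	add	\start, \start, \tmp1
	cmp	\start, \end
	b.lo	9998b
	dsb	\domain
.endm

The arm side could do the same with an mrc from CTR plus a ubfx, so the
dcache_line_size variable and init_dcache_line_size() would not be needed
at all.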
> The patch was tested by hacking arm/selftest.c:
>
> +#include
> +#include
>  int main(int argc, char **argv)
>  {
> +	int *x = alloc_page();
> +
>  	report_prefix_push("selftest");
>
> +	*x = 0x42;
> +	mmu_disable();
> +	report("read back value written with MMU on", *x == 0x42);
> +	*x = 0x50;
> +	mmu_enable(current_thread_info()->pgtable);
> +	report("read back value written with MMU off", *x == 0x50);
> +

Shall this be a new test then as well? At least to avoid regressions in
kvm-unit-tests itself? But also to test for proper MMU-off and cache inval
operations inside guests?

Cheers,
Andre

>  	if (argc < 2)
>  		report_abort("no test specified");
>
> Without the fix, the first report fails, and the test usually hangs before
> the second report. This is because mmu_enable pushes the LR register on the
> stack when the MMU is off, which means that the value will be written to
> memory. However, after asm_mmu_enable, the MMU is enabled, and we read it
> back from the dcache, thus getting garbage.
>
> With the fix, the two reports pass.
>
> [1] ARM DDI 0487E.a, section D5.2.9
> [2] ARM DDI 0406C.d, section B3.2.1
>
> Signed-off-by: Alexandru Elisei
> ---
>  lib/arm/asm/processor.h   |  6 ++++++
>  lib/arm64/asm/processor.h |  6 ++++++
>  lib/arm/processor.c       | 10 ++++++++++
>  lib/arm/setup.c           |  2 ++
>  lib/arm64/processor.c     | 11 +++++++++++
>  arm/cstart.S              | 22 ++++++++++++++++++++++
>  arm/cstart64.S            | 23 +++++++++++++++++++++++
>  7 files changed, 80 insertions(+)
>
> diff --git a/lib/arm/asm/processor.h b/lib/arm/asm/processor.h
> index a8c4628da818..4684fb4755b3 100644
> --- a/lib/arm/asm/processor.h
> +++ b/lib/arm/asm/processor.h
> @@ -9,6 +9,11 @@
>  #include
>  #include
>
> +#define CTR_DMINLINE_SHIFT	16
> +#define CTR_DMINLINE_MASK	(0xf << 16)
> +#define CTR_DMINLINE(x)	\
> +	(((x) & CTR_DMINLINE_MASK) >> CTR_DMINLINE_SHIFT)
> +
>  enum vector {
>  	EXCPTN_RST,
>  	EXCPTN_UND,
> @@ -25,6 +30,7 @@ typedef void (*exception_fn)(struct pt_regs *);
>  extern void install_exception_handler(enum vector v, exception_fn fn);
>
>  extern void show_regs(struct pt_regs *regs);
> +extern void init_dcache_line_size(void);
>
>  static inline unsigned long current_cpsr(void)
>  {
> diff --git a/lib/arm64/asm/processor.h b/lib/arm64/asm/processor.h
> index 1d9223f728a5..fd508c02f30d 100644
> --- a/lib/arm64/asm/processor.h
> +++ b/lib/arm64/asm/processor.h
> @@ -16,6 +16,11 @@
>  #define SCTLR_EL1_A	(1 << 1)
>  #define SCTLR_EL1_M	(1 << 0)
>
> +#define CTR_EL0_DMINLINE_SHIFT	16
> +#define CTR_EL0_DMINLINE_MASK	(0xf << 16)
> +#define CTR_EL0_DMINLINE(x)	\
> +	(((x) & CTR_EL0_DMINLINE_MASK) >> CTR_EL0_DMINLINE_SHIFT)
> +
>  #ifndef __ASSEMBLY__
>  #include
>  #include
> @@ -60,6 +65,7 @@ extern void vector_handlers_default_init(vector_fn *handlers);
>
>  extern void show_regs(struct pt_regs *regs);
>  extern bool get_far(unsigned int esr, unsigned long *far);
> +extern void init_dcache_line_size(void);
>
>  static inline unsigned long current_level(void)
>  {
> diff --git a/lib/arm/processor.c b/lib/arm/processor.c
> index 773337e6d3b7..c57657c5ea53 100644
> --- a/lib/arm/processor.c
> +++ b/lib/arm/processor.c
> @@ -25,6 +25,8 @@ static const char *vector_names[] = {
>  	"rst",	"und",	"svc",	"pabt",	"dabt",	"addrexcptn",	"irq",	"fiq"
"svc", "pabt", "dabt", "addrexcptn", "irq", "fiq" > }; > > +unsigned int dcache_line_size; > + > void show_regs(struct pt_regs *regs) > { > unsigned long flags; > @@ -145,3 +147,11 @@ bool is_user(void) > { > return current_thread_info()->flags & TIF_USER_MODE; > } > +void init_dcache_line_size(void) > +{ > + u32 ctr; > + > + asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); > + /* DminLine is log2 of the number of words in the smallest cache line */ > + dcache_line_size = 1 << (CTR_DMINLINE(ctr) + 2); > +} > diff --git a/lib/arm/setup.c b/lib/arm/setup.c > index 4f02fca85607..54fc19a20942 100644 > --- a/lib/arm/setup.c > +++ b/lib/arm/setup.c > @@ -20,6 +20,7 @@ > #include > #include > #include > +#include > #include > > #include "io.h" > @@ -63,6 +64,7 @@ static void cpu_init(void) > ret = dt_for_each_cpu_node(cpu_set, NULL); > assert(ret == 0); > set_cpu_online(0, true); > + init_dcache_line_size(); > } > > static void mem_init(phys_addr_t freemem_start) > diff --git a/lib/arm64/processor.c b/lib/arm64/processor.c > index 2a024e3f4e9d..f28066d40145 100644 > --- a/lib/arm64/processor.c > +++ b/lib/arm64/processor.c > @@ -62,6 +62,8 @@ static const char *ec_names[EC_MAX] = { > [ESR_EL1_EC_BRK64] = "BRK64", > }; > > +unsigned int dcache_line_size; > + > void show_regs(struct pt_regs *regs) > { > int i; > @@ -257,3 +259,12 @@ bool is_user(void) > { > return current_thread_info()->flags & TIF_USER_MODE; > } > + > +void init_dcache_line_size(void) > +{ > + u64 ctr; > + > + ctr = read_sysreg(ctr_el0); > + /* DminLine is log2 of the number of words in the smallest cache line */ > + dcache_line_size = 1 << (CTR_EL0_DMINLINE(ctr) + 2); > +} > diff --git a/arm/cstart.S b/arm/cstart.S > index dfef48e4dbb2..3c2a3bcde61a 100644 > --- a/arm/cstart.S > +++ b/arm/cstart.S > @@ -188,6 +188,20 @@ asm_mmu_enable: > > mov pc, lr > > +.macro dcache_clean_inval domain, start, end, tmp1, tmp2 > + ldr \tmp1, =dcache_line_size > + ldr \tmp1, [\tmp1] > + sub \tmp2, \tmp1, #1 > + bic \start, \start, \tmp2 > +9998: > + /* DCCIMVAC */ > + mcr p15, 0, \start, c7, c14, 1 > + add \start, \start, \tmp1 > + cmp \start, \end > + blo 9998b > + dsb \domain > +.endm > + > .globl asm_mmu_disable > asm_mmu_disable: > /* SCTLR */ > @@ -195,6 +209,14 @@ asm_mmu_disable: > bic r0, #CR_M > mcr p15, 0, r0, c1, c0, 0 > isb > + > + ldr r0, =__phys_offset > + ldr r0, [r0] > + ldr r1, =__phys_end > + ldr r1, [r1] > + dcache_clean_inval sy, r0, r1, r2, r3 > + isb > + > mov pc, lr > > /* > diff --git a/arm/cstart64.S b/arm/cstart64.S > index c98842f11e90..f41ffa3bc6c2 100644 > --- a/arm/cstart64.S > +++ b/arm/cstart64.S > @@ -201,12 +201,35 @@ asm_mmu_enable: > > ret > > +/* Taken with small changes from arch/arm64/incluse/asm/assembler.h */ > +.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2 > + adrp \tmp1, dcache_line_size > + ldr \tmp1, [\tmp1, :lo12:dcache_line_size] > + sub \tmp2, \tmp1, #1 > + bic \start, \start, \tmp2 > +9998: > + dc \op , \start > + add \start, \start, \tmp1 > + cmp \start, \end > + b.lo 9998b > + dsb \domain > +.endm > + > .globl asm_mmu_disable > asm_mmu_disable: > mrs x0, sctlr_el1 > bic x0, x0, SCTLR_EL1_M > msr sctlr_el1, x0 > isb > + > + /* Clean + invalidate the entire memory */ > + adrp x0, __phys_offset > + ldr x0, [x0, :lo12:__phys_offset] > + adrp x1, __phys_end > + ldr x1, [x1, :lo12:__phys_end] > + dcache_by_line_op civac, sy, x0, x1, x2, x3 > + isb > + > ret > > /*