* [PULL kvm-unit-tests 00/17] arm/arm64: fixes and updates
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini

Hi Paolo,

Happy New Year and please pull.

Thanks,
drew


The following changes since commit 2c6589bc4e8bb0299cf4e8af84fa05cbbb30952d:

  Update AMD instructions to conform to LLVM assembler (2019-12-18 18:31:17 +0100)

are available in the Git repository at:

  https://github.com/rhdrjones/kvm-unit-tests arm/queue

for you to fetch changes up to ada4c643a2332c408a262a749b0365a1f970084c:

  arm: cstart64.S: Remove icache invalidation from asm_mmu_enable (2020-01-06 10:30:43 +0100)

----------------------------------------------------------------
Alexandru Elisei (13):
      lib: arm/arm64: Remove unnecessary dcache maintenance operations
      lib: arm: Add proper data synchronization barriers for TLBIs
      lib: Add WRITE_ONCE and READ_ONCE implementations in compiler.h
      lib: arm/arm64: Use WRITE_ONCE to update the translation tables
      lib: arm/arm64: Remove unused CPU_OFF parameter
      lib: arm/arm64: Add missing include for alloc_page.h in pgtable.h
      lib: arm: Implement flush_tlb_all
      lib: arm/arm64: Teach mmu_clear_user about block mappings
      arm64: timer: Write to ICENABLER to disable timer IRQ
      lib: arm/arm64: Refuse to disable the MMU with non-identity stack pointer
      arm: cstart64.S: Downgrade TLBI to non-shareable in asm_mmu_enable
      arm/arm64: Invalidate TLB before enabling MMU
      arm: cstart64.S: Remove icache invalidation from asm_mmu_enable

Andrew Jones (2):
      arm: Enable the VFP
      arm/arm64: PL031: Fix check_rtc_irq

Chen Qun (1):
      arm: Add missing test name prefix for pl031 and spinlock

Zeng Tao (1):
      devicetree: Fix the dt_for_each_cpu_node

 .gitlab-ci.yml                |  2 +-
 arm/Makefile.arm              |  2 +-
 arm/cache.c                   |  3 +-
 arm/cstart.S                  | 25 +++++++++++--
 arm/cstart64.S                |  5 ++-
 arm/pl031.c                   |  5 +--
 arm/spinlock-test.c           |  1 +
 arm/timer.c                   | 22 ++++++------
 lib/arm/asm/gic-v3.h          |  1 +
 lib/arm/asm/gic.h             |  1 +
 lib/arm/asm/mmu-api.h         |  2 +-
 lib/arm/asm/mmu.h             | 18 ++++++----
 lib/arm/asm/pgtable-hwdef.h   | 11 ++++++
 lib/arm/asm/pgtable.h         | 20 ++++++++---
 lib/arm/mmu.c                 | 60 ++++++++++++++++++-------------
 lib/arm/psci.c                |  4 +--
 lib/arm64/asm/pgtable-hwdef.h |  3 ++
 lib/arm64/asm/pgtable.h       | 15 ++++++--
 lib/devicetree.c              |  2 +-
 lib/linux/compiler.h          | 83 +++++++++++++++++++++++++++++++++++++++++++
 20 files changed, 221 insertions(+), 64 deletions(-)
 create mode 100644 lib/linux/compiler.h

-- 
2.21.0


* [PULL kvm-unit-tests 01/17] arm: Add missing test name prefix for pl031 and spinlock
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Chen Qun

From: Chen Qun <kuhn.chenqun@huawei.com>

The pl031 and spinlock test cases report without a test name prefix.
When running the unit tests in TAP mode (./run_tests.sh -t), it is
difficult to tell which test produced which results.

The test results:
ok 13 - Periph/PCell IDs match
ok 14 - R/O fields are R/O
ok 15 - RTC ticks at 1HZ
ok 16 - RTC IRQ not pending yet
...
ok 24 -   RTC IRQ not pending anymore
ok 25 - CPU1: Done - Errors: 0
ok 26 - CPU0: Done - Errors: 0

It should be like this:
ok 13 - pl031: Periph/PCell IDs match
ok 14 - pl031: R/O fields are R/O
ok 15 - pl031: RTC ticks at 1HZ
ok 16 - pl031: RTC IRQ not pending yet
...
ok 24 - pl031:   RTC IRQ not pending anymore
ok 25 - spinlock: CPU0: Done - Errors: 0
ok 26 - spinlock: CPU1: Done - Errors: 0

Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/pl031.c         | 1 +
 arm/spinlock-test.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arm/pl031.c b/arm/pl031.c
index 1f63ef13994f..a6adf6845f55 100644
--- a/arm/pl031.c
+++ b/arm/pl031.c
@@ -252,6 +252,7 @@ int main(int argc, char **argv)
 		return 0;
 	}
 
+	report_prefix_push("pl031");
 	report(!check_id(), "Periph/PCell IDs match");
 	report(!check_ro(), "R/O fields are R/O");
 	report(!check_rtc_freq(), "RTC ticks at 1HZ");
diff --git a/arm/spinlock-test.c b/arm/spinlock-test.c
index a63fb41ccd91..73aea76add3e 100644
--- a/arm/spinlock-test.c
+++ b/arm/spinlock-test.c
@@ -72,6 +72,7 @@ static void test_spinlock(void *data __unused)
 
 int main(int argc, char **argv)
 {
+	report_prefix_push("spinlock");
 	if (argc > 1 && strcmp(argv[1], "bad") != 0) {
 		lock_ops.lock = gcc_builtin_lock;
 		lock_ops.unlock = gcc_builtin_unlock;
-- 
2.21.0


* [PULL kvm-unit-tests 02/17] arm: Enable the VFP
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Thomas Huth

Variable argument macros frequently depend on floating point
registers. Indeed we needed to enable the VFP for arm64 since its
introduction in order to use printf and the like. Somehow we
didn't need to do that for arm32 until recently when compiling
with GCC 9.
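
For reference, here is the same sequence rendered as C with inline
assembly (a sketch only, not part of the patch; it assumes PL1
execution, where CPACR and FPEXC are writable):

	static void enable_vfp(void)
	{
		/* Grant full access to coprocessors CP10 and CP11 (CPACR) */
		unsigned long cpacr = (3UL << 22) | (3UL << 20);

		asm volatile("mcr p15, 0, %0, c1, c0, 2" :: "r" (cpacr));
		asm volatile("isb");

		/* Set FPEXC.EN to enable Advanced SIMD and VFP */
		asm volatile("vmsr fpexc, %0" :: "r" (1UL << 30));
	}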

Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .gitlab-ci.yml   |  2 +-
 arm/Makefile.arm |  2 +-
 arm/cstart.S     | 14 +++++++++++++-
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index fbf3328a19ea..a9dc16a2d6fd 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -17,7 +17,7 @@ build-aarch64:
 
 build-arm:
  script:
- - dnf install -y qemu-system-arm gcc-arm-linux-gnu-8.2.1-1.fc30.2
+ - dnf install -y qemu-system-arm gcc-arm-linux-gnu
  - ./configure --arch=arm --cross-prefix=arm-linux-gnu-
  - make -j2
  - ACCEL=tcg ./run_tests.sh
diff --git a/arm/Makefile.arm b/arm/Makefile.arm
index 43b4be1e05ee..d379a2800749 100644
--- a/arm/Makefile.arm
+++ b/arm/Makefile.arm
@@ -5,7 +5,7 @@
 #
 bits = 32
 ldarch = elf32-littlearm
-machine = -marm
+machine = -marm -mfpu=vfp
 
 # stack.o relies on frame pointers.
 KEEP_FRAME_POINTER := y
diff --git a/arm/cstart.S b/arm/cstart.S
index 114726feab82..bc6219d8a3ee 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -50,10 +50,11 @@ start:
 	mov	r0, r2
 	push	{r0-r1}
 
-	/* set up vector table and mode stacks */
+	/* set up vector table, mode stacks, and enable the VFP */
 	mov	r0, lr			@ lr is stack top (see above),
 					@ which is the exception stacks base
 	bl	exceptions_init
+	bl	enable_vfp
 
 	/* complete setup */
 	pop	{r0-r1}
@@ -100,6 +101,16 @@ exceptions_init:
 	isb
 	mov	pc, lr
 
+enable_vfp:
+	/* Enable full access to CP10 and CP11: */
+	mov	r0, #(3 << 22 | 3 << 20)
+	mcr	p15, 0, r0, c1, c0, 2
+	isb
+	/* Set the FPEXC.EN bit to enable Advanced SIMD and VFP: */
+	mov	r0, #(1 << 30)
+	vmsr	fpexc, r0
+	mov	pc, lr
+
 .text
 
 .global get_mmu_off
@@ -130,6 +141,7 @@ secondary_entry:
 	ldr	r0, [r1]
 	mov	sp, r0
 	bl	exceptions_init
+	bl	enable_vfp
 
 	/* finish init in C code */
 	bl	secondary_cinit
-- 
2.21.0


* [PULL kvm-unit-tests 03/17] arm/arm64: PL031: Fix check_rtc_irq
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexander Graf

Since QEMU commit 83ad95957c7e ("pl031: Expose RTCICR as proper WC
register") the PL031 test gets into an infinite loop. Now we must
write bit zero of RTCICR to clear the IRQ status. Before, writing
anything to RTCICR would work. As '1' is a member of 'anything',
writing it should work for old QEMU as well.

Cc: Alexander Graf <graf@amazon.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <graf@amazon.com>
---
 arm/pl031.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arm/pl031.c b/arm/pl031.c
index a6adf6845f55..86035fa407e6 100644
--- a/arm/pl031.c
+++ b/arm/pl031.c
@@ -143,8 +143,8 @@ static void irq_handler(struct pt_regs *regs)
 		report(readl(&pl031->ris) == 1, "  RTC RIS == 1");
 		report(readl(&pl031->mis) == 1, "  RTC MIS == 1");
 
-		/* Writing any value should clear IRQ status */
-		writel(0x80000000ULL, &pl031->icr);
+		/* Writing one to bit zero should clear IRQ status */
+		writel(1, &pl031->icr);
 
 		report(readl(&pl031->ris) == 0, "  RTC RIS == 0");
 		report(readl(&pl031->mis) == 0, "  RTC MIS == 0");
-- 
2.21.0


* [PULL kvm-unit-tests 04/17] devicetree: Fix the dt_for_each_cpu_node
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Zeng Tao

From: Zeng Tao <prime.zeng@hisilicon.com>

If the /cpus node contains nodes other than /cpus/cpu*, for example
/cpus/cpu-map, the test will issue an unexpected assert error as
follows:
[root@localhost]# ./arm-run arm/spinlock-test.flat
qemu-system-aarch64 -nodefaults -machine virt,gic-version=host,accel=kvm
 -cpu host -device virtio-serial-device -device virtconsole,chardev=ctd
-chardev testdev,id=ctd -device pci-testdev -display none -serial stdio
-kernel arm/spinlock-test.flat # -initrd /tmp/tmp.mwPLiF4EWm
lib/arm/setup.c:64: assert failed: ret == 0
        STACK:

Fix this by ignoring the non-cpu subnodes instead of returning an error.

Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/devicetree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/devicetree.c b/lib/devicetree.c
index 2b89178a109b..102032491e26 100644
--- a/lib/devicetree.c
+++ b/lib/devicetree.c
@@ -225,7 +225,7 @@ int dt_for_each_cpu_node(void (*func)(int fdtnode, u64 regval, void *info),
 
 		prop = fdt_get_property(fdt, cpu, "device_type", &len);
 		if (prop == NULL)
-			return len;
+			continue;
 
 		if (len != 4 || strcmp((char *)prop->data, "cpu"))
 			continue;
-- 
2.21.0


* [PULL kvm-unit-tests 05/17] lib: arm/arm64: Remove unnecessary dcache maintenance operations
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei, Mark Rutland

From: Alexandru Elisei <alexandru.elisei@arm.com>

On ARMv7 with multiprocessing extensions (which are mandated by the
virtualization extensions [1]), and on ARMv8, translation table walks are
coherent [2, 3], which means that no dcache maintenance operations are
required when changing the tables. Remove the maintenance operations so
that we do only the minimum required to ensure correctness.

Translation table walks are coherent if the memory where the tables
themselves reside has the same shareability and cacheability attributes
as the translation table walks. For ARMv8, this is already the case, and
it is only a matter of removing the cache operations.

However, for ARMv7, translation table walks were being configured as
Non-shareable (TTBCR.SH0 = 0b00) and Non-cacheable
(TTBCR.{I,O}RGN0 = 0b00). Fix that by marking them as Inner Shareable,
Normal memory, Inner and Outer Write-Back Write-Allocate Cacheable.

Because translation table walks are now coherent on arm, replace the
TLBIMVAA operation with TLBIMVAAIS in flush_tlb_page, which acts on the
Inner Shareable domain instead of being private to the PE.

The functions that update the translation table are called when the MMU
is off, or to modify permissions, in the case of the cache test, so
break-before-make [4] is not necessary.

[1] ARM DDI 0406C.d, section B1.7
[2] ARM DDI 0406C.d, section B3.3.1
[3] ARM DDI 0487E.a, section D13.2.72
[4] ARM DDI 0487E.a, section K11.5.3

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/cstart.S                |  7 +++++--
 lib/arm/asm/mmu.h           |  4 ++--
 lib/arm/asm/pgtable-hwdef.h |  8 ++++++++
 lib/arm/mmu.c               | 14 +-------------
 4 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/arm/cstart.S b/arm/cstart.S
index bc6219d8a3ee..8c041da50ae2 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -9,6 +9,7 @@
 #include <auxinfo.h>
 #include <asm/thread_info.h>
 #include <asm/asm-offsets.h>
+#include <asm/pgtable-hwdef.h>
 #include <asm/ptrace.h>
 #include <asm/sysreg.h>
 
@@ -166,9 +167,11 @@ halt:
 .globl asm_mmu_enable
 asm_mmu_enable:
 	/* TTBCR */
-	mrc	p15, 0, r2, c2, c0, 2
-	orr	r2, #(1 << 31)		@ TTB_EAE
+	ldr	r2, =(TTBCR_EAE | 				\
+		      TTBCR_SH0_SHARED | 			\
+		      TTBCR_IRGN0_WBWA | TTBCR_ORGN0_WBWA)
 	mcr	p15, 0, r2, c2, c0, 2
+	isb
 
 	/* MAIR */
 	ldr	r2, =PRRR
diff --git a/lib/arm/asm/mmu.h b/lib/arm/asm/mmu.h
index 915c2b07dead..361f3cdcc3d5 100644
--- a/lib/arm/asm/mmu.h
+++ b/lib/arm/asm/mmu.h
@@ -31,8 +31,8 @@ static inline void flush_tlb_all(void)
 
 static inline void flush_tlb_page(unsigned long vaddr)
 {
-	/* TLBIMVAA */
-	asm volatile("mcr p15, 0, %0, c8, c7, 3" :: "r" (vaddr));
+	/* TLBIMVAAIS */
+	asm volatile("mcr p15, 0, %0, c8, c3, 3" :: "r" (vaddr));
 	dsb();
 	isb();
 }
diff --git a/lib/arm/asm/pgtable-hwdef.h b/lib/arm/asm/pgtable-hwdef.h
index c08e6e2c01b4..4f24c78ee011 100644
--- a/lib/arm/asm/pgtable-hwdef.h
+++ b/lib/arm/asm/pgtable-hwdef.h
@@ -108,4 +108,12 @@
 #define PHYS_MASK_SHIFT		(40)
 #define PHYS_MASK		((_AC(1, ULL) << PHYS_MASK_SHIFT) - 1)
 
+#define TTBCR_IRGN0_WBWA	(_AC(1, UL) << 8)
+#define TTBCR_ORGN0_WBWA	(_AC(1, UL) << 10)
+#define TTBCR_SH0_SHARED	(_AC(3, UL) << 12)
+#define TTBCR_IRGN1_WBWA	(_AC(1, UL) << 24)
+#define TTBCR_ORGN1_WBWA	(_AC(1, UL) << 26)
+#define TTBCR_SH1_SHARED	(_AC(3, UL) << 28)
+#define TTBCR_EAE		(_AC(1, UL) << 31)
+
 #endif /* _ASMARM_PGTABLE_HWDEF_H_ */
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 78db22e6af14..5c31c00ccb31 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -73,17 +73,6 @@ void mmu_disable(void)
 	asm_mmu_disable();
 }
 
-static void flush_entry(pgd_t *pgtable, uintptr_t vaddr)
-{
-	pgd_t *pgd = pgd_offset(pgtable, vaddr);
-	pmd_t *pmd = pmd_offset(pgd, vaddr);
-
-	flush_dcache_addr((ulong)pgd);
-	flush_dcache_addr((ulong)pmd);
-	flush_dcache_addr((ulong)pte_offset(pmd, vaddr));
-	flush_tlb_page(vaddr);
-}
-
 static pteval_t *get_pte(pgd_t *pgtable, uintptr_t vaddr)
 {
 	pgd_t *pgd = pgd_offset(pgtable, vaddr);
@@ -98,7 +87,7 @@ static pteval_t *install_pte(pgd_t *pgtable, uintptr_t vaddr, pteval_t pte)
 	pteval_t *p_pte = get_pte(pgtable, vaddr);
 
 	*p_pte = pte;
-	flush_entry(pgtable, vaddr);
+	flush_tlb_page(vaddr);
 	return p_pte;
 }
 
@@ -148,7 +137,6 @@ void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
 		pgd_val(*pgd) = paddr;
 		pgd_val(*pgd) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
 		pgd_val(*pgd) |= pgprot_val(prot);
-		flush_dcache_addr((ulong)pgd);
 		flush_tlb_page(vaddr);
 	}
 }
-- 
2.21.0


* [PULL kvm-unit-tests 06/17] lib: arm: Add proper data synchronization barriers for TLBIs
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

We need to issue a DSB before doing TLB invalidation to make sure that the
table walker sees the new VA mapping after the TLBI finishes. For
flush_tlb_page, we do a DSB ISHST (synchronization barrier for writes in
the Inner Shareable domain) because translation table walks are now
coherent for arm. For local_flush_tlb_all, we only need to affect the
Non-shareable domain, and we do a DSB NSHST. We need a synchronization
barrier here, and not a memory ordering barrier, because a table walk
is not an explicit memory access and is therefore not ordered by a DMB.

For the same reasons, we downgrade the full system DSB after the TLBI to a
DSB ISH (synchronization barrier for reads and writes in the Inner
Shareable domain), and, respectively, DSB NSH (in the Non-shareable
domain).

With these two changes, our TLB maintenance functions now match what Linux
does in __flush_tlb_kernel_page, and, respectively, in local_flush_tlb_all.

A similar change was implemented in Linux commit 62cbbc42e001 ("ARM: tlb:
reduce scope of barrier domains for TLB invalidation").
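
Putting it together, the pattern for updating a single mapping now
looks like this (a sketch; the PTE store is shown only for context,
and the TLBI encoding is the one flush_tlb_page uses):

	*pte = entry;		/* update the descriptor */
	dsb(ishst);		/* make the write visible to the table walker */
	/* TLBIMVAAIS */
	asm volatile("mcr p15, 0, %0, c8, c3, 3" :: "r" (vaddr));
	dsb(ish);		/* wait for the invalidation to complete */
	isb();			/* synchronize the context on this PE */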

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/mmu.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/arm/asm/mmu.h b/lib/arm/asm/mmu.h
index 361f3cdcc3d5..2bf8965ed35e 100644
--- a/lib/arm/asm/mmu.h
+++ b/lib/arm/asm/mmu.h
@@ -17,9 +17,10 @@
 
 static inline void local_flush_tlb_all(void)
 {
+	dsb(nshst);
 	/* TLBIALL */
 	asm volatile("mcr p15, 0, %0, c8, c7, 0" :: "r" (0));
-	dsb();
+	dsb(nsh);
 	isb();
 }
 
@@ -31,9 +32,10 @@ static inline void flush_tlb_all(void)
 
 static inline void flush_tlb_page(unsigned long vaddr)
 {
+	dsb(ishst);
 	/* TLBIMVAAIS */
 	asm volatile("mcr p15, 0, %0, c8, c3, 3" :: "r" (vaddr));
-	dsb();
+	dsb(ish);
 	isb();
 }
 
-- 
2.21.0


* [PULL kvm-unit-tests 07/17] lib: Add WRITE_ONCE and READ_ONCE implementations in compiler.h
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini
  Cc: Alexandru Elisei, Drew Jones, Laurent Vivier, Thomas Huth,
	David Hildenbrand, Andre Przywara

From: Alexandru Elisei <alexandru.elisei@arm.com>

Add the WRITE_ONCE and READ_ONCE macros which are used to prevent the
compiler from optimizing a store or a load, respectively, into something
else.
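
A minimal usage sketch (the flag and data variables are hypothetical,
not from this series):

	#include <linux/compiler.h>

	static int data, flag;

	void producer(void)
	{
		WRITE_ONCE(data, 42);	/* emitted as a single store */
		WRITE_ONCE(flag, 1);	/* not merged with the store above */
	}

	void consumer(void)
	{
		/* reloaded on every iteration, never hoisted out */
		while (!READ_ONCE(flag))
			;
	}

Note that these macros only constrain the compiler; ordering across
CPUs still requires the appropriate memory barriers.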

Cc: Drew Jones <drjones@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/linux/compiler.h | 83 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)
 create mode 100644 lib/linux/compiler.h

diff --git a/lib/linux/compiler.h b/lib/linux/compiler.h
new file mode 100644
index 000000000000..2d72f18c36e5
--- /dev/null
+++ b/lib/linux/compiler.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Taken from Linux commit 219d54332a09 ("Linux 5.4"), from the file
+ * tools/include/linux/compiler.h, with minor changes.
+ */
+#ifndef __LINUX_COMPILER_H
+#define __LINUX_COMPILER_H
+
+#ifndef __ASSEMBLY__
+
+#include <stdint.h>
+
+#define barrier()	asm volatile("" : : : "memory")
+
+#define __always_inline	inline __attribute__((always_inline))
+
+static __always_inline void __read_once_size(const volatile void *p, void *res, int size)
+{
+	switch (size) {
+	case 1: *(uint8_t *)res = *(volatile uint8_t *)p; break;
+	case 2: *(uint16_t *)res = *(volatile uint16_t *)p; break;
+	case 4: *(uint32_t *)res = *(volatile uint32_t *)p; break;
+	case 8: *(uint64_t *)res = *(volatile uint64_t *)p; break;
+	default:
+		barrier();
+		__builtin_memcpy((void *)res, (const void *)p, size);
+		barrier();
+	}
+}
+
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
+ *
+ * These two macros will also work on aggregate data types like structs or
+ * unions. If the size of the accessed data type exceeds the word size of
+ * the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will
+ * fall back to memcpy and print a compile-time warning.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+
+#define READ_ONCE(x)					\
+({							\
+	union { typeof(x) __val; char __c[1]; } __u =	\
+		{ .__c = { 0 } };			\
+	__read_once_size(&(x), __u.__c, sizeof(x));	\
+	__u.__val;					\
+})
+
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
+{
+	switch (size) {
+	case 1: *(volatile uint8_t *) p = *(uint8_t  *) res; break;
+	case 2: *(volatile uint16_t *) p = *(uint16_t *) res; break;
+	case 4: *(volatile uint32_t *) p = *(uint32_t *) res; break;
+	case 8: *(volatile uint64_t *) p = *(uint64_t *) res; break;
+	default:
+		barrier();
+		__builtin_memcpy((void *)p, (const void *)res, size);
+		barrier();
+	}
+}
+
+#define WRITE_ONCE(x, val)				\
+({							\
+	union { typeof(x) __val; char __c[1]; } __u =	\
+		{ .__val = (val) }; 			\
+	__write_once_size(&(x), __u.__c, sizeof(x));	\
+	__u.__val;					\
+})
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !__LINUX_COMPILER_H */
-- 
2.21.0


* [PULL kvm-unit-tests 08/17] lib: arm/arm64: Use WRITE_ONCE to update the translation tables
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei, Mark Rutland, Andre Przywara

From: Alexandru Elisei <alexandru.elisei@arm.com>

Use WRITE_ONCE to prevent store tearing when updating an entry in the
translation tables. Without WRITE_ONCE the compiler can, although it
is unlikely, emit several stores when changing the table, and we might
end up with bogus TLB entries.

It's worth noting that the existing code is mostly fine without any
changes because the translation tables are updated in one of the
following situations:

- When the tables are being created with the MMU off, which means no TLB
  caching is being performed.

- When new page table entries are added as a result of vmalloc'ing a
  stack for a secondary CPU, which doesn't happen very often.

- When clearing the PTE_USER bit for the cache test, and store tearing
  has no effect on the table walker because there are no intermediate
  values between bit values 0 and 1. We still use WRITE_ONCE in this case
  for consistency.

However, the functions are global and there is nothing preventing someone
from writing a test that uses them in a different scenario. Let's make
sure that when that happens, there will be no breakage once in a blue
moon.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/pgtable.h   | 12 ++++++++----
 lib/arm/mmu.c           | 19 +++++++++++++------
 lib/arm64/asm/pgtable.h |  7 +++++--
 3 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
index 241dff69b38a..794514b8c927 100644
--- a/lib/arm/asm/pgtable.h
+++ b/lib/arm/asm/pgtable.h
@@ -19,6 +19,8 @@
  * because we always allocate their pages with alloc_page(), and
  * alloc_page() always returns identity mapped pages.
  */
+#include <linux/compiler.h>
+
 #define pgtable_va(x)		((void *)(unsigned long)(x))
 #define pgtable_pa(x)		((unsigned long)(x))
 
@@ -58,8 +60,9 @@ static inline pmd_t *pmd_alloc_one(void)
 static inline pmd_t *pmd_alloc(pgd_t *pgd, unsigned long addr)
 {
 	if (pgd_none(*pgd)) {
-		pmd_t *pmd = pmd_alloc_one();
-		pgd_val(*pgd) = pgtable_pa(pmd) | PMD_TYPE_TABLE;
+		pgd_t entry;
+		pgd_val(entry) = pgtable_pa(pmd_alloc_one()) | PMD_TYPE_TABLE;
+		WRITE_ONCE(*pgd, entry);
 	}
 	return pmd_offset(pgd, addr);
 }
@@ -84,8 +87,9 @@ static inline pte_t *pte_alloc_one(void)
 static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = pte_alloc_one();
-		pmd_val(*pmd) = pgtable_pa(pte) | PMD_TYPE_TABLE;
+		pmd_t entry;
+		pmd_val(entry) = pgtable_pa(pte_alloc_one()) | PMD_TYPE_TABLE;
+		WRITE_ONCE(*pmd, entry);
 	}
 	return pte_offset(pmd, addr);
 }
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 5c31c00ccb31..86a829966a3c 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -17,6 +17,8 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
 
+#include <linux/compiler.h>
+
 extern unsigned long etext;
 
 pgd_t *mmu_idmap;
@@ -86,7 +88,7 @@ static pteval_t *install_pte(pgd_t *pgtable, uintptr_t vaddr, pteval_t pte)
 {
 	pteval_t *p_pte = get_pte(pgtable, vaddr);
 
-	*p_pte = pte;
+	WRITE_ONCE(*p_pte, pte);
 	flush_tlb_page(vaddr);
 	return p_pte;
 }
@@ -131,12 +133,15 @@ void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
 	phys_addr_t paddr = phys_start & PGDIR_MASK;
 	uintptr_t vaddr = virt_offset & PGDIR_MASK;
 	uintptr_t virt_end = phys_end - paddr + vaddr;
+	pgd_t *pgd;
+	pgd_t entry;
 
 	for (; vaddr < virt_end; vaddr += PGDIR_SIZE, paddr += PGDIR_SIZE) {
-		pgd_t *pgd = pgd_offset(pgtable, vaddr);
-		pgd_val(*pgd) = paddr;
-		pgd_val(*pgd) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
-		pgd_val(*pgd) |= pgprot_val(prot);
+		pgd_val(entry) = paddr;
+		pgd_val(entry) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
+		pgd_val(entry) |= pgprot_val(prot);
+		pgd = pgd_offset(pgtable, vaddr);
+		WRITE_ONCE(*pgd, entry);
 		flush_tlb_page(vaddr);
 	}
 }
@@ -210,6 +215,7 @@ void mmu_clear_user(unsigned long vaddr)
 {
 	pgd_t *pgtable;
 	pteval_t *pte;
+	pteval_t entry;
 
 	if (!mmu_enabled())
 		return;
@@ -217,6 +223,7 @@ void mmu_clear_user(unsigned long vaddr)
 	pgtable = current_thread_info()->pgtable;
 	pte = get_pte(pgtable, vaddr);
 
-	*pte &= ~PTE_USER;
+	entry = *pte & ~PTE_USER;
+	WRITE_ONCE(*pte, entry);
 	flush_tlb_page(vaddr);
 }
diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
index ee0a2c88cc18..dbf9e7253b71 100644
--- a/lib/arm64/asm/pgtable.h
+++ b/lib/arm64/asm/pgtable.h
@@ -18,6 +18,8 @@
 #include <asm/page.h>
 #include <asm/pgtable-hwdef.h>
 
+#include <linux/compiler.h>
+
 /*
  * We can convert va <=> pa page table addresses with simple casts
  * because we always allocate their pages with alloc_page(), and
@@ -66,8 +68,9 @@ static inline pte_t *pte_alloc_one(void)
 static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = pte_alloc_one();
-		pmd_val(*pmd) = pgtable_pa(pte) | PMD_TYPE_TABLE;
+		pmd_t entry;
+		pmd_val(entry) = pgtable_pa(pte_alloc_one()) | PMD_TYPE_TABLE;
+		WRITE_ONCE(*pmd, entry);
 	}
 	return pte_offset(pmd, addr);
 }
-- 
2.21.0


* [PULL kvm-unit-tests 09/17] lib: arm/arm64: Remove unused CPU_OFF parameter
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei, Andre Przywara

From: Alexandru Elisei <alexandru.elisei@arm.com>

The first version of PSCI required an argument for CPU_OFF, the power_state
argument, which was removed in version 0.2 of the specification [1].
kvm-unit-tests supports PSCI 0.2, and KVM ignores any CPU_OFF parameters,
so let's remove the PSCI_POWER_STATE_TYPE_POWER_DOWN parameter.

[1] ARM DEN 0022D, section 7.3.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/psci.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/arm/psci.c b/lib/arm/psci.c
index c3d399064ae3..936c83948b6a 100644
--- a/lib/arm/psci.c
+++ b/lib/arm/psci.c
@@ -40,11 +40,9 @@ int cpu_psci_cpu_boot(unsigned int cpu)
 	return err;
 }
 
-#define PSCI_POWER_STATE_TYPE_POWER_DOWN (1U << 16)
 void cpu_psci_cpu_die(void)
 {
-	int err = psci_invoke(PSCI_0_2_FN_CPU_OFF,
-			PSCI_POWER_STATE_TYPE_POWER_DOWN, 0, 0);
+	int err = psci_invoke(PSCI_0_2_FN_CPU_OFF, 0, 0, 0);
 	printf("CPU%d unable to power off (error = %d)\n", smp_processor_id(), err);
 }
 
-- 
2.21.0


* [PULL kvm-unit-tests 10/17] lib: arm/arm64: Add missing include for alloc_page.h in pgtable.h
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

pgtable.h is used only by mmu.c, where it is included after alloc_page.h.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/pgtable.h   | 1 +
 lib/arm64/asm/pgtable.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
index 794514b8c927..e7f967071980 100644
--- a/lib/arm/asm/pgtable.h
+++ b/lib/arm/asm/pgtable.h
@@ -13,6 +13,7 @@
  *
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
+#include <alloc_page.h>
 
 /*
  * We can convert va <=> pa page table addresses with simple casts
diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
index dbf9e7253b71..6412d67759e4 100644
--- a/lib/arm64/asm/pgtable.h
+++ b/lib/arm64/asm/pgtable.h
@@ -14,6 +14,7 @@
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 #include <alloc.h>
+#include <alloc_page.h>
 #include <asm/setup.h>
 #include <asm/page.h>
 #include <asm/pgtable-hwdef.h>
-- 
2.21.0


* [PULL kvm-unit-tests 11/17] lib: arm: Implement flush_tlb_all
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

flush_tlb_all performs a TLBIALL, which invalidates the entire TLB and
affects only the executing PE; translation table walks are now Inner
Shareable, so execute a TLBIALLIS (invalidate TLB Inner Shareable) instead.
TLBIALLIS is the equivalent of TLBIALL [1] when the multiprocessing
extensions are implemented, which are mandated by the virtualization
extensions.

Also add the necessary barriers to flush_tlb_all and a comment to
flush_dcache_addr stating which instruction it uses (unsurprisingly,
DCCIMVAC, which does a dcache clean and invalidate by VA to the PoC).

[1] ARM DDI 0406C.d, section B3.10.6

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/mmu.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/lib/arm/asm/mmu.h b/lib/arm/asm/mmu.h
index 2bf8965ed35e..122874b8aebe 100644
--- a/lib/arm/asm/mmu.h
+++ b/lib/arm/asm/mmu.h
@@ -26,8 +26,11 @@ static inline void local_flush_tlb_all(void)
 
 static inline void flush_tlb_all(void)
 {
-	//TODO
-	local_flush_tlb_all();
+	dsb(ishst);
+	/* TLBIALLIS */
+	asm volatile("mcr p15, 0, %0, c8, c3, 0" :: "r" (0));
+	dsb(ish);
+	isb();
 }
 
 static inline void flush_tlb_page(unsigned long vaddr)
@@ -41,6 +44,7 @@ static inline void flush_tlb_page(unsigned long vaddr)
 
 static inline void flush_dcache_addr(unsigned long vaddr)
 {
+	/* DCCIMVAC */
 	asm volatile("mcr p15, 0, %0, c7, c14, 1" :: "r" (vaddr));
 }
 
-- 
2.21.0


* [PULL kvm-unit-tests 12/17] lib: arm/arm64: Teach mmu_clear_user about block mappings
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

kvm-unit-tests uses block mappings, so let's expand the mmu_clear_user
function to handle those as well.

Now that the function knows about block mappings, we cannot simply
assume that if an address isn't mapped we can map it as a regular page.
Change the semantics of the function to fail quite loudly if the address
isn't mapped, and shift the burden onto the caller to map the address as a
page or block mapping before calling mmu_clear_user.

Also make mmu_clear_user more flexible by adding a pgtable parameter,
instead of assuming that the change always applies to the current
translation tables.
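
A sketch of the new calling convention (vaddr, paddr and prot are
placeholders): the caller establishes the mapping first, then clears
the user bit:

	pgd_t *pgtable = current_thread_info()->pgtable;

	/* map vaddr as a regular page before touching PTE_USER */
	mmu_set_range_ptes(pgtable, vaddr, paddr, paddr + PAGE_SIZE, prot);
	mmu_clear_user(pgtable, vaddr);	/* asserts if vaddr is unmapped */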

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/cache.c                   |  3 ++-
 lib/arm/asm/mmu-api.h         |  2 +-
 lib/arm/asm/pgtable-hwdef.h   |  3 +++
 lib/arm/asm/pgtable.h         |  7 +++++++
 lib/arm/mmu.c                 | 26 +++++++++++++++++++-------
 lib/arm64/asm/pgtable-hwdef.h |  3 +++
 lib/arm64/asm/pgtable.h       |  7 +++++++
 7 files changed, 42 insertions(+), 9 deletions(-)

diff --git a/arm/cache.c b/arm/cache.c
index 13dc5d52d40c..2756066fd4e9 100644
--- a/arm/cache.c
+++ b/arm/cache.c
@@ -2,6 +2,7 @@
 #include <alloc_page.h>
 #include <asm/mmu.h>
 #include <asm/processor.h>
+#include <asm/thread_info.h>
 
 #define NTIMES			(1 << 16)
 
@@ -47,7 +48,7 @@ static void check_code_generation(bool dcache_clean, bool icache_inval)
 	bool success;
 
 	/* Make sure we can execute from a writable page */
-	mmu_clear_user((unsigned long)code);
+	mmu_clear_user(current_thread_info()->pgtable, (unsigned long)code);
 
 	sctlr = read_sysreg(sctlr_el1);
 	if (sctlr & SCTLR_EL1_WXN) {
diff --git a/lib/arm/asm/mmu-api.h b/lib/arm/asm/mmu-api.h
index 8fe85ba31ec9..2bbe1faea900 100644
--- a/lib/arm/asm/mmu-api.h
+++ b/lib/arm/asm/mmu-api.h
@@ -22,5 +22,5 @@ extern void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
 extern void mmu_set_range_ptes(pgd_t *pgtable, uintptr_t virt_offset,
 			       phys_addr_t phys_start, phys_addr_t phys_end,
 			       pgprot_t prot);
-extern void mmu_clear_user(unsigned long vaddr);
+extern void mmu_clear_user(pgd_t *pgtable, unsigned long vaddr);
 #endif
diff --git a/lib/arm/asm/pgtable-hwdef.h b/lib/arm/asm/pgtable-hwdef.h
index 4f24c78ee011..4107e188014a 100644
--- a/lib/arm/asm/pgtable-hwdef.h
+++ b/lib/arm/asm/pgtable-hwdef.h
@@ -14,6 +14,8 @@
 #define PGDIR_SIZE		(_AC(1,UL) << PGDIR_SHIFT)
 #define PGDIR_MASK		(~((1 << PGDIR_SHIFT) - 1))
 
+#define PGD_VALID		(_AT(pgdval_t, 1) << 0)
+
 #define PTRS_PER_PTE		512
 #define PTRS_PER_PMD		512
 
@@ -54,6 +56,7 @@
 #define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
 #define PMD_TYPE_TABLE		(_AT(pmdval_t, 3) << 0)
 #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
+#define PMD_SECT_VALID		(_AT(pmdval_t, 1) << 0)
 #define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)
 #define PMD_BIT4		(_AT(pmdval_t, 0))
 #define PMD_DOMAIN(x)		(_AT(pmdval_t, 0))
diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
index e7f967071980..078dd16fa799 100644
--- a/lib/arm/asm/pgtable.h
+++ b/lib/arm/asm/pgtable.h
@@ -29,6 +29,13 @@
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pte_none(pte)		(!pte_val(pte))
 
+#define pgd_valid(pgd)		(pgd_val(pgd) & PGD_VALID)
+#define pmd_valid(pmd)		(pmd_val(pmd) & PMD_SECT_VALID)
+#define pte_valid(pte)		(pte_val(pte) & L_PTE_VALID)
+
+#define pmd_huge(pmd)	\
+	((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT)
+
 #define pgd_index(addr) \
 	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
 #define pgd_offset(pgtable, addr) ((pgtable) + pgd_index(addr))
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 86a829966a3c..928a3702c563 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -211,19 +211,31 @@ unsigned long __phys_to_virt(phys_addr_t addr)
 	return addr;
 }
 
-void mmu_clear_user(unsigned long vaddr)
+void mmu_clear_user(pgd_t *pgtable, unsigned long vaddr)
 {
-	pgd_t *pgtable;
-	pteval_t *pte;
-	pteval_t entry;
+	pgd_t *pgd;
+	pmd_t *pmd;
+	pte_t *pte;
 
 	if (!mmu_enabled())
 		return;
 
-	pgtable = current_thread_info()->pgtable;
-	pte = get_pte(pgtable, vaddr);
+	pgd = pgd_offset(pgtable, vaddr);
+	assert(pgd_valid(*pgd));
+	pmd = pmd_offset(pgd, vaddr);
+	assert(pmd_valid(*pmd));
+
+	if (pmd_huge(*pmd)) {
+		pmd_t entry = __pmd(pmd_val(*pmd) & ~PMD_SECT_USER);
+		WRITE_ONCE(*pmd, entry);
+		goto out_flush_tlb;
+	}
 
-	entry = *pte & ~PTE_USER;
+	pte = pte_offset(pmd, vaddr);
+	assert(pte_valid(*pte));
+	pte_t entry = __pte(pte_val(*pte) & ~PTE_USER);
 	WRITE_ONCE(*pte, entry);
+
+out_flush_tlb:
 	flush_tlb_page(vaddr);
 }
diff --git a/lib/arm64/asm/pgtable-hwdef.h b/lib/arm64/asm/pgtable-hwdef.h
index 045a3ce12645..33524899e5fa 100644
--- a/lib/arm64/asm/pgtable-hwdef.h
+++ b/lib/arm64/asm/pgtable-hwdef.h
@@ -22,6 +22,8 @@
 #define PGDIR_MASK		(~(PGDIR_SIZE-1))
 #define PTRS_PER_PGD		(1 << (VA_BITS - PGDIR_SHIFT))
 
+#define PGD_VALID		(_AT(pgdval_t, 1) << 0)
+
 /* From include/asm-generic/pgtable-nopmd.h */
 #define PMD_SHIFT		PGDIR_SHIFT
 #define PTRS_PER_PMD		1
@@ -71,6 +73,7 @@
 #define PTE_TYPE_MASK		(_AT(pteval_t, 3) << 0)
 #define PTE_TYPE_FAULT		(_AT(pteval_t, 0) << 0)
 #define PTE_TYPE_PAGE		(_AT(pteval_t, 3) << 0)
+#define PTE_VALID		(_AT(pteval_t, 1) << 0)
 #define PTE_TABLE_BIT		(_AT(pteval_t, 1) << 1)
 #define PTE_USER		(_AT(pteval_t, 1) << 6)		/* AP[1] */
 #define PTE_RDONLY		(_AT(pteval_t, 1) << 7)		/* AP[2] */
diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
index 6412d67759e4..e577d9cf304e 100644
--- a/lib/arm64/asm/pgtable.h
+++ b/lib/arm64/asm/pgtable.h
@@ -33,6 +33,13 @@
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pte_none(pte)		(!pte_val(pte))
 
+#define pgd_valid(pgd)		(pgd_val(pgd) & PGD_VALID)
+#define pmd_valid(pmd)		(pmd_val(pmd) & PMD_SECT_VALID)
+#define pte_valid(pte)		(pte_val(pte) & PTE_VALID)
+
+#define pmd_huge(pmd)	\
+	((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT)
+
 #define pgd_index(addr) \
 	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
 #define pgd_offset(pgtable, addr) ((pgtable) + pgd_index(addr))
-- 
2.21.0


* [PULL kvm-unit-tests 13/17] arm64: timer: Write to ICENABLER to disable timer IRQ
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei, Andre Przywara

From: Alexandru Elisei <alexandru.elisei@arm.com>

According to the Generic Interrupt Controller versions 2, 3 and 4 architecture
specifications, a write of 0 to the GIC{D,R}_ISENABLER{,0} registers is
ignored; this is also how KVM emulates the corresponding register. Write
instead to the ICENABLER register when disabling the timer interrupt.

Note that fortunately for us, the timer test was still working as intended
because KVM does the sensible thing and all interrupts are disabled by
default when creating a VM.
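
The enable registers come in write-one-to-set/write-one-to-clear
pairs, so enabling and disabling write the same mask to different
registers (a sketch using the pointers set up in the diff below):

	u32 mask = 1 << PPI(info->irq);

	writel(mask, gic_isenabler);	/* W1S: enable the timer PPI */
	writel(mask, gic_icenabler);	/* W1C: disable it; writing 0 does nothing */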

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/timer.c          | 22 +++++++++++-----------
 lib/arm/asm/gic-v3.h |  1 +
 lib/arm/asm/gic.h    |  1 +
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arm/timer.c b/arm/timer.c
index b30fd6b6d90b..f390e8e65d31 100644
--- a/arm/timer.c
+++ b/arm/timer.c
@@ -17,6 +17,9 @@
 #define ARCH_TIMER_CTL_ISTATUS (1 << 2)
 
 static void *gic_ispendr;
+static void *gic_isenabler;
+static void *gic_icenabler;
+
 static bool ptimer_unsupported;
 
 static void ptimer_unsupported_handler(struct pt_regs *regs, unsigned int esr)
@@ -132,19 +135,12 @@ static struct timer_info ptimer_info = {
 
 static void set_timer_irq_enabled(struct timer_info *info, bool enabled)
 {
-	u32 val = 0;
+	u32 val = 1 << PPI(info->irq);
 
 	if (enabled)
-		val = 1 << PPI(info->irq);
-
-	switch (gic_version()) {
-	case 2:
-		writel(val, gicv2_dist_base() + GICD_ISENABLER + 0);
-		break;
-	case 3:
-		writel(val, gicv3_sgi_base() + GICR_ISENABLER0);
-		break;
-	}
+		writel(val, gic_isenabler);
+	else
+		writel(val, gic_icenabler);
 }
 
 static void irq_handler(struct pt_regs *regs)
@@ -306,9 +302,13 @@ static void test_init(void)
 	switch (gic_version()) {
 	case 2:
 		gic_ispendr = gicv2_dist_base() + GICD_ISPENDR;
+		gic_isenabler = gicv2_dist_base() + GICD_ISENABLER;
+		gic_icenabler = gicv2_dist_base() + GICD_ICENABLER;
 		break;
 	case 3:
 		gic_ispendr = gicv3_sgi_base() + GICD_ISPENDR;
+		gic_isenabler = gicv3_sgi_base() + GICR_ISENABLER0;
+		gic_icenabler = gicv3_sgi_base() + GICR_ICENABLER0;
 		break;
 	}
 
diff --git a/lib/arm/asm/gic-v3.h b/lib/arm/asm/gic-v3.h
index 347be2f9da17..0dc838b3ab2d 100644
--- a/lib/arm/asm/gic-v3.h
+++ b/lib/arm/asm/gic-v3.h
@@ -31,6 +31,7 @@
 /* Re-Distributor registers, offsets from SGI_base */
 #define GICR_IGROUPR0			GICD_IGROUPR
 #define GICR_ISENABLER0			GICD_ISENABLER
+#define GICR_ICENABLER0			GICD_ICENABLER
 #define GICR_IPRIORITYR0		GICD_IPRIORITYR
 
 #define ICC_SGI1R_AFFINITY_1_SHIFT	16
diff --git a/lib/arm/asm/gic.h b/lib/arm/asm/gic.h
index 1fc10a096259..09826fd5bc29 100644
--- a/lib/arm/asm/gic.h
+++ b/lib/arm/asm/gic.h
@@ -15,6 +15,7 @@
 #define GICD_IIDR			0x0008
 #define GICD_IGROUPR			0x0080
 #define GICD_ISENABLER			0x0100
+#define GICD_ICENABLER			0x0180
 #define GICD_ISPENDR			0x0200
 #define GICD_ICPENDR			0x0280
 #define GICD_ISACTIVER			0x0300
-- 
2.21.0


* [PULL kvm-unit-tests 14/17] lib: arm/arm64: Refuse to disable the MMU with non-identity stack pointer
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

When the MMU is off, all addresses are physical addresses. If the stack
pointer is not an identity mapped address (the virtual address is not the
same as the physical address), then we end up trying to access an invalid
memory region. This can happen if we call mmu_disable from a secondary CPU,
which has its stack allocated from the vmalloc region.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 928a3702c563..111e3a52591a 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -68,8 +68,12 @@ void mmu_enable(pgd_t *pgtable)
 extern void asm_mmu_disable(void);
 void mmu_disable(void)
 {
+	unsigned long sp = current_stack_pointer;
 	int cpu = current_thread_info()->cpu;
 
+	assert_msg(__virt_to_phys(sp) == sp,
+			"Attempting to disable MMU with non-identity mapped stack");
+
 	mmu_mark_disabled(cpu);
 
 	asm_mmu_disable();
-- 
2.21.0


* [PULL kvm-unit-tests 15/17] arm: cstart64.S: Downgrade TLBI to non-shareable in asm_mmu_enable
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

There's really no need to invalidate the TLB entries for all CPUs when
enabling the MMU for the current CPU, so use the non-shareable version of
the TLBI operation (and downgrade the DSB accordingly).

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/cstart64.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arm/cstart64.S b/arm/cstart64.S
index b0e8baa1a23a..6f49506ca19b 100644
--- a/arm/cstart64.S
+++ b/arm/cstart64.S
@@ -160,8 +160,8 @@ halt:
 .globl asm_mmu_enable
 asm_mmu_enable:
 	ic	iallu			// I+BTB cache invalidate
-	tlbi	vmalle1is		// invalidate I + D TLBs
-	dsb	ish
+	tlbi	vmalle1			// invalidate I + D TLBs
+	dsb	nsh
 
 	/* TCR */
 	ldr	x1, =TCR_TxSZ(VA_BITS) |		\
-- 
2.21.0


* [PULL kvm-unit-tests 16/17] arm/arm64: Invalidate TLB before enabling MMU
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

Let's invalidate the TLB before enabling the MMU, not after, so we don't
accidentally use a stale TLB mapping. For arm, we add a TLBIALL operation,
which applies only to the PE that executed the instruction [1]. For arm64,
we already do that in asm_mmu_enable.

We now find ourselves in a situation where we issue an extra invalidation
after asm_mmu_enable returns. Remove the now redundant call to
flush_tlb_all.

[1] ARM DDI 0406C.d, section B3.10.6

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/cstart.S  | 4 ++++
 lib/arm/mmu.c | 1 -
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arm/cstart.S b/arm/cstart.S
index 8c041da50ae2..e54e380e0d53 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -166,6 +166,10 @@ halt:
 .equ	NMRR,	0xff000004		@ MAIR1 (from Linux kernel)
 .globl asm_mmu_enable
 asm_mmu_enable:
+	/* TLBIALL */
+	mcr	p15, 0, r2, c8, c7, 0
+	dsb	nsh
+
 	/* TTBCR */
 	ldr	r2, =(TTBCR_EAE | 				\
 		      TTBCR_SH0_SHARED | 			\
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 111e3a52591a..5fb56180d334 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -59,7 +59,6 @@ void mmu_enable(pgd_t *pgtable)
 	struct thread_info *info = current_thread_info();
 
 	asm_mmu_enable(__pa(pgtable));
-	flush_tlb_all();
 
 	info->pgtable = pgtable;
 	mmu_mark_enabled(info->cpu);
-- 
2.21.0


* [PULL kvm-unit-tests 17/17] arm: cstart64.S: Remove icache invalidation from asm_mmu_enable
From: Andrew Jones @ 2020-01-06 10:03 UTC
  To: kvm, pbonzini; +Cc: Alexandru Elisei

From: Alexandru Elisei <alexandru.elisei@arm.com>

According to the ARM ARM [1]:

"In Armv8, any permitted instruction cache implementation can be
described as implementing the IVIPT Extension to the Arm architecture.

The formal definition of the Arm IVIPT Extension is that it reduces the
instruction cache maintenance requirement to the following condition:
Instruction cache maintenance is required only after writing new data to
a PA that holds an instruction".

We never patch instructions in the boot path, so remove the icache
invalidation from asm_mmu_enable. Tests that modify instructions (like
the cache test) should have their own icache maintenance operations.

[1] ARM DDI 0487E.a, section D5.11.2 "Instruction caches"
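
For reference, a sketch of the maintenance such a test must perform
itself after writing an instruction (arm64; assumes the address is
mapped and that a single cache line covers the written instruction):

	static void sync_icache(void *addr)
	{
		/* clean the dcache line to the point of unification... */
		asm volatile("dc cvau, %0" :: "r" (addr));
		asm volatile("dsb ish");
		/* ...then invalidate the stale icache line by VA */
		asm volatile("ic ivau, %0" :: "r" (addr));
		asm volatile("dsb ish");
		asm volatile("isb");
	}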

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arm/cstart64.S | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arm/cstart64.S b/arm/cstart64.S
index 6f49506ca19b..e5a561ea2e39 100644
--- a/arm/cstart64.S
+++ b/arm/cstart64.S
@@ -159,7 +159,6 @@ halt:
 
 .globl asm_mmu_enable
 asm_mmu_enable:
-	ic	iallu			// I+BTB cache invalidate
 	tlbi	vmalle1			// invalidate I + D TLBs
 	dsb	nsh
 
-- 
2.21.0


* Re: [PULL kvm-unit-tests 00/17] arm/arm64: fixes and updates
From: Paolo Bonzini @ 2020-01-08 18:04 UTC
  To: Andrew Jones, kvm

On 06/01/20 11:03, Andrew Jones wrote:
>   https://github.com/rhdrjones/kvm-unit-tests arm/queue

Pulled, thanks.

Paolo

