* [PATCH 00/18] KVM/arm64: Randomise EL2 mappings
@ 2017-12-06 14:38 ` Marc Zyngier
  0 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

Whilst KVM benefits from the kernel randomisation when running VHE,
there is no randomisation whatsoever of the EL2 mappings when the
kernel runs at EL1, as we directly use a fixed offset from the
linear mapping.

This series proposes to randomise the offset by inserting a few random
bits between the MSB of the linear mapping and the top of the HYP VA
(VA_BITS - 2). That's not a lot of random bits (on my Mustang, I get
13 bits), but that's better than nothing.
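
To illustrate the idea, here is a rough sketch (the names below are
made up and GENMASK_ULL is assumed; this is not the actual code from
the series) of what the kernel-to-HYP VA conversion amounts to:

	/* Random tag occupying bits [VA_BITS - 2 : tag_lsb], picked at boot */
	static u64 hyp_va_tag;
	/* First bit above the linear mapping's significant bits */
	static int tag_lsb;

	static unsigned long example_kern_hyp_va(unsigned long kva)
	{
		/* Keep the offset bits of the kernel VA... */
		kva &= GENMASK_ULL(tag_lsb - 1, 0);
		/* ...and insert the boot-time random tag above them */
		return kva | hyp_va_tag;
	}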

In order to achieve this, we need to be able to patch dynamic values
in the kernel text. This results in a bunch of changes to the
alternative framework, the insn library, cleanups in asm-offsets, and
a few more hacks in KVM itself (we get a new way to map the GIC at
EL2).

This has been tested on the FVP model, Seattle (both 39 and 48bit VA),
Mustang and Thunder-X. I've also done a sanity check on 32bit (which
is only impacted by the HYP IO VA stuff).

Thanks,

	M.

Marc Zyngier (18):
  arm64: asm-offsets: Avoid clashing DMA definitions
  arm64: asm-offsets: Remove unused definitions
  arm64: asm-offsets: Remove potential circular dependency
  arm64: alternatives: Enforce alignment of struct alt_instr
  arm64: alternatives: Add dynamic patching feature
  arm64: insn: Add N immediate encoding
  arm64: insn: Add encoder for bitwise operations using literals
  arm64: KVM: Dynamically patch the kernel/hyp VA mask
  arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
  arm64: insn: Add encoder for the EXTR instruction
  arm64: insn: Allow ADD/SUB (immediate) with LSL #12
  arm64: KVM: Introduce EL2 VA randomisation
  arm64: Update the KVM memory map documentation
  KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
  KVM: arm/arm64: Demote HYP VA range display to being a debug feature
  KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
  KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
  KVM: arm/arm64: Move HYP IO VAs to the "idmap" range

 Documentation/arm64/memory.txt             |   8 +-
 arch/arm/include/asm/kvm_hyp.h             |   6 +
 arch/arm/include/asm/kvm_mmu.h             |   4 +-
 arch/arm64/include/asm/alternative.h       |  47 ++++---
 arch/arm64/include/asm/alternative_types.h |  17 +++
 arch/arm64/include/asm/asm-offsets.h       |   2 +
 arch/arm64/include/asm/cpucaps.h           |   2 +-
 arch/arm64/include/asm/insn.h              |  16 +++
 arch/arm64/include/asm/kvm_hyp.h           |   9 ++
 arch/arm64/include/asm/kvm_mmu.h           |  54 ++++----
 arch/arm64/kernel/alternative.c            |  13 +-
 arch/arm64/kernel/asm-offsets.c            |  17 +--
 arch/arm64/kernel/cpufeature.c             |  19 ---
 arch/arm64/kernel/insn.c                   | 191 ++++++++++++++++++++++++++++-
 arch/arm64/kvm/Makefile                    |   2 +-
 arch/arm64/kvm/haslr.c                     | 129 +++++++++++++++++++
 arch/arm64/mm/cache.S                      |   4 +-
 include/kvm/arm_vgic.h                     |  12 +-
 virt/kvm/arm/hyp/vgic-v2-sr.c              |  12 +-
 virt/kvm/arm/mmu.c                         |  63 +++++++---
 virt/kvm/arm/vgic/vgic-init.c              |   6 -
 virt/kvm/arm/vgic/vgic-v2.c                |  40 ++----
 22 files changed, 519 insertions(+), 154 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative_types.h
 create mode 100644 arch/arm64/kvm/haslr.c

-- 
2.14.2

* [PATCH 01/18] arm64: asm-offsets: Avoid clashing DMA definitions
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

asm-offsets.h contains a few DMA-related definitions that have
the exact same names as the enum members they are derived from.

While this is not a problem so far, it will become an issue if
both asm-offsets.h and include/linux/dma-direction.h are pulled
in by the same file.

Let's sidestep the issue by renaming the asm-offsets.h constants.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c | 6 +++---
 arch/arm64/mm/cache.S           | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 71bf088f1e4b..7e8be0c22ce0 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -87,9 +87,9 @@ int main(void)
   BLANK();
   DEFINE(PAGE_SZ,	       	PAGE_SIZE);
   BLANK();
-  DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
-  DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
-  DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
+  DEFINE(__DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
+  DEFINE(__DMA_TO_DEVICE,	DMA_TO_DEVICE);
+  DEFINE(__DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
   BLANK();
   DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
   DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7f1dbe962cf5..c1336be085eb 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -205,7 +205,7 @@ ENDPIPROC(__dma_flush_area)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_map_area)
-	cmp	w2, #DMA_FROM_DEVICE
+	cmp	w2, #__DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
 	b	__dma_clean_area
 ENDPIPROC(__dma_map_area)
@@ -217,7 +217,7 @@ ENDPIPROC(__dma_map_area)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_unmap_area)
-	cmp	w2, #DMA_TO_DEVICE
+	cmp	w2, #__DMA_TO_DEVICE
 	b.ne	__dma_inv_area
 	ret
 ENDPIPROC(__dma_unmap_area)
-- 
2.14.2

* [PATCH 02/18] arm64: asm-offsets: Remove unused definitions
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

asm-offsets.h contains a number of definitions that are not used
at all, and in some cases conflict with other definitions (such as
NSEC_PER_SEC).

Spring clean-up time.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7e8be0c22ce0..742887330101 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -83,10 +83,6 @@ int main(void)
   DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
   DEFINE(VMA_VM_FLAGS,		offsetof(struct vm_area_struct, vm_flags));
   BLANK();
-  DEFINE(VM_EXEC,	       	VM_EXEC);
-  BLANK();
-  DEFINE(PAGE_SZ,	       	PAGE_SIZE);
-  BLANK();
   DEFINE(__DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
   DEFINE(__DMA_TO_DEVICE,	DMA_TO_DEVICE);
   DEFINE(__DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
@@ -98,7 +94,6 @@ int main(void)
   DEFINE(CLOCK_REALTIME_COARSE,	CLOCK_REALTIME_COARSE);
   DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
   DEFINE(CLOCK_COARSE_RES,	LOW_RES_NSEC);
-  DEFINE(NSEC_PER_SEC,		NSEC_PER_SEC);
   BLANK();
   DEFINE(VDSO_CS_CYCLE_LAST,	offsetof(struct vdso_data, cs_cycle_last));
   DEFINE(VDSO_RAW_TIME_SEC,	offsetof(struct vdso_data, raw_time_sec));
-- 
2.14.2

* [PATCH 03/18] arm64: asm-offsets: Remove potential circular dependency
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

So far, we've been lucky enough that none of the include files
that asm-offsets.c requires include asm-offsets.h themselves. This
is about to change, and would introduce a nasty circular dependency.

Let's guard the inclusion of asm-offsets.h so that it never
gets pulled in when compiling asm-offsets.c.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/asm-offsets.h | 2 ++
 arch/arm64/kernel/asm-offsets.c      | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/asm-offsets.h b/arch/arm64/include/asm/asm-offsets.h
index d370ee36a182..ed8df3a9c95a 100644
--- a/arch/arm64/include/asm/asm-offsets.h
+++ b/arch/arm64/include/asm/asm-offsets.h
@@ -1 +1,3 @@
+#ifndef IN_ASM_OFFSET_GENERATOR
 #include <generated/asm-offsets.h>
+#endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 742887330101..74b9a26a84b5 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -18,6 +18,8 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#define IN_ASM_OFFSET_GENERATOR	1
+
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
-- 
2.14.2

* [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We're playing a dangerous game with struct alt_instr, as we produce
its entries using assembly tricks, but parse them using the C
structure. We just assume that the respective alignments of the two
will be the same.

But as we add more fields to this structure, its alignment
requirements may change, and lead to all kinds of funky bugs.

To solve this, let's move the definition of struct alt_instr to its
own file, and use it to generate the alignment constraint from
asm-offsets.c. The various macros are then updated to take the
alignment into account.
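
As an aside (an illustration, not part of the patch): ".align N" on
arm64 aligns to 2^N bytes, and the 63 - __builtin_clzl(x) expression
used in asm-offsets.c below is simply log2(x) for the power-of-two
value returned by __alignof__. For instance:

	#include <stdio.h>

	int main(void)
	{
		unsigned long align = 8;	/* e.g. an 8-byte alignment */

		/* 63 - clzl(align) is log2(align): this prints 3 */
		printf("%d\n", 63 - __builtin_clzl(align));
		return 0;
	}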

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/alternative.h       | 13 +++++--------
 arch/arm64/include/asm/alternative_types.h | 13 +++++++++++++
 arch/arm64/kernel/asm-offsets.c            |  4 ++++
 3 files changed, 22 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative_types.h

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 4a85c6952a22..395befde7595 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -2,28 +2,24 @@
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H
 
+#include <asm/asm-offsets.h>
 #include <asm/cpucaps.h>
 #include <asm/insn.h>
 
 #ifndef __ASSEMBLY__
 
+#include <asm/alternative_types.h>
+
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
 
-struct alt_instr {
-	s32 orig_offset;	/* offset to original instruction */
-	s32 alt_offset;		/* offset to replacement instruction */
-	u16 cpufeature;		/* cpufeature bit set for replacement */
-	u8  orig_len;		/* size of original instruction(s) */
-	u8  alt_len;		/* size of new instruction(s), <= orig_len */
-};
-
 void __init apply_alternatives_all(void);
 void apply_alternatives(void *start, size_t length);
 
 #define ALTINSTR_ENTRY(feature)						      \
+	" .align " __stringify(ALTINSTR_ALIGN) "\n"			      \
 	" .word 661b - .\n"				/* label           */ \
 	" .word 663f - .\n"				/* new instruction */ \
 	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
@@ -69,6 +65,7 @@ void apply_alternatives(void *start, size_t length);
 #include <asm/assembler.h>
 
 .macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+	.align ALTINSTR_ALIGN
 	.word \orig_offset - .
 	.word \alt_offset - .
 	.hword \feature
diff --git a/arch/arm64/include/asm/alternative_types.h b/arch/arm64/include/asm/alternative_types.h
new file mode 100644
index 000000000000..26cf76167f2d
--- /dev/null
+++ b/arch/arm64/include/asm/alternative_types.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ALTERNATIVE_TYPES_H
+#define __ASM_ALTERNATIVE_TYPES_H
+
+struct alt_instr {
+	s32 orig_offset;	/* offset to original instruction */
+	s32 alt_offset;		/* offset to replacement instruction */
+	u16 cpufeature;		/* cpufeature bit set for replacement */
+	u8  orig_len;		/* size of original instruction(s) */
+	u8  alt_len;		/* size of new instruction(s), <= orig_len */
+};
+
+#endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 74b9a26a84b5..652165c8655a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -25,6 +25,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/kvm_host.h>
 #include <linux/suspend.h>
+#include <asm/alternative_types.h>
 #include <asm/cpufeature.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
@@ -151,5 +152,8 @@ int main(void)
   DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
+  BLANK();
+  DEFINE(ALTINSTR_ALIGN,	(63 - __builtin_clzl(__alignof__(struct alt_instr))));
+
   return 0;
 }
-- 
2.14.2

* [PATCH 05/18] arm64: alternatives: Add dynamic patching feature
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to finely control what gets
patched. For a single feature, this is an all or nothing thing.

It would be interesting to have a more fine-grained way of patching
the kernel though, where we could dynamically tune the code that gets
injected.

In order to achieve this, let's introduce a new form of alternative
that is associated with a callback. This callback gets the instruction
sequence number and the old instruction as parameters, and returns
the new instruction. The callback is always called, as the patching
decision is now made at runtime (not patching is equivalent to returning
the same instruction).

Patching with a callback is declared with the new ALTERNATIVE_CB
and alternative_cb directives:

	asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
		     : "r" (v));
or
	alternative_cb callback
		mov	x0, #0
	alternative_else_nop_endif

where callback is the C function computing the alternative.
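
For illustration, a callback could look like the sketch below (the
function name is made up; only the prototype comes from this patch,
and aarch64_insn_gen_nop() is the existing insn helper):

	u32 example_patch_cb(struct alt_instr *alt, int index, u32 oinsn)
	{
		/* Only rewrite the first instruction of the sequence */
		if (index != 0)
			return oinsn;

		/* Compute a replacement at runtime; here, simply a NOP */
		return aarch64_insn_gen_nop();
	}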

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/alternative.h       | 34 +++++++++++++++++++++---------
 arch/arm64/include/asm/alternative_types.h |  4 ++++
 arch/arm64/kernel/alternative.c            | 13 ++++++++++--
 3 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 395befde7595..8a8740a03514 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -18,8 +18,9 @@
 void __init apply_alternatives_all(void);
 void apply_alternatives(void *start, size_t length);
 
-#define ALTINSTR_ENTRY(feature)						      \
+#define ALTINSTR_ENTRY(feature,cb)					      \
 	" .align " __stringify(ALTINSTR_ALIGN) "\n"			      \
+	" .quad " __stringify(cb) "\n"			/* callback	   */ \
 	" .word 661b - .\n"				/* label           */ \
 	" .word 663f - .\n"				/* new instruction */ \
 	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
@@ -40,13 +41,13 @@ void apply_alternatives(void *start, size_t length);
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
  */
-#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled, cb)	\
 	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
 	".pushsection .altinstructions,\"a\"\n"				\
-	ALTINSTR_ENTRY(feature)						\
+	ALTINSTR_ENTRY(feature,cb)					\
 	".popsection\n"							\
 	".pushsection .altinstr_replacement, \"a\"\n"			\
 	"663:\n\t"							\
@@ -58,14 +59,17 @@ void apply_alternatives(void *start, size_t length);
 	".endif\n"
 
 #define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
-	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg), 0)
 
+#define _ALTERNATIVE_CB(oldinstr, cb, ...) \
+	__ALTERNATIVE_CFG(oldinstr, oldinstr, ARM64_NCAPS, 1, cb)
 #else
 
 #include <asm/assembler.h>
 
-.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+.macro altinstruction_entry orig_offset, alt_offset, feature, orig_len, alt_len, cb = 0
 	.align ALTINSTR_ALIGN
+	.quad \cb
 	.word \orig_offset - .
 	.word \alt_offset - .
 	.hword \feature
@@ -73,11 +77,11 @@ void apply_alternatives(void *start, size_t length);
 	.byte \alt_len
 .endm
 
-.macro alternative_insn insn1, insn2, cap, enable = 1
+.macro alternative_insn insn1, insn2, cap, enable = 1, cb = 0
 	.if \enable
 661:	\insn1
 662:	.pushsection .altinstructions, "a"
-	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f, \cb
 	.popsection
 	.pushsection .altinstr_replacement, "ax"
 663:	\insn2
@@ -109,10 +113,10 @@ void apply_alternatives(void *start, size_t length);
 /*
  * Begin an alternative code sequence.
  */
-.macro alternative_if_not cap
+.macro alternative_if_not cap, cb = 0
 	.set .Lasm_alt_mode, 0
 	.pushsection .altinstructions, "a"
-	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
+	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f, \cb
 	.popsection
 661:
 .endm
@@ -120,13 +124,17 @@ void apply_alternatives(void *start, size_t length);
 .macro alternative_if cap
 	.set .Lasm_alt_mode, 1
 	.pushsection .altinstructions, "a"
-	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
+	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f, 0
 	.popsection
 	.pushsection .altinstr_replacement, "ax"
 	.align 2	/* So GAS knows label 661 is suitably aligned */
 661:
 .endm
 
+.macro alternative_cb cb
+	alternative_if_not ARM64_NCAPS, \cb
+.endm
+
 /*
  * Provide the other half of the alternative code sequence.
  */
@@ -166,6 +174,9 @@ alternative_endif
 #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
+#define _ALTERNATIVE_CB(insn1, cb, ...)	\
+	alternative_insn insn1, insn1, ARM64_NCAPS, 1, cb
+
 .macro user_alt, label, oldinstr, newinstr, cond
 9999:	alternative_insn "\oldinstr", "\newinstr", \cond
 	_ASM_EXTABLE 9999b, \label
@@ -242,4 +253,7 @@ alternative_endif
 #define ALTERNATIVE(oldinstr, newinstr, ...)   \
 	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
 
+#define ALTERNATIVE_CB(oldinstr, cb, ...)	\
+	_ALTERNATIVE_CB(oldinstr, cb)
+
 #endif /* __ASM_ALTERNATIVE_H */
diff --git a/arch/arm64/include/asm/alternative_types.h b/arch/arm64/include/asm/alternative_types.h
index 26cf76167f2d..7ac90d69602e 100644
--- a/arch/arm64/include/asm/alternative_types.h
+++ b/arch/arm64/include/asm/alternative_types.h
@@ -2,7 +2,11 @@
 #ifndef __ASM_ALTERNATIVE_TYPES_H
 #define __ASM_ALTERNATIVE_TYPES_H
 
+struct alt_instr;
+typedef u32 (*alternative_cb_t)(struct alt_instr *alt, int index, u32 new_insn);
+
 struct alt_instr {
+	alternative_cb_t cb;	/* Callback for dynamic patching */
 	s32 orig_offset;	/* offset to original instruction */
 	s32 alt_offset;		/* offset to replacement instruction */
 	u16 cpufeature;		/* cpufeature bit set for replacement */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 6dd0a3a3e5c9..7812d537be25 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -115,9 +115,12 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
 		u32 insn;
 		int i, nr_inst;
 
-		if (!cpus_have_cap(alt->cpufeature))
+		/* Use ARM64_NCAPS as an unconditional patch */
+		if (alt->cpufeature != ARM64_NCAPS &&
+		    !cpus_have_cap(alt->cpufeature))
 			continue;
 
+		BUG_ON(alt->cpufeature == ARM64_NCAPS && !alt->cb);
 		BUG_ON(alt->alt_len != alt->orig_len);
 
 		pr_info_once("patching kernel code\n");
@@ -128,7 +131,13 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
 		nr_inst = alt->alt_len / sizeof(insn);
 
 		for (i = 0; i < nr_inst; i++) {
-			insn = get_alt_insn(alt, origptr + i, replptr + i);
+			if (alt->cb) {
+				insn = le32_to_cpu(updptr[i]);
+				insn = alt->cb(alt, i, insn);
+			} else {
+				insn = get_alt_insn(alt, origptr + i,
+						    replptr + i);
+			}
 			updptr[i] = cpu_to_le32(insn);
 		}
 
-- 
2.14.2

* [PATCH 06/18] arm64: insn: Add N immediate encoding
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We're missing a way to generate the encoding of the N immediate,
which is a single bit used in a number of instructions that take
an immediate.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h | 1 +
 arch/arm64/kernel/insn.c      | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 4214c38d016b..21fffdd290a3 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -70,6 +70,7 @@ enum aarch64_insn_imm_type {
 	AARCH64_INSN_IMM_6,
 	AARCH64_INSN_IMM_S,
 	AARCH64_INSN_IMM_R,
+	AARCH64_INSN_IMM_N,
 	AARCH64_INSN_IMM_MAX
 };
 
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 2718a77da165..7e432662d454 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -343,6 +343,10 @@ static int __kprobes aarch64_get_imm_shift_mask(enum aarch64_insn_imm_type type,
 		mask = BIT(6) - 1;
 		shift = 16;
 		break;
+	case AARCH64_INSN_IMM_N:
+		mask = 1;
+		shift = 22;
+		break;
 	default:
 		return -EINVAL;
 	}
-- 
2.14.2

* [PATCH 07/18] arm64: insn: Add encoder for bitwise operations using literals
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().
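
As a purely illustrative example of how the new encoder could be used
(not code from this patch), generating "orr x0, x1, #0xff00" at
runtime would look like this:

	u32 insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_ORR,
						      AARCH64_INSN_VARIANT_64BIT,
						      AARCH64_INSN_REG_1, /* Rn */
						      AARCH64_INSN_REG_0, /* Rd */
						      0xff00);
	/* AARCH64_BREAK_FAULT is returned if the immediate cannot be encoded */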

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |   9 +++
 arch/arm64/kernel/insn.c      | 137 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 21fffdd290a3..815b35bc53ed 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -315,6 +315,10 @@ __AARCH64_INSN_FUNCS(eor,	0x7F200000, 0x4A000000)
 __AARCH64_INSN_FUNCS(eon,	0x7F200000, 0x4A200000)
 __AARCH64_INSN_FUNCS(ands,	0x7F200000, 0x6A000000)
 __AARCH64_INSN_FUNCS(bics,	0x7F200000, 0x6A200000)
+__AARCH64_INSN_FUNCS(and_imm,	0x7F800000, 0x12000000)
+__AARCH64_INSN_FUNCS(orr_imm,	0x7F800000, 0x32000000)
+__AARCH64_INSN_FUNCS(eor_imm,	0x7F800000, 0x52000000)
+__AARCH64_INSN_FUNCS(ands_imm,	0x7F800000, 0x72000000)
 __AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
 __AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
 __AARCH64_INSN_FUNCS(cbz,	0x7F000000, 0x34000000)
@@ -424,6 +428,11 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 					 int shift,
 					 enum aarch64_insn_variant variant,
 					 enum aarch64_insn_logic_type type);
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm);
 u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7e432662d454..326b17016485 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1485,3 +1485,140 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
 	__check_hi, __check_ls, __check_ge, __check_lt,
 	__check_gt, __check_le, __check_al, __check_al
 };
+
+static bool range_of_ones(u64 val)
+{
+	/* Doesn't handle full ones or full zeroes */
+	int x = __ffs64(val) - 1;
+	u64 sval = val >> x;
+
+	/* One of Sean Eron Anderson's bithack tricks */
+	return ((sval + 1) & (sval)) == 0;
+}
+
+static u32 aarch64_encode_immediate(u64 imm,
+				    enum aarch64_insn_variant variant,
+				    u32 insn)
+{
+	unsigned int immr, imms, n, ones, ror, esz, tmp;
+	u64 mask;
+
+	/* Can't encode full zeroes or full ones */
+	if (!imm || !~imm)
+		return AARCH64_BREAK_FAULT;
+
+	switch (variant) {
+	case AARCH64_INSN_VARIANT_32BIT:
+		if (upper_32_bits(imm))
+			return AARCH64_BREAK_FAULT;
+		esz = 32;
+		break;
+	case AARCH64_INSN_VARIANT_64BIT:
+		insn |= AARCH64_INSN_SF_BIT;
+		esz = 64;
+		break;
+	default:
+		pr_err("%s: unknown variant encoding %d\n", __func__, variant);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	/*
+	 * Inverse of Replicate(). Try to spot a repeating pattern
+	 * with a pow2 stride.
+	 */
+	for (tmp = esz; tmp > 2; tmp /= 2) {
+		u64 emask = BIT(tmp / 2) - 1;
+
+		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
+			break;
+
+		esz = tmp;
+	}
+
+	/* N is only set if we're encoding a 64bit value */
+	n = esz == 64;
+
+	/* Trim imm to the element size */
+	mask = BIT(esz - 1) - 1;
+	imm &= mask;
+
+	/* That's how many ones we need to encode */
+	ones = hweight64(imm);
+
+	/*
+	 * imms is set to (ones - 1), prefixed with a string of ones
+	 * and a zero if they fit. Cap it to 6 bits.
+	 */
+	imms  = ones - 1;
+	imms |= 0xf << ffs(esz);
+	imms &= BIT(6) - 1;
+
+	/* Compute the rotation */
+	if (range_of_ones(imm)) {
+		/*
+		 * Pattern: 0..01..10..0
+		 *
+		 * Compute how many rotate we need to align it right
+		 */
+		ror = ffs(imm) - 1;
+	} else {
+		/*
+		 * Pattern: 0..01..10..01..1
+		 *
+		 * Fill the unused top bits with ones, and check if
+		 * the result is a valid immediate (all ones with a
+		 * contiguous ranges of zeroes).
+		 */
+		imm |= ~mask;
+		if (!range_of_ones(~imm))
+			return AARCH64_BREAK_FAULT;
+
+		/*
+		 * Compute the rotation to get a continuous set of
+		 * ones, with the first bit set at position 0
+		 */
+		ror = fls(~imm);
+	}
+
+	/*
+	 * immr is the number of bits we need to rotate back to the
+	 * original set of ones. Note that this is relative to the
+	 * element size...
+	 */
+	immr = (esz - ror) & (esz - 1);
+
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, n);
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_R, insn, immr);
+	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, imms);
+}
+
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LOGIC_AND:
+		insn = aarch64_insn_get_and_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_ORR:
+		insn = aarch64_insn_get_orr_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_EOR:
+		insn = aarch64_insn_get_eor_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_AND_SETFLAGS:
+		insn = aarch64_insn_get_ands_imm_value();
+		break;
+	default:
+		pr_err("%s: unknown logical encoding %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
+	return aarch64_encode_immediate(imm, variant, insn);
+}
-- 
2.14.2

* [PATCH 08/18] arm64: KVM: Dynamically patch the kernel/hyp VA mask
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

So far, we're using a complicated sequence of alternatives to
patch the kernel/hyp VA mask on non-VHE, and NOP out the
masking altogether when on VHE.

The newly introduced dynamic patching gives us the opportunity
to simplify that code by patching a single instruction with
the correct mask (instead of the mind-bending cumulative masking
we have at the moment), or even a single NOP on VHE.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h | 42 ++++++-----------------
 arch/arm64/kvm/Makefile          |  2 +-
 arch/arm64/kvm/haslr.c           | 74 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/kvm/haslr.c

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 672c8684d5c2..accff489aa22 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -69,9 +69,6 @@
  * mappings, and none of this applies in that case.
  */
 
-#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
-#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
-
 #ifdef __ASSEMBLY__
 
 #include <asm/alternative.h>
@@ -81,27 +78,13 @@
  * Convert a kernel VA into a HYP VA.
  * reg: VA to be converted.
  *
- * This generates the following sequences:
- * - High mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
- *		nop
- * - Low mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
- *		and x0, x0, #HYP_PAGE_OFFSET_LOW_MASK
- * - VHE:
- *		nop
- *		nop
- *
- * The "low mask" version works because the mask is a strict subset of
- * the "high mask", hence performing the first mask for nothing.
- * Should be completely invisible on any viable CPU.
+ * The actual code generation takes place in kvm_update_va_mask, and
+ * the instructions below are only there to reserve the space and
+ * perform the register allocation.
  */
 .macro kern_hyp_va	reg
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	and     \reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
-alternative_else_nop_endif
-alternative_if ARM64_HYP_OFFSET_LOW
-	and     \reg, \reg, #HYP_PAGE_OFFSET_LOW_MASK
+alternative_cb kvm_update_va_mask
+	and     \reg, \reg, #1
 alternative_else_nop_endif
 .endm
 
@@ -113,18 +96,13 @@ alternative_else_nop_endif
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 
+u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn);
+
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
-	asm volatile(ALTERNATIVE("and %0, %0, %1",
-				 "nop",
-				 ARM64_HAS_VIRT_HOST_EXTN)
-		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_HIGH_MASK));
-	asm volatile(ALTERNATIVE("nop",
-				 "and %0, %0, %1",
-				 ARM64_HYP_OFFSET_LOW)
-		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_LOW_MASK));
+	asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n"
+				    kvm_update_va_mask)
+		     : "+r" (v));
 	return v;
 }
 
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 87c4f7ae24de..baba030ee29e 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/e
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
 
-kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o
+kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o haslr.o
 kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
 kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
 kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o
diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
new file mode 100644
index 000000000000..c21ab93a9ad9
--- /dev/null
+++ b/arch/arm64/kvm/haslr.c
@@ -0,0 +1,74 @@
+/*
+ * Copyright (C) 2017 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/alternative.h>
+#include <asm/debug-monitors.h>
+#include <asm/insn.h>
+#include <asm/kvm_mmu.h>
+
+#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
+#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
+
+static unsigned long get_hyp_va_mask(void)
+{
+	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
+	unsigned long mask = HYP_PAGE_OFFSET_HIGH_MASK;
+
+	/*
+	 * Activate the lower HYP offset only if the idmap doesn't
+	 * clash with it.
+	 */
+	if (idmap_addr > HYP_PAGE_OFFSET_LOW_MASK)
+		mask = HYP_PAGE_OFFSET_LOW_MASK;
+
+	return mask;
+}
+
+u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
+{
+	u32 rd, rn, insn;
+	u64 imm;
+
+	/* We only expect a 1 instruction sequence */
+	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
+
+	/* VHE doesn't need any address translation, let's NOP everything */
+	if (has_vhe())
+		return aarch64_insn_gen_nop();
+
+	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
+	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
+
+	switch (index) {
+	default:
+		/* Something went wrong... */
+		insn = AARCH64_BREAK_FAULT;
+		break;
+
+	case 0:
+		imm = get_hyp_va_mask();
+		insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
+							  AARCH64_INSN_VARIANT_64BIT,
+							  rn, rd, imm);
+		break;
+	}
+
+	BUG_ON(insn == AARCH64_BREAK_FAULT);
+
+	return insn;
+}
-- 
2.14.2
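
For illustration only (not part of the patch), here is a small userspace model
of what the single patched AND ends up computing on a non-VHE kernel, together
with the low/high mask selection performed by get_hyp_va_mask(). VA_BITS is
assumed to be 48 and the addresses are arbitrary; pick_mask() and
model_kern_hyp_va() are invented names.

#include <stdint.h>
#include <stdio.h>

#define VA_BITS		48	/* assumed for the example */
#define HIGH_MASK	((1ULL << VA_BITS) - 1)
#define LOW_MASK	((1ULL << (VA_BITS - 1)) - 1)

/* Use the low mask only when the idmap sits above the range it produces */
static uint64_t pick_mask(uint64_t idmap_addr)
{
	return (idmap_addr > LOW_MASK) ? LOW_MASK : HIGH_MASK;
}

/* What the patched "and reg, reg, #mask" computes */
static uint64_t model_kern_hyp_va(uint64_t kern_va, uint64_t mask)
{
	return kern_va & mask;
}

int main(void)
{
	uint64_t idmap = 0x0000800040000000ULL;	/* arbitrary idmap PA */
	uint64_t kva   = 0xffff000040001000ULL;	/* arbitrary linear-map VA */
	uint64_t mask  = pick_mask(idmap);

	printf("mask 0x%llx -> hyp VA 0x%llx\n",
	       (unsigned long long)mask,
	       (unsigned long long)model_kern_hyp_va(kva, mask));
	return 0;
}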

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 09/18] arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

Now that we can dynamically compute the kernel/hyp VA mask, there
is no need for a feature flag to trigger the alternative patching.
Let's drop the flag and everything that depends on it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  2 +-
 arch/arm64/kernel/cpufeature.c   | 19 -------------------
 2 files changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2ff7c5e8efab..f130f35dca3c 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -32,7 +32,7 @@
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
-#define ARM64_HYP_OFFSET_LOW			14
+/* #define ARM64_UNALLOCATED_ENTRY			14 */
 #define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
 #define ARM64_HAS_NO_FPSIMD			16
 #define ARM64_WORKAROUND_REPEAT_TLBI		17
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c5ba0097887f..9eabceaaf5fb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -824,19 +824,6 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }
 
-static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
-			   int __unused)
-{
-	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
-
-	/*
-	 * Activate the lower HYP offset only if:
-	 * - the idmap doesn't clash with it,
-	 * - the kernel is not running at EL2.
-	 */
-	return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
-}
-
 static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unused)
 {
 	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
@@ -925,12 +912,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-	{
-		.desc = "Reduced HYP mapping offset",
-		.capability = ARM64_HYP_OFFSET_LOW,
-		.def_scope = SCOPE_SYSTEM,
-		.matches = hyp_offset_low,
-	},
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 10/18] arm64: insn: Add encoder for the EXTR instruction
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

Add an encoder for the EXTR instruction, which also implements the ROR
variant (where Rn == Rm).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |  6 ++++++
 arch/arm64/kernel/insn.c      | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 815b35bc53ed..f62c56b1793f 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -319,6 +319,7 @@ __AARCH64_INSN_FUNCS(and_imm,	0x7F800000, 0x12000000)
 __AARCH64_INSN_FUNCS(orr_imm,	0x7F800000, 0x32000000)
 __AARCH64_INSN_FUNCS(eor_imm,	0x7F800000, 0x52000000)
 __AARCH64_INSN_FUNCS(ands_imm,	0x7F800000, 0x72000000)
+__AARCH64_INSN_FUNCS(extr,	0x7FA00000, 0x13800000)
 __AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
 __AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
 __AARCH64_INSN_FUNCS(cbz,	0x7F000000, 0x34000000)
@@ -433,6 +434,11 @@ u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
 				       enum aarch64_insn_register Rn,
 				       enum aarch64_insn_register Rd,
 				       u64 imm);
+u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
+			  enum aarch64_insn_register Rm,
+			  enum aarch64_insn_register Rn,
+			  enum aarch64_insn_register Rd,
+			  u8 lsb);
 u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 326b17016485..af29fc3e09a9 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1622,3 +1622,35 @@ u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
 	return aarch64_encode_immediate(imm, variant, insn);
 }
+
+u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
+			  enum aarch64_insn_register Rm,
+			  enum aarch64_insn_register Rn,
+			  enum aarch64_insn_register Rd,
+			  u8 lsb)
+{
+	u32 insn;
+
+	insn = aarch64_insn_get_extr_value();
+
+	switch (variant) {
+	case AARCH64_INSN_VARIANT_32BIT:
+		if (lsb > 31)
+			return AARCH64_BREAK_FAULT;
+		break;
+	case AARCH64_INSN_VARIANT_64BIT:
+		if (lsb > 63)
+			return AARCH64_BREAK_FAULT;
+		insn |= AARCH64_INSN_SF_BIT;
+		insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, 1);
+		break;
+	default:
+		pr_err("%s: unknown variant encoding %d\n", __func__, variant);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, lsb);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
+}
-- 
2.14.2
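
For illustration only (not part of the patch): a userspace sketch showing why
EXTR with Rm == Rn is a rotate right, which is how the series uses this
encoder. The helper names are made up, and lsb is restricted to 1..63 to keep
the C shifts well defined.

#include <stdint.h>
#include <stdio.h>

/* EXTR Xd, Xn, Xm, #lsb: bits [lsb+63:lsb] of the 128-bit value Xn:Xm */
static uint64_t extr64(uint64_t rn, uint64_t rm, unsigned int lsb)
{
	return (rn << (64 - lsb)) | (rm >> lsb);	/* lsb in 1..63 */
}

/* Plain rotate right */
static uint64_t ror64_(uint64_t x, unsigned int shift)
{
	return (x >> shift) | (x << (64 - shift));	/* shift in 1..63 */
}

int main(void)
{
	uint64_t x = 0x0123456789abcdefULL;
	unsigned int lsb = 13;

	/* With Rn == Rm, EXTR degenerates into ROR #lsb */
	printf("%d\n", extr64(x, x, lsb) == ror64_(x, lsb));	/* 1 */
	return 0;
}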

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 11/18] arm64: insn: Allow ADD/SUB (immediate) with LSL #12
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

The encoder for ADD/SUB (immediate) can only cope with 12bit
immediates, while there is an encoding for a 12bit immediate shifted
by 12 bits to the left.

Let's fix this small oversight by allowing the LSL_12 bit to be set.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/insn.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index af29fc3e09a9..b8fb2d89b3a6 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -35,6 +35,7 @@
 
 #define AARCH64_INSN_SF_BIT	BIT(31)
 #define AARCH64_INSN_N_BIT	BIT(22)
+#define AARCH64_INSN_LSL_12	BIT(22)
 
 static int aarch64_insn_encoding_class[] = {
 	AARCH64_INSN_CLS_UNKNOWN,
@@ -903,9 +904,18 @@ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 		return AARCH64_BREAK_FAULT;
 	}
 
+	/* We can't encode more than a 24bit value (12bit + 12bit shift) */
+	if (imm & ~(BIT(24) - 1))
+		goto out;
+
+	/* If we have something in the top 12 bits... */
 	if (imm & ~(SZ_4K - 1)) {
-		pr_err("%s: invalid immediate encoding %d\n", __func__, imm);
-		return AARCH64_BREAK_FAULT;
+		/* ... and in the low 12 bits -> error */
+		if (imm & (SZ_4K - 1))
+			goto out;
+
+		imm >>= 12;
+		insn |= AARCH64_INSN_LSL_12;
 	}
 
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst);
@@ -913,6 +923,10 @@ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src);
 
 	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_12, insn, imm);
+
+out:
+	pr_err("%s: invalid immediate encoding %d\n", __func__, imm);
+	return AARCH64_BREAK_FAULT;
 }
 
 u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst,
-- 
2.14.2
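
For illustration only (not part of the patch): a userspace sketch of the
immediate rule this change implements. A value of up to 24 bits is accepted
as long as it is either a plain 12-bit immediate, or a 12-bit immediate with
the low 12 bits clear (encoded with LSL #12). split_add_imm() is an invented
name.

#include <stdint.h>
#include <stdio.h>

static int split_add_imm(uint32_t imm, uint32_t *imm12, int *lsl_12)
{
	if (imm & ~((1U << 24) - 1))
		return -1;		/* more than 24 bits: not encodable */

	if (imm & ~(4096U - 1)) {	/* something in the top 12 bits... */
		if (imm & (4096U - 1))
			return -1;	/* ...and in the low 12 bits: no */
		*imm12 = imm >> 12;
		*lsl_12 = 1;
	} else {
		*imm12 = imm;
		*lsl_12 = 0;
	}
	return 0;
}

int main(void)
{
	uint32_t imm12;
	int lsl;

	printf("%d\n", split_add_imm(0x000fff, &imm12, &lsl));	/*  0, LSL #0  */
	printf("%d\n", split_add_imm(0x123000, &imm12, &lsl));	/*  0, LSL #12 */
	printf("%d\n", split_add_imm(0x123456, &imm12, &lsl));	/* -1          */
	return 0;
}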

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 12/18] arm64: KVM: Introduce EL2 VA randomisation
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

The main idea behind randomising the EL2 VA is that we usually have
a few spare bits between the most significant bit of the VA mask
and the most significant bit of the linear mapping.

Those bits are by definition a bunch of zeroes, and could be useful
to move things around a bit. Of course, the more memory you have,
the less randomisation you get...

Inserting these random bits is a bit involved. We don't have a spare
register (short of rewriting all the kern_hyp_va call sites), and
the immediate we want to insert is too random to be used with the
ORR instruction. The best option I could come up with is the following
sequence:

	and x0, x0, #va_mask
	ror x0, x0, #first_random_bit
	add x0, x0, #(random & 0xfff)
	add x0, x0, #(random >> 12), lsl #12
	ror x0, x0, #(64 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should
be able to execute without breaking a sweat. It is of course NOPed
out on VHE. The last 4 instructions can only be turned into NOPs
if it turns out that there are no free bits to use.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h |  8 ++++++
 arch/arm64/kvm/haslr.c           | 59 ++++++++++++++++++++++++++++++++++++++--
 virt/kvm/arm/mmu.c               |  2 +-
 3 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index accff489aa22..dcc3eef3b1ac 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -85,6 +85,10 @@
 .macro kern_hyp_va	reg
 alternative_cb kvm_update_va_mask
 	and     \reg, \reg, #1
+	ror	\reg, \reg, #1
+	add	\reg, \reg, #0
+	add	\reg, \reg, #0
+	ror	\reg, \reg, #63
 alternative_else_nop_endif
 .endm
 
@@ -101,6 +105,10 @@ u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn);
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
 	asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n"
+				    "ror %0, %0, #1\n"
+				    "add %0, %0, #0\n"
+				    "add %0, %0, #0\n"
+				    "ror %0, %0, #63\n",
 				    kvm_update_va_mask)
 		     : "+r" (v));
 	return v;
diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
index c21ab93a9ad9..f97fe2055a6f 100644
--- a/arch/arm64/kvm/haslr.c
+++ b/arch/arm64/kvm/haslr.c
@@ -16,6 +16,7 @@
  */
 
 #include <linux/kvm_host.h>
+#include <linux/random.h>
 #include <asm/alternative.h>
 #include <asm/debug-monitors.h>
 #include <asm/insn.h>
@@ -24,6 +25,9 @@
 #define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
 #define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
 
+static u8 tag_lsb;
+static u64 tag_val;
+
 static unsigned long get_hyp_va_mask(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
@@ -44,13 +48,25 @@ u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 	u32 rd, rn, insn;
 	u64 imm;
 
-	/* We only expect a 1 instruction sequence */
-	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
+	/* We only expect a 5 instruction sequence */
+	BUG_ON((alt->alt_len / sizeof(insn)) != 5);
 
 	/* VHE doesn't need any address translation, let's NOP everything */
 	if (has_vhe())
 		return aarch64_insn_gen_nop();
 
+	if (!tag_lsb) {
+		u64 mask = GENMASK(VA_BITS - 2, 0);
+		tag_lsb = fls64(((unsigned long)(high_memory - 1) & mask));
+		if (!tag_lsb) {
+			tag_lsb = 0xff;
+		} else {
+			mask = GENMASK_ULL(VA_BITS - 2, tag_lsb);
+			tag_val = get_random_long() & mask;
+			tag_val >>= tag_lsb;
+		}
+	}
+
 	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
 	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
 
@@ -66,6 +82,45 @@ u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 							  AARCH64_INSN_VARIANT_64BIT,
 							  rn, rd, imm);
 		break;
+
+	case 1:
+		if (tag_lsb == 0xff)
+			return aarch64_insn_gen_nop();
+
+		/* ROR is a variant of EXTR with Rm = Rn */
+		insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
+					     rn, rn, rd,
+					     tag_lsb);
+		break;
+
+	case 2:
+		if (tag_lsb == 0xff)
+			return aarch64_insn_gen_nop();
+
+		insn = aarch64_insn_gen_add_sub_imm(rd, rn,
+						    tag_val & (SZ_4K - 1),
+						    AARCH64_INSN_VARIANT_64BIT,
+						    AARCH64_INSN_ADSB_ADD);
+		break;
+
+	case 3:
+		if (tag_lsb == 0xff)
+			return aarch64_insn_gen_nop();
+
+		insn = aarch64_insn_gen_add_sub_imm(rd, rn,
+						    tag_val & GENMASK(23, 12),
+						    AARCH64_INSN_VARIANT_64BIT,
+						    AARCH64_INSN_ADSB_ADD);
+		break;
+
+	case 4:
+		if (tag_lsb == 0xff)
+			return aarch64_insn_gen_nop();
+
+		/* ROR is a variant of EXTR with Rm = Rn */
+		insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
+					     rn, rn, rd, 64 - tag_lsb);
+		break;
 	}
 
 	BUG_ON(insn == AARCH64_BREAK_FAULT);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b36945d49986..e60605c3603e 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1767,7 +1767,7 @@ int kvm_mmu_init(void)
 		 kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va(~0UL) &&
+	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
-- 
2.14.2
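
For illustration only (not part of the patch): a userspace model of the
patched 5-instruction sequence and of the tag_lsb/tag_val computation above.
VA_BITS is assumed to be 48, the "high_memory" and "random" values are made
up, and fls64_()/ror64_() stand in for the kernel helpers.

#include <stdint.h>
#include <stdio.h>

#define VA_BITS	48	/* assumed for the example */

static uint64_t ror64_(uint64_t x, unsigned int s)
{
	return (x >> s) | (x << (64 - s));	/* s in 1..63 */
}

static unsigned int fls64_(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

int main(void)
{
	uint64_t high_memory = 0xffff000400000000ULL;	/* top of the linear map */
	uint64_t random      = 0x0123456789abcdefULL;	/* stand-in random value */
	uint64_t va_mask     = (1ULL << (VA_BITS - 1)) - 1;

	/* Mirrors the tag_lsb/tag_val derivation in kvm_update_va_mask() */
	unsigned int tag_lsb = fls64_((high_memory - 1) & va_mask);
	uint64_t tag_mask    = va_mask & ~((1ULL << tag_lsb) - 1);
	uint64_t tag_val     = (random & tag_mask) >> tag_lsb;

	uint64_t v = 0xffff000040001000ULL;	/* some kernel linear-map VA */

	v &= va_mask;				/* and v, v, #va_mask           */
	v  = ror64_(v, tag_lsb);		/* ror v, v, #tag_lsb           */
	v += tag_val & 0xfff;			/* add v, v, #(tag_val & 0xfff) */
	v += tag_val & 0xfff000;		/* add v, v, #(...), lsl #12    */
	v  = ror64_(v, 64 - tag_lsb);		/* ror v, v, #(64 - tag_lsb)    */

	printf("tag_lsb %u, hyp VA 0x%llx\n", tag_lsb, (unsigned long long)v);
	return 0;
}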

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 13/18] arm64: Update the KVM memory map documentation
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

Update the documentation to reflect the new tricks we play on the
EL2 mappings...

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 Documentation/arm64/memory.txt | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/Documentation/arm64/memory.txt b/Documentation/arm64/memory.txt
index 671bc0639262..ea64e20037f6 100644
--- a/Documentation/arm64/memory.txt
+++ b/Documentation/arm64/memory.txt
@@ -86,9 +86,11 @@ Translation table lookup with 64KB pages:
  +-------------------------------------------------> [63] TTBR0/1
 
 
-When using KVM without the Virtualization Host Extensions, the hypervisor
-maps kernel pages in EL2 at a fixed offset from the kernel VA. See the
-kern_hyp_va macro for more details.
+When using KVM without the Virtualization Host Extensions, the
+hypervisor maps kernel pages in EL2 at a fixed offset (modulo a random
+offset) from the linear mapping. See the kern_hyp_va macro and
+kvm_update_va_mask function for more details. MMIO devices such as
+the GICv2 get mapped next to the HYP idmap page.
 
 When using KVM with the Virtualization Host Extensions, no additional
 mappings are created, since the host kernel runs directly in EL2.
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 14/18] KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using a PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way that the compiler is going to guarantee that such an access
will always be PC relative.

So let's bite the bullet and provide our own accessor.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_hyp.h   | 6 ++++++
 arch/arm64/include/asm/kvm_hyp.h | 9 +++++++++
 virt/kvm/arm/hyp/vgic-v2-sr.c    | 4 ++--
 3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index ab20ffa8b9e7..1d42d0aa2feb 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -26,6 +26,12 @@
 
 #define __hyp_text __section(.hyp.text) notrace
 
+#define hyp_symbol_addr(s)						\
+	({								\
+		typeof(s) *addr = &(s);					\
+		addr;							\
+	})
+
 #define __ACCESS_VFP(CRn)			\
 	"mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 08d3bb66c8b7..a2d98c539023 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -25,6 +25,15 @@
 
 #define __hyp_text __section(.hyp.text) notrace
 
+#define hyp_symbol_addr(s)						\
+	({								\
+		typeof(s) *addr;					\
+		asm volatile("adrp	%0, %1\n"			\
+			     "add	%0, %0, :lo12:%1\n"		\
+			     : "=r" (addr) : "S" (&s));			\
+		addr;							\
+	})
+
 #define read_sysreg_elx(r,nvh,vh)					\
 	({								\
 		u64 reg;						\
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index a3f18d362366..19f63fbf3682 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -25,7 +25,7 @@
 static void __hyp_text save_elrsr(struct kvm_vcpu *vcpu, void __iomem *base)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
+	int nr_lr = hyp_symbol_addr(kvm_vgic_global_state)->nr_lr;
 	u32 elrsr0, elrsr1;
 
 	elrsr0 = readl_relaxed(base + GICH_ELRSR0);
@@ -143,7 +143,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 
 	rd = kvm_vcpu_dabt_get_rd(vcpu);
-	addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
+	addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 15/18] KVM: arm/arm64: Demote HYP VA range display to being a debug feature
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

Displaying the HYP VA information is slightly counterproductive when
using VA randomisation. Turn it into a debug feature only, and adjust
the last displayed value to reflect the top of RAM instead of ~0.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e60605c3603e..326f257d6008 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1762,9 +1762,10 @@ int kvm_mmu_init(void)
 	 */
 	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
 
-	kvm_info("IDMAP page: %lx\n", hyp_idmap_start);
-	kvm_info("HYP VA range: %lx:%lx\n",
-		 kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
+	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
+	kvm_debug("HYP VA range: %lx:%lx\n",
+		  kern_hyp_va(PAGE_OFFSET),
+		  kern_hyp_va((unsigned long)high_memory - 1));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
 	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 16/18] KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse

Both HYP IO mappings call ioremap, followed by create_hyp_io_mappings.
Let's move the ioremap call into create_hyp_io_mappings itself, which
simplifies the code a bit and allows for further refactoring.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  3 ++-
 arch/arm64/include/asm/kvm_mmu.h |  3 ++-
 virt/kvm/arm/mmu.c               | 24 ++++++++++++++----------
 virt/kvm/arm/vgic/vgic-v2.c      | 31 ++++++++-----------------------
 4 files changed, 26 insertions(+), 35 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fa6f2174276b..cb3bef71ec9b 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -41,7 +41,8 @@
 #include <asm/stage2_pgtable.h>
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index dcc3eef3b1ac..21a7335b87c8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -126,7 +126,8 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 #include <asm/stage2_pgtable.h>
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 326f257d6008..cc0b1a4fa55c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -711,26 +711,30 @@ int create_hyp_mappings(void *from, void *to, pgprot_t prot)
 }
 
 /**
- * create_hyp_io_mappings - duplicate a kernel IO mapping into Hyp mode
- * @from:	The kernel start VA of the range
- * @to:		The kernel end VA of the range (exclusive)
+ * create_hyp_io_mappings - Map IO into both kernel and HYP
  * @phys_addr:	The physical start address which gets mapped
+ * @size:	Size of the region being mapped
+ * @kaddr:	Kernel VA for this mapping
  *
  * The resulting HYP VA is the same as the kernel VA, modulo
  * HYP_PAGE_OFFSET.
  */
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr)
 {
-	unsigned long start = kern_hyp_va((unsigned long)from);
-	unsigned long end = kern_hyp_va((unsigned long)to);
+	unsigned long start, end;
 
-	if (is_kernel_in_hyp_mode())
+	*kaddr = ioremap(phys_addr, size);
+	if (!*kaddr)
+		return -ENOMEM;
+
+	if (is_kernel_in_hyp_mode()) {
 		return 0;
+	}
 
-	/* Check for a valid kernel IO mapping */
-	if (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1))
-		return -EINVAL;
 
+	start = kern_hyp_va((unsigned long)*kaddr);
+	end = kern_hyp_va((unsigned long)*kaddr + size);
 	return __create_hyp_mappings(hyp_pgd, start, end,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 }
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 80897102da26..bc49d702f9f0 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -332,16 +332,10 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 	if (!PAGE_ALIGNED(info->vcpu.start) ||
 	    !PAGE_ALIGNED(resource_size(&info->vcpu))) {
 		kvm_info("GICV region size/alignment is unsafe, using trapping (reduced performance)\n");
-		kvm_vgic_global_state.vcpu_base_va = ioremap(info->vcpu.start,
-							     resource_size(&info->vcpu));
-		if (!kvm_vgic_global_state.vcpu_base_va) {
-			kvm_err("Cannot ioremap GICV\n");
-			return -ENOMEM;
-		}
 
-		ret = create_hyp_io_mappings(kvm_vgic_global_state.vcpu_base_va,
-					     kvm_vgic_global_state.vcpu_base_va + resource_size(&info->vcpu),
-					     info->vcpu.start);
+		ret = create_hyp_io_mappings(info->vcpu.start,
+					     resource_size(&info->vcpu),
+					     &kvm_vgic_global_state.vcpu_base_va);
 		if (ret) {
 			kvm_err("Cannot map GICV into hyp\n");
 			goto out;
@@ -350,26 +344,17 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 		static_branch_enable(&vgic_v2_cpuif_trap);
 	}
 
-	kvm_vgic_global_state.vctrl_base = ioremap(info->vctrl.start,
-						   resource_size(&info->vctrl));
-	if (!kvm_vgic_global_state.vctrl_base) {
-		kvm_err("Cannot ioremap GICH\n");
-		ret = -ENOMEM;
+	ret = create_hyp_io_mappings(info->vctrl.start,
+				     resource_size(&info->vctrl),
+				     &kvm_vgic_global_state.vctrl_base);
+	if (ret) {
+		kvm_err("Cannot map VCTRL into hyp\n");
 		goto out;
 	}
 
 	vtr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VTR);
 	kvm_vgic_global_state.nr_lr = (vtr & 0x3f) + 1;
 
-	ret = create_hyp_io_mappings(kvm_vgic_global_state.vctrl_base,
-				     kvm_vgic_global_state.vctrl_base +
-					 resource_size(&info->vctrl),
-				     info->vctrl.start);
-	if (ret) {
-		kvm_err("Cannot map VCTRL into hyp\n");
-		goto out;
-	}
-
 	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
 	if (ret) {
 		kvm_err("Cannot register GICv2 KVM device\n");
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 17/18] KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

As we're about to change the way we map devices at HYP, we need
to move away from using kern_hyp_va on an IO address.

One way of achieving this is to store the VAs in kvm_vgic_global_state,
and use that directly from the HYP code. This requires a small change
to create_hyp_io_mappings so that it can also return a HYP VA.

We take this opportunity to nuke the vctrl_base field in the emulated
distributor, as it is not used anymore.
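
[Illustration added for this write-up, not part of the original posting:
a toy, user-space model of the calling convention this patch introduces.
Both VAs come back through out-parameters, and a failed HYP mapping
unwinds the ioremap so the caller never sees a half-initialised state.
All fake_*/toy_* names and every address below are invented.]

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Fake stand-ins for ioremap()/iounmap() and the HYP mapping step. */
static void *fake_ioremap(uint64_t phys, size_t size)
{
	(void)phys;
	return malloc(size);
}

static void fake_iounmap(void *addr)
{
	free(addr);
}

static int fake_hyp_map(uint64_t phys, size_t size, uintptr_t *hyp_va)
{
	(void)phys;
	(void)size;
	*hyp_va = 0x40000000UL;	/* pretend HYP VA, unrelated to the kernel VA */
	return 0;		/* change to -1 to exercise the unwind path */
}

/*
 * Toy model of the contract after this patch: on success both
 * out-parameters are set; on failure the ioremap is undone and *kaddr
 * is cleared, so the caller never sees a half-initialised mapping.
 */
static int toy_create_hyp_io_mappings(uint64_t phys, size_t size,
				      void **kaddr, uintptr_t *haddr)
{
	int ret;

	*kaddr = fake_ioremap(phys, size);
	if (!*kaddr)
		return -1;

	ret = fake_hyp_map(phys, size, haddr);
	if (ret) {
		fake_iounmap(*kaddr);
		*kaddr = NULL;
	}

	return ret;
}

int main(void)
{
	void *kva;
	uintptr_t hyp_va;

	if (!toy_create_hyp_io_mappings(0x8010000, 0x2000, &kva, &hyp_va))
		printf("kernel VA %p, HYP VA 0x%lx\n", kva, (unsigned long)hyp_va);
	return 0;
}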

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  3 ++-
 arch/arm64/include/asm/kvm_mmu.h |  3 ++-
 include/kvm/arm_vgic.h           | 12 ++++++------
 virt/kvm/arm/hyp/vgic-v2-sr.c    | 10 +++-------
 virt/kvm/arm/mmu.c               | 20 ++++++++++++++++----
 virt/kvm/arm/vgic/vgic-init.c    |  6 ------
 virt/kvm/arm/vgic/vgic-v2.c      | 13 +++++++------
 7 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index cb3bef71ec9b..feff24b34506 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -42,7 +42,8 @@
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr);
+			   void __iomem **kaddr,
+			   void __iomem **haddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 21a7335b87c8..160444b78505 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -127,7 +127,8 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr);
+			   void __iomem **kaddr,
+			   void __iomem **haddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 8c896540a72c..8b3fbc03293b 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -57,11 +57,15 @@ struct vgic_global {
 	/* Physical address of vgic virtual cpu interface */
 	phys_addr_t		vcpu_base;
 
-	/* GICV mapping */
+	/* GICV mapping, kernel VA */
 	void __iomem		*vcpu_base_va;
+	/* GICV mapping, HYP VA */
+	void __iomem		*vcpu_hyp_va;
 
-	/* virtual control interface mapping */
+	/* virtual control interface mapping, kernel VA */
 	void __iomem		*vctrl_base;
+	/* virtual control interface mapping, HYP VA */
+	void __iomem		*vctrl_hyp;
 
 	/* Number of implemented list registers */
 	int			nr_lr;
@@ -198,10 +202,6 @@ struct vgic_dist {
 
 	int			nr_spis;
 
-	/* TODO: Consider moving to global state */
-	/* Virtual control interface mapping */
-	void __iomem		*vctrl_base;
-
 	/* base addresses in guest physical address space: */
 	gpa_t			vgic_dist_base;		/* distributor */
 	union {
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 19f63fbf3682..a3b224e09f74 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -60,10 +60,8 @@ static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
 /* vcpu is already in the HYP VA space */
 void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+	void __iomem *base = hyp_symbol_addr(kvm_vgic_global_state)->vctrl_hyp;
 	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 
 	if (!base)
@@ -85,10 +83,8 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
 /* vcpu is already in the HYP VA space */
 void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+	void __iomem *base = hyp_symbol_addr(kvm_vgic_global_state)->vctrl_hyp;
 	int i;
 	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 
@@ -143,7 +139,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 
 	rd = kvm_vcpu_dabt_get_rd(vcpu);
-	addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
+	addr  = hyp_symbol_addr(kvm_vgic_global_state)->vcpu_hyp_va;
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index cc0b1a4fa55c..9f73a8a17147 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -715,28 +715,40 @@ int create_hyp_mappings(void *from, void *to, pgprot_t prot)
  * @phys_addr:	The physical start address which gets mapped
  * @size:	Size of the region being mapped
  * @kaddr:	Kernel VA for this mapping
+ * @haddr:	HYP VA for this mapping
  *
- * The resulting HYP VA is the same as the kernel VA, modulo
- * HYP_PAGE_OFFSET.
+ * The resulting HYP VA is completely unrelated to the kernel VA.
  */
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr)
+			   void __iomem **kaddr,
+			   void __iomem **haddr)
 {
 	unsigned long start, end;
+	int ret;
 
 	*kaddr = ioremap(phys_addr, size);
 	if (!*kaddr)
 		return -ENOMEM;
 
 	if (is_kernel_in_hyp_mode()) {
+		*haddr = *kaddr;
 		return 0;
 	}
 
 
 	start = kern_hyp_va((unsigned long)*kaddr);
 	end = kern_hyp_va((unsigned long)*kaddr + size);
-	return __create_hyp_mappings(hyp_pgd, start, end,
+	ret = __create_hyp_mappings(hyp_pgd, start, end,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
+
+	if (ret) {
+		iounmap(*kaddr);
+		*kaddr = NULL;
+	} else {
+		*haddr = (void __iomem *)start;
+	}
+
+	return ret;
 }
 
 /**
diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index 62310122ee78..3f01b5975055 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -166,12 +166,6 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 	kvm->arch.vgic.in_kernel = true;
 	kvm->arch.vgic.vgic_model = type;
 
-	/*
-	 * kvm_vgic_global_state.vctrl_base is set on vgic probe (kvm_arch_init)
-	 * it is stored in distributor struct for asm save/restore purpose
-	 */
-	kvm->arch.vgic.vctrl_base = kvm_vgic_global_state.vctrl_base;
-
 	kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index bc49d702f9f0..f0f566e4494e 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -335,7 +335,8 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 
 		ret = create_hyp_io_mappings(info->vcpu.start,
 					     resource_size(&info->vcpu),
-					     &kvm_vgic_global_state.vcpu_base_va);
+					     &kvm_vgic_global_state.vcpu_base_va,
+					     &kvm_vgic_global_state.vcpu_hyp_va);
 		if (ret) {
 			kvm_err("Cannot map GICV into hyp\n");
 			goto out;
@@ -346,7 +347,8 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 
 	ret = create_hyp_io_mappings(info->vctrl.start,
 				     resource_size(&info->vctrl),
-				     &kvm_vgic_global_state.vctrl_base);
+				     &kvm_vgic_global_state.vctrl_base,
+				     &kvm_vgic_global_state.vctrl_hyp);
 	if (ret) {
 		kvm_err("Cannot map VCTRL into hyp\n");
 		goto out;
@@ -381,15 +383,14 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 void vgic_v2_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 
-	writel_relaxed(cpu_if->vgic_vmcr, vgic->vctrl_base + GICH_VMCR);
+	writel_relaxed(cpu_if->vgic_vmcr,
+		       kvm_vgic_global_state.vctrl_base + GICH_VMCR);
 }
 
 void vgic_v2_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 
-	cpu_if->vgic_vmcr = readl_relaxed(vgic->vctrl_base + GICH_VMCR);
+	cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR);
 }
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 18/18] KVM: arm/arm64: Move HYP IO VAs to the "idmap" range
  2017-12-06 14:38 ` Marc Zyngier
@ 2017-12-06 14:38   ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

So far we have mapped our HYP IO (which is essentially the GICv2 control
registers) using the same method as for memory. It recently appeared
that this is a bit unsafe:

we compute the HYP VA using the kern_hyp_va helper, but that helper
is only designed to deal with kernel VAs coming from the linear map,
and not from the vmalloc region... This could in turn cause some bad
aliasing between the two, amplified by the new VA randomisation.

A solution is to come up with our very own basic VA allocator for
MMIO. Since half of the HYP address space only contains a single
page (the idmap), we have plenty to borrow from. Let's use the idmap
as a base, and allocate downwards from it. GICv2 now lives on the
other side of the great VA barrier.
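
[Illustration added for this write-up, not part of the original posting:
a toy, user-space model of the downward allocator described above. Each
mapping is carved out below the previous base and aligned down to its
own (power-of-two) size. No locking or failure handling here, and the
starting "idmap" address is invented.]

#include <stdio.h>

static unsigned long io_map_base = 0x41a95000UL;	/* pretend idmap VA */

static unsigned long alloc_hyp_io_va(unsigned long size)
{
	unsigned long base = io_map_base - size;

	base &= ~(size - 1);	/* align down to the (power-of-two) size */
	io_map_base = base;	/* the next allocation goes below this one */
	return base;
}

int main(void)
{
	/* e.g. two 8KiB GIC regions, as in the GICv2 probe path */
	printf("first  region at 0x%lx\n", alloc_hyp_io_va(0x2000));
	printf("second region at 0x%lx\n", alloc_hyp_io_va(0x2000));
	return 0;
}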

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 9f73a8a17147..d843548abd3d 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -43,6 +43,9 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
+static DEFINE_MUTEX(io_map_lock);
+static unsigned long io_map_base;
+
 #define S2_PGD_SIZE	(PTRS_PER_S2_PGD * sizeof(pgd_t))
 #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
@@ -723,7 +726,8 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
 			   void __iomem **haddr)
 {
-	unsigned long start, end;
+	pgd_t *pgd = hyp_pgd;
+	unsigned long base;
 	int ret;
 
 	*kaddr = ioremap(phys_addr, size);
@@ -735,19 +739,26 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 		return 0;
 	}
 
+	mutex_lock(&io_map_lock);
+
+	base = io_map_base - size;
+	base &= ~(size - 1);
+
+	if (__kvm_cpu_uses_extended_idmap())
+		pgd = boot_hyp_pgd;
 
-	start = kern_hyp_va((unsigned long)*kaddr);
-	end = kern_hyp_va((unsigned long)*kaddr + size);
-	ret = __create_hyp_mappings(hyp_pgd, start, end,
+	ret = __create_hyp_mappings(pgd, base, base + size,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 
 	if (ret) {
 		iounmap(*kaddr);
 		*kaddr = NULL;
 	} else {
-		*haddr = (void __iomem *)start;
+		*haddr = (void __iomem *)base;
+		io_map_base = base;
 	}
 
+	mutex_unlock(&io_map_lock);
 	return ret;
 }
 
@@ -1828,6 +1839,7 @@ int kvm_mmu_init(void)
 			goto out;
 	}
 
+	io_map_base = hyp_idmap_start;
 	return 0;
 out:
 	free_hyp_pgds();
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-06 14:38   ` Marc Zyngier
@ 2017-12-06 14:48     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 46+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-12-06 14:48 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, Catalin Marinas, Will Deacon, linux-arm-kernel, kvmarm

On Wed, Dec 06, 2017 at 02:38:25PM +0000, Marc Zyngier wrote:
> We're playing a dangerous game with struct alt_instr, as we produce
> it using assembly tricks, but parse them using the C structure.
> We just assume that the respective alignments of the two will
> be the same.
> 
> But as we add more fields to this structure, the alignment requirements
> of the structure may change, and lead to all kinds of funky bugs.
> 
> To solve this, let's move the definition of struct alt_instr to its
> own file, and use this to generate the alignment constraint from
> asm-offsets.c. The various macros are then patched to take the
> alignment into account.

Would it be better to use .p2align as on 32-bit ARM you must
have it 4-byte aligned. Or at least have a BUILD_BUG_ON
to make sure the size can be divided by four??

Oh wait. You are not even touching ARM-32, how come? The alternative
code can run on ARM-32 ...
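
[Illustration added for this write-up, not part of the original thread:
a small user-space sketch of how the alignment exponent in the quoted
patch works out. The struct mirrors the quoted alt_instr layout, with
kernel types swapped for their user-space equivalents.]

#include <stdio.h>

struct alt_instr {
	int		orig_offset;
	int		alt_offset;
	unsigned short	cpufeature;
	unsigned char	orig_len;
	unsigned char	alt_len;
};

int main(void)
{
	unsigned long align = __alignof__(struct alt_instr);
	/* Same expression as the DEFINE() in the quoted asm-offsets.c hunk:
	 * 63 - clzl(align) is log2(align) for a power-of-two alignment. */
	int align_exp = 63 - __builtin_clzl(align);

	/* Prints "alignment=4 -> .align 2": on arm/arm64 gas, ".align 2"
	 * means 2^2 = 4-byte alignment, the same as ".p2align 2". */
	printf("alignment=%lu -> .align %d\n", align, align_exp);
	return 0;
}
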
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/alternative.h       | 13 +++++--------
>  arch/arm64/include/asm/alternative_types.h | 13 +++++++++++++
>  arch/arm64/kernel/asm-offsets.c            |  4 ++++
>  3 files changed, 22 insertions(+), 8 deletions(-)
>  create mode 100644 arch/arm64/include/asm/alternative_types.h
> 
> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
> index 4a85c6952a22..395befde7595 100644
> --- a/arch/arm64/include/asm/alternative.h
> +++ b/arch/arm64/include/asm/alternative.h
> @@ -2,28 +2,24 @@
>  #ifndef __ASM_ALTERNATIVE_H
>  #define __ASM_ALTERNATIVE_H
>  
> +#include <asm/asm-offsets.h>
>  #include <asm/cpucaps.h>
>  #include <asm/insn.h>
>  
>  #ifndef __ASSEMBLY__
>  
> +#include <asm/alternative_types.h>
> +
>  #include <linux/init.h>
>  #include <linux/types.h>
>  #include <linux/stddef.h>
>  #include <linux/stringify.h>
>  
> -struct alt_instr {
> -	s32 orig_offset;	/* offset to original instruction */
> -	s32 alt_offset;		/* offset to replacement instruction */
> -	u16 cpufeature;		/* cpufeature bit set for replacement */
> -	u8  orig_len;		/* size of original instruction(s) */
> -	u8  alt_len;		/* size of new instruction(s), <= orig_len */
> -};
> -
>  void __init apply_alternatives_all(void);
>  void apply_alternatives(void *start, size_t length);
>  
>  #define ALTINSTR_ENTRY(feature)						      \
> +	" .align " __stringify(ALTINSTR_ALIGN) "\n"			      \
>  	" .word 661b - .\n"				/* label           */ \
>  	" .word 663f - .\n"				/* new instruction */ \
>  	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
> @@ -69,6 +65,7 @@ void apply_alternatives(void *start, size_t length);
>  #include <asm/assembler.h>
>  
>  .macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
> +	.align ALTINSTR_ALIGN
>  	.word \orig_offset - .
>  	.word \alt_offset - .
>  	.hword \feature
> diff --git a/arch/arm64/include/asm/alternative_types.h b/arch/arm64/include/asm/alternative_types.h
> new file mode 100644
> index 000000000000..26cf76167f2d
> --- /dev/null
> +++ b/arch/arm64/include/asm/alternative_types.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ALTERNATIVE_TYPES_H
> +#define __ASM_ALTERNATIVE_TYPES_H
> +
> +struct alt_instr {
> +	s32 orig_offset;	/* offset to original instruction */
> +	s32 alt_offset;		/* offset to replacement instruction */
> +	u16 cpufeature;		/* cpufeature bit set for replacement */
> +	u8  orig_len;		/* size of original instruction(s) */
> +	u8  alt_len;		/* size of new instruction(s), <= orig_len */
> +};
> +
> +#endif
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 74b9a26a84b5..652165c8655a 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -25,6 +25,7 @@
>  #include <linux/dma-mapping.h>
>  #include <linux/kvm_host.h>
>  #include <linux/suspend.h>
> +#include <asm/alternative_types.h>
>  #include <asm/cpufeature.h>
>  #include <asm/thread_info.h>
>  #include <asm/memory.h>
> @@ -151,5 +152,8 @@ int main(void)
>    DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
>    DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
>    DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
> +  BLANK();
> +  DEFINE(ALTINSTR_ALIGN,	(63 - __builtin_clzl(__alignof__(struct alt_instr))));
> +
>    return 0;
>  }
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-06 14:48     ` Konrad Rzeszutek Wilk
@ 2017-12-06 14:57       ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 14:57 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: linux-arm-kernel, kvm, kvmarm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, James Morse

On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 06, 2017 at 02:38:25PM +0000, Marc Zyngier wrote:
>> We're playing a dangerous game with struct alt_instr, as we produce
>> it using assembly tricks, but parse them using the C structure.
>> We just assume that the respective alignments of the two will
>> be the same.
>>
>> But as we add more fields to this structure, the alignment requirements
>> of the structure may change, and lead to all kinds of funky bugs.
>>
>> To solve this, let's move the definition of struct alt_instr to its
>> own file, and use this to generate the alignment constraint from
>> asm-offsets.c. The various macros are then patched to take the
>> alignment into account.
> 
> Would it be better to use .p2align as on 32-bit ARM you must
> have it 4-byte aligned. Or at least have a BUILD_BUG_ON
> to make sure the size can be divided by four??
> 
> Oh wait. You are not even touching ARM-32, how come? The alternative
> code can run on ARM-32 ...

How? Given that I haven't written it yet, I'd be grateful if you could
share your time machine...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-06 14:57       ` Marc Zyngier
@ 2017-12-06 15:18         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 46+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-12-06 15:18 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, Catalin Marinas, Will Deacon, linux-arm-kernel, kvmarm

On Wed, Dec 06, 2017 at 02:57:29PM +0000, Marc Zyngier wrote:
> On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 06, 2017 at 02:38:25PM +0000, Marc Zyngier wrote:
> >> We're playing a dangerous game with struct alt_instr, as we produce
> >> it using assembly tricks, but parse them using the C structure.
> >> We just assume that the respective alignments of the two will
> >> be the same.
> >>
> >> But as we add more fields to this structure, the alignment requirements
> >> of the structure may change, and lead to all kinds of funky bugs.
> >>
> >> To solve this, let's move the definition of struct alt_instr to its
> >> own file, and use this to generate the alignment constraint from
> >> asm-offsets.c. The various macros are then patched to take the
> >> alignment into account.
> > 
> > Would it be better to use .p2align as on 32-bit ARM you must
> > have it 4-byte aligned. Or at least have a BUILD_BUG_ON
> > to make sure the size can be divided by four??
> > 
> > Oh wait. You are not even touching ARM-32, how come? The alternative
> > code can run on ARM-32 ...
> 
> How? Given that I haven't written it yet, I'd be grateful if you could
> share your time machine...

Oh! I assumed it would be there as the Xen variant runs on ARM-32 and
it borrowed a bunch of code from Linux. 

Please disregard my comment. I will go back to tweaking the time machine.
> 
> Thanks,
> 
> 	M.
> -- 
> Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-06 15:18         ` Konrad Rzeszutek Wilk
@ 2017-12-06 15:39           ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2017-12-06 15:39 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: kvm, Catalin Marinas, Will Deacon, linux-arm-kernel, kvmarm

On 06/12/17 15:18, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 06, 2017 at 02:57:29PM +0000, Marc Zyngier wrote:
>> On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Dec 06, 2017 at 02:38:25PM +0000, Marc Zyngier wrote:
>>>> We're playing a dangerous game with struct alt_instr, as we produce
>>>> it using assembly tricks, but parse them using the C structure.
>>>> We just assume that the respective alignments of the two will
>>>> be the same.
>>>>
>>>> But as we add more fields to this structure, the alignment requirements
>>>> of the structure may change, and lead to all kinds of funky bugs.
>>>>
>>>> To solve this, let's move the definition of struct alt_instr to its
>>>> own file, and use this to generate the alignment constraint from
>>>> asm-offsets.c. The various macros are then patched to take the
>>>> alignment into account.
>>>
>>> Would it be better to use .p2align as on 32-bit ARM you must
>>> have it 4-byte aligned. Or at least have a BUILD_BUG_ON
>>> to make sure the size can be divided by four??
>>>
>>> Oh wait. You are not even touching ARM-32, how come? The alternative
>>> code can run on ARM-32 ...
>>
>> How? Given that I haven't written it yet, I'd be grateful if you could
>> share your time machine...
> 
> Oh! I assumed it would be there as the Xen variant runs on ARM-32 and
> it borrowed a bunch of code from Linux. 

KVM definitely runs on 32bit, but doesn't have any alternative facility,
at least not in the way arm64 does.

> Please disregard my comment. I will go back to tweaking the time machine.

Let me know once you've debugged it, I have a couple of issues to
address in the past...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2017-12-06 15:39 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-06 14:38 [PATCH 00/18] KVM/arm64: Randomise EL2 mappings Marc Zyngier
2017-12-06 14:38 ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 01/18] arm64: asm-offsets: Avoid clashing DMA definitions Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 02/18] arm64: asm-offsets: Remove unused definitions Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 03/18] arm64: asm-offsets: Remove potential circular dependency Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 04/18] arm64: alternatives: Enforce alignment of struct alt_instr Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:48   ` Konrad Rzeszutek Wilk
2017-12-06 14:48     ` Konrad Rzeszutek Wilk
2017-12-06 14:57     ` Marc Zyngier
2017-12-06 14:57       ` Marc Zyngier
2017-12-06 15:18       ` Konrad Rzeszutek Wilk
2017-12-06 15:18         ` Konrad Rzeszutek Wilk
2017-12-06 15:39         ` Marc Zyngier
2017-12-06 15:39           ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 05/18] arm64: alternatives: Add dynamic patching feature Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 06/18] arm64: insn: Add N immediate encoding Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 07/18] arm64: insn: Add encoder for bitwise operations using litterals Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 08/18] arm64: KVM: Dynamically patch the kernel/hyp VA mask Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 09/18] arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 10/18] arm64; insn: Add encoder for the EXTR instruction Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 11/18] arm64: insn: Allow ADD/SUB (immediate) with LSL #12 Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 12/18] arm64: KVM: Introduce EL2 VA randomisation Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 13/18] arm64: Update the KVM memory map documentation Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 14/18] KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 15/18] KVM: arm/arm64: Demote HYP VA range display to being a debug feature Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 16/18] KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 17/18] KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
2017-12-06 14:38 ` [PATCH 18/18] KVM: arm/arm64: Move HYP IO VAs to the "idmap" range Marc Zyngier
2017-12-06 14:38   ` Marc Zyngier
