* [PATCH v2 00/19] KVM/arm64: Randomise EL2 mappings
@ 2017-12-11 14:49 ` Marc Zyngier
  0 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

Whilst KVM benefits from the kernel randomisation via KASLR, there is
no additional randomisation of the EL2 mappings when the kernel is
running at EL1, as we directly use a fixed offset from the linear
mapping. This is not necessarily a problem, but we could do a bit
better by independently randomising the HYP placement.

This series proposes to randomise the offset by inserting a few random
bits between the MSB of the RAM linear mapping and the top of the HYP
VA (VA_BITS - 2). That's not a lot of random bits (on my Mustang, I
get 13 bits), but that's better than nothing.
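
As a rough sketch of the idea (illustration only; the helper name and
parameter below are made up, not code from this series), the number of
random bits available is simply the distance between those two positions:

	/*
	 * Illustration only: how many bits are available for randomising
	 * the HYP VA. "ram_msb" stands for the position of the most
	 * significant bit of the linear mapping of RAM.
	 */
	static unsigned int hyp_random_bits(unsigned int ram_msb)
	{
		/* random bits live between ram_msb and bit (VA_BITS - 2) */
		return (VA_BITS - 2) - ram_msb;
	}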

In order to achieve this, we need to be able to patch dynamic values
in the kernel text. This results in a bunch of changes to the
alternative framework, the insn library, and a few more hacks in KVM
itself (we get a new way to map the GIC at EL2). This series used to
depend on a number of cleanups in asm-offsets, which is no longer
the case. I'm still including them, as I think they remain pretty
useful.

This has been tested on the FVP model, Seattle (both 39 and 48bit VA),
Mustang and Thunder-X. I've also done a sanity check on 32bit (which
is only impacted by the HYP IO VA stuff).

Thanks,

	M.

* From v1:
  - Now works correctly with KASLR
  - Dropped the callback field from alt_instr, reusing one of the
    existing fields to store an offset to the callback
  - Fixed the HYP teardown path (depends on fixes previously posted)
  - Dropped the VA offset macros

Marc Zyngier (19):
  arm64: asm-offsets: Avoid clashing DMA definitions
  arm64: asm-offsets: Remove unused definitions
  arm64: asm-offsets: Remove potential circular dependency
  arm64: alternatives: Enforce alignment of struct alt_instr
  arm64: alternatives: Add dynamic patching feature
  arm64: insn: Add N immediate encoding
  arm64: insn: Add encoder for bitwise operations using literals
  arm64: KVM: Dynamically patch the kernel/hyp VA mask
  arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
  KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
  KVM: arm/arm64: Demote HYP VA range display to being a debug feature
  KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
  KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
  KVM: arm/arm64: Move HYP IO VAs to the "idmap" range
  arm64: insn: Add encoder for the EXTR instruction
  arm64: insn: Allow ADD/SUB (immediate) with LSL #12
  arm64: KVM: Dynamically compute the HYP VA mask
  arm64: KVM: Introduce EL2 VA randomisation
  arm64: Update the KVM memory map documentation

 Documentation/arm64/memory.txt             |   8 +-
 arch/arm/include/asm/kvm_hyp.h             |   6 +
 arch/arm/include/asm/kvm_mmu.h             |   4 +-
 arch/arm64/include/asm/alternative.h       |  53 +++++---
 arch/arm64/include/asm/alternative_types.h |  16 +++
 arch/arm64/include/asm/asm-offsets.h       |   2 +
 arch/arm64/include/asm/cpucaps.h           |   2 +-
 arch/arm64/include/asm/insn.h              |  16 +++
 arch/arm64/include/asm/kvm_hyp.h           |   9 ++
 arch/arm64/include/asm/kvm_mmu.h           |  54 ++++----
 arch/arm64/kernel/alternative.c            |  14 ++-
 arch/arm64/kernel/asm-offsets.c            |  17 +--
 arch/arm64/kernel/cpufeature.c             |  19 ---
 arch/arm64/kernel/insn.c                   | 191 ++++++++++++++++++++++++++++-
 arch/arm64/kvm/Makefile                    |   2 +-
 arch/arm64/kvm/haslr.c                     | 135 ++++++++++++++++++++
 arch/arm64/mm/cache.S                      |   4 +-
 include/kvm/arm_vgic.h                     |  12 +-
 virt/kvm/arm/hyp/vgic-v2-sr.c              |  12 +-
 virt/kvm/arm/mmu.c                         |  81 ++++++++----
 virt/kvm/arm/vgic/vgic-init.c              |   6 -
 virt/kvm/arm/vgic/vgic-v2.c                |  40 ++----
 22 files changed, 542 insertions(+), 161 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative_types.h
 create mode 100644 arch/arm64/kvm/haslr.c

-- 
2.14.2

^ permalink raw reply	[flat|nested] 66+ messages in thread

* [PATCH v2 01/19] arm64: asm-offsets: Avoid clashing DMA definitions
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

asm-offsets.h contains a few DMA-related definitions that have
the exact same name as the enum members they are derived from.

While this has not been a problem so far, it will become an issue if
both asm-offsets.h and include/linux/dma-direction.h are pulled in
by the same file.

Let's sidestep the issue by renaming the asm-offsets.h constants.
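
For illustration only (simplified, not code from this patch): once both
headers end up in the same translation unit, the macros emitted by
asm-offsets.h get substituted into the enum declaration, which then no
longer compiles:

	/* What asm-offsets.h used to emit (simplified): */
	#define DMA_BIDIRECTIONAL 0
	#define DMA_TO_DEVICE 1
	#define DMA_FROM_DEVICE 2

	/* From include/linux/dma-direction.h: */
	enum dma_data_direction {
		DMA_BIDIRECTIONAL = 0,	/* preprocessed into "0 = 0": error */
		DMA_TO_DEVICE = 1,
		DMA_FROM_DEVICE = 2,
		DMA_NONE = 3,
	};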

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c | 6 +++---
 arch/arm64/mm/cache.S           | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 71bf088f1e4b..7e8be0c22ce0 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -87,9 +87,9 @@ int main(void)
   BLANK();
   DEFINE(PAGE_SZ,	       	PAGE_SIZE);
   BLANK();
-  DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
-  DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
-  DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
+  DEFINE(__DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
+  DEFINE(__DMA_TO_DEVICE,	DMA_TO_DEVICE);
+  DEFINE(__DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
   BLANK();
   DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
   DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7f1dbe962cf5..c1336be085eb 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -205,7 +205,7 @@ ENDPIPROC(__dma_flush_area)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_map_area)
-	cmp	w2, #DMA_FROM_DEVICE
+	cmp	w2, #__DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
 	b	__dma_clean_area
 ENDPIPROC(__dma_map_area)
@@ -217,7 +217,7 @@ ENDPIPROC(__dma_map_area)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_unmap_area)
-	cmp	w2, #DMA_TO_DEVICE
+	cmp	w2, #__DMA_TO_DEVICE
 	b.ne	__dma_inv_area
 	ret
 ENDPIPROC(__dma_unmap_area)
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 02/19] arm64: asm-offsets: Remove unused definitions
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

asm-offsets.h contains a number of definitions that are not used
at all, and in some cases conflict with other definitions (such as
NSEC_PER_SEC).

Spring clean-up time.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7e8be0c22ce0..742887330101 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -83,10 +83,6 @@ int main(void)
   DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
   DEFINE(VMA_VM_FLAGS,		offsetof(struct vm_area_struct, vm_flags));
   BLANK();
-  DEFINE(VM_EXEC,	       	VM_EXEC);
-  BLANK();
-  DEFINE(PAGE_SZ,	       	PAGE_SIZE);
-  BLANK();
   DEFINE(__DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
   DEFINE(__DMA_TO_DEVICE,	DMA_TO_DEVICE);
   DEFINE(__DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
@@ -98,7 +94,6 @@ int main(void)
   DEFINE(CLOCK_REALTIME_COARSE,	CLOCK_REALTIME_COARSE);
   DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
   DEFINE(CLOCK_COARSE_RES,	LOW_RES_NSEC);
-  DEFINE(NSEC_PER_SEC,		NSEC_PER_SEC);
   BLANK();
   DEFINE(VDSO_CS_CYCLE_LAST,	offsetof(struct vdso_data, cs_cycle_last));
   DEFINE(VDSO_RAW_TIME_SEC,	offsetof(struct vdso_data, raw_time_sec));
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 03/19] arm64: asm-offsets: Remove potential circular dependency
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

So far, we've been lucky enough that none of the include files
that asm-offsets.c requires include asm-offsets.h themselves. This is
about to change, and would introduce a nasty circular dependency...

Let's guard the inclusion of asm-offsets.h so that it never
gets pulled in by asm-offsets.c.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/asm-offsets.h | 2 ++
 arch/arm64/kernel/asm-offsets.c      | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/asm-offsets.h b/arch/arm64/include/asm/asm-offsets.h
index d370ee36a182..ed8df3a9c95a 100644
--- a/arch/arm64/include/asm/asm-offsets.h
+++ b/arch/arm64/include/asm/asm-offsets.h
@@ -1 +1,3 @@
+#ifndef IN_ASM_OFFSET_GENERATOR
 #include <generated/asm-offsets.h>
+#endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 742887330101..74b9a26a84b5 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -18,6 +18,8 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#define IN_ASM_OFFSET_GENERATOR	1
+
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 04/19] arm64: alternatives: Enforce alignment of struct alt_instr
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We're playing a dangerous game with struct alt_instr: we produce
it using assembly tricks, but parse it using the C structure.
We just assume that the respective alignments of the two will
be the same.

But as we add more fields to this structure, its alignment requirements
may change, and lead to all kinds of funky bugs.

To solve this, let's move the definition of struct alt_instr to its
own file, and use it to generate the alignment constraint from
asm-offsets.c. The various macros are then patched to take the
alignment into account.
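
As a worked example of the constant added below (a standalone userspace
sketch, not part of the patch): on arm64, the GAS ".align" directive
takes a power-of-two exponent, so asm-offsets.c must export log2 of the
C alignment rather than the alignment itself.

	#include <stdio.h>

	/* Same field layout as struct alt_instr, so the same alignment (4) */
	struct alt_example {
		int		orig_offset;
		int		alt_offset;
		unsigned short	cpufeature;
		unsigned char	orig_len;
		unsigned char	alt_len;
	};

	int main(void)
	{
		unsigned long align = __alignof__(struct alt_example);
		unsigned long exp = 63 - __builtin_clzl(align);

		/* 63 - clzl(4) == 2, and ".align 2" means 1 << 2 = 4 bytes */
		printf(".align %lu\n", exp);
		return 0;
	}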

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/alternative.h       | 13 +++++--------
 arch/arm64/include/asm/alternative_types.h | 13 +++++++++++++
 arch/arm64/kernel/asm-offsets.c            |  4 ++++
 3 files changed, 22 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative_types.h

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 4a85c6952a22..395befde7595 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -2,28 +2,24 @@
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H
 
+#include <asm/asm-offsets.h>
 #include <asm/cpucaps.h>
 #include <asm/insn.h>
 
 #ifndef __ASSEMBLY__
 
+#include <asm/alternative_types.h>
+
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
 
-struct alt_instr {
-	s32 orig_offset;	/* offset to original instruction */
-	s32 alt_offset;		/* offset to replacement instruction */
-	u16 cpufeature;		/* cpufeature bit set for replacement */
-	u8  orig_len;		/* size of original instruction(s) */
-	u8  alt_len;		/* size of new instruction(s), <= orig_len */
-};
-
 void __init apply_alternatives_all(void);
 void apply_alternatives(void *start, size_t length);
 
 #define ALTINSTR_ENTRY(feature)						      \
+	" .align " __stringify(ALTINSTR_ALIGN) "\n"			      \
 	" .word 661b - .\n"				/* label           */ \
 	" .word 663f - .\n"				/* new instruction */ \
 	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
@@ -69,6 +65,7 @@ void apply_alternatives(void *start, size_t length);
 #include <asm/assembler.h>
 
 .macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+	.align ALTINSTR_ALIGN
 	.word \orig_offset - .
 	.word \alt_offset - .
 	.hword \feature
diff --git a/arch/arm64/include/asm/alternative_types.h b/arch/arm64/include/asm/alternative_types.h
new file mode 100644
index 000000000000..26cf76167f2d
--- /dev/null
+++ b/arch/arm64/include/asm/alternative_types.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ALTERNATIVE_TYPES_H
+#define __ASM_ALTERNATIVE_TYPES_H
+
+struct alt_instr {
+	s32 orig_offset;	/* offset to original instruction */
+	s32 alt_offset;		/* offset to replacement instruction */
+	u16 cpufeature;		/* cpufeature bit set for replacement */
+	u8  orig_len;		/* size of original instruction(s) */
+	u8  alt_len;		/* size of new instruction(s), <= orig_len */
+};
+
+#endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 74b9a26a84b5..652165c8655a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -25,6 +25,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/kvm_host.h>
 #include <linux/suspend.h>
+#include <asm/alternative_types.h>
 #include <asm/cpufeature.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
@@ -151,5 +152,8 @@ int main(void)
   DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
+  BLANK();
+  DEFINE(ALTINSTR_ALIGN,	(63 - __builtin_clzl(__alignof__(struct alt_instr))));
+
   return 0;
 }
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 05/19] arm64: alternatives: Add dynamic patching feature
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to finely control what gets
patched. For a single feature, this is an all-or-nothing thing.

It would be interesting to have a more fine-grained way of patching
the kernel though, where we could dynamically tune the code that gets
injected.

In order to achieve this, let's introduce a new form of alternative
that is associated with a callback. This callback gets the instruction
sequence number and the old instruction as parameters, and returns
the new instruction. This callback is always called, as the patching
decision is now done at runtime (not patching is equivalent to returning
the same instruction).

Patching with a callback is declared with the new ALTERNATIVE_CB
and alternative_cb directives:

	asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
		     : "r" (v));
or
	alternative_cb callback
		mov	x0, #0
	alternative_else_nop_endif

where callback is the C function computing the alternative.
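
As a further illustration (hypothetical, not part of this patch), a
callback compatible with the alternative_cb_t prototype introduced below
could look like this; it rewrites the first patched instruction and
leaves any others untouched:

	/* Hypothetical callback: replace slot 0 with "mov x0, #0" */
	static u32 example_cb(struct alt_instr *alt, int index, u32 insn)
	{
		/* returning the original instruction means "don't patch" */
		if (index != 0)
			return insn;

		return aarch64_insn_gen_movewide(AARCH64_INSN_REG_0, 0, 0,
						 AARCH64_INSN_VARIANT_64BIT,
						 AARCH64_INSN_MOVEWIDE_ZERO);
	}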

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/alternative.h       | 40 ++++++++++++++++++++++--------
 arch/arm64/include/asm/alternative_types.h |  3 +++
 arch/arm64/kernel/alternative.c            | 14 +++++++++--
 3 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 395befde7595..ce612e10a2c9 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -18,10 +18,14 @@
 void __init apply_alternatives_all(void);
 void apply_alternatives(void *start, size_t length);
 
-#define ALTINSTR_ENTRY(feature)						      \
+#define ALTINSTR_ENTRY(feature,cb)					      \
 	" .align " __stringify(ALTINSTR_ALIGN) "\n"			      \
 	" .word 661b - .\n"				/* label           */ \
+	" .if " __stringify(cb) " == 0\n"				      \
 	" .word 663f - .\n"				/* new instruction */ \
+	" .else\n"							      \
+	" .word " __stringify(cb) "- .\n"		/* callback */	      \
+	" .endif\n"							      \
 	" .hword " __stringify(feature) "\n"		/* feature bit     */ \
 	" .byte 662b-661b\n"				/* source len      */ \
 	" .byte 664f-663f\n"				/* replacement len */
@@ -40,13 +44,13 @@ void apply_alternatives(void *start, size_t length);
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
  */
-#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled, cb)	\
 	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
 	".pushsection .altinstructions,\"a\"\n"				\
-	ALTINSTR_ENTRY(feature)						\
+	ALTINSTR_ENTRY(feature,cb)					\
 	".popsection\n"							\
 	".pushsection .altinstr_replacement, \"a\"\n"			\
 	"663:\n\t"							\
@@ -58,26 +62,32 @@ void apply_alternatives(void *start, size_t length);
 	".endif\n"
 
 #define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
-	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg), 0)
 
+#define _ALTERNATIVE_CB(oldinstr, cb, ...) \
+	__ALTERNATIVE_CFG(oldinstr, oldinstr, ARM64_NCAPS, 1, cb)
 #else
 
 #include <asm/assembler.h>
 
-.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+.macro altinstruction_entry orig_offset, alt_offset, feature, orig_len, alt_len, cb = 0
 	.align ALTINSTR_ALIGN
 	.word \orig_offset - .
+	.if \cb == 0
 	.word \alt_offset - .
+	.else
+	.word \cb - .
+	.endif
 	.hword \feature
 	.byte \orig_len
 	.byte \alt_len
 .endm
 
-.macro alternative_insn insn1, insn2, cap, enable = 1
+.macro alternative_insn insn1, insn2, cap, enable = 1, cb = 0
 	.if \enable
 661:	\insn1
 662:	.pushsection .altinstructions, "a"
-	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f, \cb
 	.popsection
 	.pushsection .altinstr_replacement, "ax"
 663:	\insn2
@@ -109,10 +119,10 @@ void apply_alternatives(void *start, size_t length);
 /*
  * Begin an alternative code sequence.
  */
-.macro alternative_if_not cap
+.macro alternative_if_not cap, cb = 0
 	.set .Lasm_alt_mode, 0
 	.pushsection .altinstructions, "a"
-	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
+	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f, \cb
 	.popsection
 661:
 .endm
@@ -120,13 +130,17 @@ void apply_alternatives(void *start, size_t length);
 .macro alternative_if cap
 	.set .Lasm_alt_mode, 1
 	.pushsection .altinstructions, "a"
-	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
+	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f, 0
 	.popsection
 	.pushsection .altinstr_replacement, "ax"
 	.align 2	/* So GAS knows label 661 is suitably aligned */
 661:
 .endm
 
+.macro alternative_cb cb
+	alternative_if_not ARM64_NCAPS, \cb
+.endm
+
 /*
  * Provide the other half of the alternative code sequence.
  */
@@ -166,6 +180,9 @@ alternative_endif
 #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
+#define _ALTERNATIVE_CB(insn1, cb, ...)	\
+	alternative_insn insn1, insn1, ARM64_NCAPS, 1, cb
+
 .macro user_alt, label, oldinstr, newinstr, cond
 9999:	alternative_insn "\oldinstr", "\newinstr", \cond
 	_ASM_EXTABLE 9999b, \label
@@ -242,4 +259,7 @@ alternative_endif
 #define ALTERNATIVE(oldinstr, newinstr, ...)   \
 	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
 
+#define ALTERNATIVE_CB(oldinstr, cb, ...)	\
+	_ALTERNATIVE_CB(oldinstr, cb)
+
 #endif /* __ASM_ALTERNATIVE_H */
diff --git a/arch/arm64/include/asm/alternative_types.h b/arch/arm64/include/asm/alternative_types.h
index 26cf76167f2d..513f3985d455 100644
--- a/arch/arm64/include/asm/alternative_types.h
+++ b/arch/arm64/include/asm/alternative_types.h
@@ -2,6 +2,9 @@
 #ifndef __ASM_ALTERNATIVE_TYPES_H
 #define __ASM_ALTERNATIVE_TYPES_H
 
+struct alt_instr;
+typedef u32 (*alternative_cb_t)(struct alt_instr *alt, int index, u32 new_insn);
+
 struct alt_instr {
 	s32 orig_offset;	/* offset to original instruction */
 	s32 alt_offset;		/* offset to replacement instruction */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 6dd0a3a3e5c9..279c103ea801 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -110,12 +110,15 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
 	struct alt_instr *alt;
 	struct alt_region *region = alt_region;
 	__le32 *origptr, *replptr, *updptr;
+	alternative_cb_t alt_cb;
 
 	for (alt = region->begin; alt < region->end; alt++) {
 		u32 insn;
 		int i, nr_inst;
 
-		if (!cpus_have_cap(alt->cpufeature))
+		/* Use ARM64_NCAPS as an unconditional patch */
+		if (alt->cpufeature != ARM64_NCAPS &&
+		    !cpus_have_cap(alt->cpufeature))
 			continue;
 
 		BUG_ON(alt->alt_len != alt->orig_len);
@@ -124,11 +127,18 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
 
 		origptr = ALT_ORIG_PTR(alt);
 		replptr = ALT_REPL_PTR(alt);
+		alt_cb  = ALT_REPL_PTR(alt);
 		updptr = use_linear_alias ? lm_alias(origptr) : origptr;
 		nr_inst = alt->alt_len / sizeof(insn);
 
 		for (i = 0; i < nr_inst; i++) {
-			insn = get_alt_insn(alt, origptr + i, replptr + i);
+			if (alt->cpufeature == ARM64_NCAPS) {
+				insn = le32_to_cpu(updptr[i]);
+				insn = alt_cb(alt, i, insn);
+			} else {
+				insn = get_alt_insn(alt, origptr + i,
+						    replptr + i);
+			}
 			updptr[i] = cpu_to_le32(insn);
 		}
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 06/19] arm64: insn: Add N immediate encoding
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

We're missing a way to generate the encoding of the N immediate,
which is only a single bit used in a number of instructions that take
an immediate.
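
For context (this is the standard AArch64 logical-immediate layout, not
something introduced here), N sits just above the immr/imms fields,
which is why a dedicated single-bit immediate type is needed. A usage
sketch, assuming the existing aarch64_insn_encode_immediate() helper:

	/*
	 * AND/ORR/EOR (immediate) layout: N is bit 22, immr is bits 21:16,
	 * imms is bits 15:10. Setting N with the new immediate type:
	 */
	static u32 set_imm_n(u32 insn)
	{
		return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, 1);
	}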

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h | 1 +
 arch/arm64/kernel/insn.c      | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 4214c38d016b..21fffdd290a3 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -70,6 +70,7 @@ enum aarch64_insn_imm_type {
 	AARCH64_INSN_IMM_6,
 	AARCH64_INSN_IMM_S,
 	AARCH64_INSN_IMM_R,
+	AARCH64_INSN_IMM_N,
 	AARCH64_INSN_IMM_MAX
 };
 
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 2718a77da165..7e432662d454 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -343,6 +343,10 @@ static int __kprobes aarch64_get_imm_shift_mask(enum aarch64_insn_imm_type type,
 		mask = BIT(6) - 1;
 		shift = 16;
 		break;
+	case AARCH64_INSN_IMM_N:
+		mask = 1;
+		shift = 22;
+		break;
 	default:
 		return -EINVAL;
 	}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using literals
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().
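
As an illustration of what the new helper produces (values picked here,
not taken from the patch): encoding "and x0, x1, #0x0000ffff" uses a
64-bit element, sixteen contiguous ones and no rotation, which works out
to N=1, immr=0, imms=0b001111.

	/* Sketch: generate "and x0, x1, #0x0000ffff" with the new encoder */
	static u32 example_and_imm(void)
	{
		return aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
							  AARCH64_INSN_VARIANT_64BIT,
							  AARCH64_INSN_REG_1,	/* Rn */
							  AARCH64_INSN_REG_0,	/* Rd */
							  0x0000ffff);
	}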

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |   9 +++
 arch/arm64/kernel/insn.c      | 137 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 21fffdd290a3..815b35bc53ed 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -315,6 +315,10 @@ __AARCH64_INSN_FUNCS(eor,	0x7F200000, 0x4A000000)
 __AARCH64_INSN_FUNCS(eon,	0x7F200000, 0x4A200000)
 __AARCH64_INSN_FUNCS(ands,	0x7F200000, 0x6A000000)
 __AARCH64_INSN_FUNCS(bics,	0x7F200000, 0x6A200000)
+__AARCH64_INSN_FUNCS(and_imm,	0x7F800000, 0x12000000)
+__AARCH64_INSN_FUNCS(orr_imm,	0x7F800000, 0x32000000)
+__AARCH64_INSN_FUNCS(eor_imm,	0x7F800000, 0x52000000)
+__AARCH64_INSN_FUNCS(ands_imm,	0x7F800000, 0x72000000)
 __AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
 __AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
 __AARCH64_INSN_FUNCS(cbz,	0x7F000000, 0x34000000)
@@ -424,6 +428,11 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 					 int shift,
 					 enum aarch64_insn_variant variant,
 					 enum aarch64_insn_logic_type type);
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm);
 u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7e432662d454..326b17016485 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1485,3 +1485,140 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
 	__check_hi, __check_ls, __check_ge, __check_lt,
 	__check_gt, __check_le, __check_al, __check_al
 };
+
+static bool range_of_ones(u64 val)
+{
+	/* Doesn't handle full ones or full zeroes */
+	int x = __ffs64(val) - 1;
+	u64 sval = val >> x;
+
+	/* One of Sean Eron Anderson's bithack tricks */
+	return ((sval + 1) & (sval)) == 0;
+}
+
+static u32 aarch64_encode_immediate(u64 imm,
+				    enum aarch64_insn_variant variant,
+				    u32 insn)
+{
+	unsigned int immr, imms, n, ones, ror, esz, tmp;
+	u64 mask;
+
+	/* Can't encode full zeroes or full ones */
+	if (!imm || !~imm)
+		return AARCH64_BREAK_FAULT;
+
+	switch (variant) {
+	case AARCH64_INSN_VARIANT_32BIT:
+		if (upper_32_bits(imm))
+			return AARCH64_BREAK_FAULT;
+		esz = 32;
+		break;
+	case AARCH64_INSN_VARIANT_64BIT:
+		insn |= AARCH64_INSN_SF_BIT;
+		esz = 64;
+		break;
+	default:
+		pr_err("%s: unknown variant encoding %d\n", __func__, variant);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	/*
+	 * Inverse of Replicate(). Try to spot a repeating pattern
+	 * with a pow2 stride.
+	 */
+	for (tmp = esz; tmp > 2; tmp /= 2) {
+		u64 emask = BIT(tmp / 2) - 1;
+
+		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
+			break;
+
+		esz = tmp;
+	}
+
+	/* N is only set if we're encoding a 64bit value */
+	n = esz == 64;
+
+	/* Trim imm to the element size */
+	mask = BIT(esz - 1) - 1;
+	imm &= mask;
+
+	/* That's how many ones we need to encode */
+	ones = hweight64(imm);
+
+	/*
+	 * imms is set to (ones - 1), prefixed with a string of ones
+	 * and a zero if they fit. Cap it to 6 bits.
+	 */
+	imms  = ones - 1;
+	imms |= 0xf << ffs(esz);
+	imms &= BIT(6) - 1;
+
+	/* Compute the rotation */
+	if (range_of_ones(imm)) {
+		/*
+		 * Pattern: 0..01..10..0
+		 *
+		 * Compute how many rotate we need to align it right
+		 */
+		ror = ffs(imm) - 1;
+	} else {
+		/*
+		 * Pattern: 0..01..10..01..1
+		 *
+		 * Fill the unused top bits with ones, and check if
+		 * the result is a valid immediate (all ones with a
+		 * contiguous ranges of zeroes).
+		 */
+		imm |= ~mask;
+		if (!range_of_ones(~imm))
+			return AARCH64_BREAK_FAULT;
+
+		/*
+		 * Compute the rotation to get a continuous set of
+		 * ones, with the first bit set at position 0
+		 */
+		ror = fls(~imm);
+	}
+
+	/*
+	 * immr is the number of bits we need to rotate back to the
+	 * original set of ones. Note that this is relative to the
+	 * element size...
+	 */
+	immr = (esz - ror) & (esz - 1);
+
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, n);
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_R, insn, immr);
+	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, imms);
+}
+
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LOGIC_AND:
+		insn = aarch64_insn_get_and_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_ORR:
+		insn = aarch64_insn_get_orr_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_EOR:
+		insn = aarch64_insn_get_eor_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_AND_SETFLAGS:
+		insn = aarch64_insn_get_ands_imm_value();
+		break;
+	default:
+		pr_err("%s: unknown logical encoding %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
+	return aarch64_encode_immediate(imm, variant, insn);
+}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
@ 2017-12-11 14:49   ` Marc Zyngier
  0 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel

We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |   9 +++
 arch/arm64/kernel/insn.c      | 137 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 21fffdd290a3..815b35bc53ed 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -315,6 +315,10 @@ __AARCH64_INSN_FUNCS(eor,	0x7F200000, 0x4A000000)
 __AARCH64_INSN_FUNCS(eon,	0x7F200000, 0x4A200000)
 __AARCH64_INSN_FUNCS(ands,	0x7F200000, 0x6A000000)
 __AARCH64_INSN_FUNCS(bics,	0x7F200000, 0x6A200000)
+__AARCH64_INSN_FUNCS(and_imm,	0x7F800000, 0x12000000)
+__AARCH64_INSN_FUNCS(orr_imm,	0x7F800000, 0x32000000)
+__AARCH64_INSN_FUNCS(eor_imm,	0x7F800000, 0x52000000)
+__AARCH64_INSN_FUNCS(ands_imm,	0x7F800000, 0x72000000)
 __AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
 __AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
 __AARCH64_INSN_FUNCS(cbz,	0x7F000000, 0x34000000)
@@ -424,6 +428,11 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 					 int shift,
 					 enum aarch64_insn_variant variant,
 					 enum aarch64_insn_logic_type type);
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm);
 u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7e432662d454..326b17016485 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1485,3 +1485,140 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
 	__check_hi, __check_ls, __check_ge, __check_lt,
 	__check_gt, __check_le, __check_al, __check_al
 };
+
+static bool range_of_ones(u64 val)
+{
+	/* Doesn't handle full ones or full zeroes */
+	int x = __ffs64(val) - 1;
+	u64 sval = val >> x;
+
+	/* One of Sean Eron Anderson's bithack tricks */
+	return ((sval + 1) & (sval)) == 0;
+}
+
+static u32 aarch64_encode_immediate(u64 imm,
+				    enum aarch64_insn_variant variant,
+				    u32 insn)
+{
+	unsigned int immr, imms, n, ones, ror, esz, tmp;
+	u64 mask;
+
+	/* Can't encode full zeroes or full ones */
+	if (!imm || !~imm)
+		return AARCH64_BREAK_FAULT;
+
+	switch (variant) {
+	case AARCH64_INSN_VARIANT_32BIT:
+		if (upper_32_bits(imm))
+			return AARCH64_BREAK_FAULT;
+		esz = 32;
+		break;
+	case AARCH64_INSN_VARIANT_64BIT:
+		insn |= AARCH64_INSN_SF_BIT;
+		esz = 64;
+		break;
+	default:
+		pr_err("%s: unknown variant encoding %d\n", __func__, variant);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	/*
+	 * Inverse of Replicate(). Try to spot a repeating pattern
+	 * with a pow2 stride.
+	 */
+	for (tmp = esz; tmp > 2; tmp /= 2) {
+		u64 emask = BIT(tmp / 2) - 1;
+
+		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
+			break;
+
+		esz = tmp;
+	}
+
+	/* N is only set if we're encoding a 64bit value */
+	n = esz == 64;
+
+	/* Trim imm to the element size */
+	mask = BIT(esz - 1) - 1;
+	imm &= mask;
+
+	/* That's how many ones we need to encode */
+	ones = hweight64(imm);
+
+	/*
+	 * imms is set to (ones - 1), prefixed with a string of ones
+	 * and a zero if they fit. Cap it to 6 bits.
+	 */
+	imms  = ones - 1;
+	imms |= 0xf << ffs(esz);
+	imms &= BIT(6) - 1;
+
+	/* Compute the rotation */
+	if (range_of_ones(imm)) {
+		/*
+		 * Pattern: 0..01..10..0
+		 *
+		 * Compute how much rotation we need to align it right
+		 */
+		ror = ffs(imm) - 1;
+	} else {
+		/*
+		 * Pattern: 0..01..10..01..1
+		 *
+		 * Fill the unused top bits with ones, and check if
+		 * the result is a valid immediate (all ones with a
+		 * contiguous range of zeroes).
+		 */
+		imm |= ~mask;
+		if (!range_of_ones(~imm))
+			return AARCH64_BREAK_FAULT;
+
+		/*
+		 * Compute the rotation to get a continuous set of
+		 * ones, with the first bit set at position 0
+		 */
+		ror = fls(~imm);
+	}
+
+	/*
+	 * immr is the number of bits we need to rotate back to the
+	 * original set of ones. Note that this is relative to the
+	 * element size...
+	 */
+	immr = (esz - ror) & (esz - 1);
+
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, n);
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_R, insn, immr);
+	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, imms);
+}
+
+u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
+				       enum aarch64_insn_variant variant,
+				       enum aarch64_insn_register Rn,
+				       enum aarch64_insn_register Rd,
+				       u64 imm)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LOGIC_AND:
+		insn = aarch64_insn_get_and_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_ORR:
+		insn = aarch64_insn_get_orr_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_EOR:
+		insn = aarch64_insn_get_eor_imm_value();
+		break;
+	case AARCH64_INSN_LOGIC_AND_SETFLAGS:
+		insn = aarch64_insn_get_ands_imm_value();
+		break;
+	default:
+		pr_err("%s: unknown logical encoding %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
+	return aarch64_encode_immediate(imm, variant, insn);
+}
-- 
2.14.2

* [PATCH v2 08/19] arm64: KVM: Dynamically patch the kernel/hyp VA mask
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

So far, we're using a complicated sequence of alternatives to
patch the kernel/hyp VA mask on non-VHE, and NOP out the
masking altogether when on VHE.

The newly introduced dynamic patching gives us the opportunity
to simplify that code by patching a single instruction with
the correct mask (instead of the mind-bending cumulative masking
we have at the moment), or even a single NOP on VHE.
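
For reference, here is a minimal sketch of what another user of the
callback interface could look like (entirely hypothetical, only to show
the shape of the hook; some_runtime_condition() stands for whatever
decision that user needs to make): the callback receives the original
instruction and returns its replacement, and is referenced from
assembly with the alternative_cb macro, as kern_hyp_va does below.

#include <linux/init.h>
#include <asm/alternative.h>
#include <asm/insn.h>

/* Hypothetical dynamic patching callback */
u32 __init my_patch_cb(struct alt_instr *alt, int index, u32 oinsn)
{
	/* Only a single instruction is expected in this sequence */
	if (index != 0)
		return AARCH64_BREAK_FAULT;

	/* Keep the original instruction if there is nothing to do */
	if (!some_runtime_condition())
		return oinsn;

	/* Otherwise replace the placeholder with a NOP */
	return aarch64_insn_gen_nop();
}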

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h | 42 ++++++-----------------
 arch/arm64/kvm/Makefile          |  2 +-
 arch/arm64/kvm/haslr.c           | 74 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/kvm/haslr.c

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 672c8684d5c2..d03eb75f1704 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -69,9 +69,6 @@
  * mappings, and none of this applies in that case.
  */
 
-#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
-#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
-
 #ifdef __ASSEMBLY__
 
 #include <asm/alternative.h>
@@ -81,27 +78,13 @@
  * Convert a kernel VA into a HYP VA.
  * reg: VA to be converted.
  *
- * This generates the following sequences:
- * - High mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
- *		nop
- * - Low mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
- *		and x0, x0, #HYP_PAGE_OFFSET_LOW_MASK
- * - VHE:
- *		nop
- *		nop
- *
- * The "low mask" version works because the mask is a strict subset of
- * the "high mask", hence performing the first mask for nothing.
- * Should be completely invisible on any viable CPU.
+ * The actual code generation takes place in kvm_update_va_mask, and
+ * the instructions below are only there to reserve the space and
+ * perform the register allocation.
  */
 .macro kern_hyp_va	reg
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	and     \reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
-alternative_else_nop_endif
-alternative_if ARM64_HYP_OFFSET_LOW
-	and     \reg, \reg, #HYP_PAGE_OFFSET_LOW_MASK
+alternative_cb kvm_update_va_mask
+	and     \reg, \reg, #1
 alternative_else_nop_endif
 .endm
 
@@ -113,18 +96,13 @@ alternative_else_nop_endif
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 
+u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn);
+
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
-	asm volatile(ALTERNATIVE("and %0, %0, %1",
-				 "nop",
-				 ARM64_HAS_VIRT_HOST_EXTN)
-		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_HIGH_MASK));
-	asm volatile(ALTERNATIVE("nop",
-				 "and %0, %0, %1",
-				 ARM64_HYP_OFFSET_LOW)
-		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_LOW_MASK));
+	asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n",
+				    kvm_update_va_mask)
+		     : "+r" (v));
 	return v;
 }
 
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 87c4f7ae24de..baba030ee29e 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/e
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
 
-kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o
+kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o haslr.o
 kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
 kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
 kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o
diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
new file mode 100644
index 000000000000..5e1643a4e7bf
--- /dev/null
+++ b/arch/arm64/kvm/haslr.c
@@ -0,0 +1,74 @@
+/*
+ * Copyright (C) 2017 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/alternative.h>
+#include <asm/debug-monitors.h>
+#include <asm/insn.h>
+#include <asm/kvm_mmu.h>
+
+#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
+#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
+
+static unsigned long get_hyp_va_mask(void)
+{
+	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
+	unsigned long mask = HYP_PAGE_OFFSET_HIGH_MASK;
+
+	/*
+	 * Activate the lower HYP offset only if the idmap doesn't
+	 * clash with it.
+	 */
+	if (idmap_addr > HYP_PAGE_OFFSET_LOW_MASK)
+		mask = HYP_PAGE_OFFSET_LOW_MASK;
+
+	return mask;
+}
+
+u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
+{
+	u32 rd, rn, insn;
+	u64 imm;
+
+	/* We only expect a 1 instruction sequence */
+	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
+
+	/* VHE doesn't need any address translation, let's NOP everything */
+	if (has_vhe())
+		return aarch64_insn_gen_nop();
+
+	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
+	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
+
+	switch (index) {
+	default:
+		/* Something went wrong... */
+		insn = AARCH64_BREAK_FAULT;
+		break;
+
+	case 0:
+		imm = get_hyp_va_mask();
+		insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
+							  AARCH64_INSN_VARIANT_64BIT,
+							  rn, rd, imm);
+		break;
+	}
+
+	BUG_ON(insn == AARCH64_BREAK_FAULT);
+
+	return insn;
+}
-- 
2.14.2

* [PATCH v2 09/19] arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

Now that we can dynamically compute the kernel/hyp VA mask, there
is no need for a feature flag to trigger the alternative patching.
Let's drop the flag and everything that depends on it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  2 +-
 arch/arm64/kernel/cpufeature.c   | 19 -------------------
 2 files changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2ff7c5e8efab..f130f35dca3c 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -32,7 +32,7 @@
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
-#define ARM64_HYP_OFFSET_LOW			14
+/* #define ARM64_UNALLOCATED_ENTRY			14 */
 #define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
 #define ARM64_HAS_NO_FPSIMD			16
 #define ARM64_WORKAROUND_REPEAT_TLBI		17
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c5ba0097887f..9eabceaaf5fb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -824,19 +824,6 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }
 
-static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
-			   int __unused)
-{
-	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
-
-	/*
-	 * Activate the lower HYP offset only if:
-	 * - the idmap doesn't clash with it,
-	 * - the kernel is not running at EL2.
-	 */
-	return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
-}
-
 static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unused)
 {
 	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
@@ -925,12 +912,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-	{
-		.desc = "Reduced HYP mapping offset",
-		.capability = ARM64_HYP_OFFSET_LOW,
-		.def_scope = SCOPE_SYSTEM,
-		.matches = hyp_offset_low,
-	},
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
-- 
2.14.2

* [PATCH v2 10/19] KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way for the compiler to guarantee that such an access is always
PC-relative.

So let's bite the bullet and provide our own accessor.
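
As a minimal sketch (mirroring the vgic change below, with a made-up
helper name): from HYP code, the accessor yields a PC-relative address
for the symbol, so no kernel-to-HYP VA conversion is involved.

#include <kvm/arm_vgic.h>
#include <asm/kvm_hyp.h>

/* Hypothetical helper: read nr_lr at EL2 without kern_hyp_va() */
static int __hyp_text __maybe_unused example_nr_lr(void)
{
	return hyp_symbol_addr(kvm_vgic_global_state)->nr_lr;
}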

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_hyp.h   | 6 ++++++
 arch/arm64/include/asm/kvm_hyp.h | 9 +++++++++
 virt/kvm/arm/hyp/vgic-v2-sr.c    | 4 ++--
 3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index ab20ffa8b9e7..1d42d0aa2feb 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -26,6 +26,12 @@
 
 #define __hyp_text __section(.hyp.text) notrace
 
+#define hyp_symbol_addr(s)						\
+	({								\
+		typeof(s) *addr = &(s);					\
+		addr;							\
+	})
+
 #define __ACCESS_VFP(CRn)			\
 	"mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 08d3bb66c8b7..a2d98c539023 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -25,6 +25,15 @@
 
 #define __hyp_text __section(.hyp.text) notrace
 
+#define hyp_symbol_addr(s)						\
+	({								\
+		typeof(s) *addr;					\
+		asm volatile("adrp	%0, %1\n"			\
+			     "add	%0, %0, :lo12:%1\n"		\
+			     : "=r" (addr) : "S" (&s));			\
+		addr;							\
+	})
+
 #define read_sysreg_elx(r,nvh,vh)					\
 	({								\
 		u64 reg;						\
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index a3f18d362366..19f63fbf3682 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -25,7 +25,7 @@
 static void __hyp_text save_elrsr(struct kvm_vcpu *vcpu, void __iomem *base)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
+	int nr_lr = hyp_symbol_addr(kvm_vgic_global_state)->nr_lr;
 	u32 elrsr0, elrsr1;
 
 	elrsr0 = readl_relaxed(base + GICH_ELRSR0);
@@ -143,7 +143,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 
 	rd = kvm_vcpu_dabt_get_rd(vcpu);
-	addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
+	addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-- 
2.14.2

* [PATCH v2 11/19] KVM: arm/arm64: Demote HYP VA range display to being a debug feature
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

Displaying the HYP VA information is slightly counterproductive when
using VA randomization. Turn it into a debug feature only, and adjust
the last displayed value to reflect the top of RAM instead of ~0.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b4b69c2d1012..84d09f1a44d4 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1760,9 +1760,10 @@ int kvm_mmu_init(void)
 	 */
 	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
 
-	kvm_info("IDMAP page: %lx\n", hyp_idmap_start);
-	kvm_info("HYP VA range: %lx:%lx\n",
-		 kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
+	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
+	kvm_debug("HYP VA range: %lx:%lx\n",
+		  kern_hyp_va(PAGE_OFFSET),
+		  kern_hyp_va((unsigned long)high_memory - 1));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
 	    hyp_idmap_start <  kern_hyp_va(~0UL) &&
-- 
2.14.2

* [PATCH v2 12/19] KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

Both HYP io mappings call ioremap, followed by create_hyp_io_mappings.
Let's move the ioremap call into create_hyp_io_mappings itself, which
simplifies the code a bit and allows for further refactoring.
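
A caller then looks like this (hypothetical wrapper, the real
conversions are in the vgic-v2 hunk below): the helper performs the
ioremap itself and hands the kernel VA back through the pointer.

#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>

/* Hypothetical caller of the new prototype */
static int __maybe_unused map_device_example(phys_addr_t phys, size_t size,
					     void __iomem **kaddr)
{
	int ret;

	ret = create_hyp_io_mappings(phys, size, kaddr);
	if (ret)
		kvm_err("Cannot map device into hyp\n");

	return ret;
}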

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  3 ++-
 arch/arm64/include/asm/kvm_mmu.h |  3 ++-
 virt/kvm/arm/mmu.c               | 24 ++++++++++++++----------
 virt/kvm/arm/vgic/vgic-v2.c      | 31 ++++++++-----------------------
 4 files changed, 26 insertions(+), 35 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fa6f2174276b..cb3bef71ec9b 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -41,7 +41,8 @@
 #include <asm/stage2_pgtable.h>
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index d03eb75f1704..5553c1abf1d5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -118,7 +118,8 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 #include <asm/stage2_pgtable.h>
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 84d09f1a44d4..38adbe0a016c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -709,26 +709,30 @@ int create_hyp_mappings(void *from, void *to, pgprot_t prot)
 }
 
 /**
- * create_hyp_io_mappings - duplicate a kernel IO mapping into Hyp mode
- * @from:	The kernel start VA of the range
- * @to:		The kernel end VA of the range (exclusive)
+ * create_hyp_io_mappings - Map IO into both kernel and HYP
  * @phys_addr:	The physical start address which gets mapped
+ * @size:	Size of the region being mapped
+ * @kaddr:	Kernel VA for this mapping
  *
  * The resulting HYP VA is the same as the kernel VA, modulo
  * HYP_PAGE_OFFSET.
  */
-int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
+int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
+			   void __iomem **kaddr)
 {
-	unsigned long start = kern_hyp_va((unsigned long)from);
-	unsigned long end = kern_hyp_va((unsigned long)to);
+	unsigned long start, end;
 
-	if (is_kernel_in_hyp_mode())
+	*kaddr = ioremap(phys_addr, size);
+	if (!*kaddr)
+		return -ENOMEM;
+
+	if (is_kernel_in_hyp_mode()) {
 		return 0;
+	}
 
-	/* Check for a valid kernel IO mapping */
-	if (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1))
-		return -EINVAL;
 
+	start = kern_hyp_va((unsigned long)*kaddr);
+	end = kern_hyp_va((unsigned long)*kaddr + size);
 	return __create_hyp_mappings(hyp_pgd, start, end,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 }
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 80897102da26..bc49d702f9f0 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -332,16 +332,10 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 	if (!PAGE_ALIGNED(info->vcpu.start) ||
 	    !PAGE_ALIGNED(resource_size(&info->vcpu))) {
 		kvm_info("GICV region size/alignment is unsafe, using trapping (reduced performance)\n");
-		kvm_vgic_global_state.vcpu_base_va = ioremap(info->vcpu.start,
-							     resource_size(&info->vcpu));
-		if (!kvm_vgic_global_state.vcpu_base_va) {
-			kvm_err("Cannot ioremap GICV\n");
-			return -ENOMEM;
-		}
 
-		ret = create_hyp_io_mappings(kvm_vgic_global_state.vcpu_base_va,
-					     kvm_vgic_global_state.vcpu_base_va + resource_size(&info->vcpu),
-					     info->vcpu.start);
+		ret = create_hyp_io_mappings(info->vcpu.start,
+					     resource_size(&info->vcpu),
+					     &kvm_vgic_global_state.vcpu_base_va);
 		if (ret) {
 			kvm_err("Cannot map GICV into hyp\n");
 			goto out;
@@ -350,26 +344,17 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 		static_branch_enable(&vgic_v2_cpuif_trap);
 	}
 
-	kvm_vgic_global_state.vctrl_base = ioremap(info->vctrl.start,
-						   resource_size(&info->vctrl));
-	if (!kvm_vgic_global_state.vctrl_base) {
-		kvm_err("Cannot ioremap GICH\n");
-		ret = -ENOMEM;
+	ret = create_hyp_io_mappings(info->vctrl.start,
+				     resource_size(&info->vctrl),
+				     &kvm_vgic_global_state.vctrl_base);
+	if (ret) {
+		kvm_err("Cannot map VCTRL into hyp\n");
 		goto out;
 	}
 
 	vtr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VTR);
 	kvm_vgic_global_state.nr_lr = (vtr & 0x3f) + 1;
 
-	ret = create_hyp_io_mappings(kvm_vgic_global_state.vctrl_base,
-				     kvm_vgic_global_state.vctrl_base +
-					 resource_size(&info->vctrl),
-				     info->vctrl.start);
-	if (ret) {
-		kvm_err("Cannot map VCTRL into hyp\n");
-		goto out;
-	}
-
 	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
 	if (ret) {
 		kvm_err("Cannot register GICv2 KVM device\n");
-- 
2.14.2

* [PATCH v2 13/19] KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

As we're about to change the way we map devices at HYP, we need
to move away from kern_hyp_va on an IO address.

One way of achieving this is to store the VAs in kvm_vgic_global_state,
and use that directly from the HYP code. This requires a small change
to create_hyp_io_mappings so that it can also return a HYP VA.

We take this opportunity to nuke the vctrl_base field in the emulated
distributor, as it is not used anymore.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  3 ++-
 arch/arm64/include/asm/kvm_mmu.h |  3 ++-
 include/kvm/arm_vgic.h           | 12 ++++++------
 virt/kvm/arm/hyp/vgic-v2-sr.c    | 10 +++-------
 virt/kvm/arm/mmu.c               | 20 ++++++++++++++++----
 virt/kvm/arm/vgic/vgic-init.c    |  6 ------
 virt/kvm/arm/vgic/vgic-v2.c      | 13 +++++++------
 7 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index cb3bef71ec9b..feff24b34506 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -42,7 +42,8 @@
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr);
+			   void __iomem **kaddr,
+			   void __iomem **haddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5553c1abf1d5..d30e83df5ccb 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -119,7 +119,8 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 
 int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr);
+			   void __iomem **kaddr,
+			   void __iomem **haddr);
 void free_hyp_pgds(void);
 
 void stage2_unmap_vm(struct kvm *kvm);
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 8c896540a72c..8b3fbc03293b 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -57,11 +57,15 @@ struct vgic_global {
 	/* Physical address of vgic virtual cpu interface */
 	phys_addr_t		vcpu_base;
 
-	/* GICV mapping */
+	/* GICV mapping, kernel VA */
 	void __iomem		*vcpu_base_va;
+	/* GICV mapping, HYP VA */
+	void __iomem		*vcpu_hyp_va;
 
-	/* virtual control interface mapping */
+	/* virtual control interface mapping, kernel VA */
 	void __iomem		*vctrl_base;
+	/* virtual control interface mapping, HYP VA */
+	void __iomem		*vctrl_hyp;
 
 	/* Number of implemented list registers */
 	int			nr_lr;
@@ -198,10 +202,6 @@ struct vgic_dist {
 
 	int			nr_spis;
 
-	/* TODO: Consider moving to global state */
-	/* Virtual control interface mapping */
-	void __iomem		*vctrl_base;
-
 	/* base addresses in guest physical address space: */
 	gpa_t			vgic_dist_base;		/* distributor */
 	union {
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 19f63fbf3682..a3b224e09f74 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -60,10 +60,8 @@ static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
 /* vcpu is already in the HYP VA space */
 void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+	void __iomem *base = hyp_symbol_addr(kvm_vgic_global_state)->vctrl_hyp;
 	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 
 	if (!base)
@@ -85,10 +83,8 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
 /* vcpu is already in the HYP VA space */
 void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+	void __iomem *base = hyp_symbol_addr(kvm_vgic_global_state)->vctrl_hyp;
 	int i;
 	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 
@@ -143,7 +139,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 
 	rd = kvm_vcpu_dabt_get_rd(vcpu);
-	addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
+	addr  = hyp_symbol_addr(kvm_vgic_global_state)->vcpu_hyp_va;
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 38adbe0a016c..6192d45d1e1a 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -713,28 +713,40 @@ int create_hyp_mappings(void *from, void *to, pgprot_t prot)
  * @phys_addr:	The physical start address which gets mapped
  * @size:	Size of the region being mapped
  * @kaddr:	Kernel VA for this mapping
+ * @haddr:	HYP VA for this mapping
  *
- * The resulting HYP VA is the same as the kernel VA, modulo
- * HYP_PAGE_OFFSET.
+ * The resulting HYP VA is completely unrelated to the kernel VA.
  */
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
-			   void __iomem **kaddr)
+			   void __iomem **kaddr,
+			   void __iomem **haddr)
 {
 	unsigned long start, end;
+	int ret;
 
 	*kaddr = ioremap(phys_addr, size);
 	if (!*kaddr)
 		return -ENOMEM;
 
 	if (is_kernel_in_hyp_mode()) {
+		*haddr = *kaddr;
 		return 0;
 	}
 
 
 	start = kern_hyp_va((unsigned long)*kaddr);
 	end = kern_hyp_va((unsigned long)*kaddr + size);
-	return __create_hyp_mappings(hyp_pgd, start, end,
+	ret = __create_hyp_mappings(hyp_pgd, start, end,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
+
+	if (ret) {
+		iounmap(*kaddr);
+		*kaddr = NULL;
+	} else {
+		*haddr = (void __iomem *)start;
+	}
+
+	return ret;
 }
 
 /**
diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
index 62310122ee78..3f01b5975055 100644
--- a/virt/kvm/arm/vgic/vgic-init.c
+++ b/virt/kvm/arm/vgic/vgic-init.c
@@ -166,12 +166,6 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 	kvm->arch.vgic.in_kernel = true;
 	kvm->arch.vgic.vgic_model = type;
 
-	/*
-	 * kvm_vgic_global_state.vctrl_base is set on vgic probe (kvm_arch_init)
-	 * it is stored in distributor struct for asm save/restore purpose
-	 */
-	kvm->arch.vgic.vctrl_base = kvm_vgic_global_state.vctrl_base;
-
 	kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index bc49d702f9f0..f0f566e4494e 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -335,7 +335,8 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 
 		ret = create_hyp_io_mappings(info->vcpu.start,
 					     resource_size(&info->vcpu),
-					     &kvm_vgic_global_state.vcpu_base_va);
+					     &kvm_vgic_global_state.vcpu_base_va,
+					     &kvm_vgic_global_state.vcpu_hyp_va);
 		if (ret) {
 			kvm_err("Cannot map GICV into hyp\n");
 			goto out;
@@ -346,7 +347,8 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 
 	ret = create_hyp_io_mappings(info->vctrl.start,
 				     resource_size(&info->vctrl),
-				     &kvm_vgic_global_state.vctrl_base);
+				     &kvm_vgic_global_state.vctrl_base,
+				     &kvm_vgic_global_state.vctrl_hyp);
 	if (ret) {
 		kvm_err("Cannot map VCTRL into hyp\n");
 		goto out;
@@ -381,15 +383,14 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 void vgic_v2_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 
-	writel_relaxed(cpu_if->vgic_vmcr, vgic->vctrl_base + GICH_VMCR);
+	writel_relaxed(cpu_if->vgic_vmcr,
+		       kvm_vgic_global_state.vctrl_base + GICH_VMCR);
 }
 
 void vgic_v2_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 
-	cpu_if->vgic_vmcr = readl_relaxed(vgic->vctrl_base + GICH_VMCR);
+	cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR);
 }
-- 
2.14.2

* [PATCH v2 14/19] KVM: arm/arm64: Move HYP IO VAs to the "idmap" range
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

So far, we have mapped our HYP IO (which is essentially the GICv2
control registers) using the same method as for memory. It recently
appeared that this is a bit unsafe:

we compute the HYP VA using the kern_hyp_va helper, but that helper
is only designed to deal with kernel VAs coming from the linear map,
and not from the vmalloc region... This could in turn cause some bad
aliasing between the two, amplified by the new VA randomisation.

A solution is to come up with our very own basic VA allocator for
MMIO. Since half of the HYP address space only contains a single
page (the idmap), we have plenty to borrow from. Let's use the idmap
as a base, and allocate downwards from it. GICv2 now lives on the
other side of the great VA barrier.
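To make the allocation scheme concrete, here is a rough sketch of the
downward allocator (illustrative only: the function name is made up,
and the alignment trick assumes a power-of-two size, as in the patch
below):

	#include <linux/types.h>

	static unsigned long hyp_io_va_alloc(unsigned long *io_base, size_t size)
	{
		unsigned long base = *io_base - size;	/* grow down from the previous base */

		base &= ~(size - 1);		/* keep the block naturally aligned */
		*io_base = base;		/* next allocation goes below this one */

		return base;
	}

Each call hands back a block just below the previous one, so successive
MMIO mappings stack up underneath the idmap page.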

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/mmu.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 6192d45d1e1a..0597c9846f1a 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -43,6 +43,9 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
+static DEFINE_MUTEX(io_map_lock);
+static unsigned long io_map_base;
+
 #define S2_PGD_SIZE	(PTRS_PER_S2_PGD * sizeof(pgd_t))
 #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
@@ -502,27 +505,31 @@ static void unmap_hyp_range(pgd_t *pgdp, phys_addr_t start, u64 size)
  *
  * Assumes hyp_pgd is a page table used strictly in Hyp-mode and
  * therefore contains either mappings in the kernel memory area (above
- * PAGE_OFFSET), or device mappings in the vmalloc range (from
- * VMALLOC_START to VMALLOC_END).
+ * PAGE_OFFSET), or device mappings in the idmap range.
  *
- * boot_hyp_pgd should only map two pages for the init code.
+ * boot_hyp_pgd should only map the idmap range, and is only used in
+ * the extended idmap case.
  */
 void free_hyp_pgds(void)
 {
+	pgd_t *id_pgd;
+
 	mutex_lock(&kvm_hyp_pgd_mutex);
 
+	id_pgd = boot_hyp_pgd ? boot_hyp_pgd : hyp_pgd;
+
+	if (id_pgd)
+		unmap_hyp_range(id_pgd, io_map_base,
+				hyp_idmap_start + PAGE_SIZE - io_map_base);
+
 	if (boot_hyp_pgd) {
-		unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
 		boot_hyp_pgd = NULL;
 	}
 
 	if (hyp_pgd) {
-		unmap_hyp_range(hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		unmap_hyp_range(hyp_pgd, kern_hyp_va(PAGE_OFFSET),
 				(uintptr_t)high_memory - PAGE_OFFSET);
-		unmap_hyp_range(hyp_pgd, kern_hyp_va(VMALLOC_START),
-				VMALLOC_END - VMALLOC_START);
 
 		free_pages((unsigned long)hyp_pgd, hyp_pgd_order);
 		hyp_pgd = NULL;
@@ -721,7 +728,8 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
 			   void __iomem **haddr)
 {
-	unsigned long start, end;
+	pgd_t *pgd = hyp_pgd;
+	unsigned long base;
 	int ret;
 
 	*kaddr = ioremap(phys_addr, size);
@@ -733,19 +741,26 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 		return 0;
 	}
 
+	mutex_lock(&io_map_lock);
+
+	base = io_map_base - size;
+	base &= ~(size - 1);
+
+	if (__kvm_cpu_uses_extended_idmap())
+		pgd = boot_hyp_pgd;
 
-	start = kern_hyp_va((unsigned long)*kaddr);
-	end = kern_hyp_va((unsigned long)*kaddr + size);
-	ret = __create_hyp_mappings(hyp_pgd, start, end,
+	ret = __create_hyp_mappings(pgd, base, base + size,
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 
 	if (ret) {
 		iounmap(*kaddr);
 		*kaddr = NULL;
 	} else {
-		*haddr = (void __iomem *)start;
+		*haddr = (void __iomem *)base;
+		io_map_base = base;
 	}
 
+	mutex_unlock(&io_map_lock);
 	return ret;
 }
 
@@ -1826,6 +1841,7 @@ int kvm_mmu_init(void)
 			goto out;
 	}
 
+	io_map_base = hyp_idmap_start;
 	return 0;
 out:
 	free_hyp_pgds();
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 15/19] arm64: insn: Add encoder for the EXTR instruction
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: Christoffer Dall, Mark Rutland, Catalin Marinas, Will Deacon,
	James Morse, Steve Capper

Add an encoder for the EXTR instruction, which also implements the ROR
variant (where Rn == Rm).
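As a call-site sketch (not taken from this series), a 64-bit
"ror x0, x1, #13" can be produced by passing the same register for
Rm and Rn:

	u32 ror_insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
					     AARCH64_INSN_REG_1,	/* Rm */
					     AARCH64_INSN_REG_1,	/* Rn */
					     AARCH64_INSN_REG_0,	/* Rd */
					     13);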

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |  6 ++++++
 arch/arm64/kernel/insn.c      | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 815b35bc53ed..f62c56b1793f 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -319,6 +319,7 @@ __AARCH64_INSN_FUNCS(and_imm,	0x7F800000, 0x12000000)
 __AARCH64_INSN_FUNCS(orr_imm,	0x7F800000, 0x32000000)
 __AARCH64_INSN_FUNCS(eor_imm,	0x7F800000, 0x52000000)
 __AARCH64_INSN_FUNCS(ands_imm,	0x7F800000, 0x72000000)
+__AARCH64_INSN_FUNCS(extr,	0x7FA00000, 0x13800000)
 __AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
 __AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
 __AARCH64_INSN_FUNCS(cbz,	0x7F000000, 0x34000000)
@@ -433,6 +434,11 @@ u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
 				       enum aarch64_insn_register Rn,
 				       enum aarch64_insn_register Rd,
 				       u64 imm);
+u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
+			  enum aarch64_insn_register Rm,
+			  enum aarch64_insn_register Rn,
+			  enum aarch64_insn_register Rd,
+			  u8 lsb);
 u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 326b17016485..af29fc3e09a9 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1622,3 +1622,35 @@ u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
 	return aarch64_encode_immediate(imm, variant, insn);
 }
+
+u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
+			  enum aarch64_insn_register Rm,
+			  enum aarch64_insn_register Rn,
+			  enum aarch64_insn_register Rd,
+			  u8 lsb)
+{
+	u32 insn;
+
+	insn = aarch64_insn_get_extr_value();
+
+	switch (variant) {
+	case AARCH64_INSN_VARIANT_32BIT:
+		if (lsb > 31)
+			return AARCH64_BREAK_FAULT;
+		break;
+	case AARCH64_INSN_VARIANT_64BIT:
+		if (lsb > 63)
+			return AARCH64_BREAK_FAULT;
+		insn |= AARCH64_INSN_SF_BIT;
+		insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_N, insn, 1);
+		break;
+	default:
+		pr_err("%s: unknown variant encoding %d\n", __func__, variant);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, lsb);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
+}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 16/19] arm64: insn: Allow ADD/SUB (immediate) with LSL #12
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

The encoder for ADD/SUB (immediate) can only cope with 12bit
immediates, while there is an encoding for a 12bit immediate shifted
by 12 bits to the left.

Let's fix this small oversight by allowing the LSL_12 bit to be set.
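In other words, after this patch an immediate is accepted when it fits
in 24 bits and only uses one of the two 12-bit halves. A standalone
sketch of that rule (not the kernel function itself):

	#include <linux/bits.h>
	#include <linux/sizes.h>
	#include <linux/types.h>

	static bool add_sub_imm_encodable(u32 imm)
	{
		if (imm & ~(BIT(24) - 1))	/* more than 24 bits: reject */
			return false;
		if (!(imm & ~(SZ_4K - 1)))	/* plain 12-bit immediate */
			return true;
		return !(imm & (SZ_4K - 1));	/* top half only: encode with LSL #12 */
	}

So 0x456000 is now encodable (imm12 = 0x456, LSL #12), while 0x456001
is still rejected.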

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/insn.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index af29fc3e09a9..b8fb2d89b3a6 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -35,6 +35,7 @@
 
 #define AARCH64_INSN_SF_BIT	BIT(31)
 #define AARCH64_INSN_N_BIT	BIT(22)
+#define AARCH64_INSN_LSL_12	BIT(22)
 
 static int aarch64_insn_encoding_class[] = {
 	AARCH64_INSN_CLS_UNKNOWN,
@@ -903,9 +904,18 @@ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 		return AARCH64_BREAK_FAULT;
 	}
 
+	/* We can't encode more than a 24bit value (12bit + 12bit shift) */
+	if (imm & ~(BIT(24) - 1))
+		goto out;
+
+	/* If we have something in the top 12 bits... */
 	if (imm & ~(SZ_4K - 1)) {
-		pr_err("%s: invalid immediate encoding %d\n", __func__, imm);
-		return AARCH64_BREAK_FAULT;
+		/* ... and in the low 12 bits -> error */
+		if (imm & (SZ_4K - 1))
+			goto out;
+
+		imm >>= 12;
+		insn |= AARCH64_INSN_LSL_12;
 	}
 
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst);
@@ -913,6 +923,10 @@ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src);
 
 	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_12, insn, imm);
+
+out:
+	pr_err("%s: invalid immediate encoding %d\n", __func__, imm);
+	return AARCH64_BREAK_FAULT;
 }
 
 u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst,
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 17/19] arm64: KVM: Dynamically compute the HYP VA mask
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

As we're moving towards a much more dynamic way to compute our
HYP VA, let's express the mask in a slightly different way.

Instead of comparing the idmap position to the "low" VA mask,
we directly compute the mask by taking the idmap's (VA_BITS - 1)
bit into account.

No functional change.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/haslr.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
index 5e1643a4e7bf..2314bebe4883 100644
--- a/arch/arm64/kvm/haslr.c
+++ b/arch/arm64/kvm/haslr.c
@@ -21,28 +21,11 @@
 #include <asm/insn.h>
 #include <asm/kvm_mmu.h>
 
-#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
-#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
-
-static unsigned long get_hyp_va_mask(void)
-{
-	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
-	unsigned long mask = HYP_PAGE_OFFSET_HIGH_MASK;
-
-	/*
-	 * Activate the lower HYP offset only if the idmap doesn't
-	 * clash with it,
-	 */
-	if (idmap_addr > HYP_PAGE_OFFSET_LOW_MASK)
-		mask = HYP_PAGE_OFFSET_HIGH_MASK;
-
-	return mask;
-}
+static u64 va_mask;
 
 u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 {
 	u32 rd, rn, insn;
-	u64 imm;
 
 	/* We only expect a 1 instruction sequence */
 	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
@@ -51,6 +34,18 @@ u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 	if (has_vhe())
 		return aarch64_insn_gen_nop();
 
+	if (!va_mask) {
+		phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
+		u64 region;
+
+		/* Where is my RAM region? */
+		region  = idmap_addr & BIT(VA_BITS - 1);
+		region ^= BIT(VA_BITS - 1);
+
+		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask |= region;
+	}
+
 	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
 	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
 
@@ -61,10 +56,9 @@ u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 		break;
 
 	case 0:
-		imm = get_hyp_va_mask();
 		insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
 							  AARCH64_INSN_VARIANT_64BIT,
-							  rn, rd, imm);
+							  rn, rd, va_mask);
 		break;
 	}
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 18/19] arm64: KVM: Introduce EL2 VA randomisation
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

The main idea behind randomising the EL2 VA is that we usually have
a few spare bits between the most significant bit of the VA mask
and the most significant bit of the linear mapping.

Those bits could be a bunch of zeroes, and could be useful
to move things around a bit. Of course, the more memory you have,
the less randomisation you get...

Alternatively, these bits could be the result of KASLR, in which
case they are already random. But it would be nice to have a
*different* randomisation, just to make the job of a potential
attacker a bit more difficult.

Inserting these random bits is a bit involved. We don't have a spare
register (short of rewriting all the kern_hyp_va call sites), and
the immediate we want to insert is too random to be used with the
ORR instruction. The best option I could come up with is the following
sequence:

	and x0, x0, #va_mask
	ror x0, x0, #first_random_bit
	add x0, x0, #(random & 0xfff)
	add x0, x0, #(random >> 12), lsl #12
	ror x0, x0, #(64 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should
be able to execute without breaking a sweat. It is of course NOPed
out on VHE. The last 4 instructions can also be turned into NOPs
if it appears that there are no free bits to use.
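For reference, a C model of what the patched sequence computes once
va_mask, tag_val and tag_lsb have been filled in by the code below
(a sketch only, and only for the case where some random bits are
available, i.e. tag_lsb != 0):

	#include <linux/bitops.h>
	#include <linux/types.h>

	static u64 model_kern_hyp_va(u64 va, u64 va_mask, u64 tag_val, u8 tag_lsb)
	{
		u64 v;

		v  = va & va_mask;		/* and x0, x0, #va_mask */
		v  = ror64(v, tag_lsb);		/* ror x0, x0, #first_random_bit */
		v += tag_val;			/* the two add immediates, combined */
		return ror64(v, 64 - tag_lsb);	/* undo the first rotation */
	}

The net effect is to keep the low tag_lsb bits of the kernel VA and to
insert the random tag (plus the region bit) just above them.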

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h | 10 +++++-
 arch/arm64/kvm/haslr.c           | 75 +++++++++++++++++++++++++++++++++++++---
 virt/kvm/arm/mmu.c               |  2 +-
 3 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index d30e83df5ccb..160444b78505 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -85,6 +85,10 @@
 .macro kern_hyp_va	reg
 alternative_cb kvm_update_va_mask
 	and     \reg, \reg, #1
+	ror	\reg, \reg, #1
+	add	\reg, \reg, #0
+	add	\reg, \reg, #0
+	ror	\reg, \reg, #63
 alternative_else_nop_endif
 .endm
 
@@ -100,7 +104,11 @@ u32 kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn);
 
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
-	asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n",
+	asm volatile(ALTERNATIVE_CB("and %0, %0, #1\n"
+				    "ror %0, %0, #1\n"
+				    "add %0, %0, #0\n"
+				    "add %0, %0, #0\n"
+				    "ror %0, %0, #63\n",
 				    kvm_update_va_mask)
 		     : "+r" (v));
 	return v;
diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
index 2314bebe4883..111d8499166f 100644
--- a/arch/arm64/kvm/haslr.c
+++ b/arch/arm64/kvm/haslr.c
@@ -16,19 +16,23 @@
  */
 
 #include <linux/kvm_host.h>
+#include <linux/random.h>
+#include <linux/memblock.h>
 #include <asm/alternative.h>
 #include <asm/debug-monitors.h>
 #include <asm/insn.h>
 #include <asm/kvm_mmu.h>
 
+static u8 tag_lsb;
+static u64 tag_val;
 static u64 va_mask;
 
 u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 {
 	u32 rd, rn, insn;
 
-	/* We only expect a 1 instruction sequence */
-	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
+	/* We only expect a 5 instruction sequence */
+	BUG_ON((alt->alt_len / sizeof(insn)) != 5);
 
 	/* VHE doesn't need any address translation, let's NOP everything */
 	if (has_vhe())
@@ -42,8 +46,32 @@ u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 		region  = idmap_addr & BIT(VA_BITS - 1);
 		region ^= BIT(VA_BITS - 1);
 
-		va_mask  = BIT(VA_BITS - 1) - 1;
-		va_mask |= region;
+		tag_lsb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
+				(u64)(high_memory - 1));
+
+		if (tag_lsb == (VA_BITS - 1)) {
+			/*
+			 * No space in the address, let's compute the
+			 * mask so that it covers (VA_BITS - 1) bits,
+			 * and the region bit. The tag is set to zero.
+			 */
+			tag_lsb = tag_val = 0;
+			va_mask  = BIT(VA_BITS - 1) - 1;
+			va_mask |= region;
+		} else {
+			/*
+			 * We do have some free bits. Let's have the
+			 * mask to cover the low bits of the VA, and
+			 * the tag to contain the random stuff plus
+			 * the region bit.
+			 */
+			u64 mask = GENMASK_ULL(VA_BITS - 2, tag_lsb);
+
+			va_mask = BIT(tag_lsb) - 1;
+			tag_val  = get_random_long() & mask;
+			tag_val |= region;
+			tag_val >>= tag_lsb;
+		}
 	}
 
 	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
@@ -60,6 +88,45 @@ u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
 							  AARCH64_INSN_VARIANT_64BIT,
 							  rn, rd, va_mask);
 		break;
+
+	case 1:
+		if (!tag_lsb)
+			return aarch64_insn_gen_nop();
+
+		/* ROR is a variant of EXTR with Rm = Rn */
+		insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
+					     rn, rn, rd,
+					     tag_lsb);
+		break;
+
+	case 2:
+		if (!tag_lsb)
+			return aarch64_insn_gen_nop();
+
+		insn = aarch64_insn_gen_add_sub_imm(rd, rn,
+						    tag_val & (SZ_4K - 1),
+						    AARCH64_INSN_VARIANT_64BIT,
+						    AARCH64_INSN_ADSB_ADD);
+		break;
+
+	case 3:
+		if (!tag_lsb)
+			return aarch64_insn_gen_nop();
+
+		insn = aarch64_insn_gen_add_sub_imm(rd, rn,
+						    tag_val & GENMASK(23, 12),
+						    AARCH64_INSN_VARIANT_64BIT,
+						    AARCH64_INSN_ADSB_ADD);
+		break;
+
+	case 4:
+		if (!tag_lsb)
+			return aarch64_insn_gen_nop();
+
+		/* ROR is a variant of EXTR with Rm = Rn */
+		insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
+					     rn, rn, rd, 64 - tag_lsb);
+		break;
 	}
 
 	BUG_ON(insn == AARCH64_BREAK_FAULT);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 0597c9846f1a..6633f5f07200 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1797,7 +1797,7 @@ int kvm_mmu_init(void)
 		  kern_hyp_va((unsigned long)high_memory - 1));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va(~0UL) &&
+	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v2 19/19] arm64: Update the KVM memory map documentation
  2017-12-11 14:49 ` Marc Zyngier
@ 2017-12-11 14:49   ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 14:49 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: Catalin Marinas, Will Deacon

Update the documentation to reflect the new tricks we play on the
EL2 mappings...

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 Documentation/arm64/memory.txt | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/Documentation/arm64/memory.txt b/Documentation/arm64/memory.txt
index 671bc0639262..ea64e20037f6 100644
--- a/Documentation/arm64/memory.txt
+++ b/Documentation/arm64/memory.txt
@@ -86,9 +86,11 @@ Translation table lookup with 64KB pages:
  +-------------------------------------------------> [63] TTBR0/1
 
 
-When using KVM without the Virtualization Host Extensions, the hypervisor
-maps kernel pages in EL2 at a fixed offset from the kernel VA. See the
-kern_hyp_va macro for more details.
+When using KVM without the Virtualization Host Extensions, the
+hypervisor maps kernel pages in EL2 at a fixed offset (modulo a random
+offset) from the linear mapping. See the kern_hyp_va macro and
+kvm_update_va_mask function for more details. MMIO devices such as
+GICv2 get mapped next to the HYP idmap page.
 
 When using KVM with the Virtualization Host Extensions, no additional
 mappings are created, since the host kernel runs directly in EL2.
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 01/19] arm64: asm-offsets: Avoid clashing DMA definitions
  2017-12-11 14:49   ` Marc Zyngier
@ 2017-12-11 15:03     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 66+ messages in thread
From: Russell King - ARM Linux @ 2017-12-11 15:03 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvm, kvmarm, Mark Rutland, Steve Capper,
	Catalin Marinas, Will Deacon, James Morse, Christoffer Dall

On Mon, Dec 11, 2017 at 02:49:19PM +0000, Marc Zyngier wrote:
> asm-offsets.h contains a few DMA related definitions that have
> the exact same name than the enum members they are derived from.
> 
> While this is not a problem so far, it will become an issue if
> both asm-offsets.h and include/linux/dma-direction.h: are pulled
> by the same file.

Umm.  asm-offsets.h is only supposed to be included by assembly files.
Assembly files would not be able to include linux/dma-direction.h.
So this shouldn't be a problem.

The same could be true of things like CLOCK_REALTIME etc.

Just don't do it.  Keep asm-offsets.h as something that gets included
by assembly and only assembly.

If you need to know the offset of some member, use offsetof(), don't
re-use asm-offsets.h.
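For example (an illustrative snippet, not from this thread):

	#include <linux/kvm_host.h>
	#include <linux/stddef.h>

	/* offset of the arch-specific state within struct kvm_vcpu */
	static const size_t vcpu_arch_offset = offsetof(struct kvm_vcpu, arch);

rather than exporting the constant through asm-offsets.h and pulling
that header into C code.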

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 01/19] arm64: asm-offsets: Avoid clashing DMA definitions
  2017-12-11 15:03     ` Russell King - ARM Linux
@ 2017-12-11 15:22       ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-11 15:22 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: kvm, Catalin Marinas, Will Deacon, kvmarm, linux-arm-kernel

On 11/12/17 15:03, Russell King - ARM Linux wrote:
> On Mon, Dec 11, 2017 at 02:49:19PM +0000, Marc Zyngier wrote:
>> asm-offsets.h contains a few DMA related definitions that have
>> the exact same name than the enum members they are derived from.
>>
>> While this is not a problem so far, it will become an issue if
>> both asm-offsets.h and include/linux/dma-direction.h: are pulled
>> by the same file.
> 
> Umm.  asm-offsets.h is only supposed to be included by assembly files.
> Assembly files would not be able to include linux/dma-direction.h
> So this shouldn't be a problem.
> 
> The same could be true of things like CLOCK_REALTIME etc.
> 
> Just don't do it.  Keep asm-offsets.h as something that gets included
> by assembly and only assembly.

That'd be true if C code never used anything that is exposed by
asm-offsets.h. Unfortunately, things like our alternative patching
rely on generating assembly (or rather, on using assembly-generated
data structures). For more details, please see patch 4 in the same series.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-11 14:49   ` Marc Zyngier
@ 2017-12-12 18:32     ` James Morse
  -1 siblings, 0 replies; 66+ messages in thread
From: James Morse @ 2017-12-12 18:32 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, Catalin Marinas, Will Deacon, linux-arm-kernel, kvmarm

Hi Marc,

On 11/12/17 14:49, Marc Zyngier wrote:
> We lack a way to encode operations such as AND, ORR, EOR that take
> an immediate value. Doing so is quite involved, and is all about
> reverse engineering the decoding algorithm described in the
> pseudocode function DecodeBitMasks().


As this is over my head, I've been pushing random encodings through gas/objdump
and then tracing them through here.... can this encode 0xf80000000fffffff?

gas thinks this is legal:
|   0:   92458000        and     x0, x0, #0xf80000000fffffff

I make that N=1, S=0x20, R=0x05.
(I'm still working out what 'S' means)


> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 7e432662d454..326b17016485 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c

> +static u32 aarch64_encode_immediate(u64 imm,
> +				    enum aarch64_insn_variant variant,
> +				    u32 insn)
> +{
> +	unsigned int immr, imms, n, ones, ror, esz, tmp;
> +	u64 mask;

[...]

> +	/* N is only set if we're encoding a 64bit value */
> +	n = esz == 64;
> +
> +	/* Trim imm to the element size */
> +	mask = BIT(esz - 1) - 1;
> +	imm &= mask;

Won't this lose the top bit of a 64bit immediate?

(but then you put it back later, so something funny is going on)

This becomes 0x780000000fffffff,


> +
> +	/* That's how many ones we need to encode */
> +	ones = hweight64(imm);

meaning we're short a one here,


> +
> +	/*
> +	 * imms is set to (ones - 1), prefixed with a string of ones
> +	 * and a zero if they fit. Cap it to 6 bits.
> +	 */
> +	imms  = ones - 1;
> +	imms |= 0xf << ffs(esz);
> +	imms &= BIT(6) - 1;

so imms is 0x1f, not 0x20.


> +	/* Compute the rotation */
> +	if (range_of_ones(imm)) {
> +		/*
> +		 * Pattern: 0..01..10..0
> +		 *
> +		 * Compute how many rotate we need to align it right
> +		 */
> +		ror = ffs(imm) - 1;

(how come range_of_ones() uses __ffs64() on the same value?)


> +	} else {
> +		/*
> +		 * Pattern: 0..01..10..01..1
> +		 *
> +		 * Fill the unused top bits with ones, and check if
> +		 * the result is a valid immediate (all ones with a
> +		 * contiguous ranges of zeroes).
> +		 */

> +		imm |= ~mask;

but here we put the missing one back,


> +		if (!range_of_ones(~imm))
> +			return AARCH64_BREAK_FAULT;

meaning we pass this check and carry on, (even though 0x780000000fffffff isn't a
legal value)


(this next bit I haven't worked out yet)
> +		/*
> +		 * Compute the rotation to get a continuous set of
> +		 * ones, with the first bit set at position 0
> +		 */
> +		ror = fls(~imm);
> +	}
> +
> +	/*
> +	 * immr is the number of bits we need to rotate back to the
> +	 * original set of ones. Note that this is relative to the
> +	 * element size...
> +	 */
> +	immr = (esz - ror) & (esz - 1);


If I've followed this through correctly, this results in:
|   0:   92457c00        and     x0, x0, #0xf800000007ffffff

... which wasn't the immediate I started with.


Unless I've gone wrong, I think the 'Trim imm to the element size' code needs to
move up into the esz-reducing loop so it doesn't happen for a 64bit immediate.



Thanks,

James

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-11 14:49   ` Marc Zyngier
@ 2017-12-12 18:56     ` Peter Maydell
  -1 siblings, 0 replies; 66+ messages in thread
From: Peter Maydell @ 2017-12-12 18:56 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: arm-mail-list, kvm-devel, kvmarm, Catalin Marinas, Will Deacon

On 11 December 2017 at 14:49, Marc Zyngier <marc.zyngier@arm.com> wrote:
> We lack a way to encode operations such as AND, ORR, EOR that take
> an immediate value. Doing so is quite involved, and is all about
> reverse engineering the decoding algorithm described in the
> pseudocode function DecodeBitMasks().

Is it possible to borrow the existing tested implementation
which a compiler surely must have for this, rather than having
to reinvent this rather complicated wheel? Here's LLVM's version:

https://github.com/llvm-mirror/llvm/blob/93e6e5414ded14bcbb233baaaa5567132fee9a0c/lib/Target/AArch64/MCTargetDesc/AArch64AddressingModes.h#L209

(confirming that the LLVM license is GPLv2 compatible is
left as an exercise for the reader, but I'm pretty sure it is)

PS: typo in subject: 'literal'.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-12 18:32     ` James Morse
@ 2017-12-12 23:40       ` Peter Maydell
  -1 siblings, 0 replies; 66+ messages in thread
From: Peter Maydell @ 2017-12-12 23:40 UTC (permalink / raw)
  To: James Morse
  Cc: kvm-devel, Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm,
	arm-mail-list

On 12 December 2017 at 18:32, James Morse <james.morse@arm.com> wrote:
> As this is over my head, I've been pushing random encodings through gas/objdump
> and then tracing them through here.... can this encode 0xf80000000fffffff?
>
> gas thinks this is legal:
> |   0:   92458000        and     x0, x0, #0xf80000000fffffff
>
> I make that N=1, S=0x20, R=0x05.
> (I'm still working out what 'S' means)

This comment from QEMU (describing the decode direction, i.e.
immn,imms,immr => immediate) might assist:

    /* The bit patterns we create here are 64 bit patterns which
     * are vectors of identical elements of size e = 2, 4, 8, 16, 32 or
     * 64 bits each. Each element contains the same value: a run
     * of between 1 and e-1 non-zero bits, rotated within the
     * element by between 0 and e-1 bits.
     *
     * The element size and run length are encoded into immn (1 bit)
     * and imms (6 bits) as follows:
     * 64 bit elements: immn = 1, imms = <length of run - 1>
     * 32 bit elements: immn = 0, imms = 0 : <length of run - 1>
     * 16 bit elements: immn = 0, imms = 10 : <length of run - 1>
     *  8 bit elements: immn = 0, imms = 110 : <length of run - 1>
     *  4 bit elements: immn = 0, imms = 1110 : <length of run - 1>
     *  2 bit elements: immn = 0, imms = 11110 : <length of run - 1>
     * Notice that immn = 0, imms = 11111x is the only combination
     * not covered by one of the above options; this is reserved.
     * Further, <length of run - 1> all-ones is a reserved pattern.
     *
     * In all cases the rotation is by immr % e (and immr is 6 bits).
     */

so N=1 S=0x20 means run length 33, element size 64 (and
indeed your immediate has a run of 33 set bits).

(The Arm ARM pseudocode is confusing here because it merges
the handling of logical-immediates and bitfield instructions
together, which is nice if you're a hardware engineer. For
software you're much better off keeping the two separate.)
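
A minimal C sketch of that decode direction (immN:imms:immr back to a
64-bit value), with illustrative names rather than QEMU's or the
kernel's actual helpers, could look like this:

    #include <stdint.h>

    /*
     * Sketch only: decode (immN, imms, immr) into the 64-bit value it
     * represents, following the element-size/run-length scheme above.
     * Returns 0 for the reserved encodings (0 itself is never valid).
     */
    static uint64_t decode_logic_imm(unsigned int immn, unsigned int imms,
                                     unsigned int immr)
    {
        unsigned int combined = (immn << 6) | (~imms & 0x3f);
        unsigned int esz, run, rot, i;
        uint64_t elt, emask;

        if (!combined)                  /* immn = 0, imms = 111111: reserved */
            return 0;

        /* Element size is 2^(index of the highest set bit of immN:NOT(imms)) */
        esz = 1U << (31 - __builtin_clz(combined));

        run = (imms & (esz - 1)) + 1;   /* length of the run of ones */
        if (run == esz)                 /* all-ones element: reserved */
            return 0;

        emask = (esz == 64) ? ~0ULL : (1ULL << esz) - 1;
        elt = (1ULL << run) - 1;        /* run < 64 here, so no UB */

        /* Rotate the element right by immr % esz */
        rot = immr & (esz - 1);
        if (rot)
            elt = ((elt >> rot) | (elt << (esz - rot))) & emask;

        /* Replicate the element across all 64 bits */
        for (i = esz; i < 64; i *= 2)
            elt |= elt << i;

        return elt;
    }

Feeding in the example above, decode_logic_imm(1, 0x20, 0x05) gives
0xf80000000fffffff, matching the gas output quoted at the top.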

thanks
-- PMM

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-12 18:32     ` James Morse
@ 2017-12-13 14:32       ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-13 14:32 UTC (permalink / raw)
  To: James Morse
  Cc: linux-arm-kernel, kvm, kvmarm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

Hi James,

On 12/12/17 18:32, James Morse wrote:
> Hi Marc,
> 
> On 11/12/17 14:49, Marc Zyngier wrote:
>> We lack a way to encode operations such as AND, ORR, EOR that take
>> an immediate value. Doing so is quite involved, and is all about
>> reverse engineering the decoding algorithm described in the
>> pseudocode function DecodeBitMasks().
> 
> 
> As this is over my head, I've been pushing random encodings through gas/objdump
> and then tracing them through here.... can this encode 0xf80000000fffffff?
> 
> gas thinks this is legal:
> |   0:   92458000        and     x0, x0, #0xf80000000fffffff
> 
> I make that N=1, S=0x20, R=0x05.
> (I'm still working out what 'S' means)
> 
> 
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 7e432662d454..326b17016485 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
> 
>> +static u32 aarch64_encode_immediate(u64 imm,
>> +				    enum aarch64_insn_variant variant,
>> +				    u32 insn)
>> +{
>> +	unsigned int immr, imms, n, ones, ror, esz, tmp;
>> +	u64 mask;
> 
> [...]
> 
>> +	/* N is only set if we're encoding a 64bit value */
>> +	n = esz == 64;
>> +
>> +	/* Trim imm to the element size */
>> +	mask = BIT(esz - 1) - 1;
>> +	imm &= mask;
> 
> Won't this lose the top bit of a 64bit immediate?

Humfff... Yup, nicely spotted.

> 
> (but then you put it back later, so something funny is going on)
> 
> This becomes 0x780000000fffffff,
> 
> 
>> +
>> +	/* That's how many ones we need to encode */
>> +	ones = hweight64(imm);
> 
> meaning we're short a one here,
> 
> 
>> +
>> +	/*
>> +	 * imms is set to (ones - 1), prefixed with a string of ones
>> +	 * and a zero if they fit. Cap it to 6 bits.
>> +	 */
>> +	imms  = ones - 1;
>> +	imms |= 0xf << ffs(esz);
>> +	imms &= BIT(6) - 1;
> 
> so imms is 0x1f, not 0x20.
> 
> 
>> +	/* Compute the rotation */
>> +	if (range_of_ones(imm)) {
>> +		/*
>> +		 * Pattern: 0..01..10..0
>> +		 *
>> +		 * Compute how many rotate we need to align it right
>> +		 */
>> +		ror = ffs(imm) - 1;
> 
> (how come range_of_ones() uses __ffs64() on the same value?)

News flash: range_of_ones is completely buggy. It will fail on the 
trivial value 1 (__ffs64(1) = 0; 0 - 1 = -1; val >> -1 is... ermmmm).
I definitely got mixed up between the two.

>> +	} else {
>> +		/*
>> +		 * Pattern: 0..01..10..01..1
>> +		 *
>> +		 * Fill the unused top bits with ones, and check if
>> +		 * the result is a valid immediate (all ones with a
>> +		 * contiguous ranges of zeroes).
>> +		 */
> 
>> +		imm |= ~mask;
> 
> but here we put the missing one back,
> 
> 
>> +		if (!range_of_ones(~imm))
>> +			return AARCH64_BREAK_FAULT;
> 
> meaning we pass this check and carry on, (even though 0x780000000fffffff isn't a
> legal value)
> 
> 
> (this next bit I haven't worked out yet)
>> +		/*
>> +		 * Compute the rotation to get a continuous set of
>> +		 * ones, with the first bit set at position 0
>> +		 */
>> +		ror = fls(~imm);
>> +	}
>> +
>> +	/*
>> +	 * immr is the number of bits we need to rotate back to the
>> +	 * original set of ones. Note that this is relative to the
>> +	 * element size...
>> +	 */
>> +	immr = (esz - ror) & (esz - 1);
> 
> 
> If I've followed this through correctly, this results in:
> |   0:   92457c00        and     x0, x0, #0xf800000007ffffff
> 
> ... which wasn't the immediate I started with.
> 
> 
> Unless I've gone wrong, I think the 'Trim imm to the element size' code needs to
> move up into the esz-reducing loop so it doesn't happen for a 64bit immediate.

Yup. I've stashed the following patch:

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index b8fb2d89b3a6..e58be1c57f18 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1503,8 +1503,7 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
 static bool range_of_ones(u64 val)
 {
 	/* Doesn't handle full ones or full zeroes */
-	int x = __ffs64(val) - 1;
-	u64 sval = val >> x;
+	u64 sval = val >> __ffs64(val);
 
 	/* One of Sean Eron Anderson's bithack tricks */
 	return ((sval + 1) & (sval)) == 0;
@@ -1515,7 +1514,7 @@ static u32 aarch64_encode_immediate(u64 imm,
 				    u32 insn)
 {
 	unsigned int immr, imms, n, ones, ror, esz, tmp;
-	u64 mask;
+	u64 mask = ~0UL;
 
 	/* Can't encode full zeroes or full ones */
 	if (!imm || !~imm)
@@ -1543,8 +1542,12 @@ static u32 aarch64_encode_immediate(u64 imm,
 	for (tmp = esz; tmp > 2; tmp /= 2) {
 		u64 emask = BIT(tmp / 2) - 1;
 
-		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
+		if ((imm & emask) != ((imm >> (tmp / 2)) & emask)) {
+			/* Trim imm to the element size */
+			mask = BIT(esz - 1) - 1;
+			imm &= mask;
 			break;
+		}
 
 		esz = tmp;
 	}
@@ -1552,10 +1555,6 @@ static u32 aarch64_encode_immediate(u64 imm,
 	/* N is only set if we're encoding a 64bit value */
 	n = esz == 64;
 
-	/* Trim imm to the element size */
-	mask = BIT(esz - 1) - 1;
-	imm &= mask;
-
 	/* That's how many ones we need to encode */
 	ones = hweight64(imm);
 
I really need to run this against gas in order to make sure
I get the same parameters for all the possible values.
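
For the "all possible values" part, a small generator is enough: every
valid 64-bit pattern can be built by construction (element size e, a run
of r ones, rotated by s), printed as gas-assemblable lines, and the
resulting opcodes diffed against what aarch64_encode_immediate()
produces. A sketch of one way to do it:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int e, r, s, i;
        unsigned long count = 0;

        for (e = 2; e <= 64; e *= 2) {
            uint64_t emask = (e == 64) ? ~0ULL : (1ULL << e) - 1;

            for (r = 1; r < e; r++) {           /* run length, 1..e-1 */
                for (s = 0; s < e; s++) {       /* rotation, 0..e-1 */
                    uint64_t imm = (1ULL << r) - 1;

                    if (s)                      /* rotate right by s within e bits */
                        imm = ((imm >> s) | (imm << (e - s))) & emask;

                    for (i = e; i < 64; i *= 2) /* replicate the element to 64 bits */
                        imm |= imm << i;

                    printf("\tand x0, x0, #0x%016llx\n",
                           (unsigned long long)imm);
                    count++;
                }
            }
        }

        fprintf(stderr, "%lu immediates\n", count);     /* 5334 of them */
        return 0;
    }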

Many thanks for this careful review!

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-13 14:32       ` Marc Zyngier
@ 2017-12-13 15:45         ` James Morse
  -1 siblings, 0 replies; 66+ messages in thread
From: James Morse @ 2017-12-13 15:45 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvm, kvmarm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

Hi Marc,

On 13/12/17 14:32, Marc Zyngier wrote:
> On 12/12/17 18:32, James Morse wrote:
>> On 11/12/17 14:49, Marc Zyngier wrote:
>>> We lack a way to encode operations such as AND, ORR, EOR that take
>>> an immediate value. Doing so is quite involved, and is all about
>>> reverse engineering the decoding algorithm described in the
>>> pseudocode function DecodeBitMasks().

>>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>>> index 7e432662d454..326b17016485 100644
>>> --- a/arch/arm64/kernel/insn.c
>>> +++ b/arch/arm64/kernel/insn.c
>>
>>> +static u32 aarch64_encode_immediate(u64 imm,
>>> +				    enum aarch64_insn_variant variant,
>>> +				    u32 insn)
>>> +{
>>> +	unsigned int immr, imms, n, ones, ror, esz, tmp;
>>> +	u64 mask;
>>
>> [...]
>>
>>> +	/* Compute the rotation */
>>> +	if (range_of_ones(imm)) {
>>> +		/*
>>> +		 * Pattern: 0..01..10..0
>>> +		 *
>>> +		 * Compute how many rotate we need to align it right
>>> +		 */
>>> +		ror = ffs(imm) - 1;
>>
>> (how come range_of_ones() uses __ffs64() on the same value?)
> 
> News flash: range_of_ones is completely buggy. It will fail on the 
> trivial value 1 (__ffs64(1) = 0; 0 - 1 = -1; val >> -1 is... ermmmm).
> I definitely got mixed up between the two.

They do different things!? Aaaaaahhhh....

[ ...]

>> Unless I've gone wrong, I think the 'Trim imm to the element size' code needs to
>> move up into the esz-reducing loop so it doesn't happen for a 64bit immediate.


> Yup. I've stashed the following patch:
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index b8fb2d89b3a6..e58be1c57f18 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -1503,8 +1503,7 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
>  static bool range_of_ones(u64 val)
>  {
>  	/* Doesn't handle full ones or full zeroes */
> -	int x = __ffs64(val) - 1;
> -	u64 sval = val >> x;
> +	u64 sval = val >> __ffs64(val);
>  
>  	/* One of Sean Eron Anderson's bithack tricks */
>  	return ((sval + 1) & (sval)) == 0;
> @@ -1515,7 +1514,7 @@ static u32 aarch64_encode_immediate(u64 imm,
>  				    u32 insn)
>  {
>  	unsigned int immr, imms, n, ones, ror, esz, tmp;
> -	u64 mask;
> +	u64 mask = ~0UL;
>  
>  	/* Can't encode full zeroes or full ones */
>  	if (!imm || !~imm)
> @@ -1543,8 +1542,12 @@ static u32 aarch64_encode_immediate(u64 imm,
>  	for (tmp = esz; tmp > 2; tmp /= 2) {
>  		u64 emask = BIT(tmp / 2) - 1;
>  
> -		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
> +		if ((imm & emask) != ((imm >> (tmp / 2)) & emask)) {
> +			/* Trim imm to the element size */
> +			mask = BIT(esz - 1) - 1;
> +			imm &= mask;

Won't this still lose the top bit? It generates 0x7fffffff for esz=32, and for
esz=32 we run through here when the two 16bit values are different.

This still runs for a 64bit immediate. The 0xf80000000fffffff example compares
0xf8000000 with 0x0fffffff, then breaks here on the first iteration of this loop.
With this change it still attempts to generate a 64bit mask.

I was thinking of something like [0]. That only runs when we know the two
halves of the tmp-bit value match; it just keeps the bottom half for the next
iteration and never runs for a 64bit immediate.


>  			break;
> +		}
>  
>  		esz = tmp;
>  	}
> @@ -1552,10 +1555,6 @@ static u32 aarch64_encode_immediate(u64 imm,
>  	/* N is only set if we're encoding a 64bit value */
>  	n = esz == 64;
>  
> -	/* Trim imm to the element size */
> -	mask = BIT(esz - 1) - 1;
> -	imm &= mask;
> -
>  	/* That's how many ones we need to encode */
>  	ones = hweight64(imm);
>  
> I really need to run this against gas in order to make sure
> I get the same parameters for all the possible values.

Sounds good,


Thanks,

James


[0] Not even built:
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 12d3ec2154c2..d9fbdea7b18d 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1529,15 +1529,15 @@ static u32 aarch64_encode_immediate(u64 imm,
                        break;

                esz = tmp;
+
+               /* Trim imm to the element size */
+               mask = BIT(esz) - 1;
+               imm &= mask;
        }

        /* N is only set if we're encoding a 64bit value */
        n = esz == 64;

-       /* Trim imm to the element size */
-       mask = BIT(esz - 1) - 1;
-       imm &= mask;
-
        /* That's how many ones we need to encode */
        ones = hweight64(imm);

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-13 15:45         ` James Morse
@ 2017-12-13 15:52           ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-13 15:52 UTC (permalink / raw)
  To: James Morse
  Cc: linux-arm-kernel, kvm, kvmarm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

On 13/12/17 15:45, James Morse wrote:
> Hi Marc,
> 
> On 13/12/17 14:32, Marc Zyngier wrote:
>> On 12/12/17 18:32, James Morse wrote:
>>> On 11/12/17 14:49, Marc Zyngier wrote:
>>>> We lack a way to encode operations such as AND, ORR, EOR that take
>>>> an immediate value. Doing so is quite involved, and is all about
>>>> reverse engineering the decoding algorithm described in the
>>>> pseudocode function DecodeBitMasks().
> 
>>>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>>>> index 7e432662d454..326b17016485 100644
>>>> --- a/arch/arm64/kernel/insn.c
>>>> +++ b/arch/arm64/kernel/insn.c
>>>
>>>> +static u32 aarch64_encode_immediate(u64 imm,
>>>> +				    enum aarch64_insn_variant variant,
>>>> +				    u32 insn)
>>>> +{
>>>> +	unsigned int immr, imms, n, ones, ror, esz, tmp;
>>>> +	u64 mask;
>>>
>>> [...]
>>>
>>>> +	/* Compute the rotation */
>>>> +	if (range_of_ones(imm)) {
>>>> +		/*
>>>> +		 * Pattern: 0..01..10..0
>>>> +		 *
>>>> +		 * Compute how many rotate we need to align it right
>>>> +		 */
>>>> +		ror = ffs(imm) - 1;
>>>
>>> (how come range_of_ones() uses __ffs64() on the same value?)
>>
>> News flash: range_of_ones is completely buggy. It will fail on the 
>> trivial value 1 (__ffs64(1) = 0; 0 - 1 = -1; val >> -1 is... ermmmm).
>> I definitely got mixed up between the two.
> 
> They do different things!? Aaaaaahhhh....
> 
> [ ...]
> 
>>> Unless I've gone wrong, I think the 'Trim imm to the element size' code needs to
>>> move up into the esz-reducing loop so it doesn't happen for a 64bit immediate.
> 
> 
>> Yup. I've stashed the following patch:
>>
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index b8fb2d89b3a6..e58be1c57f18 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -1503,8 +1503,7 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
>>  static bool range_of_ones(u64 val)
>>  {
>>  	/* Doesn't handle full ones or full zeroes */
>> -	int x = __ffs64(val) - 1;
>> -	u64 sval = val >> x;
>> +	u64 sval = val >> __ffs64(val);
>>  
>>  	/* One of Sean Eron Anderson's bithack tricks */
>>  	return ((sval + 1) & (sval)) == 0;
>> @@ -1515,7 +1514,7 @@ static u32 aarch64_encode_immediate(u64 imm,
>>  				    u32 insn)
>>  {
>>  	unsigned int immr, imms, n, ones, ror, esz, tmp;
>> -	u64 mask;
>> +	u64 mask = ~0UL;
>>  
>>  	/* Can't encode full zeroes or full ones */
>>  	if (!imm || !~imm)
>> @@ -1543,8 +1542,12 @@ static u32 aarch64_encode_immediate(u64 imm,
>>  	for (tmp = esz; tmp > 2; tmp /= 2) {
>>  		u64 emask = BIT(tmp / 2) - 1;
>>  
>> -		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
>> +		if ((imm & emask) != ((imm >> (tmp / 2)) & emask)) {
>> +			/* Trim imm to the element size */
>> +			mask = BIT(esz - 1) - 1;
>> +			imm &= mask;
> 
> Won't this still lose the top bit? It generates 0x7fffffff for esz=32, and for
> esz=32 we run through here when the two 16bit values are different.
> 
> This still runs for a 64bit immediate. The 0xf80000000fffffff example compares
> 0xf8000000 with 0x0fffffff, then breaks here on the first iteration of this loop.
> With this change it still attempts to generate a 64bit mask.
> 
> I was thinking of something like [0]. That only runs when we know the two
> halves of the tmp-bit value match; it just keeps the bottom half for the next
> iteration and never runs for a 64bit immediate.

You're right. Again. And I can't think. That's it, I'm implementing the
testing rig.

> [0] Not even built:
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 12d3ec2154c2..d9fbdea7b18d 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -1529,15 +1529,15 @@ static u32 aarch64_encode_immediate(u64 imm,
>                         break;
> 
>                 esz = tmp;
> +
> +               /* Trim imm to the element size */
> +               mask = BIT(esz) - 1;
> +               imm &= mask;
>         }
> 
>         /* N is only set if we're encoding a 64bit value */
>         n = esz == 64;
> 
> -       /* Trim imm to the element size */
> -       mask = BIT(esz - 1) - 1;
> -       imm &= mask;
> -
>         /* That's how many ones we need to encode */
>         ones = hweight64(imm);

This is definitely much better.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 05/19] arm64: alternatives: Add dynamic patching feature
  2017-12-11 14:49   ` Marc Zyngier
@ 2017-12-13 17:53     ` Catalin Marinas
  -1 siblings, 0 replies; 66+ messages in thread
From: Catalin Marinas @ 2017-12-13 17:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvm, kvmarm, Mark Rutland, Steve Capper,
	Will Deacon, James Morse, Christoffer Dall

On Mon, Dec 11, 2017 at 02:49:23PM +0000, Marc Zyngier wrote:
> We've so far relied on a patching infrastructure that only gave us
> a single alternative, without any way to finely control what gets
> patched. For a single feature, this is an all or nothing thing.
> 
> It would be interesting to have a more fine grained way of patching
> the kernel though, where we could dynamically tune the code that gets
> injected.
> 
> In order to achieve this, let's introduce a new form of alternative
> that is associated with a callback. This callback gets the instruction
> sequence number and the old instruction as a parameter, and returns
> the new instruction. This callback is always called, as the patching
> decision is now done at runtime (not patching is equivalent to returning
> the same instruction).
> 
> Patching with a callback is declared with the new ALTERNATIVE_CB
> and alternative_cb directives:
> 
> 	asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
> 		     : "r" (v));
> or
> 	alternative_cb callback
> 		mov	x0, #0
> 	alternative_else_nop_endif

Could we have a new "alternative_cb_endif" instead of
alternative_else_nop_endif? IIUC, the nops generated in the
.altinstr_replacement section wouldn't be used, so I think it makes the
code clearer that there is no other alternative instruction set, just an
in-place update of the given instruction.

> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
> index 395befde7595..ce612e10a2c9 100644
> --- a/arch/arm64/include/asm/alternative.h
> +++ b/arch/arm64/include/asm/alternative.h
[...]
> -.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
> +.macro altinstruction_entry orig_offset, alt_offset, feature, orig_len, alt_len, cb = 0
>  	.align ALTINSTR_ALIGN
>  	.word \orig_offset - .
> +	.if \cb == 0
>  	.word \alt_offset - .
> +	.else
> +	.word \cb - .
> +	.endif
>  	.hword \feature
>  	.byte \orig_len
>  	.byte \alt_len
>  .endm
>  
> -.macro alternative_insn insn1, insn2, cap, enable = 1
> +.macro alternative_insn insn1, insn2, cap, enable = 1, cb = 0
>  	.if \enable
>  661:	\insn1
>  662:	.pushsection .altinstructions, "a"
> -	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
> +	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f, \cb
>  	.popsection
>  	.pushsection .altinstr_replacement, "ax"
>  663:	\insn2

So here we could skip .pushsection .altinstr_replacement if cb. We could
even pass \cb directly to altinstruction_entry instead of 663f so that
we keep altinstruction_entry unmodified.

> @@ -109,10 +119,10 @@ void apply_alternatives(void *start, size_t length);
>  /*
>   * Begin an alternative code sequence.
>   */
> -.macro alternative_if_not cap
> +.macro alternative_if_not cap, cb = 0
>  	.set .Lasm_alt_mode, 0
>  	.pushsection .altinstructions, "a"
> -	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
> +	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f, \cb
>  	.popsection
>  661:
>  .endm
> @@ -120,13 +130,17 @@ void apply_alternatives(void *start, size_t length);
>  .macro alternative_if cap
>  	.set .Lasm_alt_mode, 1
>  	.pushsection .altinstructions, "a"
> -	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
> +	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f, 0
>  	.popsection
>  	.pushsection .altinstr_replacement, "ax"
>  	.align 2	/* So GAS knows label 661 is suitably aligned */
>  661:
>  .endm

and here we wouldn't need this hunk for alternative_if.

> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -110,12 +110,15 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>  	struct alt_instr *alt;
>  	struct alt_region *region = alt_region;
>  	__le32 *origptr, *replptr, *updptr;
> +	alternative_cb_t alt_cb;
>  
>  	for (alt = region->begin; alt < region->end; alt++) {
>  		u32 insn;
>  		int i, nr_inst;
>  
> -		if (!cpus_have_cap(alt->cpufeature))
> +		/* Use ARM64_NCAPS as an unconditional patch */
> +		if (alt->cpufeature != ARM64_NCAPS &&

Nitpick (personal preference): alt->cpufeature < ARM64_NCAPS.

> +		    !cpus_have_cap(alt->cpufeature))
>  			continue;
>  
>  		BUG_ON(alt->alt_len != alt->orig_len);
> @@ -124,11 +127,18 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>  
>  		origptr = ALT_ORIG_PTR(alt);
>  		replptr = ALT_REPL_PTR(alt);
> +		alt_cb  = ALT_REPL_PTR(alt);
>  		updptr = use_linear_alias ? lm_alias(origptr) : origptr;
>  		nr_inst = alt->alt_len / sizeof(insn);
>  
>  		for (i = 0; i < nr_inst; i++) {
> -			insn = get_alt_insn(alt, origptr + i, replptr + i);
> +			if (alt->cpufeature == ARM64_NCAPS) {
> +				insn = le32_to_cpu(updptr[i]);
> +				insn = alt_cb(alt, i, insn);

I wonder whether we'd need the origptr + i as well at some point (e.g.
to generate some relative relocations).

-- 
Catalin

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals
  2017-12-13 15:45         ` James Morse
@ 2017-12-14  8:40           ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-14  8:40 UTC (permalink / raw)
  To: James Morse
  Cc: linux-arm-kernel, kvm, kvmarm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

On 13/12/17 15:45, James Morse wrote:
> Hi Marc,
> 
> On 13/12/17 14:32, Marc Zyngier wrote:
>> On 12/12/17 18:32, James Morse wrote:
>>> On 11/12/17 14:49, Marc Zyngier wrote:
>>>> We lack a way to encode operations such as AND, ORR, EOR that take
>>>> an immediate value. Doing so is quite involved, and is all about
>>>> reverse engineering the decoding algorithm described in the
>>>> pseudocode function DecodeBitMasks().
> 
>>>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>>>> index 7e432662d454..326b17016485 100644
>>>> --- a/arch/arm64/kernel/insn.c
>>>> +++ b/arch/arm64/kernel/insn.c
>>>
>>>> +static u32 aarch64_encode_immediate(u64 imm,
>>>> +				    enum aarch64_insn_variant variant,
>>>> +				    u32 insn)
>>>> +{
>>>> +	unsigned int immr, imms, n, ones, ror, esz, tmp;
>>>> +	u64 mask;
>>>
>>> [...]
>>>
>>>> +	/* Compute the rotation */
>>>> +	if (range_of_ones(imm)) {
>>>> +		/*
>>>> +		 * Pattern: 0..01..10..0
>>>> +		 *
>>>> +		 * Compute how many rotate we need to align it right
>>>> +		 */
>>>> +		ror = ffs(imm) - 1;
>>>
>>> (how come range_of_ones() uses __ffs64() on the same value?)
>>
>> News flash: range_of_ones is completely buggy. It will fail on the 
>> trivial value 1 (__ffs64(1) = 0; 0 - 1 = -1; val >> -1 is... ermmmm).
>> I definitely got mixed up between the two.
> 
> They do different things!? Aaaaaahhhh....
> 
> [ ...]
> 
>>> Unless I've gone wrong, I think the 'Trim imm to the element size' code needs to
>>> move up into the esz-reducing loop so it doesn't happen for a 64bit immediate.
> 
> 
>> Yup. I've stashed the following patch:
>>
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index b8fb2d89b3a6..e58be1c57f18 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -1503,8 +1503,7 @@ pstate_check_t * const aarch32_opcode_cond_checks[16] = {
>>  static bool range_of_ones(u64 val)
>>  {
>>  	/* Doesn't handle full ones or full zeroes */
>> -	int x = __ffs64(val) - 1;
>> -	u64 sval = val >> x;
>> +	u64 sval = val >> __ffs64(val);
>>  
>>  	/* One of Sean Eron Anderson's bithack tricks */
>>  	return ((sval + 1) & (sval)) == 0;
>> @@ -1515,7 +1514,7 @@ static u32 aarch64_encode_immediate(u64 imm,
>>  				    u32 insn)
>>  {
>>  	unsigned int immr, imms, n, ones, ror, esz, tmp;
>> -	u64 mask;
>> +	u64 mask = ~0UL;
>>  
>>  	/* Can't encode full zeroes or full ones */
>>  	if (!imm || !~imm)
>> @@ -1543,8 +1542,12 @@ static u32 aarch64_encode_immediate(u64 imm,
>>  	for (tmp = esz; tmp > 2; tmp /= 2) {
>>  		u64 emask = BIT(tmp / 2) - 1;
>>  
>> -		if ((imm & emask) != ((imm >> (tmp / 2)) & emask))
>> +		if ((imm & emask) != ((imm >> (tmp / 2)) & emask)) {
>> +			/* Trim imm to the element size */
>> +			mask = BIT(esz - 1) - 1;
>> +			imm &= mask;
> 
> Won't this still lose the top bit? It generates 0x7fffffff for esz=32, and for
> esz=32 we run through here when the two 16bit values are different.
> 
> This still runs for a 64bit immediate. The 0xf80000000fffffff example compares
> 0xf8000000 with 0x0fffffff, then breaks here on the first iteration of this loop.
> With this change it still attempts to generate a 64bit mask.
> 
> I was thinking of something like [0]. That only runs when we know the two
> halves of the tmp-bit value match; it just keeps the bottom half for the next
> iteration and never runs for a 64bit immediate.
> 
> 
>>  			break;
>> +		}
>>  
>>  		esz = tmp;
>>  	}
>> @@ -1552,10 +1555,6 @@ static u32 aarch64_encode_immediate(u64 imm,
>>  	/* N is only set if we're encoding a 64bit value */
>>  	n = esz == 64;
>>  
>> -	/* Trim imm to the element size */
>> -	mask = BIT(esz - 1) - 1;
>> -	imm &= mask;
>> -
>>  	/* That's how many ones we need to encode */
>>  	ones = hweight64(imm);
>>  
>> I really need to run this against gas in order to make sure
>> I get the same parameters for all the possible values.
> 
> Sounds good,
> 
> 
> Thanks,
> 
> James
> 
> 
> [0] Not even built:
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 12d3ec2154c2..d9fbdea7b18d 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -1529,15 +1529,15 @@ static u32 aarch64_encode_immediate(u64 imm,
>                         break;
> 
>                 esz = tmp;
> +
> +               /* Trim imm to the element size */
> +               mask = BIT(esz) - 1;
> +               imm &= mask;

This should go together with a small adjustment of the narrowing loop,
as we never hit esz==2, which is a bit of a problem.
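
One possible shape for that (just a sketch, folding the trim into the
match case so a 64bit immediate is never trimmed and esz can reach 2):

	for (tmp = esz / 2; tmp >= 2; tmp /= 2) {
		u64 emask = BIT(tmp) - 1;

		/* Do the two tmp-bit halves of the current element match? */
		if ((imm & emask) != ((imm >> tmp) & emask))
			break;

		/* They do: halve the element size, keep only the low half */
		esz = tmp;
		imm &= emask;
	}

With that loop, 0xf80000000fffffff breaks out on the first comparison
with esz still 64 and imm untouched, while 0x5555555555555555 narrows
all the way down to esz == 2.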

I now have a small test rig generating all the valid immediates and
comparing the encodings with GAS, which helped in figuring out the bugs.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 05/19] arm64: alternatives: Add dynamic patching feature
  2017-12-13 17:53     ` Catalin Marinas
@ 2017-12-14 12:22       ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-14 12:22 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, kvm, kvmarm, Mark Rutland, Steve Capper,
	Will Deacon, James Morse, Christoffer Dall

On 13/12/17 17:53, Catalin Marinas wrote:
> On Mon, Dec 11, 2017 at 02:49:23PM +0000, Marc Zyngier wrote:
>> We've so far relied on a patching infrastructure that only gave us
>> a single alternative, without any way to finely control what gets
>> patched. For a single feature, this is an all or nothing thing.
>>
>> It would be interesting to have a more fine grained way of patching
>> the kernel though, where we could dynamically tune the code that gets
>> injected.
>>
>> In order to achieve this, let's introduce a new form of alternative
>> that is associated with a callback. This callback gets the instruction
>> sequence number and the old instruction as a parameter, and returns
>> the new instruction. This callback is always called, as the patching
>> decision is now done at runtime (not patching is equivalent to returning
>> the same instruction).
>>
>> Patching with a callback is declared with the new ALTERNATIVE_CB
>> and alternative_cb directives:
>>
>> 	asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
>> 		     : "r" (v));
>> or
>> 	alternative_cb callback
>> 		mov	x0, #0
>> 	alternative_else_nop_endif
> 
> Could we have a new "alternative_cb_endif" instead of
> alternative_else_nop_endif? IIUC, the nops generated in the
> .altinstr_replacement section wouldn't be used, so I think it makes the
> code clearer that there is no other alternative instruction set, just an
> in-place update of the given instruction.

Yes, good call.

> 
>> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
>> index 395befde7595..ce612e10a2c9 100644
>> --- a/arch/arm64/include/asm/alternative.h
>> +++ b/arch/arm64/include/asm/alternative.h
> [...]
>> -.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
>> +.macro altinstruction_entry orig_offset, alt_offset, feature, orig_len, alt_len, cb = 0
>>  	.align ALTINSTR_ALIGN
>>  	.word \orig_offset - .
>> +	.if \cb == 0
>>  	.word \alt_offset - .
>> +	.else
>> +	.word \cb - .
>> +	.endif
>>  	.hword \feature
>>  	.byte \orig_len
>>  	.byte \alt_len
>>  .endm
>>  
>> -.macro alternative_insn insn1, insn2, cap, enable = 1
>> +.macro alternative_insn insn1, insn2, cap, enable = 1, cb = 0
>>  	.if \enable
>>  661:	\insn1
>>  662:	.pushsection .altinstructions, "a"
>> -	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
>> +	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f, \cb
>>  	.popsection
>>  	.pushsection .altinstr_replacement, "ax"
>>  663:	\insn2
> 
> So here we could skip .pushsection .altinstr_replacement if cb. We could
> even pass \cb directly to altinstruction_entry instead of 663f so that
> we keep altinstruction_entry unmodified.
> 
>> @@ -109,10 +119,10 @@ void apply_alternatives(void *start, size_t length);
>>  /*
>>   * Begin an alternative code sequence.
>>   */
>> -.macro alternative_if_not cap
>> +.macro alternative_if_not cap, cb = 0
>>  	.set .Lasm_alt_mode, 0
>>  	.pushsection .altinstructions, "a"
>> -	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
>> +	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f, \cb
>>  	.popsection
>>  661:
>>  .endm
>> @@ -120,13 +130,17 @@ void apply_alternatives(void *start, size_t length);
>>  .macro alternative_if cap
>>  	.set .Lasm_alt_mode, 1
>>  	.pushsection .altinstructions, "a"
>> -	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
>> +	altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f, 0
>>  	.popsection
>>  	.pushsection .altinstr_replacement, "ax"
>>  	.align 2	/* So GAS knows label 661 is suitably aligned */
>>  661:
>>  .endm
> 
> and here we wouldn't need this hunk for alternative_if.

All good remarks. I've reworked that and the changes are a lot more
manageable now. Thanks for the suggestion.

> 
>> --- a/arch/arm64/kernel/alternative.c
>> +++ b/arch/arm64/kernel/alternative.c
>> @@ -110,12 +110,15 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>  	struct alt_instr *alt;
>>  	struct alt_region *region = alt_region;
>>  	__le32 *origptr, *replptr, *updptr;
>> +	alternative_cb_t alt_cb;
>>  
>>  	for (alt = region->begin; alt < region->end; alt++) {
>>  		u32 insn;
>>  		int i, nr_inst;
>>  
>> -		if (!cpus_have_cap(alt->cpufeature))
>> +		/* Use ARM64_NCAPS as an unconditional patch */
>> +		if (alt->cpufeature != ARM64_NCAPS &&
> 
> Nitpick (personal preference): alt->cpufeature < ARM64_NCAPS.
> 
>> +		    !cpus_have_cap(alt->cpufeature))
>>  			continue;
>>  
>>  		BUG_ON(alt->alt_len != alt->orig_len);
>> @@ -124,11 +127,18 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>  
>>  		origptr = ALT_ORIG_PTR(alt);
>>  		replptr = ALT_REPL_PTR(alt);
>> +		alt_cb  = ALT_REPL_PTR(alt);
>>  		updptr = use_linear_alias ? lm_alias(origptr) : origptr;
>>  		nr_inst = alt->alt_len / sizeof(insn);
>>  
>>  		for (i = 0; i < nr_inst; i++) {
>> -			insn = get_alt_insn(alt, origptr + i, replptr + i);
>> +			if (alt->cpufeature == ARM64_NCAPS) {
>> +				insn = le32_to_cpu(updptr[i]);
>> +				insn = alt_cb(alt, i, insn);
> 
> I wonder whether we'd need the origptr + i as well at some point (e.g.
> to generate some relative relocations).

The callback takes the alt_instr structure as a parameter. All we need
is to expose the ALT_ORIG_PTR macro so that the callback can resolve the
patched location to an absolute address.
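
For instance (a sketch only, assuming ALT_ORIG_PTR is made visible to
callback code; branch_target is a made-up placeholder address):

extern unsigned long branch_target;	/* placeholder for illustration */

u32 __init demo_reloc_cb(struct alt_instr *alt, int index, u32 oinsn)
{
	__le32 *origptr = ALT_ORIG_PTR(alt);
	/* Absolute address of the instruction being patched */
	unsigned long pc = (unsigned long)(origptr + index);

	/* e.g. generate a PC-relative branch from that location */
	return aarch64_insn_gen_branch_imm(pc, branch_target,
					   AARCH64_INSN_BRANCH_NOLINK);
}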

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 08/19] arm64: KVM: Dynamically patch the kernel/hyp VA mask
  2017-12-11 14:49   ` Marc Zyngier
@ 2017-12-14 13:17     ` James Morse
  -1 siblings, 0 replies; 66+ messages in thread
From: James Morse @ 2017-12-14 13:17 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: linux-arm-kernel, kvm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

Hi Marc,

On 11/12/17 14:49, Marc Zyngier wrote:
> So far, we're using a complicated sequence of alternatives to
> patch the kernel/hyp VA mask on non-VHE, and NOP out the
> masking altogether when on VHE.
> 
> The newly introduced dynamic patching gives us the opportunity
> to simplify that code by patching a single instruction with
> the correct mask (instead of the mind-bending cumulative masking
> we have at the moment) or even a single NOP on VHE.

(and just a single NOP on VHE?)


> diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
> new file mode 100644
> index 000000000000..5e1643a4e7bf
> --- /dev/null
> +++ b/arch/arm64/kvm/haslr.c

> +u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
> +{
> +	u32 rd, rn, insn;
> +	u64 imm;
> +
> +	/* We only expect a 1 instruction sequence */
> +	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
> +
> +	/* VHE doesn't need any address translation, let's NOP everything */
> +	if (has_vhe())
> +		return aarch64_insn_gen_nop();
> +
> +	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
> +	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
> +
> +	switch (index) {
> +	default:
> +		/* Something went wrong... */
> +		insn = AARCH64_BREAK_FAULT;
> +		break;

Can this happen? You BUG_ON() alt->alt_len not being a single instruction above,
and the loop bound in __apply_alternatives() is calculated in the same way.
If it can, BUG_ON(index != 0) should catch both cases in one go.


> +	case 0:
> +		imm = get_hyp_va_mask();
> +		insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
> +							  AARCH64_INSN_VARIANT_64BIT,
> +							  rn, rd, imm);
> +		break;
> +	}
> +
> +	BUG_ON(insn == AARCH64_BREAK_FAULT);
> +
> +	return insn;
> +}
> 


Thanks,

James

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v2 08/19] arm64: KVM: Dynamically patch the kernel/hyp VA mask
  2017-12-14 13:17     ` James Morse
@ 2017-12-14 13:27       ` Marc Zyngier
  -1 siblings, 0 replies; 66+ messages in thread
From: Marc Zyngier @ 2017-12-14 13:27 UTC (permalink / raw)
  To: James Morse, kvmarm
  Cc: linux-arm-kernel, kvm, Christoffer Dall, Mark Rutland,
	Catalin Marinas, Will Deacon, Steve Capper

On 14/12/17 13:17, James Morse wrote:
> Hi Marc,
> 
> On 11/12/17 14:49, Marc Zyngier wrote:
>> So far, we're using a complicated sequence of alternatives to
>> patch the kernel/hyp VA mask on non-VHE, and NOP out the
>> masking altogether when on VHE.
>>
>> The newly introduced dynamic patching gives us the opportunity
>> to simplify that code by patching a single instruction with
>> the correct mask (instead of the mind-bending cumulative masking
>> we have at the moment) or even a single NOP on VHE.
> 
> (and just a single NOP on VHE?)

Yes, much better. Thanks.

> 
> 
>> diff --git a/arch/arm64/kvm/haslr.c b/arch/arm64/kvm/haslr.c
>> new file mode 100644
>> index 000000000000..5e1643a4e7bf
>> --- /dev/null
>> +++ b/arch/arm64/kvm/haslr.c
> 
>> +u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
>> +{
>> +	u32 rd, rn, insn;
>> +	u64 imm;
>> +
>> +	/* We only expect a 1 instruction sequence */
>> +	BUG_ON((alt->alt_len / sizeof(insn)) != 1);
>> +
>> +	/* VHE doesn't need any address translation, let's NOP everything */
>> +	if (has_vhe())
>> +		return aarch64_insn_gen_nop();
>> +
>> +	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
>> +	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
>> +
>> +	switch (index) {
>> +	default:
>> +		/* Something went wrong... */
>> +		insn = AARCH64_BREAK_FAULT;
>> +		break;
> 
> Can this happen? You bug-on alt->alt_len != 1-instruction above, and the loop in
> __apply_alternatives() is calculated in the same way.
> If it can, BUG_ON(index != 0) should catch both cases in one go.

No, it cannot happen. Yes, I'm paranoid. I guess I should just
initialise insn to AARCH64_BREAK_FAULT and achieve the same level of
paranoia without that default clause.

Oh, and it keeps GCC quiet.
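
Roughly, the result would look like this (a sketch based on the v2 code
quoted above, not necessarily what gets posted next):

u32 __init kvm_update_va_mask(struct alt_instr *alt, int index, u32 oinsn)
{
	u32 rd, rn, insn = AARCH64_BREAK_FAULT;
	u64 imm;

	/* We only expect a single instruction in the sequence */
	BUG_ON((alt->alt_len / sizeof(insn)) != 1);

	/* VHE doesn't need any address translation, let's NOP everything */
	if (has_vhe())
		return aarch64_insn_gen_nop();

	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);

	if (index == 0) {
		imm = get_hyp_va_mask();
		insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
							  AARCH64_INSN_VARIANT_64BIT,
							  rn, rd, imm);
	}

	/* insn is still AARCH64_BREAK_FAULT if anything went wrong */
	BUG_ON(insn == AARCH64_BREAK_FAULT);

	return insn;
}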

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 66+ messages in thread

end of thread, other threads:[~2017-12-14 13:27 UTC | newest]

Thread overview: 66+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-11 14:49 [PATCH v2 00/19] KVM/arm64: Randomise EL2 mappings Marc Zyngier
2017-12-11 14:49 ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 01/19] arm64: asm-offsets: Avoid clashing DMA definitions Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 15:03   ` Russell King - ARM Linux
2017-12-11 15:03     ` Russell King - ARM Linux
2017-12-11 15:22     ` Marc Zyngier
2017-12-11 15:22       ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 02/19] arm64: asm-offsets: Remove unused definitions Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 03/19] arm64: asm-offsets: Remove potential circular dependency Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 04/19] arm64: alternatives: Enforce alignment of struct alt_instr Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 05/19] arm64: alternatives: Add dynamic patching feature Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-13 17:53   ` Catalin Marinas
2017-12-13 17:53     ` Catalin Marinas
2017-12-14 12:22     ` Marc Zyngier
2017-12-14 12:22       ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 06/19] arm64: insn: Add N immediate encoding Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 07/19] arm64: insn: Add encoder for bitwise operations using litterals Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-12 18:32   ` James Morse
2017-12-12 18:32     ` James Morse
2017-12-12 23:40     ` Peter Maydell
2017-12-12 23:40       ` Peter Maydell
2017-12-13 14:32     ` Marc Zyngier
2017-12-13 14:32       ` Marc Zyngier
2017-12-13 15:45       ` James Morse
2017-12-13 15:45         ` James Morse
2017-12-13 15:52         ` Marc Zyngier
2017-12-13 15:52           ` Marc Zyngier
2017-12-14  8:40         ` Marc Zyngier
2017-12-14  8:40           ` Marc Zyngier
2017-12-12 18:56   ` Peter Maydell
2017-12-12 18:56     ` Peter Maydell
2017-12-11 14:49 ` [PATCH v2 08/19] arm64: KVM: Dynamically patch the kernel/hyp VA mask Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-14 13:17   ` James Morse
2017-12-14 13:17     ` James Morse
2017-12-14 13:27     ` Marc Zyngier
2017-12-14 13:27       ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 09/19] arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 10/19] KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 11/19] KVM: arm/arm64: Demote HYP VA range display to being a debug feature Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 12/19] KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 13/19] KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 14/19] KVM: arm/arm64: Move HYP IO VAs to the "idmap" range Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 15/19] arm64; insn: Add encoder for the EXTR instruction Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 16/19] arm64: insn: Allow ADD/SUB (immediate) with LSL #12 Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 17/19] arm64: KVM: Dynamically compute the HYP VA mask Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 18/19] arm64: KVM: Introduce EL2 VA randomisation Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
2017-12-11 14:49 ` [PATCH v2 19/19] arm64: Update the KVM memory map documentation Marc Zyngier
2017-12-11 14:49   ` Marc Zyngier
