* [PATCH v2 00/14] KVM: arm64: SVE cleanups
@ 2019-04-18 16:06 ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:06 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

This series contains some cleanups applicable to the SVE KVM support
patches merged into kvmarm/next.  These arose from Andrew Jones'
review.

Apart from some minor changes to error codes and checking, these are
mostly cosmetic / stylistic changes.

The patches are based on
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
5d8d4af24460 ("arm64: KVM: Fix system register enumeration")

This series in git:
git://linux-arm.org/linux-dm.git sve-kvm-fixes/v2/head
http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm-fixes/v2/head

Tested with QEMU and kvmtool on ThunderX2, and with kvmtool on the Arm
Fast Model (to exercise SVE support).

Some additional manual testing was done to exercise the updated error
behaviours when accessing the SVE registers via ioctl.
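
For reference, the kind of userspace access exercised here looks roughly
like the sketch below.  This is an illustration only, not part of the
series; vcpu_fd and read_sve_zreg() are made-up names, and the errno
values seen depend on vcpu state and on the register ID passed in:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>		/* provides KVM_REG_ARM64_SVE_* on arm64 with this series */

/* Read one 2048-bit slice of SVE register Zn and report any error. */
static int read_sve_zreg(int vcpu_fd, unsigned int n)
{
	__u64 buf[2048 / 64] = { 0 };
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_ZREG(n, 0),	/* slice 0: the only slice defined so far */
		.addr = (__u64)(unsigned long)buf,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg)) {
		int err = errno;	/* e.g. EPERM, ENOENT or EINVAL, depending on vcpu state */

		fprintf(stderr, "Z%u: %s\n", n, strerror(err));
		return -err;
	}

	return 0;
}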

Dave Martin (14):
  arm64/sve: Clarify vq map semantics
  KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up
    SVE
  KVM: arm: Make vcpu finalization stubs into inline functions
  KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to
    WARNs
  KVM: arm64/sve: Clean up UAPI register ID definitions
  KVM: arm64/sve: Miscellaneous tidyups in guest.c
  KVM: arm64/sve: Make register ioctl access errors more consistent
  KVM: arm64/sve: WARN when avoiding divide-by-zero in
    sve_reg_to_region()
  KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing
  KVM: arm64/sve: Explain validity checks in set_sve_vls()
  KVM: arm/arm64: Clean up vcpu finalization function parameter naming
  KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE
  KVM: Clarify KVM_{SET,GET}_ONE_REG error code documentation
  KVM: arm64: Clarify access behaviour for out-of-range SVE register
    slice IDs

 Documentation/virtual/kvm/api.txt | 24 ++++++-----
 arch/arm/include/asm/kvm_host.h   | 13 ++++--
 arch/arm64/include/asm/fpsimd.h   |  4 --
 arch/arm64/include/asm/kvm_host.h |  4 +-
 arch/arm64/include/uapi/asm/kvm.h | 32 ++++++++++----
 arch/arm64/kernel/fpsimd.c        |  7 ++-
 arch/arm64/kvm/guest.c            | 91 +++++++++++++++++++++++----------------
 arch/arm64/kvm/reset.c            |  6 +--
 arch/arm64/kvm/sys_regs.c         |  4 +-
 virt/kvm/arm/arm.c                |  2 +-
 10 files changed, 116 insertions(+), 71 deletions(-)

-- 
2.1.4

* [PATCH v2 01/14] arm64/sve: Clarify vq map semantics
@ 2019-04-18 16:06   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:06 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Currently the meanings of sve_vq_map and the ancillary helpers
__bit_to_vq() and __vq_to_bit() are not clearly explained.

This patch makes the explanatory comment clearer, and removes the
duplicate comment from fpsimd.h.

The WARN_ON() currently present in __bit_to_vq() confuses the
intended use of this helper.  Since these are low-level helpers not
intended for general-purpose use anyway, it is better not to make
guesses about how these functions will be used: rather, this patch
removes the WARN_ON() and relies on callers to use the helpers
sensibly.
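
For illustration only (not part of the patch): the encoding is simply
bit == SVE_VQ_MAX - vq, so bit 0 corresponds to the largest vector
quantum.  A careful caller might use the helpers along these lines
(hypothetical function names, standard kernel bitmap helpers assumed):

static bool vq_available(unsigned int vq)
{
	/* Caller guarantees SVE_VQ_MIN <= vq <= SVE_VQ_MAX. */
	return test_bit(__vq_to_bit(vq), sve_vq_map);
}

static unsigned int max_available_vq(void)
{
	/* Bit 0 encodes the largest vq, so the first set bit is the maximum. */
	return __bit_to_vq(find_first_bit(sve_vq_map, SVE_VQ_MAX));
}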

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/include/asm/fpsimd.h | 4 ----
 arch/arm64/kernel/fpsimd.c      | 7 ++++++-
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index ad6d2e4..df62bbd 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -92,7 +92,6 @@ extern u64 read_zcr_features(void);
 
 extern int __ro_after_init sve_max_vl;
 extern int __ro_after_init sve_max_virtualisable_vl;
-/* Set of available vector lengths, as vq_to_bit(vq): */
 extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX);
 
 /*
@@ -107,9 +106,6 @@ static inline unsigned int __vq_to_bit(unsigned int vq)
 
 static inline unsigned int __bit_to_vq(unsigned int bit)
 {
-	if (WARN_ON(bit >= SVE_VQ_MAX))
-		bit = SVE_VQ_MAX - 1;
-
 	return SVE_VQ_MAX - bit;
 }
 
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 577296b..56afa40 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -135,10 +135,15 @@ static int sve_default_vl = -1;
 /* Maximum supported vector length across all CPUs (initially poisoned) */
 int __ro_after_init sve_max_vl = SVE_VL_MIN;
 int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN;
-/* Set of available vector lengths, as vq_to_bit(vq): */
+
+/*
+ * Set of available vector lengths,
+ * where length vq encoded as bit __vq_to_bit(vq):
+ */
 __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX);
 /* Set of vector lengths present on at least one cpu: */
 static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX);
+
 static void __percpu *efi_sve_state;
 
 #else /* ! CONFIG_ARM64_SVE */
-- 
2.1.4

* [PATCH v2 02/14] KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE
@ 2019-04-18 16:06   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:06 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

The introduction of kvm_arm_init_arch_resources() looks like
premature factoring, since nothing else uses this hook yet and it
is not clear what will use it in the future.

For now, let's not pretend that this is a general thing:

This patch simply renames the function to kvm_arm_init_sve(),
retaining the arm stub version under the new name.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_host.h   | 2 +-
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/kvm/reset.c            | 2 +-
 virt/kvm/arm/arm.c                | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e80cfc1..d956273 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -54,7 +54,7 @@
 
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
-static inline int kvm_arm_init_arch_resources(void) { return 0; }
+static inline int kvm_arm_init_sve(void) { return 0; }
 
 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 int __attribute_const__ kvm_target_cpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9d57cf8..6adf08b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -59,7 +59,7 @@
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
 extern unsigned int kvm_sve_max_vl;
-int kvm_arm_init_arch_resources(void);
+int kvm_arm_init_sve(void);
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f13378d..8847f38 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -110,7 +110,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 
 unsigned int kvm_sve_max_vl;
 
-int kvm_arm_init_arch_resources(void)
+int kvm_arm_init_sve(void)
 {
 	if (system_supports_sve()) {
 		kvm_sve_max_vl = sve_max_virtualisable_vl;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9edbf0f..7039c99c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1682,7 +1682,7 @@ int kvm_arch_init(void *opaque)
 	if (err)
 		return err;
 
-	err = kvm_arm_init_arch_resources();
+	err = kvm_arm_init_sve();
 	if (err)
 		return err;
 
-- 
2.1.4

* [PATCH v2 03/14] KVM: arm: Make vcpu finalization stubs into inline functions
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

The vcpu finalization stubs kvm_arm_vcpu_finalize() and
kvm_arm_vcpu_is_finalized() are currently #defines for ARM, which
limits the type-checking that the compiler can do at build time.

The only reason for them to be #defines was to avoid reliance on
the definition of struct kvm_vcpu, which is not available here due
to circular #include problems.  However, because these are stubs
containing no code, they don't need the definition of struct
kvm_vcpu after all; only a declaration is needed (which is
available already).

So in the interests of cleanliness, this patch converts them to
inline functions.

No functional change.
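
As an aside (illustrative only, hypothetical names): once the stubs are
real functions the compiler sees the parameter types, so a call site
passing something that is not a struct kvm_vcpu pointer is diagnosed
instead of being silently accepted:

struct kvm_vcpu;	/* a forward declaration is all the stub needs */

#define finalize_macro(vcpu, what)	(-EINVAL)	/* accepts any 'vcpu' without complaint */

static inline int finalize_inline(struct kvm_vcpu *vcpu, int what)
{
	return -EINVAL;	/* a non-vcpu pointer here now triggers a compile-time diagnostic */
}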

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_host.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index d956273..7feddac 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -412,7 +412,14 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	return 0;
 }
 
-#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
-#define kvm_arm_vcpu_is_finalized(vcpu) true
+static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
+{
+	return -EINVAL;
+}
+
+static inline bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
 
 #endif /* __ARM_KVM_HOST_H__ */
-- 
2.1.4

* [PATCH v2 04/14] KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to WARNs
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Because of the logic in kvm_arm_sys_reg_{get,set}_reg() and
sve_id_visibility(), we should never call
{get,set}_id_aa64zfr0_el1() for a vcpu where !vcpu_has_sve(vcpu).

To avoid the code giving the impression that it is valid for these
functions to be called in this situation, and to help the compiler
make the right optimisation decisions, this patch adds WARN_ON()
for these cases.

Given the way the logic is spread out, this seems preferable to
dropping the checks altogether.
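
For clarity (restating the pattern from the diff below): WARN_ON(cond)
logs a backtrace whenever cond is true and also evaluates to cond, which
is what lets it gate the existing error return directly:

	if (WARN_ON(!vcpu_has_sve(vcpu)))	/* shout if this "impossible" case ever fires */
		return -ENOENT;			/* ...while keeping the defensive error return */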

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 09e9b06..7046c76 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1144,7 +1144,7 @@ static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u64 val;
 
-	if (!vcpu_has_sve(vcpu))
+	if (WARN_ON(!vcpu_has_sve(vcpu)))
 		return -ENOENT;
 
 	val = guest_id_aa64zfr0_el1(vcpu);
@@ -1159,7 +1159,7 @@ static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
 	int err;
 	u64 val;
 
-	if (!vcpu_has_sve(vcpu))
+	if (WARN_ON(!vcpu_has_sve(vcpu)))
 		return -ENOENT;
 
 	err = reg_from_user(&val, uaddr, id);
-- 
2.1.4

* [PATCH v2 05/14] KVM: arm64/sve: Clean up UAPI register ID definitions
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Currently, the SVE register ID macros are not all defined in the
same way, and advertise the fact that FFR maps onto the nonexistent
predicate register P16.  This is really just for kernel
convenience, and may lead userspace into bad habits.

Instead, this patch masks the ID macro arguments so that
architecturally invalid register numbers will not be passed through
any more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to
define KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG()
and _PREG() macros are defined.

Rather than plugging in magic numbers for the number of Z- and P-
registers and the maximum possible number of register slices, this
patch provides definitions for those too.  Userspace is going to
need them in any case, and it makes sense for them to come from
<uapi/asm/kvm.h>.

sve_reg_to_region() uses convenience constants that are defined in
a different way, and also makes use of the fact that the FFR IDs
are really contiguous with the P15 IDs, so this patch retains the
existing convenience constants in guest.c, supplemented with a
couple of sanity checks to check for consistency with the UAPI
header.
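
To illustrate the resulting UAPI (a sketch only, assuming headers from a
kernel with this series applied): userspace composes register IDs with
the macros below, and out-of-range register numbers or slice indices are
masked rather than passed through into the encoding:

#include <linux/kvm.h>	/* provides the KVM_REG_ARM64_SVE_* macros on arm64 */

void sve_reg_id_examples(void)
{
	__u64 z3  = KVM_REG_ARM64_SVE_ZREG(3, 0);	/* Z3, slice 0 (2048-bit) */
	__u64 p15 = KVM_REG_ARM64_SVE_PREG(15, 0);	/* P15, slice 0 (256-bit) */
	__u64 ffr = KVM_REG_ARM64_SVE_FFR(0);		/* FFR has its own base; no fake "P16" */

	(void)z3; (void)p15; (void)ffr;
}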

Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/include/uapi/asm/kvm.h | 32 +++++++++++++++++++++++---------
 arch/arm64/kvm/guest.c            |  9 +++++++++
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..2a04ef0 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -35,6 +35,7 @@
 #include <linux/psci.h>
 #include <linux/types.h>
 #include <asm/ptrace.h>
+#include <asm/sve_context.h>
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -233,16 +234,29 @@ struct kvm_vcpu_events {
 /* Z- and P-regs occupy blocks at the following offsets within this range: */
 #define KVM_REG_ARM64_SVE_ZREG_BASE	0
 #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE	0x600
 
-#define KVM_REG_ARM64_SVE_ZREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_ZREG_BASE |	\
-					 KVM_REG_SIZE_U2048 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_PREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_PREG_BASE |	\
-					 KVM_REG_SIZE_U256 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
+#define KVM_ARM64_SVE_NUM_ZREGS		__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS		__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES	32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+	 KVM_REG_SIZE_U2048 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
 
 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..2e449e1 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -325,6 +325,15 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 
 	size_t sve_state_size;
 
+	const u64 last_preg_id = KVM_REG_ARM64_SVE_PREG(SVE_NUM_PREGS - 1,
+							SVE_NUM_SLICES - 1);
+
+	/* Verify that the P-regs and FFR really do have contiguous IDs: */
+	BUILD_BUG_ON(KVM_REG_ARM64_SVE_FFR(0) != last_preg_id + 1);
+
+	/* Verify that we match the UAPI header: */
+	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
+
 	/* Only the first slice ever exists, for now: */
 	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
 		return -ENOENT;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 05/14] KVM: arm64/sve: Clean up UAPI register ID definitions
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Currently, the SVE register ID macros are not all defined in the
same way, and advertise the fact that FFR maps onto the nonexistent
predicate register P16.  This is really just for kernel
convenience, and may lead userspace into bad habits.

Instead, this patch masks the ID macro arguments so that
architecturally invalid register numbers will not be passed through
any more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to
define KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG()
and _PREG() macros are defined.

Rather than plugging in magic numbers for the number of Z- and P-
registers and the maximum possible number of register slices, this
patch provides definitions for those too.  Userspace is going to
need them in any case, and it makes sense for them to come from
<uapi/asm/kvm.h>.

sve_reg_to_region() uses convenience constants that are defined in
a different way, and also makes use of the fact that the FFR IDs
are really contiguous with the P15 IDs, so this patch retains the
existing convenience constants in guest.c, supplemented with a
couple of sanity checks to check for consistency with the UAPI
header.

Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/include/uapi/asm/kvm.h | 32 +++++++++++++++++++++++---------
 arch/arm64/kvm/guest.c            |  9 +++++++++
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..2a04ef0 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -35,6 +35,7 @@
 #include <linux/psci.h>
 #include <linux/types.h>
 #include <asm/ptrace.h>
+#include <asm/sve_context.h>
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -233,16 +234,29 @@ struct kvm_vcpu_events {
 /* Z- and P-regs occupy blocks at the following offsets within this range: */
 #define KVM_REG_ARM64_SVE_ZREG_BASE	0
 #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE	0x600
 
-#define KVM_REG_ARM64_SVE_ZREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_ZREG_BASE |	\
-					 KVM_REG_SIZE_U2048 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_PREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_PREG_BASE |	\
-					 KVM_REG_SIZE_U256 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
+#define KVM_ARM64_SVE_NUM_ZREGS		__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS		__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES	32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+	 KVM_REG_SIZE_U2048 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
 
 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..2e449e1 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -325,6 +325,15 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 
 	size_t sve_state_size;
 
+	const u64 last_preg_id = KVM_REG_ARM64_SVE_PREG(SVE_NUM_PREGS - 1,
+							SVE_NUM_SLICES - 1);
+
+	/* Verify that the P-regs and FFR really do have contiguous IDs: */
+	BUILD_BUG_ON(KVM_REG_ARM64_SVE_FFR(0) != last_preg_id + 1);
+
+	/* Verify that we match the UAPI header: */
+	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
+
 	/* Only the first slice ever exists, for now: */
 	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
 		return -ENOENT;
-- 
2.1.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 05/14] KVM: arm64/sve: Clean up UAPI register ID definitions
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Peter Maydell, Okamoto Takayuki, Christoffer Dall,
	Ard Biesheuvel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Zhang Lei, Julien Grall, Alex Bennée,
	linux-arm-kernel

Currently, the SVE register ID macros are not all defined in the
same way, and advertise the fact that FFR maps onto the nonexistent
predicate register P16.  This is really just for kernel
convenience, and may lead userspace into bad habits.

Instead, this patch masks the ID macro arguments so that
architecturally invalid register numbers will not be passed through
any more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to
define KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG()
and _PREG() macros are defined.

Rather than plugging in magic numbers for the number of Z- and P-
registers and the maximum possible number of register slices, this
patch provides definitions for those too.  Userspace is going to
need them in any case, and it makes sense for them to come from
<uapi/asm/kvm.h>.

sve_reg_to_region() uses convenience constants that are defined in
a different way, and also makes use of the fact that the FFR IDs
are really contiguous with the P15 IDs, so this patch retains the
existing convenience constants in guest.c, supplemented with a
couple of sanity checks to check for consistency with the UAPI
header.

Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/include/uapi/asm/kvm.h | 32 +++++++++++++++++++++++---------
 arch/arm64/kvm/guest.c            |  9 +++++++++
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..2a04ef0 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -35,6 +35,7 @@
 #include <linux/psci.h>
 #include <linux/types.h>
 #include <asm/ptrace.h>
+#include <asm/sve_context.h>
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -233,16 +234,29 @@ struct kvm_vcpu_events {
 /* Z- and P-regs occupy blocks at the following offsets within this range: */
 #define KVM_REG_ARM64_SVE_ZREG_BASE	0
 #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE	0x600
 
-#define KVM_REG_ARM64_SVE_ZREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_ZREG_BASE |	\
-					 KVM_REG_SIZE_U2048 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_PREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_PREG_BASE |	\
-					 KVM_REG_SIZE_U256 |		\
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
+#define KVM_ARM64_SVE_NUM_ZREGS		__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS		__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES	32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+	 KVM_REG_SIZE_U2048 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 (((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) |			\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)					\
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+	 KVM_REG_SIZE_U256 |						\
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
 
 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..2e449e1 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -325,6 +325,15 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 
 	size_t sve_state_size;
 
+	const u64 last_preg_id = KVM_REG_ARM64_SVE_PREG(SVE_NUM_PREGS - 1,
+							SVE_NUM_SLICES - 1);
+
+	/* Verify that the P-regs and FFR really do have contiguous IDs: */
+	BUILD_BUG_ON(KVM_REG_ARM64_SVE_FFR(0) != last_preg_id + 1);
+
+	/* Verify that we match the UAPI header: */
+	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
+
 	/* Only the first slice ever exists, for now: */
 	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
 		return -ENOENT;
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 06/14] KVM: arm64/sve: Miscellaneous tidyups in guest.c
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

 * Remove a few redundant blank lines that are stylistically
   inconsistent with code already in guest.c and are just taking up
   space.

 * Delete a couple of pointless empty default cases from switch
   statements whose behaviour is otherwise obvious anyway.

 * Fix some typos and consolidate some redundantly duplicated
   comments.

 * Respell the slice index check in sve_reg_to_region() as "> 0"
   to be more consistent with what is logically being checked here
   (i.e., "is the slice index too large"), even though we don't try
   to cope with multiple slices yet.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/kvm/guest.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2e449e1..f5ff7ae 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -290,9 +290,10 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define KVM_SVE_PREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0))
 
 /*
- * number of register slices required to cover each whole SVE register on vcpu
- * NOTE: If you are tempted to modify this, you must also to rework
- * sve_reg_to_region() to match:
+ * Number of register slices required to cover each whole SVE register.
+ * NOTE: Only the first slice ever exists, for now.
+ * If you are tempted to modify this, you must also rework sve_reg_to_region()
+ * to match:
  */
 #define vcpu_sve_slices(vcpu) 1
 
@@ -334,8 +335,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 	/* Verify that we match the UAPI header: */
 	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
 
-	/* Only the first slice ever exists, for now: */
-	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
+	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
 		return -ENOENT;
 
 	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
@@ -520,7 +520,6 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 
 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
-	/* Only the first slice ever exists, for now */
 	const unsigned int slices = vcpu_sve_slices(vcpu);
 
 	if (!vcpu_has_sve(vcpu))
@@ -536,7 +535,6 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 				u64 __user *uindices)
 {
-	/* Only the first slice ever exists, for now */
 	const unsigned int slices = vcpu_sve_slices(vcpu);
 	u64 reg;
 	unsigned int i, n;
@@ -555,7 +553,6 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 	reg = KVM_REG_ARM64_SVE_VLS;
 	if (put_user(reg, uindices++))
 		return -EFAULT;
-
 	++num_regs;
 
 	for (i = 0; i < slices; i++) {
@@ -563,7 +560,6 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
 			if (put_user(reg, uindices++))
 				return -EFAULT;
-
 			num_regs++;
 		}
 
@@ -571,14 +567,12 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 			reg = KVM_REG_ARM64_SVE_PREG(n, i);
 			if (put_user(reg, uindices++))
 				return -EFAULT;
-
 			num_regs++;
 		}
 
 		reg = KVM_REG_ARM64_SVE_FFR(i);
 		if (put_user(reg, uindices++))
 			return -EFAULT;
-
 		num_regs++;
 	}
 
@@ -645,7 +639,6 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	case KVM_REG_ARM_CORE:	return get_core_reg(vcpu, reg);
 	case KVM_REG_ARM_FW:	return kvm_arm_get_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return get_sve_reg(vcpu, reg);
-	default: break; /* fall through */
 	}
 
 	if (is_timer_reg(reg->id))
@@ -664,7 +657,6 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	case KVM_REG_ARM_CORE:	return set_core_reg(vcpu, reg);
 	case KVM_REG_ARM_FW:	return kvm_arm_set_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return set_sve_reg(vcpu, reg);
-	default: break; /* fall through */
 	}
 
 	if (is_timer_reg(reg->id))
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Currently, the way error codes are generated when processing the
SVE register access ioctls is a bit haphazard.

This patch refactors the code so that the behaviour is more
consistent: now, -EINVAL should be returned only for unrecognised
register IDs or when some other runtime error occurs.  -ENOENT is
returned for register IDs that are recognised, but whose
corresponding register (or slice) does not exist for the vcpu.

To this end, in {get,set}_sve_reg() we now delegate the
vcpu_has_sve() check down into {get,set}_sve_vls() and
sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
picked off first, then sve_reg_to_region() plays the role of
exhaustively validating or rejecting the register ID and (where
accepted) computing the applicable register region as before.

sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
returned prematurely, before checking whether reg->id is in a
recognised range.

-EPERM is now only returned when an attempt is made to access an
actually existing register slice on an unfinalized vcpu.
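
Purely for illustration (not part of this patch), a hedged sketch of
the resulting contract as seen from userspace; vcpu_fd and reg are
placeholders, with reg being a struct kvm_one_reg carrying one of the
KVM_REG_ARM64_SVE_* IDs:

    #include <errno.h>
    #include <stdio.h>

    static void report_sve_reg_error(int err)
    {
            switch (err) {
            case EINVAL:    /* ID not recognised as an SVE register at all */
                    fprintf(stderr, "bad SVE register ID\n");
                    break;
            case ENOENT:    /* recognised ID, but absent on this vcpu */
                    fprintf(stderr, "register/slice not present\n");
                    break;
            case EPERM:     /* register exists, but vcpu not finalized yet */
                    fprintf(stderr, "finalize the vcpu first\n");
                    break;
            default:
                    fprintf(stderr, "unexpected error %d\n", err);
            }
    }

    /* usage: if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
     *                report_sve_reg_error(errno); */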

Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/kvm/guest.c | 52 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index f5ff7ae..e45a042 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -221,6 +221,9 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	unsigned int max_vq, vq;
 	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
 
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
 	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
 		return -EINVAL;
 
@@ -242,6 +245,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	unsigned int max_vq, vq;
 	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
 
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
 	if (kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM; /* too late! */
 
@@ -304,7 +310,10 @@ struct sve_state_reg_region {
 	unsigned int upad;	/* extra trailing padding in user memory */
 };
 
-/* Get sanitised bounds for user/kernel SVE register copy */
+/*
+ * Validate SVE register ID and get sanitised bounds for user/kernel SVE
+ * register copy
+ */
 static int sve_reg_to_region(struct sve_state_reg_region *region,
 			     struct kvm_vcpu *vcpu,
 			     const struct kvm_one_reg *reg)
@@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 	/* Verify that we match the UAPI header: */
 	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
 
-	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
-		return -ENOENT;
-
-	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
-
 	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
 
 	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
+		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
+			return -ENOENT;
+
+		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
+
 		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
 				SVE_SIG_REGS_OFFSET;
 		reqlen = KVM_SVE_ZREG_SIZE;
 		maxlen = SVE_SIG_ZREG_SIZE(vq);
 	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
+		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
+			return -ENOENT;
+
+		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
+
 		reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
 				SVE_SIG_REGS_OFFSET;
 		reqlen = KVM_SVE_PREG_SIZE;
 		maxlen = SVE_SIG_PREG_SIZE(vq);
 	} else {
-		return -ENOENT;
+		return -EINVAL;
 	}
 
 	sve_state_size = vcpu_sve_state_size(vcpu);
@@ -369,24 +383,22 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 
 static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
+	int ret;
 	struct sve_state_reg_region region;
 	char __user *uptr = (char __user *)reg->addr;
 
-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
 	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
 	if (reg->id == KVM_REG_ARM64_SVE_VLS)
 		return get_sve_vls(vcpu, reg);
 
-	/* Otherwise, reg is an architectural SVE register... */
+	/* Try to interpret reg ID as an architectural SVE register... */
+	ret = sve_reg_to_region(&region, vcpu, reg);
+	if (ret)
+		return ret;
 
 	if (!kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM;
 
-	if (sve_reg_to_region(&region, vcpu, reg))
-		return -ENOENT;
-
 	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
 			 region.klen) ||
 	    clear_user(uptr + region.klen, region.upad))
@@ -397,24 +409,22 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 
 static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
+	int ret;
 	struct sve_state_reg_region region;
 	const char __user *uptr = (const char __user *)reg->addr;
 
-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
 	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
 	if (reg->id == KVM_REG_ARM64_SVE_VLS)
 		return set_sve_vls(vcpu, reg);
 
-	/* Otherwise, reg is an architectural SVE register... */
+	/* Try to interpret reg ID as an architectural SVE register... */
+	ret = sve_reg_to_region(&region, vcpu, reg);
+	if (ret)
+		return ret;
 
 	if (!kvm_arm_vcpu_sve_finalized(vcpu))
 		return -EPERM;
 
-	if (sve_reg_to_region(&region, vcpu, reg))
-		return -ENOENT;
-
 	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
 			   region.klen))
 		return -EFAULT;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 08/14] KVM: arm64/sve: WARN when avoiding divide-by-zero in sve_reg_to_region()
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

sve_reg_to_region() currently passes the result of
vcpu_sve_state_size() to array_index_nospec(), effectively
leading to a divide / modulo operation.

Currently the code bails out and returns -EINVAL if
vcpu_sve_state_size() turns out to be zero, in order to avoid going
ahead and attempting to divide by zero.  This is reasonable, but it
should only happen if the kernel contains some other bug that
allowed this code to be reached without the vcpu having been
properly initialised.

To make it clear that this is a defence against bugs rather than
something that the user should be able to trigger, this patch marks
the check with WARN_ON().
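
As a sketch of the idiom only (not a literal quote of the resulting
code): a condition that userspace can legitimately trigger gets a
plain error return, while a condition that can only arise from a
kernel bug is additionally flagged with WARN_ON() so that it shows up
in the kernel log if it ever fires:

    if (!vcpu_has_sve(vcpu))        /* user-triggerable: just say no */
            return -ENOENT;

    sve_state_size = vcpu_sve_state_size(vcpu);
    if (WARN_ON(!sve_state_size))   /* "can't happen" without a bug */
            return -EINVAL;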

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/kvm/guest.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index e45a042..73044e3 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -371,7 +371,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
 	}
 
 	sve_state_size = vcpu_sve_state_size(vcpu);
-	if (!sve_state_size)
+	if (WARN_ON(!sve_state_size))
 		return -EINVAL;
 
 	region->koffset = array_index_nospec(reqoffset, sve_state_size);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 09/14] KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

A complicated DIV_ROUND_UP() expression is currently written out
explicitly in multiple places in order to specify the size of the
bitmap exchanged with userspace to represent the value of the
KVM_REG_ARM64_SVE_VLS pseudo-register.

Userspace currently has no direct way to work this out either: for
documentation purposes, the size is just quoted as 8 u64s.

To make this more intuitive, this patch replaces these with a
single define, which is also exported to userspace as
KVM_ARM64_SVE_VLS_WORDS.

Since the number of words in a bitmap is just the index of the last
word used + 1, this patch expresses the bound that way instead.
This should make it clearer what is being expressed.

For userspace convenience, the minimum and maximum possible vector
lengths relevant to the KVM ABI are exposed to UAPI as
KVM_ARM64_SVE_VQ_MIN, KVM_ARM64_SVE_VQ_MAX.  Since the only direct
use for these at present is manipulation of KVM_REG_ARM64_SVE_VLS,
no corresponding _VL_ macros are defined.  They could be added
later if a need arises.

Since use of DIV_ROUND_UP() was the only reason for including
<linux/kernel.h> in guest.c, this patch also removes that #include.
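
As an aside (not part of this patch), a hedged sketch of how userspace
might consume the new defines to enumerate the vector lengths offered
by a vcpu; vcpu_fd is a placeholder:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/kvm.h>

    static void print_supported_vls(int vcpu_fd)
    {
            uint64_t vqs[KVM_ARM64_SVE_VLS_WORDS];
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM64_SVE_VLS,
                    .addr = (uint64_t)(uintptr_t)vqs,
            };
            unsigned int vq, bit;

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                    return;

            for (vq = KVM_ARM64_SVE_VQ_MIN; vq <= KVM_ARM64_SVE_VQ_MAX; ++vq) {
                    bit = vq - KVM_ARM64_SVE_VQ_MIN;
                    if (vqs[bit / 64] & ((uint64_t)1 << (bit % 64)))
                            printf("vector length %u bytes supported\n", vq * 16);
            }
    }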

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 10 ++++++----
 arch/arm64/include/uapi/asm/kvm.h |  5 +++++
 arch/arm64/kvm/guest.c            |  7 +++----
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 68509de..03df379 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2171,13 +2171,15 @@ and KVM_ARM_VCPU_FINALIZE for more information about this procedure.
 KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector
 lengths supported by the vcpu to be discovered and configured by
 userspace.  When transferred to or from user memory via KVM_GET_ONE_REG
-or KVM_SET_ONE_REG, the value of this register is of type __u64[8], and
-encodes the set of vector lengths as follows:
+or KVM_SET_ONE_REG, the value of this register is of type
+__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as
+follows:
 
-__u64 vector_lengths[8];
+__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];
 
 if (vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX &&
-    ((vector_lengths[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1))
+    ((vector_lengths[(vq - KVM_ARM64_SVE_VQ_MIN) / 64] >>
+		((vq - KVM_ARM64_SVE_VQ_MIN) % 64)) & 1))
 	/* Vector length vq * 16 bytes supported */
 else
 	/* Vector length vq * 16 bytes not supported */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 2a04ef0..edd2db8 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -258,9 +258,14 @@ struct kvm_vcpu_events {
 	 KVM_REG_SIZE_U256 |						\
 	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
 
+#define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
+#define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
+
 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
 					 KVM_REG_SIZE_U512 | 0xffff)
+#define KVM_ARM64_SVE_VLS_WORDS	\
+	((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 73044e3..5bb909c 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -23,7 +23,6 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/nospec.h>
-#include <linux/kernel.h>
 #include <linux/kvm_host.h>
 #include <linux/module.h>
 #include <linux/stddef.h>
@@ -210,7 +209,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
 
 static bool vq_present(
-	const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)],
+	const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
 	unsigned int vq)
 {
 	return (*vqs)[vq_word(vq)] & vq_mask(vq);
@@ -219,7 +218,7 @@ static bool vq_present(
 static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
 	unsigned int max_vq, vq;
-	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
 
 	if (!vcpu_has_sve(vcpu))
 		return -ENOENT;
@@ -243,7 +242,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
 	unsigned int max_vq, vq;
-	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
 
 	if (!vcpu_has_sve(vcpu))
 		return -ENOENT;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v2 10/14] KVM: arm64/sve: Explain validity checks in set_sve_vls()
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Correct virtualization of SVE relies on code in set_sve_vls()
that verifies consistency between the set of vector lengths
requested by userspace and the set of vector lengths available
on the host.

However, the purpose of this code is not obvious, and not likely to
be apparent at all to people who do not have detailed knowledge of
the SVE system-level architecture.

This patch adds a suitable comment to explain what these checks are
for.

No functional change.
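
For context only, a hedged userspace sketch (not part of this patch):
one way to construct a request that satisfies these checks is to read
back the default KVM_REG_ARM64_SVE_VLS value, which reflects what the
host supports, and simply clear every bit above the desired maximum;
vcpu_fd is a placeholder for an SVE-enabled, not-yet-finalized vcpu,
and max_vq is assumed to be at least KVM_ARM64_SVE_VQ_MIN:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/kvm.h>

    static int limit_sve_vls(int vcpu_fd, unsigned int max_vq)
    {
            uint64_t vqs[KVM_ARM64_SVE_VLS_WORDS];
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM64_SVE_VLS,
                    .addr = (uint64_t)(uintptr_t)vqs,
            };
            unsigned int vq, bit;

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                    return -1;

            for (vq = max_vq + 1; vq <= KVM_ARM64_SVE_VQ_MAX; ++vq) {
                    bit = vq - KVM_ARM64_SVE_VQ_MIN;
                    vqs[bit / 64] &= ~((uint64_t)1 << (bit % 64));
            }

            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }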

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm64/kvm/guest.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5bb909c..3ae2f82 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -264,6 +264,13 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
 		return -EINVAL;
 
+	/*
+	 * Vector lengths supported by the host can't currently be
+	 * hidden from the guest individually: instead we can only set a
+	 * maximum via ZCR_EL2.LEN.  So, make sure the available vector
+	 * lengths match the set requested exactly up to the requested
+	 * maximum:
+	 */
 	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
 		if (vq_present(&vqs, vq) != sve_vq_available(vq))
 			return -EINVAL;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread


* [PATCH v2 11/14] KVM: arm/arm64: Clean up vcpu finalization function parameter naming
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Currently, the internal vcpu finalization functions use a different
name ("what") for the feature parameter than the name ("feature")
used in the documentation.

To avoid future confusion, this patch converts everything to use
the name "feature" consistently.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_host.h   | 2 +-
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/kvm/reset.c            | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 7feddac..fe77543 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -412,7 +412,7 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	return 0;
 }
 
-static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
+static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
 {
 	return -EINVAL;
 }
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6adf08b..7a096fd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -627,7 +627,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
-int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what);
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 8847f38..3402543 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -186,9 +186,9 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
 {
-	switch (what) {
+	switch (feature) {
 	case KVM_ARM_VCPU_SVE:
 		if (!vcpu_has_sve(vcpu))
 			return -EINVAL;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 63+ messages in thread


* [PATCH v2 12/14] KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

Userspace is only supposed to use KVM_ARM_VCPU_FINALIZE when there
is some vcpu feature that can actually be finalized.

This means that documenting KVM_ARM_VCPU_FINALIZE as available or
not depending on the capabilities present is not helpful.

This patch amends the documentation to describe availability in
terms of which capability is required for each finalizable feature
instead.

In any case, userspace sees the same error (EINVAL) regardless of
whether the given feature is not present or KVM_ARM_VCPU_FINALIZE
is not implemented at all.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 03df379..5519df0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -3999,17 +3999,16 @@ userspace should not expect to get any particular value there.
 
 4.119 KVM_ARM_VCPU_FINALIZE
 
-Capability: KVM_CAP_ARM_SVE
 Architectures: arm, arm64
 Type: vcpu ioctl
 Parameters: int feature (in)
 Returns: 0 on success, -1 on error
 Errors:
   EPERM:     feature not enabled, needs configuration, or already finalized
-  EINVAL:    unknown feature
+  EINVAL:    feature unknown or not present
 
 Recognised values for feature:
-  arm64      KVM_ARM_VCPU_SVE
+  arm64      KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
 
 Finalizes the configuration of the specified vcpu feature.
 
-- 
2.1.4
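
For illustration, a minimal userspace sketch of the sequence this section
documents, assuming vcpu_fd is a vcpu whose KVM_ARM_VCPU_INIT enabled
KVM_ARM_VCPU_SVE and that KVM_CAP_ARM_SVE has already been checked:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int finalize_sve(int vcpu_fd)
	{
		int feature = KVM_ARM_VCPU_SVE;

		/*
		 * Any KVM_REG_ARM64_SVE_VLS write must happen before this
		 * point; SVE register access and KVM_RUN come after it.
		 */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
	}

With the wording above, a failure with errno == EINVAL means the feature is
unknown or not present for this vcpu, while EPERM means the feature cannot
be finalized in the vcpu's current state (for example, it was already
finalized).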

^ permalink raw reply related	[flat|nested] 63+ messages in thread


* [PATCH v2 13/14] KVM: Clarify KVM_{SET, GET}_ONE_REG error code documentation
  2019-04-18 16:06 ` Dave Martin
@ 2019-04-18 16:07   ` Dave Martin
  -1 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

The current error code documentation for KVM_GET_ONE_REG and
KVM_SET_ONE_REG could be read as implying that all architectures
implement these error codes, or that KVM guarantees which error
code is returned in a particular situation.

Because this is not really the case, this patch waters down the
documentation explicitly to remove such guarantees.

EPERM is marked as arm64-specific, since for now arm64 really is
the only architecture that yields this error code for the
finalization-required case.  Keeping this as a distinct error code
is useful however for debugging due to the statefulness of the API
in this instance.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Fixes: 395f562f2b4c ("KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG")
Fixes: 50036ad06b7f ("KVM: arm64/sve: Document KVM API extensions for SVE")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 5519df0..818ac97 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1873,8 +1873,10 @@ Parameters: struct kvm_one_reg (in)
 Returns: 0 on success, negative value on failure
 Errors:
   ENOENT:   no such register
-  EPERM:    register access forbidden for architecture-dependent reasons
-  EINVAL:   other errors, such as bad size encoding for a known register
+  EINVAL:   invalid register ID, or no such register
+  EPERM:    (arm64) register access not allowed before vcpu finalization
+(These error codes are indicative only: do not rely on a specific error
+code being returned in a specific situation.)
 
 struct kvm_one_reg {
        __u64 id;
@@ -2260,10 +2262,12 @@ Architectures: all
 Type: vcpu ioctl
 Parameters: struct kvm_one_reg (in and out)
 Returns: 0 on success, negative value on failure
-Errors:
+Errors include:
   ENOENT:   no such register
-  EPERM:    register access forbidden for architecture-dependent reasons
-  EINVAL:   other errors, such as bad size encoding for a known register
+  EINVAL:   invalid register ID, or no such register
+  EPERM:    (arm64) register access not allowed before vcpu finalization
+(These error codes are indicative only: do not rely on a specific error
+code being returned in a specific situation.)
 
 This ioctl allows to receive the value of a single register implemented
 in a vcpu. The register to read is indicated by the "id" field of the
-- 
2.1.4
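
As a usage illustration, a minimal sketch of a register read that follows
the weakened guarantee, for a plain 64-bit register (larger registers such
as the SVE Z/P slices need a correspondingly larger buffer):

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int get_u64_reg(int vcpu_fd, __u64 id, __u64 *val)
	{
		struct kvm_one_reg reg = {
			.id   = id,
			.addr = (__u64)(unsigned long)val,
		};

		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) == 0)
			return 0;

		/*
		 * Per the updated text, treat the errno as diagnostic only:
		 * report it, but don't branch on ENOENT vs EINVAL vs EPERM.
		 */
		return -errno;
	}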


^ permalink raw reply related	[flat|nested] 63+ messages in thread


* [PATCH v2 14/14] KVM: arm64: Clarify access behaviour for out-of-range SVE register slice IDs
@ 2019-04-18 16:07   ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-18 16:07 UTC (permalink / raw)
  To: kvmarm
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
	linux-arm-kernel

The existing documentation for which SVE register slice IDs are
considered out-of-range, and what happens when userspace tries to
access them, is cryptic.

This patch rewords the text with the aim of making it a bit easier to
understand.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
---
 Documentation/virtual/kvm/api.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 818ac97..e410a9f 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2159,8 +2159,9 @@ arm64 SVE registers have the following bit patterns:
   0x6050 0000 0015 060 <slice:5>        FFR bits[256*slice + 255 : 256*slice]
   0x6060 0000 0015 ffff                 KVM_REG_ARM64_SVE_VLS pseudo-register
 
-Access to slices beyond the maximum vector length configured for the
-vcpu (i.e., where 16 * slice >= max_vq (**)) will fail with ENOENT.
+Access to register IDs where 2048 * slice >= 128 * max_vq will fail with
+ENOENT.  max_vq is the vcpu's maximum supported vector length in 128-bit
+quadwords: see (**) below.
 
 These registers are only accessible on vcpus for which SVE is enabled.
 See KVM_ARM_VCPU_INIT for details.
-- 
2.1.4
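
A worked instance of the reworded rule, for a hypothetical vcpu configured
with the current architectural maximum of max_vq = 16 (a 2048-bit vector
length):

	slice 0:  2048 * 0 =    0 <  128 * 16 = 2048  ->  accessible
	slice 1:  2048 * 1 = 2048 >= 128 * 16 = 2048  ->  ENOENT

So with vector lengths up to 2048 bits only slice 0 of each Z, P and FFR
register is usable; the non-zero slice IDs leave room for larger vector
lengths in the future.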

^ permalink raw reply related	[flat|nested] 63+ messages in thread


* Re: [PATCH v2 00/14] KVM: arm64: SVE cleanups
  2019-04-18 16:06 ` Dave Martin
@ 2019-04-24  9:21   ` Alex Bennée
  -1 siblings, 0 replies; 63+ messages in thread
From: Alex Bennée @ 2019-04-24  9:21 UTC (permalink / raw)
  To: Dave Martin
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel


Dave Martin <Dave.Martin@arm.com> writes:

> This series contains some cleanups applicable to the SVE KVM support
> patches merged into kvmarm/next.  These arose from Andrew Jones'
> review.

Does this mean these won't get merged into the original series before
the final merging upstream?

<snip>

--
Alex Bennée

^ permalink raw reply	[flat|nested] 63+ messages in thread


* Re: [PATCH v2 00/14] KVM: arm64: SVE cleanups
  2019-04-24  9:21   ` Alex Bennée
@ 2019-04-24  9:38     ` Marc Zyngier
  -1 siblings, 0 replies; 63+ messages in thread
From: Marc Zyngier @ 2019-04-24  9:38 UTC (permalink / raw)
  To: Alex Bennée, Dave Martin
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel

On 24/04/2019 10:21, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
>> This series contains some cleanups applicable to the SVE KVM support
>> patches merged into kvmarm/next.  These arose from Andrew Jones'
>> review.
> 
> Does this mean these won't get merged into the original series before
> the final merging upstream?

I've queued these on top of the existing SVE patches. I'm quite keen on
not rebasing -next at this stage.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 63+ messages in thread


* Re: [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent
  2019-04-18 16:07   ` Dave Martin
@ 2019-04-25 12:30     ` Alex Bennée
  -1 siblings, 0 replies; 63+ messages in thread
From: Alex Bennée @ 2019-04-25 12:30 UTC (permalink / raw)
  To: Dave Martin
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel


Dave Martin <Dave.Martin@arm.com> writes:

> Currently, the way error codes are generated when processing the
> SVE register access ioctls is a bit haphazard.
>
> This patch refactors the code so that the behaviour is more
> consistent: now, -EINVAL should be returned only for unrecognised
> register IDs or when some other runtime error occurs.  -ENOENT is
> returned for register IDs that are recognised, but whose
> corresponding register (or slice) does not exist for the vcpu.
>
> To this end, in {get,set}_sve_reg() we now delegate the
> vcpu_has_sve() check down into {get,set}_sve_vls() and
> sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
> picked off first, then sve_reg_to_region() plays the role of
> exhaustively validating or rejecting the register ID and (where
> accepted) computing the applicable register region as before.
>
> sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
> returned prematurely, before checking whether reg->id is in a
> recognised range.
>
> -EPERM is now only returned when an attempt is made to access an
> actually existing register slice on an unfinalized vcpu.
>
> Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
> Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
> Suggested-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Reviewed-by: Andrew Jones <drjones@redhat.com>
> ---
>  arch/arm64/kvm/guest.c | 52 ++++++++++++++++++++++++++++++--------------------
>  1 file changed, 31 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index f5ff7ae..e45a042 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -221,6 +221,9 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	unsigned int max_vq, vq;
>  	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
>
> +	if (!vcpu_has_sve(vcpu))
> +		return -ENOENT;
> +
>  	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
>  		return -EINVAL;
>
> @@ -242,6 +245,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	unsigned int max_vq, vq;
>  	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
>
> +	if (!vcpu_has_sve(vcpu))
> +		return -ENOENT;
> +
>  	if (kvm_arm_vcpu_sve_finalized(vcpu))
>  		return -EPERM; /* too late! */
>
> @@ -304,7 +310,10 @@ struct sve_state_reg_region {
>  	unsigned int upad;	/* extra trailing padding in user memory */
>  };
>
> -/* Get sanitised bounds for user/kernel SVE register copy */
> +/*
> + * Validate SVE register ID and get sanitised bounds for user/kernel SVE
> + * register copy
> + */
>  static int sve_reg_to_region(struct sve_state_reg_region *region,
>  			     struct kvm_vcpu *vcpu,
>  			     const struct kvm_one_reg *reg)
> @@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
>  	/* Verify that we match the UAPI header: */
>  	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
>
> -	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
> -		return -ENOENT;
> -
> -	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> -
>  	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
>
>  	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
> +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> +			return -ENOENT;
> +
> +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> +
>  		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
>  				SVE_SIG_REGS_OFFSET;
>  		reqlen = KVM_SVE_ZREG_SIZE;
>  		maxlen = SVE_SIG_ZREG_SIZE(vq);
>  	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
> +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> +			return -ENOENT;
> +
> +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> +

I suppose you could argue for a:

	if (reg->id >= zreg_id_min && reg->id <= preg_id_max) {
		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
			return -ENOENT;

		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);

                if (reg->id <= zreg_id_max) {
			reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
				SVE_SIG_REGS_OFFSET;
			reqlen = KVM_SVE_ZREG_SIZE;
			maxlen = SVE_SIG_ZREG_SIZE(vq);
                } else {
			reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
				SVE_SIG_REGS_OFFSET;
			reqlen = KVM_SVE_PREG_SIZE;
			maxlen = SVE_SIG_PREG_SIZE(vq);
		}
	} else {
		return -EINVAL;
	}

but only for minimal DRY reasons.

>  		reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
>  				SVE_SIG_REGS_OFFSET;
>  		reqlen = KVM_SVE_PREG_SIZE;
>  		maxlen = SVE_SIG_PREG_SIZE(vq);
>  	} else {
> -		return -ENOENT;
> +		return -EINVAL;
>  	}
>
>  	sve_state_size = vcpu_sve_state_size(vcpu);
> @@ -369,24 +383,22 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
>
>  static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
> +	int ret;
>  	struct sve_state_reg_region region;
>  	char __user *uptr = (char __user *)reg->addr;
>
> -	if (!vcpu_has_sve(vcpu))
> -		return -ENOENT;
> -
>  	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
>  	if (reg->id == KVM_REG_ARM64_SVE_VLS)
>  		return get_sve_vls(vcpu, reg);
>
> -	/* Otherwise, reg is an architectural SVE register... */
> +	/* Try to interpret reg ID as an architectural SVE register... */
> +	ret = sve_reg_to_region(&region, vcpu, reg);
> +	if (ret)
> +		return ret;
>
>  	if (!kvm_arm_vcpu_sve_finalized(vcpu))
>  		return -EPERM;
>
> -	if (sve_reg_to_region(&region, vcpu, reg))
> -		return -ENOENT;
> -
>  	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
>  			 region.klen) ||
>  	    clear_user(uptr + region.klen, region.upad))
> @@ -397,24 +409,22 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>
>  static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
> +	int ret;
>  	struct sve_state_reg_region region;
>  	const char __user *uptr = (const char __user *)reg->addr;
>
> -	if (!vcpu_has_sve(vcpu))
> -		return -ENOENT;
> -
>  	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
>  	if (reg->id == KVM_REG_ARM64_SVE_VLS)
>  		return set_sve_vls(vcpu, reg);
>
> -	/* Otherwise, reg is an architectural SVE register... */
> +	/* Try to interpret reg ID as an architectural SVE register... */
> +	ret = sve_reg_to_region(&region, vcpu, reg);
> +	if (ret)
> +		return ret;
>
>  	if (!kvm_arm_vcpu_sve_finalized(vcpu))
>  		return -EPERM;
>
> -	if (sve_reg_to_region(&region, vcpu, reg))
> -		return -ENOENT;
> -
>  	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
>  			   region.klen))
>  		return -EFAULT;


--
Alex Bennée

^ permalink raw reply	[flat|nested] 63+ messages in thread
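
In short, the error convention the patch converges on is:

	unrecognised register ID                            -> EINVAL
	recognised ID, but the register or slice does not
	exist for this vcpu (e.g. SVE disabled, slice > 0)  -> ENOENT
	register exists, but accessed before finalization   -> EPERM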


* Re: [PATCH v2 00/14] KVM: arm64: SVE cleanups
  2019-04-18 16:06 ` Dave Martin
@ 2019-04-25 12:35   ` Alex Bennée
  -1 siblings, 0 replies; 63+ messages in thread
From: Alex Bennée @ 2019-04-25 12:35 UTC (permalink / raw)
  To: Dave Martin
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel


Dave Martin <Dave.Martin@arm.com> writes:

> This series contains some cleanups applicable to the SVE KVM support
> patches merged into kvmarm/next.  These arose from Andrew Jones'
> review.
>
> Apart from some minor changes to error codes and checking, these are
> mostly cosmetic / stylistic changes only.
>
> The patches are based on
> git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
> 5d8d4af24460 ("arm64: KVM: Fix system register enumeration")
>
> This series in git:
> git://linux-arm.org/linux-dm.git sve-kvm-fixes/v2/head
> http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm-fixes/v2/head
>
> Tested with qemu and kvmtool on ThunderX2, and with kvmtool on the Arm
> Fast model (to exercise SVE support).

These all look good to me:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée

^ permalink raw reply	[flat|nested] 63+ messages in thread


* Re: [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent
@ 2019-04-25 13:04       ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-25 13:04 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel

On Thu, Apr 25, 2019 at 01:30:29PM +0100, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
> > Currently, the way error codes are generated when processing the
> > SVE register access ioctls is a bit haphazard.
> >
> > This patch refactors the code so that the behaviour is more
> > consistent: now, -EINVAL should be returned only for unrecognised
> > register IDs or when some other runtime error occurs.  -ENOENT is
> > returned for register IDs that are recognised, but whose
> > corresponding register (or slice) does not exist for the vcpu.
> >
> > To this end, in {get,set}_sve_reg() we now delegate the
> > vcpu_has_sve() check down into {get,set}_sve_vls() and
> > sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
> > picked off first, then sve_reg_to_region() plays the role of
> > exhaustively validating or rejecting the register ID and (where
> > accepted) computing the applicable register region as before.
> >
> > sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
> > returned prematurely, before checking whether reg->id is in a
> > recognised range.
> >
> > -EPERM is now only returned when an attempt is made to access an
> > actually existing register slice on an unfinalized vcpu.
> >
> > Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
> > Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
> > Suggested-by: Andrew Jones <drjones@redhat.com>
> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> > Reviewed-by: Andrew Jones <drjones@redhat.com>

[...]

> > @@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
> >  	/* Verify that we match the UAPI header: */
> >  	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
> >
> > -	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
> > -		return -ENOENT;
> > -
> > -	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> > -
> >  	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
> >
> >  	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> > +			return -ENOENT;
> > +
> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> > +
> >  		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> >  				SVE_SIG_REGS_OFFSET;
> >  		reqlen = KVM_SVE_ZREG_SIZE;
> >  		maxlen = SVE_SIG_ZREG_SIZE(vq);
> >  	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> > +			return -ENOENT;
> > +
> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> > +
> 
> I suppose you could argue for a:
> 
> 	if (reg->id >= zreg_id_min && reg->id <= preg_id_max) {
> 		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> 			return -ENOENT;
> 
> 		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> 
>                 if (reg->id <= zreg_id_max) {
> 			reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> 				SVE_SIG_REGS_OFFSET;
> 			reqlen = KVM_SVE_ZREG_SIZE;
> 			maxlen = SVE_SIG_ZREG_SIZE(vq);
>                 } else {
> 			reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
> 				SVE_SIG_REGS_OFFSET;
> 			reqlen = KVM_SVE_PREG_SIZE;
> 			maxlen = SVE_SIG_PREG_SIZE(vq);
> 		}
> 	} else {
> 		return -EINVAL;
> 	}
> 
> but only for minimal DRY reasons.

Agreed, but that bakes in another assumption: that the ZREG and PREG ID
ranges are contiguous.

I preferred to keep the number of assumptions down.

Although the resulting code wasn't ideal, the actual amount of
duplication that I ended up with here seemed low enough to be
acceptable (though opinions can differ on that).

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 63+ messages in thread
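
To make the error-code convention from the commit message above concrete,
here is a minimal userspace sketch of how a VMM might interpret
KVM_GET_ONE_REG failures on an SVE register.  The helper name and the
error reporting are illustrative assumptions; only the errno meanings
follow the convention discussed in the thread.

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * Minimal sketch: read one SVE register slice and map the resulting
   * errno back onto the convention described above.  "id" is assumed to
   * be a KVM_REG_ARM64_SVE_* register ID built by the caller, and "addr"
   * must point at a buffer large enough for that register.
   */
  static int vmm_get_sve_reg(int vcpu_fd, __u64 id, void *addr)
  {
  	struct kvm_one_reg reg = {
  		.id	= id,
  		.addr	= (__u64)(unsigned long)addr,
  	};
  	int err;

  	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) == 0)
  		return 0;

  	err = errno;
  	switch (err) {
  	case ENOENT:	/* ID recognised, but no such register/slice on this vcpu */
  		fprintf(stderr, "SVE register not present for this vcpu\n");
  		break;
  	case EPERM:	/* register exists, but the vcpu is not finalized yet */
  		fprintf(stderr, "vcpu not finalized (see KVM_ARM_VCPU_FINALIZE)\n");
  		break;
  	default:	/* EINVAL: unrecognised ID, or some other runtime error */
  		fprintf(stderr, "KVM_GET_ONE_REG: %s\n", strerror(err));
  		break;
  	}
  	return -err;
  }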

* Re: [PATCH v2 00/14] KVM: arm64: SVE cleanups
@ 2019-04-25 13:05     ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-25 13:05 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel

On Thu, Apr 25, 2019 at 01:35:56PM +0100, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
> > This series contains some cleanups applicable to the SVE KVM support
> > patches merged into kvmarm/next.  These arose from Andrew Jones'
> > review.
> >
> > Apart from some minor changes to error codes and checking, these are
> > mostly cosmetic / stylistic changes only.
> >
> > The patches are based on
> > git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
> > 5d8d4af24460 ("arm64: KVM: Fix system register enumeration")
> >
> > This series in git:
> > git://linux-arm.org/linux-dm.git sve-kvm-fixes/v2/head
> > http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm-fixes/v2/head
> >
> > Tested with qemu and kvmtool on ThunderX2, and with kvmtool on the Arm
> > Fast model (to exercise SVE support).
> 
> These all look good to me:
> 
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thanks for the review!

Cheers
---Dave

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent
@ 2019-04-25 15:04         ` Alex Bennée
  0 siblings, 0 replies; 63+ messages in thread
From: Alex Bennée @ 2019-04-25 15:04 UTC (permalink / raw)
  To: Dave Martin
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel


Dave Martin <Dave.Martin@arm.com> writes:

> On Thu, Apr 25, 2019 at 01:30:29PM +0100, Alex Bennée wrote:
>>
>> Dave Martin <Dave.Martin@arm.com> writes:
>>
>> > Currently, the way error codes are generated when processing the
>> > SVE register access ioctls is a bit haphazard.
>> >
>> > This patch refactors the code so that the behaviour is more
>> > consistent: now, -EINVAL should be returned only for unrecognised
>> > register IDs or when some other runtime error occurs.  -ENOENT is
>> > returned for register IDs that are recognised, but whose
>> > corresponding register (or slice) does not exist for the vcpu.
>> >
>> > To this end, in {get,set}_sve_reg() we now delegate the
>> > vcpu_has_sve() check down into {get,set}_sve_vls() and
>> > sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
>> > picked off first, then sve_reg_to_region() plays the role of
>> > exhaustively validating or rejecting the register ID and (where
>> > accepted) computing the applicable register region as before.
>> >
>> > sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
>> > returned prematurely, before checking whether reg->id is in a
>> > recognised range.
>> >
>> > -EPERM is now only returned when an attempt is made to access an
>> > actually existing register slice on an unfinalized vcpu.
>> >
>> > Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
>> > Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
>> > Suggested-by: Andrew Jones <drjones@redhat.com>
>> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
>> > Reviewed-by: Andrew Jones <drjones@redhat.com>
>
> [...]
>
>> > @@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
>> >  	/* Verify that we match the UAPI header: */
>> >  	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
>> >
>> > -	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
>> > -		return -ENOENT;
>> > -
>> > -	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
>> > -
>> >  	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
>> >
>> >  	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
>> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
>> > +			return -ENOENT;
>> > +
>> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
>> > +
>> >  		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
>> >  				SVE_SIG_REGS_OFFSET;
>> >  		reqlen = KVM_SVE_ZREG_SIZE;
>> >  		maxlen = SVE_SIG_ZREG_SIZE(vq);
>> >  	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
>> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
>> > +			return -ENOENT;
>> > +
>> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
>> > +
>>
>> I suppose you could argue for a:
>>
>> 	if (reg->id >= zreg_id_min && reg->id <= preg_id_max) {
>> 		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
>> 			return -ENOENT;
>>
>> 		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
>>
>>                 if (reg->id <= zreg_id_max) {
>> 			reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
>> 				SVE_SIG_REGS_OFFSET;
>> 			reqlen = KVM_SVE_ZREG_SIZE;
>> 			maxlen = SVE_SIG_ZREG_SIZE(vq);
>>                 } else {
>> 			reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
>> 				SVE_SIG_REGS_OFFSET;
>> 			reqlen = KVM_SVE_PREG_SIZE;
>> 			maxlen = SVE_SIG_PREG_SIZE(vq);
>> 		}
>> 	} else {
>> 		return -EINVAL;
>> 	}
>>
>> but only for minimal DRY reasons.
>
> Agreed, but that bakes in another assumption: that the ZREG and PREG ID
> ranges are contiguous.

Ahh I'd misread:

  /* reg ID ranges for P- registers and FFR (which are contiguous) */

However these are defined in the UABI:

  /* Z- and P-regs occupy blocks at the following offsets within this range: */
  #define KVM_REG_ARM64_SVE_ZREG_BASE	0
  #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
  #define KVM_REG_ARM64_SVE_FFR_BASE	0x600

so their position is pretty fixed now, right?

> I preferred to keep the number of assumptions down.
>
> Although the resulting code wasn't ideal, the actual amount of
> duplication that I ended up with here seemed low enough to be
> acceptable (though opinions can differ on that).

It's no biggie ;-)

--
Alex Bennée

^ permalink raw reply	[flat|nested] 63+ messages in thread
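
As a side illustration of the layout in the UAPI excerpt quoted above:
the three base offsets carve the SVE register ID space into fixed
blocks, and, assuming 32 slice IDs per register (matching
KVM_ARM64_SVE_MAX_SLICES), consecutive register numbers sit 0x20 apart.
A rough stand-alone sketch, deliberately not using the real UAPI macros:

  #include <stdio.h>

  /* Base offsets copied from the UAPI excerpt quoted above */
  #define SKETCH_ZREG_BASE	0x000
  #define SKETCH_PREG_BASE	0x400
  #define SKETCH_FFR_BASE	0x600

  /* Assumption: 32 slice IDs per register (KVM_ARM64_SVE_MAX_SLICES) */
  #define SKETCH_SLICES		32

  /* Classify the low bits of an SVE register ID into block, number, slice */
  static void sketch_classify(unsigned int off)
  {
  	unsigned int slice = off % SKETCH_SLICES;

  	if (off >= SKETCH_FFR_BASE + SKETCH_SLICES)
  		printf("0x%03x: unallocated\n", off);
  	else if (off >= SKETCH_FFR_BASE)
  		printf("0x%03x: FFR, slice %u\n", off, slice);
  	else if (off >= SKETCH_PREG_BASE)
  		printf("0x%03x: P%u, slice %u\n", off,
  		       (off - SKETCH_PREG_BASE) / SKETCH_SLICES, slice);
  	else
  		printf("0x%03x: Z%u, slice %u\n", off,
  		       (off - SKETCH_ZREG_BASE) / SKETCH_SLICES, slice);
  }

Feeding 0x400 and 0x5ff into this gives P0 slice 0 and P15 slice 31,
which is why the P-register and FFR ranges end up contiguous, as the
comment quoted above ("which are contiguous") notes.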

* Re: [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent
@ 2019-04-25 15:27           ` Dave Martin
  0 siblings, 0 replies; 63+ messages in thread
From: Dave Martin @ 2019-04-25 15:27 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
	Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall, kvmarm,
	linux-arm-kernel

On Thu, Apr 25, 2019 at 04:04:36PM +0100, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
> > On Thu, Apr 25, 2019 at 01:30:29PM +0100, Alex Bennée wrote:
> >>
> >> Dave Martin <Dave.Martin@arm.com> writes:
> >>
> >> > Currently, the way error codes are generated when processing the
> >> > SVE register access ioctls is a bit haphazard.
> >> >
> >> > This patch refactors the code so that the behaviour is more
> >> > consistent: now, -EINVAL should be returned only for unrecognised
> >> > register IDs or when some other runtime error occurs.  -ENOENT is
> >> > returned for register IDs that are recognised, but whose
> >> > corresponding register (or slice) does not exist for the vcpu.
> >> >
> >> > To this end, in {get,set}_sve_reg() we now delegate the
> >> > vcpu_has_sve() check down into {get,set}_sve_vls() and
> >> > sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
> >> > picked off first, then sve_reg_to_region() plays the role of
> >> > exhaustively validating or rejecting the register ID and (where
> >> > accepted) computing the applicable register region as before.
> >> >
> >> > sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
> >> > returned prematurely, before checking whether reg->id is in a
> >> > recognised range.
> >> >
> >> > -EPERM is now only returned when an attempt is made to access an
> >> > actually existing register slice on an unfinalized vcpu.
> >> >
> >> > Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
> >> > Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
> >> > Suggested-by: Andrew Jones <drjones@redhat.com>
> >> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> >> > Reviewed-by: Andrew Jones <drjones@redhat.com>
> >
> > [...]
> >
> >> > @@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
> >> >  	/* Verify that we match the UAPI header: */
> >> >  	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
> >> >
> >> > -	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
> >> > -		return -ENOENT;
> >> > -
> >> > -	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > -
> >> >  	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
> >> >
> >> >  	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
> >> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> >> > +			return -ENOENT;
> >> > +
> >> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > +
> >> >  		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> >> >  				SVE_SIG_REGS_OFFSET;
> >> >  		reqlen = KVM_SVE_ZREG_SIZE;
> >> >  		maxlen = SVE_SIG_ZREG_SIZE(vq);
> >> >  	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
> >> > +		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> >> > +			return -ENOENT;
> >> > +
> >> > +		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > +
> >>
> >> I suppose you could argue for a:
> >>
> >> 	if (reg->id >= zreg_id_min && reg->id <= preg_id_max) {
> >> 		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> >> 			return -ENOENT;
> >>
> >> 		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >>
> >>                 if (reg->id <= zreg_id_max) {
> >> 			reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> >> 				SVE_SIG_REGS_OFFSET;
> >> 			reqlen = KVM_SVE_ZREG_SIZE;
> >> 			maxlen = SVE_SIG_ZREG_SIZE(vq);
> >>                 } else {
> >> 			reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
> >> 				SVE_SIG_REGS_OFFSET;
> >> 			reqlen = KVM_SVE_PREG_SIZE;
> >> 			maxlen = SVE_SIG_PREG_SIZE(vq);
> >> 		}
> >> 	} else {
> >> 		return -EINVAL;
> >> 	}
> >>
> >> but only for minimal DRY reasons.
> >
> > Agreed, but that bakes in another assumption: that the ZREG and PREG ID
> > ranges are contiguous.
> 
> Ahh I'd misread:
> 
>   /* reg ID ranges for P- registers and FFR (which are contiguous) */
> 
> However these are defined in the UABI:
> 
>   /* Z- and P-regs occupy blocks at the following offsets within this range: */
>   #define KVM_REG_ARM64_SVE_ZREG_BASE	0
>   #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
>   #define KVM_REG_ARM64_SVE_FFR_BASE	0x600
> 
> so their position is pretty fixed now, right?

Yes.  I couldn't really decide how much we consider this to be
a happy accident.

In hindsight, simply making FFR == "P16" official may have been the
simpler way to go, but it should work whether code explicitly assumes
this or not.  As of today, I prefer not to encourage it too much,
but this is more for philosophical than practical reasons.

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 63+ messages in thread
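
The "FFR == P16" remark above follows directly from the quoted base
offsets: with 32 slice IDs reserved per register, the sixteen
P-registers span 0x400-0x5ff, so the FFR block starts exactly where a
seventeenth P-register would.  A compile-time sketch of that arithmetic,
again with stand-in names rather than the real UAPI macros (the 32-slice
spacing is an assumption mirroring KVM_ARM64_SVE_MAX_SLICES):

  /* Stand-in values taken from the UAPI excerpt quoted earlier in the thread */
  #define SKETCH_PREG_BASE	0x400
  #define SKETCH_FFR_BASE	0x600

  /* Assumption: 32 slice IDs per register */
  #define SKETCH_MAX_SLICES	32

  /* FFR's base offset is exactly where a hypothetical "P16" would begin */
  _Static_assert(SKETCH_PREG_BASE + 16 * SKETCH_MAX_SLICES == SKETCH_FFR_BASE,
  		"FFR block sits at the P16 position");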

end of thread, other threads:[~2019-04-25 15:27 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-18 16:06 [PATCH v2 00/14] KVM: arm64: SVE cleanups Dave Martin
2019-04-18 16:06 ` Dave Martin
2019-04-18 16:06 ` Dave Martin
2019-04-18 16:06 ` [PATCH v2 01/14] arm64/sve: Clarify vq map semantics Dave Martin
2019-04-18 16:06   ` Dave Martin
2019-04-18 16:06   ` Dave Martin
2019-04-18 16:06 ` [PATCH v2 02/14] KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE Dave Martin
2019-04-18 16:06   ` Dave Martin
2019-04-18 16:06   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 03/14] KVM: arm: Make vcpu finalization stubs into inline functions Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 04/14] KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to WARNs Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 05/14] KVM: arm64/sve: Clean up UAPI register ID definitions Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 06/14] KVM: arm64/sve: Miscellaneous tidyups in guest.c Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-25 12:30   ` Alex Bennée
2019-04-25 12:30     ` Alex Bennée
2019-04-25 13:04     ` Dave Martin
2019-04-25 13:04       ` Dave Martin
2019-04-25 13:04       ` Dave Martin
2019-04-25 15:04       ` Alex Bennée
2019-04-25 15:04         ` Alex Bennée
2019-04-25 15:27         ` Dave Martin
2019-04-25 15:27           ` Dave Martin
2019-04-25 15:27           ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 08/14] KVM: arm64/sve: WARN when avoiding divide-by-zero in sve_reg_to_region() Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 09/14] KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 10/14] KVM: arm64/sve: Explain validity checks in set_sve_vls() Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 11/14] KVM: arm/arm64: Clean up vcpu finalization function parameter naming Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 12/14] KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 13/14] KVM: Clarify KVM_{SET, GET}_ONE_REG error code documentation Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07 ` [PATCH v2 14/14] KVM: arm64: Clarify access behaviour for out-of-range SVE register slice IDs Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-18 16:07   ` Dave Martin
2019-04-24  9:21 ` [PATCH v2 00/14] KVM: arm64: SVE cleanups Alex Bennée
2019-04-24  9:21   ` Alex Bennée
2019-04-24  9:38   ` Marc Zyngier
2019-04-24  9:38     ` Marc Zyngier
2019-04-25 12:35 ` Alex Bennée
2019-04-25 12:35   ` Alex Bennée
2019-04-25 13:05   ` Dave Martin
2019-04-25 13:05     ` Dave Martin
2019-04-25 13:05     ` Dave Martin

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.