From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Bennée
Subject: Re: [RFC PATCH v2 16/23] KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST
Date: Wed, 21 Nov 2018 16:49:59 +0000
Message-ID: <87ftvuide0.fsf@linaro.org>
References: <1538141967-15375-1-git-send-email-Dave.Martin@arm.com>
 <1538141967-15375-17-git-send-email-Dave.Martin@arm.com>
 <87k1l6ifa8.fsf@linaro.org>
 <20181121163201.GC3505@e103592.cambridge.arm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
In-reply-to: <20181121163201.GC3505@e103592.cambridge.arm.com>
To: Dave Martin
Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
 Catalin Marinas, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
List-Id: kvmarm@lists.cs.columbia.edu

Dave Martin writes:

> On Wed, Nov 21, 2018 at 04:09:03PM +0000, Alex Bennée wrote:
>>
>> Dave Martin writes:
>>
>> > This patch includes the SVE register IDs in the list returned by
>> > KVM_GET_REG_LIST, as appropriate.
>> >
>> > On a non-SVE-enabled vcpu, no extra IDs are added.
>> >
>> > On an SVE-enabled vcpu, the appropriate number of slice IDs are
>> > enumerated for each SVE register, depending on the maximum vector
>> > length for the vcpu.
>> >
>> > Signed-off-by: Dave Martin
>> > ---
>> >
>> > Changes since RFCv1:
>> >
>> >  * Simplify enumerate_sve_regs() based on Andrew Jones' approach.
>> >
>> >  * Reg copying loops are inverted for brevity, since the order we
>> >    spit out the regs in doesn't really matter.
>> >
>> > (I tried to keep part of my approach to avoid the duplicate logic
>> > between num_sve_regs() and copy_sve_reg_indices(), but although
>> > it works in principle, gcc fails to fully collapse the num_regs()
>> > case... so I gave up.  The two functions need to be manually kept
>> > consistent, but hopefully that's fairly straightforward.)
>> > ---
>> >  arch/arm64/kvm/guest.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
>> >  1 file changed, 45 insertions(+)
>> >
>> > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> > index 320db0f..89eab68 100644
>> > --- a/arch/arm64/kvm/guest.c
>> > +++ b/arch/arm64/kvm/guest.c
>> > @@ -323,6 +323,46 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>> >  	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
>> >  }
>> >
>> > +static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
>> > +{
>> > +	const unsigned int slices = DIV_ROUND_UP(
>> > +		vcpu->arch.sve_max_vl,
>> > +		KVM_REG_SIZE(KVM_REG_ARM64_SVE_ZREG(0, 0)));
>>
>> Having seen this formulation come up several times now I wonder if there
>> should be a kernel private define, KVM_SVE_ZREG/PREG_SIZE, to avoid this
>> clumsiness.
>
> I agree it's a bit awkward.  Previously I spelled this "0x100", which
> was terse but more sensitive to typos and other screwups than I liked.
>
>> You could still use KVM_REG_SIZE to extract it, as I guess this is to
>> make changes simpler if/when the SVE reg size gets bumped up.
>
> That might be more challenging to determine at compile time.
>
> I'm not sure how good GCC is at doing const-propagation between related
> (but different) expressions, so I preferred to go for something that
> is clearly a compile-time constant rather than extracting it from the
> register ID that came from userspace.
>
> So, I'd prefer not to use KVM_REG_SIZE() for this, but I'm happy to add
> a private #define to hide this cumbersome construct.  That would
> certainly make the code more readable.
>
> (Of course, the actual runtime cost is trivial either way, but I felt
> it was easier to reason about correctness if this is really a constant.)
>
>
> Sound OK?

Yes.

I'd almost suggested why not just use KVM_REG_SIZE(KVM_REG_SIZE_U2048)
earlier, until I realised this might be forward looking.

--
Alex Bennée