From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Jones
Subject: Re: [PATCH v7 19/27] KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST
Date: Fri, 5 Apr 2019 11:45:56 +0200
Message-ID: <20190405094556.6hp24jkjcrfnirsb@kamzik.brq.redhat.com>
References: <1553864452-15080-1-git-send-email-Dave.Martin@arm.com>
 <1553864452-15080-20-git-send-email-Dave.Martin@arm.com>
 <20190404140832.5ryfi35df5skg4ke@kamzik.brq.redhat.com>
 <20190405093545.GK3567@e103592.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <20190405093545.GK3567@e103592.cambridge.arm.com>
To: Dave Martin
Cc: Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel, Marc Zyngier,
 Catalin Marinas, Will Deacon, Zhang Lei, Julien Grall,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
List-Id: kvmarm@lists.cs.columbia.edu
Sender: kvmarm-bounces@lists.cs.columbia.edu

On Fri, Apr 05, 2019 at 10:35:45AM +0100, Dave Martin wrote:
> On Thu, Apr 04, 2019 at 04:08:32PM +0200, Andrew Jones wrote:
> > On Fri, Mar 29, 2019 at 01:00:44PM +0000, Dave Martin wrote:
> > > This patch includes the SVE register IDs in the list returned by
> > > KVM_GET_REG_LIST, as appropriate.
> > >
> > > On a non-SVE-enabled vcpu, no new IDs are added.
> > >
> > > On an SVE-enabled vcpu, IDs for the FPSIMD V-registers are removed
> > > from the list, since userspace is required to access the Z-
> > > registers instead in order to access the V-register content. For
> > > the variably-sized SVE registers, the appropriate set of slice IDs
> > > are enumerated, depending on the maximum vector length for the
> > > vcpu.
> > >
> > > As it currently stands, the SVE architecture never requires more
> > > than one slice to exist per register, so this patch adds no
> > > explicit support for enumerating multiple slices. The code can be
> > > extended straightforwardly to support this in the future, if
> > > needed.
> > >
> > > Signed-off-by: Dave Martin
> > > Reviewed-by: Julien Thierry
> > > Tested-by: zhang.lei
> > >
> > > ---
> > >
> > > Changes since v6:
> > >
> > >  * [Julien Thierry] Add a #define to replace the magic "slices = 1",
> > >    and add a comment explaining to maintainers what needs to happen if
> > >    this is updated in the future.
> > >
> > > Changes since v5:
> > >
> > > (Dropped Julien Thierry's Reviewed-by due to non-trivial rebasing)
> > >
> > >  * Move mis-split reword to prevent put_user()s being accidentally the
> > >    correct size from KVM: arm64/sve: Add pseudo-register for the guest's
> > >    vector lengths.
> > > ---
> > >  arch/arm64/kvm/guest.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 63 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> > > index 736d8cb..2aa80a5 100644
> > > --- a/arch/arm64/kvm/guest.c
> > > +++ b/arch/arm64/kvm/guest.c
> > > @@ -222,6 +222,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> > >  #define KVM_SVE_ZREG_SIZE	KVM_REG_SIZE(KVM_REG_ARM64_SVE_ZREG(0, 0))
> > >  #define KVM_SVE_PREG_SIZE	KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0))
> > >
> > > +/*
> > > + * number of register slices required to cover each whole SVE register on vcpu
> >
> > s/number/Number/
>
> Not a sentence -> no capital letter.
>
> Due to the adjacent note it does look a little odd though. I'm happy to
> change it.
>
> > s/on vcpu//
>
> Agreed, I can drop that.
>
> > > + * NOTE: If you are tempted to modify this, you must also to rework
> >
> > s/to rework/rework/
>
> Ack
>
> > > + * sve_reg_to_region() to match:
> > > + */
> > > +#define vcpu_sve_slices(vcpu)	1
> > > +
> > >  /* Bounds of a single SVE register slice within vcpu->arch.sve_state */
> > >  struct sve_state_reg_region {
> > >  	unsigned int koffset;	/* offset into sve_state in kernel memory */
> > > @@ -411,6 +418,56 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> > >  	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
> > >  }
> > >
> > > +static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
> > > +{
> > > +	/* Only the first slice ever exists, for now */
> >
> > I'd move this comment up into the one above vcpu_sve_slices(),
> > and then nothing needs to change here when more slices come.
> >
> > > +	const unsigned int slices = vcpu_sve_slices(vcpu);
> > > +
> > > +	if (!vcpu_has_sve(vcpu))
> > > +		return 0;
> > > +
> > > +	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */);
> > > +}
> > > +
> > > +static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
> > > +				u64 __user *uindices)
> > > +{
> > > +	/* Only the first slice ever exists, for now */
> >
> > Same comment as above.
>
> Fair point: this was to explain the magic "1" that was previously here,
> but the comments are a bit redundant here now: better to move the
> comments where the 1 itself went.
>
> > > +	const unsigned int slices = vcpu_sve_slices(vcpu);
> > > +	u64 reg;
> > > +	unsigned int i, n;
> > > +	int num_regs = 0;
> > > +
> > > +	if (!vcpu_has_sve(vcpu))
> > > +		return 0;
> > > +
> > > +	for (i = 0; i < slices; i++) {
> > > +		for (n = 0; n < SVE_NUM_ZREGS; n++) {
> > > +			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
> > > +			if (put_user(reg, uindices++))
> > > +				return -EFAULT;
> > > +
> > > +			num_regs++;
> > > +		}
> > > +
> > > +		for (n = 0; n < SVE_NUM_PREGS; n++) {
> > > +			reg = KVM_REG_ARM64_SVE_PREG(n, i);
> > > +			if (put_user(reg, uindices++))
> > > +				return -EFAULT;
> > > +
> > > +			num_regs++;
> > > +		}
> > > +
> > > +		reg = KVM_REG_ARM64_SVE_FFR(i);
> > > +		if (put_user(reg, uindices++))
> > > +			return -EFAULT;
> > > +
> > > +		num_regs++;
> > > +	}
> >
> > nit: the extra blank lines above the num_regs++'s give the code an odd
> > look (to me)
>
> There's no guaranteed fall-through onto the increments: the blank line
> was there to highlight the fact that we may jump out using a return
> instead.
>
> But I'm happy enough to change it if you have a strong preference or
> you feel the code is equally clear without.
It's just a nit, so I don't have a strong preference :)

>
> > >
> > > +
> > > +	return num_regs;
> > > +}
> > > +
> > >  /**
> > >   * kvm_arm_num_regs - how many registers do we present via KVM_GET_ONE_REG
> > >   *
> > > @@ -421,6 +478,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
> > >  	unsigned long res = 0;
> > >
> > >  	res += num_core_regs(vcpu);
> > > +	res += num_sve_regs(vcpu);
> > >  	res += kvm_arm_num_sys_reg_descs(vcpu);
> > >  	res += kvm_arm_get_fw_num_regs(vcpu);
> > >  	res += NUM_TIMER_REGS;
> > > @@ -442,6 +500,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
> > >  		return ret;
> > >  	uindices += ret;
> > >
> > > +	ret = copy_sve_reg_indices(vcpu, uindices);
> > > +	if (ret)
> > > +		return ret;
> > > +	uindices += ret;
> >
> > I know this if ret vs. if ret < 0 is being addressed already.
>
> Yes, Marc's patch in kvmarm/next should fix that.
>
> Cheers
> ---Dave

Thanks,
drew
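The register count and enumeration order produced by num_sve_regs() and copy_sve_reg_indices() in the quoted patch can be modelled with a small sketch. The constants come from the SVE architecture (32 Z-registers, 16 predicate registers, plus FFR); the tuples here stand in for the real KVM_REG_ARM64_SVE_* ID encodings, which are not reproduced:

```python
# Model of the enumeration in copy_sve_reg_indices():
# for each slice, all Z-registers, then all P-registers, then FFR.
SVE_NUM_ZREGS = 32  # Z0..Z31
SVE_NUM_PREGS = 16  # P0..P15

def sve_reg_list(slices=1, has_sve=True):
    """Return the ordered list of (kind, reg, slice) pseudo-IDs."""
    if not has_sve:
        return []  # non-SVE vcpu: no new IDs are added
    ids = []
    for i in range(slices):
        ids += [("zreg", n, i) for n in range(SVE_NUM_ZREGS)]
        ids += [("preg", n, i) for n in range(SVE_NUM_PREGS)]
        ids.append(("ffr", 0, i))  # FFR counts as one extra register
    return ids

# num_sve_regs() equivalent: slices * (ZREGS + PREGS + 1 /* FFR */),
# so 49 IDs for the single slice that exists today.
assert len(sve_reg_list()) == SVE_NUM_ZREGS + SVE_NUM_PREGS + 1
```

With vcpu_sve_slices() fixed at 1, this yields 49 IDs per SVE-enabled vcpu; the slice loop only matters if a future architecture revision requires more than one slice per register.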