On 1/9/20 6:08 PM, Cornelia Huck wrote:
> On Thu, 9 Jan 2020 10:56:01 -0500
> Janosch Frank wrote:
>
>> The architecture states that we need to reset local IRQs for all CPU
>> resets. Because the old reset interface did not support the normal CPU
>> reset, we never did that on a normal reset.
>>
>> Let's implement an interface for the missing normal and clear resets
>> and reset all local IRQs, registers and control structures as stated
>> in the architecture.
>>
>> Userspace might already reset the registers via the vcpu run struct,
>> but as we need the interface for the interrupt clearing part anyway,
>> we implement the resets fully and don't rely on userspace to reset the
>> rest.
>>
>> Signed-off-by: Janosch Frank
>> ---
>>
>> I dropped the reviews, as I changed quite a lot.
>>
>> Keep in mind that we'll now need a new parameter in the normal and
>> initial reset for protected virtualization to indicate that we need to
>> do the reset via the UV call. The Ultravisor only accepts the needed
>> reset, not any subset resets.
>
> In the interface, or externally?

?

>
> [Apologies, but the details of the protected virt stuff are no longer
> in my cache.]

Reworded explanation: I can't use a fallthrough, because the UV will
reject the normal reset if we do an initial reset (the same goes for the
clear reset). To address this, I added a boolean to the normal and
initial reset functions which tells the function whether it was called
directly or via the fallthrough. Only if called directly is a UV call
for the reset done; that way we can keep the fallthrough. (A rough
sketch of what I have in mind is further down in this mail.)

>>  static int kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
>>  {
>> -	kvm_s390_vcpu_initial_reset(vcpu);
>> +	/* this equals initial cpu reset in pop, but we don't switch to ESA */
>
> Maybe also mention that in the documentation?

Sure

>
>> +	vcpu->arch.sie_block->gpsw.mask = 0UL;
>> +	vcpu->arch.sie_block->gpsw.addr = 0UL;
>> +	kvm_s390_set_prefix(vcpu, 0);
>> +	kvm_s390_set_cpu_timer(vcpu, 0);
>> +	vcpu->arch.sie_block->ckc = 0UL;
>> +	vcpu->arch.sie_block->todpr = 0;
>> +	memset(vcpu->arch.sie_block->gcr, 0, 16 * sizeof(__u64));
>> +	vcpu->arch.sie_block->gcr[0] = CR0_UNUSED_56 |
>> +					CR0_INTERRUPT_KEY_SUBMASK |
>> +					CR0_MEASUREMENT_ALERT_SUBMASK;
>> +	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>> +					CR14_UNUSED_33 |
>> +					CR14_EXTERNAL_DAMAGE_SUBMASK;
>> +	/* make sure the new fpc will be lazily loaded */
>> +	save_fpu_regs();
>> +	current->thread.fpu.fpc = 0;
>> +	vcpu->arch.sie_block->gbea = 1;
>> +	vcpu->arch.sie_block->pp = 0;
>> +	vcpu->arch.sie_block->fpf &= ~FPF_BPBC;
>> +
>
> Add a comment that the remaining work will be done in normal_reset?

Will do

>
>> +	return 0;
>> +}
>> +
>> +static int kvm_arch_vcpu_ioctl_clear_reset(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_sync_regs *regs = &vcpu->run->s.regs;
>> +
>> +	memset(&regs->gprs, 0, sizeof(regs->gprs));
>> +	/*
>> +	 * Will be picked up via save_fpu_regs() in the initial reset
>> +	 * fallthrough.
>> +	 */
>
> This comment is a bit confusing... what does 'picked up' mean?
>
> (Maybe I'm just too tired, sorry...)

The FPU registers are loaded lazily; maybe I should just remove the
comment.
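
Coming back to the fallthrough question above, here is the sketch I
mentioned: a small standalone toy model of the idea, not the actual
kernel code. It is plain userspace C that compiles on its own, and all
names (uv_reset(), do_reset(), the RESET_* enum) are made up purely for
illustration:

/*
 * Toy model: clear -> initial -> normal reset fallthrough, with a
 * "called directly" flag that gates the Ultravisor call.  Not kernel
 * code; uv_reset() and the reset bodies are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

enum reset_cmd { RESET_NORMAL, RESET_INITIAL, RESET_CLEAR };

/* stand-in for the single UV reset call the Ultravisor will accept */
static int uv_reset(enum reset_cmd cmd)
{
	printf("UV reset call for cmd %d\n", cmd);
	return 0;
}

static int normal_reset(bool direct)
{
	/* clear local IRQs, registers, ... (elided) */
	return direct ? uv_reset(RESET_NORMAL) : 0;
}

static int initial_reset(bool direct)
{
	/* initial CPU reset work (elided); rest is done in normal_reset() */
	return direct ? uv_reset(RESET_INITIAL) : 0;
}

static int clear_reset(void)
{
	/* clear reset work (elided); rest is done via the fallthrough */
	return uv_reset(RESET_CLEAR);
}

/* mimics the ioctl switch: only the case we entered at does the UV call */
static int do_reset(enum reset_cmd cmd)
{
	int r = -1;

	switch (cmd) {
	case RESET_CLEAR:
		r = clear_reset();
		/* fallthrough */
	case RESET_INITIAL:
		r = initial_reset(cmd == RESET_INITIAL);
		/* fallthrough */
	case RESET_NORMAL:
		r = normal_reset(cmd == RESET_NORMAL);
		break;
	}
	return r;
}

int main(void)
{
	return do_reset(RESET_CLEAR);
}

Note that, as in the posted switch, each r assignment overwrites the
previous one on the fallthrough path.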
>
>> +	memset(&regs->vrs, 0, sizeof(regs->vrs));
>> +	memset(&regs->acrs, 0, sizeof(regs->acrs));
>> +
>> +	regs->etoken = 0;
>> +	regs->etoken_extension = 0;
>> +
>> +	memset(&regs->gscb, 0, sizeof(regs->gscb));
>> +	if (MACHINE_HAS_GS) {
>> +		preempt_disable();
>> +		__ctl_set_bit(2, 4);
>> +		if (current->thread.gs_cb) {
>> +			vcpu->arch.host_gscb = current->thread.gs_cb;
>> +			save_gs_cb(vcpu->arch.host_gscb);
>> +		}
>> +		if (vcpu->arch.gs_enabled) {
>> +			current->thread.gs_cb = (struct gs_cb *)
>> +				&vcpu->run->s.regs.gscb;
>> +			restore_gs_cb(current->thread.gs_cb);
>> +		}
>> +		preempt_enable();
>> +	}
>
> And here that the remaining work will be done in initial_reset and
> normal_reset?
>
>>  	return 0;
>>  }
>>
>> @@ -4363,8 +4402,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>  		r = kvm_arch_vcpu_ioctl_set_initial_psw(vcpu, psw);
>>  		break;
>>  	}
>> +
>> +	case KVM_S390_CLEAR_RESET:
>> +		r = kvm_arch_vcpu_ioctl_clear_reset(vcpu);
>> +		/* fallthrough */
>>  	case KVM_S390_INITIAL_RESET:
>>  		r = kvm_arch_vcpu_ioctl_initial_reset(vcpu);
>> +		/* fallthrough */
>> +	case KVM_S390_NORMAL_RESET:
>> +		r = kvm_arch_vcpu_ioctl_normal_reset(vcpu);
>
> Can any of these functions return !0 when the protected virt stuff is
> done on top? If not, can we make them void and just set r=0; here?

They do return >0 if the UV call fails, so I need those r values.

>
>>  		break;
>>  	case KVM_SET_ONE_REG:
>>  	case KVM_GET_ONE_REG: {
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index f0a16b4adbbd..4b95f9a31a2f 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1009,6 +1009,7 @@ struct kvm_ppc_resize_hpt {
>>  #define KVM_CAP_PPC_GUEST_DEBUG_SSTEP 176
>>  #define KVM_CAP_ARM_NISV_TO_USER 177
>>  #define KVM_CAP_ARM_INJECT_EXT_DABT 178
>> +#define KVM_CAP_S390_VCPU_RESETS 179
>>
>>  #ifdef KVM_CAP_IRQ_ROUTING
>>
>> @@ -1473,6 +1474,10 @@ struct kvm_enc_region {
>>  /* Available with KVM_CAP_ARM_SVE */
>>  #define KVM_ARM_VCPU_FINALIZE	_IOW(KVMIO, 0xc2, int)
>>
>> +/* Available with KVM_CAP_S390_VCPU_RESETS */
>> +#define KVM_S390_NORMAL_RESET	_IO(KVMIO, 0xc3)
>> +#define KVM_S390_CLEAR_RESET	_IO(KVMIO, 0xc4)
>> +
>>  /* Secure Encrypted Virtualization command */
>>  enum sev_cmd_id {
>>  	/* Guest initialization commands */
>
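
For reference, this is roughly how I'd expect userspace to probe and
use the new ioctls. It's an untested sketch: the usual KVM_CREATE_VM /
KVM_CREATE_VCPU setup and error handling are omitted, the helper name
is made up, and it obviously needs a kvm.h that already carries the new
defines from this patch:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Untested sketch: issue one of the new per-vcpu resets.
 * kvm_fd is the /dev/kvm fd, vcpu_fd a vcpu created the usual way.
 */
static int s390_reset_vcpu(int kvm_fd, int vcpu_fd, int clear)
{
	/* KVM_CHECK_EXTENSION returns > 0 if the capability is present */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_VCPU_RESETS) <= 0)
		return -1;

	/* both are _IO ioctls, so no argument is passed */
	return ioctl(vcpu_fd, clear ? KVM_S390_CLEAR_RESET
				    : KVM_S390_NORMAL_RESET);
}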