From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1755467AbbEBT0n (ORCPT );
        Sat, 2 May 2015 15:26:43 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:45360 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1946194AbbEBT0b (ORCPT );
        Sat, 2 May 2015 15:26:31 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
        stable@vger.kernel.org,
        Christian Borntraeger,
        Cornelia Huck,
        Jens Freimann
Subject: [PATCH 3.19 024/177] KVM: s390: no need to hold the kvm->mutex for floating interrupts
Date: Sat, 2 May 2015 21:00:46 +0200
Message-Id: <20150502190120.741253335@linuxfoundation.org>
X-Mailer: git-send-email 2.3.7
In-Reply-To: <20150502190119.666291882@linuxfoundation.org>
References: <20150502190119.666291882@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

3.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Christian Borntraeger

commit 69a8d456263849152826542c7cb0a164b90e68a8 upstream.

The kvm mutex was (probably) used to protect against cpu hotplug.
The current code no longer needs to protect against that, as we only
rely on CPU data structures that are guaranteed to be available if we
can access the CPU (e.g. vcpu_create will put the cpu in the array
AFTER the cpu is ready).

Signed-off-by: Christian Borntraeger
Acked-by: Cornelia Huck
Reviewed-by: Jens Freimann
Signed-off-by: Greg Kroah-Hartman

---
 arch/s390/kvm/interrupt.c |    8 --------
 1 file changed, 8 deletions(-)

--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -1131,7 +1131,6 @@ struct kvm_s390_interrupt_info *kvm_s390
         if ((!schid && !cr6) || (schid && cr6))
                 return NULL;
-        mutex_lock(&kvm->lock);
         fi = &kvm->arch.float_int;
         spin_lock(&fi->lock);
         inti = NULL;
@@ -1159,7 +1158,6 @@ struct kvm_s390_interrupt_info *kvm_s390
         if (list_empty(&fi->list))
                 atomic_set(&fi->active, 0);
         spin_unlock(&fi->lock);
-        mutex_unlock(&kvm->lock);
         return inti;
 }
@@ -1172,7 +1170,6 @@ static int __inject_vm(struct kvm *kvm,
         int sigcpu;
         int rc = 0;
-        mutex_lock(&kvm->lock);
         fi = &kvm->arch.float_int;
         spin_lock(&fi->lock);
         if (fi->irq_count >= KVM_S390_MAX_FLOAT_IRQS) {
@@ -1225,7 +1222,6 @@ static int __inject_vm(struct kvm *kvm,
         kvm_s390_vcpu_wakeup(kvm_get_vcpu(kvm, sigcpu));
 unlock_fi:
         spin_unlock(&fi->lock);
-        mutex_unlock(&kvm->lock);
         return rc;
 }
@@ -1379,7 +1375,6 @@ void kvm_s390_clear_float_irqs(struct kv
         struct kvm_s390_float_interrupt *fi;
         struct kvm_s390_interrupt_info *n, *inti = NULL;
-        mutex_lock(&kvm->lock);
         fi = &kvm->arch.float_int;
         spin_lock(&fi->lock);
         list_for_each_entry_safe(inti, n, &fi->list, list) {
@@ -1389,7 +1384,6 @@ void kvm_s390_clear_float_irqs(struct kv
         fi->irq_count = 0;
         atomic_set(&fi->active, 0);
         spin_unlock(&fi->lock);
-        mutex_unlock(&kvm->lock);
 }

 static inline int copy_irq_to_user(struct kvm_s390_interrupt_info *inti,
@@ -1429,7 0@@1423,6 @@ static int get_all_floating_irqs(struct
         int ret = 0;
         int n = 0;
-        mutex_lock(&kvm->lock);
         fi = &kvm->arch.float_int;
         spin_lock(&fi->lock);
@@ -1448,7 -1441,6 @@ static int get_all_floating_irqs(struct
         }
         spin_unlock(&fi->lock);
-        mutex_unlock(&kvm->lock);
         return ret < 0 ? ret : n;
 }
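
A note on the locking pattern for readers following the stable review: after this
patch, the floating-interrupt list is guarded by the per-structure spinlock
fi->lock alone, and the hotplug concern the commit message mentions is handled
by vcpu_create publishing a cpu only once it is ready. The sketch below is a
purely illustrative userspace analogue of that pattern, not kernel code: it uses
pthreads instead of kernel primitives, and the names fake_inti, float_list,
list_lock, inject and clear_all are invented for the example. It only shows that
when every producer and consumer of such a list takes the same fine-grained
lock, wrapping those sections in an additional coarse mutex adds no protection
for the list itself.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_inti {
        int type;
        struct fake_inti *next;
};

static struct fake_inti *float_list;    /* pending "floating interrupts" */
static pthread_spinlock_t list_lock;    /* plays the role of fi->lock    */

/* Queue one entry; list_lock alone serializes against readers and clearers. */
static void inject(int type)
{
        struct fake_inti *inti = malloc(sizeof(*inti));

        if (!inti)
                return;
        inti->type = type;
        pthread_spin_lock(&list_lock);
        inti->next = float_list;
        float_list = inti;
        pthread_spin_unlock(&list_lock);
}

/* Drop every pending entry, again under list_lock only. */
static void clear_all(void)
{
        struct fake_inti *inti, *next;

        pthread_spin_lock(&list_lock);
        for (inti = float_list; inti; inti = next) {
                next = inti->next;
                free(inti);
        }
        float_list = NULL;
        pthread_spin_unlock(&list_lock);
}

int main(void)
{
        pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
        inject(1);
        inject(2);
        clear_all();
        pthread_spin_destroy(&list_lock);
        printf("done\n");
        return 0;
}

Build with something like "cc -pthread sketch.c". The point is only that every
path touching the list already holds the same spinlock, which mirrors why the
diff above can drop kvm->lock from these functions without changing behaviour.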