From: Alex Williamson
Subject: Re: [v3 4/5] vfio: implement APIs to set/put kvm to/from vfio group
Date: Wed, 9 Nov 2016 10:53:52 -0700
Message-ID: <20161109105352.3337acbb@t450s.home>
References: <1477895706-22824-1-git-send-email-jike.song@intel.com>
	<1477895706-22824-5-git-send-email-jike.song@intel.com>
	<20161107110412.5db26fd4@t450s.home>
	<20161107112834.2aa971df@t450s.home>
	<9e83a26f-e5dc-ed91-a1b5-c6f165eed7ed@redhat.com>
	<58231B5C.3010506@intel.com>
Cc: Paolo Bonzini, guangrong.xiao@linux.intel.com, kwankhede@nvidia.com,
	cjia@nvidia.com, kevin.tian@intel.com, kvm@vger.kernel.org
To: Jike Song
In-Reply-To: <58231B5C.3010506@intel.com>

On Wed, 09 Nov 2016 20:49:32 +0800
Jike Song wrote:

> On 11/08/2016 04:45 AM, Paolo Bonzini wrote:
> > On 07/11/2016 19:28, Alex Williamson wrote:
> >>>> Can the reference become invalid?
> >>>
> >>> No, this is guaranteed by virt/kvm/vfio.c + the udata.lock mutex (which
> >>> probably should be renamed...).
> >>
> >> The caller gets a reference to kvm, but there's no guarantee that the
> >> association of that kvm reference to the group stays valid.  Once we're
> >> outside of that mutex, we might as well consider that kvm:group
> >> association stale.
> >>
> >>>> The caller may still hold
> >>>> a kvm reference, but couldn't the group be detached from one kvm
> >>>> instance and re-attached to another?
> >>>
> >>> Can this be handled by the vendor driver?  Does it get a callback when
> >>> it's detached from a KVM instance?
> >>
> >> The only release callback through vfio is when the user closes the
> >> device; the code in this series is the full extent of vfio awareness of
> >> kvm.  Thanks,
> >
> > Maybe there should be an mdev callback at the point of association and
> > deassociation between VFIO and KVM.  Then the vendor driver can just use
> > the same mutex for association, deassociation and usage.  I'm not even
> > sure that these patches are necessary once you have that callback.
>
> Hi Alex & Paolo,
>
> So I cooked another draft version of this: there is no kvm pointer saved
> in vfio_group in this version, and the notifier will be called on
> attach/detach.  Please kindly have a look :-)
>
> --
> Thanks,
> Jike
>
>
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index ed2361e4..20b5da9 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -34,6 +34,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #define DRIVER_VERSION	"0.3"
>  #define DRIVER_AUTHOR	"Alex Williamson "
> @@ -86,6 +87,10 @@ struct vfio_group {
>  	struct mutex			unbound_lock;
>  	atomic_t			opened;
>  	bool				noiommu;
> +	struct {
> +		struct mutex			lock;
> +		struct blocking_notifier_head	notifier;
> +	} udata;
>  };
>
>  struct vfio_device {
> @@ -333,6 +338,7 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group)
>  	mutex_init(&group->device_lock);
>  	INIT_LIST_HEAD(&group->unbound_list);
>  	mutex_init(&group->unbound_lock);
> +	mutex_init(&group->udata.lock);
>  	atomic_set(&group->container_users, 0);
>  	atomic_set(&group->opened, 0);
>  	group->iommu_group = iommu_group;
> @@ -414,10 +420,11 @@ static void vfio_group_release(struct kref *kref)
>  	iommu_group_put(iommu_group);
>  }
>
> -static void vfio_group_put(struct vfio_group *group)
> +void vfio_group_put(struct vfio_group *group)
>  {
>  	kref_put_mutex(&group->kref, vfio_group_release, &vfio.group_lock);
>  }
> +EXPORT_SYMBOL_GPL(vfio_group_put);
>
>  /* Assume group_lock or group reference is held */
>  static void vfio_group_get(struct vfio_group *group)
> @@ -480,7 +487,7 @@ static struct vfio_group *vfio_group_get_from_minor(int minor)
>  	return group;
>  }
>
> -static struct vfio_group *vfio_group_get_from_dev(struct device *dev)
> +struct vfio_group *vfio_group_get_from_dev(struct device *dev)
>  {
>  	struct iommu_group *iommu_group;
>  	struct vfio_group *group;
> @@ -494,6 +501,7 @@ static struct vfio_group *vfio_group_get_from_dev(struct device *dev)
>
>  	return group;
>  }
> +EXPORT_SYMBOL_GPL(vfio_group_get_from_dev);
>
>  /**
>   * Device objects - create, release, get, put, search
>   */
> @@ -1745,6 +1753,44 @@ long vfio_external_check_extension(struct vfio_group *group, unsigned long arg)
>  }
>  EXPORT_SYMBOL_GPL(vfio_external_check_extension);
>
> +int vfio_group_register_notifier(struct vfio_group *group, struct notifier_block *nb)
> +{
> +	return blocking_notifier_chain_register(&group->udata.notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(vfio_group_register_notifier);
> +
> +int vfio_group_unregister_notifier(struct vfio_group *group, struct notifier_block *nb)
> +{
> +	return blocking_notifier_chain_unregister(&group->udata.notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(vfio_group_unregister_notifier);

Kirti is already adding vfio_register_notifier & vfio_unregister_notifier;
these are not exclusive to the iommu.  I clarified in my question that
IOVA range invalidation is just one aspect of what that notifier might be
used for.  The mdev framework also automatically registers and unregisters
that notifier around open/release.  So, I don't think we want a new
notifier, we just want vfio.c to also consume that notifier.
So I think this patch needs a few components that build on what Kirti has:

1) add a blocking_notifier_head per vfio_group and have
   vfio_{un}register_notifier add and remove notifiers on the group chain,
2) create a vfio_group_notify() function that the kvm-vfio pseudo device
   can call via symbol_get,
3) have kvm-vfio call vfio_group_notify() with VFIO_GROUP_NOTIFY_SET_KVM,
   where the data is a pointer to the struct kvm (or NULL to unset; we
   don't need separate set vs unset notifiers).

Does that work?  Thanks,

Alex

> +
> +void vfio_group_attach_kvm(struct vfio_group *group, struct kvm *kvm,
> +			   void (*fn)(struct kvm *))
> +{
> +	mutex_lock(&group->udata.lock);
> +
> +	fn(kvm);
> +	blocking_notifier_call_chain(&group->udata.notifier,
> +				     VFIO_GROUP_NOTIFY_ATTACH_KVM, kvm);
> +
> +	mutex_unlock(&group->udata.lock);
> +}
> +EXPORT_SYMBOL_GPL(vfio_group_attach_kvm);
> +
> +void vfio_group_detach_kvm(struct vfio_group *group, struct kvm *kvm,
> +			   void (*fn)(struct kvm *))
> +{
> +	mutex_lock(&group->udata.lock);
> +
> +	blocking_notifier_call_chain(&group->udata.notifier,
> +				     VFIO_GROUP_NOTIFY_DETACH_KVM, kvm);
> +	fn(kvm);
> +
> +	mutex_unlock(&group->udata.lock);
> +}
> +EXPORT_SYMBOL_GPL(vfio_group_detach_kvm);
> +
>  /**
>   * Sub-module support
>   */
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 87c9afe..4819a45 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -102,6 +102,21 @@ extern void vfio_unregister_iommu_driver(
>  extern long vfio_external_check_extension(struct vfio_group *group,
>  					  unsigned long arg);
>
> +extern struct vfio_group *vfio_group_get_from_dev(struct device *dev);
> +extern void vfio_group_put(struct vfio_group *group);
> +
> +#define VFIO_GROUP_NOTIFY_ATTACH_KVM	1
> +#define VFIO_GROUP_NOTIFY_DETACH_KVM	2
> +struct kvm;
> +extern void vfio_group_attach_kvm(struct vfio_group *group, struct kvm *kvm,
> +				  void (*fn)(struct kvm *));
> +extern void vfio_group_detach_kvm(struct vfio_group *group, struct kvm *kvm,
> +				  void (*fn)(struct kvm *));
> +extern int vfio_group_register_notifier(struct vfio_group *group,
> +					struct notifier_block *nb);
> +extern int vfio_group_unregister_notifier(struct vfio_group *group,
> +					  struct notifier_block *nb);
> +
>  /*
>   * Sub-module helpers
>   */
> diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
> index 1dd087d..d889b56 100644
> --- a/virt/kvm/vfio.c
> +++ b/virt/kvm/vfio.c
> @@ -60,6 +60,32 @@ static void kvm_vfio_group_put_external_user(struct vfio_group *vfio_group)
>  	symbol_put(vfio_group_put_external_user);
>  }
>
> +static void kvm_vfio_group_attach_kvm(struct vfio_group *group, struct kvm_device *dev)
> +{
> +	void (*fn)(struct vfio_group *, struct kvm *, void (*fn)(struct kvm *));
> +
> +	fn = symbol_get(vfio_group_attach_kvm);
> +	if (!fn)
> +		return;
> +
> +	fn(group, dev->kvm, kvm_get_kvm);
> +
> +	symbol_put(vfio_group_attach_kvm);
> +}
> +
> +static void kvm_vfio_group_detach_kvm(struct vfio_group *group, struct kvm *kvm)
> +{
> +	void (*fn)(struct vfio_group *, struct kvm *, void (*fn)(struct kvm *));
> +
> +	fn = symbol_get(vfio_group_detach_kvm);
> +	if (!fn)
> +		return;
> +
> +	fn(group, kvm, kvm_put_kvm);
> +
> +	symbol_put(vfio_group_detach_kvm);
> +}
> +
>  static bool kvm_vfio_group_is_coherent(struct vfio_group *vfio_group)
>  {
>  	long (*fn)(struct vfio_group *, unsigned long);
> @@ -155,6 +181,8 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
>  		list_add_tail(&kvg->node, &kv->group_list);
>  		kvg->vfio_group = vfio_group;
>
> +		kvm_vfio_group_attach_kvm(vfio_group, dev);
> +
>  		kvm_arch_start_assignment(dev->kvm);
>
>  		mutex_unlock(&kv->lock);
> @@ -196,6 +224,8 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
>
>  		mutex_unlock(&kv->lock);
>
> +		kvm_vfio_group_detach_kvm(vfio_group, dev->kvm);
> +
>  		kvm_vfio_group_put_external_user(vfio_group);
>
>  		kvm_vfio_update_coherency(dev);
> @@ -240,6 +270,7 @@ static void kvm_vfio_destroy(struct kvm_device *dev)
>  	struct kvm_vfio_group *kvg, *tmp;
>
>  	list_for_each_entry_safe(kvg, tmp, &kv->group_list, node) {
> +		kvm_vfio_group_detach_kvm(kvg->vfio_group, dev->kvm);
>  		kvm_vfio_group_put_external_user(kvg->vfio_group);
>  		list_del(&kvg->node);
>  		kfree(kvg);