From: Jason Gunthorpe <jgg@nvidia.com>
To: Yi Liu <yi.l.liu@intel.com>
Cc: alex.williamson@redhat.com, kevin.tian@intel.com, joro@8bytes.org,
	robin.murphy@arm.com, cohuck@redhat.com, eric.auger@redhat.com,
	nicolinc@nvidia.com, kvm@vger.kernel.org, mjrosato@linux.ibm.com,
	chao.p.peng@linux.intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com,
	jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com, lulu@redhat.com,
	suravee.suthikulpanit@amd.com, intel-gvt-dev@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, linux-s390@vger.kernel.org,
	xudong.hao@intel.com, yan.y.zhao@intel.com, terrence.xu@intel.com,
	yanting.jiang@intel.com, zhenzhong.duan@intel.com, clegoate@redhat.com
Subject: Re: [PATCH v7 8/9] vfio/pci: Extend VFIO_DEVICE_GET_PCI_HOT_RESET_INFO for vfio device cdev
Date: Tue, 13 Jun 2023 15:23:02 -0300
Message-ID: <ZIi0Bizk9qr1SgJ/@nvidia.com>
In-Reply-To: <20230602121515.79374-9-yi.l.liu@intel.com>

On Fri, Jun 02, 2023 at 05:15:14AM -0700, Yi Liu wrote:
> This allows the VFIO_DEVICE_GET_PCI_HOT_RESET_INFO ioctl to use the
> iommufd_ctx of the cdev device to check the ownership of the other
> affected devices.
>
> When VFIO_DEVICE_GET_PCI_HOT_RESET_INFO is called on an IOMMUFD managed
> device, the new flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is reported to
> indicate the values returned are IOMMUFD devids rather than group IDs
> as used when accessing vfio devices through the conventional vfio group
> interface. Additionally, the flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED
> will be reported in this mode if all of the devices affected by the
> hot-reset are owned, either by virtue of being directly bound to the
> same iommufd context as the calling device or implicitly owned via a
> shared IOMMU group.
>
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Suggested-by: Alex Williamson <alex.williamson@redhat.com>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> ---
>  drivers/vfio/iommufd.c           | 49 +++++++++++++++++++++++++++++++
>  drivers/vfio/pci/vfio_pci_core.c | 47 +++++++++++++++++++++++++-----
>  include/linux/vfio.h             | 16 ++++++++++
>  include/uapi/linux/vfio.h        | 50 +++++++++++++++++++++++++++++++-
>  4 files changed, 154 insertions(+), 8 deletions(-)

This could use some more fiddling; for instance, we could copy each
vfio_pci_dependent_device to user memory inside the loop instead of
allocating an array. Add another patch with something like this in it:

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index b0eadafcbcf502..516e0fda74bec9 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -775,19 +775,23 @@ static int vfio_pci_count_devs(struct pci_dev *pdev, void *data)
 }
 
 struct vfio_pci_fill_info {
-	int max;
-	int cur;
-	struct vfio_pci_dependent_device *devices;
+	struct vfio_pci_dependent_device __user *devices;
+	struct vfio_pci_dependent_device __user *devices_end;
 	struct vfio_device *vdev;
 	u32 flags;
 };
 
 static int vfio_pci_fill_devs(struct pci_dev *pdev, void *data)
 {
+	struct vfio_pci_dependent_device info = {
+		.segment = pci_domain_nr(pdev->bus),
+		.bus = pdev->bus->number,
+		.devfn = pdev->devfn,
+	};
 	struct vfio_pci_fill_info *fill = data;
 
-	if (fill->cur == fill->max)
-		return -EAGAIN; /* Something changed, try again */
+	if (fill->devices >= fill->devices_end)
+		return -ENOSPC;
 
 	if (fill->flags & VFIO_PCI_HOT_RESET_FLAG_DEV_ID) {
 		struct iommufd_ctx *iommufd = vfio_iommufd_device_ictx(fill->vdev);
@@ -800,12 +804,12 @@ static int vfio_pci_fill_devs(struct pci_dev *pdev, void *data)
 		 */
 		vdev = vfio_find_device_in_devset(dev_set, &pdev->dev);
 		if (!vdev)
-			fill->devices[fill->cur].devid = VFIO_PCI_DEVID_NOT_OWNED;
+			info.devid = VFIO_PCI_DEVID_NOT_OWNED;
 		else
-			fill->devices[fill->cur].devid =
-				vfio_iommufd_device_hot_reset_devid(vdev, iommufd);
+			info.devid = vfio_iommufd_device_hot_reset_devid(
+				vdev, iommufd);
 		/* If devid is VFIO_PCI_DEVID_NOT_OWNED, clear owned flag. */
-		if (fill->devices[fill->cur].devid == VFIO_PCI_DEVID_NOT_OWNED)
+		if (info.devid == VFIO_PCI_DEVID_NOT_OWNED)
 			fill->flags &= ~VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED;
 	} else {
 		struct iommu_group *iommu_group;
@@ -814,13 +818,13 @@ static int vfio_pci_fill_devs(struct pci_dev *pdev, void *data)
 		if (!iommu_group)
 			return -EPERM; /* Cannot reset non-isolated devices */
 
-		fill->devices[fill->cur].group_id = iommu_group_id(iommu_group);
+		info.group_id = iommu_group_id(iommu_group);
 		iommu_group_put(iommu_group);
 	}
 
-	fill->devices[fill->cur].segment = pci_domain_nr(pdev->bus);
-	fill->devices[fill->cur].bus = pdev->bus->number;
-	fill->devices[fill->cur].devfn = pdev->devfn;
-	fill->cur++;
+
+	if (copy_to_user(fill->devices, &info, sizeof(info)))
+		return -EFAULT;
+	fill->devices++;
 	return 0;
 }
 
@@ -1212,8 +1216,7 @@ static int vfio_pci_ioctl_get_pci_hot_reset_info(
 	unsigned long minsz =
 		offsetofend(struct vfio_pci_hot_reset_info, count);
 	struct vfio_pci_hot_reset_info hdr;
-	struct vfio_pci_fill_info fill = { 0 };
-	struct vfio_pci_dependent_device *devices = NULL;
+	struct vfio_pci_fill_info fill = {};
 	bool slot = false;
 	int ret = 0;
 
@@ -1231,29 +1234,9 @@ static int vfio_pci_ioctl_get_pci_hot_reset_info(
 	else if (pci_probe_reset_bus(vdev->pdev->bus))
 		return -ENODEV;
 
-	/* How many devices are affected? */
-	ret = vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_count_devs,
-					    &fill.max, slot);
-	if (ret)
-		return ret;
-
-	WARN_ON(!fill.max); /* Should always be at least one */
-
-	/*
-	 * If there's enough space, fill it now, otherwise return -ENOSPC and
-	 * the number of devices affected.
-	 */
-	if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
-		ret = -ENOSPC;
-		hdr.count = fill.max;
-		goto reset_info_exit;
-	}
-
-	devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
-	if (!devices)
-		return -ENOMEM;
-
-	fill.devices = devices;
+	fill.devices = arg->devices;
+	fill.devices_end = arg->devices +
+		(hdr.argsz - sizeof(hdr)) / sizeof(arg->devices[0]);
 	fill.vdev = &vdev->vdev;
 
 	if (vfio_device_cdev_opened(&vdev->vdev))
@@ -1264,29 +1247,14 @@ static int vfio_pci_ioctl_get_pci_hot_reset_info(
 	ret = vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_fill_devs,
 					    &fill, slot);
 	mutex_unlock(&vdev->vdev.dev_set->lock);
+	if (ret)
+		return ret;
 
-	/*
-	 * If a device was removed between counting and filling, we may come up
-	 * short of fill.max. If a device was added, we'll have a return of
-	 * -EAGAIN above.
-	 */
-	if (!ret) {
-		hdr.count = fill.cur;
-		hdr.flags = fill.flags;
-	}
-
-reset_info_exit:
+	hdr.count = fill.devices - arg->devices;
+	hdr.flags = fill.flags;
 	if (copy_to_user(arg, &hdr, minsz))
 		ret = -EFAULT;
-
-	if (!ret) {
-		if (copy_to_user(&arg->devices, devices,
-				 hdr.count * sizeof(*devices)))
-			ret = -EFAULT;
-	}
-
-	kfree(devices);
-	return ret;
+	return 0;
 }
 
 static int
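The diff above replaces the count-then-allocate scheme with a bounded fill: derive the buffer capacity from hdr.argsz, write each record as it is visited, fail with -ENOSPC when the caller-supplied space runs out, and compute the final hdr.count by pointer difference. That pattern can be exercised outside the kernel. The sketch below is a hypothetical user-space simulation — struct hdr, struct dep_dev, fill_one(), and fill_devs() are simplified stand-ins for the vfio uapi structures and kernel helpers, not the real ABI, and a plain assignment stands in for copy_to_user():

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for struct vfio_pci_hot_reset_info. */
struct hdr {
	unsigned int argsz;
	unsigned int flags;
	unsigned int count;
};

/* Hypothetical stand-in for struct vfio_pci_dependent_device. */
struct dep_dev {
	unsigned int segment;
	unsigned char bus;
	unsigned char devfn;
};

/* Mirrors the reworked struct vfio_pci_fill_info: a write cursor and an
 * end pointer instead of max/cur indices. */
struct fill_state {
	struct dep_dev *devices;
	struct dep_dev *devices_end;
};

/* One fill-loop iteration: bounds check first, then copy and advance.
 * The orientation of the comparison matters -- the cursor reaching the
 * end is the error case. */
static int fill_one(struct fill_state *fill, const struct dep_dev *info)
{
	if (fill->devices >= fill->devices_end)
		return -ENOSPC;
	*fill->devices = *info;	/* copy_to_user() in the kernel version */
	fill->devices++;
	return 0;
}

/* Derive capacity from argsz the way the patch does, fill up to n
 * records, and return the resulting count (pointer difference), or
 * -ENOSPC if the caller-sized buffer was too small. */
static long fill_devs(struct dep_dev *buf, unsigned int argsz, size_t n)
{
	struct fill_state fill;
	struct dep_dev info = { 0, 1, 0 };
	size_t i;
	int ret;

	fill.devices = buf;
	fill.devices_end = buf +
		(argsz - sizeof(struct hdr)) / sizeof(struct dep_dev);

	for (i = 0; i < n; i++) {
		info.devfn = (unsigned char)(i * 8);	/* fake devfn per device */
		ret = fill_one(&fill, &info);
		if (ret)
			return ret;
	}
	return fill.devices - buf;
}
```

A caller that gets -ENOSPC grows its buffer (i.e. retries with a larger argsz) and calls again, which is the usual vfio argsz negotiation; the single bounds check in fill_one() also removes the count/fill race the old -EAGAIN retry existed to catch.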