From: "Liu, Yi L" <yi.l.liu@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>,
"jean-philippe@linaro.org" <jean-philippe@linaro.org>
Cc: "eric.auger@redhat.com" <eric.auger@redhat.com>,
"Tian, Kevin" <kevin.tian@intel.com>,
"jacob.jun.pan@linux.intel.com" <jacob.jun.pan@linux.intel.com>,
"joro@8bytes.org" <joro@8bytes.org>,
"Raj, Ashok" <ashok.raj@intel.com>,
"Tian, Jun J" <jun.j.tian@intel.com>,
"Sun, Yi Y" <yi.y.sun@intel.com>,
"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
"peterx@redhat.com" <peterx@redhat.com>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Wu, Hao" <hao.wu@intel.com>
Subject: RE: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
Date: Sat, 4 Apr 2020 10:28:43 +0000
Message-ID: <A2975661238FB949B60364EF0F2C25743A221B05@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <20200403121102.255f069c@w520.home>
Hi Alex,
> From: Alex Williamson <alex.williamson@redhat.com>
> Sent: Saturday, April 4, 2020 2:11 AM
> To: Liu, Yi L <yi.l.liu@intel.com>
> Subject: Re: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
>
> On Fri, 3 Apr 2020 13:30:49 +0000
> "Liu, Yi L" <yi.l.liu@intel.com> wrote:
>
> > Hi Alex,
> >
> > > From: Alex Williamson <alex.williamson@redhat.com>
> > > Sent: Friday, April 3, 2020 3:57 AM
> > > To: Liu, Yi L <yi.l.liu@intel.com>
> > >
> > > On Sun, 22 Mar 2020 05:32:03 -0700
> > > "Liu, Yi L" <yi.l.liu@intel.com> wrote:
> > >
> > > > From: Liu Yi L <yi.l.liu@intel.com>
> > > >
> > > > VFIO_TYPE1_NESTING_IOMMU is an IOMMU type which is backed by
> > > > hardware IOMMUs that support nesting DMA translation (a.k.a. dual
> > > > stage address translation). For such hardware IOMMUs, there are
> > > > two stages/levels of address translation, and software may let
> > > > userspace/VM own the first-level/stage-1 translation structures.
> > > > An example of such usage is vSVA (virtual Shared Virtual
> > > > Addressing). The VM owns the first-level/stage-1 translation
> > > > structures and binds them to the host; the hardware IOMMU then
> > > > utilizes nesting translation when doing DMA translation for the
> > > > devices behind such a hardware IOMMU.
> > > >
> > > > This patch adds vfio support for binding the guest translation
> > > > (a.k.a. stage 1) structure to the host iommu. For
> > > > VFIO_TYPE1_NESTING_IOMMU, not only is binding the guest page
> > > > table needed, it is also required to expose an interface to the
> > > > guest for iommu cache invalidation when the guest modifies the
> > > > first-level/stage-1 translation structures, since hardware needs
> > > > to be notified to flush stale iotlbs. This will be introduced in
> > > > the next patch.
> > > >
> > > > In this patch, guest page table bind and unbind are done using
> > > > the flags VFIO_IOMMU_BIND_GUEST_PGTBL and
> > > > VFIO_IOMMU_UNBIND_GUEST_PGTBL under the IOCTL VFIO_IOMMU_BIND.
> > > > The bind/unbind data are conveyed in struct
> > > > iommu_gpasid_bind_data. Before binding a guest page table to the
> > > > host, the VM must already have a PASID allocated by the host via
> > > > VFIO_IOMMU_PASID_REQUEST.
> > > >
> > > > Binding guest translation structures (here, the guest page
> > > > table) to the host is the first step to set up vSVA (Virtual
> > > > Shared Virtual Addressing).
> > > >
> > > > Cc: Kevin Tian <kevin.tian@intel.com>
> > > > CC: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > > > Cc: Alex Williamson <alex.williamson@redhat.com>
> > > > Cc: Eric Auger <eric.auger@redhat.com>
> > > > Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > > Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> > > > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > > > ---
> > > >  drivers/vfio/vfio_iommu_type1.c | 158 ++++++++++++++++++++++++++++++++++++++++
> > > >  include/uapi/linux/vfio.h       |  46 ++++++++++++
> > > >  2 files changed, 204 insertions(+)
> > > >
> > > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > > index 82a9e0b..a877747 100644
> > > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > > @@ -130,6 +130,33 @@ struct vfio_regions {
> > > > #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
> > > > (!list_empty(&iommu->domain_list))
> > > >
> > > > +struct domain_capsule {
> > > > + struct iommu_domain *domain;
> > > > + void *data;
> > > > +};
> > > > +
> > > > +/* iommu->lock must be held */
> > > > +static int vfio_iommu_for_each_dev(struct vfio_iommu *iommu,
> > > > + int (*fn)(struct device *dev, void *data),
> > > > + void *data)
> > > > +{
> > > > + struct domain_capsule dc = {.data = data};
> > > > + struct vfio_domain *d;
> > > > + struct vfio_group *g;
> > > > + int ret = 0;
> > > > +
> > > > + list_for_each_entry(d, &iommu->domain_list, next) {
> > > > + dc.domain = d->domain;
> > > > + list_for_each_entry(g, &d->group_list, next) {
> > > > + ret = iommu_group_for_each_dev(g->iommu_group,
> > > > + &dc, fn);
> > > > + if (ret)
> > > > + break;
> > > > + }
> > > > + }
> > > > + return ret;
> > > > +}
> > > > +
> > > > static int put_pfn(unsigned long pfn, int prot);
> > > >
> > > > /*
> > > > @@ -2314,6 +2341,88 @@ static int vfio_iommu_info_add_nesting_cap(struct vfio_iommu *iommu,
> > > >  	return 0;
> > > >  }
> > > >
> > > > +static int vfio_bind_gpasid_fn(struct device *dev, void *data)
> > > > +{
> > > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
> > > > +	struct iommu_gpasid_bind_data *gbind_data =
> > > > +		(struct iommu_gpasid_bind_data *) dc->data;
> > > > +
> > > > +	return iommu_sva_bind_gpasid(dc->domain, dev, gbind_data);
> > > > +}
> > > > +
> > > > +static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
> > > > +{
> > > > + struct domain_capsule *dc = (struct domain_capsule *)data;
> > > > + struct iommu_gpasid_bind_data *gbind_data =
> > > > + (struct iommu_gpasid_bind_data *) dc->data;
> > > > +
> > > > + return iommu_sva_unbind_gpasid(dc->domain, dev,
> > > > + gbind_data->hpasid);
> > > > +}
> > > > +
> > > > +/*
> > > > + * Unbind a specific gpasid; the caller of this function must
> > > > + * hold vfio_iommu->lock.
> > > > + */
> > > > +static long vfio_iommu_type1_do_guest_unbind(struct vfio_iommu *iommu,
> > > > +			struct iommu_gpasid_bind_data *gbind_data)
> > > > +{
> > > > +	return vfio_iommu_for_each_dev(iommu,
> > > > +				       vfio_unbind_gpasid_fn, gbind_data);
> > > > +}
> > > > +
> > > > +static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
> > > > +			struct iommu_gpasid_bind_data *gbind_data)
> > > > +{
> > > > + int ret = 0;
> > > > +
> > > > + mutex_lock(&iommu->lock);
> > > > + if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> > > > + ret = -EINVAL;
> > > > + goto out_unlock;
> > > > + }
> > > > +
> > > > + ret = vfio_iommu_for_each_dev(iommu,
> > > > + vfio_bind_gpasid_fn, gbind_data);
> > > > +	/*
> > > > +	 * If bind failed, it may not be a total failure. Some devices
> > > > +	 * within the iommu group may have been bound successfully.
> > > > +	 * Although we don't enable the pasid capability for
> > > > +	 * non-singleton iommu groups, an unbind operation is helpful
> > > > +	 * to ensure there is no partial binding for an iommu group.
> > >
> > > Where was the non-singleton group restriction done? I missed that.
> >
Hmm, it's missing. Thanks for spotting it. How about adding this check
in vfio_iommu_for_each_dev()? When the loop encounters a non-singleton
group, just skip it. The same applies to the cache invalidation path.
>
> I don't really understand the singleton issue, which is why I was surprised to see this
> since I didn't see a discussion previously.
> Skipping a non-singleton group seems like unpredictable behavior to the user though.
There was a discussion on SVA availability at the link below. It
concluded that SVA should only be supported for singleton groups. I
think binding guest page tables needs to follow the same rule.
https://patchwork.kernel.org/patch/10213877/
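To make the proposed skip behavior concrete, here is a small user-space
model of the loop (all structure and function names here are invented
for illustration; `group_device_count()` stands in for a kernel helper
such as `iommu_group_device_count()`, and this is not the actual VFIO
code):

```c
#include <stddef.h>

/* Toy model of an IOMMU group: just a device count and a bound flag. */
struct model_group {
	int ndevs;
	int bound;
};

/* Stand-in for a device-count helper (hypothetical name). */
static int group_device_count(const struct model_group *g)
{
	return g->ndevs;
}

static int bind_fn(struct model_group *g)
{
	g->bound = 1;
	return 0;
}

/*
 * Model of vfio_iommu_for_each_dev() with the proposed check:
 * non-singleton groups are skipped rather than bound.
 */
static int for_each_dev_singleton(struct model_group *groups, size_t n)
{
	int ret = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (group_device_count(&groups[i]) != 1)
			continue;	/* skip non-singleton groups */
		ret = bind_fn(&groups[i]);
		if (ret)
			break;
	}
	return ret;
}
```

With this shape, a two-device group is silently left unbound, which is
exactly the "unpredictable behavior to the user" concern: the user gets
success overall without knowing a group was skipped.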
> > > > + */
> > > > + if (ret)
> > > > +		/*
> > > > +		 * Undo all binds that already succeeded; no need to
> > > > +		 * check the return value here, since some devices
> > > > +		 * within the group may not have been bound
> > > > +		 * successfully when we reach this point.
> > > > +		 */
> > > > + vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
> > >
> > > However, the for_each_dev function stops when the callback function
> > > returns error, are we just assuming we stop at the same device as we
> > > faulted on the first time and that we traverse the same set of
> > > devices the second time? It seems strange to me that unbind should
> > > be able to fail.
> >
> > unbind can fail if a user attempts to unbind a pasid which does not
> > belong to it, or a pasid which has never been bound. Otherwise, I
> > don't see a reason for it to fail.
>
> Even if so, this doesn't address the first part of the question.
> If our for_each_dev() callback returns an error then the loop stops,
> and we can't be sure we've triggered it everywhere that it needs to
> be triggered.
Hmm, let me step back a little. In the code, the bind attempt uses
for_each_dev() to loop over the devices. If that fails, it uses
for_each_dev() again to do the unbind. Your question is whether the
second for_each_dev() call can undo the binds correctly, given that it
has no idea where the bind phase failed, is that right?
Actually, this is why I added the comment that there is no need to
check the return value of vfio_iommu_type1_do_guest_unbind().
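The intended rollback behavior can be modeled in user space like this
(a toy sketch with made-up names, not the kernel code itself). The key
assumption is that unbinding an already-unbound device is a harmless
no-op, so the rollback can simply walk the whole set again:

```c
#include <stddef.h>

/* Toy device: whether it is bound, and whether its bind is forced to fail. */
struct model_dev {
	int bound;
	int bind_fails;
};

static int bind_one(struct model_dev *d)
{
	if (d->bind_fails)
		return -1;
	d->bound = 1;
	return 0;
}

/* Unbind is a no-op for an already-unbound device and never fails. */
static int unbind_one(struct model_dev *d)
{
	d->bound = 0;
	return 0;
}

/*
 * Model of the bind path: stop at the first failure, then walk the
 * full set again to undo any partial binding. The unbind return
 * values are deliberately ignored, mirroring the patch's comment.
 */
static int bind_all(struct model_dev *devs, size_t n)
{
	size_t i;
	int ret = 0;

	for (i = 0; i < n; i++) {
		ret = bind_one(&devs[i]);
		if (ret)
			break;
	}
	if (ret) {
		for (i = 0; i < n; i++)
			unbind_one(&devs[i]);	/* best effort, errors ignored */
	}
	return ret;
}
```

So the rollback does not need to know where the bind loop stopped; it
only relies on unbind being safe for devices that were never bound.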
> There are also aspects of whether it's an error to unbind something
> that is not bound, because the result is still that the pasid is
> unbound, right?
Agreed. As you mentioned in the comment below, there is no need to fail
the unbind unless the user is trying to unbind a pasid which doesn't
belong to it.
> > > > +
> > > > +out_unlock:
> > > > + mutex_unlock(&iommu->lock);
> > > > + return ret;
> > > > +}
> > > > +
> > > > +static long vfio_iommu_type1_unbind_gpasid(struct vfio_iommu *iommu,
> > > > +			struct iommu_gpasid_bind_data *gbind_data)
> > > > +{
> > > > + int ret = 0;
> > > > +
> > > > + mutex_lock(&iommu->lock);
> > > > + if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> > > > + ret = -EINVAL;
> > > > + goto out_unlock;
> > > > + }
> > > > +
> > > > + ret = vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
> > >
> > > How is a user supposed to respond to their unbind failing?
> >
> > Only if it's a malicious unbind (e.g. unbinding a not-yet-bound
> > pasid, or unbinding a pasid which doesn't belong to the current
> > user).
>
> And if it's not a malicious unbind? To me this is similar semantics to
> free() failing. Is there any remedy other than to abort? Thanks,
Got it. So if the user is trying to unbind a pasid which doesn't belong
to it, should the kernel return an error to the user, or just abort?
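One possible policy, sketched as a user-space model (the container
structure and helper names below are hypothetical, chosen only to
illustrate the semantics under discussion): reject with an error a
pasid the container does not own, but treat unbinding an owned-yet-
unbound pasid as a no-op success, free()-style:

```c
#include <stddef.h>
#include <errno.h>

/* Toy container: the pasids it owns and whether each is currently bound. */
struct model_container {
	int owned_pasids[4];
	int bound[4];
	size_t nowned;
};

static int find_pasid(const struct model_container *c, int pasid)
{
	size_t i;

	for (i = 0; i < c->nowned; i++)
		if (c->owned_pasids[i] == pasid)
			return (int)i;
	return -1;
}

/*
 * Possible unbind policy: error only for a pasid the container does
 * not own; unbinding an owned-but-unbound pasid succeeds as a no-op.
 */
static int model_unbind(struct model_container *c, int pasid)
{
	int idx = find_pasid(c, pasid);

	if (idx < 0)
		return -EINVAL;	/* not this container's pasid */
	c->bound[idx] = 0;
	return 0;
}
```

Under this policy the user never has to handle a "real" unbind failure:
an error can only mean the request was invalid in the first place.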
Regards,
Yi Liu
> Alex
>
> > > > +
> > > > +out_unlock:
> > > > + mutex_unlock(&iommu->lock);
> > > > + return ret;
> > > > +}
> > > > +
> > > > static long vfio_iommu_type1_ioctl(void *iommu_data,
> > > > 				   unsigned int cmd, unsigned long arg)
> > > > {
> > > > @@ -2471,6 +2580,55 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > > >  	default:
> > > >  		return -EINVAL;
> > > >  	}
> > > > +
> > > > + } else if (cmd == VFIO_IOMMU_BIND) {
> > > > + struct vfio_iommu_type1_bind bind;
> > > > + u32 version;
> > > > + int data_size;
> > > > + void *gbind_data;
> > > > + int ret;
> > > > +
> > > > + minsz = offsetofend(struct vfio_iommu_type1_bind, flags);
> > > > +
> > > > + if (copy_from_user(&bind, (void __user *)arg, minsz))
> > > > + return -EFAULT;
> > > > +
> > > > + if (bind.argsz < minsz)
> > > > + return -EINVAL;
> > > > +
> > > > + /* Get the version of struct iommu_gpasid_bind_data */
> > > > + if (copy_from_user(&version,
> > > > + (void __user *) (arg + minsz),
> > > > + sizeof(version)))
> > > > + return -EFAULT;
> > >
> > > Why are we copying things from beyond the size we've validated
> > > that the user has provided, again?
> >
> > Let me wait for the result in Jacob's thread below. It looks like we
> > need a decision from you and Joerg. If using argsz is good, then I
> > guess we don't need the version-to-size mapping, right? Actually, the
> > version-to-size mapping was added to ensure vfio copies data
> > correctly.
> > https://lkml.org/lkml/2020/4/2/876
> >
> > > > +
> > > > + data_size = iommu_uapi_get_data_size(
> > > > + IOMMU_UAPI_BIND_GPASID, version);
> > > > + gbind_data = kzalloc(data_size, GFP_KERNEL);
> > > > + if (!gbind_data)
> > > > + return -ENOMEM;
> > > > +
> > > > + if (copy_from_user(gbind_data,
> > > > + (void __user *) (arg + minsz), data_size)) {
> > > > + kfree(gbind_data);
> > > > + return -EFAULT;
> > > > + }
> > >
> > > And again. argsz isn't just for minsz.
> > >
> > > > +
> > > > + switch (bind.flags & VFIO_IOMMU_BIND_MASK) {
> > > > + case VFIO_IOMMU_BIND_GUEST_PGTBL:
> > > > + ret = vfio_iommu_type1_bind_gpasid(iommu,
> > > > + gbind_data);
> > > > + break;
> > > > + case VFIO_IOMMU_UNBIND_GUEST_PGTBL:
> > > > + ret = vfio_iommu_type1_unbind_gpasid(iommu,
> > > > + gbind_data);
> > > > + break;
> > > > + default:
> > > > + ret = -EINVAL;
> > > > + break;
> > > > + }
> > > > + kfree(gbind_data);
> > > > + return ret;
> > > > }
> > > >
> > > > return -ENOTTY;
> > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > index ebeaf3e..2235bc6 100644
> > > > --- a/include/uapi/linux/vfio.h
> > > > +++ b/include/uapi/linux/vfio.h
> > > > @@ -14,6 +14,7 @@
> > > >
> > > > #include <linux/types.h>
> > > > #include <linux/ioctl.h>
> > > > +#include <linux/iommu.h>
> > > >
> > > > #define VFIO_API_VERSION 0
> > > >
> > > > @@ -853,6 +854,51 @@ struct vfio_iommu_type1_pasid_request {
> > > > */
> > > > #define VFIO_IOMMU_PASID_REQUEST _IO(VFIO_TYPE, VFIO_BASE + 22)
> > > >
> > > > +/**
> > > > + * Supported flags:
> > > > + *	- VFIO_IOMMU_BIND_GUEST_PGTBL: bind guest page tables to
> > > > + *	  host for nesting type IOMMUs. In the @data field it takes
> > > > + *	  struct iommu_gpasid_bind_data.
> > > > + *	- VFIO_IOMMU_UNBIND_GUEST_PGTBL: undo a bind guest page
> > > > + *	  table operation invoked by VFIO_IOMMU_BIND_GUEST_PGTBL.
> > >
> > > This must require iommu_gpasid_bind_data in the data field as well,
> > > right?
> >
> > yes.
> >
> > Regards,
> > Yi Liu
> >