From: David Gibson <david@gibson.dropbear.id.au>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
Joerg Roedel <joro@8bytes.org>, Jason Gunthorpe <jgg@nvidia.com>,
Lu Baolu <baolu.lu@linux.intel.com>,
David Woodhouse <dwmw2@infradead.org>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"Alex Williamson (alex.williamson@redhat.com)"
<alex.williamson@redhat.com>, Jason Wang <jasowang@redhat.com>,
Eric Auger <eric.auger@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
"Raj, Ashok" <ashok.raj@intel.com>,
"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
"Jiang, Dave" <dave.jiang@intel.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Kirti Wankhede <kwankhede@nvidia.com>,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: [RFC] /dev/ioasid uAPI proposal
Date: Tue, 8 Jun 2021 15:49:25 +1000 [thread overview]
Message-ID: <YL8E5UoT0tndJZfh@yekko> (raw)
In-Reply-To: <MWHPR11MB1886081DE676B0130D3D19258C3C9@MWHPR11MB1886.namprd11.prod.outlook.com>
On Thu, Jun 03, 2021 at 07:17:23AM +0000, Tian, Kevin wrote:
> > From: David Gibson <david@gibson.dropbear.id.au>
> > Sent: Wednesday, June 2, 2021 2:15 PM
> >
> [...]
> > > An I/O address space takes effect in the IOMMU only after it is attached
> > > to a device. The device in the /dev/ioasid context always refers to a
> > > physical one or 'pdev' (PF or VF).
> >
> > What you mean by "physical" device here isn't really clear - VFs
> > aren't really physical devices, and the PF/VF terminology also doesn't
> > extend to non-PCI devices (which I think we want to consider for the
> > API, even if we're not implementing it any time soon).
>
> Yes, it's not very clear, and more in a PCI context to simplify the
> description. A "physical" one here means a PCI endpoint function
> which has a unique RID. It's mainly to differentiate it from the later
> mdev/subdevice case which uses both RID+PASID. Naming is always a
> hard exercise for me... Possibly I'll just use device vs. subdevice in
> future versions.
>
> >
> > Now, it's clear that we can't program things into the IOMMU before
> > attaching a device - we might not even know which IOMMU to use.
>
> yes
>
> > However, I'm not sure if it's wise to automatically make the AS "real"
> > as soon as we attach a device:
> >
> > * If we're going to attach a whole bunch of devices, could we (for at
> > least some IOMMU models) end up doing a lot of work which then has
> > to be re-done for each extra device we attach?
>
> Which extra work did you specifically refer to? Each attach just implies
> writing the base address of the I/O page table to the IOMMU structure
> corresponding to this device (either a per-device entry or a
> per-device+PASID entry).
>
> and generally device attach should not be in a hot path.
>
> >
> > * With kernel managed IO page tables could attaching a second device
> > (at least on some IOMMU models) require some operation which would
> > require discarding those tables? e.g. if the second device somehow
> > forces a different IO page size
>
> Then the attach should fail and the user should create another IOASID
> for the second device.
Couldn't this make things weirdly order-dependent, though? If device A
has strictly more capabilities than device B, then attaching A then B
will be fine, but B then A will force the user to create a new IOASID.
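To make that concrete, here's a toy model (purely illustrative, not the proposed uAPI; the page-size constraint stands in for whatever capability the IOMMU model fixes at first attach): if the IOASID's configuration is pinned by whichever device attaches first, the same pair of devices can succeed or fail depending on attach order.

```python
class Device:
    def __init__(self, name, page_sizes, preferred):
        self.name = name
        self.page_sizes = page_sizes  # page sizes this device's IOMMU supports
        self.preferred = preferred    # size it would pick if it attaches first

class Ioasid:
    """Toy IOASID: the page size is fixed by whichever device attaches first."""
    def __init__(self):
        self.page_size = None

    def attach(self, dev):
        if self.page_size is None:
            self.page_size = dev.preferred
            return True
        # Changing page size now would mean discarding existing IO page
        # tables, so an incompatible attach fails instead.
        return self.page_size in dev.page_sizes

flexible = Device("flexible", {4096, 65536}, preferred=4096)
rigid    = Device("rigid", {65536}, preferred=65536)

# Order 1: flexible first -> IOASID pinned to 4KiB, rigid cannot attach.
ioasid = Ioasid()
assert ioasid.attach(flexible)
assert not ioasid.attach(rigid)   # user must create a second IOASID

# Order 2: rigid first -> 64KiB, which flexible also supports.
ioasid2 = Ioasid()
assert ioasid2.attach(rigid)
assert ioasid2.attach(flexible)   # same pair of devices, different outcome
```

Nothing here is in the proposal; it's just the ordering hazard reduced to a few lines.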
> > For that reason I wonder if we want some sort of explicit enable or
> > activate call. Device attaches would only be valid before it; map or
> > attach-pagetable calls would only be valid after it.
>
> I'm interested in learning a real example requiring explicit enable...
>
> >
> > > One I/O address space could be attached to multiple devices. In this case,
> > > the /dev/ioasid uAPI applies to all attached devices under the specified
> > > IOASID.
> > >
> > > Based on the underlying IOMMU capability, one device might be allowed
> > > to attach to multiple I/O address spaces, with DMAs accessing them by
> > > carrying different routing information. One of them is the default I/O
> > > address space, routed by PCI Requester ID (RID) or ARM Stream ID. The
> > > remaining ones are routed by RID + Process Address Space ID (PASID) or
> > > Stream+Substream ID. For simplicity the following context uses RID and
> > > PASID when talking about the routing information for I/O address spaces.
> >
> > I'm not really clear on how this interacts with nested ioasids. Would
> > you generally expect the RID+PASID IOASes to be children of the base
> > RID IOAS, or not?
>
> No. With Intel SIOV both the parent and the children could be RID+PASID,
> e.g. when one enables vSVA on an mdev.
Hm, ok. I really haven't understood how the PASIDs fit into this
then. I'll try again on v2.
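For reference, the RID vs. RID+PASID routing quoted above can be reduced to a toy lookup table (hypothetical names and numbers, not the actual uAPI): one device attaches a default address space keyed by RID alone, and further spaces keyed by RID+PASID.

```python
# Toy routing table: (RID, PASID) -> IOASID, with PASID=None meaning the
# RID-only default I/O address space.
table = {}

def attach(rid, ioasid, pasid=None):
    table[(rid, pasid)] = ioasid

def route(rid, pasid=None):
    return table[(rid, pasid)]

attach("00:02.0", ioasid=1)            # default space, RID-routed
attach("00:02.0", ioasid=2, pasid=7)   # second space, RID+PASID-routed

assert route("00:02.0") == 1
assert route("00:02.0", pasid=7) == 2
```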
> > If the PASID ASes are children of the RID AS, can we consider this not
> > as the device explicitly attaching to multiple IOASIDs, but instead
> > attaching to the parent IOASID with awareness of the child ones?
> >
> > > Device attachment is initiated through the passthrough framework uAPI
> > > (using VFIO for simplicity in the following context). VFIO is responsible
> > > for identifying the routing information and registering it with the ioasid
> > > driver when calling the ioasid attach helper function. It could be RID if
> > > the assigned device is a pdev (PF/VF) or RID+PASID if the device is
> > > mediated (mdev). In addition, the user might also provide its view of the
> > > virtual routing information (vPASID) in the attach call, e.g. when multiple
> > > user-managed I/O address spaces are attached to the vfio_device. In this
> > > case VFIO must figure out whether the vPASID should be used directly (for
> > > pdev) or converted to a kernel-allocated one (pPASID, for mdev) for
> > > physical routing (see section 4).
> > >
> > > A device must be bound to an IOASID FD before the attach operation can
> > > be conducted. This is also done through the VFIO uAPI. In this proposal
> > > one device should not be bound to multiple FDs; it's not clear that
> > > allowing it would gain anything except unnecessary complexity. But if
> > > others have a different view we can discuss further.
> > >
> > > VFIO must ensure its device composes DMAs with the routing information
> > > attached to the IOASID. For a pdev this naturally happens since the vPASID
> > > is directly programmed into the device by guest software. For an mdev this
> > > implies any guest operation carrying a vPASID on this device must be
> > > trapped into VFIO and then converted to a pPASID before being sent to the
> > > device. A detailed explanation of the PASID virtualization policies can be
> > > found in section 4.
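The pdev/mdev split described in the quoted paragraph might be sketched like this (a toy model with made-up names and a fake allocator, not VFIO code): a pdev's vPASID goes on the wire unmodified, while an mdev's trapped operations get the vPASID rewritten to a kernel-allocated pPASID.

```python
import itertools

_ppasid_alloc = itertools.count(100)  # stand-in for a kernel-wide PASID allocator

class VfioDevice:
    def __init__(self, is_mdev):
        self.is_mdev = is_mdev
        self.vpasid_to_ppasid = {}

    def attach(self, vpasid):
        if self.is_mdev:
            # mdev: convert to a kernel-allocated pPASID for physical routing
            self.vpasid_to_ppasid[vpasid] = next(_ppasid_alloc)
        else:
            # pdev: guest software programs the vPASID directly, no conversion
            self.vpasid_to_ppasid[vpasid] = vpasid

    def wire_pasid(self, vpasid):
        # For an mdev this is where a trapped guest operation is rewritten.
        return self.vpasid_to_ppasid[vpasid]

pdev, mdev = VfioDevice(is_mdev=False), VfioDevice(is_mdev=True)
pdev.attach(5)
mdev.attach(5)
assert pdev.wire_pasid(5) == 5     # passthrough
assert mdev.wire_pasid(5) == 100   # converted to a pPASID
```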
> > >
> > > Modern devices may support a scalable workload submission interface
> > > based on the PCI DMWr capability, allowing a single work queue to access
> > > multiple I/O address spaces. One example is Intel ENQCMD, with the
> > > PASID saved in a CPU MSR and carried in the instruction payload when
> > > sent out to the device. A single work queue shared by multiple processes
> > > can then compose DMAs carrying different PASIDs.
> >
> > Is the assumption here that the processes share the IOASID FD
> > instance, but not memory?
>
> I didn't get this question
Ok, stepping back, what exactly do you mean by "processes" above? Do
you mean Linux processes, or something else?
> > > When executing ENQCMD in the guest, the CPU MSR includes a vPASID
> > > which, if targeting an mdev, must be converted to a pPASID before being
> > > sent on the wire. Intel CPUs provide a hardware PASID translation
> > > capability for auto-conversion in the fast path. The user is expected to
> > > set up the PASID mapping through the KVM uAPI, with information about
> > > {vpasid, ioasid_fd, ioasid}. The ioasid driver provides a helper function
> > > for KVM to figure out the actual pPASID given an IOASID.
> > >
> > > With the above design the /dev/ioasid uAPI is all about I/O address
> > > spaces. It doesn't include any device routing information, which is only
> > > indirectly registered with the ioasid driver through the VFIO uAPI. For
> > > example, I/O page faults are always reported to userspace per IOASID,
> > > although they are physically reported per device (RID+PASID). If there is
> > > a need to further relay such a fault into the guest, the user is
> > > responsible for identifying the device attached to this IOASID (picking
> > > one arbitrarily if multiple devices are attached) and then generating a
> > > per-device virtual I/O page fault into the guest. Similarly the iotlb
> > > invalidation uAPI describes the granularity in the I/O address space (all,
> > > or a range), different from the underlying IOMMU semantics (domain-wide,
> > > PASID-wide, range-based).
> > >
> > > I/O page tables routed through PASID are installed in a per-RID PASID
> > > table structure. Some platforms implement the PASID table in guest
> > > physical address (GPA) space, expecting it to be managed by the guest.
> > > The guest PASID table is bound to the IOMMU also by attaching to an
> > > IOASID, representing the per-RID vPASID space.
> >
> > Do we need to consider two management modes here, much as we have for
> > the pagetables themselves: either kernel-managed, in which we have
> > explicit calls to bind a vPASID to a parent PASID, or user-managed, in
> > which case we register a table in some format?
>
> yes, this is related to PASID virtualization in section 4. And based on
> a suggestion from Jason, the vPASID requirement will be reported to
> userspace via the per-device reporting interface.
>
> Thanks
> Kevin
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson