From: Auger Eric <eric.auger@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: eric.auger.pro@gmail.com, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, joro@8bytes.org,
	jacob.jun.pan@linux.intel.com, yi.l.liu@linux.intel.com,
	jean-philippe.brucker@arm.com, will.deacon@arm.com,
	robin.murphy@arm.com, kevin.tian@intel.com, ashok.raj@intel.com,
	marc.zyngier@arm.com, christoffer.dall@arm.com,
	peter.maydell@linaro.org, vincent.stehle@arm.com
Subject: Re: [PATCH v6 09/22] vfio: VFIO_IOMMU_BIND/UNBIND_MSI
Date: Fri, 22 Mar 2019 10:30:02 +0100	[thread overview]
Message-ID: <16931d58-9c88-8cfb-a392-408ea7afdf16@redhat.com> (raw)
In-Reply-To: <20190321170159.38358f38@x1.home>

Hi Alex,
On 3/22/19 12:01 AM, Alex Williamson wrote:
> On Sun, 17 Mar 2019 18:22:19 +0100
> Eric Auger <eric.auger@redhat.com> wrote:
> 
>> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
>> to pass/withdraw the guest MSI binding to/from the host.
>>
>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>
>> ---
>> v3 -> v4:
>> - add UNBIND
>> - unwind on BIND error
>>
>> v2 -> v3:
>> - adapt to new proto of bind_guest_msi
>> - directly use vfio_iommu_for_each_dev
>>
>> v1 -> v2:
>> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
>> ---
>>  drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
>>  include/uapi/linux/vfio.h       | 29 +++++++++++++++++
>>  2 files changed, 87 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 12a40b9db6aa..66513679081b 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
>>  	return iommu_cache_invalidate(d, dev, &ustruct->info);
>>  }
>>  
>> +static int vfio_bind_msi_fn(struct device *dev, void *data)
>> +{
>> +	struct vfio_iommu_type1_bind_msi *ustruct =
>> +		(struct vfio_iommu_type1_bind_msi *)data;
>> +	struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>> +
>> +	return iommu_bind_guest_msi(d, dev, ustruct->iova,
>> +				    ustruct->gpa, ustruct->size);
>> +}
>> +
>> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
>> +{
>> +	dma_addr_t *iova = (dma_addr_t *)data;
>> +	struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> 
> Same as previous, we can encapsulate domain in our own struct to avoid
> a lookup.
> 
>> +
>> +	iommu_unbind_guest_msi(d, dev, *iova);
> 
> Is it strange that iommu-core is exposing these interfaces at a device
> level if every one of them requires us to walk all the devices?  Thanks,

Hmm, this per-device API was devised in response to Robin's comments on

[RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.

"
But that then seems to reveal a somewhat bigger problem - if the callers
are simply registering IPAs, and relying on the ITS driver to grab an
entry and fill in a PA later, then how does either one know *which* PA
is supposed to belong to a given IPA in the case where you have multiple
devices with different ITS targets assigned to the same guest? (and if
it's possible to assume a guest will use per-device stage 1 mappings and
present it with a single vITS backed by multiple pITSes, I think things
start breaking even harder.)
"

However, looking back at the problem, I wonder whether there really was
an issue with the iommu_domain-based API.

If my understanding is correct, when assigned devices are protected by a
vIOMMU they necessarily end up in separate host iommu_domains, even if
they belong to the same iommu_domain on the guest, and there can only be
a single device in each such host iommu_domain.

If this is confirmed, there is an unambiguous association between one
physical iommu_domain, one device, one stage 1 mapping, and one physical
MSI controller.

I added the device handle to disambiguate those associations: the
gIOVA -> gDB mapping is registered against a device handle. Then, when
the host needs a stage 1 mapping for this device to build the nested
mapping towards the physical doorbell, it can easily grab the
gIOVA -> gDB stage 1 mapping registered for that device.

The correctness looks more obvious to me, at least.

Thanks

Eric

> 
> Alex
> 
>> +	return 0;
>> +}
>> +
>>  static long vfio_iommu_type1_ioctl(void *iommu_data,
>>  				   unsigned int cmd, unsigned long arg)
>>  {
>> @@ -1814,6 +1833,45 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>  					      vfio_cache_inv_fn);
>>  		mutex_unlock(&iommu->lock);
>>  		return ret;
>> +	} else if (cmd == VFIO_IOMMU_BIND_MSI) {
>> +		struct vfio_iommu_type1_bind_msi ustruct;
>> +		int ret;
>> +
>> +		minsz = offsetofend(struct vfio_iommu_type1_bind_msi,
>> +				    size);
>> +
>> +		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> +			return -EFAULT;
>> +
>> +		if (ustruct.argsz < minsz || ustruct.flags)
>> +			return -EINVAL;
>> +
>> +		mutex_lock(&iommu->lock);
>> +		ret = vfio_iommu_for_each_dev(iommu, &ustruct,
>> +					      vfio_bind_msi_fn);
>> +		if (ret)
>> +			vfio_iommu_for_each_dev(iommu, &ustruct.iova,
>> +						vfio_unbind_msi_fn);
>> +		mutex_unlock(&iommu->lock);
>> +		return ret;
>> +	} else if (cmd == VFIO_IOMMU_UNBIND_MSI) {
>> +		struct vfio_iommu_type1_unbind_msi ustruct;
>> +		int ret;
>> +
>> +		minsz = offsetofend(struct vfio_iommu_type1_unbind_msi,
>> +				    iova);
>> +
>> +		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> +			return -EFAULT;
>> +
>> +		if (ustruct.argsz < minsz || ustruct.flags)
>> +			return -EINVAL;
>> +
>> +		mutex_lock(&iommu->lock);
>> +		ret = vfio_iommu_for_each_dev(iommu, &ustruct.iova,
>> +					      vfio_unbind_msi_fn);
>> +		mutex_unlock(&iommu->lock);
>> +		return ret;
>>  	}
>>  
>>  	return -ENOTTY;
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 29f0ef2d805d..6763389b6adc 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -789,6 +789,35 @@ struct vfio_iommu_type1_cache_invalidate {
>>  };
>>  #define VFIO_IOMMU_CACHE_INVALIDATE      _IO(VFIO_TYPE, VFIO_BASE + 24)
>>  
>> +/**
>> + * VFIO_IOMMU_BIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 25,
>> + *			struct vfio_iommu_type1_bind_msi)
>> + *
>> + * Pass a stage 1 MSI doorbell mapping to the host so that this
>> + * latter can build a nested stage2 mapping
>> + */
>> +struct vfio_iommu_type1_bind_msi {
>> +	__u32   argsz;
>> +	__u32   flags;
>> +	__u64	iova;
>> +	__u64	gpa;
>> +	__u64	size;
>> +};
>> +#define VFIO_IOMMU_BIND_MSI      _IO(VFIO_TYPE, VFIO_BASE + 25)
>> +
>> +/**
>> + * VFIO_IOMMU_UNBIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 26,
>> + *			struct vfio_iommu_type1_unbind_msi)
>> + *
>> + * Unregister an MSI mapping
>> + */
>> +struct vfio_iommu_type1_unbind_msi {
>> +	__u32   argsz;
>> +	__u32   flags;
>> +	__u64	iova;
>> +};
>> +#define VFIO_IOMMU_UNBIND_MSI      _IO(VFIO_TYPE, VFIO_BASE + 26)
>> +
>>  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>>  
>>  /*
> 
