From: Lu Baolu <baolu.lu@linux.intel.com>
To: Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>
Cc: baolu.lu@linux.intel.com, David Woodhouse <dwmw2@infradead.org>,
	Joerg Roedel <joro@8bytes.org>,
	ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
	kevin.tian@intel.com, mika.westerberg@linux.intel.com,
	pengfei.xu@intel.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 02/10] swiotlb: Factor out slot allocation and free
Date: Thu, 2 May 2019 09:47:53 +0800	[thread overview]
Message-ID: <998eadf0-0435-1a6b-7234-71554d95bb70@linux.intel.com> (raw)
In-Reply-To: <c044c51a-d348-ca37-3eaa-5475e3fec6c9@arm.com>

Hi Robin,

On 4/30/19 5:53 PM, Robin Murphy wrote:
> On 30/04/2019 03:02, Lu Baolu wrote:
>> Hi Robin,
>>
>> On 4/29/19 7:06 PM, Robin Murphy wrote:
>>> On 29/04/2019 06:10, Lu Baolu wrote:
>>>> Hi Christoph,
>>>>
>>>> On 4/26/19 11:04 PM, Christoph Hellwig wrote:
>>>>> On Thu, Apr 25, 2019 at 10:07:19AM +0800, Lu Baolu wrote:
>>>>>> This is not VT-d specific. It's just how generic IOMMU works.
>>>>>>
>>>>>> Normally, the IOMMU works in paging mode. So if a driver issues DMA
>>>>>> with IOVA 0xAAAA0123, the IOMMU can remap it to physical address
>>>>>> 0xBBBB0123. But we should never expect the IOMMU to remap 0xAAAA0123
>>>>>> to physical address 0xBBBB0000. That's the reason why I said that the
>>>>>> IOMMU will not work there.
>>>>>
>>>>> Well, with the iommu it doesn't happen.  With swiotlb it obviously
>>>>> can happen, so drivers are fine with it.  Why would that suddenly
>>>>> become an issue when swiotlb is called from the iommu code?
>>>>>
>>>>
>>>> I would say IOMMU is DMA remapping, not DMA engine. :-)
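
Just to make the offset point concrete: with page-granular remapping only
the page frame changes, and the low bits of the IOVA pass straight
through, so a bounce buffer has to keep the same in-page offset. A rough
standalone sketch of the arithmetic (illustrative only, not the VT-d
code):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL
#define PAGE_MASK	(~(PAGE_SIZE - 1))

/* Page-granular remapping: swap the page frame, keep the offset. */
static uint64_t remap(uint64_t iova, uint64_t phys_page)
{
	return (phys_page & PAGE_MASK) | (iova & ~PAGE_MASK);
}

int main(void)
{
	/* IOVA 0xAAAA0123 backed by the page at 0xBBBB0000 can only end
	 * up at 0xBBBB0123; the device cannot be redirected to
	 * 0xBBBB0000 for that IOVA. */
	printf("0x%llx\n",
	       (unsigned long long)remap(0xAAAA0123ULL, 0xBBBB0000ULL));
	return 0;
}
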
>>>
>>> I'm not sure I really follow the issue here - if we're copying the 
>>> buffer to the bounce page(s) there's no conceptual difference from 
>>> copying it to SWIOTLB slot(s), so there should be no need to worry 
>>> about the original in-page offset.
>>>
>>> From the reply up-thread I guess you're trying to include an 
>>> optimisation to only copy the head and tail of the buffer if it spans 
>>> multiple pages, and directly map the ones in the middle, but AFAICS 
>>> that's going to tie you to also using strict mode for TLB 
>>> maintenance, which may not be a win overall depending on the balance 
>>> between invalidation bandwidth vs. memcpy bandwidth. At least if we 
>>> use standard SWIOTLB logic to always copy the whole thing, we should 
>>> be able to release the bounce pages via the flush queue to allow 
>>> 'safe' lazy unmaps.
>>>
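
For reference, the head/tail idea splits a multi-page buffer roughly as
below: only the partial first and last pages get bounced, while the whole
pages in between are mapped directly. A simplified standalone sketch
(illustrative arithmetic only, not the actual patch code):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL

static void split(uint64_t start, uint64_t size)
{
	uint64_t end = start + size;
	uint64_t head_end = (start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
	uint64_t tail_start = end & ~(PAGE_SIZE - 1);

	if (head_end >= end) {			/* fits in a single page */
		printf("bounce whole  [%#llx, %#llx)\n",
		       (unsigned long long)start, (unsigned long long)end);
		return;
	}
	if (start < head_end)
		printf("bounce head   [%#llx, %#llx)\n",
		       (unsigned long long)start, (unsigned long long)head_end);
	if (head_end < tail_start)
		printf("map directly  [%#llx, %#llx)\n",
		       (unsigned long long)head_end, (unsigned long long)tail_start);
	if (tail_start < end)
		printf("bounce tail   [%#llx, %#llx)\n",
		       (unsigned long long)tail_start, (unsigned long long)end);
}

int main(void)
{
	split(0xAAAA0123ULL, 3 * PAGE_SIZE);
	return 0;
}

As I read it, that is exactly where the strict-mode requirement comes
from: the directly mapped middle pages are the real buffer pages, so
their IOTLB entries have to be invalidated synchronously on unmap.
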
>>
>> With respect, even if we use the standard SWIOTLB logic, we still need
>> to use strict mode for TLB maintenance.
>>
>> Say some swiotlb slots are used by an untrusted device for bounce
>> buffering. When the device driver unmaps the IOVA, the slots are freed,
>> but the mapping is still cached in the IOTLB, so the untrusted device is
>> still able to access the slots. If the slots are then allocated to other
>> devices, the untrusted device can access the data buffers of those
>> devices.
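
The ordering is the crux of it; spelled out with stand-in helper names
(purely illustrative, not real swiotlb/IOMMU APIs):

#include <stdio.h>

static void flush_iotlb(void)        { printf("  IOTLB entry invalidated\n"); }
static void free_swiotlb_slots(void) { printf("  slots freed for reuse\n"); }

int main(void)
{
	printf("strict unmap:\n");
	flush_iotlb();		/* device can no longer reach the slots    */
	free_swiotlb_slots();	/* now safe to hand them to another device */

	printf("lazy unmap as it stands:\n");
	free_swiotlb_slots();	/* slots may be reallocated right away...  */
	flush_iotlb();		/* ...while the stale IOTLB entry still    */
				/* lets the untrusted device reach them    */
	return 0;
}
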
> 
> Sure, that's indeed how it would work right now - however since the 
> bounce pages will be freed and reused by the DMA API layer itself (at 
> the same level as the IOVAs) I see no technical reason why we couldn't 
> investigate deferred freeing as a future optimisation.

Yes, agreed.

Best regards,
Lu Baolu
