linux-kernel.vger.kernel.org archive mirror
From: "Thomas Hellström (VMware)" <thomas_os@shipmail.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: "Christian König" <christian.koenig@amd.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: dma coherent memory user-space maps
Date: Mon, 4 Nov 2019 07:58:30 +0100	[thread overview]
Message-ID: <f7e76770-e716-6bcb-6b33-8f156e51ee2e@shipmail.org> (raw)
In-Reply-To: <7d5f8ec3-41a2-2ceb-aaa1-0bf3aa03d9a1@shipmail.org>

On 11/4/19 7:38 AM, Thomas Hellström (VMware) wrote:
> Hi, Christoph,
>
> On 10/31/19 10:54 PM, Christoph Hellwig wrote:
>> Hi Thomas,
>>
>> sorry for the delay.  I've been travelling way too much lately and had
>> a hard time keeping up.
>>
>> On Tue, Oct 08, 2019 at 02:34:17PM +0200, Thomas Hellström (VMware) 
>> wrote:
>>> /* Obtain struct dma_pfn pointers from a dma coherent allocation */
>>> int dma_get_dpfns(struct device *dev, void *cpu_addr, dma_addr_t 
>>> dma_addr,
>>>            pgoff_t offset, pgoff_t num, dma_pfn_t dpfns[]);
>>>
>>> I figure, for most if not all architectures we could use an ordinary 
>>> pfn as
>>> dma_pfn_t, but the dma layer would still have control over how those 
>>> pfns
>>> are obtained and how they are used in the kernel's mapping APIs.
>>>
>>> If so, I could start looking at this, time permitting,  for the 
>>> cases where
>>> the pfn can be obtained from the kernel address or from
>>> arch_dma_coherent_to_pfn(), and also the needed work to have a tailored
>>> vmap_pfn().
>> I'm not sure that infrastructure is all that helpful unfortunately, even
>> if it ended up working.  The problem with the 'coherent' DMA mappings
>> is that they have a few different backends.  For architectures that
>> are DMA coherent everything is easy and we use the normal page
>> allocator, and your above is trivially doable as wrappers around the
>> existing functionality.  Others remap PTEs to be uncached, either
>> in-place or using vmap, and the remaining ones use weird special
>> allocators for which almost everything we can normally do in the VM
>> will fail.
>
> Hmm, yes, I was hoping we could hide that behind dma_pfn_t and the 
> interface, so that non-trivial backends could define dma_pfn_t as 
> needed and, where necessary, provide their own implementations of the 
> interface functions. The interface was spec'ed from the user's (TTM) 
> point of view, assuming that with a page-prot and an opaque dma_pfn_t 
> we'd be able to support most non-trivial backends, but perhaps that's 
> not the case?
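>
> For the trivial dma-coherent backend, I was imagining the wrapper could 
> be as simple as something like this (a sketch only, taking dma_pfn_t to 
> be a plain pfn; names and details are just for illustration):
>
> /*
>  * Sketch: assumes the trivial backend where cpu_addr is an ordinary
>  * kernel virtual address backed by the page allocator, so the pfns
>  * are simply consecutive from the start of the allocation.
>  */
> typedef unsigned long dma_pfn_t;
>
> int dma_get_dpfns(struct device *dev, void *cpu_addr, dma_addr_t dma_addr,
> 		  pgoff_t offset, pgoff_t num, dma_pfn_t dpfns[])
> {
> 	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr)) + offset;
> 	pgoff_t i;
>
> 	for (i = 0; i < num; ++i)
> 		dpfns[i] = pfn + i;
>
> 	return 0;
> }
>
> The non-trivial backends would then provide their own dma_pfn_t 
> representation and implementation behind the same prototype.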
>
>>
>> I promised Christian an uncached DMA allocator a while ago, and still
>> haven't finished that either unfortunately.  But based on looking at
>> the x86 pageattr code I'm now firmly down the road of using the
>> set_memory_* helpers that change the PTE attributes in place, as
>> nothing else can actually work on x86, which doesn't allow
>> aliasing of PTEs with different caching attributes.  The arm64 folks
>> also would prefer in-place remapping even if they don't support it
>> yet, and that is something the i915 code already does in a somewhat
>> hacky way, and something the msm drm driver wants.  So I decided to
>> come up with an API that gives back 'coherent' pages on the
>> architectures that support it and otherwise just fail.
>>
>> Do you care about architectures other than x86 and arm64?  If not I'll
>> hopefully have something for you soon.
>
> For VMware we only care about x86 and arm64, but I think Christian 
> needs to fill in here.
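
For reference, the in-place x86 approach would, as I understand it, look
roughly like the below, using the existing set_memory_* helpers (just my
sketch of the idea, not your actual implementation):

/*
 * Sketch: allocate ordinary pages and change their kernel-map caching
 * attributes in place, so no aliased mapping with conflicting
 * attributes is ever created.
 */
struct page *alloc_uncached_pages(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return NULL;

	if (set_memory_uc((unsigned long)page_address(page), 1 << order)) {
		__free_pages(page, order);
		return NULL;
	}

	return page;
}

void free_uncached_pages(struct page *page, unsigned int order)
{
	/* Restore write-back attributes before the pages go back. */
	set_memory_wb((unsigned long)page_address(page), 1 << order);
	__free_pages(page, order);
}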

Also, for VMware the most important missing piece of functionality is 
vmap() of a combined set of coherent memory allocations, since TTM 
buffer objects, when using coherent memory, are built by coalescing 
coherent allocations from a pool.
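
From TTM's side, and assuming each allocation could be resolved to a
struct page (which is exactly what doesn't hold for all backends), a
sketch of what we'd want would be (names hypothetical):

/*
 * Sketch: give one contiguous kernel virtual mapping to a buffer
 * object built from many page-sized coherent allocations.
 */
void *ttm_vmap_coherent(void **cpu_addrs, unsigned int count, pgprot_t prot)
{
	struct page **pages;
	unsigned int i;
	void *vaddr;

	pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < count; ++i)
		pages[i] = virt_to_page(cpu_addrs[i]);

	vaddr = vmap(pages, count, VM_MAP, prot);
	kfree(pages);
	return vaddr;
}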

Thanks,
/Thomas


>
> Thanks,
>
> Thomas
>


Thread overview: 7+ messages
2019-10-08 12:34 dma coherent memory user-space maps Thomas Hellström (VMware)
2019-10-21 12:26 ` Thomas Hellström (VMware)
2019-10-23  4:04   ` Christoph Hellwig
2019-10-31 21:54 ` Christoph Hellwig
2019-11-04  6:38   ` Thomas Hellström (VMware)
2019-11-04  6:58     ` Thomas Hellström (VMware) [this message]
2019-11-04 11:29       ` Koenig, Christian
