From: Nicolas Dufresne <nicolas@ndufresne.ca>
To: "Christian König" <christian.koenig@amd.com>, "Thomas Zimmermann" <tzimmermann@suse.de>, linux-media <linux-media@vger.kernel.org>, dri-devel <dri-devel@lists.freedesktop.org>, linaro-mm-sig@lists.linaro.org, lkml <linux-kernel@vger.kernel.org>
Cc: "Sharma, Shashank" <Shashank.Sharma@amd.com>
Subject: Re: DMA-buf and uncached system memory
Date: Mon, 15 Feb 2021 15:46:39 -0500
Message-ID: <6e107811295b7fdd96525453ea5587ee6adc1b06.camel@ndufresne.ca>
In-Reply-To: <80c42ce0-6df1-71ab-81ec-e46cc56840ba@amd.com>

On Monday, 15 February 2021 at 13:10 +0100, Christian König wrote:
> On 15.02.21 at 13:00, Thomas Zimmermann wrote:
> > Hi
> >
> > On 15.02.21 at 10:49, Thomas Zimmermann wrote:
> > > Hi
> > >
> > > On 15.02.21 at 09:58, Christian König wrote:
> > > > Hi guys,
> > > >
> > > > we are currently working on Freesync and direct scan-out from system
> > > > memory on AMD APUs in A+A laptops.
> > > >
> > > > One problem we stumbled over is that our display hardware needs to
> > > > scan out from uncached system memory, and we currently don't have a
> > > > way to communicate that through DMA-buf.
> >
> > Re-reading this paragraph, it sounds more as if you want to let the
> > exporter know where to move the buffer. Is this another case of the
> > missing-pin-flag problem?
>
> No, your original interpretation was correct. Maybe my writing is a bit
> unspecific.
>
> The real underlying issue is that our display hardware has a problem
> with latency when accessing system memory.
>
> So the question is whether that also applies to, for example, Intel
> hardware or other devices as well, or whether it is just something
> AMD-specific?

I do believe the answer is yes: Intel display hardware has a similar
issue with latency, and hence also requires uncached memory.

> Regards,
> Christian.
> > Best regards
> > Thomas
> >
> > > > For our specific use case at hand we are going to implement
> > > > something driver-specific, but the question is: should we have
> > > > something more generic for this?
> > >
> > > For vmap operations, we return the address as struct dma_buf_map,
> > > which contains additional information about the memory buffer. In
> > > vram helpers, we have the interface drm_gem_vram_offset() that
> > > returns the offset of the GPU device memory.
> > >
> > > Would it be feasible to combine both concepts into a dma-buf
> > > interface that returns the device-memory offset plus the additional
> > > caching flag?
> > >
> > > There'd be a structure and a getter function returning the structure.
> > >
> > > struct dma_buf_offset {
> > >     bool cached;
> > >     u64 address;
> > > };
> > >
> > > /* return offset in *off */
> > > int dma_buf_offset(struct dma_buf *buf, struct dma_buf_offset *off);
> > >
> > > Whatever settings are returned by dma_buf_offset() are valid while
> > > the dma_buf is pinned.
> > >
> > > Best regards
> > > Thomas
> > >
> > > > After all, the system memory access pattern is a PCIe extension and
> > > > as such something generic.
> > > >
> > > > Regards,
> > > > Christian.
> > > > _______________________________________________
> > > > dri-devel mailing list
> > > > dri-devel@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel