From: Serguei Sagalovitch <serguei.sagalovitch@amd.com>
To: "Christian König" <christian.koenig@amd.com>,
"Dan Williams" <dan.j.williams@intel.com>,
"Dave Hansen" <dave.hansen@linux.intel.com>,
"linux-nvdimm@lists.01.org" <linux-nvdimm@ml01.01.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"Kuehling, Felix" <Felix.Kuehling@amd.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"Sander, Ben" <ben.sander@amd.com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@amd.com>,
"Deucher, Alexander" <Alexander.Deucher@amd.com>,
"Blinzer, Paul" <Paul.Blinzer@amd.com>,
"Linux-media@vger.kernel.org" <Linux-media@vger.kernel.org>
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Date: Wed, 23 Nov 2016 14:27:06 -0500
Message-ID: <6ed2c861-7493-902b-bfbd-9c937a3efba3@amd.com>
In-Reply-To: <2a8a6582-f3de-5cda-0c6e-1c93774147e0@amd.com>
On 2016-11-23 03:51 AM, Christian König wrote:
> On 23.11.2016 at 08:49, Daniel Vetter wrote:
>> On Tue, Nov 22, 2016 at 01:21:03PM -0800, Dan Williams wrote:
>>> On Tue, Nov 22, 2016 at 1:03 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Tue, Nov 22, 2016 at 9:35 PM, Serguei Sagalovitch
>>>> <serguei.sagalovitch@amd.com> wrote:
>>>>> On 2016-11-22 03:10 PM, Daniel Vetter wrote:
>>>>>> On Tue, Nov 22, 2016 at 9:01 PM, Dan Williams
>>>>>> <dan.j.williams@intel.com>
>>>>>> wrote:
>>>>>>> On Tue, Nov 22, 2016 at 10:59 AM, Serguei Sagalovitch
>>>>>>> <serguei.sagalovitch@amd.com> wrote:
>>>>>>>> I personally like the "device-DAX" idea but my concerns are:
>>>>>>>>
>>>>>>>> - How well will it co-exist with the DRM infrastructure /
>>>>>>>> implementations, in the part dealing with CPU pointers?
>>>>>>> Inside the kernel a device-DAX range is "just memory" in the sense
>>>>>>> that you can perform pfn_to_page() on it and issue I/O, but the
>>>>>>> vma is
>>>>>>> not migratable. To be honest I do not know how well that co-exists
>>>>>>> with drm infrastructure.
>>>>>>>
>>>>>>>> - How well will we be able to handle the case when we need to
>>>>>>>> "move"/"evict" memory/data to a new location, so that the CPU
>>>>>>>> pointer should point to the new physical location/address
>>>>>>>> (and maybe not in PCI device memory at all)?
>>>>>>> So, device-DAX deliberately avoids support for in-kernel
>>>>>>> migration or
>>>>>>> overcommit. Those cases are left to the core mm or drm. The
>>>>>>> device-dax
>>>>>>> interface is for cases where all that is needed is a
>>>>>>> direct-mapping to
>>>>>>> a statically-allocated physical-address range be it persistent
>>>>>>> memory
>>>>>>> or some other special reserved memory range.
>>>>>> For some of the fancy use-cases (e.g. to be comparable to what
>>>>>> HMM can
>>>>>> pull off) I think we want all the magic in core mm, i.e.
>>>>>> migration and
>>>>>> overcommit. At least that seems to be the very strong drive in all
>>>>>> general-purpose gpu abstractions and implementations, where
>>>>>> memory is
>>>>>> allocated with malloc, and then mapped/moved into vram/gpu address
>>>>>> space through some magic,
>>>>> It is possible that it is the other way around: memory is requested
>>>>> to be allocated and should be kept in vram for performance reasons,
>>>>> but due to a possible overcommit case we need, at least temporarily,
>>>>> to "move" such an allocation to system memory.
>>>> With migration I meant migrating both ways of course. And with stuff
>>>> like numactl we can also influence where exactly the malloc'ed memory
>>>> is allocated originally, at least if we'd expose the vram range as a
>>>> very special numa node that happens to be far away and not hold any
>>>> cpu cores.
>>> I don't think we should be using numa distance to reverse engineer a
>>> certain allocation behavior. The latency data should be truthful, but
>>> you're right we'll need a mechanism to keep general purpose
>>> allocations out of that range by default. Btw, strict isolation is
>>> another design point of device-dax, but I think in this case we're
>>> describing something between the two extremes of full isolation and
>>> full compatibility with existing numactl apis.
>> Yes, agreed. My idea with exposing vram sections using numa nodes wasn't
>> to reuse all the existing allocation policies directly; those won't
>> work.
>> So at boot-up your default numa policy would exclude any vram nodes.
>>
>> But I think (as an -mm layman) that numa gives us a lot of the tools and
>> policy interface that we need to implement what we want for gpus.
>
> Agree completely. From a ten-mile-high view our GPUs are just command
> processors with local memory as well.
>
> Basically this is also the whole idea of what AMD has been pushing with
> HSA for a while.
>
> It's just that a lot of problems start to pop up when you look at all
> the nasty details. For example only part of the GPU memory is usually
> accessible by the CPU.
>
> So even if numa nodes provide a good foundation for this, I think
> there is still a lot of code to write.
>
> BTW: I should probably start to read into the numa code of the kernel.
> Any good pointers for that?
I would assume that the "page" allocation logic itself should be inside
the graphics driver, due to possibly different requirements, especially
from graphics: alignment, etc.
>
> Regards,
> Christian.
>
>> Wrt isolation: There's a sliding scale of what different users expect:
>> from fully automatic everything, including migrating pages around if
>> needed, to full isolation, all seems to be on the table. As long as we
>> keep vram nodes out of any default allocation numasets, full isolation
>> should be possible.
>> -Daniel
>
>
Sincerely yours,
Serguei Sagalovitch