From: "Christian König" <christian.koenig@amd.com>
To: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Alex Deucher" <alexdeucher@gmail.com>,
	"Michel Dänzer" <michel@daenzer.net>,
	"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
	"Pekka Paalanen" <ppaalanen@gmail.com>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v2 1/8] drm: Add dummy page per device or GEM object
Date: Thu, 19 Nov 2020 11:01:49 +0100
Message-ID: <115337c7-dd70-b9df-3784-d81b6fefd97a@amd.com>
In-Reply-To: <237f46c9-48d4-ebd8-772a-0ac1619dbebb@amd.com>

On 16.11.20 at 21:42, Andrey Grodzovsky wrote:
>
> On 11/16/20 3:36 PM, Christian König wrote:
>> On 16.11.20 at 20:00, Andrey Grodzovsky wrote:
>>>
>>> On 11/16/20 4:48 AM, Christian König wrote:
>>>> On 15.11.20 at 07:34, Andrey Grodzovsky wrote:
>>>>>
>>>>> On 11/14/20 4:51 AM, Daniel Vetter wrote:
>>>>>> On Sat, Nov 14, 2020 at 9:41 AM Christian König
>>>>>> <ckoenig.leichtzumerken@gmail.com> wrote:
>>>>>>> On 13.11.20 at 21:52, Andrey Grodzovsky wrote:
>>>>>>>> On 6/22/20 1:50 PM, Daniel Vetter wrote:
>>>>>>>>> On Mon, Jun 22, 2020 at 7:45 PM Christian König
>>>>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>>>>> On 22.06.20 at 16:32, Andrey Grodzovsky wrote:
>>>>>>>>>>> On 6/22/20 9:18 AM, Christian König wrote:
>>>>>>>>>>>> On 21.06.20 at 08:03, Andrey Grodzovsky wrote:
>>>>>>>>>>>>> Will be used to reroute a CPU-mapped BO's page faults once
>>>>>>>>>>>>> the device is removed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>     drivers/gpu/drm/drm_file.c  |  8 ++++++++
>>>>>>>>>>>>>     drivers/gpu/drm/drm_prime.c | 10 ++++++++++
>>>>>>>>>>>>>     include/drm/drm_file.h      |  2 ++
>>>>>>>>>>>>>     include/drm/drm_gem.h       |  2 ++
>>>>>>>>>>>>>     4 files changed, 22 insertions(+)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>>>>>>>>>>>> index c4c704e..67c0770 100644
>>>>>>>>>>>>> --- a/drivers/gpu/drm/drm_file.c
>>>>>>>>>>>>> +++ b/drivers/gpu/drm/drm_file.c
>>>>>>>>>>>>> @@ -188,6 +188,12 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
>>>>>>>>>>>>>  		goto out_prime_destroy;
>>>>>>>>>>>>>  	}
>>>>>>>>>>>>> 
>>>>>>>>>>>>> +	file->dummy_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>>>>>>>>>>>> +	if (!file->dummy_page) {
>>>>>>>>>>>>> +		ret = -ENOMEM;
>>>>>>>>>>>>> +		goto out_prime_destroy;
>>>>>>>>>>>>> +	}
>>>>>>>>>>>>> +
>>>>>>>>>>>>>  	return file;
>>>>>>>>>>>>> 
>>>>>>>>>>>>>  out_prime_destroy:
>>>>>>>>>>>>> @@ -284,6 +290,8 @@ void drm_file_free(struct drm_file *file)
>>>>>>>>>>>>>  	if (dev->driver->postclose)
>>>>>>>>>>>>>  		dev->driver->postclose(dev, file);
>>>>>>>>>>>>> 
>>>>>>>>>>>>> +	__free_page(file->dummy_page);
>>>>>>>>>>>>> +
>>>>>>>>>>>>>  	drm_prime_destroy_file_private(&file->prime);
>>>>>>>>>>>>>  	WARN_ON(!list_empty(&file->event_list));
>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
>>>>>>>>>>>>> index 1de2cde..c482e9c 100644
>>>>>>>>>>>>> --- a/drivers/gpu/drm/drm_prime.c
>>>>>>>>>>>>> +++ b/drivers/gpu/drm/drm_prime.c
>>>>>>>>>>>>> @@ -335,6 +335,13 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
>>>>>>>>>>>>>  	ret = drm_prime_add_buf_handle(&file_priv->prime,
>>>>>>>>>>>>>  			dma_buf, *handle);
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +	if (!ret) {
>>>>>>>>>>>>> +		obj->dummy_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>>>>>>>>>>>> +		if (!obj->dummy_page)
>>>>>>>>>>>>> +			ret = -ENOMEM;
>>>>>>>>>>>>> +	}
>>>>>>>>>>>>> +
>>>>>>>>>>>> While the per file case still looks acceptable this is a 
>>>>>>>>>>>> clear NAK
>>>>>>>>>>>> since it will massively increase the memory needed for a prime
>>>>>>>>>>>> exported object.
>>>>>>>>>>>>
>>>>>>>>>>>> I think that this is quite overkill in the first place and 
>>>>>>>>>>>> for the
>>>>>>>>>>>> hot unplug case we can just use the global dummy page as well.
>>>>>>>>>>>>
>>>>>>>>>>>> Christian.
>>>>>>>>>>> A global dummy page is good for read access, but what do you
>>>>>>>>>>> do on write access? My first approach was indeed to map the
>>>>>>>>>>> global dummy page as read-only at first and clear VM_SHARED
>>>>>>>>>>> from vma->vm_flags, assuming that this would trigger the
>>>>>>>>>>> copy-on-write flow in core mm
>>>>>>>>>>> (https://elixir.bootlin.com/linux/v5.7-rc7/source/mm/memory.c#L3977)
>>>>>>>>>>> on the next page fault to the same address triggered by a
>>>>>>>>>>> write access. But then I realized a new COW page will be
>>>>>>>>>>> allocated for each such mapping, and this is much more
>>>>>>>>>>> wasteful than having a dedicated page per GEM object.
>>>>>>>>>> Yeah, but this is only for a very small corner case. What we
>>>>>>>>>> need to prevent is increasing the memory usage too much during
>>>>>>>>>> normal operation.
>>>>>>>>>>
>>>>>>>>>> Using memory during the unplug is completely unproblematic 
>>>>>>>>>> because we
>>>>>>>>>> just released quite a bunch of it by releasing all those 
>>>>>>>>>> system memory
>>>>>>>>>> buffers.
>>>>>>>>>>
>>>>>>>>>> And I'm pretty sure that COWed pages are correctly accounted 
>>>>>>>>>> towards
>>>>>>>>>> the
>>>>>>>>>> used memory of a process.
>>>>>>>>>>
>>>>>>>>>> So I think if that approach works as intended and the COW 
>>>>>>>>>> pages are
>>>>>>>>>> released again on unmapping it would be the perfect solution 
>>>>>>>>>> to the
>>>>>>>>>> problem.
>>>>>>>>>>
>>>>>>>>>> Daniel what do you think?
>>>>>>>>> If COW works, sure sounds reasonable. And if we can make sure we
>>>>>>>>> managed to drop all the system allocations (otherwise suddenly 2x
>>>>>>>>> memory usage, worst case). But I have no idea whether we can
>>>>>>>>> retroshoehorn that into an established vma, you might have fun 
>>>>>>>>> stuff
>>>>>>>>> like a mkwrite handler there (which I thought is the COW handler
>>>>>>>>> thing, but really no idea).
>>>>>>>>>
>>>>>>>>> If we need to massively change stuff then I think rw dummy page,
>>>>>>>>> allocated on first fault after hotunplug (maybe just make it 
>>>>>>>>> one per
>>>>>>>>> object, that's simplest) seems like the much safer option. 
>>>>>>>>> Much less
>>>>>>>>> code that can go wrong.
>>>>>>>>> -Daniel
>>>>>>>>
>>>>>>>> Regarding COW, I was looking into how to properly implement it
>>>>>>>> from within the fault handler (i.e. ttm_bo_vm_fault), and the
>>>>>>>> main obstacle I hit is that of exclusive access to the
>>>>>>>> vm_area_struct: I need to be able to modify vma->vm_flags (and
>>>>>>>> vm_page_prot) to remove the VM_SHARED bit so COW can be
>>>>>>>> triggered on a subsequent write-access fault (here
>>>>>>>> https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L4128),
>>>>>>>> but core mm takes only the read side of mm_sem (here for example
>>>>>>>> https://elixir.bootlin.com/linux/latest/source/drivers/iommu/amd/iommu_v2.c#L488),
>>>>>>>> and so I am not supposed to modify the vm_area_struct in this
>>>>>>>> case. I am not sure if it's legitimate to write-lock the mm_sem
>>>>>>>> from this point. I found some discussions about this here
>>>>>>>> http://lkml.iu.edu/hypermail/linux/kernel/1909.1/02754.html
>>>>>>>> but it wasn't really clear to me what the solution is.
>>>>>>>>
>>>>>>>> In any case, it seems to me that an easier and more
>>>>>>>> memory-saving solution would be to just switch to a per-TTM-BO
>>>>>>>> dummy RW page that would be allocated on demand, as you
>>>>>>>> suggested here. This should also take care of imported BOs and
>>>>>>>> the FLINK case. Then I can drop the per-device-FD and
>>>>>>>> per-GEM-object dummy page and the ugly loop I am using in patch
>>>>>>>> 2 to match the faulting BO to the right dummy page.
>>>>>>>>
>>>>>>>> Does this make sense?
>>>>>>> I still don't see the information leak as much of a problem, but if
>>>>>>> Daniel insists we should probably do this.
>>>>>> Well, amdgpu doesn't clear buffers by default, so indeed you guys
>>>>>> are a lot more laissez-faire here. But in general we really don't
>>>>>> do that kind of leaking. IIRC there are even radeonsi bugs because
>>>>>> nothing clears, and radeonsi happily displays gunk :-)
>>>>>>
>>>>>>> But could we at least have only one page per client instead of 
>>>>>>> per BO?
>>>>>> I think you can do one page per file descriptor or something like
>>>>>> that. But gets annoying with shared bo, especially with dma_buf_mmap
>>>>>> forwarding.
>>>>>> -Daniel
>>>>>
>>>>>
>>>>> Christian - is your concern more with too many page allocations or
>>>>> with the extra pointer member cluttering the TTM BO struct?
>>>>
>>>> Yes, that is one problem.
>>>>
>>>>> Because we can allocate the dummy page on demand, only when
>>>>> needed. It just seems to me that keeping it per BO streamlines the
>>>>> code, as I don't need to have different handling for local vs
>>>>> imported BOs.
>>>>
>>>> Why should you have a difference between local vs imported BOs?
>>>
>>>
>>> For local BOs it seems like Daniel's suggestion to use
>>> vm_area_struct->vm_file->private_data should work, as this points to
>>> the drm_file. For imported BOs, private_data will point to the
>>> dma_buf structure, since each imported BO is backed by a pseudo file
>>> (created in dma_buf_getfile).
>>
>> Oh, good point. But we could easily fix that now. That should make 
>> the mapping code less complex as well.
>
>
> Can you clarify what fix you have in mind? I assume it's not by
> altering file->private_data to point to something else, as we need to
> retrieve the dma_buf from it (e.g. in dma_buf_mmap_internal).

Ah, crap. You are right, that is really tricky because vma->vm_file 
doesn't point to something useful in this situation.

I was talking about the new vma_set_file() function I've just pushed to 
drm-misc-next, but that stuff can't be used here.

I still don't see the need to use more than the global dummy page even 
if that means information leak between processes on unplug.

Christian.

>
> Andrey
>
>
>>
>> Regards,
>> Christian.
>>
>>> If so, where should we store the dummy RW page in this case? In the
>>> current implementation it's stored in the drm_gem_object.
>>>
>>> P.S. For the FLINK case it seems to me the handling should be no
>>> different than with a local BO, as the FD used for mmap in this case
>>> is still the same one associated with the DRM file.
>>>
>>> Andrey
>>>
>>>
>>>>
>>>> Christian.
>>>>
>>>>>
>>>>> Andrey
>>>>
>>


